Alveo Blog Data Management

Data. Delivered.

A SIX and Alveo video series talking about current topics and trends for data supply and data management.

Data. Delivered.

Optimizing the data supply chain to empower business users.
Watch the video below to see and hear Martijn Groot, VP Strategy & Marketing at Alveo, speak to Roy Kirby, Head of Core Products, Financial Information at SIX, about how the data supply chain can be optimized to empower business users.

Data. Delivered.

Better understand and validate ESG data with time series data from social media.
Alveo’s Boyke Baboelal and Tanya Seajay from Orenda, a SIX company, had an exciting chat about how social media offers real-time data to give more insight into ESG data. It can help investors gain a deeper understanding of traditional news, assist in validating ESG ratings, and add emotions as an influential factor in financial modellers’ multi-factor models. Watch the video now to find out more.

Data. Delivered.

Get more from your data with the right processes, tools and controls.
SIX’s Sam Sundera and Mark Hermeling from Alveo talk about the need for traceability, data lineage, usage monitoring and other audit metrics, and how those have shaped data management solutions to date. In particular, cloud adoption, the continued increase in the volume and range of data (for example, through ESG data) and the increased self-sufficiency of business users look to be continued drivers for change and improvement in data management. Watch the video now to find out more.

Alveo Blog Data Quality

Breaking Down the Barriers

Data Quality and Data Integration Prerequisites to Merge Analytics and Data Capabilities

For the financial services industry, data is a goldmine. It can provide new insights that give a competitive advantage and play a crucial role in answering critical business questions. But like all precious metals, how effectively you mine it and then use it determines the extent of the reward. Collecting data for its own sake is of little use.

To get to new business insights, firms must draw on an increasingly broad set of data, integrate it with internally sourced data and overlay it with their analyses. Easy modelling and onboarding of new data sets are vital requirements here, tying for first place with data aggregation capabilities. Today, many firms have limitations when it comes to joining together datasets or onboarding new data sets. This results in delays in making data operational or adequately utilizing the information already there.

Recent research commissioned by Alveo found that nearly two-thirds (63%) of data scientists in financial services firms say their organization is not currently able to combine data and analytics in a single environment. That’s a severe concern when data can only be maximized when it’s closely linked with analytics. Throw in issues around data quality, volumes of data siloed in data stores and hard-to-access legacy systems that are often hardcoded to specific data formats, and it’s easy to see the scale of the problem.

Ensuring data quality

Data quality improves when business processes and users actually consume the data and their feedback is incorporated to enhance sourcing and screening policies. Insufficient or incorrect data can be damaging, leading to inaccurate analytics, mispriced products and erroneous strategies, to name but a few. Failing to meet strict regulatory demands on data has severe consequences, including reputational damage, financial penalties or suspension.

Ensuring high data quality is not a one-off project but a continuous process that must start with understanding and validating data. Only then can a robust data quality framework be developed to identify points where quality problems can occur.

The challenges financial firms face with data stretch well beyond ensuring quality. 38% of survey respondents say integrating structured and unstructured data is one of their main challenges. 28% of data scientists say that time-consuming data searches or double sourcing are their top issues. More than three-quarters (77%) say their organization requests the same data multiple times from a single data vendor, leading to unnecessary costs.

A lack of communication and streamlining data sources is also highlighted by 82% of respondents, who say their organization’s front office teams use different vendors than their compliance, risk and operations teams, leading to potentially expensive inconsistencies in market data.

The move to data-as-a-service

Blending data management and analytics can help users leverage multiple data sources and data types. Firms are now moving analytics to where the data lives rather than carrying large stores of often siloed data over to the analytics function.  Financial services organizations want to use these new, integrated capabilities to drive better-informed decision-making. When combined with the latest analytics capabilities, the move to data-as-a-service (“DaaS”) is helping to streamline data sets for operations and provide quality input for analytics.

Combining data management and analytics has vast benefits for quants and data scientists, with 27% highlighting ‘improved productivity’ as one of the main advantages. Firms and their data scientists can access multiple data sources as well as multiple data types. With the help of popular programming languages like Python, users can create a robust and scalable meeting place across their data supply chain to share analytics and develop a common approach to risk and performance management and compliance.
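To make this concrete, the sketch below shows in Python (with pandas) the kind of shared “meeting place” described above: a vendor price feed joined with internally sourced positions so that operations and data science users work from the same combined view. The sample data, column names and the exposure-per-desk analytic are hypothetical illustrations, not an Alveo API.

```python
# Minimal sketch (assumed data): joining a vendor price feed with internal
# positions in pandas so analytics run on one combined, shared view.
import pandas as pd

# Vendor-sourced end-of-day prices (hypothetical extract)
prices = pd.DataFrame({
    "isin":  ["XS0000001", "XS0000002", "XS0000003"],
    "date":  pd.to_datetime(["2022-06-30"] * 3),
    "price": [101.25, 99.80, 103.10],
})

# Internally sourced positions (hypothetical extract)
positions = pd.DataFrame({
    "isin":     ["XS0000001", "XS0000002", "XS0000003"],
    "quantity": [1_000_000, 250_000, 500_000],
    "desk":     ["rates", "credit", "credit"],
})

# One joined view for both operations and data science users
combined = positions.merge(prices, on="isin", how="left")
combined["market_value"] = combined["quantity"] * combined["price"] / 100

# A simple shared analytic: exposure per desk
print(combined.groupby("desk")["market_value"].sum())
```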

Future focus

Today, technology, process, macro-economic factors, and business awareness contribute to the drive to bring analytics and data together. Financial firms need to see the complete data story for business and operational decision-making.  Bringing data together in one management system offers a new world of opportunity, where costs are optimized, users have better access and visibility on available data sets, and the overall value of the data and the data management function is maximized.

Alveo Blog Compliance

Around the World in 30 Days – Data Management Insights from Around the Globe

Different regions have different financial priorities and initiatives. During our Summer Series, we’re stopping in 6 countries to discuss the top issues they’re facing when it comes to financial services and new regulations.

Scratch your travel itch and come along with us over the next 30 days to gain a new perspective on your approach to data management.

Putting ESG data to work: overcoming data management and data quality challenges

Environmental, Social and Governance (ESG) based investing is growing rapidly. The data landscape to support ESG use cases includes screening indicators such as board composition and energy use, third-party ratings as well as primary data such as waste and emissions. There is a wide range of primary data sources, aggregators and reporting standards. ESG ratings in particular are very dispersed reflecting different methodologies, input data and weights – which means investors need to go to the underlying data for their decision making.

Role of ESG in investment operations

Depending on the investment style, ESG information plays a key role in research, fund product development, external manager selection, asset selection, performance tracking, client reporting, regulatory reporting, as well as voting. In short, ESG data is needed through the entire chain and must be made available to different stakeholders across the investment process.

Increasingly, ESG is becoming an investment factor in its own right. This means ESG indicators and ESG-based selection criteria need to be distilled from a broader set of primary data points, self-declarations in annual reports and third-party assessments. Additionally, ESG information needs to be standardized so that company-level information can be rolled up to portfolio level and ESG criteria can be tracked against third-party indices or external reporting requirements. However, many corporates do not (yet) report sufficient information, creating a need to proxy or estimate missing data points or to leave them outside investment consideration altogether.
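As a rough illustration of the roll-up and proxying described above, the Python sketch below aggregates hypothetical company-level ESG scores to a single portfolio-level score, filling a missing score with a sector average. The scores, weights and the sector-average estimation policy are assumptions for illustration only.

```python
# Minimal sketch (assumed data): rolling company-level ESG scores up to a
# portfolio-level score, proxying a missing score with the sector average.
import pandas as pd

holdings = pd.DataFrame({
    "issuer":    ["A", "B", "C", "D"],
    "sector":    ["utilities", "utilities", "tech", "tech"],
    "weight":    [0.30, 0.20, 0.25, 0.25],   # portfolio weights
    "esg_score": [62.0, None, 80.0, 74.0],   # issuer B has not reported
})

# One possible estimation policy: proxy missing scores with the sector average
holdings["esg_filled"] = holdings.groupby("sector")["esg_score"].transform(
    lambda s: s.fillna(s.mean())
)

# Weighted roll-up from issuer level to a single portfolio-level score
portfolio_score = (holdings["weight"] * holdings["esg_filled"]).sum()
print(f"Portfolio ESG score: {portfolio_score:.1f}")   # 69.5 with these inputs
```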

Data management challenges

Legislatures are promoting sustainable investment by creating taxonomies that specify which economic activities can be viewed as environmentally sustainable. From a data management perspective, this classification refines, and provides an additional lens on, the traditional industry sector classifications.

Other ingredients are hard numbers such as carbon footprinting (detailing scope 1, 2 and 3 emissions, clarifying whether scope 3 is upstream or downstream and so on), gender diversity, water usage and board composition. More qualitative data elements include sustainability scores, ratings and other third-party assessments that use some condensed statistics. A key requirement is the accurate linking of financial instruments to entities.

As ESG investment criteria become operationalized, ESG data management is rapidly evolving. Whenever new data categories or metrics are introduced, data management practices typically start with improvisation through desk level tools including spreadsheets, local databases and other workarounds. This is gradually streamlined, centralized, operationalized and ultimately embedded into core processes to become BAU. Currently, the investment management industry is somewhere halfway in that process.

ESG data quality issues

Given the diversity in ESG data sources  and the corresponding variety in data structures, as well as different external reporting requirements, ESG data quality issues prevent effective integration into the end-to-end investment operation.

In the table below, we highlight some of the more common data quality and metadata considerations with typical examples of those in financial services and how they surface in the ESG data space.

Table 2: example ESG data management challenges

What is required to fully embed ESG data into investment operations?

To overcome these data quality issues, firms need a process that seamlessly acquires, integrates and verifies ESG information. The data management function should facilitate the discoverability of information and effective integration into business user workflows. In short, data management should service users from the use case down, not from the technology and data sets up.

ESG data management capabilities should facilitate the easy roll-up of information from instrument to portfolio and blend ESG with pricing and reference data sets, so it becomes an integral part of the end-to-end investment management process.

Data derivation capabilities and business rules can spot gaps and highlight outliers, whether against historical patterns or relative to a peer group, industry or portfolio. Additionally, historical data to run scenarios can help with adequate risk and performance assessment of ESG factors. Having these capabilities in-house is good news for all users across the investment management process.
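A minimal sketch of such a business rule is shown below: hypothetical scope 1 emissions are compared against a peer group using a simple z-score, and values more than two standard deviations from the peer mean are flagged for review. The figures and the two-sigma threshold are illustrative assumptions, not a recommended screening policy.

```python
# Minimal sketch (assumed data): flagging ESG values that look like outliers
# within a peer group using a simple z-score rule.
import pandas as pd

emissions = pd.DataFrame({
    "issuer":        list("ABCDEFGH"),
    "sector":        ["steel"] * 8,
    "scope1_tonnes": [1.2e6, 1.1e6, 0.9e6, 1.3e6, 1.0e6, 1.4e6, 1.15e6, 6.5e6],
})

peer = emissions.groupby("sector")["scope1_tonnes"]
emissions["zscore"] = (
    emissions["scope1_tonnes"] - peer.transform("mean")
) / peer.transform("std")

# Flag anything more than two standard deviations from its peer group
outliers = emissions[emissions["zscore"].abs() > 2]
print(outliers[["issuer", "scope1_tonnes", "zscore"]])   # issuer H is flagged
```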

Risk Mitigation: Maximising Market Data ROI

Watch the video below to hear our CEO, Mark Hepsworth, sit down for a discussion with 3di CEO John White as they discuss risk mitigation and how institutions can truly ensure maximum ROI.

Interview Questions:

  1. What are some of the major issues you are seeing from clients around market data and have these issues changed over the past few years?
  2. Most institutions are increasing their spending on market data, but how do they ensure they maximize the ROI on this spend?
  3. How important is data lineage in allowing clients to use market data efficiently?
  4. As clients are moving more market data infrastructure and services to the cloud, how is this impacting their use of market data?
  5. Are you seeing organizations looking at both market data licensing and data management together and if so why?

Post-Brexit, post-pandemic London

For the City of London, the last few years have been eventful, to say the least. Midway through the worldwide Covid pandemic, Brexit finally landed with a free trade agreement agreed on Christmas Eve 2020. A Memorandum of Understanding on Financial Services was agreed at the end of March. However, this remains to be signed and is entirely separate from any decisions on regulatory equivalence.

Large international banks prepared for the worst and the possibility of a hard Brexit by strengthening their European operations in the years leading up to Brexit. However, the discussion on the materiality of EU-based operations will continue to rage for some time. ESMA adopted decisions to recognize the three UK CCPs under EMIR. These recognition decisions took effect the day following the end of the transition period and continue to apply while the equivalence decision remains in force, until 30 June 2022. One immediate effect of Brexit was a sharp drop in share trading volumes in January, with volume moving to continental Europe. For other sectors, Singapore and New York are well positioned to nibble at the City’s business.

Financial services, together with industries such as fisheries, remains one of the most politicized topics in the EU-UK relationship. The UK government must consider to what extent it should diverge from the EU’s system of financial services regulation. It is unlikely that any announcement on equivalence decisions will be forthcoming in the short term. A decision to grant full regulatory equivalence would depend upon UK alignment to EU regulation on a forward-looking basis – which would defeat the whole point of Brexit. Equivalence may not be worth the loss of rulemaking autonomy that is likely to be a condition of any EU determination. The longer equivalence decisions are delayed, the less valuable they are as firms adapt to the post-Brexit landscape.

As the financial services sector is coming to terms with the post-Brexit reality, it must prepare for regulatory divergence with the level of dispersion still an open question. Differences can emerge in clearing relationships, pre-and post-trade transparency, investor protection, requirements on (managed services) providers, derivatives reporting, solvency rules, and future ESG disclosure requirements. Having a flexible yet rigorous data management infrastructure in place and using suppliers with operations in the UK and the EU will mitigate this divergence and prepare firms for the future.

FRTB: the need to integrate data management and analytics

After some delays, the deadline for FRTB implementation is now approaching fast. As of January 1, 2023, banks are expected to have implemented the newly required processes and to begin reporting based on the new Fundamental Review of the Trading Book (FRTB) standards. With the Libor transition also taking place over the next few years, it is a busy time in the market data world.

FRTB poses material new demands on the depth and breadth of market data, risk calculations, and data governance. A successful FRTB implementation will need to address new requirements in market data, analytical capabilities, organizational alignment, supporting technology and overall governance. In this blog, I focus on the need for integrated data management and analytics.

FRTB requires additional market data history and sufficient observations for internal model banks to ascertain whether risk factors are modellable. These observations can be committed quotes or transactions and sourced from a bank’s internal trading system and supplemented with external sources. Apart from trade-level data, additional referential information is needed for liquidity horizon and whether risk factors are in the reduced set or not.
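As a rough sketch of what assessing modellability involves, the Python snippet below counts real-price observations for a risk factor over a 12-month window. The thresholds follow the commonly cited FRTB eligibility test (at least 24 observations, with at least four in every 90-day period); the exact criteria should be confirmed against the applicable final rules, and this function is a simplified assumption rather than a regulatory implementation.

```python
# Minimal sketch (assumed thresholds): a simplified modellability check that
# counts real-price observations over the past 12 months.
from datetime import date, timedelta

def is_modellable(observation_dates: list, as_of: date) -> bool:
    """True if the risk factor has >= 24 observations in the last 12 months
    and at least 4 observations in every rolling 90-day window."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < 24:
        return False
    start = window_start
    while start + timedelta(days=90) <= as_of:
        end = start + timedelta(days=90)
        if sum(start <= d < end for d in obs) < 4:
            return False
        start += timedelta(days=1)
    return True

# Example: weekly committed quotes comfortably pass this simplified test
weekly_quotes = [date(2022, 1, 3) + timedelta(weeks=i) for i in range(52)]
print(is_modellable(weekly_quotes, as_of=date(2022, 12, 30)))   # True
```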

The market data landscape continues to broaden. Apart from the traditional enterprise data providers, many firms that collect market data and trade level information as part of their business now offer this data directly. This includes brokerages, clearinghouses and central securities depositories. Different data marketplaces have been developed, providing further sourcing options for market data procurement. Effectively sourcing the required additional data and monitoring its usage to get the most out of its market data spend is becoming a key capability.

Organizational alignment between front office, risk and finance is required as well. Many firms still run different processes to acquire, quality-proof and derive market data. This often leads to failures in backtesting and in comparing front-office and mid-office data. FRTB causes the cost of inconsistency to go up. Regulatory considerations aside, clearly documenting and using the same curve definitions, cut-off times to snap market data prices and models to calculate risk factors can reduce operational cost as well. Clean and consistent market data makes for more effective decision-making and risk and regulatory reporting.

FRTB accelerates the need for market data and analytics to be more closely integrated. Advanced analytics is no longer mostly used at the end-point of data flows (e.g. by quants and data scientists using desk-level tools); it is now increasingly used in intermediate steps in day-to-day business processes, including risk management.

Data quality management, too, is increasingly being automated. Algorithms can deal with many exceptions (e.g. automatically triggering requests to additional data sources), and with a feedback loop in place the proportion of exceptions requiring human eyes can go down. To successfully prepare data for machine learning, data management is a foundational capability. Regulators take a much closer look at data quality and the processes that operate on the data before it is fed into a model, scrutinizing provenance, audit and quality controls.

Important to improving any process is a feedback loop that provides built-in learning to adjust the mix of data sources and business rules. In data quality management, this learning has to be both:

  • Continuous and bottom-up. Persistent quality issues should lead to a review of data sources, for example by using false positives or information from subsequent manual intervention to tune the screening rules. Rules that look for deviations against market levels while taking prevailing volatility into account will naturally self-adjust (see the sketch after this list).
  • Periodic and top-down. This could, for example, include looking at trends in data quality numbers, the relative quality of different data feeds and demands of different users downstream. It also includes a review of the SLA and KPIs of managed data services providers.
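The sketch below illustrates the self-adjusting, bottom-up rule mentioned in the first bullet: a price move is flagged only when it is large relative to recently observed volatility, so the tolerance widens automatically in volatile markets. The window length and the four-sigma multiplier are hypothetical tuning parameters.

```python
# Minimal sketch (assumed parameters): a screening rule whose tolerance scales
# with recently observed volatility, so it self-adjusts to market conditions.
import pandas as pd

def flag_suspect_moves(prices: pd.Series, window: int = 20, n_sigmas: float = 4.0) -> pd.Series:
    """Mark returns that exceed n_sigmas times the rolling volatility."""
    returns = prices.pct_change()
    rolling_vol = returns.rolling(window).std().shift(1)   # use only past data
    return returns.abs() > n_sigmas * rolling_vol

# A 3% move is flagged while volatility is low; once volatility has risen,
# a similar move no longer triggers an exception (the rule has self-adjusted).
eod = pd.Series([100.0, 100.2, 99.9, 100.1, 97.0, 100.0, 100.3])
print(flag_suspect_moves(eod, window=3))

# Feedback loop: confirmed false positives (or repeated misses) can be used to
# retune n_sigmas or to trigger a review of the data source itself.
```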

If you cannot assess the accuracy, correctness and timeliness of your data sets, or access them and slice and dice them as granularly as you need for risk and control purposes, then how can you do what matters: make the correct business calls based on that same data?

Data management and analytics are both key foundational capabilities for any business process in banks, but most definitely for risk management and finance, the functions where all data streams come together to enable enterprise-level reporting.

The Importance of Data as an Asset

Watch the video below to hear our Sales Director for the APAC region, Daniel Kennedy, discuss why the way we look at data is changing. Data is universally seen as an asset, but as with other assets, it can depreciate quickly if you don’t manage it. So what does it take to keep your data’s value?

Interview Questions:

  1. Why is data considered a new asset class today?
  2. In your experience, what are the critical elements of data life cycle management?
  3. What else do firms need to consider when dealing with this highly valuable asset?

Engineering Trends in Financial Data Management

Martijn Groot is speaking from Berlin with Mark Hermeling about how data management technology is advancing rapidly to help financial services firms onboard, process and propagate data effectively, so firms get the most out of their content. Would you know which open source tools, standards or cloud strategies are best for you?

2021 Summer Series eBook
Free Download

FRTB and optimal market data management Whitepaper

Discusses the challenges of FRTB as well as their overlap with other risk and valuation needs and business user enablement.

Alveo Blog Data Governance

7 Data Sins: Insufficient model risk management

Stef Nielen Red Swan Risk Vlog

As a continuation of our 7 Data Sins series, Stef Nielen, Director Strategic Business Development at Alveo speaks with John Matway, CEO and founding partner at Red Swan Risk. During the discussion, Stef and John explore whether data models and data assignments are reliable enough to be trusted to navigate you through risky waters.

Q1: What are the challenges around modelling securities – why is it so challenging? I mean, when a company has just bought a risk system, doesn’t it deal with coverage out of the box?

A: Sometimes there is no suitable model or the right data might not be readily at hand (yet), which prompts one to resort to proxying. Here one wants to tread even more carefully to avoid creating additional model risk. Most generally speaking, model risk occurs when models don’t behave as they ought to. This may be due to an insufficient analytical model, misuse of the model, or plain input errors such as bad market data or incorrect terms & conditions or simply wrongfully chosen reference data such as sector classifications, ratings, etc.

Why is this so important?

Models can misbehave at the security level for long periods before showing up at the portfolio level.  Perhaps the size of the hedge was small and has grown larger, or the volatility suddenly changed.  This may suddenly create distortions at the portfolio, benchmark, or higher aggregate level. These problems often surface during times of market stress and can be very resource-intensive to troubleshoot at a critical time.

Q2: Why is it so resource-intensive to change, troubleshoot, and manage data?

A: When rules are hardcoded or implemented in an inflexible manner (i.e. model queries and scripts based on rigid, narrowly defined model schemas and inputs with too few degrees of freedom), the problem is often exacerbated, making it truly difficult to interrogate and correct changes when they are critically required. Too often, the developer or analyst is given a set of functional requirements that are too narrowly defined, based on the current state of holdings and securities.

Given the dynamic nature of portfolio holdings, OTC instruments, available market data and model improvements, it is essential to have a very flexible mapping process with transparent and configurable rules that make it much easier to identify modelling issues and resolve them efficiently. A unified data model that tracks the data lineage of both model inputs and outputs (including risk statistics, stress tests and simulations), model choices, mapping rules and portfolio holdings provides a highly robust and efficient framework for controlling this process. The benefit of working with a commercial tool is that it has been designed to address a very wide range of instrument types, data fields and market data sources, so you won’t outgrow its utility. So, in essence, having a unified model and data lineage capabilities combined implies less digging and troubleshooting for the business user.
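To illustrate the kind of flexible, transparent mapping process described above, the Python sketch below expresses market data assignments as an ordered list of configurable rules (a simple waterfall) and records which rule produced each assignment so it can be traced later. The rule set, field names and curve identifiers are hypothetical and are not taken from Alveo’s or Red Swan’s products.

```python
# Minimal sketch (hypothetical rules and names): market data assignments
# expressed as an ordered waterfall of configurable rules, with the winning
# rule recorded so each assignment can be traced later.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MappingRule:
    description: str
    matches: Callable[[dict], bool]   # predicate on a security record
    curve: str                        # market data assignment template

# Ordered waterfall: the first matching rule wins
rules = [
    MappingRule("single-name CDS available",
                lambda s: s.get("red_code") is not None, "CDS:{red_code}"),
    MappingRule("fall back to sector credit curve",
                lambda s: s.get("sector") is not None, "SECTOR_CURVE:{sector}"),
]

def assign_curve(security: dict) -> dict:
    for rule in rules:
        if rule.matches(security):
            return {
                "security": security["id"],
                "curve": rule.curve.format(**security),
                "lineage": rule.description,   # why this assignment was made
            }
    return {"security": security["id"], "curve": None, "lineage": "unmapped"}

print(assign_curve({"id": "BOND1", "red_code": "XYZ123", "sector": "utilities"}))
print(assign_curve({"id": "BOND2", "red_code": None, "sector": "utilities"}))
```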

Q3: Can we discuss some real-life examples perhaps?

A: Some examples are…

  1. Corporate bond credit risk derived from equity volatility using the CreditGrades model can cause significant distortions. A more direct method uses the observed prices of single-name CDS or a sector-based credit curve. However, these must be properly assigned to the security with either the correct CDS RED code or a waterfall structure for assigning the sector credit curve. In the case of capital structure arbitrage, where there are corporate bonds at various seniorities and CDS, it is very important to be consistent in the mapping rules so that both the bond and the CDS have the same market data inputs.
  2. A similar issue occurs when using constant maturity commodity curves for convenience. This is easier to maintain than assigning the correct futures data set each time. Calendar spread risk is underestimated with constant maturity curves that share data. The negative front-month crude prices that occurred in April 2020 are an example of why constant maturity would have underestimated the risk significantly. (I like this example because PassPort is a good solution for managing commodity futures curve names in RiskMetrics).
  3. Changing over to the new Libor curves will likely be a very painful process for banks unless they have a very flexible mapping process that can easily be configured to assign the new curves to the right security types. (This is a simple procedure with the Map Editor and PassPort).
  4. But perhaps a more benign example is that of modelling one’s complete book with the right mapping for each individual security (i.e. choosing the right risk factors as well as the correct reference data, such as ratings and sector classifications), while skipping that modelling for its benchmark. This modelling inconsistency between portfolio and benchmark will introduce a tracking error (TE) risk that can be attributed entirely to inconsistent data mapping rather than to true market dynamics.

In summary, to model things properly – be it a simple proxy or something more granular and exact – one needs a setup that can dynamically configure the user’s modelling choices and data mapping logic. And as market conditions and data availability evolve over time, one should have a system that can adapt. Both Alveo and Red Swan allow users to control their model and data mapping choices in a very flexible, transparent, user-friendly and visual way. This doesn’t just help during a setup or implementation phase; perhaps more importantly, it drastically improves your ever-evolving modelling choices and (proxy) coverage over time, as well as ongoing operational efficiencies. In short, it enables greater control over your model risk management.

Alveo Blog Data Governance

7 Data Sins Series: Serving Multiple Masters

There are different paradoxes in data management. One is that, quite often, firms have multiple different “master” databases for their price data, their customer data and the terms and conditions of the products they invest in, trade or issue. The record we have seen is a firm that had 32 different widely used databases just to keep financial product terms and conditions. And this is not even counting a large number of small local databases and spreadsheets that also stored some of this information.

The “sin” here is clear: avoid the redundant storing of information! Having multiple places where you store information leads to the need to reconcile and cross-compare and, in general, causes uncertainty as to the validity of data points. At best, you could hope to be consistent across your different databases. But at what cost?

There have been several reasons why firms set up multiple databases with essentially the same data:

  • Decision making and budgeting across departmental lines made it easier to do something at the local level rather than establishing firm-wide services
  • Lack of sound data governance and the tracking of metadata historically made it difficult to track permissions and data usage when consolidating onto a single database or single (external) service
  • Departments often added their own custom fields to their datasets, which could be specific identifiers for homegrown applications, specific product taxonomies, industry classifications or derived data elements.
  • Departments may have wanted privileged access to a dataset or may have had performance concerns that caused them to have their own local copy.

Needless to say, the departments that rely most on aggregated, enterprise-wide information, such as risk and finance, have suffered the most from a fragmented approach to data management and data storage, which causes endless rework, reconciliation and verification of data sets.

Setting up departmental level stores of data may have made some sense ten or even five years ago.

However, with today’s managed data services this is no longer needed and here’s why:

  • Managed data services have come a long way in offering concrete business user enablement and easy data access via browsers, enterprise search, easy integration into user workflows and APIs for those needing programmatic access.
  • Today’s managed data services include a comprehensive approach to tracking metadata, including permissions, usage rights, quality checks performed and data lineage information – which provides a full explanation of what sources, formulas or human actions led to a certain data value (a simple sketch of such a record follows this list).
  • New cloud-based services provide the required scalability and uptime requirements to serve different departments.
  • Providers such as Alveo via their Business Domain Models provide the capability of using a firmwide data set with different local requirements to cater to idiosyncratic needs – all in the same service.
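As referenced above, the sketch below shows one simple way a data point can carry its own metadata (source, permissions, quality checks and lineage) so consumers can see what led to a given value. The structure and field names are hypothetical illustrations, not Alveo’s actual data model.

```python
# Minimal sketch (hypothetical structure): a data point that carries its own
# metadata so consumers can see which source, checks and actions produced it.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DataPoint:
    instrument: str
    attribute: str
    value: float
    source: str                                    # vendor feed or internal system
    permissions: list = field(default_factory=list)
    quality_checks: list = field(default_factory=list)
    lineage: list = field(default_factory=list)    # ordered history of actions

price = DataPoint("XS0000001", "eod_price", 101.25, source="vendor_feed_A",
                  permissions=["front_office", "risk"])
price.quality_checks.append("stale_price_check: passed")
price.lineage.append(f"{datetime(2022, 6, 30, 18, 5)}: validated by rule set v3")
print(price)
```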

Keeping data stored in redundant copies may have made sense at some point to prevent resource conflicts and stop applications or users from waiting for access. However, the flipside of different master databases also means redundant entry points of commercial data feeds into organizations – often leading to avoidable data spend. In our experience, teams can best be connected through shared and transparent data assets, that easily integrate into their existing workflows with the capability to augment data sets to cater to local requirements. Our PaSS managed data service does exactly that.

Alveo Blog Data Governance

7 Data Sins Series: Achieving and Keeping Data Quality – from a One-off Exercise to a Continuous Process

Moderator: Alexis Bisagni

Speaker: Boyke Baboelal

As a continuation of our 7 Data Sins series, Boyke Baboelal, Strategic Solutions Director in the Americas speaks with Alexis Bisagni about data quality and whether it’s a continuous fight against uncertainty. This surprise factor in data can arise from poor data quality management, and not keeping track of metadata such as changes, permissions, and quality checks.

Q: Leaning on your experience in financial data management – what have you observed with respect to data quality efforts? (Timemarker 2:00)

A: What I have observed is that there is a wide range of Data Quality maturity within organizations. Some organizations run regular data cleansing activities against their database (which requires manual effort and planning), some have triggers that check data when it is stored (but these systems are difficult to maintain and scale), and others have an Enterprise Data Management system that manages the entire data flow – but this is often still suboptimal.

Why is that? Data management teams have been downscaled over the last decade, while data volumes, types and complexity have increased. There is a strong day-to-day focus in operations, with little information on where structural issues or bottlenecks lie. This results in work being performed in less optimal and reactive ways. In addition, organizations are under more scrutiny from regulators, requiring more controls, and from data vendors, who want to make sure entitlements are adhered to. All of this makes data management more complex. Existing EDM solutions are NOT able to meet new requirements in a cost-effective way.

Q: In your opinion, what is needed to make existing EDM solutions capable of meeting new requirements in a cost-effective way? (Timemarker 3:50)

A: Data management implementations and EDM platforms focus on automating the entire data flow end-to-end. However, simply processing data is not enough to ensure operational efficiency, transparency, and compliance. The critical component here is more information that can be used to understand what is going well and what can be improved.  Meta-data, operational metrics, usage statistics, audit trails, and data lineage information are key in taking data management to the next level.

Q: Where does an organization even start to get a grip on this? (Timemarker 5:05)

A: The first thing to do is to understand what is needed. A lot of organizations start with an inventory of what they currently have and the requirements from whatever is driving the change, for example a regulatory requirement. This approach leaves them less adaptable to future requirements. So how can we do better? First, it is important to have a data quality framework, including Data Governance. Starting with a Data Quality Framework forces you to look beyond your current needs and view the requirements from different angles. A framework also puts you in a mindset to continuously improve. A proper data management solution should support a data quality framework and collect all the meta-data.

Q: Do you think that buy vs build is a relevant question? (Timemarker 6:26)

A: No, in my opinion, this is not a relevant question. The reason is that data management is often over-simplified due to a lack of understanding of data quality in a larger context. I agree that if you only need a small number of fields for your securities from a specific vendor every day, that would be easy to implement. But thinking the concept through, building a data management system in-house for today’s needs requires significant effort and detailed knowledge. Even with 20+ years of experience as a Financial Engineer in the Risk and Data Management space, when I think of building a system from scratch, I get anxious. The reason is that building a system in-house would involve large project risks, and the sad thing is that the system will most likely not be future-proof or benefit from the experience of peers in the industry. An adaptable off-the-shelf system will reduce a lot of that risk.

Q: When you have operational, usage, and lineage data, what comes next? (Timemarker 8:42)

A: This is when the magic starts. What I mean by that is it opens data management to the world of intelligence, analytics, and further automation. Having this information will give you more insight into your operations, what works well, and what doesn’t. The result is that you will gain more intelligence in your operations and that intelligence will enable you to comply with regulatory requirements, vendor agreements, and internal control frameworks. Having all this insight will allow your operations and data quality to get better day-by-day, resulting in continuous improvement.

Q: Continuous improvement sounds nice, but what about the bottom line? (Timemarker 10:18)

A: Increased operational efficiency, improved data quality, reduced data risks, compliance with regulatory requirements, vendor agreements, internal control frameworks, and SLAs, will in the end reduce overall TCO.

To summarize, for the financial services industry in the current environment, making the most of data assets is not a nice-to-have – it is a critical must-have. Firms not only need to manage increasing volumes and diversity of data sources, they also need to keep close track of their metadata, i.e. the different quality aspects that help determine whether data is fit for purpose, optimize sourcing and validation processes and, in general, improve operational efficiency.