Alveo Blog Data Management

Alveo’s Data Integration Services: the case of WM Daten’s transition to EDDy_neu

Financial services firms require accurate, timely, and complete information regarding securities identification, terms and conditions, corporate actions, and pricing for their pre- and post-trade processes. With Alveo’s Security Master solutions, firms can quickly onboard new data or consuming applications, track and improve data quality, and even view data consumption and distribution in real-time.

Alveo provides its customers with a single user experience and dashboard to interact with, intervene in and visualize the market and reference data, data models, business rules, and workflows we process on their behalf. It allows our clients to seamlessly access and self-serve data without the need for IT support.

The world of financial information products can sometimes be as dynamic as the financial markets themselves. New information products regularly emerge to address new content and new use cases – for example in ESG data management, or alternative data sets using credit card, POS, geospatial or other information to provide additional color on projections and financial products.

Existing feeds undergo regular change as well – in particular, the enterprise data products that (by definition) cater to a range of use cases and cover comprehensive information on financial instruments, corporate actions, and issuers. This could include information on new trading venues, new types of financial products and regulatory information and, going back a bit further, tax information fields such as EUSD or FATCA indicators.

Part of Alveo’s standard service is maintained integration with data providers. We sync with the release schedule of the data providers – we have arrangements with them for support, documentation, and the use of their data in development/test. Alveo’s industry-standard data model follows suit as it reflects all content covered in our integration and is regularly expanded.

The number of changes in the vendor data models runs in the 1000s per year. The bulk of these are static attribute changes and mappings, but new pricing fields – any data that changes frequently – come up regularly as well. This includes for example new ratings, macro-economic indicators, news sentiment, or climate information.

Alveo Industry Data model changes include enumerated fields, e.g. industry sectors, rating scales, and currency codes. Static here means any referential or master data: terms and conditions of products, information on legal entities, tax, and listings. And then there are mapping changes, for example representing new domain values accurately in our standard model. In some cases, these are fairly straightforward 1-1 mappings where we standardize terminology and naming conventions. In other cases, there are interdependencies between different fields and the mapping is not straightforward.

During mapping, we represent data in, and cross-reference it to, our standard form. In this process, we also make links between object types (ultimate parent – issuer – issues – listings – corporate actions) explicit. The standard model, as mentioned, is extensible by clients.
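
To illustrate how a vendor feed can be mapped onto a standard model, here is a minimal sketch in Python. The field names, domain values and the listing link are hypothetical illustrations, not Alveo’s actual model or rules.

# Minimal sketch: 1-1 field renames, a domain-value mapping, and an
# interdependent mapping that only fires when two inputs are present.
FIELD_MAP = {
    "VENDOR_ISIN": "isin",
    "VENDOR_CCY": "currency",
    "VENDOR_SECTOR_CODE": "industry_sector",
}
SECTOR_MAP = {"101": "Financials", "205": "Utilities"}  # vendor code -> standard sector

def to_standard(vendor_record: dict) -> dict:
    """Map a raw vendor record onto the (hypothetical) standard model."""
    std = {FIELD_MAP[k]: v for k, v in vendor_record.items() if k in FIELD_MAP}
    std["industry_sector"] = SECTOR_MAP.get(std.get("industry_sector"), "Unknown")
    # Interdependency: a listing link is only created when both the issuer
    # identifier and the market identifier are present in the vendor record.
    if "VENDOR_ISSUER_ID" in vendor_record and "VENDOR_MIC" in vendor_record:
        std["listing_key"] = (vendor_record["VENDOR_ISSUER_ID"],
                              vendor_record["VENDOR_MIC"])
    return std

# Example: one record cross-referenced to the standard form, with an explicit
# issuer-to-listing link (part of the ultimate parent - issuer - issue - listing chain).
print(to_standard({"VENDOR_ISIN": "XS0000000000", "VENDOR_CCY": "EUR",
                   "VENDOR_SECTOR_CODE": "101",
                   "VENDOR_ISSUER_ID": "ISSUER42", "VENDOR_MIC": "XETR"}))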

One of the many data providers supported in Alveo’s data management solution, via standard integration, is WM Daten. On occasion, there is a very large-scale upgrade or entire feed replacement in the financial data industry. One example of that is the upcoming change of WM Daten’s data management infrastructure, i.e. the transition from EDDy (“enhanced data delivery”) to EDDy_neu. See https://eic.wmdaten.com/index.php/home for more information from WM Daten on this project.

Essentially, WM Daten is overhauling its data delivery infrastructure and moving from EDDy to EDDy_neu. A parallel run starts in March 2022, and from April 2023 the old platform will be shut down.

The change is material and brings numerous changes to the feed format – as well as increased flexibility in data consumption to users of WM’s data.

Reference data changes include:

  • Replacement of primary key (issuer identification number)
  • New market identifier (enabling multicurrency listings)
  • New number ranges, new variables, and existing fields becoming obsolete
  • Corporate actions changes
  • New fields/identifiers
  • New order for sorting data records

Included in Alveo’s data management solution is full maintenance of any changes data providers make to their data products. On top of that, Alveo abstracts from specific data formats in a standard business domain model which helps our clients combine different sources.

To discover how Alveo can help you with the WM Daten transition or any other pricing and reference data management challenges, please click here.

Alveo Blog ESG Data Management

Tackling growing pains: ESG Data Management is coming of age


Until recently, investing according to ESG criteria was the remit of specialist companies known as green or impact investors. These investors would have their own in-house data collection processes and their proprietary screening or selection criteria to assess potential investments. Although there were different reporting frameworks in place, such as the PRI and GRI standards, the absence of standard data collection, integration, and reporting solutions required them to create their own “ESG data hub” to provision their own analysts, front office, and client reporting teams. As ESG investing has become mainstream, due to both a regulatory push and an investor pull, ESG information management is fast becoming mainstream for research, asset allocation, performance measurement, operations, client reporting, and regulatory reporting.

With the deadline for key ESG regulations like SFDR fast approaching, asset managers and asset owners must do more to anchor ESG data into their end-to-end workflow processes. Simply having a source of ESG data to feed to the front office is not sufficient, as this data needs to be integrated across the organisation into the whole investment management process – from research to client and regulatory reporting.

 

ESG integration is needed across buy-side and sell-side business processes

Any firm that sells or distributes investment products into the European Union will have to follow the SFDR regulation. SFDR requires firms to report on 18 mandatory Principal Adverse Impact (PAI) Indicators as well as some optional ones. Paradoxically, the reporting requirements for publicly listed companies that asset managers invest in lag behind the SFDR timetable. This causes an information gap and the need to supplement corporate disclosures with third party ESG scores, expert opinion as well as internal models to come to an overall assessment of ESG criteria.

There is also a need for ESG data on the sell-side of financial services. For instance, in corporate banking, ESG data is increasingly crucial to support customer onboarding and, in particular, Know Your Client (KYC) processes. Banks will have to report their ‘green asset ratio’ – in essence, the make-up of their loan book in terms of the business activities of the companies they lend to, according to the EU Taxonomy.
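
As a rough illustration of the idea behind such a ratio (and emphatically not the actual EU Taxonomy or green asset ratio methodology, which is far more detailed), a short sketch with made-up exposures:

# Illustrative only: share of a loan book extended to borrowers whose
# activities are flagged as taxonomy-aligned. Real GAR rules are more granular.
loan_book = [  # (borrower, exposure in EUR millions, taxonomy_aligned?)
    ("Wind farm operator", 150.0, True),
    ("Logistics company", 220.0, False),
    ("Building retrofit fund", 80.0, True),
]
aligned = sum(exposure for _, exposure, ok in loan_book if ok)
total = sum(exposure for _, exposure, _ in loan_book)
print(f"green asset ratio ~ {aligned / total:.1%}")   # ~51.1%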

In the future, when a company applies for a loan from a bank, it will be asked, as part of the screening criteria, to disclose what kinds of business activities it is involved in and what sustainability benchmarks it has in place.

Banks and other sell-side financial services firms will also frequently screen their suppliers, as part of a process called Know Your Third Party (KY3P). They will want to know who they are doing business with, so they can then report this in their own Annual Report. Banks will also want to climate stress test the products they hold in their trading book for their own investment against certain climate scenarios. The ECB, MAS as well as the Bank of England have all incorporated climate stress test scenarios in their overall stress testing programmes to gauge the solvency and resilience of banks.

ESG data also has a role to play in the way banks manage their mortgage book as they are increasingly looking for geospatial data, for example to work out the flood risk of the properties they finance.

Both sell-side and buy-side financial services companies will also need to integrate ESG data with data from the more traditional pricing and reference providers to give a composite view, incorporating not just the prices of instruments and the terms and conditions but also the ESG characteristics.

ESG data now needs to spread across the whole of the organisation, integrating with all the different data sets to provide a composite picture, becoming a key source of intelligence, not just for the front office but also for multiple business functions.

ESG data challenges

Common ESG data challenges firms encounter as they develop their ESG capabilities include data availability, usability, comparability and workflow integration. Many corporates do not report the information investment managers require for their decision making or indeed their regulatory reporting. This leads to the need to combine corporate disclosures with third-party estimates and scores, as well as internal assessments.

Usability issues include the disparity in methodologies third-party firms use to estimate or score firms on ESG criteria. Rating firms have their own input sets, models and weights and often come to different conclusions. Compared to credit ratings, the correlation between the scores given to a firm by different rating agencies is lower. However, credit analysis is as old as the banking industry and the metric gauged (probability of default) is clear. It could be that, with increased global disclosure standards under IFRS, ESG scores will converge.

Comparability issues in ESG are exacerbated by different standards, different reporting frequencies or calendars and also the lack of historical data to track progress and benchmark performance over a longer time period.

The biggest issue however is how to anchor the ESG data in a range of different business processes to put users on a common footing – which requires the capability to quickly onboard users, reports and business applications onto a common set of quality-vetted ESG data.

Looking ahead

Accessing ESG data and ensuring it is of good quality, comparable with other ESG data sets and well-integrated within existing workflows can be difficult.

Organisations will need to cross-reference, match and combine the data, as well as assimilate it with traditional data on companies and their financial products. Traditional prices and security terms and conditions from financial data providers will help build a composite picture from those different sources.

However, data management solutions and Data-as-a-Service offerings are now available to help firms get the ESG information they need, the capabilities to quality-check, supplement and enrich it with their own proprietary data or methods and the integration functionality to put users and applications on a common footing. This will enable firms to have an ESG data foundation for their end-to-end investment management processes on which they can build – for asset allocation, operations, client reporting and regulatory reporting alike.

Click HERE to find out more about Alveo’s ESG solution.

Alveo Blog Data Management

Data. Delivered.

A SIX and Alveo video series talking about current topics and trends for data supply and data management.

Data. Delivered.

Optimizing the data supply chain to empower business users.
Watch the video below to see and hear Martijn Groot, VP Strategy & Marketing at Alveo speak to Roy Kirby, Head of Core Products, Financial Information at SIX about how the data supply chain can be optimized to empower business users.

 

Data. Delivered.

Better understand and validate ESG data with time series data from social media.
Alveo’s Boyke Baboelal and Tanya Seajay from Orenda, a SIX company, had an exciting chat about how social media offers real-time data to give more insight into ESG data. It can help investors gain a deeper understanding of traditional news, assist in validating ESG ratings, and add emotion as an influential factor in financial modellers’ multi-factor models. Watch the video now to find out more.

 

Data. Delivered.

Get more from your data with the right processes, tools and controls.
SIX’s Sam Sundera and Mark Hermeling from Alveo talk about the need for traceability, data lineage, usage monitoring and other audit metrics and how those have shaped data management solutions until now. In particular, cloud adoption, the continued increase in the volume and range of data (for example, through ESG data), and the increased self-sufficiency of business users look to be continued drivers for change and improvement in data management. Watch the video now to find out more.

Alveo Blog Compliance

Around the World in 30 Days – Data Management Insights from Around the Globe

Different regions have different financial priorities and initiatives. During our Summer Series, we’re stopping in 6 countries to discuss the top issues they’re facing when it comes to financial services and new regulations.

Scratch your travel itch and come along with us over the next 30 days to gain a new perspective on your approach to data management.

Putting ESG data to work: overcoming data management and data quality challenges

Environmental, Social and Governance (ESG) based investing is growing rapidly. The data landscape to support ESG use cases includes screening indicators such as board composition and energy use, third-party ratings as well as primary data such as waste and emissions. There is a wide range of primary data sources, aggregators and reporting standards. ESG ratings in particular are very dispersed, reflecting different methodologies, input data and weights – which means investors need to go to the underlying data for their decision making.

Role of ESG in investment operations

Depending on the investment style, ESG information plays a key role in research, fund product development, external manager selection, asset selection, performance tracking, client reporting, regulatory reporting, as well as voting. In short, ESG data is needed through the entire chain and must be made available to different stakeholders across the investment process.

Increasingly, ESG is becoming an investment factor in its own right. This means ESG indicators and ESG-based selection criteria need to be distilled from a broader set of primary data points, self-declarations in the annual report and third-party assessments. Additionally, ESG information needs to be standardized to roll up company-level information to portfolio-level information and to track ESG criteria against third-party indices or external reporting requirements. However, a lot of corporates do not (yet) report sufficient information, causing a need to proxy or estimate missing data points or to leave them outside investment consideration altogether.
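
As a simple illustration of such a roll-up, the sketch below computes a weighted portfolio-level metric from company-level data, either proxying missing disclosures or leaving them out; the holdings, the proxy value and the plain weighted average are all assumptions made for illustration:

# Roll up a company-level ESG metric (e.g. carbon intensity) to portfolio level.
holdings = [  # (issuer, portfolio weight, reported metric or None)
    ("Issuer A", 0.40, 120.0),
    ("Issuer B", 0.35, None),   # no corporate disclosure available
    ("Issuer C", 0.25, 310.0),
]
SECTOR_PROXY = 200.0  # hypothetical sector-average estimate for missing values

def portfolio_metric(holdings, use_proxy=True):
    covered, total = 0.0, 0.0
    for _, weight, value in holdings:
        if value is None:
            if not use_proxy:
                continue          # leave undisclosed names out of the roll-up...
            value = SECTOR_PROXY  # ...or proxy them with an estimate
        covered += weight
        total += weight * value
    # Re-normalise by covered weight so the figure reflects what was measured.
    return (total / covered if covered else None), covered

score, coverage = portfolio_metric(holdings)
print(f"portfolio metric ~ {score:.1f}, data coverage {coverage:.0%}")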

Data management challenges

Legislatures are promoting sustainable investment by creating taxonomies that specify which economic activities can be viewed as environmentally sustainable. From a data management perspective, this classification refines, and provides an additional lens on, the traditional industry sector classifications.

Other ingredients are hard numbers such as carbon footprinting (detailing scope 1, 2 and 3 emissions, clarifying whether scope 3 is upstream or downstream and so on), gender diversity, water usage and board composition. More qualitative data elements include sustainability scores, ratings and other third-party assessments that use some condensed statistics. A key requirement is the accurate linking of financial instruments to entities.

As ESG investment criteria become operationalized, ESG data management is rapidly evolving. Whenever new data categories or metrics are introduced, data management practices typically start with improvisation through desk level tools including spreadsheets, local databases and other workarounds. This is gradually streamlined, centralized, operationalized and ultimately embedded into core processes to become BAU. Currently, the investment management industry is somewhere halfway in that process.

ESG data quality issues

Given the diversity in ESG data sources and the corresponding variety in data structures, as well as different external reporting requirements, ESG data quality issues prevent effective integration into the end-to-end investment operation.

In the table below, we highlight some of the more common data quality and metadata considerations with typical examples of those in financial services and how they surface in the ESG data space.

Table 2: example ESG data management challenges

What is required to fully embed ESG data into investment operations?

To overcome these data quality issues, firms need a process that seamlessly acquires, integrates and verifies ESG information. The data management function should facilitate the discoverability of information and effective integration into business user workflows. In short, data management should service users from the use case down, not from the technology and data sets up.

ESG data management capabilities should facilitate the easy roll-up of information from instrument to portfolio and blend ESG with pricing and reference data sets, so it becomes an integral part of the end-to-end investment management process.

Data derivation capabilities and business rules can spot gaps and highlight outliers, whether it concerns historical patterns or outliers within a peer group, industry or portfolio. Additionally, historical data to run scenarios can help with adequate risk and performance assessment of ESG factors. Having these capabilities in-house is good news for all users across the investment management process.

Risk Mitigation: Maximising Market Data ROI

Watch the video below to hear our CEO Mark Hepsworth sit down with 3di CEO John White to discuss risk mitigation and how institutions can truly maximize their market data ROI.

Interview Questions:

  1. What are some of the major issues you are seeing from clients around market data and have these issues changed over the past few years?
  2. Most institutions are increasing their spending on market data, but how do they ensure they maximize the ROI on this spend?
  3. How important is data lineage in allowing clients to use market data efficiently?
  4. As clients are moving more market data infrastructure and services to the cloud, how is this impacting their use of market data?
  5. Are you seeing organizations looking at both market data licensing and data management together and if so why?

Post-Brexit, post-pandemic London

For the City of London, the last few years have been eventful, to say the least. Midway through the worldwide Covid pandemic, Brexit finally landed, with a free trade agreement agreed on Christmas Eve 2020. A Memorandum of Understanding on Financial Services was agreed at the end of March. However, this remains to be signed and is entirely separate from any decisions on regulatory equivalence.

Large international banks prepared for the worst and the possibility of a hard Brexit by strengthening their European operations in the years leading up to Brexit. However, the discussion on the materiality of EU-based operations will continue to rage for some time. ESMA adopted decisions to recognize the three UK CCPs under EMIR. These recognition decisions took effect the day following the end of the transition period and continue to apply while the equivalence decision remains in force, until 30 June 2022. One immediate effect of Brexit was a sharp drop in share trading volumes in January, with volume moving to continental Europe. For other sectors, Singapore and New York are well-positioned to nibble at the City’s business.

Financial services, together with industries such as fisheries, remain among the most politicized topics in the EU – UK relationship. The UK government must consider to what extent it should diverge from the EU’s system of financial services regulation. It is unlikely that any announcement on equivalence decisions will be forthcoming in the short term. A decision to grant full regulatory equivalence would depend upon UK alignment to EU regulation on a forward-looking basis – which would defeat the whole point of Brexit. Equivalence may not be worth the loss of rulemaking autonomy that is likely to be a condition of any EU determination. The longer equivalence decisions are delayed, the less valuable they are, as firms adapt to the post-Brexit landscape.

As the financial services sector is coming to terms with the post-Brexit reality, it must prepare for regulatory divergence with the level of dispersion still an open question. Differences can emerge in clearing relationships, pre-and post-trade transparency, investor protection, requirements on (managed services) providers, derivatives reporting, solvency rules, and future ESG disclosure requirements. Having a flexible yet rigorous data management infrastructure in place and using suppliers with operations in the UK and the EU will mitigate this divergence and prepare firms for the future.

FRTB: the need to integrate data management and analytics

After some delays, the deadline for FRTB implementation is now approaching fast. As of January 1, 2023, banks are expected to have implemented newly required processes and begin reporting based on the new Fundamental Review of the Trading Book (FRTB) standards. With the Libor transition also taking place over the next few years, it is a busy time in the market data world.

FRTB poses material new demands on the depth and breadth of market data, risk calculations, and data governance. A successful FRTB implementation will need to address new requirements in market data, analytical capabilities, organizational alignment, supporting technology and overall governance. In this blog, I focus on the need for integrated data management and analytics.

FRTB requires additional market data history and sufficient observations for internal model banks to ascertain whether risk factors are modellable. These observations can be committed quotes or transactions and sourced from a bank’s internal trading system and supplemented with external sources. Apart from trade-level data, additional referential information is needed for liquidity horizon and whether risk factors are in the reduced set or not.
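
As an illustration of what a modellability check on those observations can look like, here is a sketch that counts real-price observations per risk factor. The thresholds used (at least 24 observations over the past 12 months with no 90-day period holding fewer than four, or 100 observations in total) follow the commonly cited Basel eligibility test, but treat them as an assumption to verify against the current rule text:

from datetime import date, timedelta

def is_modellable(observation_dates: list[date], as_of: date) -> bool:
    """Check a risk factor's real-price observations over the trailing 12 months."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) >= 100:
        return True          # enough observations over the full year
    if len(obs) < 24:
        return False         # too few observations over the full year
    # No 90-day period may contain fewer than 4 observations
    # (simple day-by-day scan; clear rather than fast).
    for offset in range(0, 366 - 90):
        lo = window_start + timedelta(days=offset)
        hi = lo + timedelta(days=90)
        if sum(1 for d in obs if lo <= d <= hi) < 4:
            return False
    return True

# Roughly weekly committed quotes or transactions would pass the test.
weekly = [date(2022, 1, 3) + timedelta(weeks=i) for i in range(52)]
print(is_modellable(weekly, as_of=date(2022, 12, 30)))   # True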

The market data landscape continues to broaden. Apart from the traditional enterprise data providers, many firms that collect market data and trade level information as part of their business now offer this data directly. This includes brokerages, clearinghouses and central securities depositories. Different data marketplaces have been developed, providing further sourcing options for market data procurement. Effectively sourcing the required additional data and monitoring its usage to get the most out of its market data spend is becoming a key capability.

Organizational alignment between front office, risk and finance is required as well. Many firms still run different processes to acquire, quality-proof and derive market data. This often leads to failures in backtesting and in comparing front-office and mid-office data. FRTB causes the cost of inconsistency to go up. Regulatory considerations aside, clearly documenting and using the same curve definitions, cut-off times to snap market data prices and models to calculate risk factors can reduce operational cost as well. Clean and consistent market data makes for more effective decision-making and risk and regulatory reporting.

FRTB accelerates the need for market data and analytics to be more closely integrated. Advanced analytics is no longer mostly used at the end-point of data flows (e.g. by quants and data scientists using desk-level tools); it is now increasingly used in intermediate steps in day-to-day business processes, including risk management.

Data quality management, too, is increasingly getting automated. Algorithms can deal with many exceptions (e.g. automatically triggering requests to additional data sources). Using a feedback loop, the proportion of exceptions requiring human eyes can go down. To successfully prepare data for machine learning, data management is a foundational capability. Regulators take a much closer look at data quality and the processes that operate on the data before it is fed into a model, scrutinizing provenance, audit and quality controls.

Important to improving any process is a feedback loop that provides built-in learning to change the mix of data sources and business rules. In data quality management, this learning has to be both:

  • Continuous and bottom-up. Persistent quality issues should lead to a review of data sources, for example using false positives or information from subsequent manual intervention to tune the screening rules. Rules that look for deviations against market levels, taking into account prevailing volatility, will naturally self-adjust (see the sketch after this list).
  • Periodic and top-down. This could, for example, include looking at trends in data quality numbers, the relative quality of different data feeds and demands of different users downstream. It also includes a review of the SLA and KPIs of managed data services providers.
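
As an example of a self-adjusting, bottom-up rule of the kind described above, here is a minimal sketch that flags a new price only when its move exceeds a multiple of recently observed volatility, so the tolerance band widens and tightens with the market; the threshold and data are illustrative assumptions:

import statistics

def flag_outlier(price_history: list[float], new_price: float,
                 z_threshold: float = 4.0) -> bool:
    """Flag the latest price move if it is large relative to recent volatility."""
    returns = [b / a - 1.0 for a, b in zip(price_history, price_history[1:])]
    vol = statistics.stdev(returns)                 # prevailing volatility estimate
    latest_return = new_price / price_history[-1] - 1.0
    return vol > 0 and abs(latest_return) > z_threshold * vol

history = [100.0, 100.5, 99.8, 100.2, 101.0, 100.7]
print(flag_outlier(history, 104.0))   # flagged: a large move in a quiet market
print(flag_outlier(history, 100.9))   # passes: within the prevailing band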

If you cannot assess the accuracy, correctness and timeliness of your data sets, or access them, slice and dice them and cut them up as granularly as you need for risk and control purposes, then how can you do what matters: make the correct business calls based on that same data?

Data management and analytics are both key foundational capabilities for any business process in banks, but most definitely for risk management and finance, the functions where all data streams come together to enable enterprise-level reporting.

The Importance of Data as an Asset

Watch the video below to hear our Sales Director of the APAC region, Daniel Kennedy, discuss why the way in which we look at data is changing. Data is universally seen as an asset but, as is the case with other assets, it can depreciate quickly if you don’t manage it. So what does it take to maintain the value of your data?

Interview Questions:

  1. Why is data considered a new asset class today?
  2. In your experience, what are the critical elements of data life cycle management?
  3. What else do firms need to consider when dealing with this highly valuable asset?

Engineering Trends in Financial Data Management

Martijn Groot is speaking from Berlin with Mark Hermeling about how data management technology advances rapidly to help financial services firms onboard, process and propagate data effectively so firms get the most out of their content. Would you know which open source tools, standards, or cloud strategies are best for you?

2021 Summer Series eBook
Free Download

FRTB and optimal market data management Whitepaper

Discusses the challenges of FRTB as well as their overlap with other risk and valuation needs and business user enablement.
Alveo Blog Data Management

Achieving Data Alpha: Top FAQs in financial data management

Financial services has always been a data-driven business. Obtaining accurate and timely data and achieving an information advantage over the competition have long driven the industry – from carrier pigeons to early automation, and from the low-latency race to modern-day data integration, data governance, and data accessibility technologies that fuel user productivity and informed decision-making.

With an explosion of data sources (the alt data boom), the opportunity and the challenges to achieve and maintain an information advantage are immense. We call this challenge achieving data alpha.

In this blog series, we list some common questions we are often asked, to help firms on their way to improving their data management and achieving data alpha.

Q: What does a financial data management solution do?

A: A financial data management solution helps financial services firms effectively source, onboard, cross-reference, quality-check, and distribute financial data such as prices for valuation, historical price data for risk and scenario management, and master or reference data such as legal entity data, index data, ESG data, calendar data, financial product terms and conditions and corporate actions including changes in company structures as well as income events such as dividends. Simply put, a data management solution should make sure users and applications are effectively supplied with the data they need to do their jobs. See our Solutions Guide for more information.

Q: How do I improve the data quality?

A: To an extent, data quality depends on the use case. There are different aspects of quality that can be measured, including timeliness, accuracy, and completeness, and often there are trade-offs between them. For use in the front office, speed is paramount. In risk and financial reporting, the turnaround time for decision making is longer, and a different trade-off will be made. Generally put, a data management system normally keeps track of gaps or delays in incoming data feeds and any manual interventions that occur. It should differentiate between false positives and overlooked mistakes and feed this back into the configuration of screening rules. Reporting on Data Quality Intelligence will help optimize the mix of data sources, business rules, and operations specialists. See our Data Quality Intelligence Brochure for more information.

Q: How do I reduce my data cost?

A: Financial data costs have been sky-rocketing, reaching $32B in 2019 (see https://www.tradersmagazine.com/am/global-spend-on-market-data-totals-record-32b-in-2019/ ). Data management solutions can help keep tabs on costs simply by streamlining data sourcing and supply – preventing multiple independent entry points. Also, they can warehouse data to prevent unnecessary repeat requests. Using the quality metrics mentioned above, these solutions can help make more informed data sourcing decisions. Another aspect of data cost control is that data management solutions can also track usage permissions to ensure firms do not breach content license agreements. Lastly, through tracking consumption and other data flows, firms can better match and map costs to individual users and departments. See our Smart Sourcing and Smart Data whitepaper for more information.
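
As a simple illustration of mapping costs to consumers, the sketch below allocates hypothetical vendor invoices to departments in proportion to logged usage; the feed names, amounts and allocation key are assumptions, not a recommended charging model:

from collections import Counter

invoice = {"VendorFeedA": 120_000.0, "VendorFeedB": 45_000.0}   # annual cost per feed

# Usage log entries collected by the data management layer: (feed, department).
usage_log = [
    ("VendorFeedA", "Risk"), ("VendorFeedA", "Risk"), ("VendorFeedA", "Front Office"),
    ("VendorFeedB", "Finance"), ("VendorFeedB", "Risk"),
]

def allocate(invoice, usage_log):
    """Split each feed's cost across departments by their share of requests."""
    per_feed = Counter(feed for feed, _ in usage_log)
    allocation = Counter()
    for feed, dept in usage_log:
        allocation[dept] += invoice[feed] / per_feed[feed]
    return dict(allocation)

print(allocate(invoice, usage_log))
# {'Risk': 102500.0, 'Front Office': 40000.0, 'Finance': 22500.0}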

Q: What is data governance?

A: Data governance is a rapidly developing concept that speaks to organizational capabilities to ensure high-quality data and appropriate controls on that data. It covers a range of topics, including the accessibility of data, clarity on the data assets a firm has through a proper inventory, and documentation of metadata aspects leading to transparency on where those data sets can be used. For instance, it can include documentation and monitoring of quality metrics, content licensing restrictions, and data sensitivity or regulatory constraints. Data governance counters poor quality and raises awareness of available data to improve business operations and data ROI. See our Data Quality Intelligence Brochure for more information.

Q: What is data lineage?

A: Data lineage refers to the ability to track and trace data flows, not just from source to destination but also from end result back upstream. Concretely put: data lineage should explain the values of verified data points by identifying and exposing the process that led to these values, including which sources played a role, which business rules were enacted, and any user interventions that happened along the way. Data lineage is a tool for diagnostics on data errors and helps field any questions from customers, internal audit, risk, regulators, or other users. Increasingly, it is a regulatory requirement and a common practice when supplying data to analytical models, as firms realize that the best models in the world will fall flat when fed with poor data. See our Data Lineage fact sheet for more information.
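
As an illustration of the kind of information a lineage record can carry for a single verified data point, here is a minimal sketch; the structure and field names are assumptions rather than any particular product’s schema:

from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    attribute: str                                           # e.g. "close_price"
    final_value: float
    sources: list[str] = field(default_factory=list)         # candidate feeds consulted
    rules_applied: list[str] = field(default_factory=list)   # validations / derivations
    interventions: list[str] = field(default_factory=list)   # manual overrides, if any

record = LineageRecord(
    attribute="close_price",
    final_value=101.35,
    sources=["FeedA@17:30", "FeedB@17:32"],
    rules_applied=["stale_check:pass", "tolerance_vs_prev_close:pass"],
    interventions=[],                                        # no manual touch points
)
print(record)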

We hope you find this blog insightful and helpful in your journey towards achieving data alpha. Let us know of any other data management questions you have via info@alveotech.com, and stay tuned for another post soon!

Alveo Blog Uncategorised

Business User Enablement in Financial Data Management

Watch the video of our Head of Product Management, Neil Sandle, talking about business user enablement and facilitating easy data access for business users.

Alveo Blog Data Governance

7 Data Sins: Insufficient model risk management

Stef Nielen Red Swan Risk Vlog

As a continuation of our 7 Data Sins series, Stef Nielen, Director Strategic Business Development at Alveo speaks with John Matway, CEO and founding partner at Red Swan Risk. During the discussion, Stef and John explore whether data models and data assignments are reliable enough to be trusted to navigate you through risky waters.

Q1: What are the challenges around modelling securities – why is it so challenging? I mean, when a company has just bought a risk system, doesn’t it deal with coverage out of the box?

A: Sometimes there is no suitable model or the right data might not be readily at hand (yet), which prompts one to resort to proxying. Here one wants to tread even more carefully to avoid creating additional model risk. Generally speaking, model risk occurs when models don’t behave as they ought to. This may be due to an insufficient analytical model, misuse of the model, or plain input errors such as bad market data, incorrect terms and conditions, or simply wrongly chosen reference data such as sector classifications, ratings, etc.

Why is this so important?

Models can misbehave at the security level for long periods before showing up at the portfolio level.  Perhaps the size of the hedge was small and has grown larger, or the volatility suddenly changed.  This may suddenly create distortions at the portfolio, benchmark, or higher aggregate level. These problems often surface during times of market stress and can be very resource-intensive to troubleshoot at a critical time.

Q2: Why is it so resource-intensive to change, troubleshoot, and manage data?

A: When rules are hardcoded or implemented in an inflexible manner (i.e. model queries and scripts are based on rigid and narrowly defined model schemas and inputs with too few degrees of freedom), the problem is often exacerbated, making it truly difficult to interrogate and correct changes when they are critically required. Too often, the developer or analyst is given a set of functional requirements that are too narrowly defined, based on the current state of holdings and securities.

Given the dynamic nature of portfolio holdings, OTC instruments, available market data, and model improvements, it is essential to have a very flexible mapping process with transparent and configurable rules that make it much easier to identify modelling issues and resolve them efficiently. A unified data model that tracks the data lineage of both model inputs and outputs (including risk statistics, stress tests, and simulations), model choices, mapping rules, and portfolio holdings provides a highly robust and efficient framework for controlling this process. The benefit of working with a commercial tool is that it has been designed to address a very wide range of instrument types, data fields, and market data sources, so you won’t outgrow its utility. So, in essence, having a unified model and data lineage capabilities combined implies less digging and troubleshooting for the business user.

Q3: Can we discuss some real-life examples perhaps?

A: Some examples are…

  1. Corporate bond credit risk derived from equity volatility using the CreditGrades model can cause significant distortions. A more direct method uses the observed pricing of single-name CDS or a sector-based credit curve. However, these must be properly assigned to the security, with either the correct CDS RED code or a waterfall structure for assigning the sector credit curve (see the sketch after this list). In the case of capital structure arbitrage, where there are corporate bonds at various seniorities and CDS, it is very important to be consistent in the mapping rules so that both the bond and the CDS have the same market data inputs.
  2. A similar issue occurs when using constant maturity commodity curves for convenience. These are easier to maintain than assigning the correct futures data set each time. Calendar spread risk is underestimated with constant maturity curves that share data. The negative front-month crude prices that occurred in April 2020 are an example of why constant maturity would have underestimated the risk significantly. (I like this example because PassPort is a good solution for managing commodity futures curve names in RiskMetrics.)
  3. Changing over to the new Libor curves will likely be a very painful process for banks unless they have a very flexible mapping process that can easily be configured to assign the new curves to the right security types. (This is a simple procedure with the Map Editor and PassPort).
  4. But perhaps a more benign example is that of modelling one’s complete book with the right mapping for each individual security (i.e. choosing the right risk factors as well as the correct reference data, such as ratings and sector classifications), while skipping that modelling for its benchmark. This modelling inconsistency between portfolio and benchmark will introduce a tracking-error risk which can be attributed completely to inconsistent data mapping rather than true market dynamics.
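
As an illustration of the waterfall assignment mentioned in the first example, here is a minimal sketch: prefer a single-name CDS curve looked up by an entity identifier, otherwise fall back to a sector/rating curve. The identifiers, curve names and two-step fallback are hypothetical:

SINGLE_NAME_CURVES = {"RED123456": "CDS_ACME_SNR_USD"}          # entity id -> CDS curve
SECTOR_CURVES = {("Utilities", "BBB"): "SECTOR_UTIL_BBB_USD"}   # (sector, rating) -> proxy

def assign_credit_curve(bond: dict) -> str:
    # Step 1: single-name CDS curve, if the entity is mapped.
    curve = SINGLE_NAME_CURVES.get(bond.get("red_code"))
    if curve:
        return curve
    # Step 2: sector/rating credit curve as a proxy.
    curve = SECTOR_CURVES.get((bond["sector"], bond["rating"]))
    if curve:
        return curve
    raise LookupError(f"No credit curve mapping for {bond['isin']}")

bond = {"isin": "US000000AA00", "red_code": None,
        "sector": "Utilities", "rating": "BBB"}
print(assign_credit_curve(bond))   # falls through to the sector curve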

In summary, to model things properly – be it a simple proxy or something more granular and exact – one needs a setup that can dynamically configure the user’s modelling choices and data mapping logic. And as market conditions and data availability evolve over time, one should have a system that can adapt. Both Alveo and Red Swan allow users to control their model and data mapping choices in a very flexible, transparent, user-friendly, and visual way. This doesn’t just help you during a setup or implementation phase; perhaps more importantly, it drastically improves your ever-evolving modelling choices and (proxy) coverage over time, as well as ongoing operational efficiencies. In short, it enables greater control over your model risk management.

Alveo Blog Data Governance

7 Data Sins Series: Serving Multiple Masters

There are different paradoxes in data management. One is that, quite often, firms have multiple different “master” databases for their price data, their customer data and the terms and conditions of the products they invest in, trade or issue. The record we have seen is a firm that had 32 different widely used databases just to keep financial product terms and conditions. And this is not even counting a large number of small local databases and spreadsheets that also stored some of this information.

The “sin” here is clear: avoid the redundant storing of information! Having multiple places where you store information leads to the need to reconcile and cross-compare and, in general, causes uncertainty as to the validity of data points. At best, you could hope to be consistent across your different databases. But at what cost?

There have been several reasons why firms set up multiple databases with essentially the same data:

  • Decision making and budgeting across departmental lines made it easier to do something at the local level rather than establishing firm-wide services
  • Lack of sound data governance and the tracking of metadata historically made it difficult to track permissions and data usage when consolidating onto a single database or single (external) service
  • Departments often added their own custom fields to their datasets, which could be specific identifiers for homegrown applications, specific product taxonomies, industry classifications or derived data elements.
  • Departments may have wanted privileged access to a dataset or may have had performance concerns that caused them to have their own local copy.

Needless to say, departments that rely the most on aggregated, enterprise-wide information, such as risk and finance, have suffered the most from a fragmented approach to data management and data storage, causing endless rework, reconciliation and verification of data sets.

Setting up departmental level stores of data may have made some sense ten or even five years ago.

However, with today’s managed data services this is no longer needed and here’s why:

  • Managed data services have come a long way in offering concrete business user enablement and easy data access via browsers, enterprise search, easy integration into user workflows and APIs for those needing programmatic access.
  • Today’s managed data services include a comprehensive approach to tracking metadata including permissions, usage rights, quality checks performed and data lineage information – which provides a full explanation of what sources, formulas or human actions led to a certain data value.
  • New cloud-based services provide the required scalability and uptime requirements to serve different departments.
  • Providers such as Alveo, via their Business Domain Models, provide the capability of using a firmwide data set while catering to idiosyncratic local requirements – all in the same service.

Keeping data stored in redundant copies may have made sense at some point to prevent resource conflicts and stop applications or users from waiting for access. However, the flipside of different master databases is redundant entry points of commercial data feeds into organizations – often leading to avoidable data spend. In our experience, teams can best be connected through shared and transparent data assets that easily integrate into their existing workflows, with the capability to augment data sets to cater to local requirements. Our PaSS managed data service does exactly that.

Alveo Blog Data Governance

7 Data Sins Series: Achieving and keeping Data Quality from one-off to a continuous process

Moderator: Alexis Bisagni

Speaker: Boyke Baboelal

As a continuation of our 7 Data Sins series, Boyke Baboelal, Strategic Solutions Director in the Americas, speaks with Alexis Bisagni about data quality and whether it’s a continuous fight against uncertainty. Such surprises in data can arise from poor data quality management and from not keeping track of metadata such as changes, permissions, and quality checks.

Q: Leaning on your experience in financial data management – what have you observed with respect to data quality efforts? (Timemarker 2:00)

A: What I have observed is that there is a wide range of Data Quality maturity within organizations. Some organizations run regular data cleansing activities against their database (which requires manual effort and planning), some have triggers that check data when it is stored (but these systems are difficult to maintain and scale), and others have an Enterprise Data Management system that manages the entire data flow – but this is often still suboptimal.

Why is that? Data management teams have been downscaled in the last decade, while data volumes, types, and complexity have increased. There is a strong day-to-day focus in operations, with little information on where structural issues or bottlenecks are. This results in work being performed in less optimal and reactive ways. In addition, organizations are under more scrutiny from regulators, requiring more controls, and from data vendors, who want to make sure entitlements are adhered to. All of this makes data management more complex. Existing EDM solutions are NOT able to meet new requirements in a cost-effective way.

Q: In your opinion, what is needed to make existing EDM solutions capable of meeting new requirements in a cost-effective way? (Timemarker 3:50)

A: Data management implementations and EDM platforms focus on automating the entire data flow end-to-end. However, simply processing data is not enough to ensure operational efficiency, transparency, and compliance. The critical component here is additional information that can be used to understand what is going well and what can be improved. Metadata, operational metrics, usage statistics, audit trails, and data lineage information are key to taking data management to the next level.

Q: Where does an organization even start to get a grip on this? (Timemarker 5:05)

A: The first thing to do is to understand what is needed. A lot of organizations start with an inventory of what they currently have and the requirements from the driver for the change, for example a regulatory requirement. This approach results in being less adaptable to future requirements. So how can we do better? First, it is important to have a data quality framework, including Data Governance. Starting with a Data Quality Framework forces you to look beyond your current needs and to view the requirements from different angles. A framework also puts you in a mindset to continuously improve. A proper data management solution should support a data quality framework and collect all the metadata.

Q: Do you think that buy vs build is a relevant question? (Timemarker 6:26)

A: No, in my opinion, this is not a relevant question. The reason for that is that data management is often over-simplified due to a lack of understanding of data quality in a larger context. I agree that if you only need a small number of fields for your securities from a specific vendor every day, that would be easy to implement. But thinking the concept through, building a data management system in-house for today’s needs requires significant effort and detailed knowledge. Even with 20+ years of experience as a Financial Engineer in the Risk and Data Management space, when I think of building a system from scratch, I get anxious. The reason is that building a system in-house would involve large project risks, and the sad thing is that the system will most likely not be future-proof or benefit from the experience of peers in the industry. An adaptable off-the-shelf system will reduce a lot of that risk.

Q: When you have operational, usage, and lineage data, what comes next? (Timemarker 8:42)

A: This is when the magic starts. What I mean by that is it opens data management to the world of intelligence, analytics, and further automation. Having this information will give you more insight into your operations, what works well, and what doesn’t. The result is that you will gain more intelligence in your operations and that intelligence will enable you to comply with regulatory requirements, vendor agreements, and internal control frameworks. Having all this insight will allow your operations and data quality to get better day-by-day, resulting in continuous improvement.

Q: Continuous improvement sounds nice, but what about the bottom line? (Timemarker 10:18)

A: Increased operational efficiency, improved data quality, reduced data risks, compliance with regulatory requirements, vendor agreements, internal control frameworks, and SLAs, will in the end reduce overall TCO.

To summarize, for the financial services industry in the current environment, making the most of their data assets is not a nice-to-have – it is a critical must-have. Firms not only need to manage increasing volumes and diversity of data sources, they also need to keep close track of their metadata, i.e. the different quality aspects that help determine whether data is fit for purpose, optimize sourcing and validation processes and, in general, improve operational efficiency.

Alveo Blog Data Management

7 Data Sins Series: Metadata matters

Tracking the contextual information of financial data

Asset managers and other financial services firms are faced with massively increasing amounts of data, both in the investment process as well as in client and regulatory reporting processes. Providing easy access for different user types in terms of reporting, querying, discovery and modelling is perhaps the most important data management function.

In our seven data sins series, we have been exploring different aspects and challenges of data management. One area which is not often the primary focus or driver of improvement initiatives is that of tracking the metadata surrounding basic financial information such as issuer data, corporate actions, terms and conditions, and, above all, market data.

Tracking and exposing contextual information can happen top-down – at a data set level – as well as bottom-up – looking at the metadata of individual data attributes. To cater to the requirements of different stakeholders, firms need to do both.

From a top-down perspective, it is critical to know what the data sets are that a firm has at its disposal. These data sets include commercial data sets from market and reference data providers but also public data sources, internally produced (proprietary) data and data that comes from business relationships such as customer data.

Different Channels of Sourcing Financial Information

Properly harnessing all these different data sets requires, first of all, putting stakeholders within a firm on a common footing by exposing what data is available, through a data catalog or other inventory of data sets. Contextual information at a data set level includes usage permissions as well as any license restrictions when it comes to commercially acquired data. This can also include any geographic restrictions on data usage or transfer imposed by different legislations. Mostly, metadata at a data set level is about where a data set can be used: which use cases, user roles, business applications, departments, or geographies. Metadata at this level could also include sourcing frequency, destinations (including current usage by users and applications), any quality checks set at the data set level, and data derivation rules or models that the data set feeds into. As the number and diversity of data sets continue to grow, keeping track of what data is already available is critical to increasing productivity, shortening the turnaround time in getting the data you need, and preventing redundant data sourcing.
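
As an illustration of data-set-level metadata, here is a minimal sketch of a catalog entry and a basic permission check; the field names, values and the simple lookup are assumptions made for illustration:

catalog_entry = {
    "dataset": "Vendor X corporate actions",
    "owner": "Market Data Operations",
    "permitted_use": ["risk", "operations"],                # licensed use cases
    "license_restrictions": "no external redistribution",
    "permitted_regions": ["EU", "UK"],                      # geographic restrictions
    "sourcing_frequency": "daily 06:00 UTC",
    "destinations": ["portfolio accounting", "client reporting"],
    "quality_checks": ["completeness vs expected universe", "timeliness SLA 07:00"],
    "feeds_derivations": ["dividend-adjusted price series"],
}

def can_use(entry: dict, use_case: str, region: str) -> bool:
    """Answer a basic 'can this data set be used here?' question from the catalog."""
    return (use_case in entry["permitted_use"]
            and region in entry["permitted_regions"])

print(can_use(catalog_entry, "risk", "EU"))        # True
print(can_use(catalog_entry, "marketing", "US"))   # False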

From a bottom-up perspective, tracking metadata includes tracing the individual actions that took place on an attribute level. This would include tracing the lineage of a data field: for example, what sources, business rules, and validations went into a price used to value a specific position. Increasingly, firms need to document the data points that went into any decision. Clients, regulators, and auditors alike may dig into the background as to the value of individual data fields. Regulation such as MiFID II has imposed further requirements on documenting decision making around order execution.

The trend towards closer integration of data and analytics has further increased the need to document the properties of data sets. Advances in analytics have sped up automated decision making and assessment of information, increasing the risk of financial models and algorithms going off the rails if fed with inappropriate data – without the ability to explain what happened, as certain algorithms are black boxes.

Adopting a strategy of Data Quality Intelligence, i.e. tracking the data quality rules that acted on the data as well as their results, will help both to continuously improve data operations and to shed light on whether a data set is fit for purpose in a specific context. Tracking the impact of exceptions raised, i.e. whether they were false positives or led to manual intervention, helps to calibrate rules and optimize business logic. Furthermore, tracking the derivation operations that a data set feeds into will help document its proper use cases.
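
As an illustration of using exception outcomes to calibrate rules, the sketch below derives per-rule false-positive rates from a hypothetical exception log; the rule names and the 50% review threshold are assumptions:

from collections import defaultdict

# Each logged exception: (rule_name, outcome), where the outcome is either
# "false_positive" (value confirmed correct) or "corrected" (manual fix applied).
exception_log = [
    ("price_jump_check", "false_positive"),
    ("price_jump_check", "false_positive"),
    ("price_jump_check", "corrected"),
    ("stale_price_check", "corrected"),
]

def false_positive_rates(log):
    counts = defaultdict(lambda: {"false_positive": 0, "corrected": 0})
    for rule, outcome in log:
        counts[rule][outcome] += 1
    return {rule: c["false_positive"] / (c["false_positive"] + c["corrected"])
            for rule, c in counts.items()}

for rule, rate in false_positive_rates(exception_log).items():
    action = "review / loosen threshold" if rate > 0.5 else "keep as is"
    print(f"{rule}: false-positive rate {rate:.0%} -> {action}")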

Data quality has different aspects including timeliness, completeness, and accuracy, and different use cases can require different trade-offs. Tracking the rules, how they have changed over time as well as the changes to the actual data values are required to have a complete picture.

Alveo has recently launched its Ops360 solution, which provides users with a complete overview of pricing information, reference data and other data sets, their sourcing status, and any exceptions flagged by business rules. It also provides for the configuration of different workflows to make sure data is properly used. Through our data lineage capabilities, we provide complete insight into the origin of any data field. A quick intro video can be found here.

Metadata matters. Tracking and easily exposing user permissions, quality rules, sources, and destinations as well as changes over time is increasingly part and parcel of core capabilities in data management.