
Around the World in 30 Days – Data Management Insights from Around the Globe

Different regions have different financial priorities and initiatives. During our Summer Series, we’re stopping in 6 countries to discuss the top issues they’re facing when it comes to financial services and new regulations.

Scratch your travel itch and come along with us over the next 30 days to gain a new perspective on your approach to data management.

Putting ESG data to work: overcoming data management and data quality challenges

Environmental, Social and Governance (ESG) based investing is growing rapidly. The data landscape to support ESG use cases includes screening indicators such as board composition and energy use, third-party ratings as well as primary data such as waste and emissions. There is a wide range of primary data sources, aggregators and reporting standards. ESG ratings in particular diverge widely, reflecting different methodologies, input data and weights, which means investors need to go to the underlying data for their decision-making.

Role of ESG in investment operations

Depending on the investment style, ESG information plays a key role in research, fund product development, external manager selection, asset selection, performance tracking, client reporting, regulatory reporting, as well as voting. In short, ESG data is needed through the entire chain and must be made available to different stakeholders across the investment process.

Increasingly, ESG is becoming an investment factor in its own right. This means ESG indicators and ESG-based selection criteria need to be distilled from a broader set of primary data points, self-declarations in annual reports and third-party assessments. Additionally, ESG information needs to be standardized so that company-level information can be rolled up to portfolio level and ESG criteria can be tracked against third-party indices or external reporting requirements. However, many corporates do not (yet) report sufficient information, forcing investors to proxy or estimate missing data points or to leave those companies outside investment consideration altogether.
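To make the proxying step concrete, here is a minimal sketch in Python, assuming a pandas DataFrame with hypothetical column names and figures: missing emissions intensities are filled with the sector median and flagged as estimated, so reported and proxied values remain distinguishable downstream.

```python
import pandas as pd

# Hypothetical company-level ESG data; names and figures are illustrative.
companies = pd.DataFrame({
    "issuer": ["A Corp", "B Corp", "C Corp", "D Corp"],
    "sector": ["Utilities", "Utilities", "Tech", "Tech"],
    "co2_intensity": [412.0, None, 55.0, 60.0],  # tCO2e per $m revenue
})

# Flag missing values before filling so downstream users can always
# distinguish reported data points from proxied ones.
companies["co2_is_estimated"] = companies["co2_intensity"].isna()

# Proxy missing data points with the sector median.
companies["co2_intensity"] = (
    companies.groupby("sector")["co2_intensity"]
    .transform(lambda s: s.fillna(s.median()))
)
```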

Data management challenges

Legislatures are promoting sustainable investment by creating taxonomies that specify which economic activities can be viewed as environmentally sustainable. From a data management perspective, this classification refines, and provides an additional lens on, traditional industry sector classifications.

Other ingredients are hard numbers such as carbon footprints (detailing scope 1, 2 and 3 emissions, clarifying whether scope 3 is upstream or downstream, and so on), gender diversity, water usage and board composition. More qualitative data elements include sustainability scores, ratings and other third-party assessments that condense a range of underlying indicators into summary statistics. A key requirement is the accurate linking of financial instruments to entities.
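As an illustration of that linking requirement, the sketch below (using hypothetical identifiers) joins an instrument master, keyed by ISIN, to entity-level ESG data keyed by LEI, so that data reported per legal entity flows down to every instrument that entity has issued.

```python
import pandas as pd

# Instrument master: each instrument (ISIN) maps to its issuing entity (LEI).
instruments = pd.DataFrame({
    "isin": ["US0000000001", "XS0000000002", "DE0000000003"],
    "issuer_lei": ["LEI_AAA", "LEI_AAA", "LEI_BBB"],
})

# ESG data is typically reported per legal entity, not per instrument.
entities = pd.DataFrame({
    "issuer_lei": ["LEI_AAA", "LEI_BBB"],
    "scope1_emissions": [120_000.0, 4_500.0],  # tCO2e, illustrative
})

# An accurate instrument-to-entity link lets entity-level ESG data
# reach every bond and share the entity has issued.
linked = instruments.merge(entities, on="issuer_lei", how="left")
```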

As ESG investment criteria become operationalized, ESG data management is rapidly evolving. Whenever new data categories or metrics are introduced, data management practices typically start with improvisation through desk-level tools, including spreadsheets, local databases and other workarounds. This is gradually streamlined, centralized, operationalized and ultimately embedded into core processes to become business as usual. Currently, the investment management industry is roughly halfway through that process.

ESG data quality issues

Given the diversity in ESG data sources and the corresponding variety in data structures, as well as different external reporting requirements, ESG data quality issues prevent effective integration into the end-to-end investment operation.

In the table below, we highlight some of the more common data quality and metadata considerations, along with typical examples from financial services and how they surface in the ESG data space.

Table 2: Example ESG data management challenges

What is required to fully embed ESG data into investment operations?

To overcome these data quality issues, firms need a process that seamlessly acquires, integrates and verifies ESG information. The data management function should facilitate the discoverability of information and effective integration into business user workflows. In short, data management should service users from the use case down, not from the technology and data sets up.

ESG data management capabilities should facilitate the easy roll-up of information from instrument to portfolio and blend ESG with pricing and reference data sets, so it becomes an integral part of the end-to-end investment management process.
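A minimal sketch of such a roll-up, assuming holdings already carry instrument-level ESG scores (all names and numbers are illustrative): the portfolio-level score is simply the holding-weighted average.

```python
import pandas as pd

# Hypothetical holdings with instrument-level ESG scores attached.
holdings = pd.DataFrame({
    "isin": ["US0000000001", "XS0000000002", "DE0000000003"],
    "weight": [0.5, 0.3, 0.2],        # portfolio weights summing to 1
    "esg_score": [72.0, 55.0, 80.0],  # instrument-level ESG score
})

# Roll up from instrument level to portfolio level as a weighted average.
portfolio_score = (holdings["weight"] * holdings["esg_score"]).sum()
print(f"Portfolio ESG score: {portfolio_score:.1f}")  # 68.5
```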

Data derivation capabilities and business rules can spot gaps and highlight outliers, whether against historical patterns or within a peer group, industry or portfolio. Additionally, historical data to run scenarios can help with adequate risk and performance assessment of ESG factors. Having these capabilities in-house is good news for all users across the investment management process.

Risk Mitigation: Maximising Market Data ROI

Watch the video below to hear our CEO, Mark Hepsworth, sit down with 3di CEO John White to discuss risk mitigation and how institutions can truly maximize ROI.

Interview Questions:

  1. What are some of the major issues you are seeing from clients around market data and have these issues changed over the past few years?
  2. Most institutions are increasing their spending on market data, but how do they ensure they maximize the ROI on this spend?
  3. How important is data lineage in allowing clients to use market data efficiently?
  4. As clients are moving more market data infrastructure and services to the cloud, how is this impacting their use of market data?
  5. Are you seeing organizations looking at both market data licensing and data management together and if so why?

Post-Brexit, post-pandemic London

For the City of London, the last few years have been eventful, to say the least. Midway through the worldwide Covid pandemic, Brexit finally landed, with a free trade agreement agreed on Christmas Eve 2020. A Memorandum of Understanding on Financial Services was agreed at the end of March 2021. However, it remains to be signed and is entirely separate from any decisions on regulatory equivalence.

Large international banks prepared for the possibility of a hard Brexit by strengthening their European operations in the years leading up to it. However, the discussion on the materiality of EU-based operations will continue to rage for some time. ESMA adopted decisions to recognize the three UK CCPs under EMIR. These recognition decisions took effect the day after the end of the transition period and continue to apply for as long as the equivalence decision remains in force, currently until 30 June 2022. One immediate effect of Brexit was a sharp drop in share trading volumes in January, with volume moving to continental Europe. In other sectors, Singapore and New York are well-positioned to nibble at the City's business.

Financial services, together with industries such as fisheries, remains one of the most politicized topics in the EU-UK relationship. The UK government must consider to what extent it should diverge from the EU's system of financial services regulation. It is unlikely that any announcement on equivalence decisions will be forthcoming in the short term: a decision to grant full regulatory equivalence would depend on UK alignment with EU regulation on a forward-looking basis, which would defeat the whole point of Brexit. Equivalence may not be worth the loss of rulemaking autonomy that is likely to be a condition of any EU determination. And the longer equivalence decisions are delayed, the less valuable they become, as firms adapt to the post-Brexit landscape.

As the financial services sector comes to terms with the post-Brexit reality, it must prepare for regulatory divergence, with the degree of dispersion still an open question. Differences can emerge in clearing relationships, pre- and post-trade transparency, investor protection, requirements on (managed services) providers, derivatives reporting, solvency rules, and future ESG disclosure requirements. Having a flexible yet rigorous data management infrastructure in place, and using suppliers with operations in both the UK and the EU, will help firms mitigate the impact of this divergence and prepare for the future.

FRTB: the need to integrate data management and analytics

After some delays, the deadline for FRTB implementation is now approaching fast. As of January 1, 2023, banks are expected to have implemented the newly required processes and to begin reporting under the new Fundamental Review of the Trading Book (FRTB) standards. With the Libor transition also taking place over the coming years, these are busy times in the world of market data.

FRTB poses material new demands on the depth and breadth of market data, risk calculations, and data governance. A successful FRTB implementation will need to address new requirements in market data, analytical capabilities, organizational alignment, supporting technology and overall governance. In this blog, I focus on the need for integrated data management and analytics.

FRTB requires additional market data history and sufficient observations for internal model banks to ascertain whether risk factors are modellable. These observations can be committed quotes or transactions, sourced from a bank's internal trading systems and supplemented with external sources. Apart from trade-level data, additional referential information is needed, such as the liquidity horizon and whether risk factors are in the reduced set or not.
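As a rough illustration, the sketch below encodes one common reading of the risk factor eligibility test: at least 24 real price observations in the preceding 12 months with no 90-day window containing fewer than four, or at least 100 observations overall. A production implementation must follow the exact regulatory text and the bank's supervisory interpretation.

```python
from datetime import date, timedelta

def is_modellable(obs_dates: list[date], as_of: date) -> bool:
    """Simplified sketch of the FRTB risk factor eligibility test."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in obs_dates if window_start <= d <= as_of)
    if len(obs) >= 100:
        return True
    if len(obs) < 24:
        return False
    # Slide a 90-day window across the year; each must hold >= 4 observations.
    start = window_start
    while start + timedelta(days=90) <= as_of:
        end = start + timedelta(days=90)
        if sum(1 for d in obs if start <= d < end) < 4:
            return False
        start += timedelta(days=1)
    return True

# Example: weekly committed quotes comfortably pass the test.
weekly = [date(2022, 1, 3) + timedelta(weeks=i) for i in range(52)]
print(is_modellable(weekly, date(2022, 12, 31)))  # True
```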

The market data landscape continues to broaden. Apart from the traditional enterprise data providers, many firms that collect market data and trade-level information as part of their business, including brokerages, clearinghouses and central securities depositories, now offer this data directly. Different data marketplaces have been developed, providing further sourcing options for market data procurement. Effectively sourcing the required additional data, and monitoring its usage to get the most out of the market data spend, is becoming a key capability.

Organizational alignment between front office, risk and finance is required as well. Many firms still run different processes to acquire, quality-proof and derive market data. This often leads to failures in backtesting and in comparing front-office and mid-office data. FRTB causes the cost of inconsistency to go up. Regulatory considerations aside, clearly documenting and using the same curve definitions, cut-off times to snap market data prices and models to calculate risk factors can reduce operational cost as well. Clean and consistent market data makes for more effective decision-making and risk and regulatory reporting.

FRTB accelerates the need for market data and analytics to be more closely integrated. Advanced analytics is no longer mostly used at the end-point of data flows (e.g. by quants and data scientists using desk-level tools); it is now increasingly used in intermediate steps in day-to-day business processes, including risk management.

Data quality management, too, is increasingly being automated. Algorithms can deal with many exceptions (e.g. automatically triggering requests to additional data sources), and with a feedback loop in place, the proportion of exceptions requiring human eyes can go down. Data management is a foundational capability for successfully preparing data for machine learning. Regulators are taking a much closer look at data quality and at the processes that operate on the data before it is fed into a model, scrutinizing provenance, audit and quality controls.

To improve any process, it is important to have a feedback loop that provides built-in learning to adjust the mix of data sources and business rules. In data quality management, this learning has to be both:

  • Continuous and bottom-up. Persistent quality issues should lead to a review of data sources, for example by using false positives or information from subsequent manual intervention to tune the screening rules. Rules that look for deviations against market levels while taking prevailing volatility into account will naturally self-adjust (see the sketch after this list).
  • Periodic and top-down. This could, for example, include looking at trends in data quality numbers, the relative quality of different data feeds and demands of different users downstream. It also includes a review of the SLA and KPIs of managed data services providers.
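Here is a sketch of the self-adjusting rule mentioned above, with illustrative data and thresholds: the screening tolerance is scaled by recent volatility, so the same rule tightens in calm markets and widens in turbulent ones. The multiplier itself is exactly the kind of parameter the feedback loop would tune using false-positive statistics.

```python
import pandas as pd

# Hypothetical daily prices; the final move is the one under review.
prices = pd.Series(
    [100.0, 100.5, 99.8, 101.2, 100.9, 108.0],
    index=pd.date_range("2022-01-03", periods=6, freq="B"),
)
returns = prices.pct_change()

# Tolerance scales with prevailing volatility (a short rolling window here),
# so the rule self-adjusts to market conditions.
vol = returns.rolling(window=3).std().shift(1)  # volatility known before today
multiplier = 4.0  # candidate for tuning via the false-positive feedback loop
suspect = returns.abs() > multiplier * vol
print(returns[suspect])  # flags the ~7% jump on the final day
```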

If you cannot assess the accuracy, correctness and timeliness of your data sets, or access them, slice and dice them and cut them as granularly as you need for risk and control purposes, then how can you do what matters: make the correct business calls based on that same data?

Data management and analytics are both key foundational capabilities for any business process in banks, but most definitely for risk management and finance, the functions where all data streams come together to enable enterprise-level reporting.

The Importance of Data as an Asset

Watch the video below to hear our Sales Director for the APAC region, Daniel Kennedy, discuss why the way we look at data is changing. Data is universally seen as an asset but, as with other assets, it can depreciate quickly if you don't manage it. So what does it take to maintain the value of your data?

Interview Questions:

  1. Why is data considered a new asset class today?
  2. In your experience, what are the critical elements of data life cycle management?
  3. What else do firms need to consider when dealing with this highly valuable asset?

Engineering Trends in Financial Data Management

Martijn Groot speaks from Berlin with Mark Hermeling about how data management technology is advancing rapidly to help financial services firms onboard, process and propagate data effectively, so they get the most out of their content. Do you know which open-source tools, standards or cloud strategies are best for you?

2021 Summer Series eBook

FRTB and optimal market data management Whitepaper

It discusses the challenges of FRTB, their overlap with other risk and valuation needs, and business user enablement.

How we plan to optimize customer service through the current crisis

These are uncertain and difficult times, and we hope that you and your teams remain safe and well. As the coronavirus (Covid-19) outbreak continues to progress, we remain committed to ensuring the continuity of our services across the world.

Alveo is a resilient and stable company that is in a good position to operate through a period of extended disruption. We are a global leader in providing reliable, high-performance software solutions and managed services for financial data management. Our headquarters are in London but we also have offices in New York, the Netherlands, Singapore and São Paulo, and have long worked with geographically dispersed teams.

The stability and reach this provides enable us to continue providing the highest level of service to customers throughout the current situation. We understand the impact the coronavirus is having on organisations and have the capability to deliver our normal service levels through these unsettling times. In particular, we appreciate that markets have been extremely volatile and that this is a very challenging time for our clients, who may need additional support.

We invoked our business continuity plan in advance of the remote-working guidelines, and new product development, maintenance enhancements and customer support are proceeding as usual. With safety everyone's top priority, we will operate in line with World Health Organisation (WHO) and government guidelines. While we will maintain contact with customers by phone and email rather than face-to-face, we are focused on keeping things as close to business as usual as possible, without any reduction or break in the service we deliver.

Should you require further information, please don’t hesitate to get in touch with us either by contacting your account manager or by directly reaching out to me at [email protected].


How to reduce cost and improve Service Levels using AI and Machine Learning with Data Management Automation

By Boyke Baboelal

Process automation

Financial Data Management has focused mostly on day-to-day operations to deliver quality data to critical downstream applications. Little has been done to take advantage of AI and Machine Learning to further improve service levels and reduce total cost of ownership (TCO). However, Financial Data Management contains many use cases where new algorithms can improve operations and data quality. These range from setting up new instruments using NLP to process product sheets, emails and PDF files, to data enrichment activities such as proxying incomplete data with information from related risk factors, found using recommender systems built with explainable techniques. With Data Management Automation, operational efficiency improves and risks are reduced as manual activities are eliminated, data quality improves through algorithms that process more data in better ways, and turnaround time shrinks, reducing the risk of missing SLAs.

Exception handling and anomaly detection

One of the key controls in Financial Data Management is the detection and validation of suspect data. Validation can be a very time-consuming activity, depending on the universe of instruments but also on the performance of the rules, i.e. the number of false positives they generate. Machine Learning allows for advanced rules that compare more related information with each other, thereby reducing the number of false positives: for example, using similar instruments to confirm price movements or verify descriptive values, but also checking the frequency of updates, combinations of values and/or completeness of data. Data Management Automation contains building blocks for organizations to build efficient data checks using Machine Learning.
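A minimal sketch of peer-group confirmation, using hypothetical data: a large price move is only escalated when similar instruments did not move the same way, which suppresses the false positives a static threshold would raise on market-wide moves.

```python
import pandas as pd

# Hypothetical daily returns: the instrument under review plus a peer
# group of similar instruments (same sector, currency and maturity bucket).
returns = pd.DataFrame({
    "bond_under_review": [0.001, -0.002, -0.035],
    "peer_1": [0.002, -0.001, -0.031],
    "peer_2": [0.000, -0.003, -0.029],
}, index=pd.date_range("2022-03-01", periods=3))

latest = returns.iloc[-1]
move = latest["bond_under_review"]
peer_move = latest.drop("bond_under_review").median()

# Escalate a large move only when peers did NOT move similarly; a
# market-wide move that a naive static threshold would flag is accepted.
is_suspect = abs(move) > 0.02 and abs(move - peer_move) > 0.01
print(f"Suspect: {is_suspect}")  # False: peers confirm the move
```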

Advanced controls and monitoring

In the data flow, from acquiring source data to distributing instruments, there are many data risks that can impact the quality of delivered data. As stated before, exception handling is an important control, but there are many more. For example, controls should monitor whether all instruments have rules applied to them and whether these rules are consistently applied. Data Management Automation and Data Quality Intelligence provide frameworks and building blocks for organizations to methodically identify data risks, create controls, and monitor risks through key risk indicators (KRIs).
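As a toy example of such a control, the snippet below computes a rule-coverage KRI over the instrument universe; identifiers and the tolerance are illustrative.

```python
# Hypothetical control: a KRI tracking validation-rule coverage across
# the instrument universe.
instrument_universe = {"US0001", "US0002", "XS0003", "DE0004"}
instruments_with_rules = {"US0001", "US0002", "DE0004"}

coverage = len(instruments_with_rules & instrument_universe) / len(instrument_universe)
kri_breached = coverage < 0.99  # illustrative control tolerance

print(f"Rule coverage: {coverage:.1%}, KRI breached: {kri_breached}")
# Rule coverage: 75.0%, KRI breached: True
```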

Compliance checking

Data management processes require many manual activities, and these activities should be performed according to documented procedures, for example to allow for transparency, lineage and further analysis. Do resources follow procedures and show sufficient diligence? Are changes clearly documented? Do all instruments have the proper ownership and security settings? Anomaly detection, NLP and automated process discovery can help organizations check and improve compliance with internal guidelines.
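A simple compliance check along these lines might scan the audit log for manual changes lacking a documented reason; the field names here are hypothetical.

```python
# Hypothetical audit-log check: every manual change should carry a
# documented reason for transparency and lineage purposes.
audit_log = [
    {"user": "analyst1", "field": "price", "reason": "broker confirmation"},
    {"user": "analyst2", "field": "rating", "reason": ""},
]

undocumented = [e for e in audit_log if not e["reason"].strip()]
print(f"{len(undocumented)} change(s) without a documented rationale")
```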

In short, data management capabilities can be significantly expanded using Machine Learning. ML techniques can reduce workload and cost by automating manual processes, improve data quality through better exception handling, support compliance with internal and external guidelines through advanced analytics, and reduce risks through advanced controls.


Complex Data Risks with the Adoption of AI

Boyke Baboelal, Strategic Solutions Director Americas at Alveo, shares his insights in the January edition of PRMIA's Intelligent Risk.