
Data Quality and Supervising By Example

Boyke Baboelal

ESMA gains actionable intelligence through data quality frameworks and processes


Introduction

On April 15th, 2021, the European Securities and Markets Authority (ESMA) published its first Data Quality Report, which summarizes its supervisory activities and ongoing efforts to improve the quality of data reported to Trade Repositories (TRs) under the European Market Infrastructure Regulation (EMIR) and the Securities Financing Transactions Regulation (SFTR).

ESMA actively uses EMIR and SFTR data to monitor systemic and financial stability risks, and it therefore assesses and monitors the quality of regulatory reporting using frameworks, processes, and tools that cover the different data quality dimensions.


ESMA’s approach

ESMA and National Competent Authorities (NCAs) started a major project in 2014 to improve the quality and usability of transaction data reported by counterparties and made available by the TRs. The project, the Data Quality Action Plan (DQAP), addresses all data risks that can degrade data quality and thereby affect policy work, the NCAs’ supervision of reporting counterparties, and ESMA’s supervision of TRs.

ESMA created a Data Quality Assessment Framework that identifies control methods and techniques to detect issues across the key data quality dimensions of the EMIR and SFTR regimes. The controls include the Data Quality Review, an annual quantitative assessment, performed by the NCAs, of the quality of data reported by counterparties. In addition, because certain validation rules are challenging to implement at TR level, ESMA performs more complex validation checks on the reported values as part of its Abnormal Values process, which also supports the NCAs in their supervisory activities.
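To make these two types of control concrete, here is a minimal sketch in Python, assuming a heavily simplified trade report; the field names (trade_id, notional, execution_date, maturity_date) and thresholds are illustrative stand-ins, not the actual EMIR/SFTR validation rules or ESMA’s Abnormal Values methodology:

    from datetime import date
    from statistics import median

    # Hypothetical, simplified trade report; real EMIR/SFTR reports have many more fields.
    def validate_report(report: dict) -> list[str]:
        """Field-level validation of the kind a TR applies at submission time."""
        errors = []
        if not report.get("trade_id"):
            errors.append("trade_id is missing")
        if report.get("notional") is None or report["notional"] <= 0:
            errors.append("notional must be a positive number")
        if report.get("maturity_date") and report.get("execution_date"):
            if report["maturity_date"] < report["execution_date"]:
                errors.append("maturity_date precedes execution_date")
        return errors

    def flag_abnormal_values(values: list[float], threshold: float = 5.0) -> list[int]:
        """Cross-report check in the spirit of an abnormal-values process: flag values
        implausibly far from the rest, using a robust median/MAD score."""
        med = median(values)
        mad = median([abs(v - med) for v in values])
        if mad == 0:
            return []
        return [i for i, v in enumerate(values) if abs(v - med) / mad > threshold]

    # Example usage: the first report fails two validation rules; the last value is flagged.
    report = {"trade_id": "T1", "notional": -5_000_000,
              "execution_date": date(2021, 3, 1), "maturity_date": date(2021, 2, 1)}
    print(validate_report(report))
    print(flag_abnormal_values([1.0, 1.1, 0.9, 1.05, 950.0]))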

To incorporate feedback from users of EMIR and SFTR data, ESMA set up an issue log to capture observed data quality issues and to ensure they are followed up on, either directly or through the NCAs. ESMA underlines the importance of working further with users to enhance the detection and remediation of data quality issues.


Data Quality Insights

To improve the quality of data, it is critical to define what data quality means and how to measure it, so that progress can be tracked, the effectiveness of approaches can be determined, and remediation actions can be taken when quality levels drop. In ESMA’s case, a valuable KPI for the quality of trade data submitted to the TRs is the rate at which the TRs reject that data (see chart 25 from ESMA’s 2020 Data Quality Report). For SFTR submissions, the rejection rate has dropped significantly since the reporting obligations came into effect and continues to decrease gradually over time, indicating a clear improvement in data quality.

Chart 25: SFTR monthly submission volumes and rejections
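To make the KPI concrete, a minimal sketch of how a rejection rate could be computed per reporting period, assuming monthly submission and rejection counts are already available (the periods and counts below are illustrative, not ESMA’s figures):

    # Monthly submission and rejection counts (illustrative numbers, not ESMA data).
    monthly_counts = {
        "2020-07": {"submitted": 120_000, "rejected": 18_000},
        "2020-10": {"submitted": 150_000, "rejected": 12_000},
        "2021-01": {"submitted": 160_000, "rejected": 8_000},
    }

    def rejection_rates(counts: dict) -> dict:
        """Rejection rate per period: rejected submissions as a share of all submissions."""
        return {period: c["rejected"] / c["submitted"] for period, c in counts.items()}

    # Example usage: prints a declining rate (15.0%, 8.0%, 5.0%) for the sample counts above.
    for period, rate in sorted(rejection_rates(monthly_counts).items()):
        print(f"{period}: {rate:.1%}")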

The ability to slice and dice data quality information is essential for identifying patterns and trends in issues and structural data quality problems, and it helps determine which areas to focus on for improvement. Chart 19 from ESMA’s 2020 Data Quality Report illustrates which fields fail validation rules most often and how rejections for those fields are trending over time. In chart 10 from the same report, a change in the trend of open contracts indicated a reporting issue; once the issue was resolved, the expected trend resumed.

Chart 19: Top failing fields from re-validation of aggregated data
Chart 10: Open contracts reported by one TR
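As a sketch of the kind of slicing behind such charts, the snippet below aggregates rejection records by failing field and by month using pandas; the record layout and field names are assumptions for illustration, not ESMA’s actual schema:

    import pandas as pd

    # Assumed layout: one row per rejected submission with the failing field and the date.
    rejections = pd.DataFrame({
        "date": pd.to_datetime(["2020-08-03", "2020-08-14", "2020-09-02",
                                "2020-09-20", "2020-09-21", "2020-10-05"]),
        "failed_field": ["Valuation amount", "Maturity date", "Valuation amount",
                         "Notional", "Valuation amount", "Maturity date"],
    })

    # Which fields fail validation most often (cf. chart 19)?
    top_failing = rejections["failed_field"].value_counts()
    print(top_failing)

    # How are rejections per field trending over time?
    monthly_trend = (rejections
                     .groupby([rejections["date"].dt.to_period("M"), "failed_field"])
                     .size()
                     .unstack(fill_value=0))
    print(monthly_trend)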

Viewing and analyzing the exceptions generated by controls such as validation rules and reviews is not enough. New types of issues may not be picked up by existing controls and may pass through undetected. That is why it is critical to build a feedback loop from end users into the framework, monitor the effectiveness of controls, improve them when needed, and so enable continuous improvement. ESMA’s Data Quality Log is an excellent example of such a feedback mechanism.
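An issue log of this kind can be as simple as one structured record per reported problem with an explicit follow-up status. Below is a minimal sketch; the fields and status values are assumptions for illustration, not ESMA’s actual Data Quality Log format:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class DataQualityIssue:
        """One entry in a data quality issue log fed by end users."""
        issue_id: str
        reported_by: str          # e.g. a data user or an NCA
        description: str
        affected_dataset: str     # e.g. "EMIR trade state report"
        date_reported: date
        status: str = "open"      # open -> in_remediation -> resolved
        remediation_notes: list[str] = field(default_factory=list)

    def open_issues(log: list[DataQualityIssue]) -> list[DataQualityIssue]:
        """Issues still awaiting follow-up, e.g. for a periodic review of control effectiveness."""
        return [i for i in log if i.status != "resolved"]

    # Example usage
    log = [DataQualityIssue("DQ-001", "risk analyst", "Stale valuations for one counterparty",
                            "EMIR trade state report", date(2021, 2, 10))]
    print([i.issue_id for i in open_issues(log)])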


Takeaways

Although ESMA’s Data Quality Report describes a specific use case, it touches upon the critical components of data quality management for a wide range of data types, such as market data, reference data, corporate actions, ESG data, fundamentals, and holdings, and for a wide range of use cases, such as risk management, portfolio management, trade support, and hedging.

Over time, many organizations have organically improved their processes to track data quality. However, the ability to measure, monitor, and improve it is often less developed and lacks clear frameworks, resulting in suboptimal performance.

Executing a data and risk discovery process (figure 1) and having a data quality framework (figure 2), supported by data management systems that provide the needed controls, processing flows, and analytics (a minimal sketch of such a control flow follows the list below), will help:

  • Improve data quality and service levels
  • Reduce operational and data costs
  • Reduce regulatory and vendor compliance risks
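As an illustration of what such controls and processing flows can look like in practice, here is a minimal sketch of a pipeline that runs a set of checks over incoming records and turns the exceptions into simple metrics; the checks and record fields are invented for the example:

    from typing import Callable

    # A "control" takes one record and returns a list of issue descriptions (empty = clean).
    Control = Callable[[dict], list[str]]

    def run_controls(records: list[dict], controls: list[Control]) -> dict:
        """Run each record through every control and summarize the results as simple metrics."""
        exceptions = []
        for record in records:
            for control in controls:
                for issue in control(record):
                    exceptions.append({"record_id": record.get("id"), "issue": issue})
        return {
            "records_processed": len(records),
            "records_with_exceptions": len({e["record_id"] for e in exceptions}),
            "exceptions": exceptions,
        }

    # Two toy controls; a real framework would cover completeness, accuracy, timeliness, etc.
    def completeness_check(record: dict) -> list[str]:
        return [f"missing field: {f}" for f in ("id", "price") if record.get(f) is None]

    def plausibility_check(record: dict) -> list[str]:
        price = record.get("price")
        return ["non-positive price"] if price is not None and price <= 0 else []

    # Example usage: prints "3 2" (three records processed, two with exceptions).
    metrics = run_controls(
        [{"id": "A", "price": 101.5}, {"id": "B", "price": -3.0}, {"id": "C", "price": None}],
        [completeness_check, plausibility_check],
    )
    print(metrics["records_processed"], metrics["records_with_exceptions"])

In a real setup, the exception list would feed the review and issue-log processes described above, and the counts would roll up into the KPIs discussed earlier.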

In addition, robust data management practices save significant time for all downstream use cases and can relieve downstream users of mundane tasks, especially given that a considerable share of time (some say up to 80%) is spent preparing data for calculations and analytics. Automating data quality management and, where possible, remediation allows business users to focus on genuine anomalies.

The goal should be a feedback loop in which metrics on data flows are translated into KPIs on quality and turnaround time, which can be analyzed over time and lead to corrective actions that improve the operation.

Good data management can increase competitive advantage, especially when dealing with unstructured or still-maturing data categories such as ESG. It lowers the cost of integrating new data sets into business processes. In addition, it enables new and faster insights from a variety of data sources and types that are normalized, readily linked or aggregated, and available in consumable formats for further use.