Alveo Blog Data Governance

7 Data Sins Series: Achieving and Keeping Data Quality – From a One-off Exercise to a Continuous Process

Moderator: Alexis Bisagni

Speaker: Boyke Baboelal

As a continuation of our 7 Data Sins series, Boyke Baboelal, Strategic Solutions Director in the Americas, speaks with Alexis Bisagni about data quality and whether it is a continuous fight against uncertainty. That element of surprise in data often stems from poor data quality management and from not keeping track of metadata such as changes, permissions, and quality checks.

Q: Leaning on your experience in financial data management – what have you observed with respect to data quality efforts? (Timemarker 2:00)

A: What I have observed is that there is a wide range of data quality maturity across organizations. Some run regular data cleansing activities against their databases (which require manual effort and planning), some have triggers that check data as it is stored (but such systems are difficult to maintain and scale), and others have an Enterprise Data Management (EDM) system that manages the entire data flow – but even that is often still suboptimal.

Why is that? Data management teams have been downscaled over the last decade, while data volumes, types, and complexity have increased. Operations have a strong day-to-day focus with little insight into where the structural issues or bottlenecks are, so work ends up being done reactively and in less-than-optimal ways. In addition, organizations are under more scrutiny from regulators, who require more controls, and from data vendors, who want to make sure entitlements are adhered to. All of this makes data management more complex, and existing EDM solutions are not able to meet these new requirements in a cost-effective way.
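To make the "check data when it is stored" pattern concrete, here is a minimal sketch in Python of a validation hook that runs before a price record is persisted. The field names, the currency list, and the 20% tolerance are illustrative assumptions, not the rules of any specific EDM product.

```python
from datetime import date
from typing import Optional

# Hypothetical "check on store" validation hook; field names, the currency list,
# and the 20% tolerance are illustrative assumptions only.
def validate_price_record(record: dict, previous_close: Optional[float] = None) -> list:
    errors = []
    if not record.get("price") or record["price"] <= 0:
        errors.append("price must be a positive number")
    if record.get("currency") not in {"USD", "EUR", "GBP", "JPY"}:
        errors.append("unexpected currency: %s" % record.get("currency"))
    if record.get("as_of_date", date.today()) > date.today():
        errors.append("as_of_date cannot be in the future")
    if previous_close and record.get("price"):
        move = abs(record["price"] - previous_close) / previous_close
        if move > 0.20:  # flag a >20% day-over-day move for review
            errors.append("price moved %.0f%% vs previous close; review required" % (move * 100))
    return errors

def store_record(record: dict, previous_close: Optional[float] = None) -> None:
    issues = validate_price_record(record, previous_close)
    if issues:
        # In practice this would route to an exception queue rather than just raise.
        raise ValueError("; ".join(issues))
    # ... persist the record to the database here ...
```

Rules like these are easy to write but hard to maintain and scale as volumes and sources grow, which is exactly the limitation described above.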

Q: In your opinion, what is needed to make existing EDM solutions capable of meeting new requirements in a cost-effective way? (Timemarker 3:50)

A: Data management implementations and EDM platforms focus on automating the entire data flow end to end. However, simply processing data is not enough to ensure operational efficiency, transparency, and compliance. The critical component is additional information that can be used to understand what is going well and what can be improved. Metadata, operational metrics, usage statistics, audit trails, and data lineage information are key to taking data management to the next level.
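As an illustration of the kind of metadata being described, the sketch below shows how an audit-trail and lineage entry might be recorded alongside each data update. The structure is a simplified assumption for illustration, not a description of any particular platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, List

# Simplified, assumed structure for an audit-trail / lineage entry.
@dataclass
class AuditEntry:
    entity_id: str          # e.g. an instrument identifier
    attribute: str          # which field was changed
    old_value: Any
    new_value: Any
    source: str             # upstream feed or user that supplied the value
    checks_passed: List[str]  # names of the quality checks applied
    changed_by: str
    changed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: List[AuditEntry] = []

def apply_update(entity: dict, attribute: str, new_value: Any,
                 source: str, user: str, checks_passed: List[str]) -> None:
    """Apply a field-level update and record who changed what, when, and from where."""
    audit_log.append(AuditEntry(
        entity_id=entity["id"],
        attribute=attribute,
        old_value=entity.get(attribute),
        new_value=new_value,
        source=source,
        checks_passed=checks_passed,
        changed_by=user,
    ))
    entity[attribute] = new_value
```

Captured consistently, entries like these are what later make usage statistics, lineage reporting, and operational analytics possible.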

Q: Where does an organization even start to get a grip on this? (Timemarker 5:05)

A: The first thing to do is to understand what is needed. A lot of organizations start with an inventory of what they currently have plus the requirements coming from the driver for change, for example a new regulatory requirement. The drawback of this approach is that it leaves you less adaptable to future requirements. So how can we do better? First, it is important to have a data quality framework, including Data Governance. Starting with a data quality framework forces you to look beyond your current needs and to view the requirements from different angles. A framework also puts you in a mindset of continuous improvement. A proper data management solution should support a data quality framework and collect all the metadata.
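One way to picture a data quality framework in code is a small registry of named rules grouped by quality dimension (completeness, validity, consistency, and so on), applied uniformly to incoming data. The dimensions and rules below are generic assumptions for illustration, not a prescribed standard.

```python
from typing import Callable, Dict, List, Tuple

# A tiny, assumed rule registry grouped by data quality dimension.
RuleFn = Callable[[dict], bool]  # returns True when the record passes

quality_rules: Dict[str, List[Tuple[str, RuleFn]]] = {
    "completeness": [("isin_present", lambda r: bool(r.get("isin")))],
    "validity":     [("positive_price", lambda r: r.get("price", 0) > 0)],
    "consistency":  [("currency_matches_listing",
                      lambda r: r.get("currency") == r.get("listing_currency"))],
}

def assess(record: dict) -> Dict[str, List[str]]:
    """Run every registered rule and report failures per quality dimension."""
    failures = {}
    for dimension, rules in quality_rules.items():
        failed = [name for name, rule in rules if not rule(record)]
        if failed:
            failures[dimension] = failed
    return failures
```

The value of the framework is less in any individual rule and more in having a single place where rules are defined, extended, and reported on as requirements evolve.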

Q: Do you think that buy vs build is a relevant question? (Timemarker 6:26)

A: No, in my opinion, this is not a relevant question. The reason is that data management tends to be over-simplified due to a lack of understanding of data quality in a larger context. I agree that if all you need is a small number of fields for your securities from a specific vendor every day, that would be easy to implement. But if you take a moment to think the concept through, building a data management system in-house even for today's needs requires significant effort and detailed knowledge. Even with 20+ years of experience as a financial engineer in the risk and data management space, I get anxious when I think of building such a system from scratch. Building in-house carries large project risks, and the sad thing is that the resulting system will most likely not be future-proof or benefit from the experience of peers in the industry. An adaptable off-the-shelf system reduces a lot of that risk.

Q: When you have operational, usage, and lineage data, what comes next? (Timemarker 8:42)

A: This is when the magic starts. By that I mean it opens data management up to the world of intelligence, analytics, and further automation. Having this information gives you more insight into your operations – what works well and what doesn't. That intelligence, in turn, enables you to comply with regulatory requirements, vendor agreements, and internal control frameworks. With all this insight, your operations and data quality get better day by day, resulting in continuous improvement.
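As a toy illustration of turning operational metadata into intelligence, the sketch below aggregates a hypothetical exception log into a per-feed exception rate – the kind of metric that can show, day by day, whether data quality is actually improving. The feed names and figures are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical exception log entries: (date, feed, records_processed, exceptions_raised)
exception_log = [
    ("2024-05-01", "vendor_a_prices", 120_000, 340),
    ("2024-05-01", "vendor_b_terms",   45_000,  90),
    ("2024-05-02", "vendor_a_prices", 118_500, 210),
    ("2024-05-02", "vendor_b_terms",   44_800, 150),
]

def exception_rates(log):
    """Aggregate exceptions per feed and day into a simple quality metric."""
    totals = defaultdict(lambda: [0, 0])  # [records, exceptions]
    for day, feed, records, exceptions in log:
        totals[(day, feed)][0] += records
        totals[(day, feed)][1] += exceptions
    return {key: exc / rec for key, (rec, exc) in totals.items()}

for (day, feed), rate in sorted(exception_rates(exception_log).items()):
    print(f"{day}  {feed:<16}  exception rate {rate:.2%}")
```

Trending a metric like this over time is one simple way the operational, usage, and lineage data feeds continuous improvement.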

Q: Continuous improvement sounds nice, but what about the bottom line? (Timemarker 10:18)

A: Increased operational efficiency, improved data quality, reduced data risks, and compliance with regulatory requirements, vendor agreements, internal control frameworks, and SLAs will, in the end, reduce the overall total cost of ownership (TCO).

To summarize: for the financial services industry in the current environment, making the most of their data assets is not a nice-to-have – it is a critical must-have. Firms not only need to manage increasing volumes and a growing diversity of data sources, they also need to keep close track of their metadata, i.e. the different quality aspects that help determine whether data is fit for purpose, optimize sourcing and validation processes and, in general, improve operational efficiency.