The stringent, often bureaucratic data governance rules of regulated industries have long been seen as obstacles to innovation. However, in the enterprise, responsible stewardship isn’t negotiable. It is here that CDAOs are emerging as leaders who can accelerate innovation, thanks to the very controls they’ve often been blamed for enforcing.
Finance, healthcare, and other regulated industries know well the benefits AI creates for their organizations. These industries were using AI and machine learning long before the generative AI revolution.
However, when people talk about AI in 2026, most mean generative AI (GenAI). GenAI has shown its potential to transform human-machine interaction: making it easier to find and summarize information, speeding up creative processes, sharpening the understanding of customer satisfaction and employee engagement, and enabling more accurate supply chain risk management. Without a doubt, GenAI excels at processing large amounts of information and producing human-like output.
Despite its massive penetration with consumers and a handful of enterprise use cases, GenAI remains stuck in isolated experiments and proofs of concept (POCs). The friction comes not from a lack of enthusiasm for the technology, but from a lack of AI-ready data and of the guardrails needed to enforce the regulations these organizations are subject to.
The Enterprise AI Bottleneck
Based on our experience in 2025, these are the factors holding GenAI back in the enterprise:
- Data Fragmentation: Established organizations have accumulated tools and systems over the years, leaving data housed across a proliferation of platforms. To make matters worse, the data isn’t only siloed but also stored in legacy formats. Without a practical way to aggregate structured and unstructured data in a centralized place, it is impossible to produce insights based on the full available context.
- Data Quality: Attend any CDAO conference and the most repeated phrase you’ll hear is garbage in, garbage out. Some institutions have amassed decades, or even a century, of data. This data is not only trapped in legacy systems but also stored in formats that aren’t conducive to AI. Before institutions can feed this data to AI, they must extract it, catalogue it, tag it with proper metadata, normalize naming conventions, and format it in ways AI can consume. What’s exciting is that AI can speed up these data quality efforts themselves; for example, cataloguing can be accelerated using classifiers trained on existing data dictionaries (a minimal sketch appears after this list).
- Data Privacy: Data can contain personally identifiable information (PII) or be subject to regulated handling procedures such as those mandated by GDPR or CCPA. Without mechanisms that let data stewards govern how data is used, both before and after AI processing, organizations simply can’t move forward with some of these promising AI projects (the masking sketch after this list illustrates the idea).
- Information Security: Who sees which information matters. Even the most carefully safeguarded data in a database can suddenly become visible to the wrong people in a dashboard. Without a mechanism that enforces access controls in every tool touching the data, organizations risk exposing information to unauthorized users (see the policy-check sketch after this list).
- Auditability: The ability of organizations to explain how AI produces the outcomes it produces is non-negotiable in regulated industries. This kind of transparency is difficult to achieve when outcomes are the result of compounded AI processes, each producing outputs that lack explainability. Without a systematic way to trace data lineage and to inspect the inputs, outputs, and the rationales driving them, organizations not only fail to meet governance requirements; internal adoption stalls because no one trusts the AI-generated outcomes (see the audit-logging sketch after this list).
- Data and AI Literacy: For those using AI daily, it is hard to imagine working without it. Many workers, however, are still lagging behind. Organizations must ensure that everyone benefits from the technology at least at some level, and that those who use it follow best practices to maximize benefits and reduce risks. The payoff isn’t only higher productivity and employee morale, but also the chance to harness these workers’ subject matter expertise to improve future AI-driven outcomes.
- Operationalization: While POCs can demonstrate value, they typically operate on a carefully controlled set of data, which means the mechanisms to operationalize the process are left to be figured out later. Without considering the ETL pipelines and APIs necessary to move data out of source systems and to disseminate insights to the business, AI will remain experimental.
- Strategic Alignment: AI advocates are mostly found on technology teams, yet AI’s impact lands on the business teams it benefits. Starting AI projects from a technology capability rather than from a clear business case leads to validated technical capabilities, not real value creation. Organizations must start asking which business processes generate the most low-value work and target those for AI-driven automation.
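To make the cataloguing point concrete, here is a minimal sketch of a classifier trained on an existing data dictionary that suggests categories for undocumented columns. The column names, descriptions, and category labels are hypothetical, and any suggestion would still go to a human steward for review.

```python
# Minimal sketch: train on the curated data dictionary, then suggest
# catalogue categories for legacy columns that lack metadata.
# All column names and category labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Entries already documented in the data dictionary: name + description.
documented = [
    "cust_dob customer date of birth",
    "ssn_last4 last four digits of social security number",
    "acct_open_dt date the account was opened",
    "txn_amt transaction amount in US dollars",
]
categories = ["PII", "PII", "account", "financial"]

# Character n-grams cope well with terse, abbreviated column names.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(documented, categories)

# Undocumented columns pulled from a legacy system.
undocumented = ["client_birth_date", "wire_amt_usd"]
for col, suggested in zip(undocumented, model.predict(undocumented)):
    print(f"{col}: suggested category = {suggested}")  # steward reviews
```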
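On the privacy point, the sketch below shows one way a steward-controlled masking step can sit between raw data and a GenAI model. The regex patterns are deliberately simplistic stand-ins for a proper PII detection service, and the prompt is invented.

```python
# Minimal sketch: mask obvious PII before a prompt reaches a GenAI model,
# keeping a record of what was masked for the data steward.
# The patterns here are simplistic placeholders for a real detection service.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text):
    """Replace detected PII with typed placeholders; return text + findings."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        for value in pattern.findall(text):
            findings.append((label, value))
            text = text.replace(value, f"[{label}]")
    return text, findings

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
safe_prompt, audit = mask_pii(prompt)
print(safe_prompt)  # the model only ever sees the masked version
print(audit)        # retained under the steward's access controls
```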
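For the access control point, here is a sketch of the idea that every consuming tool calls one shared policy check before showing data. The roles and column allowlists are hypothetical; a real deployment would delegate this to the platform’s policy engine rather than hand-rolled code.

```python
# Minimal sketch: one shared policy check that every tool (dashboard, API,
# AI retrieval layer) applies before returning data to a user.
# The roles and column allowlists are hypothetical.
ROLE_POLICIES = {
    "analyst": {"region", "txn_amt", "product"},
    "support": {"region", "product"},
}

def filter_row(row, role):
    """Return only the columns the given role is authorized to see."""
    allowed = ROLE_POLICIES.get(role, set())
    return {col: val for col, val in row.items() if col in allowed}

record = {"region": "EMEA", "txn_amt": 1250.0, "product": "loan",
          "ssn": "123-45-6789"}
print(filter_row(record, "support"))  # {'region': 'EMEA', 'product': 'loan'}
print(filter_row(record, "analyst"))  # txn_amt visible; ssn never is
```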
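Finally, on auditability, the sketch below wraps a single AI step so its inputs, outputs, and references to upstream data land in an append-only log. The call_model argument and the source IDs are placeholders for whatever model API and record identifiers an organization actually uses.

```python
# Minimal sketch: record every AI step's inputs, outputs, and lineage to
# upstream data in an append-only log that auditors can inspect.
# call_model and the source IDs are placeholders.
import json
import time
import uuid

def audited_step(step_name, source_ids, prompt, call_model):
    """Run one AI step and append a lineage record to the audit log."""
    output = call_model(prompt)
    record = {
        "trace_id": str(uuid.uuid4()),
        "step": step_name,
        "timestamp": time.time(),
        "source_data_ids": source_ids,  # lineage back to upstream records
        "input": prompt,
        "output": output,
    }
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return output

# Stand-in model so the sketch runs end to end.
summary = audited_step("summarize_complaint", ["crm:4711"],
                       "Summarize complaint 4711.", lambda p: "stub summary")
```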
The Way Forward
Up until now, governance has been seen as an obstacle to innovation; it is time to see it as an innovation catalyst, because it provides the building blocks that make innovation possible in regulated industries. Governments and regulated industries in particular make decisions that affect people’s livelihoods in important ways; they must not do away with sound governance and proper human-in-the-loop mechanisms. In these industries, organizations will be better off viewing CDAOs and their teams as business enablers who can accelerate the adoption of responsible AI at scale.