How to set your organization up to successfully embrace the power of AI
This is the second in a two-part series on how organizations can successfully embrace AI. In the first piece, we explored why many organizations remain stuck in fragmented AI activity without achieving scale. Here, we turn to the more important question: what does it take to structure an organization to capture AI value in practice?
Once AI is understood as an enterprise transformation rather than a technology rollout, the structural question moves to the center of the discussion. If AI is to be delivered safely and repeatedly across the enterprise, what structure is actually needed? Where should prioritization sit? Who owns trade-offs across business value, reuse, speed, control and risk? Who decides which initiatives move forward, which are exceptions, and which need to be stopped? Who is accountable when an AI solution moves into production and something goes wrong?
These are not secondary governance questions. They sit at the heart of whether AI remains a fragmented set of experiments or becomes a repeatable organizational capability. The structure an organization chooses is therefore not a cosmetic choice. It is one of the main determinants of whether AI becomes embedded into the way the enterprise operates.
That is also why the debate about whether to appoint a Chief Data and AI Officer (CDAIO), create a time-bound AI executive role, or establish an AI Control Tower matters. But it matters in context. Those are not the starting point of the discussion. They are possible answers to the broader operating-model question. The danger is jumping too quickly to titles. A new executive role can sound like decisive action while leaving the underlying operating model untouched. But the opposite mistake is just as risky: assuming that AI can simply be absorbed into existing structures without changing how the enterprise coordinates, governs and delivers it.
Why a standalone AI leader is not always the best answer
For some organizations, appointing a senior executive such as a CDAIO or Chief AI Officer (CAIO) may be the right move. That is most likely where the enterprise genuinely lacks a single point of authority across strategy, prioritization, governance, adoption and production accountability. In those circumstances, a senior role can create focus, remove ambiguity and force alignment across functions that do not naturally move together.
But it is important to be precise about what that role is solving. A technology-led AI leader, sitting primarily as an extension of the tech stack, does not automatically resolve the harder business questions around workflow redesign, process ownership, role evolution, value realization and frontline adoption. If the problem is enterprise transformation, then even a senior AI leader will struggle unless the remit is explicitly broader than technology.
The wider market evidence points in the same direction. Gartner reported in 2025 that 70% of chief data and analytics officers have primary responsibility for building their organization’s AI strategy and operating model. That suggests organizations do want identifiable leadership for AI, but many are solving the problem by evolving existing roles rather than defaulting to a standalone Chief AI Officer.
That is why a standalone AI leader is not always the best answer. The risk is not only duplication with existing technology or data roles, although that is real. The bigger risk is that the organization treats AI as if it belongs to a specialist office rather than to the enterprise. Once that happens, business leaders can distance themselves from the redesign challenge, operational teams can assume the AI team owns delivery, and technology becomes the default lens through which the transformation is run. That is precisely the wrong outcome for a shift that is fundamentally about business model, workflow, control and human performance.
The case for an AI Control Tower
This is where the notion of an AI Control Tower becomes particularly useful. Properly understood, it is not just another forum, a center of excellence with no teeth, or a rebranding of existing committees. It is an orchestration mechanism for enterprise AI. It exists to provide the connective tissue the organization is otherwise missing: one portfolio view, one prioritization logic, one place for exception decisions, one reusable blueprint library and clear accountability for what is deployed into production.
In many institutions, the core ingredients are already present. Data and technology functions own platforms, infrastructure and engineering reliability. Business and transformation functions own adoption, process redesign and value realization. What is often missing is a single spine that can sequence priorities, arbitrate trade-offs, manage exceptions and assign production accountability across the whole system. The Control Tower is designed to provide exactly that.
But it only works if it carries real authority. Without that, an AI Control Tower quickly becomes a talk shop: a place where standards are discussed, priorities are debated and risks are noted, but where no one is compelled to follow the decisions made. If it is to work, it must have decision rights that are visible and binding. It must be able to shape sequencing, force initiatives into a common portfolio, adjudicate exceptions and assign accountability across the organization. In effect, it needs to be an operating mechanism, not an advisory body. Authority is what prevents coherence from collapsing back into fragmentation.
The harder question: can the core move fast enough?
Most leadership teams will understandably want to capture AI value from within the core business, and in most cases that should remain the ambition. The core holds the customers, the operational assets, the regulatory infrastructure and the institutional advantages that matter. The first objective should therefore be to redesign the existing organization so it can move with enough speed, discipline and clarity to capture the value at stake.
But leadership teams should also be honest about how difficult this transformation is. In some organizations, the real barrier is not strategy or technology, but the weight of the incumbent model itself: the incentives, governance routines, approval layers, cultural habits and legacy ways of working that make reinvention too slow. The point is not that every enterprise should create a separate vehicle or a newco. Most will not, and in many cases should not. The point is that the mere existence of that option should concentrate the mind. If the core cannot move with enough speed and conviction, the market will not wait for it to catch up.
This matters because the value from AI is unlikely to be evenly distributed. BCG’s 2025 research found that only 5% of companies are achieving substantial AI value at scale, and that this small group materially outperforms laggards on revenue growth, shareholder returns and EBIT margin. The implication is clear: organizations that combine strategic clarity with speed, scale and operating-model redesign are likely to capture a disproportionate share of the value, while those that hesitate may find the market concentrating around faster-moving competitors.
No structural option works if AI is treated as a tech problem
Whether an organization appoints a CDAIO, establishes a time-bound AI executive mandate, or creates a Control Tower, the same underlying truth remains: this is not simply a technology problem.
A CDAIO will fail if the role is treated as an extension of the tech function rather than a lever for business redesign. A temporary AI executive will fail if the mandate is limited to experimentation and does not reshape how value is governed and delivered. A Control Tower will fail if it lacks authority or if it is disconnected from the business processes it is meant to influence. The success condition in all three cases is the same: the structure must be designed to change how the organization operates, not just how it experiments with AI.
That is the point many organizations still underestimate. AI is not merely about new tooling. It is about redesigning the conditions under which work gets done. It changes how decisions are made, how expertise is distributed, how controls are embedded and how humans and machines collaborate. If the organization treats it as a specialist technology agenda, it will almost certainly under-deliver.
Regulation matters, but it does not change the core question
The regulatory environment is relevant, especially in financial services, but it should remain in proportion. The important point is not that one specific rulebook determines the correct structural model. It is that AI is increasingly being drawn into an existing web of obligations around privacy, fairness, model risk, conduct, governance and operational resilience. That raises the importance of disciplined operating-model choices, but it does not turn the question into a purely compliance-driven exercise. Regulation reinforces the need for clarity, control and accountability. It does not substitute for them.
The questions leadership teams should be asking
The organizations that succeed with AI are rarely the ones with the most pilots, the loudest strategy statements, or the most impressive titles. They are the ones willing to ask harder questions about how the enterprise is set up to deliver change of this magnitude.
1. Are we driving toward specific business outcomes, or simply generating AI activity?
A long list of use cases is not a strategy. A clear view of where AI should change economics, customer experience, control, or productivity is.
2. Are we deploying AI into existing workflows, or genuinely redesigning around what AI makes possible?
This is often the line between incremental improvement and structural advantage. If workflows, roles and handoffs remain untouched, most of the value will remain inaccessible.
3. Do we have the authority structure required to move beyond proof-of-concept purgatory?
Who decides what gets prioritized? Who has the right to stop duplicative efforts? Who owns exceptions? Who is accountable in production? If the answers are vague, fragmented, or dependent on informal influence, scale will be elusive.
4. Are our most experienced people genuinely embracing AI-augmented ways of working, or merely complying while protecting the old model?
Human adoption is not a side issue in AI transformation. It is one of the central determinants of whether value is ever realized.
5. Is our governance built for the capabilities we are actually deploying, rather than for a previous generation of technology?
As AI systems become more autonomous, embedded and business-critical, the operating model around oversight, controls and accountability has to evolve with them.
6. Is our investment and operating plan calibrated to where AI capability is heading, not just where it is today?
AI is advancing too quickly for static, multi-year planning assumptions to hold. The organizations that win will be the ones able to absorb change without organizational drift.
7. Can we capture this opportunity at the speed required from within the core, or are our current structures preventing that?
For some organizations, the greatest barrier to AI transformation is not the technology but the incumbent model itself. Leadership teams need to ask, honestly, whether the core can move fast enough to capture the value at stake.
The real choice
The real choice facing leadership teams is not whether to do AI. That question has already been answered. The real choice is whether the organization is prepared to put in place the operating model, authority and leadership discipline required to redesign work around what AI makes possible.
That is what separates organizations using AI from organizations being transformed by it. The winners will not be those with the most pilots or the most elegant organizational charts. They will be the ones that recognize AI for what it is: not a software upgrade, but a business transformation challenge that demands a structural response.
The organizations that succeed will be the ones that make the operating-model choices early, assign real authority, move beyond AI theater and build the conditions for AI to create measurable value in production. In some cases, that will mean redesigning the core. In others, it may mean creating space outside it. But in all cases, the winners will be those that recognize that the value at stake is large, unevenly distributed and unlikely to wait for slow-moving institutions to catch up.



