Too many corporates expect too much, too soon from their AI investments. Patience, strategic thinking and cultural change are crucial.

“The question is not what can AI do for me? The question is what’s our strategy and how we bring AI into it?” says Aarti Samani, CEO of AI consultancy Shreem Growth Partner.

Samani says too many companies are investing in AI simply because of a fear of missing out — without a clear idea of how they would implement the technology.

“2023 was the year people were still thinking this too shall pass. But it didn’t pass,” she says. “2024 has been the year where…a panic has set in: if we don’t do this, what are we missing out on?”

The main problem is that companies are going in without intention or a clear strategy. Too often they are throwing spaghetti against the wall – piloting, testing and experimenting in the hope that something will stick – as opposed to starting with a clear roadmap.

“I get a lot of companies or heads of AI coming to us and saying, okay, we’ve tested Copilot, what should be the next step?” says Claire Dardignac, head of Europe for innovation consultancy The Bakery.  

“Perhaps you shouldn’t even have started with testing Copilot. What would be great to do before you even launch into that is thinking about maybe identifying where your organisation should play.”

Dardignac calls these “technology battlefields”: the key areas of opportunity where, given a business’s unique profile and challenges, investment will yield returns and impact. Some tinkering is fine, but defining these areas before committing money and manpower will save you both. Where is the startup ecosystem playing? Where are investors putting their money? Where is your infrastructure best suited to perform?

Excessive expectations

What boards and C-suites need is more patience, but many are displaying anything but. Expectations of short-term gains are massive, regardless of how well equipped companies are to meet them.

A 2024 Deloitte survey of high-level leadership found that three-quarters of respondents expect generative AI to drive substantial organisational transformation within three years. However, only 22% believe their organisations are highly prepared to address the talent-related issues of bringing AI on board.

Another 2024 study by Upwork shows that 85% of company leaders are either encouraging the use of AI or making it mandatory this year, while 96% of C-suite executives say they expect AI to increase their companies’ productivity levels.

Company leaders see successful case studies elsewhere and expect similarly quick results, underestimating the time it takes and overestimating what they can achieve in the short term.

“You’re going to invest a lot in these technologies upfront, but the gains that you’re going to get are going to come after 12 months, so you’re going to have to go through probably three or four quarters before you’re able to see the gains,” says Dardignac. She describes a hockey-stick curve: short-term productivity will actually dip as the workforce gets used to the new tools, before the gains arrive.

Samani has seen the same thing. “This is where I have people saying, okay, here are 20 use cases, can you help us do a POC by the end of December so we can demonstrate and create a business case of quantifiable productivity gains? And I’m like, I’m sorry, I’ll burst your bubble, but you won’t have productivity gains immediately.”

They might be able to get a low-level automation process over the line in that time, she says, but nothing of particularly high quality.

According to Upwork’s survey, 45% of workers say they have no idea how to achieve the AI-powered productivity gains their bosses want, and 77% say that using AI tools has decreased their productivity and added to their workload. A fifth say they are being asked to do more work in less time, with hours eaten up by tasks such as moderating AI-generated content and learning new tools.

Much of the focus right now is on how to harness AI for productivity gains – improving known quantities and the existing processes of a company to drive down costs – as opposed to finding new revenue streams or innovating on the business model.

This is partly because it is easier to improve existing systems than to create new revenue-generating ones, but also because, in the current macro climate, companies are less keen to disrupt themselves.

Proven communicators

Expectations will need to be tempered whether the AI is developed internally or brought in from outside. This is where CVCs – at least for externally procured technology – have experience.

“CVCs are actually really, really well placed in an organisation to handle some of that communication,” says Dardignac.

A big part of a CVC’s job is already showing the C-suite and the board what they can expect from venture activity – that returns will probably take longer than they’d like, that strategic benefits can translate into financial gains elsewhere – and framing things in terms of longer-term benefits over nearer-term costs.

CVCs should benchmark their prospects against other success stories, explains Dardignac: this is how long it took company X to start seeing results from its investment, and we’re about two-thirds of the way there.

“I think what leadership wants to know is that you’re in control, and it’s really hard for an innovation leader or a CVC leader to show that control because, in effect, there is no control,” she says. “Sometimes you’re investing in something that is extremely, extremely uncertain – the only thing that you can say is that you’re doing it in a way that is aligned with best practices and with the organisation’s goals, and that in all likelihood this is where we should get to by the end of this journey.”

Culture and process change

The biggest change AI makes to a company might not be to revenue or productivity but to culture.

There will need to be a rethink of how employees are evaluated. If, for example, in a marketing team, the content sub-team uses a lot of generative AI daily while the strategy team has to do a lot of first-principles thinking, how should KPIs be structured? How would you assess the productivity of two managers when one of their teams uses a lot of AI and the other does not?

To integrate AI effectively, leadership will also have to find ways to communicate how AI will affect people’s jobs, and what the roadmap is for upskilling and transitioning them into new roles.

“Human instinct is, to keep yourself secure and to extend the longevity of your employability in an organisation, you want to hoard information, you want to hoard knowledge, and you want to make yourself indispensable,” says Samani.

“A side effect of that is that you are maybe subconsciously not helping the organisation to implement AI because you’re worried.”

Even for companies that don’t plan to integrate AI themselves, the security risks it poses from the outside will require a culture change.

AI-enabled fraud and deepfakes are growing in sophistication – they tend to be five years ahead of security solutions – and are especially dangerous for more hierarchical organisations, where attackers can exploit workers’ reluctance to second-guess requests from higher up.

“The wider the gap in the power dynamic between executive and employee, the more open you are to being targeted,” says Samani.

There is not yet much of a technological counter to many of the external security risks AI poses. Humans will remain the first line of defence, and training the workforce in security best practices will be crucial.

Data and governance

Underpinning any AI system, of course, is data. Keeping this in silos is incompatible with maximising AI’s potential. To be of any use in training algorithms, data needs to be centralised, structured and cleaned.

“You can’t anymore say I’m a CMO or a chief product officer and I have my own database and I’m going to guard it, or I’m a chief revenue officer, so I’ll have everything in my CRM system,” says Samani.

“That foundational layer of data and infrastructure is very, very important, otherwise everything else is just a house of cards that will fall apart.”

The entire leadership needs to buy into a common data strategy. The result, ideally, will be the ability to glean insights unavailable through independently kept databases. It’s also becoming more important to have someone responsible and accountable for bringing together a company’s AI activity, which is why we’re seeing a lot more job titles like “head of AI” or “chief AI officer”.

It’s not just the data: someone also needs to own the AI-related governance, risk, ethics, budgets and roadmap design, which can’t all be handled by committee lest things fall through the cracks.

Fernando Moncada Rivera

Fernando Moncada Rivera is a reporter at Global Corporate Venturing and also host of the CVC Unplugged podcast.