Technology Executive Council

A.I. is now the biggest spend for nearly 50% of top tech executives across the economy: CNBC survey

Key Points
  • 47% of top technology officers across the economy, including chief information officers, chief information security officers and chief technology officers, say that artificial intelligence is their No. 1 budget item over the next year.
  • That's according to the latest bi-annual survey of top tech executives across sectors conducted among members of the CNBC Technology Executive Council.
  • Just under half of these executives also say that breakthroughs such as generative AI tools like ChatGPT will create more jobs than they destroy; the rest are split between expecting net job losses and saying it is too soon to know.

In a signal of just how quickly and widely the artificial intelligence boom is spreading, nearly half of the companies (47%) surveyed by CNBC say that AI is their top priority for tech spending over the next year, more than double the 21% who cite the second-biggest spending area, cloud computing.

That's according to the latest CNBC Technology Executive Council bi-annual survey, which includes responses from top technology executives at companies beyond the tech sector itself: chief information officers, chief technology officers and chief information security officers in areas of the economy such as marketing, pharmaceuticals, telecom and utilities, as well as at public sector entities.

Overall, nearly two-thirds say their AI investments are accelerating, and AI is a bigger piece of a smaller overall pie: a little over half of tech executives (53%) say the rise in interest rates has caused them to slow overall spending. Still, as AI has boomed in 2023 and tech stocks have led the market higher, there has been a sharp drop since TEC members were last surveyed, in the second half of 2022, in the share of executives who say cost-cutting pressure driven by recession fears is their top tech challenge over the next year, falling from over 30% then to 16% now. Over the same period, there was a big rise, from 9% to 26%, in respondents who say their biggest tech challenge is meeting customer demand for tech-driven products and solutions.

The CNBC survey was conducted from May 15 to June 20, a period during which Nvidia passed the $1 trillion market cap mark for the first time after forecasting soaring sales of $11 billion for the second quarter of its fiscal 2024, citing demand for the graphics processors that power artificial intelligence applications at companies including Google, Microsoft and ChatGPT maker OpenAI.

"It's hard to think of an area that this couldn't help," said Diogo Rau, Eli Lilly chief information and digital officer. 

He said Lilly is already using generative AI to write patient safety reports and clinical narratives, and he expects it ultimately to play a role in drug discovery. "What I'm excited about is what machines can come up with that no human might have imagined, such as new molecules for medicines," Rau said.

One of the most anticipated uses for generative AI is in customer relationship management, and that is happening at more companies. Eddie Fox, chief technology officer at telecom company MetTel, said it has built AI functionality into its care center to read incoming client emails, interpret the intent, and then take action. He said this is making employees significantly more productive and letting them serve customers more quickly. "It's had a tremendous impact on incident related tasks and gave our team about 380 hours extra (to really concentrate on care) per month," Fox said.
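
For a sense of what that kind of email triage can look like, here is a minimal Python sketch of intent classification and routing. The intent labels, keywords and queue names are illustrative assumptions for this example, not details of MetTel's actual system, which Fox did not describe at a technical level.

    # Minimal sketch of an email-intent triage step.
    # Intent labels, keywords and queue names are invented for illustration.
    from dataclasses import dataclass

    # Hypothetical intent categories and the keywords that signal them.
    INTENT_KEYWORDS = {
        "report_outage": ["outage", "down", "no service", "not working"],
        "billing_question": ["invoice", "bill", "charge", "payment"],
        "change_request": ["upgrade", "add line", "new location", "port"],
    }

    @dataclass
    class Ticket:
        intent: str
        body: str

    def classify_intent(email_body: str) -> str:
        """Return the first intent whose keywords appear in the email, else a fallback."""
        text = email_body.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                return intent
        return "needs_human_review"

    def route(ticket: Ticket) -> str:
        """Map an intent to a downstream queue (queue names are made up)."""
        queues = {
            "report_outage": "incident-response",
            "billing_question": "billing-team",
            "change_request": "provisioning",
            "needs_human_review": "care-center-agent",
        }
        return queues[ticket.intent]

    if __name__ == "__main__":
        email = "Hi, our main office circuit has been down since 9am. Please help."
        ticket = Ticket(intent=classify_intent(email), body=email)
        print(ticket.intent, "->", route(ticket))  # report_outage -> incident-response

A production version would use a trained model or an LLM rather than keyword rules, and combine text with account and network data, but the read-interpret-act flow is the same.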

Other members of the TEC indicated they are using generative AI to remove bias from job descriptions, create images for marketing, manage social media, and handle IT and HR tickets. It's also seen as a tool to get expert information to younger employees more quickly. Others noted their firms are in the early days of rolling out gen AI code generation tools and AI "co-pilots" across many roles, and of using generative AI to help make investment decisions.

Some described their efforts as still preliminary. "We are just in the learning and exploring phase," said Nicole Coughlin, chief information officer for the town of Cary, North Carolina, a tech startup hub that boasts firms including Fortnite maker Epic Games.  

Even as firms across the economy spend more on AI, many of their strategic technology aims would not be possible without the cloud computing infrastructure already built and still being enhanced. Cloud computing remains the most critical tech area for most companies, with 63% of TEC members citing the cloud as critically important to their company's tech strategy over the next 12 months, barely edging out the 58% of respondents who cited AI. Cybersecurity also continues to be a major concern, with 42% of respondents saying ransomware is a bigger threat today than a year ago.

The latest advances in AI are being applied to the challenging cybersecurity landscape. Jim Richberg, vice president of information security at Fortinet and its field chief information security officer, said his firm has been using AI for over a decade, not only to improve large (multi-billion node) generative AI models, but to identify the subset of a model that generates most of its predictive power. "When you look cumulatively at trillions of pieces of data, much of the accuracy comes from a fraction of the data," he said.
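
The underlying idea, that a small fraction of the data or features often carries most of a model's predictive power, can be shown with a short, generic Python sketch. This is a toy example using synthetic data and scikit-learn, not Fortinet's method: train a model on all features, keep only the ones it ranks as most important, and compare accuracy.

    # Toy illustration: most of the accuracy comes from a fraction of the features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic "telemetry": 200 features, only 10 of which are actually informative.
    X, y = make_classification(n_samples=5000, n_features=200, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    full = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    full_acc = accuracy_score(y_te, full.predict(X_te))

    # Keep only the 10% of features the full model ranked as most important.
    top = np.argsort(full.feature_importances_)[-20:]
    small = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr[:, top], y_tr)
    small_acc = accuracy_score(y_te, small.predict(X_te[:, top]))

    print(f"all 200 features: {full_acc:.3f}")
    print(f"top 20 features:  {small_acc:.3f}")  # typically nearly the same accuracy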

The volume of data and the complexity of its relationships currently make it difficult to manage and customize cyber defenses. "Most organizations either react when a problem becomes severe enough to be noticed, or they rely on implementing best practices or required practices. Generative AI could enable a more customized and pro-security posture for organizations," he said.

One reason AI needs to be deployed more broadly in cybersecurity is that hackers are already using it, and they can gain an early advantage. At least in the short term, Richberg said, generative AI will increase the ability of malicious actors to create social engineering content that is harder for users to distinguish from legitimate data. A malicious actor may steal email traffic as well as a victim's address book, enabling spear-phishing messages that reference the victim's recent conversations with each contact and mimic the language and syntax used with each.

"An email to your boss vs. your mother would talk about different things and use different language and tone. This will make it harder to distinguish malicious content based on cues such as awkward language or topics," he said. Similarly, generative AI will enable voice and even video facsimiles to become harder to distinguish from legitimate ones. "Most people already find it harder to apply the same cognitive filters to real-time interaction that they can use in assessing email. The cumulative effect will be to make it harder to rely on user training to detect malicious lures and avoid compromise," Richberg said. 

Joe Levy, technology group president at cybersecurity firm Sophos, said it has been developing large language models and deep-learning AIs for many years, and is now able to use AI to detect malicious software, stop business email compromise (BEC) and phishing attempts, and predict and interrupt incipient ransomware attacks.
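
As a rough illustration of what ML-based phishing triage can look like, here is a deliberately tiny Python sketch in the spirit of the detection work Levy describes. The training emails, labels and scoring step are invented for the example; this is not Sophos's model.

    # Tiny text classifier that scores incoming email for phishing/BEC suspicion.
    # The corpus and labels below are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled corpus: 1 = suspicious, 0 = benign.
    emails = [
        "Urgent: verify your account now or it will be suspended",
        "Wire transfer needed today, please send to this new vendor account",
        "Your password expires, click here to reset immediately",
        "Attached is the agenda for Tuesday's project meeting",
        "Lunch on Thursday? The new place downtown looks good",
        "Quarterly report draft attached, comments welcome by Friday",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    # Score a new message; a high score would hold it for review.
    incoming = "Please confirm the wire transfer details and click the link to verify"
    score = model.predict_proba([incoming])[0, 1]
    print(f"suspicion score: {score:.2f}")

A real system would train on vastly more data and combine text with sender, attachment and behavioral signals, but the scoring step works broadly this way.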

"What's most exciting about this new generation of generative AIs is its capacity to make every employee in our company more efficient in their work," he said. That includes more effective "threat hunts," but goes beyond core cybersecurity work to improved customer service response and contract review in legal. "Technological advancements have always helped organizations to scale, but have never really offloaded or augmented human intelligence. This time, it's not just a technological advancement, but an intelligence that we can partner with in a broad set of tasks, across many knowledge domains," Levy said. 

Many tech executives have taken the position that AI will not destroy more jobs than it creates. In the TEC survey, 47% of respondents said they think AI technologies will create more jobs than they destroy. But another 26% said it will destroy more jobs than it creates, while an equal 26% said it is too soon to know.  

"We always underestimate the social impact of any new technology. Can you think of a technology that didn't change how people interact with each other? From the car to the radio to the internet to the phone, what really changed was social interaction," Rau said. But he worries more about firms that shy away from AI use based on fears. "Maybe I need to start a fund that shorts companies that ban ChatGPT use and go long on ones that encourage it," he said. 

Levy said there is good reason to worry more this time. "While we have generations of historical precedents for technological advancements and the positive and negative impacts they've had on society, we have no real precedent for a technology that is effectively an alien intelligence, so there's a lot we can't predict about the impacts of generative AI. That doesn't mean we should panic, but it does mean that we should take care when it comes to considerations like its capacity to hallucinate, emergent intellectual property concerns, future legislative actions designed to protect against inevitable 'dual-use' abuses, and the effects that it will have on the future composition of workforces," he said.

But he added, "It's not so much a worry as a cautious optimism."