
Why waiting for A.I. laws, regulations from government could be a catastrophic mistake

Key Points
  • The warnings are here: artificial intelligence can have catastrophic effects on society if companies, technology leaders and the government don't intervene.
  • But even with government intervention, responsibility for AI guardrails will still fall to companies, where usage will grow as AI leaders like Microsoft more aggressively add AI to core enterprise apps, including Teams, Excel and Word.
  • Just as with ESG investments, companies need to track their AI efforts and disclose them publicly, said Asha Palmer, senior vice president of compliance solutions at Skillsoft.
Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 
Eric Lee | Bloomberg | Getty Images

The warnings are here — artificial intelligence can have catastrophic effects on society if there's no intervention.

Sam Altman, chief executive officer of OpenAI, the company behind ChatGPT, warns that AI poses a risk of human extinction, and Geoffrey Hinton, known as the "godfather of AI," cautions that AI could bring a dangerous future. These AI leaders and others support intervention from the federal government and industry before AI proliferates throughout society. "We need regulation," Altman told CNBC earlier this year.

But even if the government passes AI regulation, experts are skeptical of its reach. Given the limits of government intervention, much of the responsibility will still fall to companies.

"When you think about anti-bribery and corruption, other fields similar to AI, there's government regulation on the books, but the enforcement of it is slow, and the resources to enforce, track, monitor and test it are limited," said Asha Palmer, senior vice president of compliance solutions at digital learning platform Skillsoft. "So, we'll need to rely on the business community, and they must regulate," she said.

Disclose A.I. usage and develop transparency metrics

The number of companies investing in AI continues to grow, with generative AI chatbots such as Google's Bard and OpenAI's ChatGPT, backed by Microsoft, topping the list. With that rapid growth, Palmer said companies of all sizes need to track their AI efforts and disclose the information publicly. The number of companies using AI will grow rapidly, too, with Microsoft announcing this week a $30-per-month Microsoft 365 AI subscription for users of apps including Teams, Excel and Word.

While some growing companies, like graphic design firm Canva, are currently integrating ChatGPT into tools for clients, other industry stalwarts, like Samsung, are restricting usage among their own employees.

Among startups embracing chatbots for staff is Genies, a technology company that develops cartoon-like digital avatars for use on social media and in the metaverse. Genies not only uses ChatGPT across its product, engineering, finance and HR departments, but CEO Akash Nigam said the company is moving its onboarding process entirely to ChatGPT.

"We create avatars for the future of the internet and augmented reality, and we're targeting a core demographic of users that most people getting hired here can't immediately relate to, so it's best to consume [onboarding] information in an interactive experience," Nigam said. "We made custom [chat]bots for onboarding, and we've found it to be very meaningful so far."

Companies shouldn't stop with chatbot training, however. "Transparency with AI is crucial to our future, including transparency metrics that you measure, or vendors measure if they're the one developing it, to make sure AI development is fair," Palmer said. "This can be checking models for bias and making sure there are controls built in so people can't misappropriate trade secrets."
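Palmer doesn't prescribe a specific test, but a common starting point for the bias checks she mentions is comparing a model's positive-decision rates across demographic groups. The sketch below, with made-up decisions and group labels, computes that demographic-parity gap in plain Python.

```python
# Sketch of one simple transparency metric: a demographic-parity check
# on a model's approval decisions. The data and any alert threshold are
# illustrative, not a prescribed standard.
from collections import defaultdict

# (group label, model decision) pairs, e.g., from a holdout set.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(pairs):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in pairs:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(predictions)
gap = max(rates.values()) - min(rates.values())
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50; flag if above a chosen threshold
```

A metric like this is exactly the kind of number a company could measure on a schedule and disclose, whether the model is built in-house or by a vendor.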

Tools are out there, such as the Organization for Economic Cooperation and Development's AI Policy Observatory and its list of metrics for how to use and develop AI, but the market, especially small and medium-sized enterprises, still needs guidance.

"In an ideal world, all AI developers right now will get together and agree on some transparency metrics that we'll all measure and communicate to the market, because they're the ones developing it," Palmer said.

They know where bias can exist, how bias can occur, and how people can use and misuse AI, she said, so companies should look to these experts for a path forward.

Assess highest A.I. risk areas within the workplace

ChatGPT has led the way in early mass adoption and experimentation with generative AI, reaching 100 million monthly active users just two months after its launch last November and becoming the fastest-growing consumer application in history at the time (Meta's Threads surpassed that record this month, building off Instagram's huge existing user base).

It's no secret that chatbots have infiltrated the workplace, and companies are still grappling with the implications. After a company has established metrics for tracking AI, or even while it is still working those out, it needs to conduct an AI risk assessment, Palmer said.

"Ask yourself: What are your company's risks? How do they manifest themselves? What are we doing to put controls in place? What are our policies and procedures?" she said. "What are our guardrails? What is the infrastructure around those guardrails, policies and procedures? Are they clear, communicated and documented?"

Then, companies need to train employees on those risks. A risk assessment should identify which groups of employees are most exposed to AI risks, such as sales, the IT department or customer-facing roles.

"Within most companies, AI doesn't have an owner," Palmer said. "It's like how ESG was about 18 months ago."

AI needs an owner who rallies all the stakeholders within the company who have a hand in how AI is developed and used, she said. That person then builds a framework and infrastructure in conjunction with those stakeholders.

Self-regulation of A.I. won't be easy

One of the lessons from the rise of the internet is that new technology like generative AI requires self-regulation, but that won't be easy to create. And even though compliance experts like Palmer are skeptical about relying on the government, regulatory policies enacted in collaboration with government will be an important part of making AI usage safer for the public.

"Companies will need to create a unified regulatory framework," said François Candelon, global director at Boston Consulting Group's Henderson Institute. "Generative AI will likely be classified as a high-risk use case, so [companies] can start to ask themselves: How does it fit with our business? What would it mean to have a high-risk use case? Can I deal with that?"

Even companies in the same industry are going to have different perspectives on what requirements or policies should be put in place, Candelon said, so it's imperative for companies to band together to formalize their AI protocols in a cohesive way.

"In Japan, many people criticized the video game industry about violent and sexual content, so before the industry was regulated, companies said: Let's try to unite together and create something that would become the index and be clear," Candelon said.

He believes this same kind of co-regulation can work with AI. "It was a success [in the Japanese gaming industry] not only because it improved the public image, but because the Japanese video game industry worked hand-in-hand with governments on terms and standards."

The U.S. is at that same inflection point, Candelon said. "I believe that it's good for companies to share their experience and their expertise with [government] regulators."
