Workforce Wire


On-the-job generative AI training is already critical for workers. Here's how to get started

Key Points
  • As more companies plan to integrate generative AI into their business and workforce, having employees proficient in artificial intelligence is critical.
  • That is making on-the-job AI training even more important, so that workers aren't unprepared or left behind.
  • But the key for employees is to understand how to prompt AI platforms to produce the outputs needed, not to understand every technical aspect of generative AI.

For those concerned about workers having solid generative AI skills, there's one way to take control: on-the-job training.

"Your job will not be replaced by AI. Your job will be replaced by someone else who uses AI if you don't," said John Blackmon, chief AI officer of custom training organization ELB Learning, which has products used by companies like Google, Mastercard and GM to upskill their workforces.

The difference between proficiency and expertise, Blackmon said, lies in understanding what's going on behind the scenes.

Much like the intricacies of telephone or internet communications, it's the programmers and engineers who need to be true experts on generative AI. For the general workforce, Blackmon said, the key is knowing how to prompt platforms to produce the outputs you need.

"Prompting is the new way to speak," Blackmon said. It's organizations that have a sizable stake in employees becoming comfortable producing generative AI inputs.

Training is essential because employees are going to use generative AI regardless of whether their leaders enable them to do so responsibly and efficiently. "Any new employee today is already showing up with ChatGPT in their back pocket," said Bryan Kirschner, vice president of strategy at DataStax, which makes a vector database for generative AI applications. "What do they get when they sit down at their desk or at their remote workstation at home?"

According to a recent AI skills report from TalentLMS, 58% of HR managers will use upskilling and reskilling initiatives to overcome the AI-induced skills gap. Here are some of the key concepts in that effort.

Crawl, walk, and then run with AI

"The path to training," said Kirschner, "is thinking about crawl, walk, run. But crawl really soon."

Kirschner said no generative AI should walk alone. Much as you wouldn't have an employee without a manager or coach, you shouldn't have generative AI without a human copilot.

Employees may be naïve about generative AI at the beginning, but they have the capacity to become ingenious through training. Kirschner said it is important for employees to get to a place where they "have points of view" and can think creatively about how to apply generative AI use cases to customer needs.

Kirschner's team at DataStax produced an AI maturity model that measures the sophistication of a company's AI usage. There are four main threads in the model: context, culture, architecture, and trust. The model includes an arc that showcases the best ways to implement generative AI from the beginning until workers feel savvy.

For example, the context thread of the maturity model begins with privacy by design and works up to continuous learning and real-time adaptation. Meanwhile, the culture thread starts with a values-driven organization and works up to variable compensation that rewards ethical decisions and regulator engagement. For architecture, automated code generation and testing inform sophistication. Trust includes factors like internal and external transparency.

Adopt AI responsibly and slowly

Blackmon said upskilling for generative AI is similar to other types of training. "You have to have clear goals," he said. "You have to know exactly what you're trying to do. You have to know the audience that you're talking to."

At many companies, internal working groups define approved generative AI use cases and help employees navigate training and adoption responsibly. Typically, stakeholders from different corners of the organization participate to create a well-rounded approach.

Hasnain Malik, HR director at Brainchild Communications, said that companies should choose a small group of users and target a narrow set of functionality. "Expand the trial incrementally until you're comfortable rolling it out to everyone," Malik said. 

Data security software company BigID published a course, How to Accelerate AI Initiatives, on what organizations need to do to adopt AI responsibly, from governing large language models to prevent data leaks, to avoiding non-compliance and reducing the risks that come with generative AI usage. That starts with classifying all the data that goes into an LLM. "If it's junk going in, it's junk going out," Blackmon said of the unique data sets companies feed their models.

The BigID course suggests setting automatic flags for policy violations when sensitive or regulated data ends up in the wrong place. These flags can build on existing policies, but should also draw on new ones that manage and monitor the risks specific to generative AI, written so they don't contradict the policies already in place.
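At its simplest, classifying inputs and flagging violations can look like a pre-flight screen on prompts. The sketch below is a hedged illustration: the regex patterns and the flag_sensitive helper are hypothetical stand-ins, and commercial tools like BigID operate at far greater depth.

```python
# A toy pre-flight screen that flags obviously sensitive values before
# a prompt is sent to an LLM. The patterns and the flag_sensitive
# helper are illustrative assumptions, not any vendor's implementation.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this note: contact jane@example.com, SSN 123-45-6789."
hits = flag_sensitive(prompt)
if hits:
    print(f"Policy flag raised before sending: {', '.join(hits)}")
else:
    print("Prompt is clear to send to the model.")
```

In a real deployment, a hit would raise the kind of automatic policy flag the course describes and route the prompt to a compliance queue rather than print a message.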

Carrie Hoffman, partner and labor and employment attorney with Foley & Lardner, said that generative AI policies don't exist in a vacuum. "Everything you do from an AI perspective needs to be done in conjunction with making sure we're still being compliant," Hoffman said.

Shabbi Khan, a patent attorney at Foley & Lardner, said there's a balance between being permissive and monitoring activities connected to AI. "If you're too strict, employees may end up using it on their personal devices," Khan said. A concise best-practices guide also makes it more likely that employees actually read what you're asking of them, such as turning off training mode and never entering confidential information.

Keeping a guard up around AI

As part of training, it's crucial to remind all generative AI users to keep their guard up and confirm the accuracy of outputs. Hallucinations, outputs the model effectively makes up without support in its underlying data, are real, even if rare.

"As these models are getting better and better, the hallucinations are harder to detect," Khan said. "People will start assuming that the outputs are accurate because you don't find as many mistakes."

This is called automation bias: the assumption that the machine is right. It makes retaining critical human oversight all the more important.
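One lightweight guard against automation bias, sketched below as a hedged illustration rather than a method described in this article, is to require the model to return a verbatim supporting quote and verify that quote against the source document before trusting the answer.

```python
# A minimal grounding check, assuming a workflow where the model is
# asked to return a verbatim quote supporting its answer. The document,
# quote, and is_grounded helper are illustrative assumptions.

SOURCE_DOC = """Q3 revenue was $4.2M, up 8% year over year.
Headcount grew from 120 to 134 employees."""

def is_grounded(claimed_quote: str, source: str) -> bool:
    """True only if the quoted evidence appears verbatim in the source."""
    return claimed_quote.strip() in source

# Suppose the model answered a question and cited this supporting quote:
model_quote = "Q3 revenue was $4.2M, up 8% year over year."

if is_grounded(model_quote, SOURCE_DOC):
    print("Quote verified against the source document.")
else:
    print("Unverified claim: route to a human reviewer.")
```

A check this simple obviously won't catch subtle errors, which is exactly why the human reviewer stays in the loop.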

Kirschner, who has been in the tech strategy game for multiple revolutions, said, "We used to say in the early days of mobile, it's becoming the front door to your business, and I think generative AI will become the front door to your business."

With that in mind, it only makes sense to provide upskilling that enables your people to power the organization in a world that looks different than it did yesterday.

VIDEO: Straight Talk About AI and the Workforce (26:28)