Inherently biased artificial intelligence programs can pose serious problems for cybersecurity at a time when hackers are becoming more sophisticated in their attacks, experts told CNBC.
Bias can occur in three areas — the program, the data and the people who design those AI systems, according to Aarti Borkar, a vice president at IBM Security.
"One is the algorithm itself," she told CNBC, referring to the lines of codes that teach an AI program to carry out specific tasks. "Is it biased in the way it's approached, and the outcome it's trying to solve?"
A biased program may end up focusing on the wrong priorities and could miss the real threats, she explained.
"If you're trying to solve the wrong outcome, and the outcome is biased, then your algorithm is biased," Borkar said.
The role of AI is expanding in cybersecurity. Many CEOs see cyber attacks as the biggest threat to the global economy over the next decade.
Firewalls and antivirus software are increasingly viewed as tools of antiquity as the digital threat constantly evolves, and hackers are now using more advanced technologies, such as AI, to launch complex attacks against businesses.
Once they are able to breach a system, many attackers maintain a low profile, which makes it harder for IT teams to detect their presence. Some would quietly sniff around the network for sensitive data while others may slowly alter important information without anyone noticing — a scenario that experts say can have serious implications over time.
Combating such situations requires relying on artificial intelligence to build security systems that can automatically respond to threats, according to industry professionals.
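The automated detection those professionals describe often starts from a simple idea: learn what normal behavior looks like, then flag large deviations without waiting for a human to notice. The sketch below is a minimal, hypothetical illustration of that baseline approach (the traffic figures and threshold are invented for the example), not a description of any vendor's actual product.

```python
# Minimal sketch of baseline-deviation detection: learn "normal" from history,
# then flag observations that stray too far from it. All numbers are made up.
from statistics import mean, stdev

def flag_anomalies(baseline, observed, k=3.0):
    """Return observed readings more than k standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > k * sigma]

# Daily outbound megabytes for one host: mostly steady, with a single spike
# that could easily slip past a human scanning dashboards.
normal_days = [102, 98, 105, 99, 101, 103, 100]
new_days = [104, 97, 260, 101]

print(flag_anomalies(normal_days, new_days))  # [260]
```

A real system would track many signals per host and adapt its baseline over time, but the core trade-off is the same: the threshold `k` decides how quietly an attacker can operate before being flagged.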
In fact, a study commissioned by tech giant Microsoft found 75% of companies surveyed have either adopted, or are looking to adopt, AI in their cybersecurity plans.
AI enables us to learn more quickly where the problems are, in situations where it would be difficult for humans to process all the data being generated, Diana Kelley, Cybersecurity Field Chief Technology Officer at Microsoft, told CNBC's "Squawk Box" on Tuesday.
AI systems typically require large volumes of so-called training data to learn their functions.
If the data used is biased, then the artificial intelligence is going to understand only a partial view of the world and make decisions based on that narrow understanding, Borkar said.
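Borkar's point about a "partial view of the world" can be made concrete with a toy example. In the sketch below (the data, tokens and detector design are all hypothetical), a detector whose training set contains only phishing examples learns signals for that one attack type and nothing else, so a different kind of threat sails through unflagged.

```python
# Toy illustration of biased training data: a detector "trained" only on
# phishing samples recognizes phishing-like wording but nothing else.
# The samples and events below are invented for the example.

def train(samples):
    """Collect the set of tokens seen in labeled-malicious training samples."""
    vocab = set()
    for text in samples:
        vocab.update(text.lower().split())
    return vocab

def is_threat(vocab, event):
    """Flag an event only if it shares a token with the training vocabulary."""
    return bool(vocab & set(event.lower().split()))

# Biased training set: phishing only -- no malware or insider-threat examples.
phishing_only = ["reset your password now", "verify your account login"]
model = train(phishing_only)

print(is_threat(model, "urgent: verify password"))     # True  -- matches known patterns
print(is_threat(model, "powershell encoded payload"))  # False -- unseen threat type, missed
```

The failure here is not a bug in the code but in the data: the model behaves exactly as trained, and the blind spot only shows up when an attack arrives from outside the training set's narrow slice of the world.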
In the same way, if the people who design the program come from a similar culture or background and are like-minded, then cognitive diversity will be low.
"That is when you start creating tunnel vision and echo chambers," she added.
Inherent bias can lead AI systems to misidentify threats, which can slow down business processes, erode trust and hurt a company's bottom line. On a larger scale, it can also damage a company's brand.
More worrying is the fact that it can lead to situations where the program fails to identify a serious threat altogether, Borkar added.
Cybersecurity threats come from all sides, which makes them "naturally diverse" and unbiased, according to her.
Hence, companies need AI systems that are "equally diverse and equally unbiased — otherwise, something creeps in, so, that's how the vulnerabilities come in, that's how you miss something ... because you weren't thinking as broadly as the opponent," she said.
Still, Microsoft's Kelley explained that while it might be difficult to fully eradicate inherent biases, minimizing such instances "takes planning and careful supervision of the data that is fed" to the AI.
"It's not like the bad guys are waiting for us to learn how to do this. So, the faster we get there, the better off (we are)," Borkar added.