Camera apps have become increasingly sophisticated. Users can elongate legs, remove pimples, add animal ears and, now, some can even create false videos that look very real. The technology used to create such digital content has quickly become accessible to the masses, and the fabricated media it produces are called "deepfakes."
Deepfakes are manipulated videos or other digital representations, produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear to be real.
Such videos are "becoming increasingly sophisticated and accessible," wrote John Villasenor, nonresident senior fellow of governance studies at the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization. "Deepfakes are raising a set of challenging policy, technology, and legal issues."
In fact, anybody who has a computer and access to the internet can technically produce deepfake content, said Villasenor, who is also a professor of electrical engineering at the University of California, Los Angeles.
The word "deepfake" combines the terms "deep learning" and "fake"; the technology behind it is a form of artificial intelligence.
In simplistic terms, deepfakes are falsified videos made by means of deep learning, said Paul Barrett, adjunct professor of law at New York University.
Deep learning is "a subset of AI," and refers to arrangements of algorithms that can learn and make intelligent decisions on their own.
But the danger of that is "the technology can be used to make people believe something is real when it is not," said Peter Singer, cybersecurity and defense-focused strategist and senior fellow at New America think tank.
Singer is not the only one who's warned of the dangers of deepfakes.
Villasenor told CNBC the technology "can be used to undermine the reputation of a political candidate by making the candidate appear to say or do things that never actually occurred."
"They are a powerful new tool for those who might want to (use) misinformation to influence an election," said Villasenor.
A deep-learning system can produce a persuasive counterfeit by studying photographs and videos of a target person from multiple angles, and then mimicking that person's behavior and speech patterns.
Barrett explained that "once a preliminary fake has been produced, a method known as GANs, or generative adversarial networks, makes it more believable. The GAN process seeks to detect flaws in the forgery, leading to improvements addressing the flaws."
And after multiple rounds of detection and improvement, the deepfake video is completed, said the professor.
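The adversarial detect-and-improve loop Barrett describes can be illustrated with a deliberately simplified toy in Python. This is not a real GAN (there are no neural networks, and the names `detector_score` and `train_forger` are invented for this sketch); it only shows the back-and-forth in which a detector scores how "fake" a sample looks and the forger adjusts its output to reduce that score.

```python
import random

# Toy illustration of the "detect flaws -> improve" loop described above.
# A forger proposes samples, a detector scores how fake they look, and the
# forger nudges its output to reduce that score over many rounds.

REAL_MEAN = 5.0  # stand-in for "real data" the forger tries to imitate


def detector_score(sample: float) -> float:
    """Higher score = looks more fake (farther from the real data)."""
    return abs(sample - REAL_MEAN)


def train_forger(rounds: int = 2000, lr: float = 0.01, seed: int = 0) -> float:
    rng = random.Random(seed)
    guess = 0.0  # forger's starting point, far from the real data
    for _ in range(rounds):
        sample = guess + rng.gauss(0, 0.1)  # forger produces a candidate fake
        flaw = detector_score(sample)       # detector measures how "off" it is
        # Forger moves in whichever direction reduces the detector's score.
        direction = 1.0 if sample < REAL_MEAN else -1.0
        guess += lr * flaw * direction
    return guess


print(train_forger())  # converges close to REAL_MEAN after many rounds
```

In an actual GAN, both the generator and the discriminator are neural networks trained jointly, and the discriminator's gradients, rather than a hand-written distance rule, tell the generator how to improve.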
According to an MIT Technology Review report, the technology that enables deepfakes can be "a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections."
In fact, "AI tools are already being used to put pictures of other people's faces on the bodies of porn stars and put words in the mouths of politicians," wrote Martin Giles, San Francisco bureau chief of MIT Technology Review, in a report.
He said GANs didn't create this problem, but they'll make it worse.
While AI can be used to make deepfakes, it can also be used to detect them, Brookings' Villasenor wrote in February. With the technology becoming accessible to any computer user, more and more researchers are focusing on deepfake detection and looking for a way of regulating it.
Large corporations such as Facebook and Microsoft have taken initiatives to detect and remove deepfake videos. The two companies announced earlier this year that they would collaborate with top universities across the U.S. to create a large database of fake videos for research, according to Reuters.
"Presently, there are slight visual aspects that are off if you look closer, anything from the ears or eyes not matching to fuzzy borders of the face or too smooth skin to lighting and shadows," said Singer from New America.
But he said that detecting the "tells" is getting harder and harder as the deepfake technology becomes more advanced and videos look more realistic.
Even as the technology continues to evolve, Villasenor warned that detection techniques "often lag behind the most advanced creation methods." So the better question is: "Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?"
Update: This story has been revised to reflect an updated quote by John Villasenor from the Brookings Institution.