The "Zao" app — named for the phonetic rendering of the Chinese character meaning "to create" — was developed by a unit that's majority-owned by online social networking platform Momo. Users can pretend to be starring in blockbuster movies by uploading pictures to the app, which then creates simulated video clips using machine learning.
It's the latest example of a "deepfake" — a term for manipulated videos or other digital representations produced by sophisticated artificial intelligence that yield seemingly realistic, but fabricated, images and sounds. The rapid development of deepfake technology has raised concerns about how it could be used to influence elections or for other malicious activity.
Zao was first released on Aug. 30 and quickly jumped to the top of the free download charts on both Android and iPhone app stores in China.
But users soon began to criticize Zao for its loose data privacy protections — including giving the company perpetual and transferable rights to uploaded data, according to Chinese media reports of Zao's original user agreement.
As a result, the ubiquitous Chinese messaging app WeChat reportedly banned users from sharing videos created using the app. Tencent, WeChat's parent, did not respond to a CNBC request for comment.
In a statement Tuesday on Weibo, China's version of Twitter, the start-up said it will not store personal biometric information — data that has become key to personal security given its use in passwords and authentication.
It's rare for a Chinese company to publicly address privacy issues so quickly, said Ziyang Fan, head of digital trade at the World Economic Forum.
Zao also said the "face-swapping" effect is created by a technical overlay, which the company clarified meant the machine-generated images are approximations rather than integrations of actual facial data.
The company added that once a user deletes an account, it will follow the "required rules and laws" in handling that user's information. However, it was not clear whether that meant data would be completely erased, or whether the new terms applied to previously uploaded data.
Zao did not publicly respond to concerns raised by users below its statement on Weibo.
Shares of Zao's parent, Momo, which is traded on the Nasdaq, fell 1.6% to $36.18 a share in Tuesday's session in New York. A representative for Momo was not available for comment, and Zao did not respond to a CNBC request for comment via Weibo.
"Zao is (in) a completely uncharted territory," said Jennifer Zhu Scott, founding principal of Radian Partners, a private investment firm focusing on artificial intelligence. The company's future will not likely be easy due to rising distrust of technology, she added.
Users initially gravitated to Zao for the speed and accuracy with which it integrated ordinary people's faces onto the bodies of movie stars. The proliferation of the modified clips showed just how quickly and convincingly deepfakes can spread false information.
The technology caught public attention in April 2018 when comedian Jordan Peele created a video pretending that former President Barack Obama insulted President Donald Trump in a speech.
In June 2019, Facebook came under fire for not properly identifying a fake video that suggested House Speaker Nancy Pelosi was stumbling through a speech when, in reality, she was not.
"Deepfakes weaponize information in a way that takes maximum advantage of the dynamics of a social media ecosystem that prizes traffic above nearly all else," John Villasenor, a nonresident senior fellow in governance studies at the Brookings Institution's Center for Technology Innovation, wrote in June.
"While deepfakes require some investment of time and work to create," he said, "like other digital content, they can be easily distributed via social media to an audience that — with the right combination of planning, timing, and luck — could reach into the millions."
Zao's overnight popularity and relatively loose handling of personal data are especially worrisome given that the company falls under the jurisdiction of the Chinese Communist government, which can compel private entities to share sensitive information with it.
In addition, the company says it will share user information with authorities if content is considered a threat to "national security" or "public health."
In an email, Scott called the clause on national security interesting, "but who decides that?" she asked.