A couple of years ago, a friend invited Carl Perez to a virtual world promising online discourse free of Nazis.
That world was Germany.
Perez, who uses gender-neutral pronouns, didn't fly from their home in Colorado to escape the hatred they saw online. Instead, Perez simply changed their Twitter account location.
"Since then, I've seen pretty much no nationalist content," they said.
Perez is not alone in trying to escape a sea of hate by virtually jumping ship to Germany. But local residents and researchers say German Twitter is not exactly the internet utopia some imagine.
"We are not the paradise of social media without any hate speech whatsoever," said Stephan Dreyer, a senior media law and governance researcher at the Hans-Bredow-Institut in Germany.
While the most obvious expressions of Nazism and racism may be harder to find on Twitter accounts with their locations set to Germany, there is still plenty of coded content that slips through the cracks, Dreyer said.
Twitter users often point to the company's content policy in Germany to argue it should be able to identify and remove Nazis from the platform in other regions. When Maureen Colford learned about the location setting "hack" to filter out Nazis, she said she was "amazed that somehow Twitter manages to do this in Germany," and wondered, "why can't they do this everywhere?"
This theory assumes Twitter has a filter it uses to detect hate speech in Germany and chooses not to implement it elsewhere. But that's not how it works. Twitter is simply required by German law to remove some forms of hateful content expeditiously.
Following the Holocaust, Germany passed laws cracking down on hate speech and praise of Nazism. In 2017, German lawmakers approved a new bill called the Network Enforcement Act, which was meant to bring concrete guidelines to online speech. Under the law, commonly known as NetzDG, social networking companies can face fines of up to 50 million euros if they fail to remove "manifestly unlawful" content within 24 hours after it is flagged, or within seven days in less clear-cut cases.
In a statement to CNBC, a Twitter spokesperson said its enforcement in Germany "is based on reports under NetzDG (which users can file per the law's requirements) and valid legal process that is brought to our attention from law enforcement, legal entities etc."
Twitter expanded its global policies to prohibit accounts affiliated with violent extremist groups in December 2017, the spokesperson said. When content is withheld from a country due to its local laws, Twitter includes a disclosure on the affected tweet or account indicating why it can't be shown.
That means even in Germany, there's no catchall filter that screens for hateful content. Twitter still relies heavily on users to report content they believe violates its policies and the law.
"It's absolutely true that there's less Nazi content available here [than in the U.S.]," said Jillian York, the Germany-based director for international freedom of expression at the Electronic Frontier Foundation. "But like I said, you have to actually report it under that queue to take it down."
Having a law that regulates hate speech, both online and offline, comes with its own set of problems. As tech companies including Twitter, Facebook and Google field allegations of political censorship from U.S. lawmakers, Germany provides a telling example of what it looks like when tech companies are subject to the government's moderation standards, rather than their own.
Shortly before the 2017 white nationalist rally in Charlottesville, Va., that ended in violence, Colford had tried everything she could think of to turn off the hate she saw on Twitter, even momentarily. She muted keywords and hashtags on Twitter. She used third-party programs that didn't work well on mobile. And she even installed a virtual private network on her phone, but found it blocked her from using other apps as well.
"So I tried this," Colford said about switching her profile's location setting to Germany. "It's a good break."
Twitter users who have taken a virtual getaway to Germany report mixed results. Some say they don't notice much of a difference, if only for the fact that they didn't follow people engaging with hateful content in the first place. Others, like Perez, noticed they no longer saw white nationalist accounts when viewing the replies to tweets in their feeds.
Colford said she ultimately switched her location setting back from Germany, not because it didn't work, but because it worked too well. She felt compelled to make the virtual return from Germany out of a sense of obligation to bear witness and report harassment.
"I stopped doing it because I felt guilty about meeting people out there who … needed help," Colford said, estimating she only had the setting switched on for four to five days.
"I'm realizing, oh my God, this person, I didn't even know," Colford said. "I didn't know that they were being attacked."
When it comes to content moderation, lawmakers and tech companies alike have struggled with a key question: Who should be in charge of adjudicating speech on the internet?
Laws in the U.S. and Germany answer this question in a way that largely reflects their values surrounding free speech. In Germany, the national history of Nazism and genocide has profoundly impacted free speech laws.
"To protect democracy, you would sometimes have to limit those very important democratic rights," said Jens Pohlmann, a postdoctoral fellow at the German Historical Institute in Washington, D.C., describing the German mindset. "And that is certainly based on the experience of German democracy being hijacked and turned into Nazi dictatorship."
Pohlmann, a German researcher who has lived in the U.S. for ten years, has seen both sides. In the U.S., he said, there is a greater sense that "more and more speech will lead to a positive outcome."
The U.S. does not have a national law banning hate speech. American lawmakers, especially conservatives, have typically been inclined to leave private companies to their own devices, short of illegal conduct. Advocates of this hands-off approach argue it allows innovation to thrive and is part of the reason American tech companies have risen to the status of global leaders.
But recently, lawmakers on both sides of the aisle have asked whether it's time to rein in tech companies. At congressional hearings on tech censorship, representatives have floated the idea of revising a key piece of legislation known as Section 230 of the Communications Decency Act. The law grants tech companies immunity from liability for their users' content, treating them as distinct from publishers, which are liable for the content they publish.
The decision to leave speech moderation to corporate leaders may also reflect Americans' declining trust in the government. According to a March 2019 Pew Research Center survey of adults in the U.S., only 17% of respondents said they "trust the federal government to do what is right just about always/most of the time."
Americans are comparatively trusting of private institutions. A May 2018 Pew survey found that 28% of U.S. adult respondents said they "can trust major technology companies to do what is right" most or just about all of the time.
These sentiments are reversed in Germany. In a 2017 Pew survey, 26% of German respondents said they had "a lot" of trust in their national government to do what was right, compared to 15% of U.S. respondents in that same survey.
Citizens in the two countries have different expectations of their governments based on cultural norms. In many European countries, "the government is going to protect your speech and protect you, too, from potentially harmful kinds of speech that might damage society," said Anna Boch, a PhD candidate in sociology at Stanford doing research at Germany's Hans-Bredow-Institut.
York, from the EFF, warned of the danger of a government compelling platforms to moderate speech as a way to protect its citizens, especially when automated tools fail to pick up on satire and parody.
"If we believe that there's value to counterspeech … then I think it's incredibly troublesome that people would want companies to crack down harder without having the accurate tools to do so," York said, suggesting tech firms should instead focus on providing users with more control over their online experiences.
On that point, both countries may find some common ground. When it comes to content moderation, Pohlmann said, "Europeans are rather afraid of professional companies making these decisions."