As Facebook users around the world are coming to understand, some of their favorite technologies can be used against them. It's not just the scandal over psychological profiling firm Cambridge Analytica getting access to data from tens of millions of Facebook profiles. People's filter bubbles are filled with carefully tailored information – and misinformation – altering their behavior and thinking, and even their votes.
Both individually and as a society, people are struggling to understand how their newsfeeds turned against them. They are coming to realize just how carefully controlled Facebook feeds are, with highly tailored ads. That set of problems, though, pales in comparison to those posed by the next technological revolution, which is already underway: virtual reality.
On one hand, virtual worlds hold almost limitless potential. VR games can treat drug addiction and may even help address the opioid epidemic. Prison inmates can use VR simulations to prepare for life after their release. People are racing to enter these immersive experiences, which have the potential to be more psychologically powerful than any technology to date: The first modern consumer VR headsets sold out in 14 minutes.
In these new worlds, every leaf, every stone on the virtual ground and every conversation is carefully constructed. In our research into the emerging definition of ethics in virtual reality, my colleagues and I interviewed the developers and early users of virtual reality to understand what risks are coming and how we can reduce them.