On the Internet, things get old fast. One prime candidate for the digital dustbin, it seems, is the current approach to protecting privacy on the Internet.
It is an artifact of the 1990s, intended as a light-touch policy to nurture innovation in an emerging industry. And its central concept is “notice and choice,” in which Web sites post notices of their privacy policies and users can then make choices about sites they frequent and the levels of privacy they prefer.
But policy and privacy experts agree that the relentless rise of Internet data harvesting has overrun the old approach of using lengthy written notices to safeguard privacy.
These statements are rarely read, are often confusing and can’t hope to capture the complexity of modern data-handling practices. As a result, experts say, consumers typically have little meaningful choice about the online use of their personal information — whether their birth dates, addresses, credit card numbers or Web-browsing habits.
“There are essentially no defenders anymore of the pure notice-and-choice model,” said Daniel J. Weitzner, a senior policy official at the National Telecommunications and Information Administration of the Commerce Department. “It’s no longer adequate.”
So if the current model is broken, how can it be fixed? There are two broad answers: rules and tools.
Rules would mean new regulations. And Congress and the Federal Trade Commission are looking at further rules that could limit how personal information is used. For example, the government might ban the use of recorded trails of a person’s Web-browsing behavior — so-called click streams — in employment or health insurance decisions.
Still, the next round of online privacy regulation needs to proceed carefully, policy experts warn. They say that online data collection and analysis are an economic imperative, and that the Internet industry of the future will involve adding value to the free flow of information — much of it created by individuals and their browsing activity. Google, Facebook and Twitter are evidence of the trend, and so are legions of start-ups seeking riches in fields like social networking, cloud computing and smartphone applications.
“Getting this balance right is critical to the future of the Web, to foster innovation and economic growth,” Mr. Weitzner said.
Whatever the future of regulation, better digital tools are needed. Enhancing online privacy is a daunting research challenge that involves not only computing, but also human behavior and perception. So researchers nationwide are tackling the issue in new ways.
At Carnegie Mellon University, a group is working on what it calls “privacy nudges.” This approach taps computer science techniques like machine learning, natural language processing and text analysis, as well as disciplines like behavioral economics.
The goal is to design software that essentially sits over your shoulder and provides real-time reminders — short on-screen messages — that the information you’re about to send has privacy implications. “It learns, helps you and occasionally prompts you,” said Lorrie Faith Cranor, a computer scientist at Carnegie Mellon. “When we go online, there are a lot of ways we can inadvertently give up our privacy.”
On a social networking site, Ms. Cranor says, people often type in their birth dates and widely circulate them, hoping to receive online birthday greetings. But a birth date posted online, she notes, can also be used for marketing profiling, identification and potentially identity theft. A software agent, she says, could inform the user of that before a birth date is typed.
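The core of such a software agent — scanning a draft post for sensitive patterns and warning before it is sent — can be illustrated in a few lines. The sketch below is purely hypothetical: the Carnegie Mellon work described above uses machine learning and natural language processing, while this toy version uses only a simple pattern match for text that looks like a birth date.

```python
import re
from typing import Optional

# Hypothetical pattern for things that look like a birth date,
# e.g. "7/14/1985" or "July 14, 1985". A real nudge system would
# use trained models, not a hand-written regex.
DATE_PATTERN = re.compile(
    r"\b\d{1,2}[/-]\d{1,2}[/-]\d{4}\b"
    r"|\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},?\s+\d{4}\b",
    re.IGNORECASE,
)

def privacy_nudge(draft_text: str) -> Optional[str]:
    """Return a warning if the draft appears to contain a birth
    date; return None if nothing sensitive is detected."""
    if DATE_PATTERN.search(draft_text):
        return ("This post appears to contain a date of birth, which "
                "can be used for profiling or identity theft. Post anyway?")
    return None

# The nudge fires on a birthday announcement but stays silent otherwise.
assert privacy_nudge("My birthday is July 14, 1985!") is not None
assert privacy_nudge("See you next week!") is None
```

The design point is that the check runs *before* the information leaves the user's machine — the on-screen message is the "nudge," and the user still makes the final choice.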
An on-screen alert is a mild nudge. A stronger one might be automatically enrolling the user in an online lottery for cash prizes (perhaps financed by the industry, to avoid tougher privacy regulation), if the person doesn’t disclose potentially sensitive personal information. The stronger incentive, says Alessandro Acquisti, a researcher who specializes in the economics of privacy, may be needed to offset the bias toward immediate gratification in human decision-making — thinking only of the emotionally satisfying birthday greeting next week instead of the privacy risks down the road.
M. Ryan Calo, a fellow at the Center for Internet and Society at the Stanford Law School, is exploring technologies that deliver “visceral notice.” His research involves voice and animation technology that emulates humans. When putting information in a personal health record, for example, a virtual nurse could explain to the user the privacy implications, and trade-offs, of sharing personal information with doctors, family members, insurers and drug companies.
Mr. Calo explains that people naturally react more strongly, in a visceral way, to anthropomorphic cues. He points to a sociological experiment that had people pay for coffee on an honor system. One box for depositing cash had a picture of flowers on it, while another had a picture of human eyes. Time and again, he said, people paid more often for coffee when the box had eyes instead of flowers. “Our brains are hard-wired to respond to images that look human, alive,” Mr. Calo said.
At Princeton, Edward W. Felten, a computer scientist, wants to re-engineer the Web browser for greater privacy. A key, he says, is to alter the software’s design so that information about on-screen viewing sessions is kept separate and not routinely passed along so a person’s browsing behavior can be tracked. His plan would push mainstream browsing toward anonymous mode, which can be done in the latest browser software, but only by opening a separate, specially designed window.
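The separation Mr. Felten describes can be modeled simply: instead of one global cookie store shared across everything the user does, the browser keeps an isolated store per viewing session, so an identifier a tracker sets in one session cannot be read back later to link the user's activity. The toy model below is only an illustration of that idea, not how any real browser is implemented.

```python
# Hypothetical sketch of per-session browsing state. Each session id
# maps to its own cookie jar, so nothing set in one session is
# visible from another.
class IsolatedBrowser:
    def __init__(self):
        # session id -> {site -> {cookie name -> value}}
        self._jars = {}

    def set_cookie(self, session, site, name, value):
        self._jars.setdefault(session, {}).setdefault(site, {})[name] = value

    def get_cookies(self, session, site):
        return self._jars.get(session, {}).get(site, {})

browser = IsolatedBrowser()
# A tracker sets an ID while the user browses in one session...
browser.set_cookie("session-a", "tracker.example", "uid", "12345")
# ...but a fresh session exposes no stored identifier, so the two
# visits cannot be linked.
assert browser.get_cookies("session-b", "tracker.example") == {}
assert browser.get_cookies("session-a", "tracker.example") == {"uid": "12345"}
```

In effect this makes the anonymous mode mentioned above the default behavior, rather than an option hidden in a separate window.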
“The browser,” Mr. Felten said, “needs to be less promiscuous about revealing the information collected.”