The Cambridge Cyber Summit

It is possible to design a computer system that can't be hacked

Howard Shrobe, director of cybersecurity at MIT’s Computer Science and Artificial Intelligence Laboratory

Cyber attacks seem to occur nearly every day, and most people assume that this is an inherent consequence of our reliance on modern computer systems and networks.

The normal reasoning goes something like this: "Computer systems are big, complex systems of unprecedented scale. Vulnerabilities are an unavoidable consequence of this complexity and we are stuck with them."

In this view, the best we can do is manage the risk and try to limit the consequences. We erect perimeter defenses such as firewalls, which provide vital but limited protection, and we patch and pray. Yet major breaches continue. In effect, we're stuck in cyber hell and we need to learn how to make the best of it.

I would like to offer an alternative, more optimistic view. It is important to take a historical view of the problem. The way in which we architect computer systems has its roots in the 1970s, when the Unix operating system was developed along with its systems programming language, C (which is still used for most systems programming). Computer hardware of that era was slow and memory was very expensive; nothing mattered as much as squeezing out every ounce of performance possible. One consequence of this is that instead of enforcing critical properties ubiquitously at the lowest possible levels of the system (e.g., in the hardware or in the programming language), we left it to programmers to get everything right on every line of code. Furthermore, computer systems of that era weren't ubiquitously connected to networks that offered access to attackers from around the world. So security wasn't even a secondary concern; it wasn't a concern at all.

In short, the architects of those systems made perfectly reasonable engineering trade-offs for their world; but our world is very different. Between then and now, Moore's Law has allowed a steady improvement in computer performance; systems today are more powerful on all metrics by a factor of about 50,000. They are all networked, and they are entrusted with critical functions. Yet we still use architectures appropriate to an earlier era. To paraphrase Einstein, everything changed except for our way of thinking about how to design computer systems.

The good news is that we can re-engineer these systems for today's needs, removing entire classes of vulnerabilities at a time. What we need to think about are the architectural principles that would govern secure designs. A handful of these principles cover most of the vulnerabilities that exist. The first is memory safety, i.e., ensuring that a program never reads or writes outside the boundaries of an object in memory. Memory-safety errors are a key part of well over 50 percent of known vulnerabilities.
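
To make memory safety concrete, here is a minimal sketch (my illustration, not from the original commentary) in Rust, one of the memory-safe languages alluded to in the next paragraph. In C, an out-of-bounds array access can silently corrupt adjacent memory; in Rust, every access is checked, so the same mistake is caught instead of exploited. The variable names are hypothetical.

```rust
fn main() {
    let scores = [10, 20, 30]; // a fixed-size array of three elements

    // In-bounds access: fine.
    println!("first score: {}", scores[0]);

    // An out-of-bounds access is the classic memory-safety error.
    // In C it could silently overwrite adjacent memory; here the
    // compiler rejects the constant index outright:
    // println!("oops: {}", scores[3]); // error: index out of bounds

    // An index computed at run time is bounds-checked on every access,
    // so the worst case is a refused access, not corrupted memory.
    let i = 3;
    match scores.get(i) {
        Some(v) => println!("score {i}: {v}"),
        None => println!("index {i} is out of bounds; access refused"),
    }
}
```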

These errors can be eliminated completely in many programming languages. It is also possible to enforce controls on the integrity and flow of information, and all of this holds even when programmers make coding mistakes.
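
As a minimal sketch of information-flow control (again my illustration, with hypothetical names such as `Tainted`), the Rust fragment below uses the type system to guarantee that untrusted input can reach the rest of the program only through a sanitizing step. A programmer who forgets the check gets a compile-time error rather than a vulnerability; research systems enforce similar policies even more pervasively, down in the hardware.

```rust
// Untrusted input is wrapped in a type whose contents the rest of
// the program cannot read directly; the only way out is sanitize().
mod taint {
    pub struct Tainted(String); // the inner field is private

    impl Tainted {
        pub fn from_input(raw: String) -> Tainted {
            Tainted(raw)
        }

        // The single release point: data flows on to trusted code
        // only if it passes this (deliberately strict) check.
        pub fn sanitize(self) -> Option<String> {
            if self.0.chars().all(|c| c.is_alphanumeric()) {
                Some(self.0)
            } else {
                None // refuse to release suspicious data
            }
        }
    }
}

fn main() {
    let user_input = taint::Tainted::from_input("rm -rf /".to_string());
    // Writing user_input.0 here would not compile: the field is
    // private, so the flow from untrusted input to a trusted sink
    // is checked by construction, even if the programmer forgets.
    match user_input.sanitize() {
        Some(clean) => println!("safe to use: {clean}"),
        None => println!("input rejected before reaching a trusted sink"),
    }
}
```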

In short, we have demonstrated that it is possible to design a modern computer system that attackers can't break into and that can protect our information. The critical question today is not whether we can design safer computer systems, but how to create the right incentives for systems like these to become the new mainstream of computing.

We don't need to live in cyber hell, but we do need to accelerate the transition to a new generation of computing systems that are inherently safe and resilient.

Commentary by Howard Shrobe, the director of cybersecurity and a principal research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Follow CSAIL on Twitter @MIT_CSAIL.

For more insight from CNBC contributors, follow @CNBCopinion on Twitter.