Error or terror: Controlling emerging technology

Innovation is vital to progress. Advances in science have propelled economic and societal development throughout history. Today's emerging technologies have the potential to increase global prosperity and tackle major global challenges.

However, innovation also creates risks. Understanding the hazards that can stem from new technologies is critical to avoiding potentially catastrophic consequences.

The recent wave of cyber-attacks exemplifies how new technologies can be exploited for malicious ends, creating new threats to global safety. Risk governance needs to keep pace with advances in scientific innovation.


Error or terror


Synthetic biology and artificial intelligence are two examples of the "next cyber": emerging technologies with the capacity to deliver enormous benefits, but which also present significant challenges to government, industry and society.

Take synthetic biology: creating new organisms from the building blocks of DNA offers the potential to fight infectious disease, treat neurological disorders, alleviate worries about food security and create biofuels.

The flipside is that the genetic manipulation of organisms could also cause significant harm, through error or terror. The accidental release of dangerous synthesized organisms, perhaps in the form of deadly viruses or plant mutations, could create massive damage.

Bio-terrorism threats could emerge from organized groups or lone individuals in the growing "biohacker" community who access synthetic biology inventions online.


The double-edged sword of Artificial Intelligence

Artificial intelligence (AI) is also a double-edged sword. Advances in AI can increase economic productivity, but might also create large-scale structural unemployment leading to serious social upheaval.

AI developments also raise new questions about accountability and liability: who is accountable for the decisions a self-driving car makes when it must weigh harm to pedestrians against harm to passengers? Some have even posited that the achievement of "singularity", the point at which machine intelligence surpasses human intelligence, presents an existential threat to humanity.

Risk governance for these and other emerging technologies is extremely challenging. Many more institutions, as well as communities, are engaged in research and development and the pace of innovation is accelerating.

National legal and regulatory frameworks are underdeveloped, so topics and techniques that are not explicitly specified escape scrutiny altogether.


Cash-strapped institutions that are meant to provide oversight are struggling to cope with advances that cross departmental jurisdictions, and they are often unable to assess the risks with the rigor they might wish.

Weaknesses also exist at an international level. For example, the Cartagena Protocol on Biosafety provides guidelines on the handling and transportation of living modified organisms, but not their development. The UN Convention on Biological Diversity addresses synthetic biology, but the resulting agreement is not legally binding.

A live concern is that large-scale international negotiations such as the Transatlantic Trade and Investment Partnership (TTIP) may inhibit new governance proposals and influence global norms in pursuit of open markets and more streamlined regulation.


Six steps to take

So what is the way forward? Enthusiasm for the potential benefits of emerging technologies requires a willingness to accept risk, but we must also manage that risk to avert avoidable disasters. Governance and control frameworks need to be reinvigorated, and accountability needs to be clearer.

I recommend six actions:

  1. We desperately need more discussion between stakeholders – such as innovators, industry, society, governments and regulators – on what our risk governance priorities should be.
  2. Alongside those talks, we should push for increased funding and priority for research related to risk governance.
  3. Opening up to allow deeper risk assessment – we need to find the right balance between confidentiality and transparency, but intellectual property rights should not be used to restrict access to information needed for effective risk regulation.
  4. Filling gaps in national regulation in the areas that present the greatest risk, and changing laws and regulations so that they can be more adaptable to new developments.
  5. Strengthening discussions within international governance bodies to reach beyond principles to more binding protocols.
  6. Promoting a culture of responsibility around innovation – to encourage more self-policing among innovators, and de-glamorize hackers.

Innovation must be encouraged, but in parallel we need to set a course for rigorous risk governance of emerging technologies. It is much better to confront difficult issues now than endure an incident with disastrous consequences later. As we know all too well, history is littered with risk mitigation measures that proved ineffective because they were put in place too late.


The Global Risks 2015 report is now live.

John Drzik is President of Global Risks and Specialties at Marsh, Marsh & McLennan Companies