Keeping up with the Machines… – Why collaboration is the key to ensuring the survival of humanity

Until recently, mentioning Artificial Intelligence (AI) to the average person would probably bring to mind the latest movie, TV series or novel in which, at some point in the future, machines integrate themselves into human society.

However, AI is actually progressing faster than we think. Some of you may remember IBM’s chess machine Deep Blue beating world champion Garry Kasparov in 1997 – not by learning, but by searching and evaluating millions of positions per second against hand-crafted rules. That was nearly two decades ago!

In March 2016, Google’s AlphaGo combined deep neural networks with machine learning techniques to win 4-1 against the legendary Lee Sedol, one of the world’s top Go players over the past decade. This is just the beginning! Facebook has built a way to let computers describe images to blind people; Microsoft has shown off a Skype system that can automatically translate from one language to another; and IBM has singled out AI as one of its greatest potential growth areas.

With all this excitement in the AI space, it’s easy to say ‘that’s so cool!’ or ‘we’re all doomed!’, depending on your perspective in life. But without one common view of AI’s purpose, impacts and consequences, we could be heading down a route where only a handful of companies have the power to control it. And even they might not be in alignment with each other. If we make AI decisions in silos, as individual companies and without full information, we could spend decades managing the consequences and untangling the mess.

This is where collaboration comes in.

Collaborative thinking is just taking off within individual organisations; in fact, some are getting really good at it. We’re also seeing cross-fertilisation of ideas across different industries, like car companies working with tech firms, and insurance firms working with health companies.

But now we need to think bigger. With something as fascinating as AI coming to life faster than we could have hoped, we need to pull out all the collaboration stops to ensure that we all agree on its future. It’s not for one company, or even a handful of companies, to dictate what the rules should be, or what AI should and shouldn’t be able to do. These are decisions we have to agree on together.

The UN, the Googles, the Microsofts, the IBMs and all the other organisations involved in the research, design and thinking around AI need to come together and agree on a way forward.

As a keen advocate and facilitator of collaboration, I would suggest starting with the following questions:

1. What is the world’s AI strategy to be? Where are we going with this? What do we want it to achieve?

2. What are the guiding principles of AI, or laws of conduct, that we should all agree to? It feels like the right time to revisit Asimov’s Three Laws of Robotics.

3. Who is going to govern AI going forward? It may well be a science right now, but as it seeps into people’s daily lives, how can we manage its risks and ensure we have the right governance model and regulation in place to protect people?

4. Where do we draw the line? As companies, we’re investing a lot of money into AI research and techniques, but when do we stop competing with each other and start creating a unified infrastructure that is right for everyone?

5. What do we need to consider ethically? What are the implications of introducing AI systems into the way we currently live? What are the impacts on society, our jobs and our wellbeing, and how should we manage the consequences?

Dear readers, I’d love to hear your thoughts on what else we should be thinking about as we move toward a unified society that lays solid foundations for AI to grow the right way.

Selda