Patrick Schwerdtfeger is a motivational speaker who can cover the possible future dangers of artificial intelligence (AI) at your next business event. Contact us to check availability. The full transcript of the above video is included below.
Full Video Transcript:
Hi and welcome to another edition of Strategic Business Insights. Today we’re going to talk about artificial intelligence and, in particular, the question: will artificial intelligence destroy us? That’s something that’s been debated more and more in recent months and years. People like Elon Musk, Stephen Hawking, Bill Gates and many others have weighed in, and many of them have said that artificial intelligence poses a very significant existential threat to the human race if it’s allowed to develop.
In order to understand that threat, we have to understand the difference between general purpose AI and what’s called applied or narrow AI. I have a whole video on narrow AI which you can watch separately, but very quickly: narrow AI is when you give the artificial intelligence a series of rules, a top-down approach where you’re handing it rules to understand a given situation. General purpose AI, by contrast, is a software infrastructure where you just give the machine the data and let it learn for itself and develop its own rules. And that’s where the danger really lies.
We already have examples of this in today’s world. There’s a program called AlphaGo, developed by DeepMind, a company based in the UK that Google purchased in 2014. AlphaGo learned to play the game of Go better than the world’s best players. Go has vastly more possible board positions than chess, so that was a significant achievement, and the program did it purely by analyzing the data and developing its own rules. The same approach has been applied in other settings, and the one thing these systems have in common is that once the machine finished its learning process, the developers of the software could no longer fully explain what it was doing.
Machines learn differently than human beings. They come to their own decisions and their own conclusions. And once we have a platform that works, and we do have prototypes that work, in theory whoever owns that technology could replicate it: a thousand different programs operating on different servers, each given a different data set, all learning countless different things at the same time. That’s the characteristic feature of our digital world: replication is easy. You can copy one program a thousand times or a million times and have all of this happening simultaneously. So we could hear some abstract story on social media or the evening news about some artificial intelligence somewhere, and then literally two months later find ourselves in a crisis where global infrastructure is being compromised. I believe that’s possible, and I believe it’s coming sooner than you think.
So the question is, can artificial intelligence destroy us? It can if there is a single monolithic AI, in other words, one artificial intelligence that completely dominates the space and has far more power than any other. That’s where I think the primary danger is. Imagine a technology that’s scaling by orders of magnitude, say it gets 10 times more powerful every five years. If you have one AI operating at 100,000 units, whatever those units might be, and a bunch of others operating at 1,000 units, then over those five years the ones at 1,000 go up to 10,000, a tenfold increase. They’ve gained 9,000 each. Meanwhile the big monolithic AI at 100,000 goes up to a million in the same five-year period, a gain of 900,000 rather than 9,000. Both are progressing at the same rate; the rate of exponential growth is identical. But because one is much further down the development path, that same growth keeps widening its lead. The other AIs could never catch up, and that’s precisely where the danger lies.
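The arithmetic in that example can be sketched in a few lines of Python. The unit figures (1,000 and 100,000) are the illustrative numbers from the talk, not real measurements of anything:

```python
def grow(start, periods, factor=10):
    """Capability after each five-year period, growing tenfold per period."""
    return [start * factor**p for p in range(periods + 1)]

small = grow(1_000, 2)    # the smaller AIs: [1000, 10000, 100000]
big = grow(100_000, 2)    # the monolithic AI: [100000, 1000000, 10000000]

# Same growth rate for both, yet the absolute gap widens each period.
gaps = [b - s for b, s in zip(big, small)]
print(gaps)  # [99000, 990000, 9900000]
```

After one period the small AI has gained 9,000 units while the big one has gained 900,000, which is the point of the passage: identical exponential rates still produce an ever-growing lead for whoever starts ahead.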
So the question becomes, where could a monolithic AI come from? That’s a very good question, and there are a few things to look at. Let’s look at two in particular. One is, where are the fastest computers in the world? Twice a year, a ranking of the world’s fastest supercomputers is published, and right now the two fastest are both in China. China is investing heavily in this technology, and those machines aren’t just a little faster than the competition; they’re something like eight or nine times faster than the fastest one in the United States. So where these supercomputers are located is a serious indication of where this technology might come from, and right now a number of them are in China and in many cases state-sponsored. That makes China a very significant candidate as the source of one of these potential monolithic AIs in the future.
The other question is, which company is investing the most in this type of technology? Here in the Western world, there’s one that’s head and shoulders above anyone else who’s even close, and that company, of course, is Google. Now, I’m a huge fan of Google. I use all of Google’s services and have for well over 10 years. But the bottom line is that they are investing heavily in this space and developing technology that is far ahead of anyone else we’re aware of, at least as far as public reporting goes.
So when we look at where these potential threats might come from, two good places to look are Google and China to see what they’re doing, to follow what’s going on in those places, and I would say that the way to mitigate the risk is to have many offsetting AIs. In other words, we can’t have just one single monolithic AI. I would say at a minimum we should have 10 that are comparable because they can offset each other and it would prevent one from running away from all the others in the space.
Again, we’re not at a stage where this threat is imminent, but given the way progress is happening, I believe it could all come very quickly. Maybe it comes five years from now, maybe it takes 20, but at some point we’re going to see very rapid exponential growth of an AI that in very short order has enormous power. And the Internet of Things, the sensors being installed all over the world, big data, everything being connected digitally through the Internet and other networks: all of that increases the potential power of an artificial intelligence, because there is a feasible future where it could tap into all of these devices and control things that 20 years ago weren’t even connected. The danger is in the future, not in the past. It might be here in five years; it might take 20, or 50. I believe it will happen sooner rather than later, and I believe that when we finally hear about it on the evening news, it will be only a matter of months before the story changes very quickly and potentially affects our livelihoods on this planet. So that’s the forecast, not necessarily a glowing one, but one we can prepare for by ensuring we develop multiple AIs at the same time, with no one company or one country dominating the space. It’s an exciting time, and it’s going to be an exciting future.
Thank you for watching this video. My name is Patrick, reminding you as always to think bigger about your business, think bigger about your life.
Patrick Schwerdtfeger is a keynote speaker who has spoken at business conferences in North America, South America, Europe, Africa, the Middle East and Asia.