Garry Kasparov on AI: ‘People always called me an optimist’

Garry Kasparov is a political activist who has written books and articles on artificial intelligence, cybersecurity and online privacy, but he’s best known as the former World Chess Champion who took on the IBM computer known as Deep Blue in 1996 and 1997.

I spoke to Kasparov before a speaking engagement at the Collision Conference last month, where he was appearing in his role as Avast Security Ambassador. Our discussion covered a lot of ground, from that ambassadorship to the role of AI in society. (Transcribed questions and answers have been edited for clarity.)

TechCrunch: How did you become a security ambassador for Avast?

Garry Kasparov: It started almost by accident. I was invited by one of my friends, who knew the previous Avast CEO (Vince Steckler), to be the guest speaker at the opening of their new headquarters in Prague. I met the team, and very quickly we recognized that we could work together very effectively, since Avast wanted an ambassador.

I thought that it would be a great combination because it’s about cybersecurity, and it’s also about customers, about individual rights, which is related to human rights, and it also had a little bit of a political element of course. But most importantly, it’s a combination of privacy and security and I felt that with my record of working for human rights, and also writing about individuals and privacy and also having some experience with computers, that it would be a good match.

Now it’s my fourth year and it seems that many of the things we have been discussing at conferences when I have spoken about the role of AI in our lives, and many of the discussions that we thought were theoretical, have become more practical.

What were those discussions like?

One of the favorite topics raised at these conferences is whether AI will be a helping hand or a threat. My view has been that it’s neither, because I have always said that AI is neither a magic wand nor a Terminator. It’s a tool. And it’s up to us to find the best way of using it and applying its enormous power to our good.

I always said that the problem with AI was not about killer robots, but it’s about evil humans behind it. And also I thought we were wrong about being concerned that it would develop too quickly. I think it has actually developed too slowly because I think it is inevitable. And I believe it’s better to move forward faster to create enough new jobs before the old jobs disappear on the chopping block of automation.

It seems now that a lot of people recognize that COVID-19 is forcing us into this new reality. And many of us wish that we could have driverless cars and robots doing all sorts of work. But unfortunately, we don’t have that, and it’s forced us into [thinking about] some form of UBI (universal basic income) without AI’s rise in productivity.

If we’re not where you think we should be, where do you think we are with AI?

People always called me an optimist, though I believe I have always been a realist on this front. There are some very high-profile doomsayers who are doing a disservice to humanity [where AI is concerned]. People are really terrified by the development of AI, and when you hear these [negative] speeches and warnings combined with all the Hollywood brainwashing production about AI, it creates the wrong image in the minds of the general public.

I always thought that we have to find the right algorithm for working with machines. For me it’s always been about human plus machine, not human versus machine. […] Now we have to find the best ways of combining human creativity and intuition with the enormous force of AI. It’s always about finding our role. So what is our role in this new relationship? AI is getting more and more powerful, but it will never be able to replace us.

How can AI help or harm when it comes to cybersecurity?

It’s a super powerful tool and it depends who is using it and for what purpose. It offers new opportunities for hackers, whether individual hackers, groups or larger state-run operations. That means that you have to build a very powerful defense on the other side of the fence, and that requires a very deep understanding of the whole process and how you protect the most vulnerable elements of your network.

How can AI be used for good or ill when it comes to equality and justice?

I think the whole discussion is based on the wrong premises, because [some] people think that AI can be more objective than we are [as humans], and that’s impossible because AI simply aggregates our data and analyzes it.

If we have a history of prejudices or inequality, whether it’s racial, sexual, religious or whatever, it’s being aggregated into AI. So AI is like a mirror, and if you don’t like what you see in the mirror, of course, you can start to look for some distortions, or you can look at yourself and start changing your behavior. But I think it sets the wrong expectations that AI can fix things in society, because at the end of the day, AI should analyze data.

Do you believe AI will ultimately replace humans?

On what basis do people think that AI is going to terrorize us? It’s an endless discussion about human-machine relations, but I see areas where we can communicate because AI cannot cover 100% of any problem because nothing is perfect in this universe. As long as there is a little room for corrections, that’s where humans belong.

[…]

We just have to understand how to adjust. I would say it’s a history of progress, and I think this is one of the challenges that we’re facing now, and that’s why Elon Musk and others had such [success] convincing the general public [that AI is evil]: because over decades we have been trying to make people work like computers — it’s like it was a compliment, ‘oh, he works like a computer.’ No, no, no; that’s exactly wrong. Now we have to go back and make sure that people will work like humans.