Surveys show that people are increasingly concerned about AI, yet the technology is expanding into almost every area of life and becoming an integral part of how we live.
So how can we bring greater transparency to how AI works and help calm people's fears? Peter Scott is the founder of the Next Wave Institute and the author of *Artificial Intelligence and You*.
Scott: Fear may be justified, but it is not productive. If a tiger is charging at you, you are justified in feeling afraid. But if that fear leaves you paralyzed instead of prompting you to do what you need to do, it doesn't help.
What I have found is that the fear people feel about the future of AI is proportional to how little agency they perceive themselves to have. They feel like passengers in a car driven by tech companies that don't know where they're going and are driving too fast down the highway, while everyone else sits terrified in the back seat. When I talk to technologists, I try to make them realize that their natural enthusiasm can provoke this reaction.
It looks like magic
Arthur C. Clarke's third law states that any sufficiently advanced technology is indistinguishable from magic. And to people who look at what artificial intelligence does today, with things like large language models or impressive image-generation tools, it does look like magic. But when we see AI doing one kind of magic, we assume it can do all kinds of magic, because how are we supposed to know where the limits lie?
So we jump straight to the idea that AI will become a threat, become conscious, or run amok. All of these things are relatively unlikely. What AI is actually doing right now is exposing the fact that many of the things we do cognitively as humans can be accomplished by narrow AI through its ability to find patterns.
And the more familiar you become with it, the more you understand that what's really going on is pattern recognition. There is an epistemological mismatch between what we are told to expect any day now and what is realistic. Most roboticists say that AI will not become generally intelligent until it understands the real world the way we do.
AI will magnify your mistakes
edge: One challenge is what you call the paradox of control – how do you control something if you don't really understand its limits or how it works?
Scott: This is being tackled by many prominent philosophers and computer scientists, who are researching how to control artificial intelligence once it becomes far more capable than it is now. But you can also look at it in terms of the challenges facing today's CEOs as they integrate AI into their organizations. Because AI essentially allows today's CEOs to make the same mistakes they're already making, only faster and at scale.
You must understand where your weaknesses and strengths lie as an organization, and where biases exist in your proprietary data. Those biases may currently play out only on a very small scale, but AI will amplify them. If you really understand these issues, you know what needs to be controlled.
edge: One specific area of pattern recognition is AI's ability to match vast amounts of data about individuals from video surveillance, facial recognition, social media, and so on. How should companies deal with these privacy issues?
Scott: It's a very urgent question, and a good place for people to focus their attention, because AI greatly magnifies the ethical dimension of business.
One example is what a company called Clearview AI did: it mined massive amounts of social media data so that it could identify people from photographs and tell you almost everything there is to know about them right away. That is useful for law enforcement, but the company drew a great deal of negative publicity for it.
But these kinds of technologies are now within reach of organizations with fewer resources and less money, as AI becomes commoditized. So there is a high potential for abuse here, and many regulators aren't really sure what to do about it. If you, as a government, are too restrictive, you risk stifling innovation within your country, and then your companies lose the AI race – and no one wants that.
Companies should think deeply about their ethics
When you focus your attention on this issue, it helps you understand who you are, as a person and as a business, in terms of ethics, because AI amplifies these questions. That thought process therefore has to happen at a much deeper level than it did before. That's why many companies are now springing up to provide ESG-style services to businesses with respect to their AI footprint.
If you take the view that AI will be the panacea for our problems, you run the risk of avoiding responsibility, and that will get you into trouble – only faster and on a larger scale.
edge: Is artificial intelligence an existential threat to humanity, as Stephen Hawking suggested, or are you optimistic that the right controls can be put in place?
Scott: It's an interesting question, and again it comes back to the question of agency. Artificial intelligence could be an existential threat. But what frustrates me about this kind of question is that people tend to respond in one of two ways.
If they hear that AI is potentially a huge threat, they panic, curl up in a ball, and do nothing. If they hear that AI is not likely to be a threat, they think, well then, I don't need to do anything. We've seen the effect of this in another existential crisis, climate change: people oscillate between "It's not really a problem" and "I can't do anything about it; we're doomed."
Neither response is productive. I like to travel at right angles to the optimism/pessimism axis, along the "What can I do?" direction. I want to tell people: here's what you can do to help ensure a better outcome. Because I can see incredibly good outcomes, and I can also see very bad ones.
Focus on people, not technology
edge: So what should executives focus on if they don't want to miss the AI revolution, but also don't want to get it wrong?
Scott: First, you need to be aware of what AI can and cannot do – to understand its essence. There are many ways to do this. One way is to look at the humor being generated around AI at the moment, much of it mocking AI, because it gives you a sense of its edges: what it can't do and how it fails.
Then it comes down to the people. The technology will take care of itself – there are countless people around the world working to improve it, and we really don't need to pay more attention to that. It will happen regardless.
Focus on conversations with your employees: Where do we want to be as a company in a future of smart machines? What gets us out of bed in the morning, and what do we want to do more of? If we could have that optimistic future, what would it look like?
Involve people throughout your organization in those conversations – I think that's a very productive direction to go in. It's something people could have done at any time, but there was no urgency. Now artificial intelligence is providing the impetus. Perhaps this fear of what might happen if we don't wake up, pay attention, and start asking these deeper questions about ourselves actually provides the incentive to act.