The dangers of rapidly advancing artificial intelligence (AI) systems are becoming a hot topic of conversation, with expert interviews racking up millions of views online.
AI pioneer and Nobel Prize winner Geoffrey Hinton - often referred to as the 'Godfather of AI' - has been leading warnings about the technology's potential to hurt, rather than help, humanity unless safeguards are put in place.
"We simply don’t know if we can make them not want to take over and not want to hurt us," Hinton told Diary of a CEO's Steven Bartlett in a YouTube interview earlier this year.
An expert in New Zealand says it's a valid concern but not the inevitable outcome of AI developments.
What troubles Hinton most is the race by companies and world superpowers to create the smartest AI system, possibly leading to an AI superintelligence which could eventually outpace the human race and evade our control.
When asked if we would be able to train an AI system that is smarter than us so it wouldn't harm humans, Hinton was pessimistic while still holding out some hope.
"I think it might be hopeless, but I also think we might be able to, and it would be sort of crazy if we went extinct because we didn’t try.”
He's not alone in his fears, with pioneer of artificial neural networks and deep learning Yoshua Bengio also voicing concerns during a TED Talk.
"These companies have a stated goal of creating machines that are smarter than us, that can replace human labour, yet we still don’t know how to make sure they don’t turn against us," Bengio said.
Difference between artificial intelligence systems
Professor of Artificial Intelligence at Victoria University of Wellington Ali Knott was among 20 experts who signed an open letter calling on the New Zealand Government to take a bipartisan approach to better regulating the new technology.

We asked him to explain the difference between artificial intelligence, artificial general intelligence and what is meant by super-intelligent systems.
"The older AI is an AI that helps you make decisions that predict some output from some input in some particular task. So it would be like predicting, for instance, how long someone's going to be in a hospital bed," Professor Knott said.
"The newer AI, you might call generative AI, and that's like what you have, for instance, in ChatGPT or something like that. And that's the tool that essentially has created this enormous new, revolutionary improvement in AI abilities, because you can ask it to do anything.
"As for superhuman intelligence, the idea is that perhaps if you keep training a system on more data and you keep training with cleverer methods, then something like ChatGPT can be extended, and so that it does even better, and maybe we'll end up with something which is not just as good as people, but better than people."
Knott said some AI is already outperforming people at certain tasks, such as doctors' exams, but stressed that experts still debate whether superintelligence will ever be achieved.
Dangers of AI superintelligence
While making the point that he doesn't want to "frighten people" - AI smarter than humans may still be many years away, or might never eventuate - Knott does have concerns about the technology.

"The scenario of superintelligence in particular, I think, is one we have to be wary about. If we feel as though we're moving towards AI systems that are able to act autonomously and are very much more powerful than humans, that scenario is one which should be avoided.
"So we're the dominant species on the planet because we're the smartest. You know, there are stronger species than us, but the cleverest ones are humans, and that's why we dominate the planet," Professor Knott said.
"I'm a bit worried that if you made a very, very smart, very autonomous AI machine, then that kind of thing might end up becoming the most powerful thing on the planet. But I don't want to frighten people with that idea, because it's just one outcome amongst many others, and it's somewhat distant, but it is something we need to take seriously because it's such a big deal."
Knott said he doesn't want the topic to take away from issues around regulating current AI systems already in use.
"It's very important that information doesn't just flow downwards from governments to people or from tech companies to people.
"We need to have a conversation where people are able to talk amongst themselves in their jobs, in their families, in their schools, about what they would like this technology to do."
Watch the full story on TVNZ+