Will we soon face AI-related risks? Maybe, but they are probably overestimated

Author

Georgy Kurakin (https://www.linkedin.com/in/georgy-kurakin)
Georgy Kurakin is a biologist and a Member of the Royal Society of Biology (MRSB). His primary areas of expertise are computational biology and data science in biology. He has peer-reviewed publications on biochemistry and the evolution of bacteria, and writes his own blog on the Nature Portfolio Microbiology Community platform. Georgy is also a science journalist contributing to various Russian science media outlets and a regular speaker at “popular science” events in Moscow.


On 22 March 2023, the Future of Life Institute published an open letter titled “Pause Giant AI Experiments: An Open Letter”. In it, eminent AI researchers and venture investors such as Yoshua Bengio, Elon Musk, and Steve Wozniak called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The motivation for the letter was the alleged risk of losing control over AI and of massive job losses as humans are replaced by AI. Geoffrey Hinton, who recently won the Nobel Prize for his neural network research, expresses similar concerns. In an interview after winning the prize, he said that AI could “get smarter than we are” and “take control”.

These concerns from scientists are in line with a broader trend: we are now witnessing the rise of “AI anxiety”, the widespread worry that AI could undermine our safety or our work. Does this worry have a real basis? It is difficult to answer definitively, but let’s try to address the question point by point.

Taking control, or “machine rebellion”

The fear of a “machine rebellion” had been shaping the public consciousness long before the emergence of modern AI. A full-fledged robot-takeover scenario can be found in R.U.R., the famous play by Karel Čapek. The same play introduced the word “robot” into our language, so the cultural concept of machine rebellion appears to be as old as (if not older than) the scientific concept of a robot. By the early 2000s, the idea of a machine rebellion breaking out as soon as we build self-learning robots had even reached children’s literature, for example in the novel Eager by Helen Fox. The wide adoption of neural networks in our lives has simply triggered these pre-existing fears.

Modern neural networks are unlikely to have self-awareness or any type of real cognition, yet when dealing with language models, especially chatbots, it can seem that they do. Microsoft’s AI chatbot confesses love for a user; its counterpart from Google seems outraged by a prompt and asks the user to die. Such behaviour might convey the impression that these AIs can experience emotions and produce speech voluntarily, but they do not really understand the meaning of a single word they say.

A graphical representation of an artificial neural network, via MDPI

The things we call neural networks are essentially an elaborate way of fitting mathematical models that connect input information to output information. As David Adger, Professor of Linguistics at Queen Mary University of London, explained to Serious Science, AI doesn’t even grasp a model of grammar the way we do in our minds. Instead, AI uses a pre-fitted statistical model describing the probability that any specific word B will follow word A.

For example, I didn’t write “a pre-fitted statistical modelled” because I know that an adjective usually requires a noun, not a verb in the past tense; and the verb ‘uses’ requires the same. AI doesn’t know that ‘model’ is a kind of entity described by a noun. It just “knows” that the chance of meeting the word ‘model’ after the words ‘uses’ and ‘statistical’ is much larger than the chance of meeting ‘modelled’. And if an AI wrote this text, it would use ‘model’ simply because it is more probable, not because it makes sense. Modern AIs do not consider any “sense”.
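To make that idea concrete, here is a minimal sketch of such a “language model”: nothing more than a table of next-word counts. The corpus, function names, and example below are invented for illustration; real systems are vastly larger and more sophisticated, but the principle of picking the statistically likely continuation, rather than a meaningful one, is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word in a toy corpus."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def most_probable_next(following: dict, word: str) -> str:
    """Pick the statistically likeliest continuation -- no grammar, no meaning involved."""
    candidates = following.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Invented toy corpus: "model" follows "statistical" more often than "modelled" does,
# so the sketch picks it -- purely because of frequency, not because it parses anything.
corpus = (
    "uses a pre-fitted statistical model describing probability "
    "a statistical model of language "
    "a statistical modelled text"
)
bigrams = train_bigrams(corpus)
print(most_probable_next(bigrams, "statistical"))  # prints: model
```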

This leads to the understanding that modern AIs are far from sentient. Serg Masís, an American data scientist and the author of the bestselling book Interpretable Machine Learning with Python, explains:

If AI is to supersede or even complement human intelligence in more than a narrow way, it has to improve at generalizing. And right now, AI is powered by deep learning, which is a very brute force resource-intensive approach that lacks the kind of guardrails natural intelligence has, such as a lattice of symbolic, physical, and causal reasoning

Artificial general intelligence may indeed be created one day, but this will probably require new technologies. The neural networks we have now are bio-inspired, but that doesn’t make them sentient. As I explained in my article for The Biochemist, neural networks are not the only type of bio-inspired AI: there are also, for example, artificial immune systems, which don’t sound nearly as scary. Yet neural networks are no more self-aware than artificial immune systems are.

Such AIs have no awareness, will, or emotions. Without motivation and awareness, any “rebellion” is impossible. “If the concern is about robots taking over, it’s not going to happen anytime soon,” Serg Masís agrees.

But what about job loss?

Job loss, or “they will replace us”

I work as a freelance medical translator. I often receive orders for post-editing machine translation. Even when the machine translation is loaded into a well-configured computer-assisted translation (CAT) system with all the translation databases attached, some tasks turn out to be extremely arduous. Sometimes the customer’s in-house neural network hallucinates a table full of numbers, and I have to correct each number manually. Or it translates genetic terms incorrectly, and the painstaking process of hunting down these mistakes awaits me.

Even when a good AI engine is used (DeepL, for example), it cannot cope with all the terminological nuances of a biological text, and editing a text translated with it is much more frustrating than editing one translated by a human colleague.

People in other creative jobs have similar impressions of AI. Diana Masalykina, a freelance illustrator and animator, is skeptical of the possibility that currently existing AIs could outcompete human artists:

There’s still a lot of drawing to do in neural illustrations. In general, neural networks are of little help in illustration so far; they could rather be used as a source of inspiration.

The accounts of people using neural networks as a co-pilot for creative jobs confirm the idea expressed by Filip Vester in his article for The Skeptic: creativity is the sphere where existing AI cannot replace humans. Serg Masís takes the same view: “Narrow AI (the AI that exists today) will slowly take jobs that are cognitively manual and repetitive (honestly jobs nobody wants) and enhance other jobs by automating manual and repetitive portions of those jobs”. So, AI is unlikely to deprive us of the dream of finding a creative job. 

It is still just a tool, not a full-fledged independent creator. But can it be misused in a dangerous way?

AI misuse, or “the root of evil”

For this final question, my answer is probably yes. Unfortunately.

At the beginning of 2023, a university student, Erika Schafrick, tearfully told her TikTok audience about a “zero” grade in Philosophy: “Like sorry I didn’t <freaking> cheat and use ChatGPT just like everyone else in the <freaking> course probably did who passed. I actually tried to do it myself and use my own ideas. But that’s what I get right. That’s what I <freaking> get.”

Irrespective of whether her explanation of the grade was true, her emotional speech highlighted the emerging problem of academic cheating with AI. It is probably widespread in universities, and – much worse! – such cases are increasingly being identified in scholarly publications.

While Erika Schafrick was sparking a discussion on TikTok, one of the world’s most highly cited chemists, Rafael Luque, was caught up in a scandal. His unusually high publishing activity had attracted the attention of the scientific community: on average, he published an article every 37 hours! Moreover, he once admitted that he used ChatGPT to “polish” his texts. The University of Córdoba fired him, citing his multiple affiliations as the formal cause. One month later, a Danish biologist, Henrik Enghof, found his name repeated throughout a scientific preprint, yet could not find any of his actual works cited: the citations had been hallucinated by a neural network and referred to papers that never existed.

Hugo Weaving plays the oppressive, indefatigable Agent Smith in the Matrix films, one of the most famous machine-apocalypse settings. In The Matrix Revolutions, he copies himself many times. Via wired.it

The academic community faces an unprecedented challenge to scientific integrity. Any text submitted to a journal could now turn out to have been generated by AI. We have some automated methods to identify such misuse: unusual word choices, or the traces of the probabilistic way of generating text that I mentioned above, can be signs that a text was produced by an AI. But such checks take additional time and give rise to a climate of mistrust in science.
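As an illustration of what such an automated check might look like, below is a minimal sketch that scores how “predictable” a text is to a language model; unusually low perplexity can be one weak hint of machine-generated prose. It assumes the open-source Hugging Face transformers library and the public GPT-2 model; real detection tools are more elaborate, and no single score is conclusive on its own.

```python
# Sketch: measure how "predictable" a text is to a public language model (GPT-2).
# Assumes the Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' of the model at each token; lower means more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

# Suspiciously low perplexity is only a weak hint, never proof, of AI-generated text.
print(perplexity("The results of the study are presented in Table 1."))
print(perplexity("Colourless green ideas sleep furiously in the reactor."))
```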

One more problem is the possibility of illegitimate use of AI to control and track people. The documentary book The Perfect Police State by Geoffrey Cain, using the example of China, shows how such control can provide a technological basis for mass repression in authoritarian states. These concerns are reflected in the EU Artificial Intelligence Act, which directly prohibits using AI for social scoring, real-time biometric identification, and assessing the risk of an individual committing a criminal offence. Unfortunately, it is only the first act of its kind, and it applies only in a territory where the risk of such misuse is comparatively low. But it offers a sound framework for regulating the use of AI, setting out the key kinds of use that need to be prohibited as potentially dangerous.

Does AI itself pose a threat to the values of scientific integrity and democracy? In my opinion, no: it is humans who pose such a threat. AI is just a tool. We still need to work out how to regulate its use to minimise the related risks. But we must remember that now, just as 100 or 200 years ago, all illegitimate actions are committed by humans, not by AI.

Just as hundreds of years ago, technology is not evil per se. Only humans do evil, and nothing about that has changed yet.

The Skeptic is made possible thanks to support from our readers. If you enjoyed this article, please consider taking out a voluntary monthly subscription on Patreon.
