AI poses no existential threat to humanity, surprising study finds

Large language models like ChatGPT cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity.


The research challenges the notion that LLMs could develop new, potentially dangerous skills on their own. (CREDIT: CC BY-SA 3.0)

Large language models (LLMs), like ChatGPT, may seem impressive with their ability to follow instructions and generate coherent language, but they don't pose an existential threat to humanity, according to recent research from the University of Bath and the Technical University of Darmstadt in Germany.

This research, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), challenges the notion that LLMs could develop new, potentially dangerous skills on their own. The study reveals that while these models excel in language proficiency and can follow instructions effectively, they cannot independently acquire new abilities without explicit direction. In essence, they remain predictable, controllable, and safe.

LLMs are being trained on increasingly large datasets, leading to improvements in their language generation and their ability to respond to explicit and detailed prompts. However, the researchers found no evidence that these models could develop complex reasoning skills, which would be necessary for them to act unpredictably or autonomously.

Performance of non-instruction-tuned GPT models in the zero-shot setting. Grey background indicates tasks that are not previously identified as emergent. Tasks that require the output of a number or a coded string are evaluated using exact match accuracy. (CREDIT: The University of Bath)

Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, pointed out, “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus.” He emphasized that the fear of LLMs becoming uncontrollable and dangerous is unfounded and distracts from more pressing concerns about the misuse of AI.

The research, led by Professor Iryna Gurevych at the Technical University of Darmstadt, involved a series of experiments designed to test the so-called "emergent abilities" of LLMs. These abilities refer to tasks that models perform successfully despite never having been explicitly trained for them.

For example, LLMs can answer questions about social situations without being specifically programmed to understand social dynamics. While it might appear that these models possess a deep understanding of social contexts, the research demonstrated that this capability actually stems from a well-known process called "in-context learning" (ICL). ICL allows models to complete tasks by leveraging a few examples provided to them.
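To make the idea of in-context learning concrete, here is a minimal sketch in Python of how a few-shot prompt might be assembled. It is illustrative only and not taken from the study; the `query_llm` function is a hypothetical placeholder for whatever client or API you actually use, and the politeness-labelling task and examples are invented for demonstration.

```python
# A minimal sketch of in-context learning (ICL): instead of retraining the model,
# the task is demonstrated with a few worked examples placed inside the prompt.
# query_llm() below is a hypothetical placeholder for any LLM client.

def build_few_shot_prompt(examples, new_input):
    """Assemble a prompt containing labelled examples followed by a new case."""
    lines = ["Decide whether each remark is POLITE or IMPOLITE.", ""]
    for text, label in examples:
        lines.append(f"Remark: {text}")
        lines.append(f"Answer: {label}")
        lines.append("")
    lines.append(f"Remark: {new_input}")
    lines.append("Answer:")
    return "\n".join(lines)

examples = [
    ("Would you mind passing the salt?", "POLITE"),
    ("Give me that right now.", "IMPOLITE"),
]

prompt = build_few_shot_prompt(examples, "Could you possibly help me later?")
# response = query_llm(prompt)  # hypothetical call to a model of your choice
print(prompt)
```

The model never sees new training data here; the two labelled remarks in the prompt are what let it complete the third, which is the mechanism the researchers identify behind apparently "emergent" skills.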

Thousands of experiments conducted by the research team revealed that the capabilities and limitations of LLMs could be attributed to their instruction-following ability, memory, and linguistic proficiency. These findings challenge the idea that larger and more complex models could develop unforeseen abilities, such as advanced reasoning or planning, which could make them hazardous.

Dr. Tayyar Madabushi further explained, “The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning. However, our study shows that the fear that a model will go away and do something completely unexpected, innovative, and potentially dangerous is not valid.”

Concerns about LLMs posing an existential threat are not limited to the general public; they have been echoed by some of the world’s leading AI researchers. However, according to Dr. Tayyar Madabushi, these concerns are misplaced. The tests conducted in this study clearly demonstrated the absence of emergent complex reasoning abilities in LLMs, reinforcing the idea that these models remain safe and controllable.

The substantial overlap of the tasks on which the two models perform above the random baseline is noteworthy and indicates that instruction-tuning allows for the effective access of in-context capabilities rather than leading to the emergence of functional linguistic abilities. (CREDIT: The University of Bath)

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Dr. Tayyar Madabushi stated. Instead, he suggested that users should focus on providing explicit instructions and examples when working with LLMs, especially for tasks that require complex reasoning.
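To illustrate that advice, here is a small, hypothetical contrast between a vague prompt and an explicitly decomposed one for a reasoning-style task. Neither prompt is from the study; the loan figures and steps are invented for demonstration, and `query_llm` is the same hypothetical placeholder as in the earlier sketch.

```python
# Illustrative only: contrasting a vague prompt with an explicit, step-by-step one,
# in the spirit of the researchers' advice to give clear instructions and examples.

vague_prompt = "Is this deal good?"

explicit_prompt = """You are evaluating a loan offer.
Step 1: State the principal, annual interest rate, and term from the text below.
Step 2: Compute the total interest paid over the full term.
Step 3: Compare that total to the borrower's stated budget and answer YES or NO.

Offer: $10,000 at 6% annual simple interest for 3 years. Budget for interest: $2,000.
"""

# With simple interest, total interest = principal * rate * years
# = 10,000 * 0.06 * 3 = $1,800, which is under the $2,000 budget.
# response = query_llm(explicit_prompt)  # hypothetical call, as in the earlier sketch
```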

Professor Gurevych echoed these sentiments, adding, "Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."

In conclusion, while LLMs are powerful tools that can generate sophisticated language and respond to prompts, they do not have the capacity to develop new, complex abilities on their own. The real challenge lies in ensuring that these models are used responsibly and ethically, rather than fearing their potential to become autonomous threats.

Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.




Joshua Shavit, Science and Good News Writer
Joshua Shavit is a bright and enthusiastic 18-year-old student with a passion for sharing positive stories that uplift and inspire. With a flair for writing and a deep appreciation for the beauty of human kindness, Joshua has embarked on a journey to spotlight the good news that happens around the world daily. His youthful perspective and genuine interest in spreading positivity make him a promising writer and co-founder at The Brighter Side of News.