Can Artificial Intelligence Threaten Humanity?

Editor’s note: The Astana Times continues a section featuring articles by our readers. As a platform that values diverse perspectives and meaningful conversations, we believe this section provides space for readers to share their thoughts and insights on topics that matter to them and the AT audience.

The question of whether artificial intelligence (AI) will become an existential threat to humanity remains unresolved. Despite the absence of a definitive answer, a growing number of experts and scientists are raising concerns about the potential risks associated with the development of intelligent machines.

Some believe that AI does not pose as significant an existential threat as it may initially appear. Large language models such as ChatGPT, for example, operate within the bounds of their training and the instructions they are given. They cannot develop new skills on their own, which keeps them controlled, predictable and safe. This perspective offers an optimistic outlook.

However, the rapid advancement of AI systems has revived long-standing concerns. If not properly managed, AI could evolve autonomously, surpass human intelligence, and potentially become hostile to humans. In the worst-case scenario, AI might perceive humanity as a harmful species and attempt to eliminate us. This is the more pessimistic view.

Tech leaders are also grappling with the question: “Is AI a real threat?” Their answers vary. Some argue that framing AI as a threat to humanity slows its development and adoption, delaying solutions to urgent problems that demand quick action. Meanwhile, as AI models grow more complex, they are beginning to take on challenges whose outcomes are difficult to predict.

There are concerns that large AI models could acquire new abilities like thinking and planning, and thereby pose a threat to humans. However, a thorough examination of existing AI models reveals that they can only function as useful assistants within specific domains. They cannot exceed the instructions provided by engineers or learn new skills independently without external input. In other words, they are not capable of independently discovering and mastering new knowledge beyond their narrow specialization.

On Sept. 5, three major Western jurisdictions, all leaders in the development of artificial intelligence technologies, signed the Council of Europe’s Framework Convention on Artificial Intelligence, an agreement to regulate AI systems. The signatories have committed to adhering to all of its requirements. Businesses also support the adoption of this agreement, as differing national laws on intellectual property present obstacles to the development of the technology.

The convention, signed by the United States, the European Union, and the United Kingdom, prioritizes human rights and democratic values in regulating AI systems in both the public and private sectors. Developed over two years by more than 50 countries, including Canada, Israel, Japan, and Australia, the agreement holds signatory countries accountable for any harmful or discriminatory outcomes resulting from AI systems and mandates that those systems respect equality and privacy rights.

This is the first legally binding agreement of its kind, bringing together various countries and demonstrating that the international community is preparing for the challenges posed by AI. The convention shows that the global community shares a common vision for the development of artificial intelligence technologies. Joint innovation requires respect for universal values and the promotion of human rights, democracy and the rule of law.

However, regulating AI is not always straightforward. The European Union’s AI Act, which came into force last month, has sparked considerable controversy in the tech community. Meta, for example, has declined to release its latest Llama models in the EU market.

Despite the challenges, AI’s potential for harm cannot be dismissed. Its ability to generate fake news, automate cyberattacks and even disrupt the job market presents real threats if misused. Autonomous weapons and AI-driven surveillance systems could lead to significant ethical concerns, while privacy violations become more likely as AI collects and analyzes vast amounts of personal data. If left unchecked, AI could also make decisions that are incomprehensible to humans, raising doubts about fairness and accountability.

These risks make it clear that stringent regulation is essential — not just to protect against the machines themselves, but to prevent misuse by the individuals who create and operate them. AI, in and of itself, is not the danger. The true threat lies in its potential to be used irresponsibly or maliciously by humans.

The ongoing debate about AI’s risks should not paralyze its development. Instead, it should push us toward smarter, more ethical innovation. Balancing caution with progress is crucial to ensure that AI reaches its full potential without putting humanity at risk. 

The author is Begim Kutym, a graduate student at the Nazarbayev University Graduate School of Public Policy. 

Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the position of The Astana Times. 

