ASTANA — The rapid advancement of artificial intelligence (AI) poses urgent legal and ethical questions for governments worldwide. In an interview with Kazinform, Igor Rogov, the chair of Kazakhstan’s human rights commission and the president of the Kazakhstan Criminological Association, discussed the need for legal safeguards and global cooperation to ensure human rights are protected in the digital age.

Global divide on AI regulation
Rogov noted that countries fall into two broad camps regarding the legal regulation of AI. One group, including China and the United States, argues against legal restrictions, citing concerns that regulation may hinder scientific progress and limit national competitiveness.
“An interesting shift has occurred in the United States: a few years ago, Elon Musk issued a statement, almost a warning, urging caution in adopting AI technologies, saying they posed a serious threat to humanity. Today, however, the same Elon Musk and the new administration led by President Donald Trump are calling for removing any restrictions on AI development, arguing that such limits hinder the country’s competitive edge,” said Rogov.

The second group, which primarily includes continental European countries and the United Kingdom, supports strong legal oversight of AI development and deployment. Rogov explained that these countries favor comprehensive legislative regulation to govern both the production and application of AI.
“AI is a reality. Naturally, questions arise not only about responsibility for potential harm caused by AI, but also about understanding the phenomenon from a legal standpoint. Ensuring human rights in the use of AI is especially important,” said Rogov.
“The legal framework for this process is currently almost nonexistent, which is why I believe legislative restrictions on the use of AI are absolutely necessary,” he added.
Emerging international frameworks
Several global institutions have taken steps to define ethical and legal norms for AI.
In March 2024, the European Parliament passed the Artificial Intelligence Act. This was followed in May by the Council of Europe’s adoption of the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. Later that year, in September, the United Nations adopted the Global Digital Compact, which outlines principles for digitalization, including the development and use of AI.
Rogov noted that while European Union (EU) laws do not apply to Kazakhstan, the Council of Europe's framework convention is open for signature by non-member states.
“As far as I know, the issue of Kazakhstan joining this convention has not yet been raised,” said Rogov.
Kazakhstan’s legislative initiatives
Kazakhstan is also moving forward with its national legal agenda, including the development of a Digital Code and a draft law on AI, which has been submitted to the parliament.
According to Rogov, some legal experts in Kazakhstan and abroad have questioned the need for two separate legal documents.
“From a legal standpoint, provisions on AI should be an integral part of the Digital Code. For now, it seems the decision was made to pass a law first, and the Digital Code will follow much later,” said Rogov.
He noted that the Human Rights Commission is currently holding a public discussion on the draft legislation through its Telegram channel.
“Some participants expressed skepticism about the necessity of the law, with a few even questioning the constitutionality of regulating AI through formal legislation. They argued that many aspects could be handled through sub-legislative mechanisms instead,” he added.
Three legal priorities for AI governance
Rogov emphasized three urgent areas that require legal regulation.
The first is liability. Who should be held responsible for harm caused by AI? Is it the operator who used the system, the company that developed it, or the programmers who wrote the code?
“In some cases abroad, the question has even been raised about granting AI legal personality, meaning that AI itself would bear responsibility. Well, I think that belongs more to the realm of science fiction. So far, the issue of AI legal personality has not been seriously considered,” said Rogov.
The second area involves intellectual property rights. Who owns the outputs generated by AI, such as research findings, artworks, or musical compositions? According to Rogov, this issue remains unresolved and is not sufficiently addressed in the current draft legislation.
The third priority is combating fraud and other criminal uses of AI. Rogov pointed out that criminals increasingly use AI to mimic voices and facial appearances, enabling new forms of fraud and deception.
“This area should be a top priority in the legislative process,” he said.
Rogov noted that while the proposed law may serve as a framework focused on technological issues, it could still include some mechanisms for protecting human rights and safety.
“I think our deputies showed great courage by not waiting for other countries to pass similar laws, but instead taking the initiative to develop their own. They deserve credit for that. They also didn’t keep the bill behind closed doors. It is publicly available on the parliament’s website,” said Rogov.
Learning from global best practices
To improve Kazakhstan’s legislative work, Rogov called for deeper engagement with both domestic legal experts and international specialists, including those from France, Germany, and Russia.
He proposed submitting the draft law to the Venice Commission of the Council of Europe for expert review.
“Kazakhstan has always had strong and productive relations with them, and they have provided detailed evaluations of our legislation in the past. I think it would be beneficial to receive methodological and theoretical support from this world-renowned expert organization,” he said.
He also pointed to Russia’s recent experience, where a dedicated law was passed to combat the criminal use of AI and digital technologies. Rogov noted that Kazakhstan’s criminologists and legal scholars should support the development of similar mechanisms to prevent crimes committed through digital platforms and AI tools.
AI in the judiciary: a cautious start
Kazakhstan has already started implementing AI in its judicial system. Rogov noted that the Supreme Court recently launched a mechanism that uses AI to draft decisions in civil cases. Parties involved in a dispute can preview the AI-generated decision before the court hearing.
Although judges remain solely responsible for final rulings, the availability of an AI-generated draft narrows the room for errors and corruption.
“Judges are not bound by the AI’s proposal (…) but since both parties are already familiar with the AI-generated draft, the judge will have to justify the decision,” said Rogov.
“This significantly narrows the space for corruption and reduces the likelihood of professional errors in judicial rulings. In the foreseeable future, it will be necessary to revise relevant institutions in criminal, civil, and several other branches of law,” he added.
Rights, innovation and sovereignty in focus
Rogov highlighted the complexity of writing effective AI legislation but noted that progress is possible.
“Any legal theorist can outline what needs to be done, but it is much harder to draft a norm that truly works in practice,” he said.
He emphasized that legislation should be designed to effectively fulfill its intended purpose – ensuring the safety of society, the state, and individuals involved in using AI. At the same time, it should also foster technological innovation.
“Striking a balance between these seemingly conflicting goals is, perhaps, where the true skill of legal experts and IT specialists lies,” Rogov said.
He also highlighted the need for harmonized digital legislation within the Eurasian Economic Union (EAEU), noting that while member states’ laws need to be synchronized, each country’s sovereignty must be respected.
“Kazakhstan is transitioning to a qualitatively new concept of digitalization in public administration, where the focus of state policy is on people, their needs and interests. This human-centered approach is intended to ensure that digitalization in our countries benefits citizens and does not lead to restrictions on their rights and freedoms,” said Rogov.