
Protecting Children in the Age of Artificial Intelligence

Provided by Nation.

The age of Artificial Intelligence (AI) is very much here. The term “generative AI” is now commonplace, with the public fascinated by AI’s capacity to produce content such as written and audio creations.

The world is moving towards Artificial General Intelligence (AGI), whereby machines will be able to match and even outdo human intelligence. AI’s relationship with children (those under 18 years) thus invites both reflection and precaution.

On the one hand, AI can bring great benefits, building on the strengths of existing digitalization. It can be a useful educational tool, for instance helping children who face learning difficulties or disabilities. It is a technology of connectivity, facilitating communication and the dissemination of information. It can act as an instrument of leisure, such as in inventing games. And it can promote human efficiency, for example by handling repetitive tasks in the medical field.

On the other hand, AI also brings risks. It can be a tool of exploitation, including the sexual abuse and exploitation of children. It can be a technology of alienation, used for bullying, hate speech, discrimination and violence. It lends itself to the distortion and manipulation of information: hallucinations, fakes and scams, misinformation and disinformation, propaganda and surveillance. It is an instrument of stress, replete with addiction and superficial self-validation. And it is emerging as an instrument of human subjection and dejection, especially when and where it controls human lives, perhaps absolutely.

How then is the world community to handle that ambivalence? The international guiding framework is the Convention on the Rights of the Child and its General Comment No. 25 on children’s rights in relation to the digital environment, which highlights child protection.

In reality, implementation is open to a variety of orientations, bearing in mind that both AI and related responses are in a state of flux. 

On one front, there is a two-track situation whereby a general approach contrasts with a more specific approach to handling the relationship between AI and children. The former is exemplified by laws and guidelines of a general nature, such as those protecting children’s privacy and safety and those highlighting AI transparency, especially to help explain the pros and cons of AI to children.

The more specific approach targets particular sectors for action. Twenty-five years ago, the Children’s Online Privacy Protection Act in the US offered a preview. It imposed a minimum-age condition: children under 13 cannot themselves consent to the collection and disclosure of their data, which instead requires parental consent. In 2025, California opted for an additional, specific intervention. Its recent law on patient communications stipulates that healthcare facilities using AI must adopt clear disclaimers wherever there is AI-generated content, a kind of “watermarking” or labelling of AI-generated content. The possibility of contacting human healthcare providers must also remain available.

On another front, there is a contrast between ethical guidelines of a persuasive nature concerning AI utilization and the prescriptive approach of binding regulations, with consequential accountability in the case of violations. The ethical approach has emerged from some international agencies and highlights basic principles such as “Do No Harm”, safety and security, privacy and data protection, responsibility and accountability, and the transparency and explainability of AI’s functions.

The prime example of the prescriptive approach is the European Union (EU)’s AI Act, in force in 2025. It contains a list of prohibited practices. Social scoring, whereby data might be used to discriminate against people, is forbidden. Subliminal targeting of children’s emotions, a kind of manipulation, is proscribed. The collection of real-time biometric data for surveillance purposes is not allowed, although there may be some leeway regarding national security. For lesser risks, the business sector is called upon to adopt Codes of Conduct, a kind of self-regulation for policing itself, subject to linking up with the EU supervisory system as a whole. Violations can lead to massive fines.

Globally, certain realities are inevitable. Where content is illegal, such as the sexual abuse and sexual exploitation of children, for instance child pornography, national laws already prohibit such practices and automatically apply to AI-related actions. However, positions may differ on whether the children appearing in AI-generated content must be real or may be merely digitally generated. The issue is not settled internationally, although child protection groups prefer to prohibit all such images of children, without having to prove that real children are involved.

From another dimension, there is the issue of how to deal with harmful content that is not illegal. For example, the mere fact that X hates Y is not necessarily illegal under international or national law. Other actions may thus be required. At present, the digital industry, especially its developers and deployers, has already adopted some self-regulatory tools to moderate content and take down harmful content, at times through filtering. For instance, many platforms have codes against homophobic messages and delete them, even where local law does not prohibit such content. This may also cover various forms of bullying and the grooming of children, which might otherwise lead to discrimination or violence.

The key lies with digital and AI literacy, so that the public, especially children, parents and teachers, can enjoy the benefits of technology safely, securely, “smartly” and sustainably. The AI industry can help by ensuring that its members are AI literate in the sense of assessing risks as part of due diligence and mitigating them, with guardrails that balance freedom of expression with the protection of children’s rights. In essence, there can be no substitute for an educated and literate public with a discerning, critically analytical mind and the cognitive and affective means to protect itself from transgressions.

Urgently, families need options for a “digital detox”. This would enable parents to work with children to safeguard some technology-free spaces at home. There need to be periods of human interaction without technology, together with shared leisure time as humans. Humane activities, such as pro bono help for disadvantaged groups, need to be nurtured to generate the warmth of empathy which no technology can replace.

Hence, the community needs “Top Tips for Digital Detox”, or “TT-4-DD”, now!

Vitit Muntarbhorn

Vitit Muntarbhorn is a Professor Emeritus at Chulalongkorn University. He was formerly UN Special Rapporteur on the Sale of Children and is a member of the Advisory Group of UNICEF Thailand.
