As the United States deepens its investments in artificial intelligence (AI) partnerships abroad, it is moving fast — signing deals, building labs, and exporting tools. Recently, President Donald Trump announced sweeping AI collaborations with Gulf countries like Saudi Arabia and the United Arab Emirates. These agreements, worth billions, are being hailed as historic moments for digital diplomacy and technological leadership.
But amid the headlines and handshakes, I keep asking the same question: where is child protection in all of this?
As someone who has worked across the Middle East and North Africa on children’s rights and protection, I have seen how fast-moving technologies can amplify harm when ethical safeguards are missing. In countries where digital regulation is still evolving and where vulnerable communities already fall through the cracks, introducing powerful AI tools without clear protections is not innovation; it is a risk.
And yet, these deals are being signed without a single line publicly dedicated to the safety of children, the protection of personal data, or the prevention of exploitation.
The MENA region is home to more than 100 million children, many of whom live in contexts shaped by displacement, economic hardship, or legal invisibility. The digital world, once imagined as a safe space for learning and connection, has also become a space where grooming, abuse, and trafficking happen at alarming speed.
A 2020 INTERPOL report warned that online child sexual exploitation surged during COVID-19. Isolation, lack of oversight, and increased internet use created the perfect conditions for harm, and we still have not caught up.
Now imagine adding AI to this landscape: facial recognition, predictive policing, and machine-learning systems deployed in countries that are still building their legal frameworks. Who decides how these systems are used? Who is responsible if they misidentify, exclude, or endanger a child?
This isn’t a critique of progress. The Gulf region is making major investments in tech, education, and infrastructure, and that can bring real opportunities. But when the U.S. exports technology without including rights-based standards, it is exporting risk.
In all the official announcements, I’ve yet to see mention of child rights impact assessments, ethical use policies, safeguarding conditions, or civil society consultations. These are not extras. These are not nice-to-haves. They are essentials.
The U.S. cannot claim global leadership in AI while staying silent on the ethical standards that must accompany it. If it can include economic terms in these deals, it can also include human rights terms. If it can prioritize national security, it can also prioritize child safety.
Before the next deal is signed, child protection needs to be on the table, not as an afterthought, but as a requirement. We need binding commitments to data privacy and safety, independent oversight mechanisms, and a voice for child rights organizations in the negotiation process — because children will live with the consequences of these technologies even though they were never consulted.
We cannot allow powerful tools to be exchanged between governments without also exchanging responsibility. AI may be the future, but if it doesn’t protect children, it’s a future built on omission.
And we’ve already seen what that costs.
Hassan Tabikh is a human rights practitioner from Baalbek, Lebanon, with over a decade of experience in human rights, social justice, and child protection across the MENA region. He is the MENA Regional Coordinator at ECPAT International and a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.