Parents Must Quit Infighting to Keep Kids Safe Online

Opinion

As Australia bans social media for kids under 16, U.S. parents face a harder truth: online safety isn’t an individual choice; it’s a collective responsibility.

Getty Images/Keiko Iwabuchi

Last week, Australia’s social media ban for children under age 16 officially took effect. It remains to be seen how the law will shape families’ behavior, but it is at least a stand against the tech takeover of childhood. Here in the U.S., we’re in a different boat: a consensus on what’s best for kids feels much harder to come by, among both lawmakers and parents.

To make real progress on this issue, we must resist the fallacy of parental individualism: the idea that what you choose for your own child is up to you alone, that allowing smartphones, or certain apps, or social media is a purely personal or family decision. It’s not. The choice you make for your family and your kids affects them and their friends, their friends’ siblings, their classmates, and so on. If there is no general consensus around parenting decisions when it comes to tech, all kids are affected.


According to More in Common, which recently surveyed parents in the U.S., U.K., France and Poland on their thoughts and experiences around online safety, 65% of U.S. parents are “very” concerned about their kids’ safety online, and another 28% are “somewhat” concerned (leaving 7% of parents not that concerned at all – a troubling number, even if it seems low).

And according to the researchers, deeper focus group sessions (which you can dive into by downloading the report here) showed many parents feel that other parents are undermining their ability to keep their children safe. Specifically, the researchers note that, “Differences in approaches between parents are seen as a source of tension, and a way for children to bypass the rules in their own household. This can lead to parents feeling powerless.”

Perhaps parents feel powerless because they are so often alone in this fight: the burden of keeping kids safe online is placed squarely on parents rather than on the technology companies where it belongs. This is not by accident, or by default, but the result of the democratic process failing to protect the most vulnerable among us – our children – from Big Tech. When there is no corporate accountability, the result is infighting and civil society’s inability to form a strong, united front.

We are in a divisive time in this country, politically, but we must not be divisive on this issue, and changing community norms is one of the best defenses we have right now against the risks our kids are facing. We know child sexual abuse material can be found on every platform. We know social media is problematic for a multitude of reasons for kids under 16 (and even older). The online realm, especially now that AI has exploded with essentially no guardrails and major support from the current administration, is only getting crazier and more dangerous. AI toys are the newest threat and should make every parent lose sleep at night.

Certainly not all kids are the same – what one child can handle online might be very different from what the next can, and parents are the best judges of that. But let’s be real: not all parents are diligent or, as the More in Common research shows, all that concerned about the mental and physical risks young people face when they go online. We can be pro-tech and also pro-safety, but we have to be able to talk to each other and come to some agreement about what we, as a country, will allow for our children. And we won’t reach a consensus without first agreeing that this is a collective problem with collective consequences.

There is legislation that would help fight this problem and hold tech companies responsible for what happens on their platforms, and we must support this sort of policy action to address the root cause. But as parents, the greatest power we have is our ability to come together. We must not let parental individualism get in the way of protecting our kids online.


Erin Nicholson is the strategic communications adviser for ChildFund International, a global nonprofit dedicated to protecting children online and offline. ChildFund launched the #TakeItDown campaign in 2023 to combat online child sexual abuse material. She is currently a Public Voices Fellow on Prevention of Child Sexual Abuse with The OpEd Project.
