The rise and fall of fact-based journalism

Road sign with options for Fact and Opinion. (Maria Vonotna/Getty Images)

Radwell is the author of “American Schism: How the Two Enlightenments Hold the Secret to Healing Our Nation” and serves on the Business Council at Business for America. This is the fifth entry in a 10-part series on the American schism in 2024.

The late 19th century in our country marked the height of yellow journalism, a style of newspaper reporting that prioritized sensationalism over facts. Presenting little in the way of legitimately well-researched news, papers of that era relied on eye-catching headlines to drive sales. Stories of the day were rife with scandal-mongering, crime, sex and violence. Even “legitimate” news stories were full of outrageous exaggerations. Historians argue to this day about the role of yellow journalism in pushing the United States into the Spanish-American War.

In the early part of the 20th century, however, the tide seemed to shift. Some newspaper owners, responding to consumer thirst for more dependable information, realized that accurate investigative reporting could stimulate good business. Moreover, some, like Joseph Pulitzer, believed newspapers were public institutions with a duty to improve society. After purchasing the New York World in 1883, Pulitzer began replacing sensational stories with genuine journalistic coverage. By the time of his death in 1911, the World was a widely respected publication.


In the first decade of the new century, newly formed press associations began championing higher education for journalists. In 1908, the same year as the founding of the National Press Club, the University of Missouri opened the first school dedicated to journalism, followed by Columbia University in 1912 (funded by a $2 million grant from Pulitzer). As other schools added journalism to their curricula, the new field of study came to be defined as a process of collecting, processing and disseminating information in the public interest.

Now sanctioned by universities, the journalism industry could teach acceptable behavior and establish credentials, and also promulgate high ethical norms such as accuracy, balance, impartiality and truthfulness, independent of any commercial or political interests. It was nothing less than the birth of a profession.

Over the next decade, the field further distinguished itself with a robust sense of social responsibility towards the general public, good governance and democracy. At its foundation were two principal underpinnings: the first was a relentless focus on the pursuit of truth, placed at the center of the value hierarchy; the second was the revolutionary idea of erecting a “Chinese wall” between the owner and the editor of a newspaper. News would no longer be shaped to suit the partisan interests of press owners, but rather would be determined by trained nonpartisan professionals, using judgment and skills honed in journalism schools.

So what happened that led us from the days of Walter Cronkite to the present era, in which the autonomy of professional journalism seems to be vanishing faster than the Amazon rainforest? Three developments of recent decades proved pivotal:

  • The regulatory framework was rescinded. In 1987, President Ronald Reagan’s FCC repealed the “fairness doctrine,” which required the holders of broadcast licenses to present controversial issues of public importance in a manner that fairly reflected differing viewpoints (some argued that as cable news spread, the doctrine seemed to be rendered obsolete).
  • News got replaced by (sensational) entertainment. In the face of the rising costs of accurate investigative news gathering, Roger Ailes pioneered a new business model at the Fox News Channel. This “winning” model, in which costly journalism is replaced by inexpensive pundit blowhards, caught on and became highly attractive to all media owners. The alternative path for many other television and radio stations was the outright elimination of news.
  • The great training camp for fresh “up and coming” journalists withered away. The growth of the internet proved to be a death sentence for the money-maker in the print business — “the classifieds,” which kept afloat thousands of local newspapers across the United States. The unintended consequence: The vital training ground where young journalists newly out of school could learn the profession receded as town and regional newspapers closed. In fact, the AP reports that the nation has lost two-thirds of its newspaper journalists in the last 20 years.

Today what is left is a media landscape in which the search for eyeballs (or clicks) is the raison d’être, routinely trumping accuracy, data or any form of verified information. The subscription model has become scarce, and in the maelstrom of advertising that remains, most Americans have given up the pursuit of truth. The alternative is to create and maintain your own unsullied version of the truth in your chosen bubble.


Read More

An illustration of a block with the word “AI” on it, surrounded by slightly smaller caution signs. (Getty Images, J Studios)

The future of AI should be measured by its impact on ordinary Americans—not just tech executives and investors. Exploring AI inequality, labor concerns, and responsible innovation.

The Kayla Test: Exploring How AI Impacts Everyday Americans

We’re failing the Kayla Test and running out of time to pass it. Whether AI goes “well” for the country is not a question anyone in SF or DC can answer. To assess whether AI is truly advancing the interests of Americans, AI stakeholders must engage with more than power users, tokenmaxxers, and Fortune 500 CEOs. A better evaluation is to talk to folks like Kayla, my Lyft driver in Morgantown, WV, and find out what they think about AI. It's a test I stumbled upon while traveling from an AI event at the West Virginia University College of Law to one at Stanford Law.

Kayla asked me what I do for a living. I told her that I’m a law professor focused on AI policy. Those were the last words I said for the remainder of the ride to the airport.

Close-up of a person on their phone at night. (Getty Images, Westend61)

From “Patriot Games” to The Hunger Games: how spectacle, social media, and political culture risk normalizing violence and eroding empathy.

The Capitol Is Counting on Us to Laugh

When the Trump administration announced the Patriot Games, many people laughed. Selecting two children per state for a nationally televised sports competition looked too much like Suzanne Collins’ Hunger Games to take seriously. But that instinct, to laugh rather than look closer, is one the Capitol is counting on. It has always been easier to normalize violence when it arrives dressed as entertainment or patriotism.

Here’s what I mean: The Hunger Games starts with the reaping, the moment when a Capitol official selects two children, one boy and one girl, to fight to the death against tributes from every other district. The games were created as an annual reminder of a failed rebellion, to remind the districts that dissent has consequences. At first, many Capitol residents saw the games as a just punishment. But sentiments shifted as the spectacle grew—when citizens could bet on winners, when a death march transformed into a beauty pageant, when murder became a pathway to celebrity.

Technology and Presidential Election. (Getty Images, Douglas Rissing)

Anthropic’s Mythos AI raises alarms about surveillance, deepfakes, and democracy. Why urgent AI regulation is needed as U.S. policy struggles to keep pace.

How the Latest in AI Threatens Democracy

On April 24, America got a wake-up call from Anthropic, one of the nation’s leading artificial intelligence companies. It announced a new AI tool, called Mythos, that can identify flaws in computer networks and software systems that, as Politico puts it, “Even the brightest human minds have been unable to identify.”

A machine smarter than the “brightest human minds” sounds like a line from a dystopian science fiction movie. And if that weren’t scary enough, we now have a government populated by people who seem oblivious to the risks AI poses to democracy and humanity itself.


Who’s Responsible When AI Causes Harm?: Unpacking the Federal AI Liability Framework Debate

This nonpartisan policy brief, written by an ACE fellow, is republished by The Fulcrum as part of our partnership with the Alliance for Civic Engagement and our NextGen initiative — elevating student voices, strengthening civic education, and helping readers better understand democracy and public policy.

Key takeaways

  • The U.S. has no national AI liability law. Instead, a patchwork of state laws has emerged, which means an individual’s legal protections depend on where they reside.
  • It’s often unclear who is legally responsible when AI causes harm. This gap leaves many people with no clear path to seek help.
  • In March 2026, the White House and Congress introduced major proposals to establish a federal standard, but there is significant disagreement about whether that standard should prioritize protecting innovation or protecting people harmed by AI systems.

Background: A Patchwork of State Laws

Without a national AI law, states have been filling in the gaps on their own. The result is an uneven landscape where a person’s legal protections depend entirely on which state they live in.
