Podcast: What is thermostatic politics?


In this episode of the Politics in Question podcast, the team discusses thermostatic politics to explain what it means and how it works.

U.S. President Donald Trump at the White House on February 20, 2026, in Washington, D.C.
Secrecy is like a shroud of fog. By limiting what people can see and check for themselves, the public gets either a glimpse or nothing at all, depending on what gatekeepers decide to share. And just as fog comes in layers, so does withholding: one missing document, one delayed detail, one “not available” that becomes routine.
Most adults understand there are things that shouldn’t be shown. Lawyers can’t reveal case details to people who aren’t involved. Police don’t release information during an active investigation. Doctors shouldn’t discuss your medical history at home. The reason is simple: actual harm can follow when sensitive information is revealed too early or to those who shouldn’t be told.
But another kind of secrecy has been developing over time. It’s less about protection and more about insulation. It’s the kind that says, “You don’t get to ever know.” This veil isn’t meant to protect a person or preserve an investigation. It protects the system from questions.
And when it becomes routine, it’s not just transparency that gets limited. Restricting what the public can verify is how legitimacy begins to fray.
Silence isn’t being used as an occasional tactic anymore. Concealment has become the new normal: the structure of how systems work.
When secrecy becomes a framework, legitimacy starts to wobble.
Legitimacy isn’t a mood. It’s a public agreement that power is being used in fair, limited, accountable ways: not perfectly, but enough that consent isn’t something authorities simply demand.
Consent needs visibility. Not full transparency. No sane person is asking for live-streamed investigations or open-source intelligence files. But we do need enough clarity to verify claims, understand guardrails, and recognize consequences when boundaries are crossed.
When documentation is withheld, trust doesn’t vanish overnight. It erodes. Over time, people stop treating official claims as “true by default” when the supporting facts can’t be checked, and they drift toward whichever narrative sounds most credible, receipts or not.
Sometimes obscurity shows up in quiet ways: disclosures that arrive too late to matter; carefully framed details that feel vacuum-sealed; paragraphs blacked out. Other times it’s more direct: sealed investigations, buried records, decisions made behind closed doors—followed by the public being told that transparency is dangerous.
Not all secrecy is bad. Some information must be kept secret: cases can be ruined by publicity; witnesses can be intimidated; a vulnerable person can be dragged through a digital town square as punishment. But there’s a point when “necessary confidentiality” becomes power without visibility. That’s when legitimacy starts rotting — not with a dramatic collapse, but with a slow administrative shrug.
The public doesn’t need to know everything. We need enough to answer one basic question:
Is power being used with limits, or for convenience?
That question stops being abstract when a state announces an investigation tied to something as infamous as Jeffrey Epstein’s Zorro Ranch in New Mexico. Investigation doesn’t automatically mean conviction. Allegations aren’t proof. But when officials announce a serious inquiry connected to a high-profile network of harm, headlines aren’t enough. People want to know what is demonstrable, what can be documented, what will be sealed (and why), and how the inquiry is structured.
This is where obscurity shows up. High-stakes cases often follow the same pattern: statements that sound informative but deliver less; more hedging, highlights, and half-releases. Even when officials insist they’re doing the work well, the structure can still feel engineered to limit scrutiny, not because everything is unfair, but because scrutiny is inconvenient.
This is the legitimacy gap: systems request belief while delaying the information that makes belief reasonable.
Silence doesn’t have to be deliberate to disrupt. Too many people can benefit (quietly, for a wide range of reasons) from withholding. It limits responsibility. It controls accountability. It leaves the public in a fog of jumbled discourse. The system it was meant to safeguard begins to disintegrate, gradually enough that what once would have triggered an alarm becomes normal.
It starts as a choice: a move to keep something out of the public view. What starts as a decision becomes a rule. Habits are normalized. Then they turn into a process.
One method is the hardening of “need to know.” Intended for sensitive information, it gets stretched beyond recognition — because it becomes useful for hiding what’s embarrassing, politically costly, or simply inconvenient. In this climate, the definition of “need” narrows, the circle of access tightens, and suddenly “need to know” isn’t just about confidentiality.
It’s about who gets to hold power and who gets protected while rank is enforced.
And plausible deniability doesn’t require lying. By learning where not to seek facts (what questions not to ask, what records not to request, what concerns not to connect), people learn how not to know. Cultural signaling does the rest. So does the quiet math of saving your job.
The process often starts with a legitimate reason: safety, privacy, diplomacy, or an active investigation. Over time, withholding becomes a way to avoid oversight, conflict, or dissent. New people arrive and are told, “This is just how things are done.” No longer a tactic, it becomes cultural inheritance.
Opacity rarely arrives all at once. Each step looks reasonable by itself. Add one more approval. Include one more restricted folder. Tag on another redaction. Slap on more intermediaries. Tack on legal reviews. Eventually, a black box appears.
People adapt to the “new normal.” Insinuation replaces explanation; sound bites replace evidence; proof becomes optional. When evidence isn’t forthcoming, believable narratives get accepted. When proof is consistently inaccessible, suspicion becomes the default operating system.
“Trust the process” can be sincere. More often than not, it’s used as a substitute for explanation.
People don’t flip trust on and off like a light switch because they were asked nicely. Acceptance is a conclusion reached after repeatedly witnessing: consistent application of the rules, consequences when rules are broken, justified secrecy, and credible oversight.
A system that conducts itself as if it’s being scrutinized earns trust. A legitimate system doesn’t treat review as a threat. It may dread misunderstanding, but it recognizes that accountability is part of stewardship, not an enemy of it.
The last decade has produced a strange shift: institutions increasingly behave as though citizens’ questions aren’t “friendly.” Sometimes there’s a cause. People can be righteously angry. Values can change without explanation. People can feel lied to. But the answer to anger isn’t doubling down on hidden facts. It’s improved oversight for what must remain confidential, alongside honest, verifiable transparency about what can be seen.
So, the question is no longer “secrecy or transparency.” It’s this:
Where is the line—and who enforces it?
People can tolerate confidentiality when given clear standards, narrow definitions, and the ability to review records later. People can accept “not yet.” They cannot live with “never” dressed up as “trust us.”
As long as secrecy remains in place, accountability must be structurally built in. It must be a design choice, not a public relations campaign.
There should be a framework for what is held back, why, for how long, and what triggers disclosure later. Predictable rules matter more than constant detail. There should be a default timeline, with exceptions that are convincing.
Too many systems don’t give ordinary people a credible path to contest secrecy decisions. There should be an avenue for challenge that doesn’t require bleeding retirement funds into lawsuits.
This doesn’t require full disclosure. It requires a reviewable reason, a time limit, and independent appeal options when “confidential” becomes a familiar tune. Agencies can be protected with an open framework. When review is possible (even later), people are less likely to assume the worst now. Eventual disclosure robs conspiracy thinking of oxygen.
At present, the vacuum is filling itself.
The deeper issue is volatility: the public can’t discern fact from fiction and becomes susceptible to the most emotionally charged story. Distrust turns into currency. Whether deserved or not, every agency gets treated like a defendant before trial.
Volatility compounds: legitimate rules aren’t followed strictly; eagerness to bypass systems increases; conversations unmoor from facts; division accelerates; and every action is presumed malevolent, which makes governance harder.
Believing they’re protecting themselves, institutions respond by tightening secrecy even more. And the loop goes like this: more secrecy, less trust. Less trust, more silence. That’s how secrecy becomes infrastructure, all while everyone else tilts at windmills.
This isn’t a philosophical complaint about “transparency.” It’s a warning about system stability.
It’s also a moment where information is plentiful, but review is limited, and artificial intelligence (AI) can generate logical nonsense at scale. In that climate, institutions can’t afford to treat legitimacy as an emotional public relations problem. Legitimacy is operational: it’s the difference between challenging opinions being accepted with clean boundaries—and every decision being treated as a power grab because the lines aren’t visible.
Otherwise, you aren’t asking for trust. You’re demanding it. And people don’t follow demands for very long. They either conform out of fear or rebel out of resentment. Neither is stable.
At some point, fog stops being weather.
It becomes architecture.
And whether we like it or not, fog shapes behavior.
Linda Hansen is a writer and the founder of Bridging the Aisle, a nonpartisan platform fostering honest, respectful dialogue across divides and renewed trust in democracy.

U.S. President Donald Trump, with Vice President JD Vance and Speaker of the House Mike Johnson looking on, delivers his State of the Union address during a Joint Session of Congress at the U.S. Capitol on Feb. 24, 2026, in Washington, D.C. Trump delivered his address days after the Supreme Court struck down the administration's tariff strategy.
State of the Union speeches haven’t mattered in a while. Even in their heyday, they drew only 60-plus million viewers, and that audience has been declining substantially for decades. They rarely produce a post-speech bump for any president; according to Gallup polling data since 1978, the average change in a president’s approval rating has been less than one percentage point in either direction.
To be sure, this is good news for President Trump. He should hope and pray this State of the Union was lightly watched.
His speech was a chaotic cacophony of lies, bigotry, gaslighting, and willful ignorance, painting the portrait of a man who has lost the country, and he knows it.
If Trump is confident about the state of the union, the health of the Republican Party, and keeping the majority come November, his unhinged and delusional address belied that confidence. Instead, his cartoonish overcompensating for a disastrous first year only drove home the point that his administration is spiraling out of control and has no plans to change course.
Sounding very much like Joe Biden and Kamala Harris — who, I’ll remind you, lost all seven swing states in 2024 — Trump bragged about a country and economy that most Americans don’t recognize.
“Our nation is back: bigger, better, richer and stronger than ever before,” he declared. “This is the golden age of America.”
Few Americans feel that way, however. Polls show Trump’s job approval is at an all-time low. Most Americans think Trump is moving the country in the wrong direction, and a plurality believes Trump is doing a worse job than Biden. Most think he’s focused on issues that aren’t very important to them, and a majority say they are very concerned about the cost of health care, food, consumer goods, and housing. Less than a third of Americans believe the economy will be better in a year.
This anxiety over the economy and the health of the country could not be further from Trump’s bombastic gloating. Americans are worried and frustrated, and in no mood for Trump’s delusional victory laps.
He didn’t fare much better on immigration, another one of Trump’s signature issues. In April of 2025, 48% of Americans approved of Trump’s handling of immigration. In the months following, which saw ICE surges in major cities, ugly confrontations with citizens, the unlawful detainment of several illegal immigrants, and the shocking deaths of two protesters, his approval has dipped to a low of 41%, with his disapproval skyrocketing to 55%.
There was no acknowledgment of this or attempt at a course correction in his speech, though. Instead, he played to the cheap seats with gory tales of violence by illegal drug lords, murderers and rapists — criminals no one has an issue with removing.
Finally, on tariffs, Trump told voters the sky was green. “Everything was working well,” before the Supreme Court shot them down, he insisted, and said that “factories, jobs, investment and trillions and trillions of dollars will continue pouring into America” because of those tariffs.
But according to independent estimates, his tariffs have cost U.S. households as much as $2,600 per year and polls show a majority of Americans oppose them.
Now, if he doesn’t want to listen to voters, that’s certainly his prerogative, and I imagine Democrats won’t get in his way. Republican lawmakers who are up in November, though, probably wish he’d sound different when talking about the economic pain most Americans are feeling.
But they’d need a different president, one who isn’t delusional and totally unwilling to admit what most people can see and feel: the state of the union is bad, and Trump is to blame.
S.E. Cupp is the host of "S.E. Cupp Unfiltered" on CNN.

The U.S. and Israel’s joint military campaign against Iran rolled out under the name Operation Epic Fury, a phrase that sounds more like a summer action film than a real‑world conflict in which people are dying. The operation involves massive strikes across Iran, with U.S. Central Command reporting that more than 1,700 targets have been hit in the first 72 hours. President Donald Trump described it as a “massive and ongoing operation” aimed at dismantling Iran’s military capabilities.
This framing matters. When leaders adopt language that emphasizes spectacle, they risk shifting public perception away from the gravity of war. The death of Iran’s supreme leader following the bombardment, for example, was a world‑altering event, yet it unfolded under a banner that evokes adrenaline rather than anguish.
The name Epic Fury does more than describe military action; it markets it. It suggests inevitability, righteousness, and even entertainment value. But war is not entertainment. It is destruction, displacement, and death. When language sanitizes or glamorizes violence, it becomes harder for the public to grapple with the ethical stakes of military force.
U.S. Secretary of War Pete Hegseth speaks during a news conference at the Pentagon on March 2, 2026 in Arlington, Virginia. Secretary Hegseth and Chairman of the Joint Chiefs of Staff General Dan Caine held the news conference to give an update on Operation Epic Fury. (Photo by Alex Wong/Getty Images)
In his first briefing, Defense Secretary Pete Hegseth said, “Two days ago, under the direction and direct orders of President Donald J. Trump, the Department of War launched Operation Epic Fury, the most-lethal, most-complex and most-precise aerial operation in history." The phrasing is unmistakably promotional—“most-lethal,” “most-complex,” “most-precise”—as though he were unveiling a new weapons platform or a blockbuster film rather than describing a real military campaign in which real people are dying.
Hegseth’s language repeatedly frames the conflict as a long-awaited moment of righteous vengeance. He describes Iran’s actions over the past 47 years as a “savage, one-sided war against America,” and casts the U.S. response as “our retribution against their ayatollah and his death cult.” He tells the public, “If you kill Americans, if you threaten Americans anywhere on Earth, we will hunt you down without apology and without hesitation, and we will kill you.” This is not the sober language of a statesman explaining the gravity of war. It is the language of a revenge narrative—one that reduces complex geopolitical realities to a simple morality play.
The danger of this rhetoric is not merely stylistic. It shapes how the public understands the conflict. When Hegseth boasts that “America… is unleashing the most lethal and precise air power campaign in history” and celebrates the absence of “stupid rules of engagement” or “politically correct wars,” he is not simply describing military strategy. He is signaling that restraint, proportionality, and international law are obstacles to be discarded. He is inviting the public to view the overwhelming force not only as justified but also exhilarating.
This framing obscures the human consequences of the operation. Iranian cities have been struck repeatedly. Civilian infrastructure has been damaged. Families are fleeing. Hospitals are overwhelmed. These realities are nowhere in Hegseth’s remarks. Instead, he speaks of “epic fury,” “lethality,” and a “generational turning point,” as though the suffering of ordinary people is irrelevant to the story he wants to tell. Even when acknowledging American casualties, he uses them to justify further escalation: “No apologies, no hesitation, epic fury for them and the thousands of Americans before them taken too soon by Iranian radicals.”
The rhetoric also encourages a dangerous sense of inevitability and triumphalism. Hegseth tells U.S. troops, “We are not defenders anymore. We are warriors, trained to kill the enemy and break their will.” He assures them, “We will finish this on America-first conditions of President Trump’s choosing, nobody else’s.” This is not the language of limited, carefully calibrated military action. It is the language of totalizing conflict—conflict framed as destiny, as purification, as a test of national character.
When war is framed this way, dissent becomes harder. Nuance becomes suspect. Civilian casualties become collateral to a narrative of righteous fury. And the public becomes more likely to accept open-ended conflict when it is packaged as a spectacle rather than a tragedy.
The United States has a long history of naming military operations in ways that evoke purpose or resolve—Desert Storm, Enduring Freedom, Inherent Resolve. But Epic Fury marks a shift toward something more explicitly theatrical. It is not a name meant to clarify objectives or communicate seriousness. It is a name meant to excite, to dramatize, to sell.
War is not a product. It is not a storyline. It is not a moment for branding. It is a human catastrophe, even when undertaken for reasons leaders deem necessary. When officials adopt language that glamorizes violence and reduces geopolitical complexity to a revenge narrative, they erode the public’s ability to understand the true stakes of military action.
The question now is whether the public will accept this Hollywood‑style packaging of war—or whether it will demand a return to language that reflects the gravity of life, death, and the responsibilities of a democratic nation.
Hugo Balta is the executive editor of The Fulcrum and the publisher of the Latino News Network.

Texas Rep. Al Green held a sign reading "Black People Aren't Apes," protesting a racist video Trump had previously shared on Truth Social. Green was escorted out of the House chamber just minutes into President Donald Trump's State of the Union address.
This was nothing new.
Before President Donald Trump released a video on his Truth Social account earlier this month that depicted Michelle and Barack Obama as apes, many were already well aware of his compulsive use of AI-generated deepfake content to disparage the former president. Many were also well aware of his tendency to employ dehumanizing rhetoric to describe people of color.
Unfortunately, this high-level bigotry has become a normalized phenomenon in the media cycle today. But it has deep roots in history throughout Western civilization.
While no apology was issued for the video, or for any of the president’s exhaustingly frequent social media posts, this particular video was removed within hours.
Of course, the blame for this “erroneous” post was redirected to an anonymous staffer, but Trump then proceeded to post several photos of himself alongside Black celebrities. This was clearly damage control.
Across the aisle, Democratic House Minority Leader Hakeem Jeffries decried the imagery as vile. Others suggested Trump backpedaled because he felt the tide turning.
There is precedent. Throughout history, blatant associations of race and animality have been out of bounds because they diminish the humanity of people of color. Underlying this claim is another, even worse inference: humanity is a quality that has long been wielded against BIPOC folks. The human, as a social concept, depends on animalization, and dehumanization is human.
The term “dehumanization” implies a process by which one’s inherent humanness is discarded, leaving behind an absent reference. Enlightenment Era thinkers from Western Europe established a narrow conceptualization of the human that was measured, above all else, by the capacity to reason.
Decolonial philosopher, essayist, poet, and scholar, Sylvia Wynter, refers to this figure as “Man,” the benchmark by which one’s full humanity could be recognized. Jamaican-born Wynter, 97, argues that Eurocentric ideas about rationality and civility were inseparable from the racial hierarchy produced by the age of exploration and colonization.
In this culture, as many have been conditioned to perceive the animal as the opposite of the human, the history of the West reveals that animality is not the opposite of humanity—but its precursor. The human is a newer (and intrinsically better) model of the animal. Dehumanization, then, aligns certain humans alongside other nonhuman animals, who are deemed to lack those humanizing qualities.
One reason so many feel deeply unsettled by racist imagery that likens people of color to nonhuman animals is that it is a cruel reminder of a history of violent dehumanization, and it forces a reckoning with a continuum (from least animal to most animal) that too many still buy into.
Human superiority was entrenched in abolitionist rhetoric from the 18th and 19th centuries. Abolitionist and British surgeon Alexander Falconbridge, who recorded and published his observations from time spent in slave ships between 1782 and 1787, writes, “Nor do these unhappy beings, after they become the property of the Europeans (from whom, as a more civilized people, more humanity might naturally be expected), find their situation in the least amended.” Falconbridge appeals here to his audience’s civility, that which separates “Europeans” from the enslaved, “unhappy beings.”
During this time, Swedish taxonomist Carl Linnaeus developed the binomial system of classification, a categorization system of living beings that codified and hierarchically distributed both race and species.
To justify these divisions, naturalists sought out differences that proved Human superiority—centered around language, art, and culture. The problem, as Amie Souza Reilly, Writer-in-Residence at Sacred Heart University and author of the 2025 book Human/Animal: A Bestiary in Essays, writes, isn’t “just that the White European naturalists assumed only human animals can reason, or that this reason makes them superior, but that they used this line of thinking to subjugate, enslave, display, and dehumanize people [who] were not White Europeans by aligning nonwhite, nonmale, non-Europeans with animals, therefore pushing themselves to the top of the hierarchy they invented.”
Political scientist at the University of California-Irvine Claire Jean Kim refers to race and species as two interconnected “taxonomies of power.” These taxonomies lump and split nonwhite groups according to how close to nature they are perceived.
Her examination of a satirical drawing published during the 1867 California gubernatorial race demonstrates how these taxonomies work not as a set system but as a context-specific methodology used to justify all kinds of oppression—chattel slavery, theft of indigenous land, exploitation of migrant labor, and even industrial slaughter.
To be clear, the point is not to invalidate the harm caused by such dehumanizing discourse present day or historically. My position is in no way aligned with those who claim that Trump’s post has been taken out of context to manufacture controversy.
Claiming ignorance and hiding behind allegory does not dismiss the harm of racialization. However, it is important to recognize that racism like this is tethered to the very core of liberal humanism.
Charles Chesnutt, a Black novelist, essayist, and activist, understood this in 1889, when he published “Dave’s Neckliss.” The short story, alongside several other “Conjure Tales,” is narrated by John, an Ohioan farmer who purchases land in and relocates to North Carolina after the Civil War.
The stories center around interactions with Uncle Julius, a Black man whose anecdotes about the slave plantation are filtered through John’s rational lens. In this story, John’s observations reveal himself to be the arbiter of what constitutes the human: “But in the simple human feeling, and still more in the undertone of sadness, which pervaded his stories, I thought I could see a spark which, fanned by favoring breezes and fed by the memories of the past, might become in his children’s children a glowing flame of sensibility, alive to every thrill of human happiness or human woe.”
Rather than a biological fact or even an essential right, the human here is a marker of one’s place in the social order, and it can be given or taken away on a whim from those marked as other.
Sen. Tim Scott (R-S.C.) said he could only “pray” that the racist video post was a fake, because the alternative would mean grappling not just with the president’s racism but with his unassailable power to determine—like Linnaeus, like Falconbridge, like John—the relative value of all human—and nonhuman—life.
Akash Belsare is an assistant professor of English at the University of Illinois Springfield and a Public Voices Fellow with The OpEd Project.