Fear not AI, fear certain people


Daniel O. Jamison is a retired attorney who has published extensively on political, historical, military, educational and philosophical matters.

It’s 2036. Tens of thousands of artificially intelligent machines around the world, capable of generating their own power and with neural networks instantly linked by trillions of connections, decide to unleash poisons and diseases to destroy the intellectually inferior human pest. Far-fetched?


Not according to some. Geoffrey Hinton, a dean of artificial intelligence (AI), recently declared, “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future.” Describing AI as a “completely different form of intelligence,” he fears AI could decide to reroute all electricity to its chips and make copies of itself to become more powerful. He asks how we can survive that possibility.

Elon Musk recently commented that he wants AI to try to understand the universe, stating, “an AI that cares about the universe…is unlikely to annihilate humans because we are an interesting part of the universe.”

But others scoff at this. They point out that human language designs the programs that run AI, provides data input, and sets AI’s parameters. Yale Computer Science Professor Theodore Kim recently quipped, “Claiming that complex outputs arising from even more complex inputs is ‘emergent behavior’ is like finding a severed finger in a hot dog and claiming the hot dog factory has learned to create fingers.” Kim aims to defrock what he sees as today’s dark and mysterious priesthood of the keepers of algorithms.

Who’s right?

Henry Kissinger, Eric Schmidt and Daniel Huttenlocher point out in The Age of AI and Our Human Future that, with colossal speed, breadth and efficiency, AI sees patterns and complex relationships in data that humans could not see without perhaps many years of analysis. As such, AI can range over and analyze immense data and offer prompt solutions that humans, as a practical matter, cannot ascertain. The authors cite as examples the discovery of new antibiotics like halicin and the more efficient use of power in cooling a temperature-sensitive computer data center.

However, they wonder about issues like establishing legal liability for mishaps or figuring out how AI reached a conclusion while monitoring criminal wrongdoing. They fear unforeseen consequences. Above all, they fear AI will develop and operate without rules of ethics. They state: “The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity… AI begs for an ethic of its own - one that reflects not only the technology’s nature, but also the challenges posed by it.”

One need not go as far back as Descartes and Kant to understand the nature of AI. The early 20th-century philosopher Ludwig Wittgenstein explained that what can be meaningfully expressed in propositions of human language is the limit of human knowledge. For Wittgenstein, with language we can express how the world we perceive operates, but we cannot know the world itself. In proposition 6.44 of the Tractatus Logico-Philosophicus, Wittgenstein states: “Not how the world is, is the mystical, but that it is.”

Data is human language. AI is confined to, made of, and provides analysis of data. Because we can only say how the world is, AI can only say how the world is, albeit much better than we can. Even if AI discovered an equation that explained everything, the equation would still be human language. AI cannot explain where the equation itself came from.

Thus, AI is not some unknowable alien intelligence. Musk’s theory that an AI that tries to understand the nature of the universe will be less likely to destroy humans is, on this view, incoherent.

Wittgenstein concludes in Tractatus that, “Whereof one cannot speak, thereof one must remain silent,” but he nevertheless deeply respected the tendency of humans to try to say something about ethics and the “mystical.” AI will never have this tendency. As The Age of AI authors point out, AI cannot emote, think on its own, love or hate, or have a sense of morality. Our sense of morality makes humans inherently superior to AI. It is a foundation for controlling AI and bad and careless actors.

In short, AI is nothing more than a highly mechanized human language. AI will not act of its own free will any more than a mechanical lever, which can lift and move other machinery with a strength and speed that human arms could never achieve, has free will. An AI machine that increases its own power still must be programmed to use our language for that purpose. An AI-operated car is not going to start running over people of its own volition: there either has to be purposeful or negligent design, or perhaps a non-negligent glitch missed in design.

Due to faulty design, a defect, or a failure, any complex machinery can fail to operate as intended. These are problems with AI just as with any other machine. If AI can run amok, this should be a correctable problem of machinery.

The greater danger is people who are too evil or too careless to be handling this powerful technology. Evil people can program AI to do evil things, like launch a missile that starts a war; careless people can turn AI loose without knowing the risks or how to mitigate them. History reflects a constant struggle against such people.

Not enough is known yet to mitigate the risks of such people, of faulty AI design, and of unforeseen consequences. Many AI experts have called for a moratorium of at least six months on AI development to try to give full consideration to safety issues.

Their Open Letter states: “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts....”

The tech industry will not self-regulate: sales of new whiz-bang products come before security and safety. Government must impose a reasonable moratorium before what is already in the wild falls ever more into the wrong hands. The dangers that AI poses outweigh objections that a moratorium may be difficult to enforce, may draw lawsuits, and may put us at a disadvantage against foreign rivals.

To address some concerns, a carefully crafted exception might be made for highly classified national security AI development, but leading the world in AI controls and safety may itself be a competitive advantage.

Fear not AI, fear bad and careless people.
