AI - Its Use, Misuse, and Regulation

Opinion

Photo by Immo Wegmann on Unsplash

There has been no shortage of articles hailing the promise of AI, and no shortage forecasting disaster from it. I understand the good uses to which AI could be put, but I am also well aware of the ways in which AI is dangerous or will degrade our lives as thinking human beings.

First, the good uses. There is no question that AI can outthink human beings, no matter how famous or knowledgeable, because of the amount of information it can process in a short time. The most powerful accounts I've read have been in the field of medical research: doctors have fed case facts into AI, asked for a diagnosis or a possible remedy, and AI has come up with remarkable answers beyond the human mind's capability.


Clearly, AI, in the hands of knowledgeable professionals, can assist them in doing their work and improving our lives.

That, however, is where the good news ends. And it all depends on the phrase "in the hands of knowledgeable professionals." AI in the hands of everyday people, as it is already today, is a danger to themselves and our society. Let me give you some examples.

Perhaps the worst are the many reported cases of individuals using AI as a companion or advisor. People, mostly teenagers, it appears, are looking to chatbots for emotional support, including advice on suicide, because they don't have someone to confide in. AI may sound like a person and be able to respond to what someone says or asks, but it isn't trained to respond to the complexities that make up a person's emotional state. Further, chatbots are currently designed mostly to reassure people who have doubts; if someone is in emotional trouble, reassuring that person that they are doing the right thing is probably exactly the wrong thing to do.

Another class of cases are people—I assume again very lonely people—who look to chatbots as a love or sex object. There was a report of one teenage boy who committed suicide in order to "join" his chatbot love. People are confusing illusion and reality.

It has also been reported that many people who are having problems with the medical profession, or who simply don't have access to a doctor, are using chatbots to self-diagnose. The reader may well ask: if professionals can use AI for this purpose, why can't the average layman? The answer is that AI depends on the quality of the information it is given about the problem it is being asked to solve. The old expression "Garbage in, garbage out" clearly applies here. Medical problems are so complex that it is unlikely a patient can identify all the factors AI needs to properly answer the question.

A second class of harm comes from the use of AI by individuals intent on creating misinformation, whether on the right or the left, to influence people's responses to political events.

We have seen the impact that misinformation on social media has had. Now we have the added impact of AI. As was just reported by The New York Times, a "torrent of fake videos and images" has been generated by people using AI to create reactions to the Iran war online. The impact of these images is strong because people tend to believe what they see; AI has been perfected to a level where, even in the hands of amateurs, one cannot tell that an AI-generated video or image is fake.

An entirely different kind of harm comes from people—whether students or adults—who use AI to generate a variety of work products: papers, applications, articles. That this practice is dishonest should be obvious. When someone submits a work product, it should be their own, meaning it results from their own mind. Using AI to generate such things is just another way of cheating.

But the problem is not just that these people are being dishonest about what they have submitted. It's that they haven't used their minds. Remember the famous words of Descartes: "I think, therefore I am." Developing and using one's mind is what enables people to grow, to sharpen their ability to process information, and to perform tasks well. Using AI yields none of that growth.

The list goes on and on. But the general point is the one I started with: In the hands of professionals, AI is already very useful for analyzing difficult or rare situations, and it will likely become even more so. However, in the hands of the average person, it is either a way to meet an emotional need that isn't being satisfied in the real world, an invitation to be lazy or cheat, or a way to spread misinformation to achieve a goal.

While I sympathize deeply with people who are lonely and look to a chatbot to mitigate that loneliness, it is a bad and dangerous answer. Those who turn to AI because their human providers are inadequate have a very real problem, but using AI is again a bad and dangerous answer.

People in both these situations are suffering from a failure of society—of humans—to provide an environment where people are nourished and heard. Whether it's within the family, in the workplace, or in one's relationship with a healing or other provider, this is a serious societal problem. But chatbots are not the answer.

The answer, as I see it, is twofold. First, the law needs to regulate the use of AI. Its use should be restricted to assisting professionals in analyzing problems. AI products (e.g., chatbots) should not be available to the average person. AI should be treated similarly to a controlled substance: only people licensed to use it should be able to obtain it.

Let the tech giants howl at this limitation on their ability to make money from their AI investments. Even with restricted use, I have no doubt that they can figure out how to make a good profit.

Second, society and families are failing people in numerous ways. Parents need to change the way they raise their children (see my book, Raising a Happy Child). Doctors need to communicate better with their patients. And society needs to stop sending people messages of inadequacy. That last change will, in all likelihood, never come about, and so children will continue to be harmed by what they learn from the media and by their interactions with others.

If we cannot change society, then we have to provide children (or adults) with the means to see themselves differently so they are not damaged by these interactions. (See my book, Discover Your Power.) Turning to chatbots to resolve the problem is not the answer.

Finally, AI should not enable people to influence an already chaotic political landscape by distributing misinformation. This dangerous tool must be kept out of the hands of all but professionals working in areas where the benefits of AI are clear.

Ronald L. Hirsch is a teacher, legal aid lawyer, survey researcher, nonprofit executive, consultant, composer, author, and volunteer. He is a graduate of Brown University and the University of Chicago Law School and the author of We Still Hold These Truths. Read more of his writing at www.PreservingAmericanValues.com

