Opinion

AI - Its Use, Misuse, and Regulation
Photo by Immo Wegmann on Unsplash

There has been no shortage of articles hailing the promise of AI, and no shortage forecasting disaster from it. I understand the good uses to which AI could be put, but I am also well aware of the ways in which it is dangerous or will degrade our lives as thinking human beings.

First, the good uses. There is no question that AI can outthink any human being, however famous or knowledgeable, because of the sheer amount of information it can process in a short time. The most powerful accounts I've read have been in the field of medical research: doctors have fed facts into AI, asking for a diagnosis or a possible remedy, and AI has come up with remarkable answers beyond the human mind's capability.


Clearly, AI, in the hands of knowledgeable professionals, can assist them in doing their work and improving our lives.

That, however, is where the good news ends. And it all depends on the phrase "in the hands of knowledgeable professionals." AI in the hands of everyday people, as it is already today, is a danger to themselves and our society. Let me give you some examples.

Perhaps the worst are the many reported cases of individuals using AI as a companion or advisor. People, mostly teenagers, it appears, are looking to chatbots for emotional support, including advice on suicide, because they don't have someone to confide in. AI may sound like a person and be able to respond to what someone says or asks, but it isn't trained to respond to the complexities that make up a person's emotional state. Further, chatbots are currently designed mostly to reassure people with doubts; if someone is in emotional trouble, reassuring that person that he's doing the right thing is probably exactly the wrong thing to do.

Another class of cases involves people—I assume again very lonely people—who look to chatbots as a love or sex object. There was a report of one teenage boy who committed suicide in order to "join" his chatbot love. People are confusing illusion and reality.

It has also been reported that many people who are having problems with the medical profession, or who just don't have access to a doctor, are using chatbots to self-diagnose. The reader may well ask: if professionals can use AI for this purpose, why can't the average layman? The answer is that AI depends on the quality of the information it is given about the problem it is being asked to solve. The old expression, "Crap in, crap out," clearly applies here. Medical problems are so complex that it is unlikely the patient can identify all the factors AI needs to properly answer the question.

A second class of harm comes from the use of AI by individuals intent on creating misinformation, whether on the right or the left, to influence people's responses to political events.

We have seen the impact that misinformation on social media has had. Now we have the added impact of AI. As was just reported by The New York Times, a "torrent of fake videos and images" has been generated by people using AI to create reactions to the Iran war online. The impact of these images is strong because people tend to believe what they see; AI has been perfected to a level where, even in the hands of amateurs, one cannot tell that an AI-generated video or image is fake.

An entirely different kind of harm comes from people—whether students or adults—who use AI to generate a variety of work products: papers, applications, articles. That this practice is a no-no is obvious. When someone submits work product, it should be their own, meaning it results from their own mind. Using AI to generate such things is just another way of cheating.

But the problem is not just that these people are being dishonest about what they have submitted. It's that they haven't used their minds. Remember the famous words of Descartes: "I think, therefore I am." Developing and using one's mind is what makes people grow, increasing their ability to process information and perform tasks. Using AI yields no such growth.

The list goes on and on. But the general point is the one I started with: In the hands of professionals, AI is already very useful for analyzing difficult or rare situations, and it will likely become even more so. However, in the hands of the average person, it is either a way to meet an emotional need that isn't being satisfied in the real world, an invitation to be lazy or cheat, or a way to spread misinformation to achieve a goal.

While I sympathize deeply with people who are lonely and look to their chatbot to mitigate that loneliness, it is a bad and dangerous answer. For those using AI because their human providers are inadequate, their problem is very real, but using AI is again a bad and dangerous answer.

People in both these situations are suffering from a failure of society—of humans—to provide an environment where people are nourished and heard. Whether it's within the family, in the workplace, or in one's relationship with a healing or other provider, this is a serious societal problem. But chatbots are not the answer.

The answer, as I see it, is twofold. First, the law needs to regulate the use of AI. Its use should be restricted to assisting professionals in analyzing problems. AI products (e.g., chatbots) should not be available to the average person. AI should be treated similarly to a controlled substance: only people licensed to use it should be able to obtain it.

Let the tech giants howl at this limitation on their ability to make money from their AI investments. Even with restricted use, I have no doubt that they can figure out how to make a good profit.

Second, society and families are failing people in numerous ways. Parents need to change the way they raise their children (see my book, Raising a Happy Child). Doctors need to communicate better with their patients. And society needs to stop sending people messages of inadequacy. The latter will, in all likelihood, never come about. And so children will continue to be harmed by what they learn from the media and by their interactions with others.

If we cannot change society, then we have to provide children (or adults) with the means to see themselves differently so they are not damaged by these interactions. (See my book, Discover Your Power.) Turning to chatbots to resolve the problem is not the answer.

Finally, AI should not enable people to influence an already chaotic political landscape by distributing misinformation. This dangerous tool must be kept out of the hands of all but professionals working in areas where the benefits of AI are clear.

Ronald L. Hirsch is a teacher, legal aid lawyer, survey researcher, nonprofit executive, consultant, composer, author, and volunteer. He is a graduate of Brown University and the University of Chicago Law School and the author of We Still Hold These Truths. Read more of his writing at www.PreservingAmericanValues.com
