There has been no shortage of articles hailing the opportunity of AI, and of ones forecasting disaster from it. I understand the good uses to which AI could be put, but I am also well aware of the ways in which AI is dangerous or will degrade our lives as thinking human beings.
First, the good uses. There is no question that AI can outthink human beings, no matter how famous or knowledgeable, because of the sheer volume of information it can process in a short time. The most powerful accounts I've read have been in the field of medical research: doctors have fed facts into AI, asking for a diagnosis or a possible remedy, and AI has come up with remarkable answers beyond the human mind's capability.
Clearly, AI, in the hands of knowledgeable professionals, can assist them in doing their work and improving our lives.
That, however, is where the good news ends. And it all depends on the phrase "in the hands of knowledgeable professionals." AI in the hands of everyday people, as it is already today, is a danger to themselves and our society. Let me give you some examples.
Perhaps the worst are the many reported cases of individuals using AI as a companion or advisor. People, mostly teenagers, it appears, are looking to chatbots for emotional support, including advice on suicide, because they don't have someone to confide in. AI may sound like a person and be able to respond to what someone says or asks, but it isn't trained to respond to the complexities that make up a person's emotional state. Further, chatbots are currently designed mostly to reassure people with doubts; if someone is in emotional trouble, reassuring that person that they are doing the right thing is probably exactly the wrong thing to do.
Another class of cases are people—I assume again very lonely people—who look to chatbots as a love or sex object. There was a report of one teenage boy who committed suicide in order to "join" his chatbot love. People are confusing illusion and reality.
It has also been reported that many people who are having problems with the medical profession, or who simply don't have access to a doctor, are using chatbots to self-diagnose. The reader may well ask: if professionals can use AI for this purpose, why can't the average layman? The answer is that the quality of AI's output depends on the quality of the information it is given about the problem it is being asked to solve. The old expression "Garbage in, garbage out" clearly applies here. Medical problems are so complex that it is unlikely a patient can identify all the factors AI needs to properly answer the question.
A second class of harm comes from the use of AI by individuals intent on creating misinformation, whether on the right or the left, to influence people's responses to political events.
We have seen the impact that misinformation on social media has had. Now we have the added impact of AI. As was just reported by The New York Times, a "torrent of fake videos and images" has been generated by people using AI to create reactions to the Iran war online. The impact of these images is strong because people tend to believe what they see; AI has been perfected to a level where, even in the hands of amateurs, one cannot tell that an AI-generated video or image is fake.
An entirely different kind of harm comes from people, whether students or adults, who use AI to generate a variety of work products: papers, applications, articles. That this practice is a no-no should be obvious. When someone submits work product, it should be their own, meaning it results from their own mind. Using AI to generate such things is just another way of cheating.
But the problem is not just that these people are being dishonest about what they have submitted. It's that they haven't used their minds. Remember the famous words of Descartes: "I think, therefore I am." Developing and using one's mind is what makes people grow and increases their ability to process information and perform tasks. Using AI yields no growth.
The list goes on and on. But the general point is the one I started with: In the hands of professionals, AI is already very useful for analyzing difficult or rare situations, and it will likely become even more so. However, in the hands of the average person, it is either a way to meet an emotional need that isn't being satisfied in the real world, an invitation to be lazy or cheat, or a way to spread misinformation to achieve a goal.
While I sympathize deeply with people who are lonely and look to a chatbot to mitigate that loneliness, it is a bad and dangerous answer. For those using AI because their human providers are inadequate, the problem is very real, but using AI is again a bad and dangerous answer.
People in both these situations are suffering from a failure of society—of humans—to provide an environment where people are nourished and heard. Whether it's within the family, in the workplace, or in one's relationship with a healing or other provider, this is a serious societal problem. But chatbots are not the answer.
The answer, as I see it, is twofold. First, the law needs to regulate the use of AI. Its use should be restricted to assisting professionals in analyzing problems. AI products (e.g., chatbots) should not be available to the average person. AI should be treated like a controlled substance: only people licensed to use it should be able to obtain it.
Let the tech giants howl at this limitation on their ability to make money from their AI investments. Even with restricted use, I have no doubt that they can figure out how to make a good profit.
Second, society and families are failing people in numerous ways. Parents need to change the way they raise their children (see my book, Raising a Happy Child). Doctors need to communicate better with their patients. And society needs to stop sending people messages of inadequacy. That last change will, in all likelihood, never come about. And so children will continue to be harmed by what they learn from the media and by their interactions with others.
If we cannot change society, then we have to provide children (or adults) with the means to see themselves differently so they are not damaged by these interactions. (See my book, Discover Your Power.) Turning to chatbots to resolve the problem is not the answer.
Finally, AI should not enable people to influence an already chaotic political landscape by distributing misinformation. This dangerous tool must be kept out of the hands of all but professionals working in areas where the benefits of AI are clear.
Ronald L. Hirsch is a teacher, legal aid lawyer, survey researcher, nonprofit executive, consultant, composer, author, and volunteer. He is a graduate of Brown University and the University of Chicago Law School and the author of We Still Hold These Truths. Read more of his writing at www.PreservingAmericanValues.com