Nevins is co-publisher of The Fulcrum and co-founder and board chairman of the Bridge Alliance Education Fund.
The buzzword for 2023 is artificial intelligence, or AI for short. What is AI, and is the hype about its potential impact on humanity as great as suggested?
Certainly, in the investment markets AI is a reality, as exemplified by the Nvidia investment craze of 2023. Nvidia, the software and chip manufacturer that makes the most advanced chips needed to fuel the commercialization of AI, is reaping the benefit as demand for its AI chips goes through the roof. Nvidia hit $1 trillion in market capitalization in late May. That's a trillion with a "t," or 1,000 billion dollars.
Nvidia's stock price in 2015 was $8; on June 30 it closed at $423. An investor who put in $8,000 just eight years ago would now hold shares worth $423,000.
Abhishek Jain, head of research at Arihant Capital Markets, recently described the surge in Nvidia stock: "The surge in Nvidia's stock value can be attributed to the rising interest in artificial intelligence. Recent advancements in generative AI, which enables human-like conversations, have fueled this interest. As a result, Nvidia's stock value has tripled in less than eight months."
There is no doubt that AI is already a dominant economic force, yet this is just the tip of the iceberg. The current growth of AI will accelerate exponentially due to the significant benefits to mankind and the profit potential, including but not limited to:
- Generative AI could raise global GDP by 7 percent
- In healthcare, AI can increase efficiency and create unprecedented medical personalization. Artificial intelligence in medicine includes the use of machine learning models to search medical data and uncover insights to help improve health outcomes and patient experiences.
- Computers utilizing AI will help us invent drugs faster since AI can do complicated analysis more efficiently, enabling researchers to accelerate screening efforts for new therapies. There is little debate about the utility of AI here.
- AI is already being used to create a more agile and resilient production food system.
The list goes on and on…
Unfortunately, the story isn’t all sunshine and roses. While the rewards are many, so are the risks. The biggest risk relates to the ease with which an AI app can be disguised as a genuine product or service, or even as a human being. With no oversight, the potential risks are many, including biased programming, data breaches and unauthorized access, and compromises to privacy and confidentiality.
Former IBM CEO Ginni Rometty, discussing the risks, says the focus of AI should be on people and building trust: “What we have on our hands is not a technology issue. It’s going to be a trust and people issue, particularly as we tackle problems of importance and personal impact. I’m completely convinced of it.”
The conversation as to how mankind balances the limitless advantages with the unknown and undetermined risks is just starting. In the coming months The Fulcrum will explore this critical subject with our readers so they can better understand the impact on our institutions, our democracy, our work and our everyday lives.
Companies like Google, Microsoft and Nvidia, which stand to gain the most, are concerned as well and are speaking out on the need for, and the specifics of, possible regulation. It is difficult to determine whether their activism reflects genuine societal concern about the technology's risks, or the realization that if they aren't involved in the regulatory process, the government might limit their profit potential.
Google has urged members of Congress to divide the artificial intelligence oversight process among many existing agencies rather than establishing a single new agency. This contrasts with Microsoft and others who have called for the National Institute of Standards and Technology (NIST), a non-regulatory agency housed in the Commerce Department, to take the lead in issuing technical guidance to agencies on how to tackle AI risks, which those agencies could then implement.
The National Telecommunications and Information Administration (NTIA) has already asked for input from corporate America to help establish regulation to ensure that, “AI systems are legal, effective, ethical, safe, and otherwise trustworthy.” NTIA went on to say they “will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.”
A perfect example of the potential benefits and risks can be gleaned from this mission statement by Regeneron, a leader in the production of fully human monoclonal antibodies that harness the natural properties of the immune system to provide a line of defense against many diseases. As one reads the following statement, one can certainly understand the benefits AI will have in revolutionizing how we target and treat diseases. However, the claim that nothing is out of bounds is a bit disconcerting, given AI's potential to advance the science of gene editing, which could mean tailoring DNA to create superhumans.
At Regeneron, we don't shy away from a scientific challenge because we know nothing is out of bounds. We follow the science to find solutions to insurmountable problems in human health. We question everything. This philosophy is what inspires us to harmonize biology and technology, marrying the best of both to revolutionize research on how we target and treat serious diseases. It's why we created the Regeneron Genetics Center®, home to the largest and most diverse genomic database in the world. It's why we are perfecting novel technologies like CRISPR and gene silencing. And it's why we'll continue to stay on the cutting edge as we build the medicines of tomorrow.
If an AI system is designed poorly, will we be more susceptible to misdiagnosis? Will software algorithms and data sets reproduce cultural biases? While AI certainly will result in significant cost savings in many areas, what external costs, unaccounted for in any model, will undoubtedly accrue to society? And perhaps most importantly, are the unintended consequences impossible to predict as these systems start interacting with unpredictable humans?
“I think the potential of AI and the challenges of AI are equally big,” said Ashish Jha, former director of the Harvard Global Health Institute and now dean of Brown University’s School of Public Health. “There are some very large problems in health care and medicine, both in the U.S. and globally, where AI can be extremely helpful. But the costs of doing it wrong are every bit as important as its potential benefits. The question is: Will we be better off?”
To fully understand the risks associated with AI, one must first understand the term “alignment.” Alignment is a field of AI safety research that aims to ensure artificial intelligence systems achieve the outcomes their designers intend. A big concern is that misalignment will occur and AI systems will become so powerful that they no longer work for humans.
There are two types of alignment to consider: outer alignment and inner alignment. Outer alignment asks whether the goal we specify for an AI system truly captures what humans want, while inner alignment asks whether the system, once trained, actually pursues that specified goal rather than some proxy of its own. The full understanding of alignment is still at a nascent stage.
It can be argued humans owe their dominance over other species to their greater cognitive abilities. Whether this dominance is good for the species and the planet that we dominate, and would be better served by stewarding, is a question for another time. With respect to AI, the question is whether many misaligned AI systems could disempower humanity and even lead to human extinction if the AI algorithms outperform humans on most cognitive tasks.
The train is just leaving the station on AI. The technology is accelerating faster and faster, yet the education needed to inform regulation moves slowly. The search for the correct balance between quick adoption and implementation of AI on the one hand, and thoughtful analysis and caution on the other, is just starting. Given that the stakeholders include every major industry and field of knowledge, and that the decisions will have enormous ethical, philosophical, religious, and sociological impact, the battles will undoubtedly be fierce. The fact that this is a global issue adds to the complexity and difficulty.
Despite all this uncertainty, one thing is clear. AI will move the interaction between humans and machines to unprecedented levels, and the impact will change the very essence of how society functions. How quickly we respond and regulate, and whether we do so before reaching a point of no return, is perhaps the greatest question mankind will face in the next 10 to 20 years.