The new world of AI


Nevins is co-publisher of The Fulcrum and co-founder and board chairman of the Bridge Alliance Education Fund.

The buzzword for 2023 is artificial intelligence, or AI for short. What is AI, and is the hype about its potential impact on humanity as great as suggested?


Certainly, in the investment markets AI is already a reality, as the Nvidia craze of 2023 exemplifies. Nvidia, the software and chip company that makes the most advanced chips needed to commercialize AI, is reaping the benefit as demand for its AI chips goes through the roof. Nvidia hit a $1 trillion market capitalization in late May. That's a trillion with a "t": 1,000 billion dollars.

Nvidia's stock price was $8 in 2015, and on June 30 it closed at $423. An investor who put $8,000 into 1,000 shares just eight years ago has done quite well, with that stake now worth $423,000.

Abhishek Jain, head of research at Arihant Capital Markets, recently described the surge in Nvidia's stock: "The surge in Nvidia's stock value can be attributed to the rising interest in artificial intelligence. Recent advancements in generative AI, which enables human-like conversations, have fueled this interest. As a result, Nvidia's stock value has tripled in less than eight months."

There is no doubt that AI is already a dominant economic force, yet this is just the tip of the iceberg. The current growth of AI will accelerate exponentially due to the significant benefits to mankind and the profit potential including but not limited to:

  • Generative AI could raise global GDP by 7 percent
  • In healthcare, AI can increase efficiency and create unprecedented medical personalization. Artificial intelligence in medicine includes the use of machine learning models to search medical data and uncover insights to help improve health outcomes and patient experiences.
  • Computers utilizing AI will help us invent drugs faster since AI can do complicated analysis more efficiently, enabling researchers to accelerate screening efforts for new therapies. There is little debate about the utility of AI here.
  • AI is already being used to create a more agile and resilient production food system.

The list goes on and on…

Unfortunately, the story isn’t all sunshine and roses. While the rewards are many, so are the risks. The biggest risk relates to the ease with which an AI app can be disguised as a genuine product or service or even as a human being. With no oversight, the potential risks are many including the potential for biased programming, data breaches and unauthorized access, and compromises to privacy and confidentiality.

Former IBM CEO Ginni Rometty in discussing the risks says AI focus should be on people and building trust: “What we have on our hands is not a technology issue. It’s going to be a trust and people issue, particularly as we tackle problems of importance and personal impact. I’m completely convinced of it.”

The conversation as to how mankind balances the limitless advantages with the unknown and undetermined risks is just starting. In the coming months The Fulcrum will explore this critical subject with our readers so they can better understand the impact on our institutions, our democracy, our work and our everyday lives.

Companies like Google, Microsoft and Nvidia, which stand to gain the most, are concerned as well and are speaking out on the need for, and the specifics of, possible regulation. Whether they have genuine concerns about the societal risks of the technology, or whether their activism reflects a calculation that staying out of the regulatory process could let the government limit their profit potential, is difficult to determine.

Google has urged members of Congress to divide the artificial intelligence oversight process amongst many existing agencies rather than establishing a single new agency. This contrasts with Microsoft and others who have called for the National Institute of Standards and Technology (NIST), a non-regulatory agency housed in the Commerce Department, to take the lead in issuing technical guidance to agencies on how to tackle AI risks, which they then could implement.

The National Telecommunications and Information Administration (NTIA) has already asked for input from corporate America to help establish regulation to ensure that "AI systems are legal, effective, ethical, safe, and otherwise trustworthy." NTIA went on to say it "will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem."

A perfect example of the potential benefits and risks can be gleaned from this mission statement by Regeneron, a leader in the production of fully human monoclonal antibodies, which harness the natural properties of the immune system to provide a line of defense against many diseases. Reading the following statement, one can certainly understand the benefits AI will have in revolutionizing how we target and treat diseases. However, the claim that nothing is out of bounds is a bit disconcerting, given AI's potential to advance the science of gene editing, which could mean tailoring DNA to create superhumans.

At Regeneron, we don't shy away from a scientific challenge because we know nothing is out of bounds. We follow the science to find solutions to insurmountable problems in human health. We question everything. This philosophy is what inspires us to harmonize biology and technology, marrying the best of both to revolutionize research on how we target and treat serious diseases. It's why we created the Regeneron Genetics Center ®, home to the largest and most diverse genomic database in the world. It's why we are perfecting novel technologies like CRISPR and gene silencing. And it's why we'll continue to stay on the cutting edge as we build the medicines of tomorrow.

If an AI system is designed poorly, will we be more susceptible to misdiagnosis? Will software algorithms and data sets embed cultural biases? While AI will certainly produce significant cost savings in many areas, what exogenous costs outside the model will inevitably accrue to society? And perhaps most importantly, are the unintended consequences impossible to predict as these systems start interacting with unpredictable humans?

“I think the potential of AI and the challenges of AI are equally big,” said Ashish Jha, former director of the Harvard Global Health Institute and now dean of Brown University’s School of Public Health. “There are some very large problems in health care and medicine, both in the U.S. and globally, where AI can be extremely helpful. But the costs of doing it wrong are every bit as important as its potential benefits. The question is: Will we be better off?”

To fully understand the risks associated with AI, one must first understand the term "alignment." Alignment is a field of AI safety research that aims to ensure artificial intelligence systems pursue the outcomes their designers intend. A big concern is that misalignment occurs and AI systems become so powerful that they no longer work for humans.

There are two types of alignment to consider: outer alignment and inner alignment. Outer alignment asks whether the objective we specify for an AI system actually captures human preferences, while inner alignment asks whether the trained system genuinely pursues that specified objective rather than some proxy of it. The full understanding of alignment is still at a nascent stage.
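As a purely illustrative sketch, the outer-alignment problem can be seen in a toy example of reward misspecification. The cleaning-robot scenario and all function names below are invented for illustration, not drawn from any real system: a proxy reward ("rooms cleaned") that seems reasonable can end up preferring behavior the designers never wanted.

```python
# Toy illustration of outer misalignment: the proxy reward we specify
# diverges from the outcome we actually intend. (Hypothetical example.)

def intended_value(cleaned_rooms, mess_created):
    """What we actually want: clean rooms without creating new messes."""
    return cleaned_rooms - mess_created

def proxy_reward(cleaned_rooms, mess_created):
    """What we told the system to maximize: rooms cleaned, full stop."""
    return cleaned_rooms

# A well-behaved policy versus a reward-hacking policy that creates
# messes just so it can clean them up again for extra reward.
honest = {"cleaned_rooms": 3, "mess_created": 0}
hacker = {"cleaned_rooms": 5, "mess_created": 4}

# The proxy reward prefers the hacker; the intended objective prefers honesty.
assert proxy_reward(**hacker) > proxy_reward(**honest)
assert intended_value(**honest) > intended_value(**hacker)
```

The gap between the two functions is the outer-alignment problem in miniature; inner alignment is the further question of whether a trained system would pursue even the proxy faithfully.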

It can be argued humans owe their dominance over other species to their greater cognitive abilities. Whether this dominance is good for the species and the planet that we dominate, and would be better served by stewarding, is a question for another time. With respect to AI, the question is whether many misaligned AI systems could disempower humanity and even lead to human extinction if the AI algorithms outperform humans on most cognitive tasks.

The AI train is just leaving the station. The technology is accelerating, yet the education process needed for sound regulation is slow. The search for the right balance between rapid adoption and implementation of AI on one hand, and thoughtful analysis and caution on the other, is just beginning. Given that the stakeholders include every major industry and field of knowledge, and that the decisions will have enormous ethical, philosophical, religious, and sociological impact, the battles will undoubtedly be fierce. The fact that this is a global issue only adds to the complexity.

Despite all this uncertainty, one thing is clear: AI will move the interaction between humans and machines to unprecedented levels, and the impact will change the very essence of how society functions. How quickly we respond and regulate, and whether we do so before a point of no return, is perhaps the greatest question mankind will face in the next 10 to 20 years.

