The critical need for transparency and regulation amidst the rise of powerful artificial intelligence models

OpenAI CEO Samuel Altman testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law during an oversight hearing to examine A.I., focusing on rules for artificial intelligence

Getty Images

Samuel is an artificial intelligence (AI) scientist and professor who leads the Master of Public Informatics programs at the Bloustein School, Rutgers University. Samuel is also a member of the Scholars Strategy Network.

As artificial intelligence (AI) technologies cross over a vital threshold of competitiveness with human intelligence, it is necessary to properly frame critical questions in the service of shaping policy and governance while sustaining human values and identity. Given AI’s vast socioeconomic implications, government actors and technology creators must proactively address the unique and emerging ethical concerns that are inherent to AI’s many uses.


Open Source versus Black Box AI Technologies

AI can be viewed as an adaptive “set of technologies that mimic the functions and expressions of human intelligence, specifically cognition, and logic.” In the AI field, foundation models (FMs) are more or less what they sound like: large, complex models that have been trained on vast quantities of digital general information that may then be adapted for more specific uses.

Two notable features of foundation models are a propensity to gain new and often unexpected capabilities as they increase in scale (“emergence”) and a growing capacity to serve as a common “intelligence base” for differing specialized functions and AI applications (“homogenization”). Large language models (LLMs) that power applications like ChatGPT are foundation models focused on modeling human language, knowledge, and logic. Because of their scale and flexibility, advanced AIs and foundation models can replace multiple task-specific or narrow AIs, which increases the risk that the few people or entities who control these advanced AIs will gain extraordinary socioeconomic power, creating conditions for mass exploitation and abuse.

ChatGPT and other increasingly popular large language model applications use a nontransparent “black box” approach: users have little to no access to the inner workings of the underlying AI models and can judge how these applications function only by observing the outputs (such as an essay) that result from data inputs (such as a written or spoken prompt). Such opaque foundation models have widespread future AI application potential, displaying homogenization as the base models are adapted to serve a range of specialized purposes. These factors compound the risks inherent in a system that allows private control of advanced AIs. Open-source initiatives, on the other hand, prioritize transparency and public availability of the AI models, including code, process, relevant data, and documentation, so that users and society at large have an opportunity to understand how these technologies function.
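The contrast drawn here between black-box and open-source models can be sketched in a few lines of code. This is a toy illustration only; BlackBoxModel and OpenModel are hypothetical classes, not any vendor's API:

```python
class BlackBoxModel:
    """Closed model: users see only input -> output; internals are hidden."""
    def __init__(self):
        self.__weights = [0.2, 0.5, 0.3]  # name-mangled: not exposed to users

    def generate(self, prompt: str) -> str:
        # Users can only observe this output, not how it was produced.
        return f"response to: {prompt}"


class OpenModel:
    """Open-source model: weights, code, and documentation are inspectable."""
    def __init__(self):
        self.weights = [0.2, 0.5, 0.3]  # public: auditable by anyone

    def generate(self, prompt: str) -> str:
        return f"response to: {prompt}"


closed = BlackBoxModel()
open_model = OpenModel()

# Both produce the same outputs from the outside...
assert closed.generate("hi") == open_model.generate("hi")

# ...but only the open model's internals can be examined for audit.
print(open_model.weights)          # accessible
print(hasattr(closed, "weights"))  # False: internals are hidden
```

From the user's seat the two behave identically; the difference is whether anyone outside the company can inspect the internals, which is the crux of the open-source argument above.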

For human society, the spirit of the open-source movement is one of the most valuable forces at play in the AI technologies arena. Research and development of AI ethics must emphasize the contributions of open data, open-source software, open knowledge, and responsible AI movements, and contrast these with the challenges presented by relatively opaque AI applications. Closely coupled with open-source, the open-data movement (which refers to “data that is made freely available for open consumption, at no direct cost to the public, which can be efficiently located, filtered, downloaded, processed, shared, and reused without any significant restrictions on associated derivatives, use, and reuse”) can be a significant contributor to the development of responsible AI. Open-source initiatives distribute power and reduce the likelihood of centralized control and abuse by a few AI owners with concentrated power.

Questioning AI Ethics to Inform Regulatory Processes

Drawing on the open-source movement’s prioritization of transparency, decentralization, fairness, liberty, and empowerment of the people, responses to the following five questions should be required of all companies creating opaque AI applications such as ChatGPT.

Is it fair to use an opaque black-box approach for AI technologies when the implications and impacts of complex AI technologies pose so many significant risks? The consequences of AIs are great and must be matched with proportionately higher levels of accountability. The volatile impacts of AIs are expected to be exponentially greater over time than those of past technologies. Therefore, it would be less risky for human society in the long run for companies to embrace the open-source approach, which has already demonstrated the critical value of transparency and open availability of source materials.

If building upon valuable, free, good-faith open-source research, is it then morally correct to build opaque black boxes for private profit? Significant open-source contributions, made in good faith, have laid the foundations for present-day AIs. Many of the technological modules used within applications like ChatGPT, such as transformers, came from open-source research. Having reaped the benefits of open source, companies that privatize critical AI models block societal innovation opportunities; even if this is presently legal, it should be considered ethically wrong.

Why should for-profit companies be allowed to deprive people of their right to know the specific details of the data an AI (one they are expected to use, compete against, and perhaps even be subject to in the future) was developed with? Large language models are trained on vast quantities of data; in the interests of public benefit and transparency, all data “ingredients” used for training must be detailed in a testable manner. If the specific texts on which GPT-3, GPT-4, and ChatGPT have been trained remain largely undisclosed, it is difficult for the public or governments to audit for the fair use of data; to gauge whether restricted, protected, private, or confidential data have been used; and to see whether a company has added synthetic data or performed other manipulations that cause the AI to present biased views on critical topics. All training data must be declared in real time.
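One concrete form a “testable” training-data declaration could take is a published manifest of content hashes that auditors can check against source documents. This is a minimal sketch under that assumption; no AI vendor currently publishes this exact format:

```python
import hashlib
import json

def manifest_entry(doc_id: str, text: str, license_tag: str) -> dict:
    """One auditable record per training document: id, content hash, license."""
    return {
        "id": doc_id,
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "license": license_tag,
    }

# Hypothetical training corpus (illustrative documents only).
corpus = {
    "doc-001": ("An openly licensed encyclopedia article.", "CC-BY-SA"),
    "doc-002": ("A public-domain government report.", "public-domain"),
}

# The company publishes this manifest alongside the model.
manifest = [manifest_entry(k, text, lic) for k, (text, lic) in corpus.items()]
print(json.dumps(manifest, indent=2))

def verify(doc_text: str, entry: dict) -> bool:
    """An auditor holding the original document can confirm it matches the manifest."""
    return hashlib.sha256(doc_text.encode("utf-8")).hexdigest() == entry["sha256"]
```

Because the hash commits the company to an exact document, an auditor can confirm what was used (and detect undeclared substitutions) without the company having to host the full corpus itself.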

Why should companies not be held responsible for transparency and compelled to demonstrate the absence of deliberate bias mechanisms and output-manipulating systems within their applications? Combining complex, risk-inducing, hidden AI technologies with opacity about data use gives “emergence” to the potential rise and spread of manipulative AIs. Companies must be required to show that their AIs provide a fair and unbiased representation of information, and they must be held responsible for proving in real time that their systems are indeed reflecting “facts as they are.” The absence of protections for the public could facilitate the mass manipulation of users; consider a future filled with powerful, mind-bending, manipulative AIs under the control of a self-declared elite few.
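Even with a black box, a regulator with query access could run simple external checks, for instance flagging inconsistent answers to paraphrased factual prompts as possible output manipulation. A toy sketch follows; query_model is a hypothetical stand-in, not any real model API:

```python
def query_model(prompt: str) -> str:
    # Stand-in for a real model call; a toy lookup table for demonstration.
    canned = {
        "What year was the telephone patented?": "1876",
        "In which year did Bell patent the telephone?": "1876",
        "Telephone patent year?": "1876",
    }
    return canned.get(prompt, "unknown")

def consistency_audit(paraphrases: list) -> bool:
    """Return True if the model answers all paraphrases of a fact identically."""
    answers = {query_model(p) for p in paraphrases}
    return len(answers) == 1

prompts = [
    "What year was the telephone patented?",
    "In which year did Bell patent the telephone?",
    "Telephone patent year?",
]
print(consistency_audit(prompts))  # True for this toy model
```

Checks like this only probe behavior from the outside; proving the absence of deliberate bias mechanisms, as the paragraph demands, would still require access to the model internals that open-source disclosure provides.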

Why should for-profit AIs not be regulated by enhanced, AI-appropriate laws and policies for consumer protection? ChatGPT Plus was released at $20 per month for privileged access in February 2023. For a company that started as a “nonprofit to develop AI” with a self-declared purpose to “benefit humanity as a whole,” this rush to monetization can appear opportunistic and shortsighted. When an AI is presented as a commercial product, the company profiting from it should be held liable for full transparency and for all the output the AI produces, and companies should not be allowed to actively or passively coerce users (again) into signing away their fair rights. Governments must break the habit of acting only after powerful companies and wealthy investors have secured positive returns on investment, and companies must be challenged to develop profit models that accommodate full transparency of methods and data.

Fear of abuse or global security concerns are feeble excuses for hiding general-purpose scientific discoveries, building opacity, and creating black boxes. Instead of waiting for harm to occur, governments must be proactive: for example, if large language model applications had been covered by proactive AI regulations before ChatGPT’s launch, we would be able to use the technology more confidently and more productively, with an informed understanding of its biases, limitations, and potential for manipulation. Monetization and positive returns on investment are important, and sustainable business models may require some opacity in final production systems. However, given the nature and power of AI technologies, utmost care must be taken to provide open availability of AI foundation models and training data, and to ensure transparency, fairness, accountability, the application of open-source principles, and the adoption of responsible AI practices.

Supporting the integration of open data and AI, along with proactive policies built on the principles of the open-source movement, will serve as a sustainable value-creation strategy and help ensure that the benefits of AI are distributed equitably to all people. A framework derived from the open-source movement will ensure an optimal measure of public power over artificial intelligence and lead to a much-needed improvement in accountability and responsible behavior by the companies and governments that “own” these technologies.
