Rainy day fund would help people who lose their jobs thanks to AI
Oct 18, 2024
Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.
Artificial intelligence will eliminate jobs.
Companies may not need as many workers as AI increases productivity. Other workers may simply be swapped out for automated systems. Call it what you want — displacement, replacement or elimination — but the outcome is the same: stagnant, struggling communities. The open question is whether we will learn from past mistakes. Will we proactively take steps to support the communities most likely to bear the cost of “innovation”?
We’ve seen what happens when communities experience sustained loss of meaningful work. Globalization caused more than 70,000 factories to close and 5 million manufacturing workers to look for new jobs. Those forced to find work elsewhere rarely found a good substitute. The remaining jobs usually paid less, provided fewer benefits and afforded less security in comparison to a union job at a factory, for example.
Economists assumed that those workers would eventually move to more lucrative pastures and find areas with more economic vibrancy. Workers stayed put. It’s hard to leave your pasture when it’s the place you, your family and your community have long called home. This tendency to stay put, though, created a difficult reality. Suddenly, whole communities found their economic well-being in decline. That’s a recipe for unrest.
The same story played out in my home state, Oregon. New technology and policies rendered the timber industry a dying trade. Residents of towns like Mill City, a timber town through and through, didn’t jointly march to a new area but understandably stayed where their families had established deep roots.
It’s time to stop assuming that people will give up on their communities. Home is much more than just a job. So when AI eliminates jobs, what safeguards will be in place so that people can remain in their communities and find other ways to thrive?
I don’t have a full answer to that question, but there’s at least one safeguard that deserves consideration: a rainy day fund. We don’t know when, where and how rapidly AI will upend a community’s economic well-being. That’s why we need to create a support fund that can help folks who suddenly find themselves with no good options. This would mark an improvement on unemployment insurance because it would be specifically targeted to assist those on the losing end of our AI gamble and would be available to both workers and local governments.
The AI companies responsible for prioritizing their pursuit of artificial general intelligence — AI systems with human-level capabilities — over community stability should front the costs of that fund. Congress can and should tax the companies actively inducing a new wave of displacement.
The fund should be disbursed upon any sizable disruption to a specific industry or sector. Both cities and workers could apply for support to weather economic doldrums and find new ways to thrive. Such support helped us all get through Covid. A similar strategy might help mitigate the worst-case scenarios associated with AI.
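To make the trigger concrete, here is a minimal sketch, in Python, of how a disbursement rule of this kind might be expressed. Everything in it is a hypothetical illustration: the 10 percent threshold, the DisplacementReport fields and the qualifies_for_fund helper are assumptions made for the sake of the example, not part of any existing statute or proposal.

```python
from dataclasses import dataclass

@dataclass
class DisplacementReport:
    """Hypothetical summary of AI-related job losses in one sector of one community."""
    sector: str
    jobs_lost: int            # jobs eliminated over the trailing 12 months
    baseline_employment: int  # sector employment before the disruption

# Hypothetical rule: a "sizable disruption" means losing 10% of a sector's jobs.
DISRUPTION_THRESHOLD = 0.10

def qualifies_for_fund(report: DisplacementReport) -> bool:
    """Return True if losses cross the illustrative threshold, meaning a city or
    worker in that sector could apply to the fund for support."""
    if report.baseline_employment <= 0:
        return False
    return report.jobs_lost / report.baseline_employment >= DISRUPTION_THRESHOLD

# Example: a town whose dominant sector sheds 1,200 of 8,000 jobs would qualify.
print(qualifies_for_fund(DisplacementReport("logistics", 1200, 8000)))  # True
```

However such a rule is ultimately written, the design choice it illustrates is the one described above: support is triggered by a measurable shock to an industry or sector, not by an individual unemployment claim.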
The potential downsides of this fund are outweighed by the certain benefits of more resilient communities. A tax or penalty on AI would hinder the ability of AI companies to develop and deploy AI as quickly as possible. The specific allocation of that revenue to a rainy day fund might also nudge companies to avoid creating models likely to disrupt various professions. That’s all fine by me. We have survived centuries without AI; there’s no need for the latest and greatest model to arrive as soon as possible, especially given the immense costs of that pace of innovation.
Now is the time for Congress to enact such a proposal. Following the election, we may find Congress to be even more gridlocked and fragmented than before. Elected officials should welcome the chance to tell their constituents about a policy to bolster their economic prospects.
The urgency to address the job displacement caused by AI cannot be overstated. By establishing a rainy day fund, taxing AI companies to support displaced workers and exploring additional policies to maintain community stability, we can mitigate the adverse effects of rapid technological advancement. Congress must prioritize the well-being of communities over the relentless pursuit of AI innovation. Doing so will not only knit a stronger social fabric but also ensure AI develops in line with the public interest.
What's next for the consumer revolution in health care?
Oct 18, 2024
Pearl, the author of “ChatGPT, MD,” teaches at both the Stanford University School of Medicine and the Stanford Graduate School of Business. He is a former CEO of The Permanente Medical Group.
For years, patients have wondered why health care can’t be as seamless as other services in their lives. They can book flights or shop for groceries with a few clicks, yet they still need to take time off work and drive to the doctor’s office for routine care.
Two advances are now changing this outdated model and ushering in a new era of health care consumerism. With at-home diagnostics and generative artificial intelligence, patients are beginning to take charge of their health in ways previously unimaginable.
Patients can now bypass the doctor’s office and diagnose a range of medical conditions with home testing. Meanwhile, one in six Americans already use generative AI for medical advice. Together, these technologies are shifting health care away from the traditional, clinician-led model to one where empowered patients can make independent medical decisions.
But with this power comes responsibility. The shift to health care consumerism will require patients, doctors and government to assume new roles, ensuring people are protected and the medical profession remains sustainable into the future.
From Lab To Living Room: The Rise Of Patient Empowerment
The early days of Covid-19 showcased a stark contrast between what patients and clinicians expect from health care.
While doctors and public health officials prioritized PCR testing — accurate but uncomfortable and slow — patients flocked to antigen tests that were easier, faster and nearly as reliable.
Today, the preference for at-home testing is more entrenched than ever. Americans already use FDA-approved home tests for a wide range of conditions, from diagnosing urinary tract infections to confirming pregnancy. Home tests for cervical cancer and syphilis are now available, making it clear that patients will continue favoring the privacy and convenience of at-home options over the discomfort of doctor visits. Dozens more tests are soon to follow.
Generative AI is also gaining traction as patients become more comfortable using the technology to make health care decisions and choose the best treatments. A recent study found that chatbot responses to patient questions were rated four times higher in quality and nine times higher in empathy than those of doctors. As AI continues to improve — becoming 32 times more powerful in the next five years — consumers will increasingly trust it as a source of medical expertise.
These advancements will require fundamental changes to health care. The question is: How should doctors, patients and government officials approach medical care going forward?
Doctors Bear New Burdens In Consumer-Driven Medicine
Whether clinicians embrace it or not, health care consumerism is here. Patients now rank convenience in health care as more important than quality and even cost. This means doctors will need to adapt or risk losing patients to retail clinics and telemedicine platforms.
To respond to these changes, doctors must see home testing and AI not as threats, but as valuable tools to build a more collaborative relationship with patients.
For example, when patients receive a positive result from an at-home test for syphilis or cervical cancer, they are likely to feel a sense of urgency — even if immediate intervention isn’t medically necessary. Doctors who reserve time for same-day or next-day appointments will help build trust, minimize treatment delays and reduce the risk of complications.
Similarly, as patients increasingly use generative AI, they will come to their doctors not just with symptoms but often with a presumed diagnosis or suggested treatment plan. Given AI’s growing ability to match human diagnostic accuracy, clinicians will need to approach these AI-generated insights with an open mind. Rather than dismissing them or restarting the diagnostic process from scratch, doctors should integrate this information into their decision-making. And, when clinically appropriate, offering a same-day telemedicine consultation will help address patient concerns quickly while strengthening the doctor-patient relationship.
To a busy doctor — unaccustomed to this level of service — these changes might seem overwhelming. But with AI having already provided the likely diagnosis, the time required for doctors to confirm and treat the issue will be much shorter, helping to reduce appointment backlogs and streamline care.
The Responsibility Of The ‘Consumer Patient’
With patient empowerment comes responsibility. Patients embracing health care consumerism must be prepared to take greater accountability for their health. Here are three key steps.
- Knowledge: Patients should educate themselves on which at-home diagnostic tests are available and appropriate for their symptoms or screening needs. Similarly, they should familiarize themselves with generative AI platforms and test them with past medical encounters to gauge their accuracy.
- Expertise: Patients should compare home testing options with traditional lab tests. How accurate are at-home tests? Are errors likely to result in false positives or false negatives? Understanding these factors is essential to judging whether the convenience of at-home testing outweighs the risks.
- Planning: Don’t wait until a test result or AI diagnosis causes concern. Plan ahead. Who will you contact if a test raises red flags? Where will you go if AI suggests a serious issue? If you don’t have a personal physician, will you turn to telemedicine, urgent care or an online doctor?
A Shift In Government: From Regulator To Facilitator
Consumer-friendly health care tools allow patients to stay on top of preventive care, better manage chronic conditions and avoid unnecessary office visits. But for this shift to succeed, government agencies like the CDC and FDA must take on new roles, as well.
First, the government must support companies developing safe and secure consumer health tools, from at-home diagnostics to GenAI platforms. Second, it needs to ensure these tools are accessible to all Americans, regardless of their location or income. Finally, agencies must prioritize educating the public on how to use these technologies safely and effectively.
When done in collaboration with doctors and supported by government efforts, these tools will make health care more efficient and effective. Just as Amazon revolutionized shopping with speed and convenience, home testing and GenAI will drive consumerism in medicine. The combination of engaged clinicians, empowered patients and advanced technology will be far more powerful than any of these alone.
The consumerism train has left the station. It’s time for doctors, patients and the government to get on board.
Reality bytes: Kids confuse the real world with the screen world
Oct 04, 2024
Patel is an executive producer/director, the creator of “ConnectEffect” and a Builders movement partner.
Doesn’t it feel like summer break just began? Yet here we are again. Fall’s arrival means kids have settled into a new school year with new teachers, new clothes and, for parents and kids alike, a new “attitude” for starting off on the right foot.
Yet it’s hard for any of us to find footing in an increasingly polarized and isolated world. The entire nation is grappling with a rising tide of mental health concerns — including the continually increasing alienation and loneliness in children — and parents are struggling to foster real human connection for their kids in the real world. The battle to minimize screen time is certainly one approach. But in a world that is based on screens, apps and social media, is it a battle that realistically can be won?
If we want to reduce screens’ negative impact on our children’s mental health, what we need is a “hard reset” of their relationships with their devices by ensuring they are deeply aware of the difference between the real world and the screen world.
I’ve spent the last eight years focused on showing people the difference between these worlds, helping bring them back together, in person, to bridge divides and foster authentic human connection, conversation and community. Like the people I work with, parents can help their children understand the difference between the two worlds through a two-part plan: first, by hard-resetting their misguided relationships with their screens and, second, by intentionally connecting them to others in real life.
Remember when the end of “The Wizard of Oz” revealed that the wizard was just a man behind a curtain? To break a child’s toxic relationship with their screen, parents need to pull back another curtain to show their kids exactly how all media works, from social media and news companies to search engines and apps. Almost everything kids see on their screens is an edit, and behind almost every edit is a similar intention: more likes, followers and users that can be monetized. Under the attention extraction model, almost everything that appears on our screens is designed to maximize our attention for profit, feeding us more content, regardless of the impact it may have on us individually and as a society. If, as a family, you haven’t yet watched the documentary “The Social Dilemma,” the start of the year is a perfect time.
Helping kids realize that the structure of social media is not made with their well-being in mind — in fact, it has a very different motive — can help them recognize that they are not alone in their feelings and reactions to the screen. According to Pew Research, 31 percent of teens say social media makes them feel like their friends are leaving them out and 23 percent say what they see on social media makes them feel worse about their own life. Talking with their peers less about what is on their screens and more about how their screens make them feel offers a point of connection they may not realize is available.
Having spent nearly a decade connecting people, I have found that one of the secrets to connection in the real world is the introduction. In other words, how people are introduced to one another often sets up the way they will see one another. Based on the primacy-recency effect, when people first connect through the two-dimensional edits in the screen world, they make assumptions that lean into preconceived notions of how the “other” should be. In a country growing increasingly polarized and dehumanized by social media echo chambers and a profound lack of human connection, this reality impacts our children, who have fewer real-world experiences under their belts.
The beginning of a school year offers a timely opportunity to allow children the space to paint a more complete picture of their new classmates before screens intervene. A simple initialism, EPIC, can provide parents with four techniques for making sure interactions are maximized for connection and trust.
- Equalization: What are the meaningful overlaps of life experiences that your child and those around them share? Have them seek similarities, rather than differences, with the kids they are about to meet. If they change what they are looking for, it will change what they see.
- Personalization: In a world of infinite edits of information that make it hard to find common ground, encourage your child to personalize what they think based on their own life experiences, rather than regurgitate information they absorbed from their screen.
- Investigation: When people meet for the first time, they often feel anxious about what they are going to say. Suggest your child focus on trying to learn and understand the other person rather than worrying about their responses. This empathy will be felt by the other person, and is a powerful driver of trust and connection.
- Collaboration: Many young adults feel overwhelmed by the burden of social interactions, fearing if it goes wrong it’s all their fault. Social interactions are less worrisome when people remember both sides are equal participants in a collaboration and it’s not all on them.
If we use this time at the start of every year to teach children the realities of the screens they use and how to intentionally foster deeper, real world introductions, they will create a future for themselves and others empowered and enriched by social connections, not fearful of them.
We may face another 'too big to fail' scenario as AI labs go unchecked
Oct 02, 2024
Frazier is an assistant professor at the Crump College of Law at St. Thomas University and a Tarbell fellow.
In the span of two or so years, OpenAI, Nvidia and a handful of other companies essential to the development of artificial intelligence have become economic behemoths. Their valuations and stock prices have soared. Their products have become essential to Fortune 500 companies. Their business plans are the focus of the national security industry. Their collapse would be, well, unacceptable. They are too big to fail.
The good news is we’ve been in similar situations before. The bad news is we’ve yet to really learn our lesson.
In the mid-1970s, a bank known for its conservative growth strategy decided to more aggressively pursue profits. The strategy worked. In just a few years the bank became the largest commercial and industrial lender in the nation. The impressive growth caught the attention of others — competitors looked on with envy, shareholders with appreciation and analysts with bullish optimism. As the balance sheet grew, however, so did the broader economic importance of the bank. It became too big to fail.
Regulators missed the signs of systemic risk. A kick of the bank’s tires gave no reason to panic. But a look under the hood — specifically, at the bank’s loan-to-assets ratio and average return on loans — would have revealed a simple truth: The bank had been far too risky. The tactics that fueled its go-go years rendered the bank overexposed to sectors suffering tough economic times. Rumors soon spread that the bank was in a financially sketchy spot. It was the Titanic, without the band, to paraphrase an employee.
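For readers unfamiliar with the metric, the loan-to-assets ratio is simply total loans divided by total assets; the higher it runs, the more of the balance sheet is tied up in loans and the thinner the cushion against losses. The short Python sketch below uses made-up numbers purely to illustrate the calculation, not the bank's actual books.

```python
# Hypothetical balance sheet, for illustration only (not the bank's real figures).
total_loans = 34_000_000_000   # dollars lent out to borrowers
total_assets = 42_000_000_000  # everything the bank holds

loan_to_assets = total_loans / total_assets
print(f"Loan-to-assets ratio: {loan_to_assets:.0%}")  # prints "Loan-to-assets ratio: 81%"

# A ratio this high means most of the balance sheet is tied up in loans, leaving
# little liquid cushion if borrowers in a troubled sector default all at once.
```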
When the inevitable run on the bank started, regulators had no choice but to spend billions to keep the bank afloat — preventing it from sinking and taking the rest of the economy down with it. Of course, a similar situation played out during the Great Recession — risky behavior by a few bad companies imposed bailout payments on the rest of us.
AI labs are similarly taking gambles that have good odds of making many of us losers. As major labs rush to release their latest models, they are not stopping to ask if we have the social safety nets ready if things backfire. Nor are they meaningfully contributing to building those necessary safeguards.
Instead, we find ourselves in a highly volatile situation. Our stock market seemingly pivots on earnings of just a few companies — the world came to a near standstill last month as everyone awaited Nvidia’s financial outlook. Our leading businesses and essential government services are quick to adopt the latest AI models despite real uncertainty as to whether they will operate as intended. If any of these labs took a financial tumble or any of the models were significantly flawed, the public would likely again be asked to find a way to save the risk takers.
This outcome may be likely, but it’s not inevitable. The Dodd-Frank Act, passed in response to the Great Recession and intended to prevent another Too Big to Fail situation in the financial sector, has been roundly criticized for its inadequacy. We should learn from its faults in thinking through how to make sure AI goliaths don’t crush all of us Davids.
Some sample steps include mandating and enforcing more rigorous testing of AI models before deployment. It would also behoove us to prevent excessive reliance on any one model by the government — this could be accomplished by requiring public service providers to maintain analog processes in the event of emergencies. Finally, we can reduce the economic sway of a few labs by fostering more competition in the space.
Too Big to Fail scenarios have happened on too many occasions. There’s no excuse for allowing AI labs to become so large and so essential that we collectively end up paying for their mistakes.