Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innova ...

The State of AI Safety with Steven Adler
47:23
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model …

Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US
46:15
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing contrasting and, at times, conflicting regulatory a…

Uncle Sam Buys In: Examining the Intel Deal
47:34
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House’s announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as i…

AI in the Classroom with MacKenzie Price, Alpha School co-founder, and Rebecca Winthrop, leader of the Brookings Global Task Force on AI in Education
1:20:43
MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at h…

The Open Questions Surrounding Open Source AI with Nathan Lambert and Keegan McBride
45:17
Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas …

Export Controls: Janet Egan, Sam Winter-Levy, and Peter Harrell on the White House's Semiconductor Decision
53:37
Alan Rozenshtein, Research Director at Lawfare, sat down with Sam Winter-Levy, a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace; Janet Egan, a senior fellow with the Technology and National Security Program at the Center for a New American Security; and Peter Harrell, a nonresident fello…

Navigating AI Policy: Dean Ball on Insights from the White House
58:11
Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving p…

The Legal Maze of AI Liability: Anat Lior on Bridging Law and Emerging Tech
46:41
In this episode, we talk about the intricate world of AI liability through the lens of agency law. Join us as Anat Lior explores the compelling case for using agency law to address the legal challenges posed by AI agents. Discover how analogies, such as principal-agent relationships, can help navigate the complexities of AI liability, and why it's …

Values in AI: Safety, Ethics, and Innovation with OpenAI's Brian Fuller
50:28
Brian Fuller, product policy leader at OpenAI, joins Kevin to discuss the challenges of designing policies that ensure AI technologies are safe, aligned, and socially beneficial, from the fast-paced landscape of AI development to the balancing of innovation with ethical responsibility. Tune in to gain insights into the frameworks that guide AI's integration…

Because of Woke: Renée DiResta and Alan Rozenshtein on the ‘Woke AI’ Executive Order
46:48
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration’s Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue…

Moving the AGI Goal Posts: AI Skepticism with Sayash Kapoor
58:32
In this episode of Scaling Laws, Kevin Frazier is joined by Sayash Kapoor, co-author of "AI Snake Oil," to explore the complexities of AI development and its societal implications. They delve into the skepticism surrounding AGI claims, the real bottlenecks in AI adoption, and the transformative potential of AI as a general-purpose technology. Kapoo…

A New AI Regulatory Regime? SB 813 with Lauren Wagner and Andrew Freedman
55:30
In this episode, join Kevin Frazier as he delves into the complex world of AI regulation with experts Lauren Wagner of the Abundance Institute and Andrew Freedman, Chief Strategy Officer at Fathom. As the AI community eagerly awaits the federal government's AI action plan, our guests explore the current regulatory landscape and the challenges of im…

AI Action Plan: Janet Egan, Jessica Brandt, Neil Chilson, and Tim Fist
1:03:21
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security, Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations, Neil Chilson, Head of AI Policy at Abundance Institute, and Tim Fist, Director of Emerging Technology Policy at the Institute f…

Lt. Gen. Jack Shanahan: Defense's AI Integration
55:45
Lt. Gen. (ret.) Jack Shanahan joins Kevin Frazier to explore the nuanced landscape of AI in national security, challenging the prevalent "AI arms race" narrative. The discussion delves into the complexities of AI integration in defense, the cultural shifts required within the Department of Defense, and the critical role of public trust and shared na…

Eugene Volokh: Navigating Libel and Liability in the AI Age
58:29
Kevin Frazier brings on Eugene Volokh, a senior fellow at the Hoover Institution and UCLA law professor, to explore the complexities of libel in the age of AI. Discover how AI-generated content challenges traditional legal frameworks and the implications for platforms under Section 230. This episode is a must-listen for anyone interested in the evolvi…

Cass Madison and Zach Boyd: State Level AI Regulation
41:32
Cass Madison, the Executive Director of the Center for Civic Futures, and Zach Boyd, Director of the AI Policy Office at the State of Utah, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how state governments are adjusting to the Age of AI. This conversation explo…

Ethan Mollick: Navigating the Uncertainty of AI Development
1:06:11
In this episode of Scaling Laws, Alan and Kevin discuss the current state of AI growth, focusing on scaling laws, the future of AGI, and the challenges of AI integration into society with Ethan Mollick, Professor of Management at Wharton, specializing in entrepreneurship and innovation. They explore the bottlenecks in AI adoption, particularly the …

The AI Moratorium Goes Down in Flames
55:32
On the inaugural episode of Scaling Laws, co-hosts Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, speak with Adam Thierer, a senior fellow for the Technology and Innovation team at the R Street Institu…

Matt Perault, Ramya Krishnan, and Alan Rozenshtein Talk About the TikTok Divestment and Ban Bill
50:32
Last week the House of Representatives overwhelmingly passed a bill that would require ByteDance, the Chinese company that owns the popular social media app TikTok, to divest its ownership in the platform or face TikTok being banned in the United States. Although prospects for the bill in the Senate remain uncertain, President Biden has said he wil…
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. On March 18, the Supreme Court heard oral arguments in Murthy v. Missouri, concerning the potential First Amendment implications of government outreach to social media platforms—what’s sometimes known as jawboning. The case arrived at the Supreme Cou…

How Are the TikTok Bans Holding Up in Court?
49:27
In May 2023, Montana passed a new law that would ban the use of TikTok within the state starting on January 1, 2024. But as of today, TikTok is still legal in the state of Montana—thanks to a preliminary injunction issued by a federal district judge, who found that the Montana law likely violated the First Amendment. In Texas, meanwhile, another fe…

Jeff Horwitz on Broken Code and Reporting on Facebook
53:58
In 2021, the Wall Street Journal published a monster scoop: a series of articles about Facebook’s inner workings, which showed that employees within the famously secretive company had raised alarms about potential harms caused by Facebook’s products. Now, Jeff Horwitz, the reporter behind that scoop, has a new book out, titled “Broken Code”—which d…

Will Generative AI Reshape Elections?
49:03
Unless you’ve been living under a rock, you’ve probably heard a great deal over the last year about generative AI and how it’s going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might a…

The Crisis Facing Efforts to Counter Election Disinformation
57:00
Over the course of the last two presidential elections, efforts by social media platforms and independent researchers to prevent falsehoods about election integrity from spreading have become increasingly central to civic health. But the warning signs are flashing as we head into 2024. And platforms are arguably in a worse position to counter false…

Talking AI with Data and Society’s Janet Haven
46:22
Today, we’re bringing you an episode of Arbiters of Truth, our series on the information ecosystem. And we’re discussing the hot topic of the moment: artificial intelligence. There are a lot of less-than-informed takes out there about AI and whether it’s going to kill us all—so we’re glad to be able to share an interview that hopefully cuts through…

What Impact Did Facebook Have on the 2020 Elections?
45:24
How much influence do social media platforms have on American politics and society? It’s a tough question for researchers to answer—not just because it’s so big, but also because platforms rarely if ever provide all the data that would be needed to address the problem. A new batch of papers released in the journals Science and Nature marks the late…

Brian Fishman on Violent Extremism and Platform Liability
1:04:21
Earlier this year, Brian Fishman published a fantastic paper with Brookings thinking through how technology platforms grapple with terrorism and extremism, and how any reform to Section 230 must allow those platforms space to continue doing that work. That’s the short description, but the paper is really about so much more—about how the work of con…

Cox and Wyden on Section 230 and Generative AI
29:52
Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content. On this episode of Arbiters of Truth, Lawfare’s occasional series on the information ecosy…

An Interview with Meta’s Chief Privacy Officers
45:53
In 2018, news broke that Facebook had allowed third-party developers—including the controversial data analytics firm Cambridge Analytica—to obtain large quantities of user data in ways that users probably didn’t anticipate. The fallout led to a controversy over whether Cambridge Analytica had in some way swung the 2016 election for Trump (spoiler: …
If someone lies about you, you can usually sue them for defamation. But what if that someone is ChatGPT? Already in Australia, the mayor of a town outside Melbourne has threatened to sue OpenAI because ChatGPT falsely named him a guilty party in a bribery scandal. Could that happen in America? Does our libel law allow that? What does it even mean f…

A TikTok Ban and the First Amendment
46:32
Over the past few years, TikTok has become a uniquely polarizing social media platform. On the one hand, millions of users, especially those in their teens and twenties, love the app. On the other hand, the government is concerned that TikTok's vulnerability to pressure from the Chinese Communist Party makes it a serious national security threat. T…

Ravi Iyer on How to Improve Technology Through Design
45:05
On the latest episode of Arbiters of Truth, Lawfare's series on the information ecosystem, Quinta Jurecic and Alan Rozenshtein spoke with Ravi Iyer, the Managing Director of the Psychology of Technology Institute at the University of Southern California's Neely Center. Earlier in his career, Ravi held a number of positions at Meta, where he worked …
During recent oral arguments in Gonzalez v. Google, a Supreme Court case concerning the scope of liability protections for internet platforms, Justice Neil Gorsuch asked a thought-provoking question. Does Section 230, the statute that shields websites from liability for third-party content, apply to a generative AI model like ChatGPT? Luckily, Matt…
You've likely heard of ChatGPT, the chatbot from OpenAI. But you’ve likely never heard an interview with ChatGPT, much less an interview in which ChatGPT reflects on its own impact on the information ecosystem. Nor is it likely that you’ve ever heard ChatGPT promising to stop producing racist and misogynistic content. But, on this episode of Arbite…
Tech policy reform occupies a strange place in Washington, D.C. Everyone seems to agree that the government should change how it regulates the technology industry, on issues from content moderation to privacy—and yet, reform never actually seems to happen. But while the federal government continues to stall, state governments are taking action. Mor…

Rick Hasen and Nate Persily on Replatforming Trump on Social Media
43:46
On November 19, Twitter’s new owner Elon Musk announced that he would be reinstating former President Donald Trump’s account on the platform—though so far, Trump hasn’t taken Musk up on the offer, preferring instead to stay on his bespoke website Truth Social. Meanwhile, Meta’s Oversight Board has set a January 2023 deadline for the platform to dec…

A Member of Meta’s Oversight Board Discusses the Board’s New Decision
46:13
When Facebook whistleblower Frances Haugen shared a trove of internal company documents with the Wall Street Journal in 2021, some of the most dramatic revelations concerned the company’s use of a so-called “cross-check” system that, according to the Journal, essentially exempted certain high-profile users from the platform’s usual rules. After the J…

Decentralized Social Media and the Great Twitter Exodus
57:32
It’s Election Day in the United States—so while you wait for the results to come in, why not listen to a podcast about the other biggest story obsessing the political commentariat right now? We’re talking, of course, about Elon Musk’s purchase of Twitter and the billionaire’s dramatic and erratic changes to the platform. In response to Musk’s takeo…
The Supreme Court has granted cert in two cases exploring the interactions between anti-terrorism laws and Section 230 of the Communications Decency Act. To discuss the cases, Lawfare editor-in-chief Benjamin Wittes sat down on Arbiters of Truth, our occasional series on the online information ecosystem, with Lawfare senior editors and Rational Sec…

Mark Bergen on the Rise and Rise of YouTube
1:01:08
Today, we’re bringing you another episode of our Arbiters of Truth series on the online information ecosystem. Lawfare senior editor Quinta Jurecic spoke with Mark Bergen, a reporter for Bloomberg News and Businessweek, about his new book, “Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination.” YouTube is one of the largest a…

The Fifth Circuit is Wrong on the Internet
54:30
Our Arbiters of Truth series on the online information ecosystem has been taking a bit of a hiatus—but we’re back! On today’s episode, we’re discussing the recent ruling by the U.S. Court of Appeals for the Fifth Circuit in NetChoice v. Paxton, upholding a Texas law that binds large social media platforms to certain transparency requirements and si…
A few weeks ago on Arbiters of Truth, our series on the online information ecosystem, we brought you a conversation with two emergency room doctors about their efforts to push back against members of their profession spreading falsehoods about the coronavirus. Today, we’re going to take a look at another profession that’s been struggling to counter li…

The Corporate Law Behind Musk v. Twitter
58:44
You’ve likely heard that Elon Musk wanted to buy Twitter… and that he is now trying to get out of buying Twitter… and that at first he wanted to defeat the bots on Twitter… but now he’s apparently surprised that there are lots of bots on Twitter. It's a spectacle made for the headlines, but it's also, at its core, a regular old corporate law disput…

Online Speech and Section 230 After Dobbs
55:52
When the Supreme Court handed down its opinion in Dobbs v. Jackson Women’s Health Organization, overturning Roe v. Wade, the impact of the decision on the internet may not have been front of mind for most people thinking through the implications. But in the weeks after the Court’s decision, it’s become clear that the post-Dobbs legal landscape arou…
Since the beginning of the pandemic, we’ve talked a lot on this show about how falsehoods about the coronavirus are spread and generated. For this episode, Evelyn Douek and Quinta Jurecic spoke with two emergency medicine physicians who have seen the practical effects of those falsehoods while treating patients over the last two years. Nick Sawyer …

What We Talk About When We Talk About Algorithms
1:03:30
Algorithms! We hear a lot about them. They drive social media platforms and, according to popular understanding, are responsible for a great deal of what’s wrong about the internet today—and maybe the downfall of democracy itself. But … what exactly are algorithms? And, given they’re not going away, what should they be designed to do? Evelyn Douek …

The Jan. 6 Committee Takes On the Big Lie
54:06
The House committee investigating the Jan. 6 insurrection is midway through a blockbuster series of hearings exploring Donald Trump’s efforts to overturn the 2020 election and disrupt the peaceful transfer of power. Central to those efforts, of course, was the Big Lie—the false notion that Trump was cheated out of victory in 2020. This week on Arbi…

Rebroadcast: The Most Intense Online Disinformation Event in American History
50:30
If you’ve been watching the hearings convened by the House select committee on Jan. 6, you’ve seen a great deal about how the Trump campaign generated and spread falsehoods about supposed election fraud in 2020. As the committee has argued, those falsehoods were crucial in generating the political energy that culminated in the explosion of the Janu…

Defamation, Disinformation, and the Depp-Heard Trial
56:03
If you loaded up the internet or turned on the television somewhere in the United States over the last two months, it’s been impossible to avoid news coverage of the defamation trial of actors Johnny Depp and Amber Heard—both of whom sued each other over a dispute relating to allegations by Heard of domestic abuse by Depp. In early June, a Virginia…