Daniel Schwarcz
@danielschwarcz.bsky.social
Fredrikson & Byron Professor of Law, University of Minnesota Law School

Interested in insurance law, regulation & policy and the impact of AI on lawyering.

Access my research here: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=499486
For more details on how and why an insurance exchange for homeowners insurance makes sense, check out my recently published article: Obamacare For Homeowners Insurance: Fixing America's Broken Insurance Markets In A Time Of Climate Change. papers.ssrn.com/sol3/papers....
November 5, 2025 at 4:23 PM
A centralized insurance exchange would be a pro-market reform.

And we already know it works: the ACA created similar exchanges for health insurance.

Ironically, this idea makes more sense in homeowners insurance, where there are more carriers due to lower entry barriers.
November 5, 2025 at 4:23 PM
Right now, huge discrepancies exist in coverage, pricing, and mitigation discounts.

Why? Because most homeowners don’t comparison shop—it’s hard, confusing, and time-consuming.

That lack of shopping weakens competition and drives up costs for those least able to bear them.
November 5, 2025 at 4:23 PM
This could expand the domains where liability and insurance actually work, while informing smarter ex ante regulation. The draft is still evolving; feedback welcome! papers.ssrn.com/sol3/papers....
The Limits of Regulating AI Safety Through Liability and Insurance: Lessons From Cybersecurity
November 5, 2025 at 3:35 PM
We also propose a data-driven complement: build on California’s new transparency laws and other insurance data systems to mandate standardized AI safety reporting.
November 5, 2025 at 3:35 PM
But many AI risks—cyberattacks, terrorism, financial fraud, even psychosis—are adversarial and hard to measure, making insurance-based safety incentives much weaker.
November 5, 2025 at 3:35 PM
We clarify when liability and insurance can actually incentivize AI safety—mainly when losses are common and observable (like self-driving cars or AI-enabled medical devices).
November 5, 2025 at 3:35 PM
California’s transparency-first approach can help build the data needed to actually understand and reduce AI risk—something insurers alone can’t do.
October 21, 2025 at 4:05 PM
That's because expanded liability would make insurers key intermediaries in promoting AI safety. But insurers have struggled to model or mitigate analogous risks in cyber insurance, and they would likely struggle again with AI.
October 21, 2025 at 4:05 PM
Drawing from our new draft, The Limits of Regulating AI Safety Through Liability and Insurance: Lessons From Cybersecurity, papers.ssrn.com/sol3/papers...., we argue that expanding liability for AI harms won’t necessarily make systems safer.
October 21, 2025 at 4:05 PM
Unlike last year’s vetoed S.B. 1047, this new law explicitly caps penalties—even for catastrophic AI failures—and avoids assigning broad liability to AI developers.
That’s deliberate: as Sen. Scott Wiener said, “SB 53 is more focused on transparency.”
October 21, 2025 at 4:05 PM
This framework draws on principles of consumer financial regulation and lessons from the EU’s new AI Act. It seeks to balance the promise of AI-driven financial advice with the need to protect consumers from significant harm. Check it out!
September 25, 2025 at 1:56 PM
Our solution is a dual regulatory approach: (1) a licensing requirement for robo-advisors that use generative AI to match consumers with products, and (2) heightened duties of care and loyalty for all robo-advisors.
September 25, 2025 at 1:56 PM
We argue that the current U.S. regulatory framework is not up to the task. Existing rules fail to prevent AI-enabled robo-advisors from providing conflicted, inaccurate, or manipulative advice on a large scale.
September 25, 2025 at 1:56 PM
Generative AI can make customized financial advice widely available. But because it mimics human advisors so convincingly, it also creates serious risks that consumers will be nudged into costly or inappropriate products.
September 25, 2025 at 1:56 PM
Check out the broader law review article, The Limits of Regulating AI Safety Through Liability and Insurance: Lessons From Cybersecurity, on which our Lawfare piece is based, here: papers.ssrn.com/sol3/papers....
September 9, 2025 at 3:44 PM