
Corgi Wants to Be the Insurance Company for the AI Era
The YC-backed insurer just hit a $1.3B valuation. Its AI liability product is aimed squarely at startups building with models they do not fully control.
This article was produced by the AETW editorial team.
The gap it is filling
When a startup's AI model hallucinates a legal citation, produces a discriminatory credit score, or leaks training data in a breach, traditional tech errors and omissions (E&O) insurance was not built to respond. Those policies were written for software that fails in predictable ways. AI fails in weirder ones.
Corgi, a YC-backed insurance carrier founded in 2024, is betting that gap is large enough to build a billion-dollar company around. This week the company launched a dedicated AI liability product and announced a $160 million Series B led by TCV, valuing the startup at $1.3 billion. The round brings total funding raised to over $268 million, less than two years after the company was founded.
What the AI coverage actually covers
Rather than selling a standalone policy, Corgi integrates its AI liability coverage directly with a customer's existing Tech E&O policy. The structure is modular, meaning companies choose coverage based on the specific risks in their product, not a one-size-fits-all package.
The covered scenarios include: claims from biased algorithms in hiring, lending, or healthcare decisions; liability from inaccurate or harmful content generated by an LLM; legal disputes over training data and IP; adversarial attacks on deployed models; synthetic media misuse; and failures in autonomous systems. Corgi also clarifies a question that trips up many founders building on top of OpenAI, Anthropic, or open-source models - if a customer sues over harm caused by an upstream model's behavior, the startup's own Tech E&O responds, not the foundation model provider's policy.
The business model underneath
Corgi operates as a full-stack carrier, which means it underwrites, issues, and manages claims itself rather than acting as a broker sitting between the customer and a traditional insurer. The company received regulatory approval as a licensed carrier in July 2025. By cutting out the broker layer and running underwriting on its own AI systems, Corgi claims it can generate quotes in under 10 minutes and bind coverage the same day. Traditional underwriting cycles often run multiple weeks.
The startup packages its coverage into stage-appropriate tiers - a Pre-Seed bundle focused on commercial general liability (CGL), directors and officers (D&O), Tech E&O, and Cyber, scaling up through Series A and growth-stage bundles as headcount and enterprise contract exposure increase. Coverage modules can be added or upgraded from a dashboard without starting a new policy process, which matters when a startup goes from 10 to 100 employees or signs its first large enterprise deal in a quarter.
Why the timing makes sense
Three forces are converging that make AI-specific insurance increasingly non-optional for startups. Enterprise customers are demanding proof of AI risk coverage before signing API agreements. Investors are auditing data provenance and IP posture as part of due diligence before closing rounds. And regulators, particularly under the EU AI Act, are tightening compliance requirements in ways that make having documented risk controls a commercial necessity, not just a legal one.
Corgi's annual recurring revenue has crossed $40 million since receiving its carrier license. The company started in property management insurance and is now moving into trucking as its first expansion outside the startup market. Payroll and small business coverage are flagged as future targets. Co-founder Emily Yuan noted that insurance is still running on infrastructure built centuries ago - Corgi's pitch is that AI-era risk needs AI-era underwriting.
The risks worth watching
The product is genuinely useful, but there are real constraints to understand. AI liability claims are still rare enough that actuarial models for pricing them are immature. Corgi is writing policies in a risk category where historical loss data is thin, which means pricing will be calibrated over time and could shift significantly as more claims are filed. It is also worth noting that coverage requires an existing Tech E&O policy - the AI module is an add-on, not a standalone product for companies that have not already purchased base coverage.
There is also a definitional question around what counts as an AI system failure versus a human one. A model producing a bad output because the user prompted it incorrectly is a different situation from a model failing on its own - and how those claims are adjudicated will determine whether this product delivers what it promises.
AI & Technology Researcher
Brian Weerasinhe is the founder and editor of AI Eating The World, where he covers artificial intelligence, tech companies, layoffs, startups, and the future of work. His reporting focuses on how AI is transforming businesses, products, and the global workforce. He writes about major developments across the AI industry, from enterprise adoption and funding trends to the real-world impact of automation and emerging technologies.


