
Insurtech Solutions

Jevons’ Paradox in Insurance: The Impact of Plummeting AI Costs on Underwriting and Claims

  • Writer: Don Seibert
  • Feb 16
  • 3 min read



[Image: a steam engine and a computer, illustrating Jevons' Paradox]

Jevons’ Paradox was once a fairly obscure 1865 observation about steam engines and coal consumption. The idea? As technology makes something more efficient, total usage increases instead of decreasing. 

In 1865, William Stanley Jevons predicted that as steam engines became more efficient, the demand for coal would increase. In 2025, the same argument is being made about intelligence: as AI gets dramatically cheaper and more powerful, usage will grow even faster.

In insurance, we're always making trade-offs with our capabilities: no-touch underwriting and claims handling for the easy, the small, and the low-impact, while reserving our trained underwriting and claims resources for the large, the tricky, and the risky. In a world of limited resources, that makes sense: spend money and time where they can have the highest impact. But what happens when intelligence and insurance acumen stop being scarce and start being unlimited?


AI Is Getting Cheaper—Fast

OpenAI CEO Sam Altman recently pointed out that the cost of using a given level of AI is dropping by a factor of 10 (or more) every year.

For example, the price per token dropped roughly 150-fold from GPT-4 in early 2023 to GPT-4o in mid-2024. This rate of improvement outpaces Moore's Law and suggests that intelligence—like computing power before it—is quickly becoming a commodity.
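
To make that concrete, here is a back-of-the-envelope sketch in Python. The starting price and the steady 10x annual decline are illustrative assumptions, not quoted figures:

```python
# Illustrative only: project what a sustained 10x-per-year price decline
# implies. The starting price below is a hypothetical figure.
start_price_per_million_tokens = 30.00  # assumed $/1M tokens in year 0
annual_decline_factor = 10              # assumed: cost of a given capability drops ~10x/year

for year in range(5):
    price = start_price_per_million_tokens / annual_decline_factor ** year
    print(f"Year {year}: ${price:,.4f} per 1M tokens")
# Year 0: $30.0000 ... Year 4: $0.0030, effectively a rounding error per account
```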

And when intelligence gets cheaper, Jevons’ Paradox tells us that people will demand more of it.

So what happens when underwriting and claims processing have nearly unlimited AI-driven intelligence at their disposal?


Underwriting: Every Detail, Every Account, Every Time

Today, underwriting is limited by time, cost, and human resources.

Not everything gets reviewed. Not everything gets double-checked. Some accounts get far more scrutiny than others—not because they should, but because of efficiency trade-offs.


  • Loss control reports go unread for many smaller accounts.

  • Small accounts often get only a basic data review, while large accounts get full inspections, risk modeling, and deeper investigation.

  • 'Safe' accounts skip deeper verification—risky ones get extra checks, but that’s based on broad assumptions, not always actual risk factors.

  • Third-party data sources are used selectively, with underwriters only digging deeper when a case seems uncertain or high-stakes.


Now imagine an insurer with unlimited underwriting assistants, each as skilled as a PhD-level researcher.

🔹 Every report would be analyzed—word for word, photo by photo, cross-checked against underwriting rules (see the sketch after this list).

🔹 Every account—small or large—would get the same deep scrutiny. No more skipping details just because of size or assumed safety.

🔹 The web would be scoured—not just company websites, but social media, customer reviews, and even Google Maps images to refine classifications, placements, and risk assessments.

🔹 State and public databases would be mined—tracking similar businesses, ownership changes, and years in operation to spot hidden risks and trends.

🔹 Underwriting would shift from reactive to proactive, following hunches, identifying patterns, and flagging risks before they become claims.
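
As a minimal sketch of what "every report, analyzed and cross-checked" could look like, the Python below runs a loss control report through an LLM reviewer against a small rule set. It assumes the OpenAI Python client; the model name, rules, and sample report are hypothetical placeholders:

```python
# Sketch: cross-check every loss control report against underwriting rules.
# The rules, model name, and sample report below are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

UNDERWRITING_RULES = """
1. Flag operations that differ from the account's stated classification.
2. Flag unresolved loss control recommendations older than 12 months.
3. Flag new exposures (construction, new product lines, subleasing).
"""

def review_report(report_text: str) -> str:
    """Read one report word for word and list every rule-based finding."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an underwriting assistant. Apply these "
                        f"rules to the report and list every finding:\n{UNDERWRITING_RULES}"},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

# At near-zero marginal cost this loop can cover the whole book,
# not just the accounts large enough to justify a human's time.
sample_reports = [  # stand-in data; in practice, every report on file
    "2022 roof repair recommendation still open; a machine shop now "
    "operates in the rear of the building.",
]
for report in sample_reports:
    print(review_report(report))
```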

Most importantly—every clever insight that a human underwriter comes up with, the AI can replicate at scale.


  • Find a great way to spot misclassifications? AI can check every account (a minimal sketch follows this list).

  • Discover an early warning sign of fraud? AI can scan every submission.

  • Identify a new underwriting factor that matters? AI can apply it instantly across the entire book.
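
Here is a minimal sketch of that "one insight, whole book" pattern, with hypothetical account fields and an example misclassification rule:

```python
# Sketch: an underwriter's heuristic, encoded once, applied everywhere.
# The Account fields and the example rule are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    stated_class: str       # classification on the application
    observed_class: str     # classification inferred from web research
    years_in_operation: int

def misclassification_flag(acct: Account) -> bool:
    """One underwriter's insight: disagreement between stated and
    observed classification is an early warning sign."""
    return acct.stated_class != acct.observed_class

# A toy book of business; in practice this would be every in-force account.
book = [
    Account("A-001", "office", "office", 12),
    Account("A-002", "retail", "light manufacturing", 3),
]

flagged = [a.account_id for a in book if misclassification_flag(a)]
print(flagged)  # ['A-002']: the check runs on every account, not a sample
```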


Instead of AI replacing underwriters, it would expand underwriting’s scope—allowing deeper, more data-driven decision-making at scale.
