The most consequential general-purpose technologies — fire,
agriculture, the printing press, electricity, the internet —
have one thing in common. The people who recognized them early
and built around them captured most of the value. The people
who were skeptical or waited for clarity captured very little.
None of those technologies waited for permission.
AI is on that list. It has been, for at least a decade. The
curves above are not speculation; they are
measured.
The base case isn't "maybe AI will be a big deal in fifteen
years." The base case, meaning the straight trend extrapolation,
the thing that has happened consistently every year since 2010,
is that frontier intelligence will be roughly 1,000× cheaper per
unit of capability five years from now than it is today. Plan
for that.
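The ~1,000× figure implies a steady compounding rate. A quick sketch of the implied arithmetic (the annual rate here is derived purely from the article's own extrapolation, not from independent data):

```python
# What per-year cost decline does "~1,000x cheaper in five years" imply?
# Illustrative arithmetic only; 1,000x is the article's extrapolation.

factor_5yr = 1_000
years = 5

# Geometric mean: the constant per-year factor that compounds to 1,000x.
annual = factor_5yr ** (1 / years)
print(f"Implied per-year cost decline: ~{annual:.1f}x")  # ~4.0x

# Cumulative cheapness year by year at that constant rate.
for y in range(1, years + 1):
    print(f"year {y}: ~{annual ** y:,.0f}x cheaper")
```

In other words, the claim is equivalent to costs falling roughly 4× every year, which is the kind of rate a risk model can actually be stress-tested against.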
For a risk professional, that means three things. First, any
risk model that assumes today's labor costs, or treats human
judgment as a permanent bottleneck, is wrong on a five-year
horizon. Second, the firms that integrate AI deeply (not as a
chatbot bolted on, but as a re-architecting of their core
processes) will compound their advantages year over year. Third,
the second-order risks (concentration, shared model failure
modes, systemic prompt injection, regulatory whiplash) are not
yet priced into most risk frameworks, and the gap between the
firms that price them and the firms that don't will be enormous.
None of this is forecasting. It is bookkeeping on a curve that
has held for fourteen years. The only real question is whether
you choose to be the person who saw it — and adjusted — or the
person who didn't.