The Speed Problem

We keep telling ourselves a comforting story about artificial intelligence: that it’s just another technology, another productivity boost, another chapter in the long history of tools. This story persists because it allows us to avoid a harder reckoning. Tools don’t argue. They don’t persuade. They don’t generate entire epistemic environments at scale. AI does, and pretending otherwise is a failure of moral imagination.

What’s happening now isn’t subtle. Systems are being deployed that can mimic understanding well enough to dissolve long-standing distinctions between expertise and plausibility, authorship and synthesis, truth and fluency. And instead of pausing to ask what constraints should exist, we’ve treated deployment itself as justification. It works, therefore it ships. That’s not judgment. It’s abdication.

I find it strange how quickly dependence sets in. A model is released, it performs adequately, and within weeks it’s framed as indispensable. At that point, questioning its role sounds reactionary, even when the costs are obvious. We’ve inverted the burden of proof. Rather than asking why such systems should exist everywhere, we ask skeptics to explain why they shouldn’t. This is how infrastructure captures thought.

There’s also a persistent category error in how people talk about risk. We fixate on whether a given system is biased, or safe, or well aligned, while ignoring the more basic fact that we’re delegating judgment to processes that have no concept of human well-being. Intelligence without consciousness isn’t insight. It’s amplification. And amplification doesn’t care what it amplifies.

I’m not claiming that catastrophe is inevitable. I am claiming that our current posture is irrational. We are moving faster than our ethical frameworks, faster than our institutions, faster than our willingness to say no. We are building systems that shape belief and behavior while reassuring ourselves that we’ll figure out the consequences later. That confidence is unearned.

If this goes badly, it won’t be because no one raised concerns. It will be because those concerns were treated as aesthetic objections rather than structural ones. We mistook speed for progress, capability for wisdom, and convenience for legitimacy. That mistake is familiar. What’s new is the scale at which we’re making it.

And the most troubling part is how normal all of this already feels. That, more than anything, should worry us.