The Suspicion Tax
November 2025
Last month I caught myself discounting a thoughtful comment on something I'd written. Well-constructed, engaged with the actual argument, the kind of response you hope for. My first instinct wasn't gratitude. It was suspicion. Probably synthetic. I moved on without replying.
The profile was real. Fifteen years of history. An actual person who'd taken time to engage, and I'd defaulted to dismissal because the cost of being fooled felt higher than the cost of being wrong.
This is the suspicion tax. I think it's the thing worth examining - more than the AI content itself.
The signal, not the content
Social interaction rewards us not because of what's said but because of what it signals. Someone with finite time and attention chose to spend some on you. That scarcity creates meaning. The comment matters because a conscious being, with other options, selected your work as worth engaging with.
AI removes this signal. It responds because responding is what it does. No opportunity cost, no selection, no scarcity. The content might be indistinguishable from human output, but the meaning is absent. You weren't chosen. You were processed.
Our reward circuits aren't sophisticated enough to check provenance. The dopamine fires anyway. But something doesn't land. The itch gets scratched; the need persists.
A systems design failure
This is where my usual lens applies: trust, not capability, is the bottleneck.
Social platforms deployed generative AI without designing for trust. The capability existed, so it shipped. No one asked: what happens when you can no longer verify whether engagement is genuine?
The "mistake" - synthetic content passing as human - isn't absorbed cheaply. It distributes across every interaction as degraded trust. Diffuse, hard to measure in quarterly metrics, but compounding.
Contamination
Once bitten, twice suspicious. But suspicion doesn't stay contained to confirmed fakes. It bleeds everywhere.
This functions like chronic deception in relationships. The individual lies aren't the problem; the contamination of truth is. When you can't reliably distinguish genuine from synthetic, you stop fully crediting either. Every interaction carries the tax.
Worse: AI content optimises for engagement. Humans competing for attention mimic what works. The authentic voice that might connect with a few people gets drowned by content engineered for reach. Human output converges toward synthetic patterns. The signal degrades everywhere.
What trust-first architecture would require
The question isn't how to filter bots. It's what interaction mechanics can't be faked.
Synchronous conversation - you can't pre-generate both sides of a real-time exchange.
Costly signals - effort that would be irrational for a bot to expend.
Small groups - Dunbar-constrained spaces where reputation compounds.
Reciprocal commitment - skin in the game on both sides.
These share a property: friction. The effort is the proof. The inconvenience is the signal.
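One classic mechanism in the costly-signal family is a hashcash-style proof of effort: make sending cost real compute, make checking nearly free. A toy sketch in Python (the function names and difficulty parameter are mine, and this is an illustration of the pattern, not a proposal for any particular platform):

```python
import hashlib
import secrets

def mint_token(message: str, difficulty: int = 12) -> str:
    """Search for a nonce whose SHA-256 over (message, nonce) falls
    below a target with `difficulty` leading zero bits. Finding one
    takes ~2^difficulty hashes; that work is the costly signal."""
    target = 1 << (256 - difficulty)
    while True:
        nonce = secrets.token_hex(8)
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_token(message: str, nonce: str, difficulty: int = 12) -> bool:
    """One cheap hash to confirm the sender paid the cost."""
    target = 1 << (256 - difficulty)
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < target
```

The asymmetry is the point: a person replying once barely notices the cost, while anyone generating replies at scale pays it on every message. It's friction by design, which is exactly why engagement-metric platforms won't ship it.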
The business model problem
Platforms that could implement this have no incentive to. Engagement metrics don't distinguish synthetic from genuine. The ad model is indifferent to whether you're connecting or just feeling like you might be.
There's possibly an opportunity in high-trust, high-friction spaces. Subscription-based, verification-heavy, deliberately small. But "solving loneliness at scale" isn't a thesis - the solution is antithetical to scale. Companies building this would need comfort with a different kind of success. Depth over reach. Margin over growth.
The interactions that cost something
The loneliness epidemic predates generative AI. Social media was already substituting performance for presence. But AI accelerates the decay in a specific way: it degrades trust infrastructure that took years to build.
We keep treating AI deployment as a capability question. Can we generate content that passes as human? Can we automate engagement?
The harder question is the trust question. What systems absorb these capabilities without the signal collapsing? What architectures maintain meaning when the counterfeit is perfect?
For social platforms, I don't see anyone seriously working on this. The incentives point elsewhere.
For the rest of us, the implication is simpler: the interactions that still mean something will be the ones that cost something. Time, presence, inconvenience, risk.
Connection requires friction. The effort is the message.