3 items

Three domains, one pattern: AI compresses cost and increases volume, but the gap between "approximation" and "automation" persists. Writing gets slop, not singularity. Clean-room reimplementation gets legal ambiguity, not settled IP. Market research gets faster backtesting, not predictive intelligence. The ceiling question — does AI raise it or just raise the floor? — remains open and domain-dependent.

The Intrinsic Perspective 2026-03-08-1

Bits In, Bits Out

Hoel argues writing is the canary domain for AI capability: six years in, LLMs have produced efficiency gains and slop, not a quality revolution. The Amazon book data is compelling (the average got worse; the top 100 is unchanged), but the extrapolation from writing to all domains is structurally weak: verifiable domains like code and math behave differently from taste-dependent ones. Best articulation of the "tools, not intelligence" thesis, but it cherry-picks the domain where measurable ceiling gains are hardest for AI to show.

Simon Willison's Weblog 2026-03-08-2

Can coding agents relicense open source through a "clean room" implementation of code?

Coding agents can now reimplement GPL codebases against their test suites in hours, making copyleft economically unenforceable. The chardet LGPL→MIT relicensing dispute is the first clean test case, but the real bomb is training-data contamination: if the model was trained on the original code, no "clean room" claim holds. This generalizes to any governance mechanism that relies on cost of reimplementation as friction.

Wall Street Journal 2026-03-08-3

Can AI Replace Humans for Market Research?

A $100M Series A announcement dressed up as a trend piece. CVS's "95% accuracy" claim is backtested against known answers; the real test is predicting unknown findings, which nobody has demonstrated. Digital twins for market research are a cost/speed optimization, not a new form of intelligence. The hard-to-reach-population simulation (chronic-disease patients modeled from sparse data) is where overconfidence becomes actively dangerous.