reliability
6 items · chronological order
Can AI Replace Humans for Market Research?
$100M Series A announcement dressed as trend piece. CVS's "95% accuracy" claim is backtested against known answers — the real test is predicting unknown findings, which nobody's shown. Digital twins for market research are a cost/speed optimization, not a new form of intelligence. The hard-to-reach population simulation (chronic disease patients from sparse data) is where overconfidence becomes actively dangerous.
Can coding agents relicense open source through a "clean room" implementation of code?
Coding agents can now reimplement GPL codebases against test suites in hours, making copyleft economically unenforceable. The chardet LGPL→MIT relicensing dispute is the first clean test case, but the real bomb is training data contamination: if the model was trained on the original code, no "clean room" claim holds. Generalizes to any governance mechanism that relies on cost-of-reimplementation as friction.
Bits In, Bits Out
Hoel argues writing is the canary domain for AI capability — 6 years in, LLMs have produced efficiency gains and slop, not a quality revolution. The Amazon book data is compelling (average quality worse, top 100 unchanged), but the extrapolation from writing to all domains is structurally weak: verifiable domains like code and math behave differently from taste-dependent ones. Best articulation of the "tools not intelligence" thesis, but it cherry-picks the domain where measurable ceiling gains are hardest for AI to show.
Anthropic's AI Hacked the Firefox Browser. It Found a Lot of Bugs.
The independent credibility piece for Anthropic's security capabilities. Claude found 100+ Firefox bugs (14 high-severity) in two weeks — more high-severity bugs than the rest of the world reports to Mozilla in two months. The Curl counter-narrative is the buried lede: AI bug reports are 95% garbage (Stenberg data), making Claude's hit rate the real differentiator, not the volume. Most important detail: Claude is better at finding bugs than exploiting them — the defender/attacker asymmetry currently favors defenders, but that gap is temporary.
Making frontier cybersecurity capabilities available to defenders
Product announcement dressed as research disclosure. Claude Code Security uses multi-stage self-verification to scan codebases beyond pattern-matching SAST. The 500-vuln claim has no CVEs, no false positive rates, and no comparison to existing tools. Zero external validation in the announcement itself — the WSJ/Firefox piece did that work. The real play: security scanning as a loss-leader wedge for enterprise platform deals. Neither lab announced pricing.
Oracle and OpenAI End Plans to Expand Flagship Stargate Data Center
Nvidia paid $150M to a data-center developer to ensure its GPUs — not AMD's — fill the expansion, making it an infrastructure intermediary, not just a chip vendor. The deeper signal: OpenAI's "often-changing demand forecasting" suggests even the largest buyer of training compute is uncertain about its forward requirements, cracking the infinite-linear-scaling thesis. Cooling failures taking buildings offline in winter are the first concrete evidence of operational fragility at hyperscale AI density.