agentic-ai-viability
2 items · chronological order
Can coding agents relicense open source through a "clean room" implementation of code?
Coding agents can now reimplement GPL codebases against their test suites in hours, making copyleft economically unenforceable. The chardet LGPL→MIT relicensing dispute is the first clean test case, but the real bomb is training-data contamination: if the model was trained on the original code, no "clean room" claim holds. This generalizes to any governance mechanism that relies on cost-of-reimplementation as friction.
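The "reimplement against the test suite" claim reduces to a generate-and-test loop. A minimal sketch, where `generate` stands in for a coding-agent call and `run_suite` for executing the upstream project's tests -- both hypothetical interfaces, not any specific agent API:

```python
from typing import Callable, Optional

def reimplement(
    generate: Callable[[str], str],
    run_suite: Callable[[str], tuple[bool, str]],
    max_rounds: int = 20,
) -> Optional[str]:
    """Generate-and-test loop: keep asking the agent for a fresh
    implementation until it passes the original test suite."""
    feedback = ""  # failure output from the previous round
    for _ in range(max_rounds):
        code = generate(feedback)       # coding-agent call (hypothetical)
        ok, feedback = run_suite(code)  # run the upstream project's tests
        if ok:
            return code  # "behaviorally equivalent", by the suite's lights
    return None  # budget exhausted without passing
```

Note what makes the legal question sharp: the loop only ever reads the test suite, never the original source -- that is the "clean room" claim, and it is exactly what the training-data-contamination objection undercuts.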
Codex Security: now in research preview
Same-day competitive counter to Anthropic, with stronger receipts: 15 named CVEs in the appendix (GnuTLS heap overflows, a GnuPG stack buffer overflow, a GOGS 2FA bypass) and published improvement curves (84% noise reduction, 90%+ reduction in severity over-reporting, 50%+ false-positive reduction). The threat-model architecture -- building an editable intermediate artifact before scanning -- is the most interesting pattern: it generalizes as "make the agent's understanding inspectable before execution." Broader tier access (Pro through Edu) weakens the dual-use containment narrative but maximizes adoption velocity.
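The editable-intermediate-artifact pattern can be sketched roughly as follows. The `ThreatModel` fields and the string-matching scanner are illustrative assumptions, not Codex Security's actual schema or scanning logic; the point is only the shape, an artifact a human can inspect and edit sits between the agent's analysis and the scan:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Intermediate artifact: the agent's understanding of the codebase,
    written down (and human-editable) before any scanning runs.
    Field names are illustrative, not the product's real schema."""
    entry_points: list[str] = field(default_factory=list)
    trust_boundaries: list[str] = field(default_factory=list)

def scan(model: ThreatModel, code: str) -> list[str]:
    """Toy scanner that consumes the artifact: its scope is whatever the
    (possibly edited) model says, so the agent's understanding is
    inspectable -- and correctable -- before execution."""
    findings = []
    for boundary in model.trust_boundaries:
        if boundary in code:
            findings.append(f"data crosses trust boundary: {boundary}")
    return findings

# A reviewer can edit the artifact before the scan runs:
model = ThreatModel(entry_points=["/login"],
                    trust_boundaries=["request.args"])
model.trust_boundaries.append("os.environ")  # human correction
```

The design choice worth copying is that the scan's scope is a function of the artifact, not of hidden model state, so reducing noise becomes an editing problem rather than a prompting problem.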