Discussion about this post

Neural Foundry

The reference-frame dependence issue is fascinating; it's wild that the same sequence of models can look like zero algorithmic progress from one perspective and massive gains from another. This mirrors issues in classical algorithm analysis, where comparisons across complexity classes become reference-dependent. The 91% figure for scale-dependent gains really undercuts the conventional narrative about "brilliant researchers discovering clever new techniques." Most gains are actually just bigger models exploiting properties that only emerge at scale.
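To make the reference-frame point concrete, here is a minimal numeric sketch (all numbers are illustrative assumptions, not figures from the post). It supposes an "old" recipe whose loss follows a power law in training compute, and a "new" recipe that matches it at small scale but improves the scaling exponent above some compute threshold, i.e. a purely scale-dependent gain. The measured compute-equivalent gain then depends entirely on which scale you use as the reference frame:

```python
# Illustrative sketch of reference-frame dependence in measuring
# algorithmic progress. Exponents and threshold are made up.

OLD_EXPONENT = 0.05
NEW_EXPONENT = 0.10
THRESHOLD = 1e20  # hypothetical compute level above which the new recipe helps

def loss_old(c):
    # Old recipe: simple power law in compute.
    return c ** -OLD_EXPONENT

def loss_new(c):
    # New recipe: identical to the old one at small scale,
    # steeper scaling exponent once compute exceeds THRESHOLD.
    if c < THRESHOLD:
        return loss_old(c)
    return loss_old(THRESHOLD) * (c / THRESHOLD) ** -NEW_EXPONENT

def compute_equivalent_gain(c_ref):
    """Extra compute the old recipe would need to match the new
    recipe's loss at reference compute c_ref."""
    target = loss_new(c_ref)
    c_equivalent = target ** (-1 / OLD_EXPONENT)  # invert the old power law
    return c_equivalent / c_ref

for c_ref in (1e18, 1e22):  # small-scale vs. large-scale reference frame
    print(f"reference compute {c_ref:.0e}: gain = {compute_equivalent_gain(c_ref):.0f}x")
```

Measured from the small-scale frame (1e18) the gain is 1x, i.e. no algorithmic progress at all; measured from the large-scale frame (1e22) the same pair of recipes shows a 100x compute-equivalent gain.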

