The Fiscal Frame

We’ve Been Training LLMs Backwards

Carlos Ramirez
Sep 23, 2025

I’ll be honest: this MIT paper flipped a switch for me.

We’ve been treating large language models like they’re incapable of logic. “They’re just fancy autocomplete machines,” people say. But what if we’ve been looking at it the wrong way?

The breakthrough wasn’t a bigger model. It wasn’t throwing more data at the problem. It was in the teaching metho…

© 2025 Carlos Ramirez