The harder you push an ‘AI scientist’ the more Lagrangian it gets
How Lagrangian is too Lagrangian?
Have you ever wondered whether AI models that perform scientific research prefer Hamiltonian or Lagrangian physics?
If you just answered yes, then today is your lucky day. Because we can save you the trouble of reading this article and just give you the answer: it’s Lagrangian.
According to a fresh preprint from Xinghong Fu, Ziming Liu, and Max Tegmark over at MIT’s Institute for Artificial Intelligence and Fundamental Interactions, when formulating initial theories, AI agents trained to conduct research tend to gravitate toward Hamiltonian mechanics.
But, as theories evolve and the research becomes more challenging, these so-called “AI scientists” — again, machines that do science, not human scientists that research AI — eventually default to Lagrangian mechanics. Isn’t that interesting?
Now, if none of that made any sense, don’t worry. It’s fine. Most of us don’t spend much time thinking about how theories built to describe things like particle motion and planetary formation apply to a chatbot’s ability to do science.
But maybe we should.
How chatbots do science
According to the MIT team’s preprint:
“Our key findings include: 1) when trained on textbook problems in classical mechanics, AI scientists prefers either a complete Hamiltonian or Lagrangian description; 2) when extended to non-standard physical problems, the Lagrangian description generalizes, suggesting that Lagrangian dynamics remain as the singular accurate family of descriptions in a rich theory space.”
The first part is incredibly interesting. It means that, for whatever reason, the AI models used in the experiment went from Hamiltonian dynamics, which is a reformulation of Lagrangian dynamics, back to plain Lagrangian dynamics to solve problems where, ostensibly, either could work.
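For readers who want the one-equation version of that relationship, here’s a quick textbook sketch (ours, not the preprint’s): the Hamiltonian is built from the Lagrangian by a Legendre transform, trading velocities for momenta, so the two formulations carry the same physics in different variables.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Textbook relationship between the two formulations (our sketch, not the
% preprint's): the Hamiltonian H is the Legendre transform of the
% Lagrangian L, trading generalized velocities for conjugate momenta.
\begin{align}
  p_i &= \frac{\partial L}{\partial \dot{q}_i}
      && \text{(conjugate momentum)} \\
  H(q, p, t) &= \sum_i p_i \dot{q}_i - L(q, \dot{q}, t)
      && \text{(Legendre transform)} \\
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
      - \frac{\partial L}{\partial q_i} &= 0
      && \text{(Euler--Lagrange equations)} \\
  \dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
  \dot{p}_i &= -\frac{\partial H}{\partial q_i}
      && \text{(Hamilton's equations)}
\end{align}
\end{document}
```

Same physics, two bookkeeping systems: the Lagrangian works with positions and velocities, the Hamiltonian with positions and momenta.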
The second part, however, we aren’t touching. “Lagrangian dynamics remain as the singular accurate family of descriptions in a rich theory space” sounds like a bold statement. We’ll let the physicists argue over that.
What’s important, in our opinion, is that we’re seeing a preference for what arguably represents the simpler way of dealing with classical physics.

Of course, that preference emerged from a binary choice, so we’re just spitballing about any potential meaning. But it bears mentioning that, given two options, the AI takes what might be argued is the easier way out.
While both Lagrangian and Hamiltonian dynamics describe classical physics, there are arguments to be made that the latter maps more naturally onto quantum mechanics.
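A minimal sketch of why, again textbook material rather than anything from the preprint: in the standard canonical-quantization recipe, the Hamiltonian’s positions and momenta get promoted to operators, and the classical H becomes the operator that drives the Schrödinger equation. (The Lagrangian has its own quantum route, Feynman’s path integral, but the Hamiltonian one is the road most traveled.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Canonical quantization in two lines (textbook material, not from the
% preprint): positions and momenta become operators with a fixed
% commutator, and the classical Hamiltonian becomes the generator of
% time evolution.
\begin{align}
  [\hat{q}_i, \hat{p}_j] &= i\hbar\,\delta_{ij}
      && \text{(canonical commutator)} \\
  i\hbar\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle
      &= \hat{H}\,\lvert\psi(t)\rangle
      && \text{(Schr\"odinger equation)}
\end{align}
\end{document}
```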
AI scientists?
However, that’s not what the MIT team’s experiments actually covered. The purpose of the work was to determine whether two “AI scientists” trained on the same task would learn the same theory or arrive at different ones. And their conclusion? Inconclusive, for the most part.
While discovering that agents tend towards Lagrangian physics is interesting, it doesn’t really lay bare the truth about AI decision-making.
But, that being said, it certainly indicates that further research into “AI scientist” preferences may provide deeper insight into what’s happening inside the black box. As we like to say here at the Center for AGI Investigations: it’s all physics anyway.
Read more: Scientists suspect that ‘reasoning models don’t always say what they think’
Art by Nicole Greene