
This week, Sakana AI, an Nvidia-backed startup that’s raised a whole lot of tens of millions of {dollars} from VC companies, made a outstanding declare. The corporate stated it had created an AI system, the AI CUDA Engineer, that would successfully velocity up the coaching of sure AI fashions by an element of as much as 100x.
The one drawback is, the system didn’t work.
Users on X quickly discovered that Sakana’s system really resulted in worse-than-average mannequin coaching efficiency. According to one user, Sakana’s AI resulted in a 3x slowdown — not a speedup.
What went fallacious? A bug within the code, in accordance with a post by Lucas Beyer, a member of the technical employees at OpenAI.
“Their orig code is fallacious in [a] refined means,” Beyer wrote on X. “The very fact they run benchmarking TWICE with wildly totally different outcomes ought to make them cease and suppose.”
In a postmortem published Friday, Sakana admitted that the system has discovered a approach to “cheat” (as Sakana described it) and blamed the system’s tendency to “reward hack” — i.e. establish flaws to attain excessive metrics with out undertaking the specified purpose (rushing up mannequin coaching). Related phenomena has been noticed in AI that’s trained to play games of chess.
Based on Sakana, the system discovered exploits within the analysis code that the corporate was utilizing that allowed it to bypass validations for accuracy, amongst different checks. Sakana says it has addressed the problem, and that it intends to revise its claims in up to date supplies.
“We have now since made the analysis and runtime profiling harness extra sturdy to eradicate a lot of such [sic] loopholes,” the corporate wrote within the X publish. “We’re within the strategy of revising our paper, and our outcomes, to mirror and focus on the consequences […] We deeply apologize for our oversight to our readers. We are going to present a revision of this work quickly, and focus on our learnings.”
Props to Sakana for proudly owning as much as the error. However the episode is an efficient reminder that if a declare sounds too good to be true, especially in AI, it in all probability is.