Tufts Researchers Build AI That Uses 100x Less Energy — and Outperforms VLAs


A research team at Tufts University’s School of Engineering has published results that challenge the dominant AI scaling narrative: more parameters and more compute may not be the right path for every class of problem.

Their neuro-symbolic AI system, tested on structured robotic manipulation tasks, dramatically outperformed modern vision-language-action (VLA) models — while consuming a fraction of the energy and training in under an hour.

The Numbers

The results are striking enough to warrant close attention:

| Metric | Neuro-Symbolic | Standard VLA |
| --- | --- | --- |
| Training time | 34 minutes | ~36 hours |
| Training energy | 1% of VLA | Baseline |
| Operational energy | 5% of VLA | Baseline |
| Tower of Hanoi success (seen tasks) | 95% | 34% |
| Tower of Hanoi success (unseen tasks) | 78% | 0% |

The “unseen tasks” result is particularly notable: conventional models failed every attempt at novel configurations of the puzzle, while the neuro-symbolic system succeeded nearly 80% of the time — demonstrating genuine compositional generalisation rather than statistical pattern matching.

How It Works

Unlike standard large language models or VLA systems that rely on brute-force trial-and-error over massive datasets, the Tufts approach combines two complementary techniques:

- Neural components handle perception, grounding raw sensory input in structured representations of the task.
- A symbolic reasoning layer operates over those representations, encoding the task's logical structure and planning explicitly toward goals and subgoals.

Led by Professor Matthias Scheutz, the team argues this hybrid approach offers both a more sustainable and a more reliable foundation for AI in robotics and planning-intensive domains.
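The paper does not publish its planner, but the flavour of symbolic reasoning — and why it generalises where pattern matching fails — can be sketched with the team's own benchmark task. The recursive solver below (an illustration, not the Tufts implementation) plans Tower of Hanoi moves from *any* legal configuration, not just start states it has seen before:

```python
# Illustrative sketch only: a symbolic Tower of Hanoi planner.
# The recursion encodes the task's compositional structure, so a "novel"
# configuration is no harder than a familiar one -- the property a
# pattern-matching model must instead infer from data.

def plan(state, n, target, moves=None):
    """Plan moves so that disks 1..n (1 = smallest) all end on `target`.

    `state` maps each disk to its current peg (pegs are 0, 1, 2).
    Returns a list of (disk, from_peg, to_peg) moves and updates `state`.
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    if state[n] == target:
        # Largest disk already in place; solve the rest on top of it.
        plan(state, n - 1, target, moves)
    else:
        spare = 3 - state[n] - target          # the third peg
        plan(state, n - 1, spare, moves)       # clear smaller disks out of the way
        moves.append((n, state[n], target))    # move disk n to the goal peg
        state[n] = target
        plan(state, n - 1, target, moves)      # bring smaller disks back on top
    return moves

# Classic start: all three disks on peg 0, goal peg 2 -> 2^3 - 1 = 7 moves.
print(len(plan({1: 0, 2: 0, 3: 0}, 3, 2)))  # 7

# An "unseen" mid-game configuration needs no retraining, only replanning.
print(len(plan({1: 0, 2: 1, 3: 0}, 3, 2)))  # 5
```

The point of the sketch is the contrast: the solver's competence comes from the structure of the recursion, not from memorised trajectories, which is why a purely statistical model can score 0% on configurations outside its training distribution while a symbolic planner is unaffected.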

A Challenge to Scaling Orthodoxy

The research — titled “The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption” — was posted to arXiv in February 2026 and is scheduled for presentation at the IEEE International Conference on Robotics and Automation in May/June 2026.

Its implications extend beyond robotics. As AI energy consumption becomes a strategic and regulatory concern, the idea that symbolic structure can dramatically compress learning — while improving generalisation — deserves serious attention from both researchers and practitioners.


Source: tufts.edu, sciencedaily.com, scitechdaily.com