Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what’s worth getting excited about — and what isn’t.
YES
The transformers these LLMs are built on are novel, but they are not efficient. Without repeatability there is little hope for improvement. There isn't enough energy in the world to reach AGI with a transformer model, and we're also running out of LLM-free datasets to train on.
We all want progress. But progress means getting nearer to the place where you want to be. And if you have taken a wrong turning, then to go forward does not get you any nearer. If you are on the wrong road, progress means doing an about-turn and walking back to the right road.
Stupid title. LLMs are less AI than a trivial control loop.