I think the term "AGI" is a bit of a historical artifact. It was coined before the deep learning era, when previous AI winters had made everyone in the field reluctant to think they could make any progress toward general intelligence. Instead, all AI had to be extensively hand-crafted for the application in question. Then some people felt they still wanted to pursue the original ambition of AI research, and wanted a term that would distinguish them from all the other people who said they were doing "AI".

So it was a useful term for distinguishing yourself from the very narrow AI research of that time. But now that AI systems are already increasingly general, it doesn't seem like a very useful concept anymore, and it would be better to talk in terms of the specific cognitive capabilities that a system has or lacks.

in reply to Kaj Sotala

Intuitively it feels useful to have a name for the point at which AI can do the same tasks that humans can. But maybe you think we're partly past that point and partly before it, so the point itself is kind of blurred.