One of the biggest problems with measuring AI progress is the ambiguity of measuring intelligence itself.
AGI is treated as a milestone we have yet to cross, but there is no central definition of AGI.
Depending on who you ask, AGI is achieved when a system:
* can fool humans into believing it is human (the Turing test)
Humans are generally fucking dumb.
AI is always fucking dumb.
Ergo, artificial general intelligence is already here.
Easy demonstration.
AI outsmarting clever humans, however, is a tougher nut to crack. At the moment, the only thing AI excels at is slick-talking, and the only professionals it can fully replace right now are those who make a living out of slick-talking: salespeople, marketdroids, lawyers, and politicians.