One of the biggest problems with measuring AI progress is that intelligence itself resists measurement.
AGI is treated as a milestone we have yet to cross, yet there is no agreed-upon definition of what AGI is.
Depending on who you ask, AGI is achieved when a system:
* can fool humans into believing it is human (the Turing test)
And these definitions rarely account for efficiency: the tasks a human can solve fueled by three slices of pizza, today's systems need rivers of cooling water and a city's worth of electricity to approach.
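To make that energy gap concrete, here is a back-of-envelope sketch. The slice calories and the ~1.3 GWh training figure (a published estimate for GPT-3-scale training) are rough assumptions for illustration, not measurements:

```python
# Back-of-envelope comparison of human vs. machine energy budgets.
# All figures are rough assumptions for illustration, not measurements.

KCAL_TO_J = 4184        # joules per kilocalorie
SLICE_KCAL = 285        # assumed calories in one pizza slice
J_TO_KWH = 1 / 3.6e6    # kilowatt-hours per joule

# Energy in three pizza slices, in kWh
human_fuel_kwh = 3 * SLICE_KCAL * KCAL_TO_J * J_TO_KWH

# ~1.3 GWh: a published estimate for GPT-3-scale training (assumption)
train_kwh = 1.3e6

print(f"Three pizza slices: ~{human_fuel_kwh:.2f} kWh")
print(f"Training / pizza ratio: ~{train_kwh / human_fuel_kwh:,.0f}x")
```

Three slices come out to roughly one kilowatt-hour, so even under generous assumptions the training run costs about a million times the fuel a person needs to sit down and solve the same problem once.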