(2024-06-05) Marcus AGI By 2027

Gary Marcus: AGI by 2027? Apparently OpenAI’s internal roadmap alleged that AGI would be achieved by 2027, according to the suddenly ubiquitous Leopold Aschenbrenner (recently fired by OpenAI, previously employed by SBF), who yesterday published the math to allegedly justify it. ((2024-06-01) Leopold Aschenbrenner's AGI Situational Awareness Paper)

I was going to write a Substack about how naive and problematic the graph was, but I am afraid somebody else has beaten me to it...

Well, ok I will add a few words:

GPT-4 is not actually equivalent to a smart high schooler. It can do some things better, some worse, and very few reliably.

Just because it can write average high school term papers based on massive numbers of samples doesn’t mean it’s on par with smart high schoolers in their capacity to deal with the real world.

The double Y-axis makes no sense, and presupposes its own conclusion.

The graph assumes that progress will continue at the same rate, ignoring qualitatively unsolved problems (hallucinations, planning, reasoning, etc.)

All that said, Aschenbrenner does make and develop a very good (though not original) point in his new 165-page manuscript: we are (as Eliezer Yudkowsky has rightly argued for years) woefully underprepared for AGI, whenever it comes.

If the GenAI fiasco has been any sign, self-regulation is a farce.
