I made a Google Form question for collecting AGI definitions, because I don't see anyone else doing it and I find the range of definitions for this concept infinitely frustrating:
https://docs.google.com/forms/d/e/1FAIpQLScDF5_CMSjHZDDexHkc...
My concern is that people will never get focused enough to care to define it - that seems like the most likely case.
Researchers at Google have proposed a classification scheme with multiple levels of AGI. There are different opinions in the research community.
Now that the term is in the general lexicon (which is crazy to me, as an old guy who's been doing this a long time), it's morphing into something new.
Like any good scientist, I want to sample the population.
There comes a theoretical point at which a definition is no longer relevant because it's obvious to everyone on an intuitive level. An easy lower bound for where this threshold might sit would be "when it can start and win wars unassisted, under its own volition," since at that point no one on earth would need to debate it. It would simply be respected and understood for what it is, without needing to be defined.
Until such an obvious threshold is crossed, it will be whatever executives, product managers, and marketing teams say it is.
It's an ideal that some people believe in, and we're perpetually marching towards it
Can we just use Morris et al. and move on with our lives?
Position: Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/html/2311.02462v4
There are generational policy and societal shifts that need to be addressed somewhere around true Competent AGI (50% of knowledge work tasks automatable). Just like climate change, we need a shared lexicon to refer to this continuum. You can argue for different values of X but the crucial point is if X% of knowledge work is automated within a decade, then there are obvious risks we need to think about.
So much of the discourse is stuck at "we will never get to X=99" when we could agree to disagree on that and move on to considering the X=25 case. Or predict our timelines for X and then actually be held accountable for our falsifiable predictions, instead of the current vibe-based discussions.
For me, I just zoom out a little further and say: at the rate AGI is approaching, what is the utility in trying to regulate it ahead of time?
Seems like advancement is slow enough that society can/will naturally regulate it based on what feels comfortable.
And it's a global phenomenon that can't have rules applied at the protocol level like the internet, because it's so culturally subjective.
Precedents need to be set first, and I think we'll only be able to call them when we see them.
Personally I don’t think politicians are capable of adapting fast enough to this extreme scenario. So they need to start thinking about it (and building and debating legislation) long before it’s truly needed.
Of course, if it turns out that we are living in one of the possible worlds where true economically meaningful capabilities are growing more slowly, or bottlenecks just happen to appear at this critical phase in the growth curve, then this line of preparation isn't needed. But I'm more concerned about downside tail risk than about the real but bounded costs of delaying progress by a couple of years. (Though of course, we must ensure we don't do to AI what we did to nuclear.)
Finally, I'll note in agreement with your point that there is a whole class of solutions that are mostly incomprehensible or inconceivable to most people at this time (i.e., currently fully outside the Overton window). E.g., radical abundance -> UBI might just solve the potential inequities of the tech, and therefore make premature job-protection legislation vastly harmful on net. I mostly say "just full send it" when it comes to these mundane harms; it's the existential ones (including non-death "loss of control" scenarios) that I feel warrant some careful thought. For that reason, while I see where you're coming from, I somewhat disagree with your conclusion; I think we can meaningfully start acting on this as a society now.
I like your idea of developing a new economic model as a proxy for possible futures; that at least can serve as a thinking platform.
Your comment inspired me to look at historical examples of this happening. Two trends emerged:
1. Rapid change always precedes policy. I couldn't find any examples of the reverse. That doesn't discount what you're saying at all; it reiterates that we probably need to be as vigilant and proactive as possible.
and related:
2. Things that seem impossible become the norm. Electricity. The Industrial Revolution. Massive change turns into furniture. We adapt quickly as individuals even if societies collectively struggle to keep up. There will be many people who get caught in the margins, though.
Consider me fully convinced!
There is no point in collecting definitions for AGI; it was not conceived as a description of something novel or provably existent. It is "Happy Meal marketing," but aimed at adults.
My master's thesis advisor, Ben Goertzel, popularized the term and has been hosting the AGI conference since 2008:
https://goertzel.org/agiri06/%5B1%5D%20Introduction_Nov15_PW...
I had lunch with Yoshua Bengio at AGI 2014, and the term was most of the conversation that day.
The term AGI is obviously used very loosely, with little agreement on its precise definition, but I think a lot of people take it to mean not only generality, but specifically human-level generality, and a human-level ability to learn from experience and solve problems.
A large part of the problem with AGI being poorly defined is that intelligence itself is poorly defined. Even if we choose to define AGI as meaning human-level intelligence, what does THAT mean? I think there is a simple reductionist definition of intelligence (as the word is used to refer to human/animal intelligence), but ultimately the meanings of words are derived from their usage, and the word "intelligence" is used in 100 different ways ...
I've thought for a while that the middle letter in AGI ('General' vs. 'Specific') would be more useful if it were changed to Wide vs. Narrow. All AIs can be evaluated on a scale of narrow to wide in terms of their abilities, and I don't think that will change anytime soon.
Everyone understands that something is only wide or narrow in comparison to something else. While that's also true of the terms 'general' and 'specific', those are used that way less often in daily conversation these days. In science and tech we make distinctions between generalized and specific, but 'general' isn't a conversational term like it was 50 or 100 years ago. When I was a kid, my grandparents would call the local supermarket the 'general store', which I thought was an unusual usage even then.