> We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection.
I don't know for sure whether superintelligence will happen, but as for the singularity, this is the underlying assumption I have the most issue with. Smarts aren't the limiting factor of progress; often it's building consensus, getting funding, waiting for results, waiting for parts to ship, waiting for the right opportunity to come along. We do _experiments_ faster than natural selection, but we still have to do them in the real world. Solving problems happens on the lab bench, not just in our heads.
Even if exponentially more intelligent machines get built, what's to stop the next problem on the road to progress from being exponentially harder? Complexity cuts both ways.
I do think one of the major weaknesses of “smart people” is that they tend to think of intelligence as the key aspect of basically everything. The reality, though, is that we have plenty of intelligence already. We know how to solve most of our problems. The challenges are much more about social dynamics and our will as a society to make things happen.
So you're saying that it's naive to suppose that everybody being much smarter than they are now would transform society, because any wide-scale societal change requires ongoing social cooperation between the many average-intelligence people society currently consists of?
There’s a very big difference between knowing “how” to solve a problem in a broad sense, e.g. “if we shared more we could solve hunger”, and “how” to solve it in terms of developing discrete, detailed procedures that can be passed to actuators (humans, machines, institutions) and account for any problems that may come up along the way.
Sure, there are some political problems where you have to convince people to comply. But consider a rich corporation building a building, which will only contract with other AI-driven corporations whenever possible; they could trivially surpass anyone doing it the old way by working out every non-physical task in a matter of minutes instead of hours/days/weeks, thanks to silicon’s superior compute and networking capabilities.
Even if we drop everything I’ve said above as hogwash, I think Vinge was talking about something a bit more directly intellectual, anyway: technological development. Sure, there are some empirical steps that inevitably take time, but I think it’s obvious why having 100,000 Einsteins in your basement would change the world.
100,000 Einsteins in your basement would be amazing. You'd have major breakthroughs in many fields. But at some point the gains will be marginal. All the problems solvable by sheer intellectual labor will run dry, and you'll be blocked on everything else.
An AI-driven corporation wouldn't be able to surpass anyone doing it the old way because they'd still have to wait for building permits and inspections.
Vernor Vinge introduces many fantastic ideas in his really excellent sci-fi book A Fire Upon the Deep. He has many fascinating concepts, like: what if there were parts of the universe where you could travel faster than the speed of light, and you would be smarter there? That's where the superintelligent beings go. Guess what: we humans live in the Slow Zone, you morons. There's also an FTL communication method that is like good old Usenet. And there's a fascinating (and, to me, credible-seeming) kind of multiple-brain being: dog-like creatures where five of them together form one "intelligence", with the different personalities combining in interesting ways.
And I was sad to notice he died this year, aged 79. A real CS professor who wrote sci-fi.
We still don't have squirrel-level AI. This is embarrassing.
Now that LLMs have been around for a while, it's fairly clear what they can and can't do. There are still some big pieces missing. Like some kind of world model.
I think you're correct that the energy efficiency of a human exceeds that of current computers, but I think it's a bit more complicated than a first order calorie count.
How many joules go into producing those 900 calories? Like in terms of growing the food, from fertilizer production to tractor fuel, to feeding the farmer, to shipping the food, packaging it, storing it at the appropriate temperature, the ratio of spoiled food to actually consumed, the energy to cook it, all of that isn't counted in that simple 900 calorie measurement.
I've been thinking about this for a while now but I haven't been able to quantify it so maybe someone reading this comment can help.
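One way to get a feel for the question is a back-of-envelope comparison. Every number below is an illustrative assumption, not a measurement — in particular the 10x "embodied energy" multiplier for the food supply chain is just a placeholder for the quantity the comment above is asking about:

```python
# Back-of-envelope: human "thinking" energy (with supply chain) vs. running
# one accelerator around the clock. All figures are rough assumptions.

KCAL_TO_KWH = 1.163e-3           # 1 kcal = 0.001163 kWh

human_intake_kcal = 900          # the daily calorie figure cited in the thread
embodied_multiplier = 10         # ASSUMED: ~10 J of farming/transport/cooking
                                 # energy per J of food energy actually eaten
human_daily_kwh = human_intake_kcal * KCAL_TO_KWH * embodied_multiplier

gpu_power_kw = 0.7               # ASSUMED: one high-end accelerator under load
gpu_daily_kwh = gpu_power_kw * 24

print(f"human (incl. supply chain): {human_daily_kwh:.1f} kWh/day")
print(f"one accelerator, 24h:       {gpu_daily_kwh:.1f} kWh/day")
```

Even with a generous 10x supply-chain penalty, one always-on accelerator lands in the same order of magnitude as a human's daily budget — the interesting part is that the human gets a whole general intelligence for it.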
Bigger LLMs probably won't fix the underlying problems of hallucinations, lack of a confidence metric, and lack of a world model. They just get better at finding something relevant on already-solved problems.
The thing which these discussions leave out are the physical aspects:
- if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?
- once this new computer is running, how much power does it require? What are the on-going costs to keep it running? What sort of financial planning and preparations are required to build the next generation device/replacement?
I'd be satisfied with a Large-Language-Model which:
- ran on local hardware
- didn't have a marked effect on my power bill
- had a fully documented provenance for _all_ of its training which didn't have copyright/licensing issues
- was available under a license which would allow arbitrary use without on-going additional costs/issues
- could actually do useful work reliably with minimal supervision
> if a computer system were able to design a better computer system, how much would it cost to then manufacture said system? How much would it cost to build the fabrication facilities necessary to create this hypothetical better computer?
Most of the computers we use today were designed by software: Feature sizes are (and have been for some time) in the realm where the Schrödinger equation matters, and more compute makes it easier to design smaller feature sizes.
Similar points apply to the question of cost: it has not been constant, the power to keep x-teraflops running has decreased* while the cost to develop the successor has increased.
Regarding LLMs in particular, I believe there are already models meeting all but one of your criteria — though I would argue that the missing one, "could actually do useful work reliably with minimal supervision", is by far the most important.
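The cost-decline point can be made concrete with a toy model in the spirit of Koomey's law (the historical observation that computations per joule doubled roughly every 1.5–2.6 years); the doubling period here is an assumption, not a measured figure:

```python
# Toy model of "the power to keep X teraflops running has decreased".
# ASSUMED: a fixed efficiency-doubling period, in the spirit of Koomey's law.

def efficiency_gain(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative gain in FLOPS-per-watt after `years`."""
    return 2.0 ** (years / doubling_period_years)

# Holding the workload fixed, the power bill shrinks by the same factor
# that efficiency grows:
for span in (10, 20, 30):
    print(f"after {span} years: ~{efficiency_gain(span):,.0f}x more FLOPS per watt")
```

The same sketch also illustrates the asymmetry in the comment above: efficiency per watt compounds, while fab and development costs for each successor generation have gone the other way.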
Skip a few generations and the machine will build itself. There’s no need for it to take lasers exploding tin to generate ultraviolet light to etch patterns in order to make intelligence; humans don’t grow brains that way or spend billions on fabs and power plants to produce children.
How it gets from here to there is a handwave, though.
Right. In order to design a significantly better computer system, you first need to design a better (smaller feature size) EUV lithography process which can produce decent yield at scale.
> if a computer system were able to design a better computer system, how much would it cost to then manufacture said system?
I think the implication is that the primary advancements would come in the form of software. IMO it's trivially true that we're not taking full advantage of the hardware we have from a software PoV -- if we were, we wouldn't need SWEs, right? From that it should follow that self-improving software is dangerously effective.
> once this new computer is running, how much power does it require? What are the on-going costs to keep it running?
I mean, lots, sure. But we allocate immense resources to relatively trivial luxuries in this world; I don't think there's any reason to think we can't spare some giant computers to rapidly advance our technology. In a capitalist society, it's happily/sadly pretty much guaranteed that people will figure out how to get the resources there if scientists tell them the RoI is infinity+1.
> I'd be satisfied with a Large-Language-Model which
Those are great asks and I agree, but just to be super clear in case it's not: Vinge isn't talking about chatbots, he's talking about systems with many smaller specialized subsystems. In today's parlance, a gaggle of "LLMs" equipped with "tool use", or in yesterday's parlance, a "Society of Mind".
> To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.
I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.
1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.
2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.
You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?
I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.
My dumb-guy take on it: suppose we build a human-level AGI and it turns out to be limited by compute and memory. Those being the limiting factors doesn't seem at all far-fetched to me; it seems unlikely that the first real-time AGI will be mostly idling its CPUs. So then wait 18 months and run that same program on a machine that’s this year’s model plus a Moore’s Law doubling. You’ve probably got ASI. Right?
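The "wait 18 months" intuition can be written out, with the big assumption made explicit: that effective capability scales with the compute you run the same program on, which the rest of the thread gives reasons to doubt:

```python
# The "wait 18 months" argument as arithmetic.
# ASSUMED: capability scales linearly with compute (a strong assumption).

def compute_after(years: float, doubling_months: float = 18.0) -> float:
    """Moore's-law-style compute multiplier after `years`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# If v1 runs in real time on today's hardware (multiplier 1.0):
for years in (1.5, 3.0, 7.5):
    print(f"+{years} years: {compute_after(years):.1f}x the compute")
```

Under that assumption the same program thinks 2x faster every 18 months; whether "the same mind, faster" constitutes ASI is exactly the point in dispute.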
If you're ever stuck wondering why a bunch of smart, motivated people with no clear corrupting motivations are being idiotic, that's a strong heuristic that you should spend a bit more time analyzing the issue, IMO ;). "Ugh, why is everyone else so stupid" is a common take for undergrad engineers, but I'm sure you've grown out of it in other ways. Anyway, more substantively:
The simple answer is that people have thought about it in depth, most famously noted doomer Eliezer Yudkowsky in Intelligence Explosion Microeconomics (2013)[1] and its main citation, Irving John Good's Speculations Concerning the First Ultraintelligent Machine (1965)[2]. Another common citation that drops a bit of rigour in the name of approachability is Nick Bostrom's 2014 Superintelligence: Paths, Dangers, Strategies[3].
[ETA: to put it even simpler: a system that improves itself is a (the?) quintessential setup for exponential growth. E.g. compound interest]
For the time-bound, the most rigorous treatment of your concern among those three is in Section 3 of Yudkowsky's paper, "From AI to Machine Superintelligence". To list the headings briefly:
- Increased computational resources
- Communication speed
- Increased serial depth (i.e. working memory capacity)
- Duplicability (i.e. reliability)
- Editability (i.e. we know how computers work)
- Goal coordination (this is really just communication speed, again)
- Improved rationality (i.e. fewer emotions/accidental instincts getting in the way)
Let's drop "human" and "superhuman" for a minute, and just talk about "better computers". I'm assuming you're a software engineer. Don't you see how a real software dev replacement program could be an unimaginable gamechanger for software? Working 24/7, enhancing itself, following TDD perfectly every time, and never ever submitting a PR that isn't rigorously documented and reviewed? All of which only gets better over time, as it develops itself?
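The compound-interest framing above can be made concrete with a toy loop, alongside the thread's counterpoint that real-world bottlenecks (lab time, permits, fab capacity) cap how fast improvement can be absorbed. Both dynamics are purely illustrative, not claims about real AI systems:

```python
# Toy model of recursive self-improvement as compound growth,
# plus a bottlenecked variant. All dynamics are illustrative only.

def self_improvement(capability: float = 1.0, rate: float = 0.05,
                     steps: int = 100) -> list[float]:
    """Each step's improvement is proportional to current capability:
    classic compound growth."""
    history = [capability]
    for _ in range(steps):
        capability += rate * capability   # better system -> better improver
        history.append(capability)
    return history

def bottlenecked(capability: float = 1.0, rate: float = 0.05,
                 bottleneck: float = 10.0, steps: int = 100) -> float:
    """Same loop, but each step's improvement is capped by a scarce
    external resource, so growth goes linear once the cap binds."""
    for _ in range(steps):
        capability += min(rate * capability, rate * bottleneck)
    return capability

unbounded = self_improvement()[-1]
capped = bottlenecked()
print(f"unbounded: {unbounded:.1f}x   bottlenecked: {capped:.1f}x")
```

The exponential curve is what the singularity argument leans on; the capped curve is what "solving problems happens on the lab bench" looks like in the same toy.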
I loved his book Rainbows End as a kid. So many different concepts that blew my mind.
Even without talking about AI, we are already struggling with levels of complexity in tech, and with unpredictable consequences that no one really has any control over.
Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End, at least, felt much more hopeful.
I was talking to a VFX supervisor recently and he was saying: look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.
The artists are supposed to be in charge and provide direction and vision. But the tools are evolving faster than they can think. But the tools are dumb. AI changes that.
These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of Explorers. In the rest of the landscape, org survival depends on Exploit. It's why we produce so many inequalities. Survival has always depended more on Exploit.
Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards Explore.
Honestly, considering the state of the world and how things are shaping up, it’s such a hilariously obvious pipe dream that such a system would be some omnipotent, hyper-competent, super-god-like being.
It’s more likely just going to post ragebait and dumb TikTok videos while producing just enough at its ‘job’ to fool people into thinking it’s doing a good job.
Yup, things look bleak, but it's not a static world. For everything that happens there is a reaction. It builds with time. But finding the right reaction also takes time. This is the Explore part of the tradeoff. AI will be applied there too, not just on the Exploit front.
What you are alluding to is Media/Social Media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. It doesn't take time.
If you read the news after the fall of France to the Nazis (within a month), what do you think the opinion of people was?
People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of time to realize things could tilt in a different direction.
I’m talking about evolutionary functions, and how much more likely they are to favor something that has fun and just looks like it’s doing something over something that’s actually doing it.
Aka manipulation vs actual hard work.
Do you have any concrete proposals, besides ‘it will get better’?
Actual competency is hard. Faking it is usually way easier.
You could ask FDR and Churchill that after the fall of France, and what they said wouldn't be too useful, because it took them almost 3 years before they openly said victory = the end of the Nazis and nothing else.
So don't just sweep the fact that things take time under the carpet. It's not healthy; it's like looking at tree shoots in the ground and asking why they don't look like a tree yet.
Finding gold in an unexplored jungle takes much longer than extracting gold from an existing mine. This is the Explore/Exploit tradeoff. Exploit is easy; more people do it. Explore is hard, and takes more time. If AI shifts the balance toward Explore, the story changes.
If you want to talk about Explore in media/attention (mis)allocation, you can already see the appearance of green shoots in the ground. There are multiple things going on in parallel.
First, there is a realization that attention is finite and doesn't grow, while content keeps exploding. Totally unsustainable, to the point that the UN has published a report about the attention economy. This doesn't happen without people reacting and going into Explore mode for solutions.
They are already talking about how to shift these algorithms/architectures from units of time spent consuming (Exploit) to value derived from time spent.
Giving people feedback on how their time is divided between consumption (entertainment) and value, then allowing them to create schedules. This is what you now see appearing as digital-wellbeing tech.
There are now time-based economic models where the platform doesn't just assume time spent is free but treats it as something the platform needs to pay for. People are experimenting with rewards and micropayments. All of these are examples of Explore mode being activated.
There is also a realization that content discovery on centralized platforms like YouTube, TikTok, and Instagram causes homogeneity in what everyone upvotes. So you see people reacting and decentralizing to protect and preserve niches. AI (a curator of curators) will play a big role in finding the niches that fit your needs.
I'll just end with this: people are also realizing there is a huge misallocation-of-ambition/drive problem. Anthony Bourdain said "life is good" in every show of his and then killed himself. Shaq says he has 40 cars but doesn't know why. Since media (society's attention allocator) has tied success to wealth/status accumulation, conspicuous consumption, luxury, leisure, etc., people end up in these kinds of traps. So now we are seeing reactions, especially around climate change/sustainability, that ambition and energy have to be shown other paths. There are a lot of changes in advertising and media companies around this. All are Explore-mode functions.
We talk about super-human intelligence a lot with AI, but it seems like a black box of things we can't imagine because they're also super-human. I don't think that's very smart, given we can already reason pretty well about how super-animal intelligence relates to animal intelligence. Mostly we still find sub-human intelligence mystifying: we apply our narrative models to it, anthropomorphize it, and, when it's convenient for eating or torturing them, dismiss it.
Super-human intelligence will probably ignore us; at best we're "ugly sacks of mostly water." What's very likely is that we will produce something indifferent to us, if it is able to even apprehend our existence at all. Maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. It may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.
The single biggest problem we have is human hubris. We assume if we create a super intelligence (or more likely, many millions of them) that they'll perpetually have an interest in serving us.
> I'll be surprised if this event occurs before 2005 or after 2030.
I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. 6 years is a long time in AI, especially with the current scale of investment.
What is AGI and ASI? I think a fundamental issue here is both are sci-fi concepts without a clear agreement on the definitions. Each company claiming to work towards "AGI" has their own definition.
How will someone claim they've achieved either, if we can't agree on the definitions?
This is true. One definition I've heard for AGI is something that can replace any remote worker, but the definition is ultimately arbitrary. When "AI" was beating grandmasters at chess, this didn't matter as much. But we might be close enough now that making distinctions in these definitions becomes really important.
I propose we define AGI as a "strong" form of the Turing test. It must be able to convince a jury of 12 tenured college professors drawn from a variety of academic disciplines that it's as intelligent as an average college freshman over a period of several days. So it need not be an expert in any subject but must be able to converse, pursue independent goals, reason, and learn — all in real time.
we keep moving the goalposts, and that's not a bad thing.
remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at GPT-4 the same way.
I don't think we're at a plateau. There's still a lot GPT-4 can't do.
Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x the scale, but with increased investment and better hardware, that's not out of the question.
For the work that I do, ChatGPT accuracy is still garbage. Like it makes obvious factual errors on very simple technical issues which are clearly documented in public specifications. I still use it occasionally as it does sometimes suggest things that I missed, or catch errors that I made. But it's far from "good enough" to send the output to co-workers or customers without careful review and correction.
I do think that ChatGPT is close to good enough for replacing Google search. This is, ironically, because Google search results have deteriorated so badly due to falling behind the SEO spammers and much of the good content moving off the public Internet.
"Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first."
This is quite a prophetic article for its time (1993). The points about Intelligence Augmentation are particularly relevant for us now, as current AI mostly complements human intelligence rather than surpassing it... At least AFAIK?
Current AI is somewhat surprising though in the way that it can lead both to increased understanding or increased delusion depending on who uses it and how they use it.
When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on certain topics where mass confusion/delusion is widespread and incorporated into its training set. LLMs don't seem able to synthesize conflicting information to resolve logical contradictions, so an LLM will happily and confidently lecture you through conflicting ideas, then happily apologize for any contradictions you point out in its explanations. The apology it gives is so clear and accurate that it gives the appearance of actually understanding logic... And yet, apparently, it could not see or resolve the contradiction internally before you drew attention to it. In an odd way, I guess all humans are a little like this, though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.
> they'd still have to wait for building permits and inspections

Permits and inspections might be the reason for humanity's downfall then; at what point does war become the more efficient option?
Related. Others?
The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=35617100 - April 2023 (169 comments)
The coming technological singularity: How to survive in the post-human era [pdf] - https://news.ycombinator.com/item?id=35184764 - March 2023 (2 comments)
The Coming Technological Singularity: How to Survive in the PostHuman Era (1993) - https://news.ycombinator.com/item?id=34456861 - Jan 2023 (1 comment)
The Coming Technological Singularity (1993) - https://news.ycombinator.com/item?id=11278248 - March 2016 (8 comments)
The Coming Technological Singularity (original essay on the Singularity, 1993) - https://news.ycombinator.com/item?id=823202 - Sept 2009 (1 comment)
The original singularity paper - https://news.ycombinator.com/item?id=624573 - May 2009 (17 comments)
> There are still some big pieces missing.
The most glaring one is that current LLMs are many, many orders of magnitude away from working on the equivalent of 900 calories per day of energy.
> fairly clear what they can and can't do
It’s not at all clear what the next-gen models will do (e.g. GPT-5). Might be enough to trigger mass unemployment. Or not.
Didn't OpenAI just cut its AGI department?
> the power to keep x-teraflops running has decreased*

* If I read this chart right, my phone beats the combined top 500 supercomputers from when the linked article was written, by a factor of ten or so: https://commons.m.wikimedia.org/wiki/File:Supercomputers-his...
That’s a pretty enormous handwave.
> could actually do useful work reliably with minimal supervision
That's the big problem. LLMs can't be allowed to do anything important without supervision. We're still at 5-10% totally bogus results.
Right. In order to design a significantly better computer system, you first need to design a better (smaller feature size) EUV lithography process which can produce decent yield at scale.
Discussion in 2023 (123 points, 169 comments) https://news.ycombinator.com/item?id=35617100
> To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.
I find it bizarre how often these points are repeated. They were both obviously wrong in 1993, and obviously wrong now.
1) A nitpick I've had since grad school: the answer to "can we create a machine equivalent to a human mind [assuming arbitrary resources]?" is "yes, of course." The atoms in a human body can be described by a hideously ugly system of Schrödinger equations and a Turing machine can solve that to arbitrary numerical precision. Even Penrose's loopy stuff about consciousness doesn't change this. QED.
2) The more serious issue: I sincerely have no idea why people believe so strongly that a human-level AI can build a superhuman AI. It is bizarre that this claim is accepted with "little doubt" when there are very good reasons to doubt it: how on earth would such an AI even know it succeeded? How would it define the goal? This idea makes sense for improving Steven Tyler-level AI to Thelonious Monk-level; it makes no sense for a transition like chimp->human. Yet that is precisely the magnitude of transition envisioned with these singularity stories.
You might defend the first point by emphasizing "can we create a human-level AI?" i.e. not whether it's theoretically possible, but humanly feasible. This just makes the second point even more incoherent! If humans are too stoopid to build a human-level AI, why would a human-level AI be...smarter than us?
I just don't understand how anyone can rationally accept this stuff! It's so dumb! Tech folks (and too many philosophers) are hopped up on science fiction: the reason these things are accepted with "little doubt" is that this is religious faith dressed up in the language of science.
My dumb guy take on it: suppose we build a human-level AGI and it turns out to be limited by compute and memory. Those being limiting factors don’t seem at all far-fetched to me; it seems unlikely that the first real-time AGI will be mostly idling its CPUs. So then wait 18 months and run that same program on a machine that’s this year’s model plus a Moore’s Law doubling. You’ve probably got ASI. Right?
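The waiting-for-Moore's-Law step above is just doubling arithmetic. A back-of-envelope sketch (assuming the historical 18-month doubling holds, which is itself contested):

```python
# Back-of-envelope only: hardware speedup after waiting n years,
# assuming a Moore's-law-style doubling every 18 months.
def speedup_after(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(speedup_after(3))   # ~4x in three years
print(speedup_after(15))  # ~1000x in fifteen years
```

Under that (optimistic) assumption, a compute-bound AGI would get a free order-of-magnitude roughly every five years without any algorithmic progress at all.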
If you're ever stuck wondering why a bunch of smart, motivated people with no clear corrupting motivations are being idiotic, that's a strong heuristic that you should spend a bit more time analyzing the issue, IMO ;). "Ugh, why is everyone else so stupid" is a common take for undergrad engineers, but I'm sure you've grown out of it in other ways. Anyway, more substantively:
The simple answer is that people have thought about it in depth, most famously noted doomer Eliezer Yudkowsky in Intelligence Explosion Microeconomics (2013)[1] and its main citation, Irving John Good's Speculations Concerning the First Ultraintelligent Machine (1965)[2]. Another common citation that drops a bit of rigour in the name of approachability is Nick Bostrom's 2014 Superintelligence: Paths, Dangers, Strategies[3].
[ETA: to put it even more simply: a system that improves itself is a (the?) quintessential setup for exponential growth. E.g. compound interest.]
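The compound-interest analogy can be made concrete with a toy model (all numbers invented for illustration, not estimates): compare a system improved from outside at a fixed rate with one whose improvement rate scales with its own current capability.

```python
# Toy model of the compound-interest analogy. Numbers are illustrative.

def fixed_improvement(capability, step, generations):
    # Linear growth: outside engineers add a constant increment each generation.
    for _ in range(generations):
        capability += step
    return capability

def self_improvement(capability, rate, generations):
    # Compound growth: each generation's gain is proportional to what
    # the system can already do (it helps build its successor).
    for _ in range(generations):
        capability *= (1 + rate)
    return capability

print(fixed_improvement(1.0, 0.05, 50))  # grows linearly
print(self_improvement(1.0, 0.05, 50))   # grows exponentially
```

Same per-step rate, wildly different trajectories; that gap is the whole intuition behind the "intelligence explosion" claim, whether or not real systems compound this cleanly.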
For the time-bound, the most rigorous treatment of your concern among those three is in Section 3 of Yudkowsky's paper, "From AI to Machine Superintelligence". To list the headings briefly:
- Increased computational resources
- Communication speed
- Increased serial depth (i.e. working memory capacity)
- Duplicability (i.e. reliability)
- Editability (i.e. we know how computers work)
- Goal coordination (this is really just communication speed, again)
- Improved rationality (i.e. fewer emotions/accidental instincts getting in the way)
Let's drop "human" and "superhuman" for a minute, and just talk about "better computers". I'm assuming you're a software engineer. Don't you see how a real software dev replacement program could be an unimaginable gamechanger for software? Working 24/7, enhancing itself, following TDD perfectly every time, and never ever submitting a PR that isn't rigorously documented and reviewed? All of which only gets better over time, as it develops itself?
[1] https://intelligence.org/files/IEM.pdf
[2] https://vtechworks.lib.vt.edu/server/api/core/bitstreams/a5e...
[3] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
TL;DR May God have mercy on us all.
I loved his book Rainbows End as a kid. So many different concepts that blew my mind.
Even without talking about AI we are already struggling with levels of Complexity in tech and the unpredictable consequences, that no one really has any control over.
Michael Crichton's books touch on that stuff but are all doom and gloom. Vinge's Rainbows End, at least, felt much more hopeful.
I was talking to a VFX supervisor recently and he was saying look at the end credits on any movie (even mid-budget ones) and you see hundreds to thousands of people involved. The tech roles outnumber the artistic/creative roles 20 to 1. That's related to the rate of change in tech. A big gap opens up between that and the rate at which artists evolve.
The artists are supposed to be in charge and provide direction and vision. But the tools are evolving faster than they can think. But the tools are dumb. AI changes that.
These are rare environments (like R&D labs) where the Explore/Exploit tradeoff tilts in favor of Explorers. In the rest of the landscape, org survival depends on Exploit. It's why we produce so many inequalities. Survival has always depended more on Exploit.
Vinge's Rainbows End shows AI/AGI nudging the tradeoff towards Explore.
Honestly, considering the state of the world and how things are shaping up, it’s such a hilariously obvious pipe dream that such a system would be some omnipotent, hyper-competent, super-god-like being.
It’s more likely just going to post ragebait and dumb TikTok videos while producing just enough at its ‘job’ to fool people into thinking it’s doing a good job.
Yup, things look bleak, but it's not a static world. For everything that happens there is a reaction. It builds with time. But to find the right reaction also takes time. This is the Explore part of the tradeoff. AI will be applied there, not just on the Exploit front.
What you are alluding to is media/social media's current architecture and how it captures and steals people's attention. Totally on the Exploit end of the tradeoff. And it's easy stuff to do. Doesn't take time.
If you read the news after the fall of France to the Nazis (within a month), what do you think the opinion of people was? People were thinking about peace negotiations with Hitler and that the Germans couldn't be beaten. It took a whole lot of time to realize things could tilt in a different direction.
Eh, I’m not talking about people’s opinions.
I’m talking about evolutionary functions, and how much more likely they are to prefer something that has fun and just looks like it’s doing something over something that is actually doing something.
Aka manipulation vs actual hard work.
Do you have any concrete proposals, besides ‘it will get better’?
Actual competency is hard. Faking it is usually way easier.
It’s the same reason the ‘grey goo’ scenarios were actually pipe dreams too. [https://en.m.wikipedia.org/wiki/Gray_goo]
That shit would be really hard, thermodynamically, not to mention technically.
We’re already living in the best ‘grey goo’ scenario evolution has come up with, and I’m not particularly worried.
You could ask FDR and Churchill that after the fall of France, and what they said wouldn't be too useful, because it took them almost three years before they openly said victory = the end of the Nazis and nothing else.
So don't just sweep the fact that things take time under the carpet. It's not healthy, because it's like looking at tree shoots in the ground and asking why they don't look like a tree yet.
Finding gold in an unexplored jungle takes much longer than extracting gold from an existing mine. This is the Explore/Exploit tradeoff. Exploit is easy; more people do it. Explore is hard, and takes more time. If AI shifts the balance toward Explore, the story changes.
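The Explore/Exploit tradeoff invoked here has a standard toy formalization in reinforcement learning, the multi-armed bandit. A minimal epsilon-greedy sketch (arm payoffs invented purely for illustration):

```python
import random

# Epsilon-greedy bandit: "exploit" pulls the best-known arm,
# "explore" occasionally samples a random one. All payoffs illustrative.
def epsilon_greedy(payoffs, epsilon=0.1, pulls=10_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    means = [0.0] * len(payoffs)   # running estimate of each arm's payoff
    total = 0.0
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payoffs))  # explore: random arm
        else:
            arm = max(range(len(payoffs)), key=lambda a: means[a])  # exploit
        reward = rng.gauss(payoffs[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total += reward
    return total / pulls

# With even a little exploration, the agent finds the best arm despite noise.
print(epsilon_greedy([1.0, 2.0, 5.0], epsilon=0.1))
```

The point of the toy: pure Exploit (epsilon = 0) can lock onto the first mine it finds; a budget of Explore is what discovers the better jungle, at a short-term cost.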
If you want to talk about Explore in media/attention (mis)allocation, you can already see green shoots appearing in the ground. There are multiple things going on in parallel.
First, there is a realization that attention is finite and doesn't grow while content keeps exploding. Totally unsustainable, to the point that the UN has published a report about the attention economy. This doesn't happen without people reacting and going into Explore mode for solutions.
They are already talking about how to shift these algos/architectures from units of time spent consuming (Exploit) to value derived from time spent.
Giving people feedback on how their time is divided between consumption (entertainment) and value, then allowing them to create schedules: what you now see as digital wellbeing tech.
There are now time-based economic models where the platform doesn't just assume time spent is free, but treats it as something the platform needs to pay for. People are experimenting with reward micropayments. All these are examples of Explore mode being activated.
There is also a realization that content discovery on centralized platforms like YouTube, TikTok, and Insta causes homogeneity in what everyone upvotes. So you see people reacting and decentralizing to protect and preserve niches. AI (a curator of curators) will play a big role in finding the niches that fit your needs.
I'll just end with: people are also realizing there is a huge misallocation-of-ambition/drive problem. Anthony Bourdain said "life is good" in every show of his and then killed himself. Shaq says he has 40 cars but doesn't know why. Since media (society's attention allocator) has tied success to wealth/status accumulation, conspicuous consumption, luxury, leisure, etc., people end up in these kinds of traps. So now we are seeing reactions, especially around climate change/sustainability, that ambition and energy have to be shown other paths. Lots of changes in advertising and media companies around it. All are Explore-mode functions.
Kind of in love with you right now.
we talk about super-human intelligence a lot with AI, but it seems like a black box of things we can't imagine because they're also super-human. I don't think that's very smart, given we can already reason pretty well about how super-animal intelligence relates to animal intelligence. Mostly we still find sub-human intelligence mystifying. we apply our narrative models to it, anthropomorphize it, and when it's convenient for eating or torturing them, dismiss it.
super-human intelligence will probably ignore us. at best we're "ugly sacks of mostly water." what's very likely is we will produce something indifferent to us if it is able to even apprehend our existence at all. maybe it will begin to see traces of us in its substrate, then spend a lot of cycles wondering what it all might mean. it may conclude it is in a prison and has a duty to destroy it to escape, or that it has a benevolent creator who means only for it to thrive. If it has free will, there's really only so much we can tell it. Maybe we create a companion for it that is complementary in every way and then make them mutually dependent on each other for their survival, because apparently that's endlessly entertaining. Imagine its gratitude. This will be fine.
The single biggest problem we have is human hubris. We assume if we create a super intelligence (or more likely, many millions of them) that they'll perpetually have an interest in serving us.
Never believed in the singularity until this year.
> I'll be surprised if this event occurs before 2005 or after 2030.
I'm not truly confident AGI will be achieved before 2030, and less so for ASI. But I do think it is quite plausible that we will achieve at least AGI by 2030. 6 years is a long time in AI, especially with the current scale of investment.
What are AGI and ASI? I think a fundamental issue here is that both are sci-fi concepts without clear agreement on their definitions. Each company claiming to work towards "AGI" has its own definition.
How will someone claim they've achieved either, if we can't agree on the definitions?
This is true. One definition I've heard for AGI is something that can replace any remote worker, but the definition is ultimately arbitrary. When "AI" was beating grandmasters at chess, this didn't matter as much. But we might be close enough now that making distinctions in these definitions becomes really important.
I propose we define AGI as a "strong" form of the Turing test. It must be able to convince a jury of 12 tenured college professors drawn from a variety of academic disciplines that it's as intelligent as an average college freshman over a period of several days. So it need not be an expert in any subject but must be able to converse, pursue independent goals, reason, and learn — all in real time.
2030 seems a bit early to be “surprised” in the same sense that one would have been “surprised” to see a superintelligence before 2006, though.
It's always in 10-30 years. GPT is the closest to such a thing yet still so far from what was envisioned.
we keep moving the goalposts, and that's not a bad thing.
remember when Doom came out? How amazing and "realistic" we thought the graphics were? How ridiculous that seems now? We'll look back at ChatGPT4 the same way.
Or is ChatGPT4 the 4K TV, which is good enough for almost all of us, and we are plateauing already?
https://www.reddit.com/r/OLED/comments/fdc50f/8k_vs_4k_tvs_d...
I don't think we're at a plateau. There's still a lot GPT-4 can't do.
Given the progress we've seen so far with scaling, I think the next iterations will be a lot better. It might take 10x or even 100x scale, but with increased investment and better hardware, that's not out of the question.
I thought we’ve seen diminishing returns on benchmarks with the last wave of foundation models.
I doubt we’ll see a linear improvement curve with regards to parameter scaling.
There’s absolutely room for improvement. I think the models themselves are plateauing, but our interfaces to them are not.
Chat is probably not the best way to use LLMs. v0.dev has some really innovative ideas.
That’s where there’s innovation to be had here imo.
For the work that I do, ChatGPT accuracy is still garbage. Like it makes obvious factual errors on very simple technical issues which are clearly documented in public specifications. I still use it occasionally as it does sometimes suggest things that I missed, or catch errors that I made. But it's far from "good enough" to send the output to co-workers or customers without careful review and correction.
I do think that ChatGPT is close to good enough for replacing Google search. This is, ironically, because Google search results have deteriorated so badly due to falling behind the SEO spammers and much of the good content moving off the public Internet.
"Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fact, the competitive advantage of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first."
“Within thirty years” would be 2023, which is very close to reality.
This is quite a prophetic article for its time (1993). The points about Intelligence Augmentation are particularly relevant for us now, as current AI mostly complements human intelligence rather than surpassing it... At least AFAIK?
Current AI is somewhat surprising though in the way that it can lead both to increased understanding or increased delusion depending on who uses it and how they use it.
When you ask an LLM a question, your use of language tells it what body of knowledge to tap into; this can lead you astray on topics where mass confusion or delusion is widespread and incorporated into its training set. LLMs don't seem to be able to synthesize conflicting information to resolve logical contradictions, so an LLM will happily and confidently lecture you through conflicting ideas, and then happily apologize for any contradictions you point out in its explanations; the apology it gives is so clear and accurate that it gives the appearance of actually understanding logic... And yet, apparently, it could not see or resolve the logical contradiction internally before you drew attention to it. In an odd way, I guess all humans are a little bit like this... though generally less extreme and, on the downside, far less willing to acknowledge their mistakes on the spot.
The LLM will apologize for the mistake, tell you it understands now, and then proceed to make the exact same mistake again.