
Is the superintelligence explosion near? Reality vs. predictions (video, 10 min)

In her latest video, Sabine Hossenfelder takes on artificial intelligence and the controversial predictions of Leopold Aschenbrenner, a former OpenAI employee. In his essay, Aschenbrenner predicts that artificial general intelligence (AGI) will significantly surpass human intelligence within a few years. Despite Aschenbrenner's optimism, Hossenfelder is skeptical about the timeline for such changes, pointing to critical limitations in energy and data acquisition. Observing the rapid advances in AI in recent years, Aschenbrenner argues that the pace of technological progress is nearing a tipping point that will trigger a revolution in science and technology; Hossenfelder, however, believes this process may take much longer.

As Hossenfelder explains, one of Aschenbrenner's key arguments rests on growing computational power and ongoing algorithmic improvements. While she agrees that AI systems are becoming more computationally powerful, she argues that energy and data limitations are significant hurdles. Citing the essay's estimate that the most advanced AI models could soon require 10 gigawatts of power, at enormous cost, Hossenfelder points to the impracticality of such forecasts. She is skeptical of Aschenbrenner's assertion that the energy infrastructure needed to support ubiquitous AGI can be built in the short term.
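To make the scale concrete, here is a minimal back-of-envelope sketch in Python. The roughly one-gigawatt output per power plant comes from the video; the electricity price is an illustrative assumption, not a figure from the essay:

    # Back-of-envelope check of the projected 2028 AI power demand.
    # The ~1 GW per plant figure is from the video; the electricity
    # price is an assumed illustrative value, not from the essay.

    CLUSTER_POWER_GW = 10      # projected demand for the most advanced models
    PLANT_OUTPUT_GW = 1.0      # typical large power plant (per the video)
    PRICE_USD_PER_KWH = 0.05   # assumed industrial electricity price

    plants_needed = CLUSTER_POWER_GW / PLANT_OUTPUT_GW

    HOURS_PER_YEAR = 24 * 365
    energy_kwh = CLUSTER_POWER_GW * 1e6 * HOURS_PER_YEAR  # GW -> kW, times hours
    annual_cost_usd = energy_kwh * PRICE_USD_PER_KWH

    print(f"Dedicated ~1 GW plants needed: {plants_needed:.0f}")
    print(f"Energy consumed per year: {energy_kwh / 1e9:.0f} TWh")
    print(f"Electricity cost per year: ${annual_cost_usd / 1e9:.1f} billion")

Run continuously, a 10-gigawatt cluster draws roughly 88 TWh a year; at the assumed price, that is about $4.4 billion for electricity alone, before any of the hardware costs the essay cites.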

Despite her objections, Hossenfelder acknowledges that AGI could indeed lead to substantial progress in science and technology. While artificial intelligence clearly has the potential to improve access to knowledge and reduce human error, a major challenge will be gathering the relevant data. Hossenfelder emphasizes that even the best algorithm will not deliver significant insights without new data to work on, a gap she says Aschenbrenner overlooks. This makes the need for a long-term approach to AI development all the more apparent.

Hossenfelder also examines the security concerns that accompany AGI. She notes that Aschenbrenner focuses almost exclusively on the U.S.-China tech rivalry, neglecting the rest of the world and ignoring imminent issues like the climate crisis, which could soon become a crucial limiting factor. She further expects that governments, once they grasp AGI's impact, will try to exert tight control over its development. These doubts about AGI's future and the limitations it may face raise further questions about how humanity will manage this powerful tool.

In conclusion, Hossenfelder points to past predictions about artificial intelligence that turned out to be inaccurate, reinforcing her argument for caution when forecasting technological advancements. Given the many misguided estimates made by specialists over the years, she observes that the pace of change may be much slower than recent predictions suggest. Together with the energy and data challenges, this suggests that genuine general machine intelligence may still be a long way off. At the time of writing, the video has 627,051 views and 29,184 likes, indicating significant interest in the topic.

Timeline summary

  • 00:00 Introduction to the rise of AI and a quote from Leopold Aschenbrenner.
  • 00:06 Background on Aschenbrenner, his departure from OpenAI, and his views on AI.
  • 00:12 Aschenbrenner's essay details the imminent arrival of artificial superintelligence.
  • 00:28 Context on Aschenbrenner’s background and current projects.
  • 00:56 Discussion on the rapid scaling of AI capabilities.
  • 01:30 Prediction of achieving artificial general intelligence by 2027.
  • 01:56 Explanation of the limitations of current AI systems and possible improvements.
  • 02:36 Critique of Aschenbrenner's belief in an intelligence explosion.
  • 03:02 Discussion on limitations regarding energy and data necessary for AI development.
  • 03:17 Aschenbrenner’s projection on power requirements for advanced AI models.
  • 04:10 Criticism of the unrealistic optimism surrounding nuclear fusion as an energy solution.
  • 04:44 Concern over the availability of data for training AI after initial online data.
  • 05:45 Importance of data for AI functionality and limitations of current AI systems.
  • 06:29 Potential benefits of AGI in correcting human errors in scientific research.
  • 07:18 Acknowledgment of AGI’s impact and the potential for governmental responses.
  • 07:54 Historical context of predictions regarding machine capabilities.
  • 08:31 Analysis of past predictions and the tendency to overstate technological progress.
  • 08:42 Summary of skepticism regarding the immediacy of an intelligence explosion.
  • 09:08 Promotion of Brilliant.org for learning about AI and related topics.
  • 09:45 Special offer for viewers to explore Brilliant's educational resources.

Transcription

Everyone's now talking about AI, but few have the faintest glimmer of what's about to hit them. That's a quote from Leopold Aschenbrenner, who was recently fired from OpenAI. He believes that artificial superintelligence is just around the corner, and has written a 165-page essay explaining why. I spent the last weekend reading this essay and want to tell you what he says and why I think he's wrong. Let me start with some context on Aschenbrenner, who you see talking here: young man, early 20s, German origin, had a brief gig at the Oxford Centre for Global Priorities, now lives in San Francisco, and according to his own website, recently founded an investment firm focused on artificial general intelligence.

In his new essay, Aschenbrenner says that current AI systems are scaling up incredibly quickly. He sees no end to this trend, and therefore they'll soon outperform humans in pretty much anything. I can't see no end, says man who earns money from seeing no end. He explains that the most relevant factors that currently contribute to the growth of AI performance are the increase of computing clusters and improvements of the algorithms. Neither of these factors is yet remotely saturated. That's why, he says, performance will continue to improve exponentially for at least several more years, and that's sufficient for AI to exceed human intelligence on pretty much all tasks. By 2027, we'll have artificial general intelligence, AGI for short, according to Aschenbrenner. He predicts that a significant contribution to this trend will be what he calls unhobbling. By this, he means that current AIs have limitations that can easily be overcome, and will soon be overcome. For example, a lack of memory, or that they can't themselves use computing tools. Like, why not link them to math software? Indeed, let them livestream on YouTube! The future is bright, people!

I know it sounds a little crazy, but I'm with him so far. I think he's right that it won't be long now until AI outsmarts humans. Because, I mean, look, it isn't all that hard, is it? I also agree that soon after this, artificial intelligence will be able to research itself and to improve its own algorithms. Where I get off the bus is when he concludes that this will lead to an intelligence explosion accompanied by extremely rapid progress in science and technology and society overall. Do you get off the bus and miss the boat, or get off the boat and miss the bus? These damn English idioms always throw me off.

The reason I don't believe in Aschenbrenner's predictions is that he totally underestimates the two major limiting factors: energy and data. Training bigger models takes up an enormous amount of energy. According to Aschenbrenner, by 2028, the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they'll run at 100 gigawatts at a cost of a trillion dollars. For comparison, a typical power plant delivers something in the range of a gigawatt or so. That means by 2028, they'd have to build 10 power plants in addition to the supercomputer cluster. Can you do that? Totally. Is it going to happen? You've got to be kidding me. What would all of those power stations run on, anyway? Well, according to Aschenbrenner, on natural gas. Even the 100-gigawatt cluster is surprisingly doable, he writes, because that'll take only about 1,200 or so new wells. Totally doable. And if that doesn't work, I guess they can just go the Sam Altman way and switch to nuclear fusion power.
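As a rough plausibility check on the 1,200-well figure, here is a short Python sketch; the per-well gas output, energy content, and conversion efficiency below are illustrative assumptions, not numbers from the essay or the video:

    # Rough plausibility check: how much electric power can one
    # prolific natural-gas well support? All per-well figures are
    # assumed illustrative values, not taken from the essay.

    WELL_OUTPUT_MCF_PER_DAY = 15_000  # assumed prolific shale well (thousand cubic feet/day)
    GJ_PER_MCF = 1.09                 # approximate energy content of natural gas
    PLANT_EFFICIENCY = 0.5            # assumed combined-cycle conversion efficiency
    SECONDS_PER_DAY = 86_400

    thermal_gw = WELL_OUTPUT_MCF_PER_DAY * GJ_PER_MCF / SECONDS_PER_DAY  # GJ/s equals GW
    electric_gw = thermal_gw * PLANT_EFFICIENCY

    wells_for_100_gw = 100 / electric_gw
    print(f"Electric power per well: {electric_gw * 1000:.0f} MW")
    print(f"Wells needed for a 100 GW cluster: {wells_for_100_gw:.0f}")

Under these assumptions, one well supports on the order of 95 megawatts of electric output, so about 1,050 such wells would cover 100 gigawatts. The essay's figure is arithmetically consistent, then, but only if every well is unusually productive.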
Honestly, I think these guys have totally lost the plot. They're living in some techno-utopian bubble that has groupthink written on it in capital letters. Yes, Helion Energy says they'll produce net power from neutron-free nuclear fusion by 2028, leaving aside that there are some reasonable doubts about how neutron-free this neutron-free fusion actually is. And I for sure wouldn't go anywhere near the thing. No one has ever managed to get net energy out of this reaction. I talked about all the fusion startups in an earlier video.

Then there's the data. OK, so you've trained your AI on all the data that was available online. Now what? Where are you going to get more data? Aschenbrenner says no problem, you deploy robots who collect it. Where do you get those robots from? Well, Aschenbrenner thinks that AIs will solve all remaining robot problems and the first robots will build factories to build more robots. All right, but what will they build the factories with? With resources that will be mined and transported by, let me guess, more robots that will be built in the factories that will be constructed from the resources mined by the robots? I think that isn't going to work. Creating a huge robot workforce will not just require AGI, it will require changing the entire world economy. This will eventually happen, but not within a couple of years. It'll take decades at best. And until then, the biggest limiting factor for AGI will be lack of data. The best algorithm in the world isn't going to deliver new insights if it's got no new data to work on.

That said, I think he's right that AGI will almost certainly be able to unlock huge progress in science and technology. This is because a lot of scientific knowledge currently goes to waste just because no human can read everything that's been published. AGI will be able to do this. There must be lots of insights hidden in the published scientific literature that could be extracted without doing any new research whatsoever. The other relevant thing that AGI will be able to do is to just prevent errors. The human brain makes a lot of mistakes that are usually easy to identify and correct: logical mistakes, biases, data retrieval errors, memory lapses. Why did I go to the kitchen? And so on. Even before AGI actually does anything new, it'll change the world by basically removing these constant everyday errors.

The second half of his essay is dedicated to the security risks that go along with AGI. His entire discussion is based on the US versus China, like the rest of the world basically doesn't exist. That's one of the symptoms of what I want to call the Silicon Valley bubble syndrome. But leaving aside that he forgets the world is more than just two countries, and that the world economy is about to be crushed by a climate crisis, I agree with him: most people on this planet, including all governments, currently seriously underestimate just how big an impact AGI will make. And when they wake up, they'll rapidly try to gain control of whatever AGI they can get their hands on and put severe limitations on its use. It's not that I think this is good or that I want this to happen, but this is almost certainly what's going to happen. In practice, it'll probably mean that high-compute queries will require security clearance.

Let's step back and have a quick look at past predictions of the impending machine revolution. In 1960, Herbert Simon, a Nobel Prize laureate in economics, speculated that machines would be capable within 20 years of doing any work a man can do. In the 1970s, Marvin Minsky predicted that human-level machine intelligence was just a few years away. In a 1993 essay, the American computer scientist Vernor Vinge predicted that the technological singularity would come in less than 30 years. All these predictions were wrong. What I take away from this long list of failed predictions is that people involved in frontier research tend to vastly overestimate the pace at which the world can be changed. I wish we actually lived in the world that Aschenbrenner seems to think we live in. I can't wait for superhuman intelligence, but I'm afraid the intelligence explosion isn't as near as he thinks. So in the meantime, don't give up on teaching your toaster to stop burning your toast.

Artificial intelligence is really everywhere these days. If you want to learn more about how neural networks and large language models work, I recommend you check out the courses on Brilliant.org. Brilliant.org offers courses on a large variety of topics in science, computer science, and mathematics. All their courses have interactive visualizations and come with follow-up questions. Some even have executable Python scripts or videos with little demonstration experiments. Whether you want to know more about large language models or quantum computing, want to learn coding in Python, or want to know how computer memory works, Brilliant has you covered, and they're adding new courses each month. And of course, I have a special offer for users of this channel. If you use my link brilliant.org/Sabine, you'll get to try out everything Brilliant has to offer for a full 30 days, and you'll get 20% off the annual premium subscription. So go and check this out.

Yes, Helion Energy says they'll produce net power from neutron-free nucle... Neutron-free fusion. Neutron-free fusion. Neutron-free. Thanks for watching. See you tomorrow.