In a recent interview, Sam Altman and Bill Gates discussed the future of artificial intelligence, with a particular focus on GPT-5. During their conversation, they addressed several key topics that could significantly impact everyone's lives. Some points of the interview that went largely unnoticed relate to multimodality, personalization of AI, and advancements in the reasoning capabilities of these models. Altman emphasized the necessity of increasing the reliability of AI models and their utility in everyday life. These improvements aim to provide much better responses to queries, which could completely transform the way we interact with technology.

It is crucial to pay attention to the significant advancements that are coming in AI models. Altman mentioned how future AI models will allow customization to meet individual user needs, which is vital for the next generation of GPT. The ability to use personal data, such as calendars and user preferences, to create more effective personal assistants will open new possibilities in both private and professional life. These changes may lead to the emergence of autonomous AI agents capable of operating across multiple modalities, heralding an era of personalized AI tools.
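One way to picture this kind of personalization is an assistant whose prompt is grounded in the user's own data. The sketch below is purely illustrative, assuming made-up calendar entries, preference fields, and a build_assistant_prompt helper rather than any real GPT feature:

```python
from datetime import date

# Hypothetical stand-ins for the personal data sources mentioned above.
CALENDAR = [
    {"date": "2024-05-21", "time": "14:00", "title": "Dentist"},
    {"date": "2024-05-22", "time": "09:30", "title": "Team stand-up"},
]
PREFERENCES = {"meeting_length": "30 minutes", "no_meetings_after": "17:00"}

def build_assistant_prompt(user_request: str) -> str:
    """Ground a generic assistant prompt in the user's own calendar and
    preferences, so a model could answer scheduling requests personally."""
    calendar_lines = "\n".join(
        f"- {e['date']} {e['time']}: {e['title']}" for e in CALENDAR
    )
    return (
        f"Today is {date.today().isoformat()}.\n"
        f"User preferences: {PREFERENCES}\n"
        f"Upcoming calendar entries:\n{calendar_lines}\n\n"
        f"User request: {user_request}\n"
        "Answer using only the calendar and preferences above."
    )

print(build_assistant_prompt("Find me a free 30-minute slot this week."))
```

In a real system the prompt would go to a model and the data sources would be live connections, but the principle is the same: the user's own context is pulled into every request.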

The discussion also touched upon robotics. Altman indicated that investments in robotics are inevitable as such technologies are poised to become integral to society. Robots powered by AI models could revolutionize various industries by automating tasks currently performed by humans. There are already prototypes of humanoid robots that can carry out tasks requiring human skills. As a result, the upcoming evolution of robotics and the integration of AI could raise questions regarding the future job market and humanity's place in an increasingly automated world.

Furthermore, the conversation addressed the challenging issues surrounding the future, particularly concerning the threats posed by autonomous systems and robots. Altman and Gates outlined the uncertainty that arises with rapidly evolving technologies that could dominate various aspects of life. A pivotal issue is the future of work and the sense of purpose individuals find in their activities when technology takes over more tasks. How can we define human value in a world where AI performs most tasks?

Lastly, it is worth noting that this interview garnered significant attention online, amassing over 229,651 views and 4,123 likes, highlighting the growing interest in the topic of artificial intelligence. It is undoubtedly worthwhile to keep an eye on what the future holds regarding AI and robotics. What seems inevitable is progress that will allow for an even deeper integration of technology into our daily lives, prompting serious reflections on the future of humanity and the role of AI within it.

Timeline summary

  • 00:00 Sam Altman discusses key points from his recent interview with Bill Gates.
  • 00:10 The focus of the discussion is on advancements like GPT-5 and its implications for the future.
  • 00:25 Important insights are shared about milestones that are set to impact all of us.
  • 00:44 Altman emphasizes the significance of audio from the interview, highlighting fascinating revelations about upcoming AI models.
  • 01:10 Multimodality is identified as a critical area for future development in AI, including speech, images, and video.
  • 01:23 Progress in reasoning ability and reliability in AI models, addressing limitations of GPT-4.
  • 01:51 Customization and personalization in AI systems are noted as essential for meeting diverse user needs.
  • 02:20 Predictions are made about future models potentially featuring AI agents and advanced capabilities.
  • 03:04 The possibility of AI systems effectively understanding and generating video content is discussed.
  • 03:40 Questions arise about AI's potential impact on blue-collar jobs and how robotics will evolve.
  • 05:49 Concerns are raised about changes in the labor market and the socio-economic implications of advanced robotics.
  • 06:20 The philosophical dilemmas regarding human purpose and existence in an AI-driven future are explored.
  • 10:30 Consideration of how AI may affect day-to-day life and interactions with technology.
  • 15:11 Discussion on the role of companies like OneX Robotics in advancing physical AI capabilities.
  • 17:21 Gates and Altman share insights on the ethical considerations and future of work in a world with advanced AI.
  • 20:50 Reflections on the inevitable nature of technological advancement and the ongoing need for problem-solving.
  • 22:57 The conversation concludes with encouragement for viewers to engage and comment on these transformative topics.

Transcription

So Sam Altman did a recent interview with Bill Gates. And in this interview, he touched on some key things, especially GPT-5 and what's coming in the future. So in this video, I'll be breaking down every key point from that interview, because there are some points that many people missed simply because they weren't in video format and were easy to overlook. I'll be covering these to show you exactly why this interview is so important and some of the things we really need to pay attention to. So let's take a look at some of the key things that were revealed in the interview, and some of the things you do need to know for the future, because some of the things they talked about are actually going to affect all of us, and there are some really, really important questions that they did address. One of the things covered in the video, of course, is the key milestones. Now, this isn't actually a video interview; it's audio only. So you're going to want to listen to this to hear exactly what they say, because trust me, it's fascinating what Sam Altman reveals will be in the next set of AI models that OpenAI are going to be working on and releasing.

You know, when you look at the next two years, what do you think some of the key milestones will be? Multimodality will definitely be important. We started with speech in, speech out, then images, eventually video. Clearly, people really want that. We launched images and audio, and it had a much stronger response than we expected. We'll be able to push that much further. But maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. And also reliability. You know, if you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of 10,000 each time. So that increase in reliability will be important. Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We'll make all that possible. And then also the ability to have it use your own data. So the ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important.

So from that audio excerpt, we can understand exactly where GPT-5 is heading. Now, some of the points he made were made before, and I did cover them in an old video, but I think we need to pay attention to exactly what he said and break it down. As the clip ended, he talked about the fact that with GPT-4 and future models, everyone is going to want something different, which is why I'm guessing the GPT Store was released along with the ability to create custom GPTs. What that means for us in the future is, of course, that we probably are going to get some kind of AI agent built into future ChatGPT models. And we do know, since the company is working on AGI, that isn't out of the picture. He also talked about multimodality and, of course, video; we will come back to AI agents later in the video, because agents are a huge deal.
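Altman's reliability point above, getting the best answer out of 10,000 attempts every time instead of by luck, is essentially best-of-n sampling with a selection step. Here is a minimal sketch of that idea in Python, assuming hypothetical generate and score functions rather than any real OpenAI API:

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for one stochastic model completion.
    # In practice this would call a language model with temperature > 0.
    return f"candidate answer #{random.randint(0, 9999)}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a quality estimate: a reward model, a verifier,
    # or a second model asked to grade the answer.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Sample n candidate answers and return the highest-scoring one.

    This mirrors the 'ask many times, keep the best' idea Altman describes;
    the hard part in practice is making score() reliable enough that the
    selected answer really is the best of the n samples."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("Explain why the sky appears blue.", n=8))
```

The point of the sketch is only that reliability can be framed as a selection problem: sampling many answers is cheap, but knowing which one is best is where current models still fall short.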
Now, the thing is, we do know that Sam Altman has discussed AI doing video for quite some time now. He actually talked about this in an interview around seven months ago, where he said it will be very interesting when they try video. There are a lot of things about coding that I think make it a particularly great modality to train these models on. But that won't be, of course, the last thing we train on. I'm very excited to see what happens when we can really do video. There's a lot of video content in the world. There are a lot of things that are, I think, much easier to learn with video than text. There's a huge debate in the field about whether a language model can get all the way to AGI. Can you represent everything that you need to know in language? Is language sufficient, or do you have to have video? I personally think it's a dumb question, because it probably is possible. But the fastest way to get there, the easiest way to get there, will be to have these other representations like video in these models as well. Again, text is not the best for everything, even if it's capable of representing everything.

We can see there that Sam Altman literally just said that video is going to be one of the key milestones we get in the future and something he is training on. And what's crazy is that not only in the past year, but in the past couple of months, some really, really advanced video models have come out. If you guys have been paying attention to Pika Labs, you'll notice that their video models have been very, very impressive. Now, another question I did want to ask is whether this brings credence to some of the rumors we saw before. If you remember, there were certain rumors stating that OpenAI is in possession of a powerful model code-named Arrakis. And remember, the screenshot you're currently seeing is complete speculation, which means it doesn't really hold much weight, but I'm just going to let you guys know what it says. It says it is everything-to-everything, all modalities to all modalities, and that Arrakis exceeds GPT-4 capabilities and performs very close to human experts in many different fields. Hallucination rates are much lower than GPT-4, and half of all its training data was synthetic. Inference cost is around the same as GPT-4. And it's also very, very good as an autonomous agent, which is scheduled for release in 2024. Now, even if these leaks are completely bogus, I wouldn't be surprised if there is a model they're working on that does potentially contain some of these abilities, because although these claims could be false, we do know that these things are possible. Synthetic data is something that has been brought up time and time again and was proven with models such as Phi-2 and Phi-1 to be a very efficient way of giving models very high-quality training data. Autonomous agents have started to come around. And of course, we do know that exceeding GPT-4's capabilities isn't far away, with the open source community ramping up their efforts to catch up to OpenAI.
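The Phi-style result mentioned here, small models trained largely on high-quality synthetic data, comes down to using a stronger teacher model to write much of the training corpus. Below is a rough, purely illustrative sketch of that pipeline; the teacher_complete function and the topic list are placeholders, not any real OpenAI or Phi training code:

```python
import json

def teacher_complete(prompt: str) -> str:
    # Hypothetical call to a strong "teacher" model (normally an API call).
    # Stubbed out here so the sketch stays self-contained and runnable.
    return f"A short, textbook-quality explanation of: {prompt}"

TOPICS = ["binary search", "photosynthesis", "supply and demand"]

def build_synthetic_dataset(topics, samples_per_topic=2):
    """Ask the teacher model for textbook-style passages on each topic and
    collect them as prompt/response pairs for training a smaller model."""
    dataset = []
    for topic in topics:
        for i in range(samples_per_topic):
            prompt = f"Write a clear beginner explanation of {topic} (variant {i + 1})."
            dataset.append({"prompt": prompt, "response": teacher_complete(prompt)})
    return dataset

# Write the synthetic examples out in the JSONL format commonly used for fine-tuning.
with open("synthetic_train.jsonl", "w") as f:
    for row in build_synthetic_dataset(TOPICS):
        f.write(json.dumps(row) + "\n")
```

The quality filtering and curriculum design are where the real work is; the sketch only shows why synthetic generation makes it cheap to produce large amounts of targeted training text.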
So what we do have is a very, very clear idea of what these future models are going to be, because if we do get models that have video, which is basically going to be AGI at that point, and if we have an autonomous agent that performs close to expert level in every field, I mean, that is going to be some kind of general AI system that we really can't ignore. Now, remember how Sam Altman, at the end of his little talk right there, was actually talking about how this is going to be customizable and it's going to know exactly about you. OK, so I'm going to play that bit one more time. And personalization will also be very important. People want very different things out of GPT-4, different styles, different sets of assumptions. We'll make all that possible. And then also the ability to have it use your own data. So the ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas.

So you can see right there that these are some of the most important areas OpenAI are focusing on, which means that in the future the shift seems to be moving away from traditional AI models and towards personalized AI agents that will change how we live. And it goes without saying that this is exactly what Bill Gates was talking about just two months ago. If you aren't aware, Bill Gates wrote an article saying that AI is about to completely change how you use computers and pretty much the world. I did a full deep dive on that article, because some of the things he stated are absolutely incredible, and it really did change my entire view of the future of AI, because Bill Gates is something of a futurist; he has actually predicted a lot of things correctly. And I do believe some of the things he talked about in that article are most certainly going to come true. There is a full 29-minute video in which I dissect every point. But basically, he said that we are going to be using agents for pretty much everything, and gone are going to be the days where we hop onto a computer and do the work ourselves, once we have agents that can pretty much do everything for us. And we are not far away from that future; agents are already a thing.

So take a look at this. We will dive into some of the other points from the interview, because trust me, there are a lot, but if you've been paying attention, there has been this device that has been blowing up on the scene. This, if you don't know, is Rabbit's R1. It's an agent device, which basically can do anything for your personal life. In this demo, they essentially showed how this agent device was able to do pretty much anything you really wanted. Unlike Siri and other older assistants, which, you know, aren't even large language models, they're just basic AI systems that haven't been that effective, these guys have made their own kind of system that is essentially an agent that is super, super effective at being able to do anything you want. And it absolutely changes how you interact with the computer. For example, imagine I'm going to a computer. Instead of writing a 1,500-word essay yourself, you just ask the agent to do it. It browses Google, gets all the research, writes the essay, then publishes it and sends off the email. That is going to be the future of this.
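Under the hood, an agent like this is usually a loop in which a model picks a tool, the tool runs, and the result is fed back in until the task is done. The sketch below is a toy version of that loop with hypothetical tool names and a hard-coded "model"; it is not how the Rabbit R1 or ChatGPT actually work, just the general pattern:

```python
# Toy tool implementations standing in for real integrations (search, email, ride-hailing).
def search_web(query: str) -> str:
    return f"Top results for '{query}' (stubbed)."

def send_email(to: str, body: str) -> str:
    return f"Email sent to {to} ({len(body)} characters)."

def book_ride(pickup: str, dropoff: str) -> str:
    return f"Ride booked from {pickup} to {dropoff}."

TOOLS = {"search_web": search_web, "send_email": send_email, "book_ride": book_ride}

def fake_model(task: str, history: list) -> dict:
    # Stand-in for the language model that decides the next action.
    # A real agent would ask an LLM to choose a tool and its arguments.
    if not history:
        return {"tool": "search_web", "args": {"query": task}}
    if len(history) == 1:
        return {"tool": "send_email", "args": {"to": "me@example.com", "body": history[-1]}}
    return {"tool": None, "args": {}}  # signal that the task is finished

def run_agent(task: str, max_steps: int = 5) -> list:
    """Repeatedly ask the model for the next tool call and execute it."""
    history = []
    for _ in range(max_steps):
        decision = fake_model(task, history)
        if decision["tool"] is None:
            break
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(result)
    return history

print(run_agent("Research and email me a summary of the GPT-5 interview."))
```

Whether the controller is GPT-5, a future agent model, or something like the R1's system, the loop is the same: the model plans, tools act, and results feed the next decision.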
If you haven't seen this, I'd recommend watching the full demo, because some of the things it shows you are pretty crazy. In the demo, he simply says, OK, book me an Uber ride, and it pretty much does it really, really quickly. Right now, if you ask Siri to book you an Uber ride, the best it can do is open up the app. And we know that Apple is going to be really slow to release anything, so it could be a couple of years before we get anything from Apple, because all they want to do is polish it. So, like Sam Altman said, it's likely that we're going to be getting an autonomous version of GPT-5, maybe some kind of GPT-5 agent version, maybe a future version. But personalization of the software is going to be a key focus. Remember, these are his words: he said that is going to be the key focus. And it makes sense, because this device's demo blew up on the channel, it blew up on Twitter, it blew up on every social media platform, and understandably so. Robotic vocals and an innovative use of synthesizers have left a lasting impact on the genre. I can also use R1 to call a ride. Get me a ride from my office to home now. Of course, I will book an Uber ride for you from your office to your home. Please confirm the ride. I have six people with three luggages. Find me an Uber that can fit all of us. For six people and three pieces of luggage, I recommend booking an Uber XL, as it provides ample space for all passengers and luggage. Please confirm the ride. The ride shows up, I just hit confirm, and the Uber's on its way.

So that right there, ladies and gentlemen, is largely going to be the future, because right now, interacting with your phone and your computer is, I wouldn't say tedious, because we all know how to use them very effectively, but we could save so much time if we just spoke, just entered a command, and the AI agent, the autonomous agent, was able to do pretty much anything we wanted it to. That would pretty much change the entire world in terms of how we work and interact with technology. And this is where things start to get a little bit more concerning, and this is why I would pay attention here if I were you, because this is where we talk about robotics. So take a listen to this, because Sam Altman and Bill Gates are here discussing the robotics side of AI.

One aspect of AI is robotics, or blue-collar jobs, when you get sort of hands and feet that are at human-level capability. And, you know, the incredible ChatGPT breakthrough has kind of gotten us focused on the white-collar thing, which is super appropriate. But I do worry people are losing focus on the blue-collar piece. So how do you see robotics? Super excited for that. We started robotics too early, and so we had to put that project on hold. It was hard for the wrong reasons. It wasn't helping us make progress with the difficult parts of the ML research. And, you know, we were dealing with bad simulators and breaking tendons and things like that. And we also realized more and more over time that what we really first needed was intelligence and cognition, and then we could figure out how to adapt it to physicality. And it was easier to start with that, with the way we've built these language models. But we have always planned to come back to it. We've started investing a little bit in robotics companies.
I think on the physical hardware side, there are finally, for the first time that I've ever seen, really exciting new platforms being built there. And at some point, we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, all right, let's do amazing things with a robot. But it's the hardware, guys. So essentially, right there, that is OpenAI's stance on robotics. And we've covered this previously, where we've spoken about how OpenAI is investing in companies like OneX Robotics. And trust me, guys, this is a bigger deal than you all think, because if robotics actually is successful, if it does take off, this is going to change the world in the last frontier that everyone thinks is the hardest: physical interaction with the real world.

If you know anything about OpenAI, you'll know that the company they've invested in is called OneX Technologies. And recently, they've secured $100 million in a Series B funding round to bring Neo to the consumer market. Now, if you don't know what Neo is, Neo is their humanoid robot that embodies artificial intelligence. If you haven't seen the multiple videos I've posted on this before, I'll give you guys a quick demo. You can see right here that this is the robot they're planning on building. The robot was supposed to be revealed last summer, but I'm guessing they hit some small issues that they are trying to iron out. But recently, they've actually announced that there's going to be some kind of small demo, or an in-person event, I guess you could say, that includes Neo, and I'll show you guys a screenshot of that in a moment. But this kind of robot will absolutely change everything. Because imagine having an AI agent that's able to use a computer, an AI system that's able to do everything on the internet and the web that people do day to day at work, receptionists, CEOs, where all you do is give a voice prompt and then everything is done with 100% accuracy. Imagine we get that in a robot that's able to physically interact with the world, which is essentially what they're building. And you might be thinking, no, this is just one company, they're going to fail, they're going to have issues. But if we go over some of their other androids, Eve is actually a pretty decent robot that does interact with the environment. This one isn't based on AI, but you can see right here that logistically it's very, very good. It can move shipments, it can run around, it can be a security guard, and it is really good at opening doors. You can see it recreates tasks. The only problem we have with robotics right now is that it's really, really expensive.

But this brings us to another issue, the one they do talk about and the one that concerns all of us. Now, like I said, there is an update here. You can see it's January 25th to the 26th, 2024. It says we're inviting 25 of the world's most forward-thinking engineers to the inaugural open house at OneX HQ in Moss, Norway. The two-day event offers a unique opportunity to deep dive into the heart of OneX, where innovation meets robotics. And it says, get this, our selected guests will get to engage with the inner workings of our androids, Eve and Neo.
OK, and Neo is the one that OpenAI is investing in and planning to collaborate with on AI. The invite goes on to say that guests will witness the meticulous craftsmanship and interact with the engineers shaping their intelligence. So that would be fascinating, because I really do want to see if this is kind of public, if they have some public event; that would be amazing. But this is the biggest concern from this interview, OK? And this is something I've talked about before: changes in the labor market. If they manage, and I think they will, to successfully get AI-powered robots that are, you know, 90 to 100 percent effective, and they get them cheap, this is going to change the entire labor market. So take a look at this, because it explains everything.

Get the arms, hands, fingers piece, and then we couple it, you know, and it's not ridiculously expensive. That could change the job market for a lot of the blue-collar type work pretty rapidly. Certainly the prediction, the consensus prediction, if we rewind seven or ten years, was that the impact was going to be blue-collar work first, white-collar work second, creativity maybe never, but certainly less, because that was magic and human. Obviously, it's gone exactly the other direction. And I think there are a lot of interesting takeaways about why that happened. You know, for creative work, the hallucinations of the GPT models are actually a feature, not a bug. They let you discover some new things, whereas if you're having a robot move machinery around, you'd better be really precise with that. And I think this is just a case of you've got to follow where technology goes and you have preconceptions, but sometimes the science doesn't want to go that way.

So right there, they're essentially saying that previously we thought AI was never going to affect artists, but it did. We also thought that AI, when it got good, was only going to affect lower-skilled work, but now it's affecting literally the white-collar jobs, some of the most highly rated professions like accountants, and all these kinds of industries. So this is going to change absolutely everything. And of course, this brings us to a real question: what do we do when these robots are better than us at absolutely everything, and they're able to do the mundane tasks that give us, quote unquote, meaning, that give us a sense of purpose? Bill Gates here perfectly explains why this is such an issue, because if an AI system, an autonomous agent, is able to research, is able to do everything better than you, what is the point of your existence? So take a look at this, because this is one of those fundamental questions: once we overcome this barrier, and he even says this is something that is going to be overcome, what are we going to do about it? Take a look, because Bill Gates himself looks almost confused, because he doesn't understand, and I don't think anyone does at this moment, how to get over this issue. This is one of the smartest people out there, and he's really struggling to grapple with this concept.

Incredible capability, you know, AGI, AGI plus. I guess, you know, there's three things I worry about. One is that a bad guy is in control of the system.
And so we have good guys who have equally powerful systems, which hopefully minimizes that problem. There's the chance of the system taking control, and for some reason I'm less concerned about that; I'm glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I'm good at working on malaria and malaria eradication and getting smart people and applying resources to that. When the machine says to me, Bill, go play pickleball, I've got malaria eradication, you're just a slow thinker, then, you know, it is a philosophically confusing thing. And how you organize society. Yes, we're going to improve education, but education to do what? If you get to this extreme, which we still have big uncertainty about, for the first time the chance that it might come in the next 20 years is not zero. There are a lot of psychologically difficult parts of working on the technology, but this is, for me, the most difficult, because I also get a lot of satisfaction from that. And in some real sense, this might be the last hard thing I ever do. Well, our minds are so organized around scarcity, scarcity of teachers and doctors and good ideas, that partly I do wonder if a generation that grows up without that scarcity will figure out the philosophical notion of how to organize society and what to do. Maybe they'll come up with a solution. I'm afraid my mind is so shaped around scarcity, I have a hard time thinking of it. That's what I tell myself, too. And I truly believe that although we are giving something up here in some sense, you know, we are going to have things that are smarter than us, if we can get into this world of post-scarcity, we will find new things to do. They'll feel very different. You know, maybe instead of solving malaria, you're deciding which galaxy you'd like and what you're going to do with it. I'm confident we're never going to run out of problems, and we're never going to run out of different ways to find fulfillment and do things for each other and sort of understand how we play our human games for other humans in this way. That's going to remain really important. It's going to be different, for sure, but I think the only way out is through. We just have to go do this thing. It's going to happen. This is now an unstoppable technological course. The value is too great. And I'm pretty confident, very confident, we'll make it work.

But yeah, that clip right there. I mean, I'm definitely going to be watching that clip again and again, because it's a question that you do have to ask yourself: once they're just better at everything you're good at, where do you go? I'm actually just speechless at this point. But, you know, like he said, in the next 20 years the chance is not zero. That doesn't mean it's coming today, and it doesn't mean it's coming tomorrow, but like Sam Altman said, this is something that they are working on and something that is eventually going to happen, and it's a reality we are going to have to face someday. It is a concern of mine, although I do know there are some examples, some industries, that no matter what, it's just not going to affect. For example, entertainment and sports. You know, you have AI systems that are better at chess than any of the chess grandmasters.
You've got AlphaGo, systems that are better than any Go grandmaster. But some things we always just want to see a human do, and some things we're always going to want a human for. But, you know, I think that's the one percent of roles, because for the other 99 percent, robots are going to be better than us at many things. And where is the mass of humanity going to exist in that scenario? That's a question we also have to face. But of course, they also talk about the cost of intelligence, and the cost of intelligence dropping towards zero is going to be a very good thing. In this clip, they essentially talk about how the cost of intelligence, in terms of running these large language models, is just dropping and dropping and dropping. For example, GPT-3.5: they keep dropping the cost of that, and we've seen that a lot of these models are getting cheaper and cheaper to run; someone literally ran Mistral on their phone the other day, and every single month we get cheaper and cheaper devices. I don't know how that problem is going to be solved, but like Sam Altman said, one thing is for sure: we will always have problems to solve. So let me know what you thought about this interview with Bill Gates and Sam Altman, what you thought about GPT-5, and what you think about the future and robotics. I thought this interview was definitely phenomenal, as long as you paid attention to some of the key parts. But yeah, leave your comments down below.