
In his latest video, Fireship reveals serious coding errors in the Rabbit R1 that expose users to significant risk. The device's developers made a mistake he calls inexcusable, even downright catastrophic: the faulty code allows third parties to view every message ever sent from any device and to modify responses before they reach users. Worse still, the exposed credentials could be abused to brick every R1 in existence. Rabbit's developers now have to face the consequences. In today's update, Fireship explains how to avoid making similar mistakes when shipping your own tech product.

Fireship mentions that he first encountered the Rabbit R1 at CES in January, where he was struck by its utter uselessness and the abundance of cringe-inducing buzzwords from its CEO. Despite the early skepticism, the device sold out its pre-orders almost instantly. After the initial hype, however, the product quickly became the laughingstock of 2024's tech products, and its origins in crypto and NFT scams fueled further criticism. Even so, no one expected the development team to make mistakes this fundamental.

The errors were exposed by Rabbitude, a group specializing in reverse engineering the R1. Upon gaining access to the Rabbit codebase, they discovered hard-coded API keys for various external services, the most concerning being 11Labs, an AI text-to-speech platform. Every response the Rabbit R1 produces requires a query to the 11Labs API, so if that key fell into the wrong hands, users would be directly at risk. It is important to underline that this situation is not 11Labs' fault.
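The core mistake described above is embedding a secret directly in source code. A minimal sketch of the anti-pattern and the usual fix, reading the secret from the environment at runtime (the `TTS_API_KEY` name is illustrative, not Rabbit's actual configuration):

```python
import os

# Anti-pattern: a key hard-coded in the repo ships with every
# clone, fork, and leaked copy of the codebase.
# TTS_API_KEY = "sk-abc123..."   # never do this

def get_api_key() -> str:
    """Read the secret from the environment at runtime instead."""
    key = os.environ.get("TTS_API_KEY")
    if not key:
        raise RuntimeError("TTS_API_KEY is not set")
    return key
```

With this pattern the codebase itself contains nothing worth stealing; the secret lives only in the deployment environment.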

Fireship notes that, according to Rabbitude, Rabbit knew about the exposed 11Labs API key for a month, and its apparent plan was to ignore the issue and hope it would go away. Fortunately, at the time of writing, Rabbit has rotated its API keys and nothing catastrophic has happened to user data, but serious questions remain about the company's security procedures. The broader lesson: never hard-code API keys, and treat them with the same care as passwords. Otherwise, a stolen key can mean severe financial and reputational losses.
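Rotation only helps if services pick up the new key without a redeploy. One common pattern is to cache the key briefly and re-read it after a time-to-live, so a rotated value takes effect within minutes. A minimal sketch (the class and variable names are illustrative):

```python
import os
import time

class RotatingKeyProvider:
    """Caches a key briefly and re-reads it after a TTL, so a
    rotated key takes effect without restarting the service."""

    def __init__(self, env_var: str, ttl_seconds: float = 60.0):
        self.env_var = env_var
        self.ttl = ttl_seconds
        self._cached = None
        self._fetched_at = float("-inf")  # force a fetch on first use

    def get(self) -> str:
        now = time.monotonic()
        if now - self._fetched_at >= self.ttl:
            # Re-read the current value from the environment
            # (in production this might query a secrets store).
            self._cached = os.environ[self.env_var]
            self._fetched_at = now
        return self._cached
```

The same shape works when the backing store is a secrets manager rather than an environment variable; only the fetch line changes.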

For anyone who owns a Rabbit R1, Fireship jokingly suggests a drastic solution: douse it in gasoline and throw it into a deep borehole. At the time of writing, the video has already accumulated 972,702 views and 40,007 likes, showing how much interest this controversy has sparked in the tech community.

Timeline summary

  • 00:00 Introduction to the controversy surrounding the Rabbit R1.
  • 00:06 Developers made catastrophic coding errors affecting user messages.
  • 00:12 Serious vulnerabilities allow unauthorized message access and alterations.
  • 00:20 Discussion of how the company addressed these issues.
  • 00:32 Overview of the speaker's initial impressions of the Rabbit R1.
  • 00:41 Critique of the Rabbit R1's functionality and buzzwords from the CEO.
  • 00:53 Reveal of the Rabbit R1 as mainly an Android app.
  • 01:07 Description of a major coding error related to hard-coded API keys.
  • 01:20 Rabbitude's discovery of the hard-coded API keys.
  • 01:30 Impact of mishandled API keys on the Rabbit R1's functionality.
  • 01:50 Potential exploits stemming from exposed API keys.
  • 02:06 Clarification on the responsibility of 11Labs regarding API security.
  • 02:27 Speculation about how the API key leak occurred.
  • 02:51 Rabbitude's claims of Rabbit's negligence towards the exposed API.
  • 03:15 Advice on treating API keys with caution.
  • 03:38 Challenges of key rotation for production apps.
  • 03:57 Discussion of encryption and security tools for sensitive APIs.
  • 04:16 Dark humor regarding disposal of the Rabbit R1.
  • 04:22 Closing remarks and sign-off from The Code Report.

Transcription

Another day, another controversy for the Rabbit R1. Apparently this time, its developers wrote some bad code. Like inexcusably, catastrophically bad code. Code that allows someone to view every message ever sent on all devices. Code that allows the attacker to alter the messages sent to the end user. And code that can brick every R1 in existence. It's outrageous, egregious, and preposterous. Mischievous, salacious, outrageous. But the most shocking part about this story is what the company did to fix it. In today's video, we'll find out how this is even possible, so you don't make the same mistake when shipping your own half-baked AI product. It is June 27th, 2024, and you're watching The Code Report. I first encountered the Rabbit R1 at CES in January, where I was blown away by its utter uselessness, along with the amount of cringe buzzwords used by its CEO. Despite setting off my bullsh** detectors, these devices sold out pre-orders within the first few minutes. But after the initial hype, the Rabbit R1 has been the lolcow of tech products of 2024. It was exposed as being nothing more than an Android app under the hood. It was revealed to have origins in crypto and NFT scams, and when it actually shipped, it was even more useless than people imagined. But never in my wildest imagination would I expect their developers to make a mistake like this, hard-coding API keys directly into the codebase. This mistake was discovered by a group called Rabbitude, which is dedicated to reverse engineering the R1. According to their statement, back on May 16th, they obtained access to the Rabbit codebase. And inside that codebase, they found hard-coded API keys for 11Labs, Azure, Yelp, and Google Maps. The most problematic one is 11Labs, which is an AI text-to-speech platform. When you talk to the Rabbit R1, it turns your speech into text. It then passes that text off to a large language model to generate a response. 
But before that response goes back to the end user, that text needs to be converted back into speech. And that's what 11Labs does. That means the R1 needs to make an API call to 11Labs for every response ever sent by the R1. And that means if someone ever got the 11Labs API key, they'd be able to read every R1 response in history, they'd be able to change responses, and they could just delete the AI voices from 11Labs to brick every single R1 in existence in a matter of seconds. And that is quite the exploit. And just to be clear, it's not 11Labs' fault. But there's one thing we need some clarification on. In the statement, it says Rabbitude gained access to the Rabbit codebase, which I assume is referring to the back-end Rabbit codebase. The details are sparse, but I don't think they actually put an API key in their Android APK, which would be an even more absurd mistake, because you should never put secret API keys in client-side code. Even my 5-year-old knows that. It seems more likely that Rabbit has a leaker, like an employee dumping the code onto a USB and potentially breaking the law to share it. Leaking is a risky business. Julian Assange once leaked war crimes for which he was treated like a criminal. Even Hillary said, can't we just drone that guy? And he just regained his freedom a few days ago. Now the thing is, when those war crimes leaked, the war criminals didn't stop doing war crimes. And what's crazy is that Rabbit took a similar approach. According to Rabbitude, they've known about this exposed 11Labs API key for a month, and their solution was to just ignore it and hope the problem goes away. This information, I assume, is also coming from the leaker. Now, at this point, Rabbit has rotated its API keys, and this group is operating for the public good, thus nothing catastrophic has happened to user data, but there's a lot of good reasons not to hard-code API keys into your code. 
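[Editor's note] The "never put secret API keys in client-side code" rule above can be made concrete: the device should send only the user's payload to the vendor's own backend, and only that backend, which holds the key, builds the authenticated call to the speech API. A minimal sketch of the split (function names and header format are illustrative, not 11Labs' actual API):

```python
def client_request(user_text: str) -> dict:
    """Everything the device sends: the payload, no secrets."""
    return {"text": user_text}

def server_forward(client_payload: dict, secret_key: str) -> dict:
    """Only the backend, which holds the key, attaches the
    credential before calling the text-to-speech service."""
    return {
        "headers": {"Authorization": f"Bearer {secret_key}"},
        "json": client_payload,
    }
```

Nothing a reverse engineer extracts from the device or the APK contains the credential; compromising it requires breaching the server instead.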
In fact, I have a new full Linux course coming up for Fireship Pro members that coincidentally talks about this issue. Might as well leave you a discount code if you want to get early access. An API key is like a password and should be treated with the same amount of respect. If a hacker gets a hold of one, they could retrieve and delete all your data, and cost you a bunch of money in the process. When an API key is hard-coded, one thing you might do is accidentally push it to a public Git repo. And right this very moment, there are scraperbots out there watching every Git repo for exposed API keys to exploit. Problem number two is that it makes key rotation more difficult. Generally, production apps should rotate API keys every 30 to 90 days, but a high-profile app like the R1, where it's known that people are actively trying to reverse-engineer it, could be even more paranoid and rotate their keys every week or so. And this process can be automated with zero downtime. There's really no excuse. Another reason you don't just hard-code is that for a key that's sensitive, it should also be encrypted. Tools like AWS Secrets Manager offer multiple layers of protection, so even if someone gets access to your server, they still shouldn't be able to get the API key. And even if someone does try to get access to it, those requests are logged, which would immediately identify the leaker, so Rabbit management could then put cyanide in their soy latte. Now, if you own the Rabbit R1, there is a recommended solution, and that is to douse it in gasoline, apply a flame to it, and chuck it in the Kola Superdeep Borehole. This has been The Code Report. Thanks for watching, and I will see you in the next one.
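[Editor's note] The scraper bots mentioned above work by pattern-matching every public commit against known key formats, which is also how pre-commit hooks catch leaks before they happen. A minimal sketch of such a scan (the regexes resemble common key formats but are illustrative, not an exhaustive or official list):

```python
import re

# Regexes resembling well-known key shapes; public-repo scraper
# bots run similar scans against every new commit within seconds.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # "sk-"-prefixed secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def scan_for_keys(source: str) -> list[str]:
    """Return any key-like strings found in the given text."""
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Running a scan like this in a pre-commit hook turns "oops, I pushed a key" into a rejected commit instead of an incident.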