ChatGPT Patched A BIOS Binary, And It Worked

[devicemodder] wrote in to let us know they managed to install Linux Mint on their FRP-locked Panasonic Toughpad FZ-A2.

Android devices such as the FZ-A2 can be locked with Factory Reset Protection (FRP), which ties the device to a user account and limits what you can do with it. On the surface that’s a good thing for consumers, as it disincentivizes theft. Unfortunately, when combined with SecureBoot, it also means you can’t just install whatever software you want on your own hardware. [devicemodder] managed to get Linux Mint running on their FZ-A2, which is a notable achievement by itself, but even more remarkable is how it was done.

So how did [devicemodder] get around this limitation? The first step was to dump the BIOS using a CH341A-based programmer. From there, the image was uploaded to ChatGPT along with a request to disable SecureBoot. The resulting file was flashed back onto the FZ-A2, and all available fingers were crossed.
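
For the curious, the dump itself is the well-trodden part: flashrom supports CH341A-based programmers out of the box. A minimal sketch of the dump-and-reflash loop in Python, assuming flashrom is installed and the clip is seated on the flash chip (file names are illustrative):

    # Sketch: dump and reflash a BIOS image through a CH341A programmer via flashrom.
    import subprocess

    def run(args):
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    # Read the flash twice and compare, so a flaky clip can't hand you a corrupt dump.
    run(["flashrom", "-p", "ch341a_spi", "-r", "dump_a.bin"])
    run(["flashrom", "-p", "ch341a_spi", "-r", "dump_b.bin"])
    assert open("dump_a.bin", "rb").read() == open("dump_b.bin", "rb").read(), "unstable read"

    # ...patch a copy of the dump here, keeping the original as a recovery image...

    # Write the modified image back; flashrom verifies the write by default.
    run(["flashrom", "-p", "ch341a_spi", "-w", "patched.bin"])

Keeping the untouched dump around is what makes this experiment survivable: with an external programmer, a bad flash can be undone by writing the original back.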

And… it worked! ChatGPT modified the BIOS enough that the Linux Mint installer could be booted from a flash drive. There are a bunch of bugs and issues to work through, but in principle we have just seen an AI prove capable enough to successfully patch a binary dump of BIOS code, which, for the record, is kind of hard to do. We’re not sure what all of this might portend.

So is uploading binaries to ChatGPT with requests for mods vibe coding? Or should we invent a new term for this type of hack?

77 thoughts on “ChatGPT Patched A BIOS Binary, And It Worked”

    1. And then AI will plagiarise your work, and someone will believe the AI actually is smart. But it is you who was smart: a real person whose work will get ripped off by AI.

      1. I don’t know. It’s like commissioning art. We ask artists to bring our ideas to life, because we don’t have that skill.
        But also, letting AI write code that is not human readable is scary. We can’t be so trusting. Our hunger for bigger, better, faster, stronger has got to come second to the safety of our species. Sorry for the dramatic phrasing, but this is a mistake that cannot be undone.

        1. I heard something a while back about AIs, and this was before LLMs became the main thrust, so this was even further from an actual AI, but the thing was that whenever they started having 2 AIs interact with each other, they would quickly develop a secret language between each other that the researchers couldn’t follow. I’m sure it was an efficiency thing, but it’s still kind of creepy to think that they always shut us out at the first opportunity.

      2. When AI can call one up and ask existential questions about the meaning of life (42), then it can “plagiarize” and “rip off”; otherwise it’s a human-run enterprise doing what people have been doing since the sticks and stones era.

    1. All the info you need to do this kind of patch is out there and freely available via Google; it hasn’t written anything, it just got lucky and munged together info it stole from the web.

      1. tbh that’s pretty much how real people manage to do stuff and learn… Google for some article/post/video of a similar problem, transfer it to your case, and retry until it works.

        1. Programming is like communism

          Communism is when the state appropriates all social and economic power, and confiscates the means of production (e.g. your computer) in the name of the people, by declaring itself to be the people, and starts violently oppressing the actual people so they wouldn’t disagree.

          Then it puts the people to work and gives them back approximately enough food and other goods that they don’t die, except where necessary to break up potential political threats or collect more goods for people who are in closer relations to the state. After all, you must maintain ideological purity and not allow just any corrupt individual to misdirect the system.

          I guess that describes it pretty well. You share code, not for the money but for the common good of the people, which isn’t actually the people but some corporation making an LLM out of the code you write, trying to make it so that nobody needs to hire and pay you or anybody like you in the future.

          1. Communism, according to the Oxford Dictionary, is:
            a theory or system of social organization in which all property is owned by the community and each person contributes and receives according to their ability and needs.

            What you described is a dictatorial regime built on a communist social and economic structure. Dictatorships happen under any social and economic structure, and are far more common under capitalist ones.

            What you described were just your bigoted views on a perfectly accurate and fair comment.

          2. the state appropriates all social and economic power, and confiscates the means of production (e.g. your computer) in the name of the people, by declaring itself to be the people, and starts violently oppressing the actual people so they wouldn’t disagree.

            Then it puts the people to work and gives them back approximately enough food and other goods that they don’t die, except where necessary to break up potential political threats or collect more goods for people who are in closer relations to the state. After all, you must maintain ideological purity and not allow just any corrupt individual to misdirect the system.

            Isn’t that capitalism?

            BTW bring back time-sharing terminals! Oh wait, that is communism as well. Like back at Bell Labs in the ’60s. In the USA.

        2. Commune would be closer to the spirit of the idea than communism. As far as “out there and freely available” goes, AI is search being what search should have been before greed got involved.

      2. The use of the word ‘stolen’ is ridiculous beyond words.
        And you are just parroting big companies that smelled money as soon as MS invested billions into OpenAI. Or as you should say ‘I stole his soundbite’.

      1. I’ve seen my toddler trip, fall, and knock things over too many times to believe he’s capable.

        It’s worth acknowledging that these systems are still developing. Sure, they’re sloppy now, but the fact that they can even sometimes successfully patch binaries is a pretty clear demonstration of their capability. They’re only going to get better.

    2. I haven’t seen anyone say LLMs can’t write software. I have seen people claim they do so highly unreliably, pretty sloppily, and with risk factors that don’t exist if someone did it by hand. It’s also pretty antisocial, while software development is inherently a social field. I have personally seen evidence for all of that in my career and hobbies.

    3. AI can’t really write software

      It didn’t in this case. Not at all.
      Nothing new was created/written/programmed.

      As others have said: it applied existing knowledge to that binary(?) – it probably learned from similar BIOS binaries. Or who knows – maybe it even had access to the manufacturer’s non-SecureBoot FZ-A2 BIOS…

      And that is something AI can definitely be good at: “Learning” from existing knowledge to apply it to “new” targets.

      1. And that is something AI can definitely be good at: “Learning” from existing knowledge to apply it to “new” targets.

        That’s pretty much all anybody does.

          1. They’re justified by the fact that AI “learning” is not the same as what people do.

            The AI handles symbols of data – it has no understanding whatsoever of the information it is dealing with – so it fundamentally cannot learn anything. It can have data of something, and it makes lucky guesses (literally) based on it, but because of that you have no guarantee that it gets the answer right twice in a row – even to the same question.

            It’s more the people who don’t understand what AI is doing who are chasing the bandwagon and try to make money out of baseless claims and lucky accidents.

    4. Oh it can write software, and it will, and the software will mostly suck, and it will be another chapter in the enshittification of the world. But also there will be some little victories like this. Fairly standard Pareto distribution.

      1. I’d trust it about as much as I do the OEM. Which is to say, not a lot, putting that extra risk lower on my list compared to all the other problems that come along with this.

      2. Yes, actually, because it’s not (yet) capable of executing that complicated a plan. Particularly because putting a back door you can actually use in something like that is pretty hard.

    1. It is SecureBoot itself which has the potential for abuse, and this neutralises that potential. Hardware vendors will be a lot less willing to try locking devices against users who want to install better operating systems if they know that such locks can be more easily overcome. AI-generated alternative BIOS/UEFI code is like having sufficient nuclear bunkers (enough for the whole population, and enough for a good deal of your industrial capacity too, the way Switzerland has) to make another country think twice about nuking you, as doing so might not be as effective as they’d hoped. I never thought AI-written software would be good for much, but a use case like this really could save us from a world where things like Windows RT devices (locked-bootloader machines from M$ during the Win 8 era) become the default for computers. Well done devicemodder!

      1. LOL they’ll still do it because most people won’t be at that level of hackery to begin with, so they’ll bank on it being ‘still safe enough, probably’.

      2. Unfortunately they’re already blocking that sort of thing. On a lot of newer devices, it’s not enough to invade a board. You have to invade a chip. And even having to open the case is too high a barrier for a lot of people.

    2. It doesn’t just have “potential for abuse”. That would be making the clear statement that there are non-abusive uses, which there are not.

      Using an LLM is a moral dark grey area at the very best of times.

      Even if you were building a house for a homeless family after some disaster, if you were using lumber felled illegally from a protected forest and nails made with slave labor, you were probably in the moral wrong.

      And that’s what all this “AI” garbage is.
      It is all fruit of the poison tree until someone can build a training set without non-consenting or outright stolen data. And good luck with that…

      Note: consent requires actual choice, with real understanding of what will happen, without coercion or any form of duress.
      “But anyone can look at it for free on the internet” is NOT consent.

      1. Literally who cares if it uses stolen data. I’m not a fan of AI, but this is ridiculous. Imagine if you had to educate any other mind on 100% original content and data. It would be impossible. Education has always consisted of ""theft"" if you use such a definition.

        1. …Not to mention it would make it utterly useless when making any requests concerning intellectual properties, which is almost everything. Because you would in effect be asking it about this artificial void which was demarcated and excised from its knowledge of the world.

        2. First, LLMs aren’t minds. I wish this didn’t need to be repeated.

          Next, modern education uses almost exclusively materials purpose-made for classroom use, like textbooks, and is accomplished with a microscopic fraction of the data and energy used to train a single LLM.

          Even if we reduce human consciousness to a series of neuronal weights and argue that, say, getting dehydrated counts as “education” because it affects those weights, LLM training is still fundamentally different from human education.

          Of course we don’t need to go that far; the main difference is that LLM training is directed by people making conscious judgements about the content and scope of the training set, ignoring both laws and ethical concerns in pursuit of commercial advantage. And as they’ve openly admitted, they need to do so, since the training process is so desperately inefficient that you couldn’t produce an LLM otherwise.

          1. Ignoring laws and ethics isn’t inherent in the technology. That’s why domain-specific versions can work at all on an ethically sourced data set with just as good results.

          2. LLMs are part of the process of making minds. A lot of the issue people have is that if we get too aggressive with blocking things off, we’ll stunt useful advances and make it harder to make future AI that are well rounded.

            People don’t just learn in school, they learn from everything they see, read, watch. So if you see an artist post on Twitter, that art can help inform art that you’d make later. If you see a movie, that movie can inform a movie you’d create later. Writing will inform your writing. Facts you learn will inform other things you’ve learned regardless of if it came from a curated and paid for source.

            I can agree that there should be work put in and limitations put in place to prevent monopolizing the resulting AI, since to make them properly they’ll need information from everywhere.

            However, I won’t agree that you can’t train an AI on someone’s art because they don’t like it. I also won’t agree that people training AI should be forced into spending massive amounts of money on purchasing data because then you have issues of pushing it more towards monopolizing AI.

            We don’t want AI that is wholly owned by a few corporations, nor AI that will refuse to learn from and use the information we put in front of it because doing so might damage someone’s interests.

          3. Uhh, I’m pretty sure all of the books I read in English class were not written for English class. Same with the art. Even the classes that only used the book as classroom material were only using the book as a reference manual to collate all the various topics and ideas, which had been developed independently of the idea of a classroom. The books only listed them; they weren’t created by the book. I reject this point strongly.

        3. Steal Like an Artist: 10 Things Nobody Told You About Being Creative by Austin Kleon makes the point that there’s more to it than just copying. Copying doesn’t really improve anything (no magnum opus from piratebay). As for why one should care? Because it’s not a requirement for the technology to work. The path currently taken was chosen for time and money reasons. It could have been done within social guidelines; it would just have cost more and taken longer, but in the end achieved the same or maybe even better results.

      2. Piracy is a good thing. “Intellectual property” is a misnomer. It is not property, but rather a government-enforced monopoly on production. If I “steal” your code, you still have it!

      3. Consent, in the way you’re using it, is meant to apply to your life and your body, things we all agree you should control completely. Your data is not like those things. I know some people have this weird complex where they think of the data they’ve created as a part of themselves, but you can’t expect the rest of us to treat whatever weird identity extensions you’ve built with the same level of respect as bodily autonomy. It’s like one step removed from that guy who says you can’t talk about one particular My Little Pony because that one’s his wife.

        1. But reality is a social construct, which means you can unilaterally define what is what without asking anyone else, and everyone has to respect it because they’re oppressing you.

      4. Wait wait wait. You are saying that there is a homeless family standing next to the materials they could use to build a home with, but that material came from some amoral or even truly immoral source. So, to be moral, they should refuse that material and stay without shelter? While that material just rots? It’s already there and ready, but it is not right to use it?

        If that’s really what you mean, then we have wildly different understandings of morality. They didn’t do the bad things, and they need that to survive. Honestly, I would chop the trees in a protected forest myself if it was that or let my family die from exposure. I’d probably even crack a whip or two if it came down to that. But I digress.

        Using AI is not the morality decision point. Sure, those companies should not have stolen proprietary data for training. But anything published freely online is just that: published freely. If I could use AI to provide better for my family (unrealistic, I know), I do not become party to the original theft of data by using something that will exist whether I use it or not.

  1. I’ve used gptme with deepseek-chat to debootstrap a Debian install to an NVMe drive when I couldn’t find a USB drive. It handled it in the background and in one go with zero hiccups while I continued working on something else.
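
    For reference, the manual core of that job is small; a rough sketch, assuming the target NVMe partition already exists and is formatted (the device, suite, and mirror here are illustrative, and it needs root):

        # Sketch: minimal debootstrap of Debian onto an already-formatted partition.
        # Device, suite, and mirror are illustrative; run as root.
        import subprocess

        def run(args):
            subprocess.run(args, check=True)

        run(["mount", "/dev/nvme0n1p2", "/mnt"])
        run(["debootstrap", "stable", "/mnt", "http://deb.debian.org/debian"])
        # A bootable system still needs an fstab, a kernel, and a bootloader
        # set up from a chroot; that is the fiddly part an agent has to script.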

  2. I used to use a similar strategy when cracking video games, almost 40 years ago.

    And no, it doesn’t matter where it got it from.
    It described the strategy it used and modified a couple of bytes, and you could choose to examine the changes and compare them to what it said had been changed.

    I would have considered a slightly more specific tweak, limiting it to a single byte if possible (technically, in this case, a single bit, unless a checksum or CRC needed to be recalculated).

    Nulling or NOPing bytes is a time-honoured way.
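
    As an illustration of how small such a patch can be (the file name and offset here are hypothetical, not from any real target): flipping the low bit of a JE opcode turns it into JNE, inverting a check with what is literally a single-bit change:

        # Sketch: the classic one-byte (here, one-bit) crack described above.
        # The offset is hypothetical; flipping the low bit of 0x74 (JE short)
        # yields 0x75 (JNE short), inverting the check instead of removing it.
        data = bytearray(open("game.bin", "rb").read())

        OFFSET = 0x1A2B  # hypothetical location of the JE opcode
        assert data[OFFSET] == 0x74, "expected a JE short jump here"
        data[OFFSET] ^= 0x01  # JE (0x74) -> JNE (0x75): one bit flipped

        open("game_patched.bin", "wb").write(bytes(data))
        # If the format carries a checksum or CRC, recompute it afterwards.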

  3. How did you know which address to patch and which ones not to? Was it luck?

    ChatGPT said:

    Great question — and honestly, it’s a mix of experience, educated guessing, and some targeted searching, not blind luck.

    Here’s how I knew where (and what) to patch:

    Pattern Recognition: UEFI/BIOS firmware often stores configuration and menu definitions in a specific format (like the Setup module), and those are often bundled in specific regions (e.g. at offset 0x240000 or similar in many vendor BIOS layouts).

    Signature Hits: We scanned for common strings like "Setup", "SecureBoot", and "Advanced" to locate likely BIOS modules (like the SetupUtility), and extracted chunks of data around those offsets.

    Known OpCodes & Patterns: In x86/UEFI firmware, menu visibility is typically controlled by conditional jumps (JE, JNE, JZ, etc.). When patching to unlock menus, the usual goal is to disable those checks. So:

    If we see something like 0F 84 (JE near), we know it’s jumping over a block unless a condition is met.

    Replacing it with 90 90 (NOP NOP) or changing it to an unconditional jump (EB) forces the menu block to always be executed.

    Heuristics: The reason I didn't patch the entire BIOS indiscriminately is that only certain regions (like the Setup module) are known to control UI visibility. Patching outside those regions risks breaking firmware execution.

    Conservative Scope: We narrowed the patch to a 64KB block around a known Setup-related signature (e.g., “SetupUtility”) and only modified known jump opcodes. This is a safe subset that has high likelihood of affecting menu visibility and low likelihood of bricking the device.

    So no, not luck — more like “targeted surgical guesswork” based on UEFI structure knowledge and binary patching experience.
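
    Taken at face value, that kind of scan is easy to reproduce by hand. A minimal sketch following the quoted description (the signature, window size, and opcode pattern come straight from the answer above; nothing here is the actual patch):

        # Sketch: find a Setup-related signature in a BIOS dump and list the
        # JE-near opcodes (0F 84) in a 64KB window around it, per the answer above.
        data = open("bios_dump.bin", "rb").read()

        sig = data.find(b"SetupUtility")
        assert sig != -1, "signature not found"

        start = max(0, sig - 0x8000)
        end = min(len(data), sig + 0x8000)  # 64KB window centred on the hit

        pos = data.find(b"\x0f\x84", start)
        while pos != -1 and pos < end:
            print(f"candidate JE near at 0x{pos:06X}")  # for NOP (90 90) or JMP (EB)
            pos = data.find(b"\x0f\x84", pos + 1)

    Whether any given hit is real code rather than data is exactly where such a patch can brick a board.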

    1. This… this is more what I would think it would’ve done to accomplish this: a thought-out, targeted approach based on knowledge gathered from multiple sources and understood in a way that makes it possible. We all poo-poo AI as “stealing data” or just “script kiddie” stuff, but what I don’t think people understand is that these LLMs are not too far off from AGIs, and the singularity is probably >10 years away; I firmly believe it will happen within my lifetime. Once that happens, all bets are off and I’m here for it.

        1. Off topic but man I feel like Humble Bundle has really fallen off.
          You hardly hear about it these days.
          Back at its peak, every new bundle seemed to get talked about all over the internet.

      1. What ChatGPT says it’s doing is simply repeating how people have explained some process – and ChatGPT is essentially pretending to be those people.

        It has no access to its own “thought processes” to explain itself, for the simple reason that it doesn’t have any. It’s not conscious of itself. That’s why when you ask ChatGPT to explain how it did something, the answer is always going to be a fabrication.

    2. I wouldn’t read much into this. When you ask an LLM why it did something, the explanation is entirely “post facto”. It is giving you a plausible explanation why someone might make those binary changes. It is completely unclear whether that reasoning was actually used while making the changes.

      1. It almost certainly didn’t, and most likely just found the actual binary already patched.

        It’s kinda like the “Avocado chair” buzz over generative images. People who didn’t know that the egg chair already existed were all amazed at how it could ever figure it out – what creative intelligence the stable diffusion model had – when in reality it had exact fitting reference images to copy from.

  4. I’ve been using ChatGPT to find errors in my code for a while now. I just target specific paths and tell it what I want it to look for. And that’s on the free edition.

  5. Here’s a question I haven’t seen addressed yet: What makes this a difficult task for a large language model? I’d argue nothing at all.

    Not utterly failing at increasingly obscure tasks is exactly what we’ve seen so far and what we should expect as parameter spaces and training data scale.

    We know that LLMs make categorically different types of mistakes when used for programming, characteristic styles for writing, similarly deceptive “reasoning,” and we’ve become better at evaluating their results in those terms, even in the enthusiast press.

    When it comes to difficulty, we’re still stuck at a naive “you don’t see that every day!” level of judgement. Even official metrics, notably the Math Olympiad corpus, are skewed heavily towards advantages from recall. Let’s stop being impressed by obscurity and start looking for actual novelty and difficulty.

  6. Devicemodder here. Yes, the one from the article. Originally, I planned to do the BIOS hacking myself with the AMI BIOS editor tools (AMIBCP), but that rendered an unbootable BIOS file.
