Note to self: you should have added random delays before and after making the POST code visible on the external pins.
This is one of my go-to case study videos for the development effort required to architect a computer to resist attackers who have physical access.
Obligatory: https://www.youtube.com/watch?v=U7VwtOrwceo
Microsoft also allowed any console to switch to developer mode and run homebrew, massively reducing the need for people to try to find exploits.
"The Xbox 360 hypervisor is probably the most secure piece of code Microsoft has ever written." from the excellent article Tony Hawk's Pro Strcpy
Otherwise on a retail console you can't do much. The hard drives are not encrypted, but all content that could possibly contain code is signed, as is save data. Save data cannot contain code but does introduce scripting-engine / save-parsing attack surface; you can't modify it without first dumping keys from a retail console.
To dump keys from a retail console you have to get code exec in the hypervisor. To attack the hypervisor you have to be able to dump the hypervisor to audit it.
To dump the hypervisor you have to be able to read its contents or dump it from flash. The flash is encrypted with a per-console key (and I don't think you can sniff the bus?) and RAM is encrypted.
Realistically, if it weren't for the original syscall handler bug and dev kits getting into researchers' hands, the Xbox 360 may never have been hacked.
Dev kits are keyed differently, and most of the console's signing / encryption keys are in various SDK DLLs that you can find if you reverse engineer them.
Apple continuously patches zero-day kernel exploits against the latest iOS and hardware, https://support.apple.com/en-us/100100
> NSO Group Technologies, was accused in a lawsuit by Meta’s messaging app of infecting and surveilling the phones of 1,400 people over a two-week period in May 2019 via its notorious Pegasus software. The judge.. found the company had violated state and federal US hacking laws.. was used to infiltrate not only WhatsApp but also iPhones to extract pictures, emails and texts.. victims of the hack identified by Meta were senior government officials, journalists, human rights activists, political dissidents and diplomats.
From the excellent: https://icode4.coffee/?p=954
Essentially as the platform owner, you want to ensure games sold for the platform "just work", and if you have a bunch of third parties running bad software, consumers would lose faith in the platform altogether.
To add a little more color to this, it wasn't solely to ensure games worked. The lesson of the video game crash was that third party publishers would make knock-off games similar to popular titles and flood the market with them at much lower cost - sometimes as low as $5 vs. $40 for a top title. These games were generally low budget and rushed to market to capitalize on looking like a top-selling title - while being just different enough to (hopefully) avoid trademark infringement.
These games usually "worked" (as in booting up and playing); the issue was more that they were just bad versions of the title they were ripping off, due to having little development time and minimal play testing along with poorer artwork and fewer levels (thus saving ROM memory). The flood of cheap, bad versions of more popular games is credited as the main factor that killed the Atari VCS.
Another big factor was that later console manufacturers charged game publishers a license fee for the proprietary library code required for a console to run a game. This fee could allow manufacturers to sell game consoles at cost or even below cost and recoup the lost profit over time in the per game license fee.
This wasn't always the case in the early days of hardware cartridge systems. Initially, some early console manufacturers didn't charge much more than a game publisher could buy blank cartridges for from a third party. Some other manufacturers chose to generate revenue simply by building more margin into the wholesale price they charged game publishers for blank cartridges. Of course, when console manufacturers started increasing their cartridge profit margin, game publishers were motivated to use third party cartridges - which led to console makers deploying "genuine hardware" checks or, later, disc checks and encryption. Nintendo popularized enforcing their business model both technically and legally (by requiring an IP license). Today, console manufacturer business models rely on 1) Collecting per game license fees, 2) Blocking piracy, 3) Limiting game supply.
There is a lot of interesting history around how game console business models and the legal landscape evolved over time. (https://en.wikipedia.org/wiki/Atari_Games_Corp._v._Nintendo_....)
A big draw as well is that people can't (within the economic viability timeframe of the games/console) hack the games on a console, meaning you get a much more predictable online experience than you might on PC.
Not quite. You were limited (I don't know if this has changed) in how much of the hardware you could access: it wasn't 100% access. Enough for most homebrew, emulators and so on, but it wasn't carte blanche "replacement for a dev-kit" access.
Still great, and good enough for most use cases
IIRC the homebrew you can run is mostly UWP stuff? But if you want to launch a _game_, built for an Xbox, you need to be in the program.
Famously the reason no one ever used Microsoft Windows.
Microsoft in its early days invested a shit ton of money and effort into backwards compatibility testing and fix development. Up until Windows 7 you could be reasonably sure that any piece of software from the Windows 95 32-bit days would still work without major issues - even 16-bit software would run under a 32-bit W7 host, only W7 x64 finally dropped support for that.
Nobody is buying consoles for the hardware. 99% of the product is software. The accounting that you are using, a popular one, is so oppositely-informative that people who make consoles - and smartphones for that matter - clearly do not make decisions with it. It is 200% wrong to characterize it as dumping. Nothing is being dumped.
Here’s a simple idea for you: show me the vibrant market for Xbox 360 2005 era computer hardware. There isn’t any right? What about Xbox One 2013 era? And yet we still play games that were developed earlier than 2013, like League of Legends. The product is software. Nobody loses anything by being unable to run Linux on 2013 hardware today, and nobody loses anything by being unable to run Linux on the Xbox One in 2013 because, if they wanted cheap computers then, they had plenty to choose from!
This makes zero sense. Both have existed simultaneously forever, and hundreds of millions of people eagerly buy both for the same households. I cannot understand your point of view here, other than invoking the words "dumping" and "anticompetitive" that you are using 200% wrong. Consoles and open platform PCs do not compete with each other.
> In both software and hardware the open PC platform is far more competitive which drives value for consumers.
The market for high budget single player games exists solely because of DRM protected consoles. So this category of product, that people eagerly have paid for for decades, to the tune of hundreds of billions of dollars, would cease to exist if you required console makers to allow people to bypass DRM. Ask 20 people in the game industry and 19 would agree. I understand the core and spirit of what you are saying, but it is reflecting your aspirations for a world that doesn't exist. Markets aren't art exhibits!
https://www.gamespot.com/articles/ps3-manufacturing-costs-do...
Another possible (even worse) future could be cloudification of everything. Enjoy your thin client.
Of course software is bigger than entertainment which might represent a problem. We're increasingly societally locked into this digital shit.
Oh and I'd just like to say thank you for your contribution to my childhood/adolescence.
So I had no real hardware to test any of the software I was writing, and no other chips (like the Apple G5 we used as alpha kits) had the custom security hardware or boot sequence like the custom chip would have. But I still needed to provide the first stage boot loader which is stored in ROM inside the CPU weeks before first manufacture.
I ended up writing a simulator of the CPU (instruction level), to make progress on writing the boot code. Obviously my boot code and hypervisor would run perfectly on my simulator since I wrote both!
But IBM also had a hardware-accelerated cycle-accurate simulator that I got to use. I was required to boot the entire Xbox 360 kernel in their simulator before I could release the boot ROM. What takes a few seconds to boot on hardware took over 3 hours in simulation. The POST codes would be displayed every so often to let me know that progress was still being made.
The first CPU arrived on a Friday, by Saturday the electrical engineers flew to Austin to help get the chip on the motherboard and make sure FSB and other busses were all working. I arrived on Monday evening with a laptop containing the source code to the kernel, on Tuesday I compiled and flashed various versions, working through the typical bring-up issues. By Wednesday afternoon the kernel was running Quake, including sound output and controller input.
Three years of preparation to make my contribution to hardware bring-up as short as possible, since I would bottleneck everyone else in the development team until the CPU booted the kernel.
Eric Mejdric from IBM called on Friday and said we have the chips, when are you guys getting here?
I took a red eye that night and got to Austin on Saturday morning.
We brought up the board, the IBM debugger, and then got stuck.
I remember calling you on Sunday morning. You had just got a big screen TV for the Super bowl and had people over and in-between hosting them you dropped us new bits to make progress.
I think Tracy came on Sunday or Monday and with you got the Kernel booted.
Good times!
This is Harjit by the way.
Edit: added super bowl.
Just the thought of how many people you touched with your work....just amazing! :)
Had a question if you don't mind: Can you talk about the thought process behind the power supply design? It's very large even in the super slim models. Were you following a specific design driven by the hardware architecture or were there other reasons? I always wondered about that.
I presume you're referring to this one: https://www.xbox.com/en-US/power-on#watch
As someone who recently got interested in emulation and wrote two LC-3 emulators, I would really love to learn from the masters.
You filed a bug report and then dug into it, using SBox to figure out what must have been going wrong.
The chip supplier came back with a workaround and within five minutes you simulated it on SBox and said it wouldn't work, why, and then said how it should be fixed.
The supplier didn't believe you yet. And you worked out a workaround so we could be unblocked. Two weeks later they agreed with your fix...
So on PPC interlocked-increment is implemented as:
  loop: lwarx  r4,0,r3    # Load and reserve: r4 <- (r3)
        addi   r4,r4,1    # Increment the value
        stwcx. r4,0,r3    # Store the incremented value if still reserved
        bne-   loop       # Loop and try again if lost reservation
The idea is that the lwarx places a reservation on an address that it wants to update at some later time. It doesn't prevent any other thread or processor from reading or writing to that address, or cause any sort of stall, but if an address being reserved is written to, conditional or otherwise, then the reservation is lost. The stwcx instruction will perform the store to memory and clear the NE flag if the reservation still exists; otherwise it doesn't do the write and sets the NE flag, and software should just try again until it succeeds.
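For anyone who doesn't think in PPC assembly, here's a rough C11 analogue of that retry loop (just an illustration, not the actual Xbox 360 intrinsic); on an LL/SC architecture a compiler will typically lower a compare-exchange loop like this, or a plain atomic_fetch_add, to the lwarx/addi/stwcx./bne- sequence above:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Illustrative C11 analogue of the lwarx/stwcx. retry loop above
       (not the real Xbox 360 intrinsic). */
    static inline uint32_t interlocked_increment(_Atomic uint32_t *p)
    {
        uint32_t old = atomic_load_explicit(p, memory_order_relaxed);
        /* Retry until nobody else wrote *p between our load and our
           store -- the same "lost reservation" condition as bne-. */
        while (!atomic_compare_exchange_weak(p, &old, old + 1))
            ;   /* on failure 'old' is refreshed with the current value */
        return old + 1;
    }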
On the Xbox 360 we provided the compiler which would emit sequences like these for all atomic intrinsics, but developers could also write assembler code directly if they wanted to. We'll get back to this point in a moment.
As the V1 version of the Xbox 360 CPU was being tested by IBM, they discovered an error with the hardware implementation of these two instructions and issued an errata for software to work around it, which we implemented. Unfortunately, after further testing IBM discovered that the errata was insufficient, so they issued a second errata, which we also implemented and assumed all was well.
Then the V2 version of the CPU comes out and months go by. But early one morning I get a phone call from IBM letting me know that the latest errata was still insufficient and that the bug is in the final hardware. Further, Microsoft has already started final production of CPU parts, even before testing was fully complete (risk buy), so that they could have sufficient supply for the upcoming November release. I was told that they were stopping manufacturing of additional CPUs, and that I had 48 hours to figure out if there was anything software could do to work around the hardware issue. They also casually mentioned that millions of dollars of parts would need to be discarded, a hardware fix implemented (which would take weeks), and then production could resume from scratch.
Bottom line is that, yes, there was a set of software changes that would work around the bug, but it required very specific sequences of instructions, the disabling of interrupts around these sequences, a change to the hypervisor, and updating the compiler to emit the new sequences. To make sure that developers didn't introduce code sequences that used lwarx/stwcx in a way that would expose the bug (via inline assembly, for example), the loader would scan the code and refuse to load anything that didn't obey the new rules.
Interesting fact: the hardware bug existed in every version of the Xbox 360 ever shipped. Because software needed to run on any console ever shipped, there was no advantage to ever fixing the bug, since software always needed to work around it anyway.
I'm just curious, what are the instructions that replace the lwarx/stwcx "atomic" pair? From my understanding, basically you need to generate a pair of load-reserved/store instructions, and you have to replace the pair with a series of instructions. But I don't understand why you have to disable interrupts -- is it because multiple instructions were actually used to facilitate the load, and an interrupt may disturb a value stored in a register?
Sorry I know little about assembly and arch.
Rule: On a given hardware thread (there are two hardware threads per processor on the Xbox 360), every lwarx reservation of an address must be paired with a stwcx conditional store to that same address before a reservation is made to a different address. So a sequence like lwarx A / lwarx B / stwcx B / stwcx A is forbidden. But lwarx A / stwcx A / lwarx B / stwcx B is fine.
So I changed the compiler to emit atomic intrinsics that obeyed this rule.
But there was still the issue of logical thread scheduling. Imagine there are two logical threads running, one has a sequence of lwarx A / stwcx A and the other has lwarx B / stwcx B. The first thread is running on a hardware thread and just after executing lwarx A, the timer interrupt fires and the kernel decides to switch to the second logical thread, which executes lwarx B, and thus violates the rule.
To make sure that never happens, the compiler also emits disable-interrupts / lwarx A / stwcx A / enable-interrupts. That prevents the scheduler from switching threads in the middle of the atomic sequence.
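So the shape of the emitted intrinsic ends up being roughly this (a sketch with hypothetical helper names, not the actual compiler output; on the real hardware the interrupt masking would be privileged code twiddling MSR[EE]):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the privileged operations; on the 360
       these would clear/restore MSR[EE]. No-ops here so the sketch
       compiles. */
    static inline void disable_interrupts(void) { }
    static inline void enable_interrupts(void)  { }

    /* Sketch of the workaround pattern: the whole reserve/store pair
       runs with interrupts masked, so the scheduler can never switch
       logical threads between the lwarx and its matching stwcx. */
    static inline uint32_t guarded_increment(_Atomic uint32_t *p)
    {
        disable_interrupts();                   /* lwarx A ...          */
        uint32_t old = atomic_load_explicit(p, memory_order_relaxed);
        while (!atomic_compare_exchange_weak(p, &old, old + 1))
            ;                                   /* ... stwcx. A, retry  */
        enable_interrupts();
        return old + 1;
    }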
But there was still one more problem. It is possible for a page-fault to occur in the middle of the sequence should it span the end of one page and the beginning of another, and the second page is not in the TLB. So the thread is running along and executes disable-interrupts / lwarx A, then when trying to fetch the next instruction it faults to the hypervisor because it isn't yet mapped by the TLB. The hypervisor executes a bunch of code to add the mapping of the new page to the TLB and then returns to the faulting thread to complete the stwcx A / enable-interrupts sequence.
The problem is that the TLB is a shared resource between the two hardware threads of a processor, so the two hardware threads need a way to atomically update the TLB, and the obvious way to do that is to use a spin-lock that is naturally implemented by a lwarx B / stwcx B pair of instructions. But the hypervisor TLB handler can't use those instructions because the code causing the TLB fault might be in the middle of using them and thus would cause the hardware bug to manifest.
The solution was to use non-reservation load/store instructions to implement a simple spin-lock. If the two hardware threads were both trying to update the TLB then hardware thread 2 would simply wait for hardware thread 1 to clear its lock before proceeding.
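For the curious, one classic textbook way to build a two-party lock out of nothing but ordinary loads and stores is Peterson's algorithm; a minimal C11 sketch of that general idea (not necessarily the exact scheme the hypervisor's TLB handler used):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Peterson's algorithm: mutual exclusion for exactly two parties
       using only plain loads and stores (no lwarx/stwcx or other atomic
       read-modify-write). A sketch of the general technique, not
       necessarily what the hypervisor actually did. 'me' is 0 or 1,
       identifying the hardware thread. */
    static _Atomic bool wants[2];
    static _Atomic int  turn;

    static void tlb_lock(int me)
    {
        int other = 1 - me;
        atomic_store(&wants[me], true);   /* announce intent          */
        atomic_store(&turn, other);       /* yield the tie-break      */
        /* Spin while the other thread wants the lock and has priority. */
        while (atomic_load(&wants[other]) && atomic_load(&turn) == other)
            ;
    }

    static void tlb_unlock(int me)
    {
        atomic_store(&wants[me], false);
    }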
Cheers.
Sounds a little bit like the situation with the Xbox Series? The SDKs were released late because Microsoft was waiting for certain features in the AMD APU
It’s going to be a doozy
Can anyone confirm if I'm on the right track with my guess?
> XBOX 360 Security defeated - 2011
I realize this post is more about hardware security than software security, but if the benchmark is unsigned code execution then the author should at least mention the 2007 (King Kong shader exploit) and 2009 (SMC hack — same root flaw but executed automatically at boot) methods of achieving the same:
- https://github.com/Free60Project/wiki/blob/master/docs/Hacks...
- https://github.com/Free60Project/wiki/blob/master/docs/Hacks...
Lately I've been having a blast exploring the X360 library via Xenia (since I had a PS3 during that era and never got to see some of these titles).
Then came along the reset glitch hack and I moved away from discs to an external hard drive and never looked back. I did a few for me and a couple of friends. The soldering involved was pretty precise, in that it was a very very small pad you needed to solder to, and if you screwed up it was very easy to lift the pad and put yourself into a big heap of trouble fixing it. I was also using a crappy $15 soldering iron with a bad tip because I was poor, but never did I have an issue. Depending on your install you could get the glitch to happen sometimes on the first reset, or for some it took multiple resets. I was happy because all mine seemed to work on the first if not second reset, which a lot of people struggled to get. I still have my RGH 360; my kids have it, with a HDD full of games I backed up from my own games, you know.
Back in the day, I managed to create a working RGH modchip based on an ATmega8 (8-bit micro) running at 20 MHz with hand-crafted assembly code. I named it RWH (Reset Witch Hack) and it was able to boot an Xbox 360 Jasper in 1-2 minutes. Old motherboards had a physical pad allowing for slowing down the CPU, so no I2C was required. I also had to connect the whole 8-bit POST bus so I could read the current value in one instruction.
The PCB was made at home, and since the AVR is a 5V system, I used NPNs for voltage conversion (all values were inverted in the software).
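For the curious: with all eight POST lines on a single AVR input port, reading the current value really is one instruction. A minimal sketch (the port choice here is arbitrary illustration, not the actual wiring or code):

    #include <avr/io.h>
    #include <stdint.h>

    /* With the 8-bit POST bus wired to one AVR port, the current value
       is a single IN instruction away. The NPN level shifters invert
       every line, so the raw read is complemented in software. PINB is
       an assumed choice for illustration. */
    static inline uint8_t read_post(void)
    {
        return (uint8_t)~PINB;
    }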
Why? I didn't have money to buy the "real" CPLD modchip.
Rush and happiness when it first booted - priceless.
I still should have the source code for it somewhere on backup.
photos: https://gist.githubusercontent.com/JaCzekanski/c02ed11c30fac... https://gist.githubusercontent.com/JaCzekanski/c02ed11c30fac...
> I have a Saleae 8 channel 100Mhz, which turned out not to be fast enough

> I found a not too expensive 200Mhz Kingst LA2016 Logic Analyzer on Amazon

The author is confusing MHz with MS/s (megasamples per second). Saleae has a logic analyzer that works on 100MHz signals (with 500 MS/s), but I suspect the author had the unit with 100 MS/s that only works up to 25MHz signals.
The cheap Kingst unit has 200 MS/s but only works with signals up to 40MHz.
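The rule of thumb behind those figures appears to be roughly 4-5 samples per signal period; a trivial sanity check (the exact oversampling factor each vendor assumes is my guess):

    #include <stdio.h>

    /* Max usable signal frequency ~= sample rate / oversampling factor.
       The 4-5x factors below are assumptions that happen to reproduce
       the vendor figures quoted above. */
    int main(void)
    {
        double rates_msps[] = { 100.0, 500.0, 200.0 };
        double factors[]    = {   4.0,   5.0,   5.0 };
        for (int i = 0; i < 3; i++)
            printf("%5.0f MS/s / %.0fx -> ~%3.0f MHz signals\n",
                   rates_msps[i], factors[i], rates_msps[i] / factors[i]);
        return 0;
    }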
If there's no CLK line on the mobo, does this mean newer X360s have everything that might be clocked (I assume at least CPU, GPU and V/RAM?) in a single chip, SoC-like?
Today, GPUs connected via PCIe or the like use 8b/10b coding over differentially-signaled pairs. The signal itself has clock recovery.
(V)RAM is generally clocked at a different frequency than the CPU as well, and all DDR utilizes strobes to determine when data is valid because access time is variable.
In some SOCs/FPGA-based devices, a central clock generator will sometimes provide LVCMOS/HCSL/LVDS/etc. clock lines to each device, but these aren't often re-used. This allows for flexibility and later programmability. There's generally no assumed phase or frequency relationship between these derived clocks and the original source - especially after the signal has traveled 20cm across a board.
In the case of a CPU/GPU, though, a 20 cent crystal oscillator at each device feeding into internal PLLs is typically the go-to.