Ask many gamers about their memories of the PlayStation 2, and you’ll likely hear them gush about the best-selling console’s many great games, incredible controller, and revolutionary multimedia features. Ask a developer about their memories of the PlayStation 2, and you might just hear a war story about how incredibly difficult it was to get any of those great games to run on the console in the first place.
It’s easy to forget now in the aftermath of its historic success, but there was a time when the PlayStation 2’s notoriously complicated architecture made it a nightmare to work with. Long before rumors emerged that the PlayStation 2 could be converted into a weapon of mass destruction (seriously), the gaming industry’s biggest studios looked at Sony’s new hardware and wondered, “What the hell are we supposed to do with this?”
Yet, the PS2’s developer-unfriendly reputation didn’t cause the console to lose any ground to the emerging competition. Actually, in a strange way, the fact that the PS2 was so difficult to develop games for may have contributed to its historic success.
The PlayStation 2’s Hardware Design, Explained
First off, I should probably let you know that this is more of an “Explain Like I’m Five” overview of the PS2’s hardware than a full technical breakdown. If you really want to dive into the details of how the PS2 worked from a programming and hardware perspective, I highly recommend this massive Ars Technica breakdown of the console’s architecture or this amazing blog post that covers pretty much every aspect of the PS2’s hardware.
For now, though, it’s more important to realize what Sony was trying to accomplish with the PS2’s design. While there are still a few unanswered questions about the history of the PS2, it really seems like Sony’s two main goals during the console’s hardware design phase were to make a console that was incredibly fast and to make a console that was notably different from pretty much everything else out there.
“Fast” really does seem to be the keyword to understanding the logic behind the many strange (in retrospect) design decisions that defined the PS2 hardware. Seemingly frustrated by the PS1’s slow and simple rendering abilities, Sony approached the PS2’s design with speed in mind. Specifically, it seems that they wanted to address the PS1’s incredibly low fill rate (the number of pixels that can be rendered per second). While official, internal information regarding the PS1’s fill rate is difficult to find, it was almost certainly lower than the N64’s roughly 62.5 megapixels per second fill rate.
By comparison, the PS2 would eventually offer a fill rate that fell somewhere between 1.2 gigapixels per second and 2.4 gigapixels per second (depending on the operation), a leap of well over an order of magnitude, as the rough comparison below shows. In order to understand how Sony was able to achieve such a drastic jump in rendering power (even accounting for the natural escalation of technology at the time), you really need to know about the Emotion Engine.
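To put those numbers in perspective, here’s a quick back-of-the-envelope calculation. This is just a minimal sketch in C using the rough figures quoted above; real-world results varied by operation and by how games actually rendered their scenes:

```c
#include <stdio.h>

int main(void) {
    /* Rough figures from the article: the N64 managed about 62.5 megapixels
       per second, while the PS2 landed somewhere between 1.2 and 2.4
       gigapixels per second depending on the operation. */
    const double n64_fill      = 62.5e6;
    const double ps2_fill_low  = 1.2e9;
    const double ps2_fill_high = 2.4e9;

    printf("PS2 fill rate vs. N64: roughly %.0fx to %.0fx\n",
           ps2_fill_low / n64_fill, ps2_fill_high / n64_fill);
    /* Prints: PS2 fill rate vs. N64: roughly 19x to 38x */
    return 0;
}
```

However you slice those estimates, you’re looking at a jump of roughly 20 to 40 times over the previous generation, and the Emotion Engine was the main reason Sony believed that jump was achievable.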
The Emotion Engine was the CPU that’s often called the heart of the PS2. There’s a good chance it’s the one part of the PS2’s hardware you’re already familiar with, and that’s not a coincidence. Sony went out of its way to promote the power of the Emotion Engine in a way that gamers hadn’t really seen since Sega’s famous “blast processing” marketing campaigns. As this 2000 CNN article entitled “The Sony Emotion Engine: Will PlayStation 2 Replace Your PC?” demonstrates, there was a lot of mainstream buzz about the centerpiece of the PS2’s hardware. Even the name “Emotion Engine” was clearly intended to excite you about a new generation of gaming hardware that went beyond traditional computing and got you thinking about the future of dynamic (almost human-like) A.I. The Dreamcast’s “It’s Thinking” campaign even utilized a similar approach.
Furthermore, a lot of Sony’s marketing at the time focused on how much more powerful the PS2’s Emotion Engine was than the Xbox’s Pentium processor and many other processors. What that marketing didn’t always mention is that the Emotion Engine was really just one of several processors that the PS2 ultimately relied on.
In order to achieve the incredible technical goals they set for themselves, the PS2 design team surrounded the Emotion Engine with a number of secondary chips, processors, and units that helped it handle various tasks. That wasn’t unusual; consoles and PCs rely on separate components for different jobs to this day. What made the PS2 unusual were the ways it tried to use all of those components simultaneously and how games developed for it needed to account for those simultaneous processes.
To make up for its lack of raw power in certain areas, the PS2 was constantly moving data between its various processors. In theory, it was a brilliant idea. I apologize for the simple comparison, but it’s kind of like how sharks are top-notch predators despite the fact that some species of sharks need to keep swimming just to keep oxygen-rich water flowing over their gills. The basic idea behind the PS2’s architecture was that developers would be fine so long as they kept the console supplied with a steady stream of data designed to feed all of the PS2’s processors and functions. A simplified sketch of that “keep the data moving” pattern follows below.
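To be clear, nothing in this sketch comes from the actual PS2 SDK; every name and structure here is hypothetical. It only illustrates the general pattern the hardware rewarded: start uploading the next chunk of work while the current one is still being processed, so the expensive processors never sit around waiting for data.

```c
#include <stdio.h>

/* Hypothetical illustration only -- none of these names exist in the real
   PS2 libraries. The pattern: keep the downstream processors fed by
   kicking off the next upload before the current batch is finished. */

typedef struct { int id; } Batch;

static void dma_send(const Batch *b) { printf("uploading batch %d\n", b->id); }
static void dma_wait(const Batch *b) { printf("batch %d arrived\n", b->id); }
static void process_batch(const Batch *b, int buffer) {
    printf("processing batch %d in buffer %d\n", b->id, buffer);
}

static void render_frame(const Batch *batches, int count) {
    int buffer = 0;
    dma_send(&batches[0]);                   /* prime the pipeline */
    for (int i = 0; i < count; i++) {
        dma_wait(&batches[i]);               /* batch i has arrived on-chip */
        if (i + 1 < count)
            dma_send(&batches[i + 1]);       /* start the next upload early... */
        process_batch(&batches[i], buffer);  /* ...while this one gets processed */
        buffer ^= 1;                         /* alternate between two small buffers */
    }
}

int main(void) {
    Batch batches[4] = { {0}, {1}, {2}, {3} };
    render_frame(batches, 4);
    return 0;
}
```

The upside of a loop like this is that nothing ever sits idle. The downside is that the programmer now has to worry about transfer timing, buffer sizes, and data formats for even the simplest tasks, which is exactly where the PS2’s headaches started.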
In its own way, the PS2 was designed to be remarkably efficient at handling some pretty complex processes without relying on traditional (and sometimes expensive) methods of data processing and storage. Honestly, “expensive” is probably the other big keyword to consider here. Sony spent so much money on the Emotion Engine chips that they kind of had to find ways to cut some traditional corners and get that chip to handle as big of a workload as possible.
Again, though, it’s important to keep in mind that the PlayStation team’s basic approach wasn’t necessarily the problem. As that CNN article above notes, those looking at the PS2 architecture from the outside (or from a tech demo perspective) were in awe of the thing. Furthermore, the fact that modern shader technology is based on concepts that the PS2 helped innovate or push forward proves that the console’s design had serious long-term potential.
Of course, potential and practicality are two very different concepts.
Why Was It Hard To Develop Games For the PS2?
There are really two broad answers to this question.
The first is a variation of the “square peg in a round hole” problem. While the GameCube and Xbox were fairly similar to each other and to more traditional gaming platforms (the Xbox was basically a miniature PC), the PS2 required developers to think differently about pretty much everything. Prior game development experience obviously helped, but it wasn’t as if you could simply drop your previous work onto the PS2 and smooth out a few rough edges. You really had to dedicate yourself to figuring out its many quirks and features.
It seems like it was pretty easy for things to go wrong when developing a PS2 game. If another gaming platform let you go from “A to B,” the PS2’s architecture asked you to go from A through A1, A2, A3, and A4 just to get to B. Even if you made it through A3, a failure at A4 could mean the whole thing wouldn’t work at all, or at least wouldn’t be utilizing the full potential of the console. Every part of the PS2’s architecture was a new, moving hoop, and developers usually had to navigate all of them successfully in order to get even relatively simple processes to run.
If all of that sounds incredibly inefficient…well, it certainly could be. As noted above, though, the PS2’s basic design wasn’t an inherently bad idea. Sony knew that more modern games required consoles that could process a lot more data, and they seemingly wanted to design a console that was capable of moving that data between key components in order to make sure it was rendered as effectively as possible. To put it another way, the PS2’s output was relatively efficient and certainly capable. It was the input that forced developers to learn to navigate an inherently complex structure that was, for the most part, pretty much new to all of them.
That leads us to the “real” reason why the PS2 was so hard to develop for: the console’s absurd learning curve.
How hard was it to learn the PS2 architecture from scratch? In a 2013 Reddit post, one developer perfectly described the process:
“You are handed a 10-inch thick stack of manuals written by Japanese hardware engineers. The first time you read the stack, nothing makes any sense at all. The second time you read the stack, the 3rd book makes a bit more sense because of what you learned in the 8th book…There are so many amazing things you can do, but everything requires backflips through invisible blades of segfault. Getting the first triangle to appear on the screen took some teams over a month because it involved routing commands through R5900->VIF->VU1->GIF->GS oddities with no feedback about what you were doing wrong until you got every step along the way to be correct.”
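For anyone who doesn’t speak fluent PS2 acronym, here’s a rough gloss on that chain. The stage names come straight from the quote, but the annotated enum below is purely conceptual shorthand rather than actual PS2 SDK code: each name corresponds to a separate piece of hardware that the triangle’s data had to pass through, each with its own formats and rules.

```c
/* Conceptual shorthand only -- not real PS2 SDK code. Each stage below is a
   distinct piece of hardware, and a mistake at any hop usually produced a
   blank screen rather than a useful error message. */
enum Ps2DrawStage {
    STAGE_R5900, /* the Emotion Engine's MIPS CPU core: assembles the drawing commands */
    STAGE_VIF,   /* the interface that unpacks uploaded data for the vector unit       */
    STAGE_VU1,   /* Vector Unit 1: a separate processor that transforms the vertices   */
    STAGE_GIF,   /* the graphics interface: packages the results for the GS            */
    STAGE_GS     /* the Graphics Synthesizer: finally rasterizes the triangle          */
};
```

Five stages, five different data formats, and very little to tell you which one you’d gotten wrong.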
As Resident Evil director Shinji Mikami once noted, Sony’s biggest failure with the PS2 was simply failing to give developers the resources they needed to learn the console in a reasonable amount of time, much less maximize its potential. Not only was the PS2 obviously designed to be more “complicated,” but Sony seemed to embrace a “figure it out” mentality in the PS2’s early days. PS2 development kits were rough, instructional documentation was messy (if it was provided at all), and anyone trying to develop a project for multiple consoles at once soon found themselves needing to devote more resources to the PS2 version of that game.
You might think that it’s kind of a miracle that PlayStation didn’t alienate more studios during the PS2’s early days. After all, not every studio could afford to spend months figuring out a new console. Those that could must have wondered why Sony was being so cute with their new hardware when there were seemingly so many easier and more obvious options available. Actually, many early PS2 developers found ways to simply skip parts of the process and take a shortcut from “A to B.” Sure, they weren’t getting as much out of the console, but at least their games were reaching the PS2’s massive audience.
Yet, as crazy as this might sound, the fact that developers had to spend more time working on the PlayStation 2 may have been a feature rather than a bug.
Was the PS2 Designed to Be Intentionally Difficult to Develop For?
In 2015, Oddworld creator Lorne Lanning caught some heat for suggesting that former Sony Computer Entertainment president Ken Kutaragi didn’t deserve a lifetime achievement award due to the damage the design of his consoles did to the gaming industry.
“I was totally opposed to [the lifetime achievement award]!” Lanning said. “‘He changed half of the industry’ is what [presenter Shuhei Yoshida] said. I said he put half of the development community out of business!”
Lanning goes on to note that the difficult designs of the PS2 and PS3 meant that developers had “zero ability to predict [time and money costs]” and needed to spend more money on the discovery process at a time when they suddenly “[couldn’t get] financing to discover.” He also compliments the original Xbox for being a more developer-friendly console that allowed studios to better predict their eventual time and budget needs.
Before you write Lanning off as bitter, keep in mind that Sony hasn’t exactly denied the implications that they intentionally made the PS2 and PS3 difficult to develop for. In fact, here’s what former Sony Computer Entertainment head Kaz Hirai had to say on the matter in an infamous 2009 interview:
“We don’t provide the ‘easy to program for’ console that [developers] want, because ‘easy to program for’ means that anybody will be able to take advantage of pretty much what the hardware can do. So then the question is, ‘What do you do for the rest of the nine-and-a-half years?’…So it’s a kind of – I wouldn’t say a double-edged sword – but it’s hard to program for, and a lot of people see the negatives of it. But if you flip that around, it means the hardware has a lot more to offer.”
Essentially, Hirai is saying that Sony felt the PS3 would have a longer shelf life if it took developers more time to “figure it out.” Sony believed that the console’s high-end potential would ultimately justify that drawn-out learning process. On top of that, other Sony executives (and others familiar with the PS2’s design) have suggested that the console was seen as a kind of rite of passage that would separate the most capable developers from the rest.
That “survival of the fittest” approach will always be controversial regardless of context, but it feels especially cruel in this instance. Sony knew it was going to take most developers a long time to get the most out of the PS2 hardware, and they seemingly weren’t opposed to the idea of that learning process artificially extending the console’s lifespan.
Though this is speculative, you also have to imagine that Sony saw the advantages of rewarding studios that developed exclusively or primarily for the PS2. Those who went to the “developer-friendly” competition ran the risk of falling behind. On some level, Sony bet big on the idea that developers would eventually need to prioritize their PS2 plans, which meant it was actually in Sony’s interest to ensure that developers poured resources (time and money) into PS2 development that could have gone elsewhere.
What’s most remarkable about that plan, though, is the fact that it absolutely worked.
Why Was the PS2 So Successful If It Was So Hard to Develop For?
If you’re wondering how a console that was so incredibly difficult to develop games for went on to become the best-selling console ever…well, that’s a great question. The answer ultimately comes down to a few key factors.
First off, as we’ve previously discussed, you can’t overestimate the appeal of the PS2’s DVD player. That DVD player got the PS2 in a lot of homes across the globe in a very short amount of time. Few developers could possibly justify not trying to make games for a console that outsold its closest competitor by over 130 million units. They gritted their teeth, and they put up with the PS2’s design headaches until they got it done. The truly dedicated learned to use the PS2’s hardware better than everyone else, even if it meant neglecting other platforms in the process.
Second, it’s important to remember that the PS2’s closest competition didn’t offer the kind of alternative that developers could easily flock to. The Dreamcast was dead and buried within two years of its release. Nintendo alienated a lot of studios with the GameCube, and they didn’t make much of an effort to repair those burnt bridges. The Xbox was incredibly easy to develop for, and packed a lot of power, but it had almost no market presence in Japan and still trailed the PS2 in the West. While Nintendo and Microsoft would find ways to take advantage of the PS3’s similar hardware headaches with the release of the Wii and Xbox 360 (their best-selling home consoles ever, respectively), neither was able to take full advantage of the PS2’s struggles and shortcomings during that console’s incredible run.
Finally, and this is really important to keep in mind, the PS2’s design may have been complicated, frustrating, and perhaps a little scummy, but the PS2 was still an incredibly powerful console that was capable of producing effects that even the “more powerful” Xbox and GameCube couldn’t match.
Some consoles that were difficult to develop for (like the Sega Saturn) ultimately ended up with libraries that showed the public just how hard it was for developers to deal with those platforms. The PS2 was different. The PS2 was more than capable of delivering the things Sony promised it could deliver. It’s just that Sony didn’t seem to be too concerned with helping developers deliver those experiences any sooner than “eventually.” It’s actually remarkable to think that a generation’s expectations about how graphics should improve over the course of a console’s lifespan may have been warped by the PS2’s steep learning curve.
In any case, it’s certainly fascinating to think that the PS2’s complicated architecture ended up being one of the many contributors to its likely unbeatable all-time hardware sales records.