Grumpy Gurevitz: Hardware Iteration

The original ‘FAT’ PS3. It has been replaced twice, each time with a smaller version, and each version with less functionality. No upgrades to actual performance to be found, though.

One of the greatest challenges console gaming faces is hardware iteration. Traditionally, consoles have never had their internal hardware specs increased, even as new functionality is added post-launch. We have seen smaller, slimmer and cheaper units released, and some of those have even lost features during the process (compare a PS3 today with one at launch to see the loss of functionality); however, we have not seen memory improvements, changes to the CPU or other components. In recent years I can only think of one example where a console has gone against the grain, and that was the DSi, which added more internal memory, an SD card slot, and a slightly faster CPU to complement its larger screen. This allowed the device to host a poor man’s app store and some internal non-gaming apps. The core DS games themselves, though, were not affected and were rarely designed to take advantage of the small ‘bump’ of power available to them.

The 32X, which sat on top of your Mega Drive, making it look HUGE.

Sega famously released an ‘add-on’ console – the 32X – which was designed to add a few more years to the Mega Drive. It’s famous (or infamous) for not working: it split the marketplace, resulting in developers not really supporting it, and sent mixed messages to consumers about the console’s ‘hedged bet’ strategy on its lifespan. The lesson learnt was that this was not a good strategy, which is why we now never expect console firms to release upgraded consoles.

But in a world where the console is not the only gadget in town, this policy is coming under strain. Tablets, phones and even smart TVs are competing for our attention. Many, as we know, offer gaming experiences of their own, and whilst they still can’t match a traditional console for the hardcore experience, many are starting to. Additionally, it’s really not hard to hook up a traditional controller via Bluetooth to a TV or tablet.

What number iPad are we on now? Even Apple doesn’t know, and has just started calling it the NEW iPad. Expect the NEW iPad 3 (as opposed to the iPad 3, which technically is the ‘NEW’ iPad) soon, then.

When it comes to tablets and phones we are also seeing updates not only to the operating systems, but to the actual hardware, every 9-12 months. It’s a rate of improvement which, apart from the PC, has never before been seen in mainstream technology and gadget land.

How can consoles possibly hope to compete? Both Xbox 360 and PS3 have operated on a 7-10 year life cycle, and largely they have been able to deliver a shelf life which has resulted in them still being relevant and current in today’s market. We have seen various firmware/OS updates to both platforms, which have resulted in improved functionality on the non-games side of the equation. However, when it comes to games themselves the specs are fixed and it’s been up to developers to find new ways of raising the bar when creating content specifically for the console. Any initial design flaws which the consoles launched with are issues which cannot always be worked around.

The PS3, especially, has had quite a few issues. Whilst first party content looks remarkable on the platform (God of War 3 and the Uncharted series come to mind), third party content, often designed for PC and Xbox first, has really suffered. The way in which the PS3 handles memory is so different to the Xbox 360 that certain games look worse on the platform. Skyrim doesn’t even fully work, which has resulted in DLC for the game still lacking on the PS3 whilst Bethesda works on a solution to the memory issue plaguing its persistent world.

 

This is the original Xbox 360. In fact, most of us probably still have one looking like this as opposed to the ‘Slim’. However, the actual insides of the Xbox 360 have gone through quite a few changes, mainly to reduce heat (and console failures) and costs for Microsoft. The Elite, for example, was a very different beast to the original model. No functionality has been removed (that I can think of) at each step, and in the case of the Slim some was added (internal power for Kinect). The raw CPU and other specs, though, stayed exactly the same.

The Xbox 360 lacking a Blu-ray drive, and some systems shipping without a hard drive, is another example of hardware decisions holding back a platform (albeit this hasn’t been as big an issue as the memory problem on the PS3).

Indeed, whilst the platform holders are busy working on new technology for their replacement consoles, there is an argument that simply upgrading the PS3 with far more memory (4-8 gigs would do it nicely!) would deliver a superb experience for the majority. The CPUs in the PS3 really are amazing; it’s the lack of memory that holds the platform back from consistent greatness. For sure, compared to a top-of-the-line PC equipped with the latest graphics card, the PS3 would need more than extra memory to match it frame for frame, but most of us would cope. Of course, Sony do not take up this option because of the Sega experience, and in reality it’s probably better to just battle on with the existing tech, reducing its cost to produce and maximising sales where possible.

So, how do console manufacturers make sure their offering isn’t redundant within nine months of release? Traditionally they offered hardware so far ahead of the curve that it would take 24-36 months until the majority of PCs offered similar performance at a similar price.

However, the number of units the phone and tablet producers (well, Samsung and Apple) ship per year is huge. They can offer tomorrow’s tech today, and for less, due to their economies of scale. Whilst Nintendo ship 5-10 million 3DS consoles in a year, Apple might ship 20-30 million phones, as well as tablets (which share much of the same technology). Not only that, but they will then replace them with new models 9-12 months later.

We are told that the Microsoft subscription offer HAS been a success. Proof will be the launch of its replacement.

Based on the current business model there is no way a console platform can keep up with that pace. Should Sony, Nintendo and Microsoft go down the route of releasing a new £300+ console every 12 months, very few of us could afford it or justify the expense even if we could. However, if gaming moves to a contract or subscription based model, perhaps it could adopt such a fast upgrade cycle, with people taking on 24-month contracts with upgrades, as with phones. Yet we can’t be sure that enough of the user base would take a console under such a regime rather than buying it outright, resulting in a two-tier user base ‘progressing’ through the product line at different speeds: one upgrading every 24 months and one every 6 years!

As stated in previous articles, Microsoft is clearly testing the waters with its $99 subscription option on the current hardware, and in turn Sony is using PS Plus as a way of testing a “content for subscription” model. Both are trying to see how much of the user base would be willing to move onto such a pricing model. It would certainly allow greater flexibility when it comes to hardware updates.

Ultimately, we would still have multiple versions of hardware out with consumers, as clearly they wouldn’t all sign up in the same week – resulting in overlapping upgrade dates. This is exactly how it is in the phone market. Is it possible for developers to create versions of their games that respond to the hardware they’re running on? For sure – see how some Xbox 360 games shipped with ‘Hi-Res’ art discs for those with hard drives (whilst those without had lower resolution textures), and how some iPhone and iPad apps recognise the hardware and adjust accordingly, in the same way that PC games have adjustable settings.
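To make that concrete, here is a minimal sketch (in Python, purely for illustration; the HardwareProfile and pick_asset_tier names are invented, not taken from any real console SDK) of how a single build might detect the hardware it finds itself on and pick an appropriate asset tier – the same principle behind the ‘Hi-Res’ art discs and adjustable PC settings mentioned above.

```python
# A minimal, hypothetical sketch of hardware-adaptive asset selection.
# None of these names come from any real console SDK; they are invented
# purely to illustrate one build serving several hardware tiers.

from dataclasses import dataclass


@dataclass
class HardwareProfile:
    total_memory_mb: int   # detected system memory
    has_hard_drive: bool   # e.g. room for an optional hi-res art pack


def pick_asset_tier(hw: HardwareProfile) -> str:
    """Choose a texture/asset tier based on the detected hardware."""
    if hw.has_hard_drive and hw.total_memory_mb >= 4096:
        return "high"    # install the optional hi-res pack
    if hw.total_memory_mb >= 512:
        return "medium"
    return "low"         # fall back to the baseline assets on the disc


# The same game picks different assets on different machines.
launch_console = HardwareProfile(total_memory_mb=512, has_hard_drive=False)
upgraded_console = HardwareProfile(total_memory_mb=8192, has_hard_drive=True)
print(pick_asset_tier(launch_console))    # -> medium
print(pick_asset_tier(upgraded_console))  # -> high
```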

However, it’s ultimately difficult to package in a complete range of options (just ask Android developers, who have to contend with hundreds of devices, some with gyros and some without, for example), and even many iPhone models perform poorly with the latest games or simply can’t run them. This is a situation console developers do not want and the market won’t easily tolerate. The cost of developing a Call of Duty compared to an Angry Birds dictates that it must be able to address the largest possible consumer base. This is why GTA 5 is coming out next year on existing consoles rather than as a launch title for the ‘next gen’: it uses an upgraded engine from GTA 4 and will allow Rockstar to capitalise on years of investment and development work.

The Wii U. A console which is very powerful in some areas but lacking in others, especially when it comes to its CPU. It would benefit from an iPad-style ‘update’ to a Wii U 2 in around 18-24 months. Will Nintendo be brave enough to offer this?

In the short term the Wii U will probably benefit from being based on tech from this generation (perhaps with a lower spec CPU, but balanced out by having tons of RAM in comparison to the existing two consoles, plus a decent GPU), as it will allow developers to bring games already slated for Xbox 360 and PS3 to the platform without too much additional investment. Some predict that this benefit will only last two years: by year three it’ll be overshadowed by new consoles, which by then might have a large enough user base to become the primary development focus for the likes of EA and Activision. At that point developers will have to decide whether it’s worth trying to make a sub-standard version of a ‘next gen’ game or simply ditching the platform entirely.

Of course, those ‘next gen’ consoles might themselves be overshadowed by our next TV or tablet. This is why it is almost certain that we will see some kind of subscription model and that the life span of PS4 will be only 3-5 years and not the 7-10 years PS3 has been around for. I would imagine the same will be true of the new Xbox. What will this mean for developers?

Well, the upgraded console (PS4.2!) will be simply that – an upgraded console, able to improve the base game but ultimately not a new platform. Developers will still be able to put one game on disc (or download) and the right assets will ‘play’ on the right version of the machine. It’s been well leaked that, unlike with the PS3, Sony are putting non-bespoke parts in the PS4 to make it easier to develop for, cheaper to produce, easier to source and – one assumes – easier to upgrade. This is part of a strategy to make sure that the PS4 is not just another platform but one which is easily cross-compatible with other hardware in the market. It’s for the same reason that, for the first time, there is a version of Windows running on ARM processors for Microsoft’s new tablet range. Consoles can no longer be islands, inaccessible to those creating content for other systems.

Of course, Sony are not putting all their eggs in one hardware basket. Earlier this year they bought Gaikai, a game streaming service. It is clear that this is one way in which they feel they can allow console gaming to improve continually without having to sell a new piece of hardware at all: simply by having the game run on servers, which they can control and upgrade centrally at their leisure, without having to roll out new hardware into consumers’ houses.

Microsoft, too, is moving in that direction. They already have the internal resources needed to offer an online cloud solution, and it’s clear that Windows 8 is being designed in such a way that it could go from offering a ‘closed’ gateway download service to a closed streaming service. After all, it already streams everything from Office to audio and video, so a move to games, whilst technically far more complex, would be simple from a storefront/consumer experience point of view.

This is the ultimate offering and it’s clear that this is where we are heading. If it is to succeed, much will depend on the roll-out of fast fibre-optic broadband across the USA, Europe and Japan; if it does, then perhaps 2013/2014 really will see the last generation of consoles launched as genuinely new platforms with new technologies inside them.

PS Plus is perfectly placed to become the ‘PS5’, as the cloud route is clearly the big revolution around the corner. However, no one is quite sure how long that corner is. Fast broadband exists and is within the reach of the consumer. However, it is NOT cheap, and until costs fall by at least 50% it will be a barrier to solutions such as the now Sony-owned Gaikai replacing a download or physical disc as a mainstream alternative.

So where does that leave Nintendo? Whether their console is ‘next gen’ or simply ‘this gen with a cool twist’ is irrelevant. They have launched a new console firmly grounded in the last-gen business model: cash up front, with no subscription in sight. It’s ironic, because if there is one company with the IP to justify a subscription to them alone, it is Nintendo. Nintendo are the HBO of gaming, with the richest and deepest set of gaming IP available. Sony are catching up with them in this regard, with an ever-growing (and often riskier) set of new first party IP and gameplay experiences. However, as things stand today, Nintendo could adopt the subscription model aggressively without devaluing their core experience and income stream.

In addition, whilst they are embracing the download market, we see no signs that they have the ability, capacity or imagination to implement a streaming option. Heck, they can’t even launch a console without a large day-one firmware patch which their own servers couldn’t handle, resulting in hours of downloading for some disappointed day-one consumers.

We’re still in the early days of the Wii U, and it’s possible for Nintendo both to launch a subscription service and to partner with a technology firm able to offer them a streaming platform. On the basis that the Wii U will soon be underpowered, they might, out of necessity, seek that partnership sooner rather than later as a way of keeping their hardware relevant without being forced to release a Wii U 2 in three years’ time. If Nintendo do decide to stick with the traditional business model, then they will need to release a Wii U 2 in 3-4 years’ time and accept splitting their user base.

The same console in a new colour IS a successful marketing ploy (would you believe), but making that your only ‘update’ is now not an option.

We have witnessed Apple releasing the iPad 2, followed by the iPad 3, with a ‘3+’ then being released soon after. I don’t think Nintendo should fear following this trend, as consumers are now used to fast hardware iterations. As long as the user experience, settings, downloads and all content for the Wii U pass to the Wii U 2 seamlessly, and as long as developers find they can release product for both easily in the majority of cases (perhaps with some products only supporting the latest version), then consumers will accept this rate of change. Getting 2-3 years of value from a tech product is an eternity in today’s world.

This expectation for ‘change’ itself is now part of what consumers want and expect. Hardware iteration is now old school. Hardware reinvention is what we demand and when the guy next door gets the new model that does so much more than our old one, simply creating a new, smaller and cheaper version of your platform doesn’t cut it anymore.


Written by Steven G

Steven Gurevitz is the owner of 2002 Studios. 2002 Studios started off as a music production company, but now project-manages and collaborates on content production in general, from video to videogames. He also owns the Urban Sound Label, a small niche e-label. He is a freelance music tech writer, having co-written the Music Technology Workbook, and is a regular contributor to CriticalGamer.co.uk. He enjoys FPS, third person 'free world', narrative driven and portable gaming.
