Recently I caught a video from AMD talking about their upcoming Spider Platform. The Spider Platform consists of a quad-core Phenom processor, the 790 chipset, and a Radeon HD 3800 graphics card.
During the presentation, one of the AMD representatives stated that the platform delivers 80 times the power of a Playstation 3.
I. Don't. Think. So.
Here's why. The AMD Phenom processor is based on x86 technology, extended with x86-64. x86 generally isn't considered a particularly elegant processor architecture. The Cell processor, developed by IBM, Sony, and Toshiba, is based on the PowerPC architecture, which is generally considered one of the better architectures on the market, with perhaps only the old Alpha architecture regarded as better.
Now, bear in mind that there have been practical examples of PowerPC versus x86 in the market before. Take the older Apple Mac G3 and G4 designs: at 500 MHz they easily kept up with Intel and AMD designs running over 1 GHz. Or consider the Gamecube, which many developers went on record as saying was easily more powerful than the 733 MHz Celeron used in the Xbox, despite running at only 485 MHz.
Now, keeping in mind that in terms of IPC (instructions per clock) PowerPC is considered better than x86, consider this: the Cell processor used in the PS3 not only has a PowerPC core built in, it also has seven SPEs (Synergistic Processing Elements); an eighth SPE on the die is disabled to improve chip yields. Each SPE comprises its own memory flow controller and a 128-bit RISC core. One of the SPEs is reserved for the PS3's operating system, leaving developers with six.
So, in a best-case scenario, the Cell processor in the Playstation 3 can run six SPE threads at once. Now keep in mind that the two most advanced engines on the Playstation 3 today are Epic's Unreal Engine 3 and the NEON engine developed by Codemasters and Sony. Right now, both engines are barely pressing two processing threads; they aren't reliably getting beyond a third of the processing power available. Why? Because those engines have to work across multiple platforms where the extra processing threads probably won't be available, such as a single-core Athlon XP.
This relatively low use of processing power at the start of a console's life isn't unusual for the exotic hardware used in game consoles. Go pick up a Super Nintendo, a Playstation 1, a Playstation 2, a Gamecube, or any other console but the original Xbox, and you'll notice a startling change in graphics quality between launch titles and titles two or three years down the line. For example, compare Ratchet and Clank against Ratchet and Clank 3: Up Your Arsenal, or Jak and Daxter against Jak 3. Play the original titles, then play the third games. See a difference?
The odd one out in console graphics improvement is, of course, the Xbox. Since it used a stock x86 processor and a not-so-stock DirectX 8-class graphics chip from Nvidia, there wasn't much room for the system to grow as time went on. What you had at the beginning of the console's life was pretty much what you had at the end of it.
So, what does this have to do with AMD's claim?
Well, let's think about it for a second. A quad-core Phenom processor is only going to be able to process four threads. An engine optimized for an x86-64 platform with four processing threads isn't going to run very well on the six-thread Cell processor, and vice versa. So cross-platform engines are never really going to be able to push either system to its maximum. Engines designed from the ground up for each platform, however, can actually use it, as witnessed in the Ratchet and Jak games.
At best, then, in a dual-socket environment, a quad-core Phenom system would be able to process eight threads (4 × 2). Even at a high clock speed of 3 GHz, a quad-core Phenom is going to have a hard time matching the raw performance of Cell.
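As a sanity check on that thread math, here's a quick tally. Note one wrinkle the comparison above leaves out: the Cell's PPE is itself dual-threaded (two-way SMT), which is widely reported but not counted in this article's "six threads" figure. This is a back-of-envelope sketch, not a performance claim; a hardware thread on an SPE and one on a Phenom core are very different beasts.

```python
# Crude hardware-thread tally, using this article's figures plus the
# commonly reported detail that Cell's PPE runs two SMT threads.
# "Threads" here means hardware execution contexts, not performance.
PHENOM_CORES = 4            # one thread per core; Phenom has no SMT
SOCKETS = 2                 # the dual-socket best case discussed above
phenom_threads = PHENOM_CORES * SOCKETS

CELL_PPE_THREADS = 2        # the PPE is two-way SMT (assumed figure)
CELL_SPES_FOR_GAMES = 6     # seven enabled SPEs minus one for the OS
cell_threads = CELL_PPE_THREADS + CELL_SPES_FOR_GAMES

print(phenom_threads, cell_threads)  # 8 8
```

Counted that way, the two setups come out even on raw thread count; the difference is in what each kind of thread can do per clock.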
From a strict processor viewpoint, then, I think AMD is talking rubbish when it comes to out-powering the Playstation 3.
From a graphics viewpoint, though, AMD might have a point. The Playstation 3's graphics are provided by an Nvidia chip known as RSX, reportedly based on the 7800 series. Judging from the GPU's 550 MHz clock speed at the Playstation 3's launch, it likely shares more in common with the G71 spin used in the 7950 GT than with the original G70 7800 spin. Either way, that is Shader Model 3 hardware.
AMD's new RadeonHD 3800 series, though, is Shader Model 4 hardware, which means its shaders are unified: in the RadeonHD 3800, any shader can do either vertex or pixel work. Granted, the RadeonHD can still only handle 16 textures per cycle versus the RSX's 24, so the RSX will probably still be faster in texture-heavy titles. But with textures on the way out and shaders on the way in, the RadeonHD has much more to pass around.
It is possible, then, that four RadeonHD 3800s coupled together could approach 80 times the power of the Playstation 3's RSX in terms of shader output.
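For what it's worth, the marketing math can be roughly reproduced with a naive units-times-clock metric. Assumed figures: the RadeonHD 3870's commonly published 320 stream processors at 775 MHz, and the RSX's 24 pixel shader pipelines at the 550 MHz launch clock cited above. This deliberately ignores that fixed pixel pipelines and unified stream processors do different amounts of work per clock, so treat the result as marketing arithmetic, not a benchmark.

```python
# Naive shader-output comparison: shader units multiplied by core clock.
# All figures are assumptions drawn from published spec sheets; the metric
# ignores per-clock differences between RSX pipelines and unified shaders.
RSX_PIXEL_SHADERS = 24
RSX_CLOCK_MHZ = 550

HD3870_STREAM_PROCESSORS = 320   # assumed RadeonHD 3870 figure
HD3870_CLOCK_MHZ = 775
GPUS = 4                         # four 3800s "coupled together"

rsx_score = RSX_PIXEL_SHADERS * RSX_CLOCK_MHZ
amd_score = GPUS * HD3870_STREAM_PROCESSORS * HD3870_CLOCK_MHZ

print(round(amd_score / rsx_score))  # roughly 75x by this naive metric
```

So by this (very generous) accounting, four 3800s land in the same ballpark as the 80x claim, which suggests where the number may have come from.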
The primary question, though, is: will that matter? My gut reaction is no. Most game developers have to target a baseline of playability, and currently that baseline is Intel integrated graphics, so most titles built for x86 computing are never going to press what four 3800s coupled together can do.
That means that, 80 times the power of a PS3 or not, it will be a while before such limits are even remotely pushed.