Tuesday, August 31, 2010

AMD to drop ATi branding

Quick note. I have received a confirmation email from AMD that there is a, quote unquote, brand transition on the horizon. The transition will see the removal of the ATi branding.

Now, as of this posting, I haven't actually had time to sit down and talk with anybody from AMD on the subject. That doesn't mean I don't have some possibly awkward questions to ask them, if the questions can be answered.

First Question: Right now, every single Nintendo Wii that is shipping carries ATi branding. I'm curious to know whether the retirement of the ATi brand name will see Nintendo shipping retail boxes with the ATi logo replaced by an AMD logo.

Second Question: Are there any corporate relationships that could deteriorate from listing an Intel processor alongside an AMD graphics chip? Given some of the petty behavior witnessed from tech corporations, is it possible that dropping the ATi branding will see competitors like Intel leaning on vendors not to use the AMD branding?

I guess what I'm going for can be demonstrated by Dell. Just go to http://www.dell.com/home/laptops and try to search for a laptop by the graphics chip. You'll find that you can search by Integrated Graphics and Dedicated Video Card on the Dell site. Once you make a selection, the actual graphics chip information is still not displayed anywhere in the search results. In fact, it's not until you get to the Alienware models that Radeon graphics cards get any mention in the base search results.

I can just see more of this type of buried information tagging from some of the more petty tech companies. Yes, Intel has gotten better on the community relations front, and the not breaking every single Trade Law ever written front, but there's always the concern that the new friendly Intel is just a front in and of itself.

As far as I know, AMD has not addressed these particular questions, so this could be interesting if they do.

Monday, August 30, 2010

Net Neutrality: What does it actually mean?

I'm not going to say I get asked this question on a regular basis, but I do see it being asked on a regular basis. Just, exactly, what is Net Neutrality, and more importantly, why should I care about it?

The core concept of Net Neutrality is very simple. It is the idea that data is data. What I mean by this is actually fairly simple. Look at this webpage for a moment. Not any particular part, just the whole page. Now I'm going to ask a few simple questions:
  • What do you see?
  • Do you see Text?
  • Do you see Pictures?
  • Do you see Links?
Well, you probably responded that yes, you did see those items. Now, let me pose another question:
  • If you were a router passing network traffic, and you had to download this page, could you tell the difference between a picture file that is 2 MB in size and a text file that is 2 MB in size?
The obvious answer is: No, you cannot.

Data is Data. To a computer, be it in file storage, file transfer, or file manipulation, a 2 MB file is a 2 MB file. It is the software applications, the actual programs that you use on your computer, that determine whether or not a 2 MB file has any particular contextual meaning. A text editor such as Notepad, Wordpad, or KWrite will not be able to parse and display a picture file. An image manipulator, such as Microsoft Paint, will not be able to parse and display a file saved as a Text File or as an Open Document File. So how does this relate to Net Neutrality?
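To make that point concrete, here's a minimal sketch of the idea in Python. The file names are hypothetical; the point is that the only thing a transport layer ever sees is a number of bytes, and it takes an application to decide what those bytes mean.

```python
# A minimal sketch of "data is data". The file names are hypothetical.
import os

def describe(path):
    """Report the only thing a router or transport layer ever sees:
    a number of bytes. Nothing about the contents is inspected."""
    return f"{path}: {os.path.getsize(path)} bytes"

# To this function, a ~2 MB photo and a ~2 MB text file are identical in kind.
for path in ("vacation_photo.jpg", "meeting_notes.txt"):
    try:
        print(describe(path))
    except FileNotFoundError:
        print(f"{path}: (hypothetical example file not present)")
```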

From a purely technical standpoint the Internet is basically a series of permanent network connections used to transfer data. Your own personal modem is connected to an Internet Service Provider, and your I.S.P. is connected to a larger collective of networked computers referred to as the Internet Backbone. The Internet Backbone systems are largely connected to each other. Here, for example, is a typical setup:

The Black lines indicate data transfer paths. The Green Lines indicate potential pathing, overflow capacity, or backup-lines.



Most Internet Service Providers are linked to at least two backbone providers. Thus, if one backbone provider were to go down, the I.S.P. would still be able to offer connection services. In this configuration the Internet Backbones are largely linked to one another, and the ISP is linked to the Internet Backbones.

When connecting to a server on the other side of the Backbone the configuration basically mirrors itself:

The Green Lines indicate Potential Pathing



Here we can see that the primary modem and computers are connected through a straight data line to the I.S.P. However, the server that the user is trying to reach is connected to an I.S.P. that is not directly connected to a backbone provider that is directly connected to the user's I.S.P.

Data transferring from the user's modem to the server's modem would have to make a minimum of 5 hops:
  • User's I.S.P.
  • Backbone Provider One
  • Backbone Provider Two
  • Server I.S.P.
  • Server Modem
This is where we start to see the problems of Internet congestion, as data spends more time in transit, worming its way through the network, than it should have to. If either of the green lines were active in that configuration, the connection to the user's target server would require fewer hops, and fewer hops means less time spent transferring data.
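As a rough illustration, here's a sketch of how hop count translates into transfer time. The per-hop delay is an invented number purely for illustration; real latencies vary wildly with load and distance.

```python
# Illustrative only: assume each hop adds a fixed delay.
PER_HOP_DELAY_MS = 15  # hypothetical average delay per hop

def one_way_delay_ms(path):
    """Return the illustrative one-way delay for a list of hops."""
    return len(path) * PER_HOP_DELAY_MS

full_path = ["User's I.S.P.", "Backbone Provider One", "Backbone Provider Two",
             "Server I.S.P.", "Server Modem"]
# If one of the green (potential) links were active, a backbone hop drops out:
short_path = ["User's I.S.P.", "Backbone Provider One",
              "Server I.S.P.", "Server Modem"]

print(f"5 hops: ~{one_way_delay_ms(full_path)} ms one way")
print(f"4 hops: ~{one_way_delay_ms(short_path)} ms one way")
```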

Now, if some backbone providers were actually upgrading their Internet hardware instead of buying exclusive rights to promote and sell a smartphone, Internet congestion wouldn't be a problem. However, several of the major I.S.P.'s, and I'll go ahead and say COX COMMUNICATIONS, COMCAST, and AT&T, are not exactly interested in upgrading their networks. I've made no secret of Cox Communications' business practices, specifically that the concept of replacing and upgrading failing equipment was complete and utter anathema to Cox Communications' management. Cox's sole business model was based on being in markets with no other competitors, so that consumers had no choice but to buy Cox Communications' service. No, I'm not making that up. We were flat out told that by our own management. Cox Communications was not interested in spending money on developing a faster Internet connection. Cox Communications was not interested in spending money on upgrading its network infrastructure.

AT&T is a laughing stock among the technically literate for running one of the nation's worst 3G networks. When I say that Apple and AT&T considered 30% dropped calls to be normal, that is not a joke. It's not exactly a puzzle to figure out why nobody caught the design flaw on the iPhone 4: nobody at Apple could tell the difference between dropped calls caused by casual handling of the phone and dropped calls caused by AT&T's network. AT&T has basically set a standard for irresponsible spending of corporate monies. My personal opinion is that there is a good class-action lawsuit to be found in AT&T having spent money everywhere but where it mattered: THEIR NETWORK INFRASTRUCTURE.

Comcast, often referred to in much less flattering terms, has an even lower opinion of its users and its networking options. Comcast has been slapped by both consumers and regulatory agencies for its bandwidth throttling and invasive network scanning. Comcast contractors have been caught disconnecting competitors' connections, then trying to sell consumers on the Comcast network while the competitor was down. If this sounds familiar, it's because I've written about Comcast before.

In fact, this is how Comcast views their Network:

Red Lines indicate Premium Traffic. Black Lines indicate Filtered Traffic.



In this configuration, Comcast actively filters the information that comes from the user's modem. Traffic that Comcast prioritizes has to take fewer hops to reach the same destination; it simply goes:
  • Modem
  • I.S.P.
  • Backbone
  • I.S.P.
  • Modem
Traffic that is de-prioritized has to take this route:
  • Modem
  • I.S.P.
  • Backbone 1
  • Backbone 2
  • Backbone 3
  • I.S.P.
  • Modem
I feel I need to stress the point that this is not conjecture. Comcast has been fined by regulatory agencies for this behavior. They have been called out by consumer agencies for this network filtering. They have done this Network Filtering. They still do this Network Filtering in some markets.
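To tie the two routes above together, here's a toy model of that kind of filtering. Which traffic counts as "premium" is entirely the I.S.P.'s call; the traffic categories below are hypothetical, but the two routes are the ones just listed.

```python
# A toy model of prioritized versus de-prioritized routing.
# The traffic categories are hypothetical; the routes are from the lists above.
PREMIUM_TYPES = {"isp_video_service", "partner_voip"}

PREMIUM_ROUTE = ["Modem", "I.S.P.", "Backbone", "I.S.P.", "Modem"]
DEPRIORITIZED_ROUTE = ["Modem", "I.S.P.", "Backbone 1", "Backbone 2",
                       "Backbone 3", "I.S.P.", "Modem"]

def route_for(traffic_type):
    """Fast-track whatever the I.S.P. deems important; shunt the rest."""
    return PREMIUM_ROUTE if traffic_type in PREMIUM_TYPES else DEPRIORITIZED_ROUTE

print(len(route_for("isp_video_service")), "hops for the I.S.P.'s own video")
print(len(route_for("competitor_video")), "hops for everything else")
```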

In fact, what Comcast, AT&T, the R.I.A.A., the M.P.A.A., and others like them want is a network configuration that looks something like this:

Pink, Blue, and Red Lines are Filtered Traffic. Black Lines are non-filtered traffic.



In this configuration the I.S.P.'s filter the data traffic that comes out of the user's modem. The I.S.P.'s deem which traffic is important and fast-track that traffic to its destination, while shunting de-prioritized traffic off onto slower servers and network technologies.

Let's say you are a gamer for a minute, and you play something like City of Heroes or World of Warcraft. Presently your access to the game server looks a bit like... this:



The game has multiple servers spread out across a few regional locations, and you are fairly well networked to all of them. Some might be faster than others, but you can get to all of the servers.

Under a filtered Internet, such as the one Comcast would dearly love to implement, you would have a much more AOL-style network configuration:



Here, in this filtered network configuration, you don't have any access to one of the servers at all. You have a lightning-fast filtered connection to one of the servers, and then a not-as-fast, de-prioritized shunt through the backbone to the rest.

Again, I need to stress, this is not a joke or an exaggeration. This is what AOL DID. This was AOL's entire business model: a closed network infrastructure available only to users who ponied up a subscription fee. What happened to AOL? Well, let me put it this way: when was the last time you saw an AOL disc on a counter?

The AOL style of a filtered and closed network access system was pretty much beaten into the ground by competitors who offered a superior product. Which, again, was this:



This, then, in a nutshell, is what Net Neutrality is all about:
  • Net Neutrality is the conceptual idea that Internet Service Providers and Internet Backbone Providers should make no distinction between the different kinds of traffic transmitted through their networks.
  • Net Neutrality is the conceptual idea that when a consumer pays for a certain combination of speed-grade and data amount on their Internet connection, they receive that speed-grade and data amount without limitations. For example, if somebody pays for a 12 Mb/s download speed with 60 GB of data transfer a day, they get 12 Mb/s downloads and 60 GB of data transfer a day, regardless of whether they are sending text documents, Ogg Vorbis files, or WebM videos (see the sketch after this list).
  • Net Neutrality is the conceptual idea that Internet Service Providers and Internet Backbone Providers should not search the user's traffic without an applicable legal warrant from the appropriate legal authorities.
  • Net Neutrality is the very real fact that data is data, and that Internet Service Providers and Internet Backbone Providers should spend their money on upgrading network infrastructure. I.S.P.'s and I.B.P.'s should not be spending clients' money on buying exclusive access to a phone, or outright refusing to deal with hardware issues.
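As promised in the second bullet point, here's a small sketch of what that guarantee means in practice. The plan numbers are the hypothetical 12 Mb/s and 60 GB-a-day example from the bullet; the point is that the content type never enters into the check.

```python
# Hypothetical plan from the bullet above: 12 Mb/s downloads, 60 GB per day.
ADVERTISED_MBIT_PER_S = 12
DAILY_ALLOWANCE_GB = 60

def meets_plan(measured_mbit_per_s, gb_transferred_today, content_type):
    """Under a neutral network, content_type must never affect the answer."""
    _ = content_type  # deliberately ignored: data is data
    return (measured_mbit_per_s >= ADVERTISED_MBIT_PER_S
            and gb_transferred_today <= DAILY_ALLOWANCE_GB)

# Text document, Ogg Vorbis file, or WebM video: only speed and volume matter.
for kind in ("text document", "Ogg Vorbis audio", "WebM video"):
    print(kind, "->", meets_plan(12.0, 42.0, kind))
```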
Hopefully this will help explain exactly what Net Neutrality is, and why you should care about it.

Sunday, August 29, 2010

ATi Catalyst 10.8 / Unigine: Performance Scaling on Higher End Hardware Part 4

While mucking about with Unigine on the little Asus F3Ka laptop, I went back to trying to sort out the stability issues of the 5770's when coupled with the Phenom II platform. After witnessing Unigine's inability to scale at all, I decided to show how the engine scales against a more powerful processor. The easiest way to do this would be to drop the multiplier on the processor and benchmark Unigine as the processor performance increased. With the motherboards I have on hand, though, that wasn't quite working. The Intel D975XBX v1 motherboard, for example, was sold as a Crossfire-capable gaming motherboard, but its BIOS is complete and utter junk, and the processor multiplier is not exposed within a BIOS made by Intel. The DFI LanParty Jr X58-T3H6 does expose the multiplier, but the Windows 7 64-bit operating system wouldn't recognize the change. I'd set a really low multiplier, boot up, check CPU-Z, and it would be at the stock 20x multiplier. The Asus M4N82-D I had behaved exactly the same way, with the multiplier being adjustable in the BIOS and the operating system not caring what it was set to. Surprisingly, Nvidia's revamped nTune software package also refused to allow changing the multiplier in real time, although I could change everything else.

That just left me with the semi-broken Phenom II system. It's still not exactly what I would call stable, as every time Windows 7 is shut down the entire system blue-screens and reports a recovery from a serious error on the reboot. However, with AMD's OverDrive system software I can manipulate the processor's multipliers in real time. So once again I wind up with a series of benchmarks that are not applicable to real-world performance. The processor is still a quad-core Phenom II, and it's still backed by 8 GB of RAM.

From the previous benchmarks we know that enabling Tessellation carries a huge performance hit under the OpenGL API. We also know that Crossfire support for OpenGL is either broken or missing in action. So, for each multiplier setting I ran three different benchmarks:

  • OpenGL: tessellation off
  • DirectX: tessellation off / crossfire off
  • DirectX: tessellation on / crossfire on

This allows us to continue to compare the OpenGL and DirectX API's, while showing the benefit that a multi-GPU rig can have under certain circumstances.
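For reference, the clock speeds in the sections below are just the reference clock multiplied by whatever multiplier is set in OverDrive. Assuming the Phenom II's usual 200mhz reference clock, which matches every speed quoted in this post, the mapping works out like this:

```python
# Assumed: a 200 MHz reference clock, which matches the speeds quoted below.
REFERENCE_CLOCK_MHZ = 200

for multiplier in (4, 6, 8, 10, 12, 14, 16, 17):
    speed_mhz = REFERENCE_CLOCK_MHZ * multiplier
    print(f"x{multiplier:<2} -> {speed_mhz} MHz ({speed_mhz / 1000:.1f} GHz)")
```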

x4: 800mhz



What you might not know, and probably won't even care about, is that 800mhz is the average operating speed of most AMD processors under Windows NT5, Windows NT6, and most Linux distributions. These operating systems largely support power-saving schemes, and unless you put the system into a high-performance mode the processor will default to a low operating speed under most computing tasks.

That being said, I did find an interface bug with the OverDrive UI. I have the multiplier set at x4, but the UI does not indicate this change, which is why CPU-Z is open to confirm that the x4 multiplier is set.

OpenGL:



DirectX 11: Tess Off / CFX Off



DirectX 11: Tess On / CFX On



The results are odd, to say the least. At this processor speed the OpenGL renderer turned in a score nearly half that of its DirectX counterpart.

Adding a second GPU did pretty much nothing for the average frame-rate at first glance, until you take into account that Tessellation was turned on. As was demonstrated in the earlier benchmarks, simply enabling tessellation under DirectX could result in a 50% loss of performance.

x6: 1.2ghz



Bumping the processor speed up to 1.2ghz with the x6 multiplier sees the UI bug in OverDrive still hanging around. So what did this do to our game's performance?

OpenGL




DirectX: Tess Off / CFX Off



DirectX: Tess On / CFX On



Here we can see that the gap between OpenGL and DirectX is closing up. Interestingly, even with Tessellation turned on, the Crossfire setup is already pulling away and delivering a playable experience. So let's turn up the processing speed once again.

x8: 1.6ghz



Interestingly, AMD's OverDrive utility is now showing the multiplier that is set, as is CPU-Z.

OpenGL:



DirectX Tess off / CFX Off:



DirectX Tess On / CFX On:



At this clock-speed we see the OpenGL API close the gap on the DX11 API, as the two are now within a few points of each other. We also see the Crossfire configuration extend its lead, even with Tessellation active.

What we also see is that the overall performance isn't making the same leaps and bounds that it was before. There is practically no difference between the single-GPU 5770 at 1.2ghz and 1.6ghz, and the spread in average frames per second between 1.6ghz and 800mhz is only 2 frames. Doubling the clock-speed, at least in this benchmark, only netted a realistic 2 extra frames per second. That being said, the minimum frame-rate did go up significantly, from just under 8 frames to around 13 frames.
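One quick way to put a number on that is to compare the relative frame-rate gain against the relative clock gain. Here's a rough sketch, plugging in the minimum frame-rates just quoted (roughly 8 fps at 800mhz, roughly 13 fps at 1.6ghz):

```python
def scaling_efficiency(clock_before, clock_after, fps_before, fps_after):
    """Ratio of the relative fps gain to the relative clock gain.
    1.0 means perfect scaling; 0.0 means the extra clock bought nothing."""
    clock_gain = (clock_after - clock_before) / clock_before
    fps_gain = (fps_after - fps_before) / fps_before
    return fps_gain / clock_gain

# Minimum frame-rates quoted above: ~8 fps at 800 MHz, ~13 fps at 1.6 GHz.
print(scaling_efficiency(800, 1600, 8, 13))  # ~0.63: the minimums scale decently
# The average frame-rate, by contrast, only moved about 2 fps for a 2x clock bump.
```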

There was also a sharp difference between the 800mhz Crossfire Configuration and the 1.2ghz Crossfire configuration. There's not as much difference between the 1.2ghz Crossfire Configuration and the 1.6ghz Crossfire configuration, with the maximum number of frames barely moving.

So, let's bump the clocks up again.

x10: 2ghz



Now we've gotten to the same clock-speed as the Turion64 in the Asus F3Ka. Unlike the Turion64, the Phenom II has two more processing cores, loads more cache, a much faster memory bus, and a much faster system bus. It's also coupled with a much more powerful graphics processor.

OpenGL



DirectX: Tess Off / CFX Off




DirectX: Tess On / CFX On



An additional 400mhz sees the Single Card OpenGL performance sustain a higher average frame-rate than the Single Card DirectX performance figures, while having both a lower minimum frame-rate and a lower maximum frame-rate.

The Single Card DirectX Performance is largely unchanged from the 1.6ghz speed, gaining only a few frames on the minimum side, but barely pushing the average frames per second or the maximum frames.

Crossfire Performance is still increasing, but again, not by much. The immediate conclusion to make is that at this point Unigine Heaven 2.1 is not being limited by the processor, but is instead being held back by the graphics cards.

Which becomes pretty evident as the speed is bumped again:

x12: 2.4ghz



OpenGL



DirectX: Tess Off / CFX Off



DirectX: Tess On / CFX On



At 3 times the base clock speed, the only API that has shown any significant improvement is the OpenGL API, which once again manages a higher average frame-rate than its DX counterpart, while still having a much lower minimum frame-rate and a lower maximum frame-rate.

The Single Card DirectX performance has doubled its minimum frame-rate, and the maximum number of frames has also increased. The average number of frames has only gone up by about 2 frames per second.

The Crossfire DirectX performance has improved dramatically from the 800mhz base, but it hasn't improved so much from the 2ghz clock speed.

So, onwards to 2.8ghz:

x14: 2.8ghz




OpenGL:



DirectX Tess Off / CFX Off



DirectX Tess On / CFX On



I really could have left this series of benchmarks out. Absolutely nothing has changed in terms of performance.

x16: 3.2ghz / x17: 3.4ghz



At 3.2ghz this system is now just 200mhz off the figures shown in the first posting on Catalyst 10.8. So we'll also include the x17 single-card figures, since I haven't done them on this system yet.

OpenGL 3.2ghz:



DirectX Tess Off / CFX Off 3.2ghz



OpenGL 3.4ghz



DirectX Tess Off / CFX Off 3.4ghz



DirectX Tess On / CFX On



As you probably expected, performance still did not change. With 4 times the raw processing speed, the Single Card DirectX performance basically went absolutely nowhere.

Unigine, at least in the benchmarks available for download, is a questionable software product. I'm left wondering just how much of the performance difference in OpenGL is down to AMD's drivers versus Unigine's product. I'm also left wondering why Crossfire performance ceased to scale as well. Is it an engine issue or is it a driver issue?

As I said, the real-world benefit of this performance scaling is pretty much nil. There's no commercial game on the market that uses the Unigine Engine. Hopefully, when those games arrive, they'll show much better performance scaling across hardware than the benchmarks.

ATi Catalyst 10.8 / Unigine: Performance Scaling on Low End Hardware Part 3

One of the concepts I talk about in many of my Gamenikki reviews is the concept of breakpoints. A breakpoint is the point at which hardware is fast enough to run a certain application. Since I was already running Unigine's Heaven 2.1 benchmark across several different computers, I thought I'd try to run it on my Asus F3Ka laptop. The frame-rate results are not surprising, but they can help shed light on the concept of a breakpoint.
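If you wanted to express the breakpoint idea in code, it's nothing more than walking up a list of hardware tiers and finding the first one that clears a playability threshold. The tiers and frame-rates below are entirely hypothetical; they're only there to show the shape of the idea.

```python
# Hypothetical tiers and frame-rates, purely to illustrate the breakpoint idea.
PLAYABLE_FPS = 30

hardware_tiers = [                      # (label, average fps in the application)
    ("integrated graphics", 11),
    ("entry-level discrete GPU", 24),
    ("mid-range discrete GPU", 38),
    ("high-end discrete GPU", 71),
]

def breakpoint_tier(tiers, threshold=PLAYABLE_FPS):
    """Return the first (slowest) tier whose average fps meets the threshold."""
    for label, fps in tiers:
        if fps >= threshold:
            return label
    return None  # nothing tested is fast enough yet

print(breakpoint_tier(hardware_tiers))  # -> "mid-range discrete GPU"
```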

We'll start out with the default scorings of Heaven 2.1 under OGL:



Then Heaven 2.1 under DirectX:



As expected, the DirectX API scores a few frames per second more. What's interesting is that Unigine detected the Turion TL-60 as a 2.6ghz part. Oh, how I wish.

One would think then that lowering the resolution would help the performance. So, I lowered Heaven 2.1 to 640*480 in OpenGL:



And DirectX:



While we get to see Unigine's hardware detection issues repeat themselves, the results aren't actually that spectacular. Performance has largely doubled under DirectX, and is closer to a 40% leap under OpenGL.

So, what about turning the Shaders to low to give our graphics card less processing work?

Heaven 2.1 with low shaders in OpenGL:



Heaven 2.1 with low shaders in DirectX:



Interestingly, here we see that OpenGL turns in better performance than DirectX. The performance of OpenGL in low resolution with high shaders is not far off from OpenGL in high resolution with low shaders. The performance of DirectX in low resolution with high shaders is much better than DirectX in high resolution with low shaders.

So, one would think that with both the shaders and the resolution turned down to low, performance would go up. Right?

640*480 with Low Shaders in OpenGL:



640*480 with Low Shaders in DirectX:



Unsurprisingly, OpenGL went back to scoring less than DirectX, and by a wide margin. Surprisingly, OpenGL actually did better with shaders on high, while DirectX performed pretty much the same as it did in low resolution with high details.

One of the problems here is that this particular laptop is hindered by both the processor and the GPU. Neither is really capable of driving the Unigine software. Even if I moved to a much more powerful graphics card, this processor simply would not be able to keep up with the Unigine engine as used in the Heaven 2.1 test. Even if I moved to a more powerful processor, the graphics card alone would still be a hindrance.

Way back when Valve first launched the Source Engine in Half-Life 2, I praised the engine for its ability to scale across various hardware levels. One of my complaints about the Unigine Engine, at least as it is used in the various downloadable benchmarks from Unigine, is that it doesn't really scale across hardware. Recent versions of the Unreal Engine have the same problem. On this particular laptop, games like Unreal Tournament III and Borderlands will run. They just won't run well. Games that are built on recent revisions of the Source Engine, such as Alien Swarm, Team Fortress 2, and Left 4 Dead 2, are still largely able to run well on this particular hardware combination.

Thursday, August 26, 2010

ATi 10.8 OpenGL / DirectX performance difference - Part 2

After a significant amount of fettling with the I7 system, including, I think, flashing a new BIOS for the motherboard, and completely uninstalling and removing the existing ATi drivers with Guru3D's Driver Sweeper, the system now appears to be largely stable with a fresh install of the 10.8 drivers.

Again, the aim of these benchmarks is not to compare hardware, but to compare the performance of the DirectX and OpenGL API's on the same hardware with the same settings. According to several software developers there is no massive performance difference between the two API's in their latest revisions. Against the same hardware DX11 and OpenGL 3.x / 4.x should perform the same.

In part one I showed there was a massive performance difference between the RadeonHD 5770 running in a 2x Crossfire mode on the DirectX 11 and OpenGL API's. The performance ratio difference was consistent across platforms backed by an AMD Phenom II and an Intel I7 920 Engineering Sample.

Testing on a RadeonHD 4850 in a 2x Crossfire mode and in a single GPU mode indicated that one potential problem might be related to Crossfire not actually activating against OpenGL 3.x / 4.x calls. The performance difference between the RadeonHD 4850 in Crossfire Off and Crossfire On modes in OpenGL was fairly close, with the Crossfire On mode turning in a score of 904, and the Crossfire Off mode turning in a score of 931.

So, with the I7 co-operating, how does it fare with Crossfire turned off?

In the Single GPU rendering mode against DirectX 11 with Tessellation enabled, Unigine Heaven 2.1 turned in a score of 683:



In the Single GPU rendering Mode against OpenGL with Tessellation enabled, Unigine Heaven 2.1 turned in a score of 460:



The Crossfire Off score of OpenGL with Tessellation is not far off the Crossfire On score of 493.

So what is the performance penalty of running OpenGL 4.x tessellation anyways?

Same system, same I7 processor, same Crossfire Off Configuration. All that's changed is Tessellation in Unigine Heaven 2.1 has been set to Disabled.

In this configuration against DirectX 11 Heaven 2.1 turned in a score of 1037:



Under OpenGL Heaven 2.1 turned in a score of 1148:



No, that's not a typo. Under the OpenGL 3.x rendering mode, nothing else changed, the OpenGL renderer actually turned in a higher number of frames per second compared to the DX11 renderer with Tessellation turned off.

What conclusions can be derived from these tests? Well, again, the scope is just too limited to draw any useful conclusions. Against this particular software application, Unigine Heaven 2.1, tessellation performance under ATi's OpenGL 4.x driver is just abysmal when compared to the DirectX tessellation performance. Outside of tessellation and in single-GPU mode the OpenGL driver is about where it should be in keeping pace with the DX driver.
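To put that tessellation penalty in percentage terms, here's a quick sketch using the four scores just reported for the single-GPU I7 system. The labels are mine; the numbers are the ones from the results above.

```python
# Scores reported above for the single-GPU I7 system.
scores = {
    "DirectX 11": {"tess_off": 1037, "tess_on": 683},
    "OpenGL":     {"tess_off": 1148, "tess_on": 460},
}

for api, s in scores.items():
    drop = 1 - s["tess_on"] / s["tess_off"]
    print(f"{api}: enabling tessellation costs about {drop:.0%} of the score")
```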

Crossfire support for OpenGL 3.x / 4.x also seems to be completely broken, or it just may not be implemented yet. I do know that Crossfire rendering modes do not work in City of Heroes, but by the same token, SLI rendering modes are also non-functional for that game, so that isn't a reliable data point either. SLI does seem to be working against Unigine Heaven 2.1 on the cards I have, but their OpenGL performance is indeed lower than the DirectX performance. I don't know if this is an issue with Heaven 2.1 itself, or an issue with the Nvidia drivers.

As an entire driver set, Catalyst 10.8 still isn't giving me any problems against the older Mobile RadeonHD 2600 and 4850 graphics cards I have. It doesn't seem to be that bad on those cards.

ATi OpenGL 3.x / 4.x performance: something's not right here

Okay, ATi just launched their Catalyst Driver for August. So I've been installing it on my systems, kicking the binaries, and seeing how it performs. In DirectX rendering modes the driver seems fairly competent. In older OpenGL 1.x and 2.x applications, such as F.E.A.R., Quake Live, NOLF, and so on, performance seems fine there too.

In City of Heroes, which leverages OpenGL 3.x for its Ultra Mode graphics, the performance has never really been where I would think it should be on these graphics cards, at least compared to DirectX titles. City of Heroes is also extremely difficult to reliably benchmark. There is, however, a software application that does leverage OpenGL 3.x / 4.x API calls, and is a reliable benchmark with easily repeatable results: Unigine Heaven 2.1.

So I pitted the Heaven 2.1 benchmark against my Radeon cards running on AMD's latest driver in both DirectX 11 and OpenGL rendering modes under Windows NT6. For those wondering why I didn't also run Unigine Heaven 2.1 under Mepis Linux to compare OpenGL performance differences between Windows and Linux, it's because the benchmark application doesn't work on Mepis Linux 8.5, or previous versions for that matter. Now I don't know if this is because Unigine Heaven 2.1 is just badly coded, or because some components of Mepis Linux 8.5 are best described in terms of computer paleontology. I'd be more concerned about this if Unigine Heaven 2.1 wasn't effectively bleeding edge software.

Anyways, getting back to the benchmarks themselves, let's start with my primary system, running Crossfired 5770's atop a Phenom II. In 1920*1200 with Crossfire active, Unigine Heaven 2.1 posted a score of 947:



Against the same computer, nothing changed but the API, Heaven 2.1 posted a score of 325:



That's a huge difference in performance when nothing but the API is changing. I also happen to know, from the likes of Valve Software and ID Software, that from an API standpoint, there's no performance difference between DirectX and OpenGL. Ergo, on the surface, there is something seriously screwed up with either Unigine's software or with ATi's drivers.

Okay. So, same two graphics cards, different base computer. This one being the Intel I7 920 Engineering Sample. To make this clear, this is not a comparison of Unigine Heaven 2.1 against different hardware platforms. This is a comparison of Unigine Heaven 2.1 against different API's. Under DirectX in 1280*1024 this system posted a score of 1284:



Same computer system, all that's changed is the API, and it scores 493:



Again, OpenGL performance is roughly 1/3 of the DirectX performance. The performance difference is repeatable, under different resolutions, on entirely different base computer systems. Now, at this point, I was going to test the HD 5770 cards outside of Crossfire rendering mode, only I ran into a rather large problem.

The 10.8 driver is an unstable mess with these cards on 64-bit Windows NT6. During the install of the 10.8 driver on my Phenom II system, the HDMI audio driver kept failing to install. On the I7 system, things were a bit worse: when it booted into Windows, it blue-screened. A system restore took me back to the 10.4 OpenGL 4.0 preview driver; update to 10.8, reboot, blue-screen. Restore again, install the 10.7 driver, everything's fine. Install the 10.8 driver, everything's fine during that session, but reboot, and blue-screen.

Ergo, I'm forced to conclude that the 10.8 driver has not only some rather serious performance problems, but also some serious system stability issues, at least on the RadeonHD 5x00 series cards I have in my hands.

Next question: Is this a problem shared by all of the ATi OpenGL 3.x / 4.x cards? Or is this a problem solely related to the 5x00 series cards? Well, to start off this testing, I turned to my RadeonHD 4850 system backed by an Intel Core 2 Duo. It ran fine yesterday when I did the initial Crossfire-On benchmarks. Spent the night turned off, turned on fine, no blue-screens, no other problems. So, how did it do under Unigine Heaven 2.1?

With a resolution of 1360*768, in DirectX 11 API rendering mode, this system scored 1793:



Under OpenGL, nothing else changed, the system scored 904:



Okay, the good news is performance is no longer one third. The bad news is, the performance gap is still pretty wide, just on the positive side of 50%.

Since that's how ATi cards perform in Unigine Heaven 2.1, how about Nvidia cards?

Well, as I've mentioned before, I don't actually have any GTX 400 series cards on hand to test with. The most recent cards I have from Nvidia are some GTS 250's. Like the RadeonHD 4850, they are OpenGL 3.x class GPUs.

In a resolution of 1680*1050 leveraging DirectX, the GTS 250 system managed to score 1436:



Nothing else changing but the API, the GTS 250 turned in a score of 1200:



Okay. Yes, there is a performance drop here, but OpenGL still delivered 83% of the performance that DirectX delivered. This is a more acceptable delta of change considering that fast-pass OpenGL 3.x / 4.x rendering applications are still very new and drivers could be unoptimized.

Unoptimized drivers are one thing, but the sheer performance difference between the DX11 and OpenGL API's just continues to raise questions. All of these systems so far have been running their respective multi-gpu modes. How exactly would they perform outside of their multi-gpu rendering modes? Well, as mentioned earlier, I was going to do these tests on the RadeonHD 5770 because of the near 1/3 performance difference, but wound up not being able to because of driver stability issues. I'm still trying to sort those problems out.

I can however look at the RadeonHD 4850 and Geforce GTS 250 systems in single card rendering modes.

The RadeonHD 4850, with the DX11 API, and no other changes than disabling Crossfire, achieved a score of 1064:



Under OpenGL with Crossfire turned off the system achieved a score of 931:



Going back to an older system, I ran Unigine Heaven 2.1 atop a Mobile RadeonHD 2600. This GPU is actually too weak to give any meaningful input. Under DX11 it only managed a score of 192:



Under OpenGL, it turned in a score of 151:



Well, that seems to be one problem sorted. Crossfire does not seem to be activating for Unigine Heaven 2.1, at least on the RadeonHD 4850, but the single-card performance difference between the OpenGL and DirectX 11 API's is not that extensive.

For comparison, the Geforce GTS 250 under the DirectX 11 API with SLI disabled achieved a score of 867:



The same Geforce GTS 250 system under the OpenGL API with SLI disabled achieved a score of 849:



The immediate conclusion is that SLI is activating for Unigine Heaven 2.1, but Crossfire is not. This explains the 50% performance difference between the RadeonHD 4850 in Crossfire under DirectX and OpenGL, but not the 70% performance difference between the RadeonHD 5770 in Crossfire under DirectX and OpenGL.
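Finally, to put all of these percentages in one place, here's a quick sketch computing the OpenGL score as a fraction of the DirectX score for each configuration reported in this post. The configuration labels are mine; the score pairs are the ones quoted above.

```python
# (DirectX score, OpenGL score) pairs quoted in this post.
scores = {
    "5770 Crossfire, Phenom II, 1920*1200":   (947, 325),
    "5770 Crossfire, I7 920 ES, 1280*1024":   (1284, 493),
    "4850 Crossfire, Core 2 Duo, 1360*768":   (1793, 904),
    "GTS 250 SLI, 1680*1050":                 (1436, 1200),
    "4850 single card":                       (1064, 931),
    "GTS 250 single card":                    (867, 849),
    "Mobile RadeonHD 2600 single card":       (192, 151),
}

for config, (dx, ogl) in scores.items():
    print(f"{config}: OpenGL turns in {ogl / dx:.1%} of the DirectX score")
```

The single-card and SLI pairs all land in the roughly 80 to 98 percent range, which is what makes the Crossfire pairs stand out so sharply.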