Duh Voodoo Man's Geforce 6800 Card Mod Misadventure
A GOOD NEWS/BAD NEWS STORY WITH A HAPPY ENDING
TIME TO TURN YOUR CLOCKS FORWARD...
Overclocking Nvidia cards has really become a simple task, between the Forceware drivers and the "Coolbits" registry hack that turns on several hidden-by-default tabs, including one entitled "Clock Frequencies". From there, it's a simple matter of selecting manual overclocking mode and the 3D Settings option, then adjusting the core and memory clocks separately. The method I generally use is to start with the core and walk the frequency up in 10MHz increments until I run into a problem in a 3D benchmark. Usually that shows up as visual anomalies, "hitching" (the action stops for anywhere from a fraction of a second to several seconds before resuming), crashes to the desktop, or system freezes. It's almost impossible to get into serious trouble, because every clock change requires a test to be run before the new clockspeed is accepted. So if you dial the value up too high, even accidentally, the test fails and prevents you from applying the new speed setting. Even so, I've found that clock frequencies that pass this initial test run may still give longer-term stability problems, so the only way to really evaluate a given clock setting is to run it for an extended period, either by looping a benchmark or by leaving a 3D game or screensaver running. Once the maximum core frequency is determined, the memory's top speed is found by the same basic method.
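The walk-up procedure above can be sketched as a generic search loop. This is only an illustration: the function name and the stability callback are hypothetical stand-ins, since the real "test" is a human watching benchmarks after each change in the Coolbits panel.

```python
def find_max_stable_clock(start_mhz, step_mhz, is_stable, max_mhz=1000):
    """Walk the clock up in fixed increments until the stability check fails,
    then return the last clock that passed.

    `is_stable` stands in for the real-world check: a looped benchmark or an
    extended 3D session, watched for artifacts, hitching, crashes, or freezes.
    """
    clock = start_mhz
    best = None
    while clock <= max_mhz:
        if not is_stable(clock):
            break          # first failure ends the search
        best = clock       # remember the last clock that passed
        clock += step_mhz
    return best

# Hypothetical illustration: suppose the card stays stable up to 376MHz.
# Coarse 10MHz steps from 325MHz would find 375MHz as the last passing step;
# narrowing the step size around that point yields the final figure.
best = find_max_stable_clock(325, 10, lambda mhz: mhz <= 376)
```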
The Forceware drivers also offer an automated way to determine these clock settings, by clicking on the "Detect Optimal Frequencies" button. This runs a scan of core and memory frequencies and returns the maximum settings that don't exceed a critical temperature threshold. The problem that I've run into with this technique is that it doesn't seem to be very accurate. I tried this with the EVGA 6800 and got a core setting that was higher than the maximum stable setting I determined manually, and a memory clock that was well below what I was able to achieve in the fully manual mode. Consequently, I recommend sticking with the more reliable (though more time-consuming) manual method. There's even a fully automatic overclocking setting that's enabled by selecting the "Auto overclocking" radio button. However, this is way too "hands off" for a tweak-geek like me, so I haven't been tempted to even try it.
The outcome of the manual overclocking procedure was a pleasant surprise. For the core, I was able to achieve a maximum stable clock frequency of 376MHz, an increase of just under 16%. While this was about what I'd expected, the result from overclocking the memory is what surprised me. I was able to push the stock 700MHz DDR memory up to a setting of 862MHz, an increase of 23%. That's a very solid memory overclock, in my experience. So, with the maximum clock settings determined and applied, I continued the benchmark testing to see what the performance benefit would be.
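For reference, the percentage gains quoted above work out as follows. The 325MHz stock core clock is my assumption (the standard reference clock for the plain 6800); the 700MHz stock memory figure is from the text.

```python
stock_core, oc_core = 325, 376   # MHz; 325 is the reference 6800 core clock (assumption)
stock_mem,  oc_mem  = 700, 862   # MHz effective DDR, stock spec and achieved setting

core_gain = 100.0 * (oc_core - stock_core) / stock_core   # just under 16%
mem_gain  = 100.0 * (oc_mem - stock_mem) / stock_mem      # about 23%
```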
BENCHMARKING PLATFORM & TESTING SCHEME...
The video benchmarking was done using my personal PC; the basic configuration details are shown in the table at right. The testing scheme consisted of four benchmarks, two "synthetic" multi-test video benchmarks and two actual 3D games, employing both Direct3D and OpenGL rendering modes:
- 3DMark05 Build 120 Multi-test Video Benchmark (DirectX 9)
- AquaMark3 Multi-test Video Benchmark (DirectX 9)
- Far Cry "Volcano" Timedemo (DirectX game)
- Doom 3 ViaVGA Benchmark Timedemo (OpenGL game)
For each of these tests, the benchmark was run under four different conditions, all at a screen resolution of 1024x768 and 32-bit color depth:
- Default core & memory clockspeeds, anti-aliasing & anisotropic filtering off
- Default core & memory clockspeeds, 4x anti-aliasing & 8x anisotropic filtering
- Overclocked core & memory settings, anti-aliasing & anisotropic filtering off
- Overclocked core & memory settings, 4x anti-aliasing & 8x anisotropic filtering
In addition to the EVGA 6800 card, my son's previous vid card (MSI GeforceFX 5900XT-VTD128) and my current personal video card (Albatron Trinity Geforce 6800GT) were put through the same battery of tests as a basis for comparison. So, let's move on to the benchmark results and their interpretation....
The test data for the three different video cards is summarized in the following table and graphs:
Benchmark Result Summary Table
Benchmark Result Charts
3DMark05: No AA/AF
AquaMark3: No AA/AF
Far Cry "Volcano" Demo: No AA/AF
Doom 3 ViaVGA Demo: No AA/AF
Far Cry "Volcano" Demo: 4xAA/8xAF
Doom 3 ViaVGA Demo: 4xAA/8xAF
ANALYSIS & CONCLUSIONS
As expected, the performance of the EVGA 6800 falls in between its more expensive sibling, the 6800GT, and the older 5900XT. The good news here is that it's much closer to the former than the latter, which isn't too surprising, considering that they share the same GPU core technology. Let's take a look at the relative performance of these three cards in more detail:
Overall Rendering Speed - No real surprises here, as the 6800GT with its 16 pixel pipelines, DDR3 memory and higher clockspeeds dominates all four tests under all conditions. At stock clockspeeds, the 6800 lags its big brother by about 22% overall, which really isn't bad considering the deficit in pixel pipelines, core & memory clockspeeds, and total amount of memory. Overclocking the EVGA 6800 to the max reduces this performance deficit to about 10%, which is really quite impressive for a $164 card! The 5900XT, with its older GPU core technology, is left way behind by the two 6800-series cards. On average, it lags the 6800 by about 58% and the 6800GT by 65% across the four benchmarks used here.
Anti-aliasing/Anisotropic Filtering Performance Penalty - The extra 3D processing power of the 6800GT clearly pays off here. Enabling 4x anti-aliasing and 8x anisotropic filtering imposes an average performance penalty of only 18%, with a spread of 7 to 36% over the range of testing done. For the 6800, the average penalty increases to 24% (16 to 39% spread), and for the 5900XT it rises to 28% (17 to 33%). Even so, these are all good results when you consider that not so long ago in the world of 3D video rendering, cranking up the AA/AF settings typically lowered frame rates by well over 50%.
Overclocking Extent & Impact - Though none of the three cards is a huge overclocker, the 6800 achieved the highest average core/memory increase, at a bit over 19%. The 5900XT was second at 16.5%, and the 6800GT managed only 11.3%, not surprising for a higher-end card that is already configured closer to the limits of its hardware technology. As far as overclocking efficiency goes--the rendering speed increase relative to the clock increase--the 5900XT takes the prize at 108%. This probably reflects the fact that the 5900XT starts from a lower performance base and thus has more to gain from overclocking. The 6800 and 6800GT come in at 79% and 74%, respectively--still good efficiencies in translating their clockspeed increases into framerate gains.
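The efficiency figure can be written out explicitly. The function below is just the ratio described above; the 17.8% framerate gain is back-calculated from the stated 16.5% clock increase and 108% efficiency for the 5900XT, not a separately measured result.

```python
def overclock_efficiency(fps_gain_pct, clock_gain_pct):
    """Framerate gain expressed as a percentage of the clock gain.

    100% would mean framerates scale exactly with clockspeed; below 100%
    means some of the extra clock cycles are lost to other bottlenecks.
    """
    return 100.0 * fps_gain_pct / clock_gain_pct

# Working backward from the figures above: a 16.5% average clock increase
# at 108% efficiency implies roughly a 16.5 * 1.08 = 17.8% framerate gain.
efficiency = overclock_efficiency(17.82, 16.5)
```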
THE BOTTOM LINE
Despite the abysmal failure of the "softmod" process to deliver 16 fully functional pixel pipelines on this particular card, I'm still very pleased with the value offered by the EVGA 6800. While not quite in the performance class of the 6800GT, the 6800 model still offers major league 3D rendering power and does so at a very reasonable price point. Factor in the respectable overclock I was able to achieve for the card (especially the memory!), and the value is just that much better. The outcome is that my son has an excellent new vid card in his PC that should ably handle any game he can throw at it. And his old man's wallet didn't suffer too much of a hit in buying it for him. So, in this good news/bad news story, the good outweighs the bad by a healthy margin.
In summary, if you're looking for a highly capable 3D card but don't have too much to spend on it, the 6800 should definitely be on your "short list". The bang-for-the-buck factor for this model is already very attractive, and if you're lucky enough to get one that has a fully functional fourth pixel quad, so much the better!
November 16, 2005