I was curious to see whether nVidia is slowing down older video cards, such as those of the GeForce 8800 era, with newer drivers. In the late 90s, nVidia indeed slowed down the TNT1 and TNT2 cards with newer drivers once the GeForce2 was released. I wanted to know if nVidia pulled the same nasty trick with the GeForce 8800, which is an approximately 15 year old graphics card but still performs acceptably with modern games. I assembled a test bench for this purpose, to measure the performance of the card.
In the early 2000s, those who still used Socket 7 based computers with nVidia TNT1 graphics cards could notice that their gaming performance fell after every driver update. If the early 2.x TNT1 drivers offered 24-25 fps in a game on their configurations, the 4.x drivers barely reached 20 fps, and the 10.x or 20.x series of video drivers eroded the performance below 15 fps. This was super annoying for those who wanted to use newer drivers (to run newer titles).
When these people upgraded to stronger Celerons around this time, they noticed that on the Celeron computers their TNT1 and TNT2 cards could again run the given title above 25 fps, even with the newer drivers. The performance, however, decreased with version upgrades again and again; even the stronger processor could not keep up with the increasing demands of the nVidia drivers.
nVidia favored their newer graphics cards when optimizing their drivers, focusing on newer processors and configurations in each new driver release. This meant severe performance degradation and compatibility issues with their older hardware. Even if a driver still supported the older hardware, it didn't make much sense to use newer drivers with it.
When the PCI-E era arrived, the speed of processors stopped increasing for a while. This meant manufacturers had to keep the CPU usage of their drivers low, and they needed to come up with newer and smarter tricks to be able to feed the new era of video chips. The first PCI-E Radeon cards were just clones of the previous 9xxx generation. nVidia was also releasing PCI-E compatible graphics cards, but they were in the middle of developing a new type of graphics chip, to be used in the GeForce 8800 series. This chip was designed to meet the demands of the newly forming DirectX 10 API.
In 2006, nVidia released the 8800 GTX and GTS cards. These cards were a totally new design compared to their older brothers. The 8800 series of video cards uses unified shaders. Older video chips were pipeline-type solutions with dedicated pixel and vertex units. These cards are built from hundreds of small graphics processing units, working somewhat similarly to normal processors. This allows these cards to handle all types of workloads, and there are fewer situations where various internal parts of the chip are idling without any tasks to perform. This means bigger overall performance.
The nVidia GeForce 8800 chip indeed lived up to its hype. When the card was released, it outperformed every Radeon card by at least two times. These cards were, however, too power hungry, and they required very complex circuit boards. There were several GTX and GTS models, with 192, 256, 320, and 384 bit memory buses, and with 320, 512, 640, or 768 MByte of video memory. All of these cards offered quite brutal performance, but to simplify the chips, nVidia decided to refresh the product line.
The 8800GT was a cost-reduced version, manufactured on a newer chip manufacturing process. nVidia fine-tuned the architecture: they decreased the number of some of the processing units, and added different types of processing units where needed. They introduced a fixed 256 bit memory bus with 512 MByte of video memory (256 and 1024 MByte versions also exist, but the 512 MByte version was the most popular). The card is available in a single-slot design as well (compared to the dual-slot cooling of the GTX and GTS cards), and it needs only one 6 pin power connector (in comparison to the predecessors, which required two connectors). The newer, cheaper 8800GT quickly became the favorite of gamers, and AMD struggled to produce a product to compete against it.
Actually it's an Asus card; I just swapped the cooling to turn it into a single-slot card. I also downclocked the card by 35%, and undervolted it from 1.1v to 0.9v. This brought the card down from 90 degrees Celsius to only 50-60 degrees Celsius under full load.
Those who still play games with an 8800GT or similar-era nVidia cards probably have older dual core or very early quad core computers as well. I had the option to choose between a 64 bit Pentium D, a Core2Duo, a Pentium Dual Core, or an Athlon64 x2 based computer for this test. I decided to choose an Athlon64 x2, which is probably the slowest one, but still decent enough with its two cores and 2 GHz clock speed. The Athlon64 x2 was the second dual core desktop system released in the world. It's a very early design, and still uses DDR1 memory. Regardless, the motherboards for it already have PCI-E ports. The 8800GT uses PCI-E 2.0, but it usually boots up in PCI-E 1.1 or even 1.0 motherboards. Sometimes a video BIOS upgrade is required to make 8800GT cards boot up in these old motherboards; luckily my card already had a patched BIOS (Asus cards can be patched to work in PCI-E 1.0 boards).
The Athlon64 x2 supports DDR1 type memory, at a DDR-400 speed rate. Unfortunately, I only had random DDR memory sticks lying around, in random sizes. Some of them were DDR-266, some were DDR-333, and some were DDR-400. I shoveled these modules into the motherboard to get a total of 1.75 GByte of memory.
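To put these memory speeds in perspective, the theoretical peak bandwidth of single-channel DDR1 is easy to work out: the DDR-xxx number is millions of transfers per second on a 64 bit (8 byte) bus, and mixed sticks typically run at the speed of the slowest module (or whatever the BIOS is set to). A back-of-the-envelope sketch, using the stick speeds mentioned above:

```python
# Back-of-the-envelope: theoretical peak bandwidth of single-channel DDR1.
# "DDR-400" means 400 million transfers per second on a 64-bit (8-byte) bus.

def ddr_peak_bandwidth_mb_s(transfer_rate_mt_s: int, bus_width_bits: int = 64) -> int:
    """Theoretical peak bandwidth in MB/s for one DDR channel."""
    return transfer_rate_mt_s * (bus_width_bits // 8)

# With mixed sticks, the whole bank usually runs at the slowest module's
# rate (or at whatever speed is forced in the BIOS).
sticks_mt_s = [266, 333, 400]
effective = min(sticks_mt_s)

print(ddr_peak_bandwidth_mb_s(400))       # 3200 MB/s at DDR-400
print(ddr_peak_bandwidth_mb_s(effective)) # 2128 MB/s at DDR-266
print(ddr_peak_bandwidth_mb_s(200))       # 1600 MB/s at DDR-200
```

These are best-case numbers of course; real-world bandwidth is lower, but the ratios show how much is lost by running the bank below DDR-400.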
Another problem was the size of video games. I wanted to test far more titles, but unfortunately, nowadays games tend to be 30-60 GByte in size (or even bigger). Downloading even these few-GByte sized games took a good 3 hours for me, one by one, and it's very hard to imagine why someone would let his computer run for weeks just to download a 100 GByte video game. Games for the classes, not for the masses, it seems. It's not like there is a technical reason for these games to be so large; they probably just use uncompressed pictures as textures, and high polygon models for no reason, which makes the game sizes grow one hundred times bigger than they should be.
I originally planned to use a 40 GByte hard disk to carry out this test. Obviously that was not possible. Luckily, last year I found a few 80 GByte IDE hard drives in the e-waste. I still had one lying around unused, so I used that for the test setup, which allowed me to at least install a few of these few-GByte sized, relatively modern games.
The first game I planned to try was Dead Space. A friend of mine (Conker) recommended it to me after I mentioned how much I liked Alone In The Dark 1. Dead Space was supposed to be a horror game. After starting the game, it turned out I can't skip the intro. The first minutes of the intro just show random texts on a screen over white noise. The full blown professional incompetence of modern game developers is reaching levels I never knew were even possible before. Then the player sits in a seat while other characters talk on the screen. At this point, I had to visit the bathroom. Even when I finished, the unskippable intro was still ongoing. The one thing that makes this game horror is the horrifically bad design choices and super crappy gameplay. At least I was able to use it to measure the frames per second.
I decided to try out Hyperdimension Neptunia, as I had heard good things about it. Unfortunately, the controls are so bad you can't even control the player with the default keyboard settings. They ported the game from console, and they never tested whether it can even be played with the default settings. I quickly dragged out my joystick from my main PC and connected it, in the hope I would get better results playing the game with it. Unfortunately, the game was not able to detect the joystick, as it requires a specialized controller to be compatible. I went on to other similar games (anime characters fighting), just to be disappointed once more. No wonder the game industry is in such a deep crisis nowadays: every game is super crap with bad design, bad controls, next to no story, or they are just animated movies disguised as games, where you press a button once or twice every minute. If I wanted to watch a TV show, I wouldn't start up these games, that's for sure. I don't know if these "developers" even know what a video game is, but this is certainly not what games are supposed to look like. At least they will be fine for measuring performance.
Some games refused to show FPS counters, and I was not able to disable vsync in most of them. This meant the games were limited to 60 fps, no matter what I did. At this point I decided to scrap most of the games I wanted to test, and went with some classic ones, like NFS Shift, instead. I tested more games than I will list here, but as most of them were running at 60 fps all the time (despite force-disabling vsync in the drivers), I will not include them in this list. (At least we know that DX11 games indeed work fine on the 8800 GT, despite it being DX10 hardware.)
The driver built into Win7 64 bit is ForceWare 185, which I also included in the measurements.
The next driver release is the nVidia ForceWare 195 driver, which was released a few months later.
The third one is ForceWare 259, which is from August 2010.
The newest driver I was able to install is ForceWare 285, which is from 2011.
The newest compatible driver, ForceWare 342 from 2016, simply refused to install, and wanted me to download some special components from the internet, which I didn't allow, so that driver was not tested.
After setting up everything, I set the memory clock to DDR-200, because that was the only option which was stable at first. I started measuring with this setup, and the first game I tested was Dead Space. I tested the game in 1366x768, as that is the resolution of my monitor. First I tested with the built-in driver, then I moved to the newer drivers one by one. I tested the game multiple times, to verify that the results are accurate.
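Combining repeated runs into one number is nothing fancy; here is a minimal sketch of the kind of averaging described above, with made-up placeholder fps values (not my actual measurements):

```python
# Minimal sketch: averaging repeated FPS runs and checking their spread.
# The sample values below are hypothetical placeholders, not real results.
from statistics import mean, stdev

def summarize_runs(fps_samples):
    """Return (average fps, run-to-run spread) for one game/driver combo."""
    return mean(fps_samples), stdev(fps_samples)

runs = [21.8, 22.4, 22.1]  # three hypothetical runs of the same game
avg, spread = summarize_runs(runs)
print(f"avg {avg:.1f} fps, +/- {spread:.1f} fps")
# A large spread would mean the measurement is not trustworthy
# and the runs should be repeated.
```

If the spread between runs is small compared to the difference between drivers, the driver comparison can be trusted.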
We have the smoking gun. The 8800GT indeed gets slower with every driver upgrade. The earliest official Win7 driver (besides the bundled driver) is the fastest, offering the best performance. Newer drivers lose speed, crawling below 20 fps, and the newest driver can only squeeze 18 fps out of this game on this configuration. Terrible news.
I also tested the previously mentioned Hyperdimension Neptunia. As you can see from the results, it didn't really make any sense to test this game.
Along with other games of a similar type, it was locked to 60 FPS, no matter what I did, and my monitor was not big enough to stress the card harder with a higher resolution. I leave this result here to prove it will work on this processor and video card without any issue. Except for the fact that it refuses to recognize my joystick.
I realized I could set the memory clock speed higher, so I selected DDR-266 (133 MHz). Maybe I could have gone even higher, but overclocking this system is not the point of this test. A true gamer would probably overclock his processor and memory if he were forced to use this system to play games, and this processor indeed has some 10-20 percent overclocking potential - but that will not be tested now.
I decided to test Dead Space once again, this time with the faster memory settings. And indeed, the situation totally changed. With the faster memory settings, I got the following results:
Ok, now the results are the total opposite of before. I measured the results multiple times, so there were no measurement errors. Now the newer drivers were indeed faster than the older ones. This is something I didn't expect.
At first, when using the memory at DDR-200 settings, I thought I had indeed caught nVidia doing something wrong. But it seems this was not the case. When switching the memory to DDR-266, the situation became the opposite, and suddenly the newer drivers became somewhat faster than the earlier drivers. So obviously nVidia isn't intentionally slowing down older video cards; the problem is far more complex.
nVidia is basically trying to optimize the drivers more and more; however, these optimizations sometimes favor more modern computers and are harmful for older ones. It seems nVidia moved to favor higher memory bandwidth in its drivers in some cases to improve the FPS, but this optimization may hurt performance on systems with lower memory bandwidth.
If you have an older configuration, using newer drivers is not always the best choice. On some configurations, older drivers will give you better results. In some situations, you will have to try out multiple drivers to find the one that gives you the best performance. If you change your memory hardware settings, or change your memory, processor, or motherboard, different drivers may give you better results than others. The difference can be quite significant: drivers can be slower or faster by 25 percent on the same hardware configuration, and the differences can be as high as 40-50% if the settings are changed in the BIOS as well.
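For the record, percentages like the 25% above are just the usual relative-difference formula; a trivial sketch with hypothetical fps numbers:

```python
# Trivial sketch: relative fps difference between two drivers on the
# same hardware. The 20 and 25 fps values are hypothetical examples.

def fps_change_percent(old_fps: float, new_fps: float) -> float:
    """Relative change going from old_fps to new_fps, in percent."""
    return (new_fps - old_fps) / old_fps * 100.0

print(fps_change_percent(20.0, 25.0))  # 25.0 -> a +25% gain from the driver swap alone
print(fps_change_percent(25.0, 20.0))  # the same swap in reverse, about -20%
```

Note that the percentage is relative to the starting driver, which is why a +25% gain in one direction shows up as only about -20% in the other.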
nVidia is not slowing down your older video card, but you will have a hard time finding an optimal driver for your old gaming rig. Changing the drivers alone can give you a 25% boost in performance in some cases, and nVidia drivers are super sensitive to memory settings as well. These differences can make a game go from stuttering to fluid gameplay, so you will have to find the ideal driver for your obsolete gaming machine to get the best results.