Multiple nVidia Cards


Well, in the old days there was a thing called SLI, which in its original form paired two video cards and had each one render alternate scan lines of the display, so you could really get them to cooperate. Today, nVidia has taken the idea further and generalized it. Now they have something called NVLink, which looks like a really expensive horseshoe.

It connects two of the right kind of nVidia cards (the high-end ones like the RTX 2080 Ti) and gives them a much higher-speed bus.

This is now actually data center technology that nVidia uses to get very fast machine learning training times. They have something called NVSwitch, an 18-port crossbar that lets up to 16 GPUs in a system talk to each other at full NVLink speed.

Why did they need to do this? Well, the current Turing family of cards uses PCI Express 3.0, which supports roughly 1 GB/s per lane. A single card gets 16 lanes, but most of the time when you have two cards, each gets only 8 lanes, so per-card bandwidth drops from about 16 GB/s to 8 GB/s.

NVLink 2.0 supports about 3.1 GB/s per lane. The Turing bridge carries 8 lanes, so you are getting more like 25 GB/s, and this is all off the PCI Express bus, so the cards don't have to contend with the CPU for bandwidth.
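The bandwidth arithmetic above can be sketched out like this (the per-lane rates are the approximate figures quoted in this post, not exact spec numbers):

```python
# Rough one-direction bandwidth comparison: PCIe 3.0 vs. NVLink 2.0.
# Per-lane rates are the approximate figures from the text, not exact spec values.

PCIE3_GBPS_PER_LANE = 1.0    # ~1 GB/s per PCIe 3.0 lane
NVLINK2_GBPS_PER_LANE = 3.1  # ~3.1 GB/s per NVLink 2.0 lane

def bus_bandwidth(gbps_per_lane, lanes):
    """Total one-direction bandwidth in GB/s for a link with `lanes` lanes."""
    return gbps_per_lane * lanes

pcie_x16 = bus_bandwidth(PCIE3_GBPS_PER_LANE, 16)    # single card: 16 GB/s
pcie_x8 = bus_bandwidth(PCIE3_GBPS_PER_LANE, 8)      # two cards: 8 GB/s each
nvlink_x8 = bus_bandwidth(NVLINK2_GBPS_PER_LANE, 8)  # NVLink bridge: ~25 GB/s

print(f"PCIe 3.0 x16:  {pcie_x16:.1f} GB/s")
print(f"PCIe 3.0 x8:   {pcie_x8:.1f} GB/s")
print(f"NVLink 2.0 x8: {nvlink_x8:.1f} GB/s")
```

So even in the two-card case, the NVLink bridge gives each card roughly three times the bandwidth of its halved PCIe slot.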

You then get a new SLI entry in your nVidia control panel, and you should see about 50-100% scaling with a second card in 3DMark, depending on the benchmark (not bad). For most graphics benchmarks the scaling is close to 90%, which makes 4K gaming a reality. When I'm playing Modern Warfare at 4K, I can definitely see a single card slowing down, so if you can afford $1,200 cards, you do get some really cool performance!
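As a back-of-the-envelope sketch of what those scaling numbers mean, here is how scaling efficiency translates into frame rate (the frame rates below are made-up illustrative numbers, not benchmark results):

```python
# Effective frame rate for a two-card setup at a given scaling efficiency.
# 100% scaling would double the frame rate; ~90% is typical for graphics
# benchmarks per the discussion above. Frame rates here are illustrative only.

def two_card_fps(single_card_fps, scaling_efficiency):
    """Frame rate with a second card at the given scaling efficiency (0.0-1.0)."""
    return single_card_fps * (1 + scaling_efficiency)

# A hypothetical 45 fps single-card result at 4K:
print(round(two_card_fps(45.0, 0.9), 1))  # ~90% scaling -> 85.5 fps
print(round(two_card_fps(45.0, 0.5), 1))  # worst case ~50% scaling -> 67.5 fps
```

That is the difference between a title that chugs at 4K on one card and one that holds a smooth frame rate on two.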
