This will get incorporated into the new Haswell file server post, but it's big and bulky, so here it is on its own: DDR4 and LRDIMMs.
DDR4
DDR3 has been around forever; stock modules are typically 1600 MT/s, with parts ranging from 800 to 2133 MT/s, but DDR4 starts at 2133 MT/s and will eventually reach 4200 MT/s! As an aside, MT/s means megatransfers per second; DDR moves two transfers per clock cycle, so the actual clock rate is half the MT/s figure. If you can use the faster stuff, you want it, particularly for big servers that like to cache things in memory. The big boys like Crucial are already up to 3300 MT/s, nearly twice stock DDR3 speeds. If you are a gamer geek and have the latest Haswell-E (Core i7) with the X99 chipset on, say, an ASUS X99 motherboard, then you should be really happy.
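To make the units concrete, here is a quick sketch of the conversion, assuming a standard 64-bit-wide memory channel (the speeds are just the ones mentioned above):

```python
# Convert a DDR transfer rate (MT/s) into clock speed and peak bandwidth.
# Assumes a standard 64-bit (8-byte) wide memory channel.
def ddr_stats(mts):
    clock_mhz = mts / 2              # DDR: two transfers per clock cycle
    bandwidth_gbs = mts * 8 / 1000   # 8 bytes moved per transfer
    return clock_mhz, bandwidth_gbs

for speed in (1600, 2133, 3300):
    clock, bw = ddr_stats(speed)
    print(f"DDR-{speed}: {clock:.0f} MHz clock, ~{bw:.1f} GB/s per channel")
```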
Samsung is starting to make 8Gb memory chips on a 20nm process. That density will allow 128GB DIMM modules, if you can believe that. Quite a step up from the common 16GB DDR3 modules, and perfect for servers that can use terabytes of physical memory.
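As a back-of-the-envelope check, here's how 1GB dies add up; the rank and stacking layout below is an illustrative assumption, not Samsung's actual part design:

```python
# How 8Gb (1GB) dies could add up to a 128GB DIMM -- illustrative only.
die_gb = 8 / 8   # an 8-gigabit die holds 1 GB
chips  = 16      # x4-width data chips per 64-bit rank (ECC chips extra)
ranks  = 4       # quad-rank module
stack  = 2       # dies stacked per package via TSV
print(f"{die_gb * chips * ranks * stack:.0f} GB per module")  # 128 GB
```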
But which servers does this kind of fast memory actually matter to?
Tweaktown did a test, and with a monster dual 16-core system they couldn't even complete their testing with 128GB of RAM! The server we are building is more modest, so 16GB ECC modules like the Crucial CT16G4RFD4213 seem pretty reasonable. As another aside, because of interleaving you have to be careful to put your memory into the right slots on a particular motherboard. Normally you want a matched pair of modules, which roughly doubles the effective bandwidth. It's also interesting that if you fully populate a memory system, you can actually slow things down: filling all 16 slots reduced memory speed to 1866 MT/s, at least in this test. Scaling-wise, Tweaktown found the optimal bandwidth at four slots' worth of memory per processor, that is, 4-way interleave. The other good thing is that their testing shows all memory is pretty much the same at this level, so whether you use 8GB or 16GB per slot is mainly a cost question. If you have relatively modest memory needs, it makes sense to go with 8GB modules to maximize interleave, since 4-way is fastest: 4 x 8GB = 32GB versus 4 x 16GB = 64GB.
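To see why interleave matters, here's a toy model: consecutive cache lines are dealt round-robin across the channels, so a streaming read keeps all of them busy at once. Real controllers use fancier address hashing; round-robin is an assumption for illustration:

```python
# Toy model of 4-way interleaving: consecutive 64-byte cache lines land
# on successive channels, so sequential reads draw on all four at once.
def channel_for(address, channels=4, line_bytes=64):
    return (address // line_bytes) % channels

for addr in range(0, 64 * 8, 64):
    print(f"address {addr:4d} -> channel {channel_for(addr)}")
```

With every fourth line on each channel, peak streaming bandwidth is roughly four times a single module, which lines up with Tweaktown finding four slots per processor optimal.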
LRDIMM vs RDIMM for more than 16GB per module
Typical desktop memory is unbuffered (and less expensive) since there are not too many modules to drive. With servers, you need a register chip to drive all that memory across all that distance, hence RDIMMs. For really big systems with terabytes of memory, you need Load-Reduced DIMMs (LRDIMMs), which buffer the data lines as well, so you can hang much bigger memory arrays off each channel. This isn't that big a deal for our little file server, but the net is that soon we will be able to get 64GB memory modules. Wow, that really helps density. You do need BIOS support for LRDIMMs, as Supermicro explains, so make sure your server board has it; most modern server boards do. So for instance on an X10SRCH, you can have up to 512GB (8 x 64GB) of memory! It will cost you a little: an 8 x 32GB system is $3K worth of memory 🙂
ECC for large memory configurations
The last point is error correction. On the desktop this doesn't matter too much since the amount of memory is small, but in big server systems you are far more likely to see bit errors, so you want ECC, which also drives up the cost.
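The extra cost comes from the check bits: standard ECC DIMMs use a SECDED code that stores 8 check bits alongside every 64 data bits, which is why ECC modules carry a ninth chip per rank. Roughly:

```python
# ECC (SECDED) overhead: 8 check bits for every 64 data bits.
data_bits, check_bits = 64, 8
overhead = check_bits / data_bits
print(f"ECC overhead: {overhead:.1%}")                               # 12.5%
print(f"raw DRAM on a 16GB ECC DIMM: {16 * (1 + overhead):.0f} GB")  # 18 GB
```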
Buying Recommendations
Newegg has mainly Crucial memory, and the pricing shows a 16GB sweet spot. Amazon, as usual, is slightly cheaper, with the Crucial 1x16GB registered ECC server memory at $204, so that's the best choice:

  • 1x16GB: $214 for DDR4-2133.
  • 1x8GB: $113.
  • 1x4GB: $74.

So you can see the sweet spot in cost per bit is the 16GB module. For a small server we might start with 2x16GB ($428), or if you know you won't need much memory, the best performance would come from 4x8GB ($452).
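Working out the cost per GB from the list above makes the sweet spot obvious:

```python
# Dollars per GB from the module prices above.
prices = {16: 214, 8: 113, 4: 74}   # module size in GB -> price in USD
for gb, usd in prices.items():
    print(f"{gb:2d}GB module: ${usd / gb:.2f}/GB")
# 16GB: $13.38/GB, 8GB: $14.12/GB, 4GB: $18.50/GB
```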
Cooling
One final piece: servers are normally quite loud. For small business and home use we don't need such noisy gear, and we are not running at such high loads, so a quick skim through silentpcreview.com gives us some quieter options.
Power Supplies
The big boys use redundant power supplies, but most small business servers don't need that. We also don't have a monster graphics card chewing up lots of power, and the new Haswell-EN parts are really low power as well, so doing a little math shows the main issue is the startup power required by lots of disk drives. And of course you need lots of power connectors. A good power calculator shows that we need at least a 900-watt power supply if we put in all 24 drives?!
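Here's a rough version of that startup math; the per-component wattages are typical figures I'm assuming, not measurements, so check your drive datasheets for real spin-up numbers:

```python
# Rough PSU sizing for a 24-drive build. All wattages are assumptions.
drives       = 24
spinup_w     = 25    # a 3.5" drive can pull ~25W while spinning up
cpu_w        = 85    # a low-power Haswell Xeon TDP
board_misc_w = 60    # motherboard, HBA, fans, etc.
headroom     = 1.2   # 20% safety margin

total_w = (drives * spinup_w + cpu_w + board_misc_w) * headroom
print(f"suggested PSU: {total_w:.0f} W")   # ~894 W, hence the ~900W figure
```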