I haven’t built a machine in a while and have never built a dedicated file server box, so there were lots of lessons in this exercise. We started with a Mac Mini and a Drobo, then moved to Synology (a nice embedded product), but now it’s time to go all the way to Linux with ZFS, so wish me luck. Here are some of the decisions. Note that if you buy today, you get 10% off at Newegg with promo code visacheckout using the Visa Checkout system.
Here are the parts lists:
- Amazon Wish List. A quick list of recommendations.
- Newegg Wish List. I haven’t quite figured out how to get a shareable URL out of their system, but the list is there too.
Case

This matters more here than with desktop machines, where we normally go for the smallest mini-ITX case and the coolest processors. A file server needs lots of room, and hot swap is a good idea given how often disks fail. The big winner at Newegg.com seems to be the Norco RPC-4220 as the armored case for file servers:
- Norco RPC-4220. Can you imagine 20 hot swap bays in a single $300 box? That is definitely overkill, and you still need a power supply. These use mini-SAS connectors, whereas the RPC-4020 uses traditional SATA cables, which are less reliable; a mini-SAS cable fans out to four SATA cables. These cases require RL-26 rails. If you need more, the RPC-4224 has 24 bays, and if you need less, the RPC-4216 has 16. The Norco comes stock with loud fans, but you can get a 120mm fan partition for $11 that lets you put in low noise 120mm fans.
- Supermicro CSE-743TQ-865B. This is a 4U that costs more at $390, but it includes a power supply and is highly rated.
- ARK 4U-500 if you want 10 drives and don’t need hot swap, though you still need a power supply. $199.
- Rosewill RSV-L4411 if you need 12 hot swap bays (12 x 6TB = 72TB of data 🙂 for $199. Note that if you ever think you’ll use SAS drives, the $249 Rosewill RSV-L4412 supports those. Of course, in the distant future everything will just be PCI Express with flash, so it’s not clear you need that. The main issue is that the hot swap trays are flimsy, so probably not a great long term choice, but certainly cheap!
For a smaller file server, there are some nice mini-ITX options:
- Lian Li PC-Q26. If you are building an ultra-small system, this has 10 (?!) 3.5″ bays and is mini-ITX.
- Corsair Air 540. This is a small cube with room for a couple of hot swap drives. Nice for small SOHO NAS systems.
Fans

Most of these stock fans are loud. If the case is going into a server rack, that doesn’t matter; otherwise, head to silentpcreview.com and find some nice quiet 120mm fans. Some choices (I’ve used both, and they are amazingly quiet):
- Scythe Gentle Typhoon 120. The 800 rpm unit is expensive at $16-20 each but whisper quiet and very efficient.
- Noiseblocker M12-S1. Expensive at $23 but nearly noiseless, particularly the low-speed S1 model. The S2 is nearly as quiet, but with a top speed of 1200 rpm it is more versatile under heavy loads.
As an aside, you shouldn’t need it, but the Antec QuietCool 140 is the best 140mm fan out there. They haven’t reviewed 80mm fans in ages, but models from Scythe, Noiseblocker, Noctua, Enermax, and NMB should do pretty well.
And of course a Xeon is going to generate considerable heat, and silentpcreview.com has been a good place to find highly effective coolers like the Noctua NH-D14. The main issue is that server boards use the narrow ILM mount rather than the traditional square ILM, and very few coolers support it. The best one seems to be the Noctua NH-U12DX i4, a narrow-ILM version of their decent NH-U12 line.
If you decide not to use the Supermicro board, then you can use a square-ILM cooler, in which case the Prolimatech Genesis is amazing:
- Prolimatech Genesis. $80. Top rated, and it supports socket 2011 according to the product page. It needs two 140mm fans as well, so it is more expensive, but the performance is amazing. It’s 160mm tall, so it needs a huge case.
- Thermalright Silver Arrow SB-E. $80. A Thermalright was one of the first coolers I ever bought, so it’s nice to see this one at the top of the charts.
- Scythe Kotetsu. $35. A smaller single-tower cooler that is very efficient and cheap. The main problem is that it is very hard to find.
SAS vs SATA motherboard and drives
Another detail is that the 4220 uses SAS connections. A SATA hard drive connects directly to a SAS backplane (it is plug compatible), but if your motherboard only has SATA ports, you need a reverse breakout cable to connect them to the SAS backplane. See below: SAS motherboards are about $50 more, and that seems like an easy decision. Get SAS and you can always use SATA drives if needed.
You can also connect SATA drives through a SAS backplane to a SAS controller on the motherboard, since SAS is backward compatible with SATA.
Adaptec explains it all, but the big difference is that SAS expanders allow chaining up to 128 drives, whereas SATA controllers are point to point (so 8 SATA ports means 8 drives, whereas 8 SAS ports can address 128 drives), and the typical cable lets one SAS port fan out to 4 SATA drives.
Systems like the Norco have a SAS backplane, so you can attach the motherboard controller to it with a forward breakout cable and let the backplane feed the SATA drives. Another solution is nearline SAS drives, which are SATA-class drives with SAS interfaces.
The final issue is that the SATA data channel is much more likely to generate an error: roughly once per 10^17 bits transferred you get silent data corruption (SDC) on a SATA channel, while SAS is much better at error detection, at roughly once per 10^21 bits. Since these errors are silent, you have no way to correct for them, which is a good reason to use SAS for big arrays. There is an even higher SAS standard called T10 DIF that takes this to 10^28.
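To put those bit-error rates in perspective, here is a back-of-envelope sketch; the rates are the ones quoted above, and the 100 TB read volume is just an arbitrary example:

```python
# Expected undetected channel errors for a given read volume, using the
# error rates quoted above (~1 per 10^17 bits SATA, ~1 per 10^21 bits SAS).

def expected_silent_errors(terabytes_read, bits_per_error):
    """Expected count of silent (undetected) errors over a read."""
    bits_read = terabytes_read * 1e12 * 8  # terabytes -> bits
    return bits_read / bits_per_error

# Over 100 TB of lifetime reads:
sata = expected_silent_errors(100, 1e17)  # 0.008 expected silent errors
sas = expected_silent_errors(100, 1e21)   # 8e-7, four orders of magnitude safer
```

Even a fraction of an expected error matters here because it is silent: nothing in the stack will ever flag it.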
Additional Enclosures for more drives
Supermicro CSE-M35T-1B. This converts a 3x5.25″ external stack into 5x3.5″ hot swap bays on any machine for $99, so you can add it to an existing chassis, but the quality is iffy and it seems slow. The iStarUSA BPU-230SATA-RED does a similar thing with somewhat better reviews for $76, adding 3 hot swap bays in a 2x5.25″ enclosure.
Power Supply

The big boys use redundant power supplies, but most small business servers don’t need that. We also don’t have a monster graphics card chewing up lots of power, and the new Haswell-EP Xeons are really low power, so a little math shows the main issue is the startup power required when lots of disk drives spin up at once. And of course you need lots of power connectors. A quick run through the OuterVision calculator showed that a good 1000 watt power supply is what’s needed.
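A rough version of that math; the per-component wattages here are illustrative assumptions, not measured figures:

```python
# Back-of-envelope PSU sizing. Worst case is power-on, when every drive
# spins up at once. All wattages below are rough assumptions.

def psu_watts(n_drives, spinup_w=30, cpu_w=90, board_and_fans_w=60,
              headroom=1.3):
    """Recommended PSU rating given simultaneous drive spin-up."""
    startup = n_drives * spinup_w + cpu_w + board_and_fans_w
    return startup * headroom  # 30% headroom for aging and efficiency losses

# A fully populated 20-bay Norco:
watts = psu_watts(20)  # ~975 W, so a good 1000 W supply fits
```

Staggered spin-up (which many SAS backplanes support) cuts the startup term way down, but sizing for the worst case is the safe default.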
Newegg has a short list of server power supplies ($600 for a dual 1kW redundant unit). The desktop list is much longer; here are the most reviewed. Silentpcreview.com also has reviews of these. The SeaSonic Platinum is supposed to be even more efficient but has had some quality problems, and the Kingwin is more efficient still.
- SeaSonic X-1250. Also well rated as an editors’ choice. $220.
- SeaSonic X-1050. This was highly rated by Silentpcreview.com as well. It is completely silent when drawing less than 500 watts and has no electrical noise at all. $169.
- Kingwin LZW-1000 Platinum. We actually bought this unit, and it is truly whisper quiet. 80 Plus Platinum, so 94% efficient. It is modular, which is very nice, and it is also noiseless below 500 watts. But it seems to be out of stock.
These are the other newer well rated ones:
- Rosewill Lightning-1000. $169. 80 Plus Gold.
- SeaSonic Platinum-1000. $220. Includes 11 SATA connectors as well. 80 Plus Platinum.
- Rosewill Lightning-1300. $169. 80 Plus Gold.
Processor: Xeon or Core

Now there is the choice of Xeon or the Core series. The big news with Xeon E5 v3 is that you can get an incredible number of cores (up to 18 Haswell cores), so they support terrific concurrency, whereas the Core line has fewer, well, Broadwell cores but higher individual performance.
So Xeon is usually one generation behind the Core line. The idea is that these modern server chips are perfect for virtualization: a single server might run all kinds of virtual machines on real cores (if that makes sense 🙂. The map of processors is really complicated, with CPUs from $4K (?!) down to $200. Here are some good small system choices.
The naming system for Xeon is as dense as Core’s, but basically we are on Haswell-EP (Core is on Broadwell), it was just announced, and only Wikipedia has really been able to keep up.
The E3s are the small business server models: they still have integrated graphics and typically have four cores and 8 threads. The lower end ones run on the same socket 1150 as their desktop cousins, use DDR3, and date from May 2014.
- Xeon E3-1226. $220 from Newegg, at 3.3GHz with 2/3/4/4 turbo boost bins (so up to 3.7GHz single-core).
The E5s are the real server chips running Haswell-EP. Note they don’t have integrated graphics, so you need a board with onboard/IPMI graphics or a separate card for a console. These things have Turbo Boost, which is a complex calculation saying how many 100MHz bins faster they run, from all cores busy down to a single core running. They all use DDR4, so RAM access is theoretically 20-30% faster. These were just announced in September 2014.
- Xeon E5-16xx v3. These are uniprocessor parts with 4 cores, ranging from the 1607 at 3GHz for $200 to the 1630 at 3.7GHz (with 1/1/1/1 turbo boost) for $372, but they are not readily available yet.
- Xeon E5-26xx v3. These are dual-socket parts. Confusingly, the $444 base part (2623) has 4 cores and runs at 3GHz but boosts at 3/3/5/5: with 4 or 3 cores active it boosts 3 x 100MHz to 3.3GHz, and with 1 or 2 cores active it boosts 5 x 100MHz to 3.5GHz. It can boost further when fewer cores are active.
- Xeon E5-2603 v3. This is a budget chip at $223, dual-socket capable, with six cores at 1.6GHz and no turbo boost, so it’s good for lots of small jobs that don’t need much processing power.
- Xeon E5-2609 v3. $300. 6 cores at 1.9GHz and no turbo boost, so a very budget chip, but great for file servers doing little work.
- Xeon E5-2620 v3. Another example: six cores at 2.1GHz, dual-socket capable, with 3/3/3/3/4/5 turbo boost, so it is a good way to run lots of small jobs. $434 at Newegg, $407 at Amazon. This seems like probably the best choice for a high performance server.
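The turbo-bin notation above (e.g. 3/3/5/5) can be decoded with a little arithmetic, assuming Haswell’s 100 MHz bins:

```python
# Decode Intel turbo-bin notation: bins are listed from all cores active
# down to one core active, and each Haswell bin is 100 MHz.

def turbo_ghz(base_ghz, bins, active_cores):
    """Turbo clock in GHz for the given number of active cores."""
    return round(base_ghz + bins[-active_cores] * 0.1, 1)

# Xeon E5-2623 v3: 3.0 GHz base, 3/3/5/5 bins
all_cores = turbo_ghz(3.0, [3, 3, 5, 5], 4)  # 3.3 GHz with 4 cores busy
one_core = turbo_ghz(3.0, [3, 3, 5, 5], 1)   # 3.5 GHz single-core

# Xeon E3-1226: 3.3 GHz base, 2/3/4/4 bins -> 3.7 GHz single-core
e3_peak = turbo_ghz(3.3, [2, 3, 4, 4], 1)
```

The pattern holds generally: fewer active cores leave more thermal headroom, so the chip can climb more bins.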
Motherboards

Supermicro seems to have the most solid reputation in dedicated server boards. Newegg also has a $100-off promotion if you buy a motherboard and a processor from them:
- Supermicro MBD-X10SRH-CLN4F. What a board: it supports 8 SAS and 10 SATA drives as well as 4 Gigabit Ethernet ports. It is $419 from Amazon and seems quite future proof. Expensive, but nothing compared to all the drives you need.
- Supermicro MBD-X10SRI-F. Supports the latest socket 2011-3, has dual Gigabit Ethernet plus a dedicated management port and 10 SATA connectors. It also supports DDR4 with LRDIMMs, up to 512GB of RAM. $290.
- Supermicro MBD-X10SRL is about $10 cheaper than the X10SRI and apparently differs in virtualization features, whatever that means.
Dual Socket or not
As ServeTheHome explains, dual socket motherboards are common in servers. That makes sense: you want lots of cores running around, and buying two processors is often cheaper than buying one with twice as many cores. It would be interesting to analyze whether it is better to buy two $400 processors and mate them. The tradeoff is that each processor gets half the memory, half the PCI Express bandwidth, half the disks, and so on, so at peak each has half the resources available compared with a single $800 processor. I’m sure Intel’s pricing is non-linear though, so it might make sense.
However, for simpler small business servers, that complexity probably isn’t needed.
DDR4 and LRDIMM Memory
DDR4 is the next generation of memory: compared with DDR3 it has 16 banks instead of 8, and for big virtualization applications like Big Data there are very high capacity LRDIMM modules at a 20% cost premium, good for very large configurations of up to 1.5TB of physical memory!
DDR3 has been around forever; stock modules are typically 1600 MT/s, ranging from 800 to 2200 MT/s, but DDR4 starts at 2133 MT/s and will eventually reach over 4000 MT/s! As an aside, MT/s is megatransfers per second (DDR does two transfers per clock, so the true clock rate is half the MT/s figure), so if you can use the faster stuff, do, particularly for big servers that like to cache things in memory. The big boys like Crucial are already up to 3300 MT/s, nearly twice stock DDR4. If you are a gamer geek with the latest Haswell-E (Core i7) and the X99 chipset in, say, an ASUS X99 motherboard, you should be really happy.
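The MT/s-to-bandwidth conversion is simple: two transfers per clock, 8 bytes per transfer on a standard 64-bit channel:

```python
# Peak bandwidth of one 64-bit DDR memory channel from its MT/s rating.

def channel_gbs(mts):
    """GB/s for a 64-bit channel: each transfer moves 8 bytes."""
    return mts * 1e6 * 8 / 1e9

ddr3_1600 = channel_gbs(1600)  # 12.8 GB/s
ddr4_2133 = channel_gbs(2133)  # ~17.1 GB/s, per channel before interleave
```

Multiply by the number of populated channels to get the platform’s theoretical peak, which is why interleaving across channels matters so much below.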
Samsung is starting to make 8Gb memory chips on a 20nm process. That density will allow 128GB DIMMs, if you can believe that. Quite a step up from the common 16GB DDR3 modules, and perfect for servers that can use terabytes of physical memory.
But for which servers does this kind of fast memory actually matter?
TweakTown did a test, and with a monster dual 16-core system they couldn’t even complete their testing with 128GB of RAM! The server we are building is more modest, so 16GB ECC modules like the Crucial CT16G4RFD4213 seem pretty reasonable. As another aside, because of interleaving you have to be careful to put your memory into the right slots on a particular motherboard. Normally you want modules in pairs, which roughly doubles the effective bandwidth. It’s also interesting that fully populating a memory system can actually slow things down: filling all 16 slots reduced the memory speed to 1866 MT/s, at least in their test. Scaling-wise, TweakTown found optimal bandwidth with 4 modules per processor, that is, 4-way interleave. The other good thing their testing shows is that all memory is pretty much the same at this level, so it is mainly a cost question whether you use 8GB or 16GB per slot. If you have relatively small memory needs, it makes sense to go with 8GB modules to maximize interleave, since 4-way is fastest (e.g. 4 x 8 = 32GB versus 4 x 16 = 64GB).
LRDIMM vs RDIMM for more than 16GB per module
Typical desktop memory is unbuffered (and less expensive) since there are not too many modules. With servers, you need a register chip to drive all that memory across all that distance. For really big systems with terabytes of memory, you need load-reduced DIMMs (LRDIMMs) so you can build big memory arrays. This isn’t that big a deal for our little file server. The net is that soon we will be able to get 64GB memory modules; wow, that really helps density. You do need BIOS support for LRDIMMs, as Supermicro explains, so make sure your server board has it, though most modern server boards do. For instance, on an X10SRH you can have up to 512GB (8 x 64GB) of memory! It will cost you a little: an 8 x 32GB system is $3K worth of memory 🙂
ECC for large memory configurations
The last point is error correction. On desktops this doesn’t matter too much since the amount of memory is small, but in big server systems you are far more likely to see memory errors, so you want ECC, which also drives up the cost.
Newegg stocks mainly Crucial memory, and the pricing shows a 16GB sweet spot. Amazon as usual is slightly cheaper, with the Crucial 1x16GB registered ECC server module at $204, so that’s the best choice:
- 1x16GB. $214 for DDR4-2133.
- 1x8GB. $113
- 1x4GB. $74
So you can see the sweet spot in cost per bit is the 16GB module. For a small server we might start with 2x16GB ($428), or if you know you won’t need much memory, the best performance would come from 4x8GB ($452) with full 4-way interleave.
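The sweet-spot claim is easy to check from the prices above:

```python
# Cost per GB for the Crucial registered ECC modules listed above.

prices_usd = {16: 214, 8: 113, 4: 74}  # module size in GB -> price

cost_per_gb = {gb: usd / gb for gb, usd in prices_usd.items()}
# 16GB -> $13.38/GB, 8GB -> $14.13/GB, 4GB -> $18.50/GB
cheapest = min(cost_per_gb, key=cost_per_gb.get)  # 16GB modules win
```

So the 16GB module is cheapest per bit, and the 8GB premium is the price of better interleave.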
Storage with hard disks
Hard disks are still the cheapest for things like video storage and sheer bulk. The big issue is the decision between SAS, NL-SAS, and SATA. SAS drives are enterprise class: higher quality, and expensive. SATA drives fail in droves (I lost about a drive every six months or so in my RAID arrays at home, and that is with just 8+4+8+8 = 28 drives in all!).
Anandtech has done some good roundups of 4TB and 6TB SATA drives. Some 6TB drives use helium fill to fit 7 platters instead of the five in normal air-filled 4TB drives. Although more expensive, the Seagate and HGST drives are enterprise class, with 10x fewer errors over their lives than the consumer grade WD Red, as well as five year warranties. Interestingly, while the raw read speeds are quite different, in typical NAS scenarios with low client counts it doesn’t make much difference.
- WD Red 6TB. This uses six platters at higher density. $299, so reasonably cheap, although right now the sweet spot in pricing is still the 4TB at $169 or so; you pay a big premium for 6TB. 130MBps raw transfer, as it rotates at 5400 rpm.
- Seagate Enterprise Capacity 3.5 HDD 6TB. Also air filled. 176MBps raw transfer
- HGST Ultrastar He6. This one uses the helium fill; note that since it is sealed, you could even dump it into an aquarium PC :-). $470. It runs 4°C cooler with 49% less power, and 142MBps raw transfer.
Anandtech also explains and reviews the differences between the various lines; the big differences are warranty and failure rates. The big issue is unrecoverable read errors: if you run RAID-5 and a drive fails, then a single bit error on the remaining drives during the rebuild means you are cooked and can’t recover.
That is one reason we run RAID-6 here: even with consumer drives this is less likely to happen, at the cost of capacity. In a funny way, the more expensive enterprise drives are worth it because they fail less (if you would otherwise lose 20% of your drives a year, you should be willing to pay a 25% premium), and being able to run RAID-5 instead of RAID-6 gains another 12.5% of capacity in an 8-drive array (you give up 1 of 8 drives to parity with RAID-5 vs 2 of 8 with RAID-6; the penalty is smaller for bigger arrays, of course).
One problem is that with consumer drives (rated one unrecoverable error per 10^14 bits), a 10TB read gives you better-than-even odds of a hard error. So if you lose a drive, the rebuild read across the remaining drives is likely to hit another hard error, which is why you need RAID-6 with consumer drives. The so-called nearline SAS/SATA drives are rated 10^15, so a 10TB read is only about 8% likely to hit an error, and your rebuild risk is much lower. Basically, with these big drive arrays (10-40TB), you need to upgrade from consumer to at least nearline drives (e.g. 10^15).
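A quick sketch of those odds, treating bit errors as independent (a Poisson approximation):

```python
import math

# Probability of hitting at least one unrecoverable read error (URE)
# during a rebuild-sized read, at the error rates quoted above.

def p_any_ure(terabytes_read, bits_per_error):
    """P(>= 1 URE) over a read, via the Poisson approximation."""
    expected_errors = terabytes_read * 1e12 * 8 / bits_per_error
    return 1 - math.exp(-expected_errors)

consumer = p_any_ure(10, 1e14)  # ~0.55: better than even odds per 10 TB
nearline = p_any_ure(10, 1e15)  # ~0.08: an order of magnitude safer
```

That one-order-of-magnitude jump in the URE rating is exactly what makes RAID-5 tolerable on nearline drives and reckless on consumer ones.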
- WD Red Pro. 7200 rpm. For 8-16 bay NAS systems. 4TB and 6TB versions. SATA. $260. 5-year warranty, but it is only 10^14.
- WD Red. For smaller 8-bay NAS. 5400 rpm. $173. 3-year warranty. 10^14
- Seagate Constellation ES.3. Traditional enterprise drive. 128MB cache vs the normal 64MB. $290. 5 year. 10^15. Actually quite a value, as it is last year’s drive with good performance at $100 less. A target if you can find it.
- Seagate Enterprise Capacity. Their latest drives. 5 year. 10^15. $400. It is the winner if performance is the key.
- Seagate NAS. 5900 rpm. 3 year. $170. 10^14.
- Seagate Terascale SED. Low cost enterprise drive with self-encrypting (SED) support. $260. 10^14.
- WD Re. Western Digital’s enterprise drive. $300. 5 year. 10^15.
- WD Se. Competitor to Terascale for low cost enterprise. 5 years but 10^14. $244
- Toshiba MG03AC. Nearline enterprise storage. 7200 rpm. $290 Newegg, $232 Amazon. 5 year. 10^15. Also a value: it is about 50% faster than the consumer NAS drives, and the 10^15 rating means you can run RAID-5.
- HGST Ultrastar 7K4000 SAS. A chance to try a SAS drive. 2M hour MTBF?! $270-380. 10^15, so a pretty good deal if you can get it for $270; the SATA version is $232 at Amazon.
Performance-wise, if you have 1-25 clients hammering away, you definitely get higher throughput from drives like the Enterprise Capacity. The drives fall into three performance classes:
- The Seagate Enterprise Capacity, WD Red Pro, and Seagate Constellation are particularly impressive at about 4MBps total throughput in RAID-5, with the lesser drives at half that.
- Halfway between the cheap drives and the expensive ones is the Toshiba at about 3MBps. The HGST SAS had an interesting curve: slow at 2MBps with few clients, but reaching 4MBps at 25.
- The “consumer” drives, the Seagate NAS and WD Red, are more like 2MBps.
Their summary: if you only care about performance, the Seagate Enterprise Capacity v4 is the winner, but it is hard to get. The Constellation is actually a very good value at $100 less. The Toshiba is a good deal if you can get it at $290, and it has the 10^15 rating, although its performance is below the Constellation’s. Interestingly, the HGST Ultrastar isn’t a bad deal either, and it is a SAS drive if you can get it for $270, saving connectors etc. Net net, the WD Red Pro performs like an enterprise drive and is $100 cheaper, but is less reliable at 10^14.
Here is their pricing on Newegg and Amazon right now, which changes the ordering somewhat:
- Seagate NAS 4TB. $170 on both Newegg and Amazon. A good value drive with decent reviews and decent performance; I’ve been using them in my home build. You need to run RAID-6 though, and they do fail even with little use.
- WD Red 4TB. $170 Amazon and the 5TB is just $226 so a pretty good capacity deal
These are the mid-tier ones:
- HGST Ultrastar 7K4000. $270, SATA and SAS the same price at Newegg; $207 SATA from Amazon.
- Seagate Terascale with Instant Secure Erase 4TB. $260 on Newegg and $214 on Amazon
- WD Red Pro. $408 Newegg, $239 Amazon. Good performance but poor reliability
The high performance drives are all pretty close on price, and given that, the Seagate Enterprise Capacity seems like the best value.
- WD RE. $239 Amazon, $399 Newegg SATA. $272 SAS Amazon.
- WD Se. $240 Amazon
- Seagate Enterprise Capacity. $255 Amazon SATA and $262 for SAS from Newegg and Amazon (Platinum Micro)
- Seagate Constellation ES.3. $252 Amazon SATA or $262 SAS. $270 Newegg SATA and $320 SAS. Wow, this is the best deal given the performance figures: last year’s drive, and awesome. As an aside, there is an array of different model numbers: ST4000NM0033 is SATA ($270), ST4000NM0023 is SAS ($330), ST4000NM0043 is SED ($390), and ST4000NM0063 is SED-FIPS encrypted ($330). SED is a cool feature that encrypts the drive at a low level, which is pretty handy for enterprises. This drive is $100 more than the Seagate NAS, but it has a 5 year warranty, can run RAID-5, and is twice as fast.
- Toshiba MG03AC. $405. Not worth it at this price.
The conclusion is that the Seagate Enterprise Capacity 4TB SAS at $262 seems like a good value: 10^15 reliability, SAS for a more reliable connection, double the performance of the Seagate NAS at $170, plus a five year warranty.
Storage with SSDs
If you want to be the cool kid on the block, then spend $1K and get an Intel SSD DC P3700 (reviewed at Tom’s Hardware), which uses PCI Express directly. This blows through the 6Gbps limit of SATA and just uses PCI Express for everything. You can also get an adapter board that takes the new M.2 mobile flash form factor and turns it into PCI Express. SAS drives are enterprise class and have more throughput, but they cost a fortune. The cusp case is that SATA is cheaper but fails more, and we aren’t doing that many simultaneous accesses.
The future is clearly PCI Express direct-connect for drives. There isn’t really a reason not to try it, and it’s a good experiment for this project. Amazon now has a sea of adapters that connect M.2 directly to PCI Express. This works well with these huge server boards and their 7 PCI Express slots, and it should be more reliable than adding connectors and cables. The bottlenecks stack up like this:
- SATA 3.0 bottleneck (6Gbps). Moving any faster would be a big problem for SATA. That is about 560MBps effective, according to Anandtech. When we built a RAID-0 configuration with two SATA SSDs, we in fact got a real 1GBps of transfer on our aquarium PC, so these peak rates can actually happen.
- SAS bottleneck (12Gbps). Can you believe we are talking about 12Gbps to a drive being a bottleneck?
- PCI Express 3.0. There’s lots of math, but basically each lane is roughly 1GBps of bandwidth (8Gbps raw, about 985MBps effective after 128b/130b encoding). So an x4 link is about 4GBps, or an amazing 32Gbps.
- SATA Express (SATAe). This is an upcoming standard that basically stuffs PCI Express lanes into a SATA-style cable, so you don’t need a PCI Express card adapter. Of course the cables will be less reliable, but it’s a neat idea. New motherboards like the ASUS Z87 line are getting updated with SATAe support.
- NVMe. There will also be a move from the AHCI software stack to NVMe, which has to be built into the SSD controller (the SandForce SF3700 will bring this to the consumer world shortly, so look out for those). This increases IOPS because the latency for accessing data is much shorter. The soon-to-ship Intel P3500 is a consumer grade PCI Express drive (no adapter needed) with NVMe, priced at $600 for 400GB.
- M.2 B key vs. M key. Current SSDs are either B-keyed or M-keyed. B-keyed modules support x2 PCI Express and M-keyed support x4, so you want M-keyed to get double the bandwidth. Only a few SSDs are M-keyed, like the new Samsung XP941 ($750 for 500GB), which is a 1GBps SSD and so needs x4. Most SSDs are around 500MBps and can live with x2. Note that if an adapter carries a pair of SSDs, you need double the lanes to sustain peak performance. That can make sense, as a Crucial M550, for instance, is one third the cost per bit ($250 for 500GB).
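The effective rates above follow from the line rates and encoding overheads (8b/10b for SATA and SAS, 128b/130b for PCI Express 3.0):

```python
# Usable MB/s from a raw line rate, accounting for encoding overhead.

def effective_mbs(gbps, encoded_bits_per_byte):
    """MB/s after encoding: e.g. 8b/10b puts 10 wire bits per data byte."""
    return gbps * 1e9 / encoded_bits_per_byte / 1e6

sata3 = effective_mbs(6, 10)             # 600 MB/s (8b/10b)
sas3 = effective_mbs(12, 10)             # 1200 MB/s (8b/10b)
pcie3_lane = effective_mbs(8, 130 / 16)  # ~985 MB/s (128b/130b)
pcie3_x4 = 4 * pcie3_lane                # ~3.9 GB/s for an x4 slot
```

So even a single PCI Express 3.0 lane beats SATA 3.0, and an x4 link has more than triple the headroom of 12Gbps SAS.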
As an aside, if you were thinking about standalone storage boxes rather than an embedded drive array, here’s why it makes sense to go embedded. It’s about speed:
- USB 2.0. Theoretically 480Mbps, roughly 10x slower than USB 3.0.
- Firewire 800. As the name implies, 800Mbps of throughput.
- USB 3.0. Theoretically 5Gbps, around 500MBps effective, which is competitive with SATA.
- Thunderbolt 1 and 2. The first version is 10Gbps and the second 20Gbps. With real loads, Thunderbolt 1 delivers about 2x USB 3.0.
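One way to feel the difference is the time to move 1TB over each link. The sustained rates below are rough real-world assumptions, not spec numbers:

```python
# Hours to move 1 TB at a sustained transfer rate in MB/s.

def hours_per_tb(sustained_mbs):
    """Transfer time for 10^12 bytes at the given MB/s."""
    return 1e12 / (sustained_mbs * 1e6) / 3600

usb2 = hours_per_tb(35)           # ~7.9 h at a typical ~35 MB/s
firewire800 = hours_per_tb(80)    # ~3.5 h
usb3 = hours_per_tb(400)          # ~0.7 h
thunderbolt1 = hours_per_tb(800)  # ~0.35 h, roughly 2x USB 3.0
```

With multi-terabyte arrays, a rebuild or migration over USB 2.0 is measured in days, which is the practical argument for keeping the drives on the internal bus.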
So what kinds of cards can you get if you just want the advantages of PCI Express now? As you can see, the speeds are much higher:
- M.2 to PCI Express card. This is a $25 adapter; it is x2, so it has a 750MBps bandwidth maximum and works with B-keyed SSDs.
- Lycom DT-120 M.2 to PCI Express x4. $24, and this one only supports M-keyed cards, which run 4 PCI Express lanes where B-key supports only 2. This would be a good choice for a monster boot drive that needs 1GBps.
- Micro SATA dual M.2 to PCI Express x4. This one has two B-keyed slots, so it’s a good match for, say, a pair of Crucial M550s. It effectively doubles the storage density, so say 1TB, and if you RAID-0 them, it is a cheap way to get 1GBps theoretically. Although some folks say RAID-0 doesn’t deliver as well in the real world, and you are better off with two logical drives and easier concurrent access to the data.
Net net, you want SSDs with SATAe and NVMe support; for now the choices seem to be:
- Wait for the Intel P3500 to ship and get 400GB of glory and incredible boot performance with x4 and NVMe. $600.
- If you can’t wait and want super high bandwidth (but higher latency without NVMe), get a pair of Crucials on a dual M.2 to PCI Express x4 adapter. This is less than half the price per bit of the P3500, at $600 for 1TB.
- If you want blazing performance right now (but without NVMe), it’s the Samsung XP941 on the Lycom x4 for $800.