The incredible speed of the Apple M1 Max

Well, Paul was asking just how fast the new MacBook Pro with M1 Max really is. It was a good question, so it took a little digging, and it raised some questions:

  1. When was RAM slower than current SSD transfer speeds?
  2. How has the ratio of level 1 and level 2 cache to RAM to disk storage changed over time?

So after some quick digging, for the 2021 MacBook Pro with M1 Pro and M1 Max:

  1. Memory speed. It runs at 66GBps for the M1, 200GBps for the M1 Pro with up to 32GB, and 400GBps for the M1 Max with up to 64GB of unified memory. This is the bandwidth from the caches on the processor to the memory itself; I can’t find the actual cache bandwidth figures.
  2. Disk speed. It runs at 7.4GBps for the M1 Pro and M1 Max and 3.3GBps for the M1. 

Disk Speeds

To give you a sense of the evolution of disk speeds:

  1. The 2008-era drives, such as the 320GB WD Blue, run at 76.3MBps read and 81MBps write
  2. The 2012 Western Digital WD1000 is a 1TB 3.5″ disk attached to SATA with 121-217MiB/s read and 112-192MiB/s write
  3. The 2013 Samsung SSD 840 EVO 1TB attached with SATA is 348-500MiB/s read and 152-488MiB/s write. Here, we see the limitations of the SATA interface at about 500MBps.
  4. The NVMe Samsung SSD 960 Pro runs at 1.1-2.6GiB/s read and 648-1,943MiB/s write

So when you look at these numbers, the Apple figures are pretty incredible:

  1. Compared with a 2008 drive, these SSDs are running about 90x faster
  2. Even compared with modern NVMe drives, they are still 2-7x faster
  3. Reading a 500GB file takes about a minute, something a hard disk from 10 years ago would have needed hours to do.
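As a quick sanity check on those claims, here is a small Python sketch of raw sequential transfer times. The 7.4GBps figure is Apple's quoted SSD speed from above; the other speeds are rough assumptions for the older drive classes in the list:

```python
# Back-of-the-envelope sequential transfer times for a 500GB file.
# Only the 7.4GBps number comes from Apple's spec; the rest are
# rough assumed figures for each drive generation.
def transfer_seconds(size_gb: float, speed_gbps: float) -> float:
    """Seconds to read size_gb at a sustained speed_gbps."""
    return size_gb / speed_gbps

drives = {
    "2008 HDD (~0.08GBps)": 0.08,
    "SATA SSD (~0.5GBps)": 0.5,
    "NVMe SSD (~2.5GBps)": 2.5,
    "M1 Max SSD (7.4GBps)": 7.4,
}

for name, speed in drives.items():
    minutes = transfer_seconds(500, speed) / 60
    print(f"{name}: {minutes:.1f} minutes for 500GB")
```

Note this is sequential transfer only; once seeks enter the picture, the hard disk numbers get far worse, as the next section shows.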

Now of course, with hard disks, you never just start at the beginning of the disk and read all the way through. What really happens is that you have to seek to the right location on the disk and then wait for the sector to come around. The net effect can be pretty horrible.

So, with modern hardware on a laptop, a typical hard disk has a 12ms average seek time, and then you have a rotational delay waiting for the sector to come around. On say a 5200rpm mobile drive, one revolution takes 60 sec/min ÷ 5200rpm ≈ 11.5ms, and on average you wait half a revolution, so about 5.8ms. That is, if you are reading and have to move to a new position, you need an extra 12 + 5.8 ≈ 17.8ms to get there.

So, let’s take a case where you are reading small files, say 10KB, and you need to seek 50% of the time; what’s the throughput? The transfer itself is 10KB ÷ 150MBps ≈ 0.07ms, while the average positioning cost is 50% × 17.8ms ≈ 8.9ms, so you are spending 8.9/9.0 ≈ 99% of your time just moving the disk around. That gives 1,000ms/sec ÷ 9ms/op ≈ 112 operations per second, for an effective throughput of 112 ops/sec × 10KB/op ≈ 1.1MBps. That’s why you should always get an SSD if you can; fragmented workloads on hard drives are terrible!
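The back-of-the-envelope above can be sketched in a few lines of Python. The 12ms seek, 5200rpm spindle, 150MBps transfer rate, 10KB file size, and 50% seek fraction are the assumed figures from the text:

```python
# Sketch of the small-file throughput model for a laptop hard disk.
# All constants are the assumed figures from the discussion above.
SEEK_MS = 12.0        # average seek time
RPM = 5200            # spindle speed of a mobile drive
TRANSFER_MBPS = 150.0 # sequential transfer rate
FILE_KB = 10.0        # size of each small file read
SEEK_FRACTION = 0.5   # fraction of reads that require a seek

rev_ms = 60_000 / RPM   # ~11.5ms per revolution
rot_ms = rev_ms / 2     # average rotational wait: half a revolution, ~5.8ms
transfer_ms = FILE_KB / 1024 / TRANSFER_MBPS * 1000  # ~0.07ms to move 10KB

avg_op_ms = SEEK_FRACTION * (SEEK_MS + rot_ms) + transfer_ms
ops_per_sec = 1000 / avg_op_ms
throughput_mbps = ops_per_sec * FILE_KB / 1024

print(f"{avg_op_ms:.2f}ms per op, {ops_per_sec:.0f} ops/sec, "
      f"{throughput_mbps:.2f}MBps effective")
```

The striking part is that the 10KB of actual data transfer is under 1% of each operation; nearly all the time goes to mechanically positioning the head.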

Memory Speeds

  1. DDR. The original specification for double data rate synchronous dynamic RAM, introduced in 2000 at 1,600MBps or 1.6GBps
  2. DDR2-400. First announced in 2002 and in the mainstream market by 2004, running at a bandwidth of 3,200MBps
  3. DDR3. This powers 2010-era machines, and DDR3-1600 runs at 12.8GBps maximum, although that is the theoretical maximum at the chip level. In the real world you combine multiple channels, so a typical memory read for a system with multiple chips is around 40GBps
  4. DDR4. Announced in 2014 and available in 2016, DDR4-2400 runs at 19.2GBps maximum, and these are typically 4-way interleaved, so you can get about 60GBps in a real system
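All of these peak numbers fall out of the same formula: transfers per second times bus width. The module name encodes the transfer rate (DDR3-1600 means 1,600 MT/s), and the standard DDR bus is 64 bits, or 8 bytes, wide. A quick Python sketch:

```python
# Theoretical peak bandwidth of a single DDR channel:
# mega-transfers/sec (from the module name) times bus width in bytes.
def peak_gbps(mega_transfers: int, bus_bytes: int = 8) -> float:
    """Peak single-channel bandwidth in GBps (64-bit DDR bus assumed)."""
    return mega_transfers * bus_bytes / 1000

for name, rate in [("DDR-200", 200), ("DDR2-400", 400),
                   ("DDR3-1600", 1600), ("DDR4-2400", 2400)]:
    print(f"{name}: {peak_gbps(rate):.1f}GBps peak")
```

Multiply by the channel or interleave count to get the system-level figures above (e.g. 4 × 19.2GBps ≈ 60-77GBps for a 4-way DDR4 system).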

When you look at this compared with the Intel numbers, the performance of integrated memory is pretty incredible:

  1. The 400GBps is basically 7x faster than a single DDR4 Xeon system with Haswell-E
  2. Compared with the 7GBps disk speeds, the SSDs are basically as fast as the main memory of DDR3 machines launched in 2010, so the disk is as fast as main memory was 10 years ago. What an achievement. It makes sense given this is all solid state now, but great work.

Shrinking Ratio of Memory to Disk

  1. For a 2010-era machine, the disk read was about 100MBps while memory was running at 12.8GBps, so a 128x difference
  2. For the 2021 MacBook Pro with M1 Max, the disk read is 7GBps and the memory is running at 400GBps, so about 57x. You can see that the conversion to solid state definitely helps reduce the difference. The implication is that you can get away with slightly less memory, since the disk is relatively faster.
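The two ratios can be checked in a couple of lines of Python, using the figures quoted above:

```python
# Memory-to-disk bandwidth ratio for the two systems discussed above;
# the bandwidth figures are the ones quoted in the text.
systems = {
    "2010-era laptop": {"disk_gbps": 0.1, "mem_gbps": 12.8},
    "2021 MacBook Pro M1 Max": {"disk_gbps": 7.0, "mem_gbps": 400.0},
}

for name, s in systems.items():
    ratio = s["mem_gbps"] / s["disk_gbps"]
    print(f"{name}: memory is {ratio:.0f}x faster than disk")
```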

And, yes, it does matter

When you look at actual application benchmarks, you don’t see anything quite as dramatic, but you are getting 50-100% improvements on things like Geekbench or Handbrake.
