NVMe will overhaul the storage industry – again – with Dr. Jay Metz (@drjmetz)

Summary:

The enterprise storage market has seen significant change over the last several years, due to the availability and implementation of low-cost flash memory. But the use of memory as a storage device is only half the story – the other half will come with the use of the NVMe protocol, which allows memory-based storage to be accessed directly by system CPUs. The changes to the entire industry are likely to be enormous. Dr. Jay Metz (@drjmetz) has worked with NVMe for several years and is on the NVMe standards board. Last week he took a ridecast with Marc Farley (@GoFarley) to explain what NVMe is and why it is going to make such profound differences to data center infrastructures, including its role as an enabler of true software-defined storage that has the ability to adjust its operating parameters in response to real-time conditions.

Transcript:

MF: Hi, this is a technology ridecast. I’m Marc Farley, and our special guest this afternoon is Dr. J. Metz. How’re you doing, Jay?

JM: Hey, how are you doing Marc?

MF: What’s interesting is the topic of NVMe, something that’s near and dear to you because you’re working on it.

JM: I am working on it, and I’m spending a lot of time working on NVMe.

MF: You are also on the standards board for NVMe.

JM: NVM Express is a de facto standards group, and I am on the board for that. It’s also the name of the technology – NVM Express is both the technology and the group that’s creating it.
So, NVM Express is an alternative to SCSI as a storage protocol for a host to access its storage devices. And that storage could be a memory type of system or an NVMe-based flash device. It’s basically the nature of the relationship between a host CPU and its corresponding memory storage.

MF: Oh, CPU to memory.

JM: Yeah, CPU to memory. You wind up eliminating a lot of the abstraction layers that exist in traditional types of storage, so instead of going through adapters you can actually use the shared memory space of a PCIe connection, for example.

MF: What kind of drivers do you have with NVMe?

JM: Well, there are inbox drivers for all major modern operating systems. There is a difference, however, between a local situation of NVM Express, or NVMe, and a remote one. The drivers for local access, which uses PCIe, have been around for quite a while; the drivers for NVMe over Fabrics – which is how you do remote access to an NVMe device – are in development as of this recording.
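
Those inbox drivers also expose a pass-through interface, which is the easiest way to see the protocol in action. Below is a minimal sketch that issues an Identify Controller admin command to a local device through the Linux driver’s ioctl; the device path /dev/nvme0 and the bare-bones error handling are assumptions for illustration.

```c
/* Minimal sketch: issue an Identify Controller admin command to a local
 * NVMe device via the Linux inbox driver's pass-through ioctl.
 * Assumes /dev/nvme0 exists and the process has permission to open it. */
#include <fcntl.h>
#include <linux/nvme_ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t data[4096];                  /* Identify data is 4 KiB      */
    struct nvme_admin_cmd cmd;
    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode   = 0x06;                 /* Identify                    */
    cmd.addr     = (uint64_t)(uintptr_t)data;
    cmd.data_len = sizeof(data);
    cmd.cdw10    = 1;                    /* CNS 1: Identify Controller  */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) { perror("ioctl"); return 1; }

    /* Model number occupies bytes 24-63 of the Identify Controller data */
    printf("model: %.40s\n", (char *)data + 24);
    close(fd);
    return 0;
}
```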

MF: So what are the big differences between local and remote? Is it basically timeout values or…?

JM: No, actually. The thing is that NVMe on PCIe relies very heavily on the PCIe transport to have that shared memory space – it’s a memory-mapped architecture. The trouble is that once you start doing things remotely, you can’t map the memory anymore. So what NVM Express over Fabrics does is allow you to use transport-agnostic delivery systems, where you can leverage those kinds of error recovery and still have the efficiencies of a multi-queued system. The way the relationship works between a host CPU and its corresponding storage is that you establish queue pairs – you have a submission queue and you have a completion queue. Unlike a SCSI environment, which is an acknowledge/response kind of situation, you can actually have multiple queues per host connected to multiple different storage devices.
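
To make the queue-pair idea concrete, here is a conceptual sketch in C. It is not the on-the-wire NVMe format – real submission entries are 64 bytes, completion entries are 16 bytes, and doorbell registers signal the device – but it shows the pairing, and the command identifier that matches a completion back to its submission.

```c
#include <stdint.h>

#define QUEUE_DEPTH 64

/* Trimmed-down entries; real NVMe entries carry far more fields. */
struct sub_entry  { uint16_t cid; uint8_t opcode; uint64_t buf; };
struct comp_entry { uint16_t cid; uint16_t status; };

struct queue_pair {
    struct sub_entry  sq[QUEUE_DEPTH];  /* host writes commands here  */
    struct comp_entry cq[QUEUE_DEPTH];  /* device posts results here  */
    uint16_t sq_tail;                   /* next free submission slot  */
    uint16_t cq_head;                   /* next completion to consume */
};

/* Enqueue a command; a real driver would then ring the SQ tail doorbell.
 * The returned command ID is later matched against comp_entry.cid. */
static uint16_t submit(struct queue_pair *qp, uint8_t opcode, uint64_t buf)
{
    uint16_t cid = qp->sq_tail;
    qp->sq[cid] = (struct sub_entry){ .cid = cid, .opcode = opcode, .buf = buf };
    qp->sq_tail = (qp->sq_tail + 1) % QUEUE_DEPTH;
    return cid;
}
```

Because each CPU core can own a pair like this, cores submit and complete I/O without contending on one shared queue – the key departure from the single-queue SCSI model Jay describes.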

MF: How far up the stack does NVMe access extend – for instance, is it possible for an application to have its own NVMe queue, or is that restricted to the OS, the container, or something even lower level?

JM: Well, right now the NVMe group is working on extensions into virtual machines and containers so that you can get additional bypassing of those stack systems, so you have more of a direct connection into the media itself. As of this moment, applications haven’t yet been rewritten to take advantage of multi-queue threading, because there was never really any reason to in a SCSI-based system.

MF: I would think database systems could really rock with their own dedicated NVMe – I don’t know what you call it – a stack connector?

JM: Yeah, as a matter of fact there’s a set of APIs that can be connected to from user space. A couple of vendors at Flash Memory Summit last week demonstrated that they’ve done some amazing things with databases, accessing a key-value cache inside a little lightweight VM in kernel space, with amazing performance and flexibility.
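
The vendor APIs Jay mentions aren’t spelled out here, so as a toy illustration only: one way a key-value cache can sidestep the filesystem and block layers is to map a key directly to a logical block address on a raw namespace. The hash, block size, and pread() call below are stand-ins – a real user-space implementation would issue the read on its own NVMe queue pair instead of going through the kernel.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define BLOCK   4096ULL        /* one value per 4 KiB block (assumption)  */
#define NBLOCKS (1ULL << 20)   /* toy 4 GiB namespace (assumption)        */

/* FNV-1a hash of the key picks the block that holds its value. */
static uint64_t key_to_lba(const char *key)
{
    uint64_t h = 1469598103934665603ULL;
    while (*key) { h ^= (uint8_t)*key++; h *= 1099511628211ULL; }
    return h % NBLOCKS;
}

/* Fetch the 4 KiB value for a key straight from the raw device. */
int kv_get(int fd, const char *key, void *val)
{
    uint64_t lba = key_to_lba(key);
    return pread(fd, val, BLOCK, lba * BLOCK) == (ssize_t)BLOCK ? 0 : -1;
}
```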

MF: Interesting, interesting. Yeah, so you know I’m a storage guy – I like to think about storage. Do you see storage vendors using NVMe between the controllers and the devices inside their subsystems?

JM: Yes, as a matter of fact, we’ve already seen that there are a couple of vendors out there right now who have an NVMe back end and have done a translation to tie into traditional SCSI-based systems on the front end. They’ve got this enclosed NVMe system that they then present out to hosts – and that’s been available in the market for a while. But the queuing stops in the back end, so you’re getting kind of a hybrid approach of protocols. If you really want to take advantage of the drivers inside of a host, you really need to be able to do NVMe all the way through.

MF: OK, so one last area of questioning, and that has to do with instrumentation, metrics, telemetry and whatnot. Is any of that being built into NVMe?

JM: Yes, as a matter of fact, there’s a lot of work going on right now. What the NVMe group is doing is embedding that into the protocol, so that you can have better communication between the host and a smart target, with better metrics and telemetry data coming back from the devices built into the protocol. One of the things about software-defined storage as a concept, to me – and this is my opinion here, not anybody else’s – is that until you actually have the ability for the software to read the information coming from these devices, and that includes in-server storage as well as storage outside the server, your so-called definition is really just implementation.
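
Some of that device telemetry is already reachable today through the standard SMART / Health Information log page. As a minimal sketch – again assuming /dev/nvme0 and trimming error handling – a host can pull it with a Get Log Page admin command:

```c
#include <fcntl.h>
#include <linux/nvme_ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint8_t log[512];                     /* SMART log page is 512 bytes */
    struct nvme_admin_cmd cmd;
    memset(&cmd, 0, sizeof(cmd));
    cmd.opcode   = 0x02;                  /* Get Log Page                */
    cmd.nsid     = 0xffffffff;            /* controller-wide scope       */
    cmd.addr     = (uint64_t)(uintptr_t)log;
    cmd.data_len = sizeof(log);
    /* cdw10: zero-based dword count (low bits) in 31:16, log ID 0x02 */
    cmd.cdw10    = ((sizeof(log) / 4 - 1) << 16) | 0x02;

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) { perror("ioctl"); return 1; }

    /* Composite temperature: bytes 1-2, little-endian, in Kelvin;
     * percentage used: byte 5. */
    unsigned temp_k = log[1] | (log[2] << 8);
    printf("temperature: %u K, percent used: %u%%\n", temp_k, log[5]);
    close(fd);
    return 0;
}
```

Feeding readings like these back into a provisioning layer is exactly the loop that would let software-defined storage adjust itself to real-time conditions.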

MF: We’ve talked about intelligent storage for a long time and it’s always been an oxymoron.

JM: Yeah, I’ve always found that the words we use for storage have been kind of problematic. For example, the last thing you want storage to be is disruptive.

MF: Hey, this is great. Thanks for coming along.
JM: Oh yeah! Thanks for driving around San Francisco in the middle of the day.