A little over a year ago, DataCore Software's late Chief Scientist, Ziya Aral, released a groundbreaking piece of technology he called adaptive parallel I/O. It showed the way to alleviate the RAW I/O congestion that causes applications, especially virtual machines running in hypervisor environments, to run slowly.

Demonstrations of adaptive parallel I/O's effectiveness in reducing latency and boosting the performance of VMs exposed the silliness of arguments by leading hypervisor vendors that slow storage was to blame for poor VM performance. Storage was not the problem; the decreasing rate at which I/Os could be placed onto the I/O bus (RAW I/O speed) was the problem.

The problem is that hypervisor vendors really don't seem to want to place blame where it belongs -- with hypervisors and how they use logical cores in multi-core processors. In simpler times, the error of such an assertion (that storage was responsible for application performance) could be exposed just by looking at queue depths on the hosting server. If the queue depth was deep, then slow storage I/O was to blame. Conversely, if queue depths were shallow, as they typically are in the hypervisor computing settings we've seen, then the problem lay elsewhere, as the sketch below illustrates.
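For readers who want to run the diagnostic themselves, here is a minimal sketch of that queue-depth check on Linux, which exposes per-device in-flight I/O counts in sysfs. The device name ("sda") and the depth threshold of 32 are illustrative assumptions, not canonical values:

```python
#!/usr/bin/env python3
"""Rough queue-depth check: a deep queue points at slow storage,
a shallow queue points at the host-side I/O path instead."""
from pathlib import Path

def inflight_ios(device: str) -> int:
    """Return the number of in-flight I/Os for a block device.
    Linux exposes this in /sys/block/<dev>/inflight as 'reads writes'."""
    reads, writes = Path(f"/sys/block/{device}/inflight").read_text().split()
    return int(reads) + int(writes)

if __name__ == "__main__":
    depth = inflight_ios("sda")  # device name is an example
    if depth > 32:               # threshold is illustrative, not canonical
        print(f"queue depth {depth}: storage looks like the bottleneck")
    else:
        print(f"queue depth {depth}: look upstream -- the I/O path, not storage")
```

A single reading is noisy, of course; in practice you would sample this over an interval, which is effectively what iostat's average-queue-size column reports.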

Aral and DataCore showed that RAW I/O speeds were to blame, and they provided a software shim that converts unused logical CPU cores into a parallel I/O processing engine to resolve the problem. Here is our avatar, Barry M. Ferrite, reviewing the technology in its early days -- just as Star Wars Episode VII was about to be released.

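The core idea behind the shim is easy to picture in code. Below is a toy sketch, in Python on Linux, of fanning read requests out to worker threads pinned to spare logical cores. The core list, file path, and block size are assumptions made up for the example, and this illustrates only the concept of parallel I/O submission, not DataCore's actual engine:

```python
#!/usr/bin/env python3
"""Toy sketch of the parallel-I/O idea: instead of funneling reads
through one submission path, fan them out to worker threads running
on otherwise-idle logical cores."""
import os
import threading
from concurrent.futures import ThreadPoolExecutor

SPARE_CORES = [2, 3]  # logical cores assumed to be idle; illustrative only
BLOCK = 4096          # bytes per read; illustrative only

def pinned_pread(fd: int, offset: int, core: int) -> bytes:
    # Pin the executing worker thread to its spare core (Linux-only),
    # then issue the read. os.pread releases the GIL, so the reads
    # from different workers genuinely overlap in time.
    os.sched_setaffinity(threading.get_native_id(), {core})
    return os.pread(fd, BLOCK, offset)

if __name__ == "__main__":
    # /tmp/testfile is a stand-in path assumed to exist for the demo.
    fd = os.open("/tmp/testfile", os.O_RDONLY)
    with ThreadPoolExecutor(max_workers=len(SPARE_CORES)) as pool:
        futures = [
            pool.submit(pinned_pread, fd, i * BLOCK,
                        SPARE_CORES[i % len(SPARE_CORES)])
            for i in range(8)
        ]
        blocks = [f.result() for f in futures]
    os.close(fd)
    print(f"read {sum(len(b) for b in blocks)} bytes "
          f"across {len(SPARE_CORES)} spare cores")
```

A real engine would do this far below the application layer, inside the I/O stack, with far less overhead per request; the point of the sketch is simply that idle logical cores can be put to work submitting I/O in parallel rather than leaving one serialized path to congest.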
Since the initial release of Adaptive Parallel I/O technology, DataCore has steadily improved its results as measured by the Storage Performance Council, reaching millions of I/Os per second in SPC benchmarks on commodity servers from Lenovo and other manufacturers.

So, why isn't adaptive parallel I/O part of software-defined storage?