Eliminating the VDI price/performance bottleneck

Deploying a VDI solution can be demanding and expensive, with storage in particular a major potential hurdle, argue Juan Mulford and Stefan Ferrari of AccelStor. Industry-validated, cost-effective all-flash storage, however, can enable businesses across all verticals to adopt VDI

Virtual Desktop Infrastructure (VDI) allows IT organisations to simplify and automate the management of thousands of users, securely delivering desktop-as-a-service from a central location. Adoption of VDI is highest in industries that combine heavy regulation with a distributed workforce or many remote offices (healthcare, banking, education, retail, etc.).

When effectively deployed, VDI delivers high returns on investment, reducing operating costs and closing security gaps through infrastructure consolidation, while also adding flexibility to a workforce now empowered to work from anywhere.

Because VDI touches so many users, it directly affects the productivity and bottom line of the organisation deploying it. This high level of visibility puts immense pressure on IT to deliver performance to ensure a positive user experience, and this added pressure is the single biggest roadblock for VDI adoption. Storage is widely regarded as the biggest hurdle to overcome from both a cost and performance perspective.

In order to deliver a user experience on par with users having their own desktop, the VDI infrastructure stack must service bursts of very high read/write I/O while maintaining ultra-low latency for all users at once. Failure to do so leads to the death blow of many VDI projects: users asking for their desktops back.

According to IDC, most failed VDI deployments are blamed on storage, as bursts of high I/O requests increase latency for users across the VDI platform. It is unsurprising, then, that storage is often the most expensive and carefully selected hardware in successful VDI deployments.

Traditional spindle-based (spinning disk) and even hybrid (spindle mixed with flash) storage systems have failed to meet the demands of VDI because they depend on a read/write cache to service low-latency I/O requests. Once the cache is overwhelmed, latency increases, users complain, and the project dies.

The storage demands of VDI are so high that even all-flash storage systems that are dependent on NVMe or NVRAM caching for low-latency I/O can fail to deliver a user experience that leads to a successful VDI implementation.

At AccelStor there is no cache. Only metadata is temporarily stored in global memory, so all write blocks are direct to disk - meaning all reads come directly from disk as well. This cache-less design means all I/O goes straight to/from the SSD media at near-cache speeds, significantly increasing performance, lowering latency, and providing better data resiliency than traditional cache-based systems.

RAID was designed in the late 1980s to provide resiliency and increase performance for spinning-HDD-based storage systems. It was so successful at the time that it remains the foundation of many so-called "state-of-the-art" all-flash storage systems today. These 'RAID optimised for flash' systems do two things that are actually far from optimised for flash:
• Write data to the SSDs randomly
• Write data more than once

The AccelStor approach is very different. Born from an in-depth research project into the adverse effects of RAID on SSD media, AccelStor took a step back and uncovered a better way to use SSDs effectively.

At the heart of every AccelStor system is a cache-less, RAID-less technology called FlexiRemap. The culmination of 10 years of research, two PhDs, and over 45 patents and counting, FlexiRemap harnesses the true power of NAND memory with game-changing write I/O performance. With major funding from tech giants like Toshiba Memory and wide recognition such as best-in-show at FMS 2016, this technology really is revolutionary.

What is FlexiRemap?
FlexiRemap got its name because it remaps incoming write I/Os into a single sequential write stream across every SSD in the system. To do this, it breaks the data down into 4K pages, a size that aligns with the native page size of most SSDs.
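The general idea of remapping random writes into a sequential, striped log can be sketched as follows. This is a minimal conceptual illustration of log-structured remapping in general, not AccelStor's actual FlexiRemap implementation; the class and variable names are invented for the example.

```python
# Conceptual sketch of log-structured remapping (illustrative only; not
# AccelStor's FlexiRemap implementation). Writes to random logical addresses
# are appended sequentially across a set of SSDs, and a remap table records
# where each logical block now lives.

PAGE_SIZE = 4096  # 4K pages align with the native page size of most NAND flash

class RemapLayer:
    def __init__(self, num_ssds):
        self.num_ssds = num_ssds
        self.next_slot = 0   # next sequential position in the append-only log
        self.remap = {}      # logical block number -> (ssd index, offset)

    def write(self, logical_block, data):
        assert len(data) == PAGE_SIZE
        # Stripe the sequential log round-robin across every SSD, so each
        # drive sees only sequential appends regardless of the logical address.
        ssd = self.next_slot % self.num_ssds
        offset = self.next_slot // self.num_ssds
        self.remap[logical_block] = (ssd, offset)
        self.next_slot += 1
        return ssd, offset

layer = RemapLayer(num_ssds=4)
# Writes to scattered logical addresses...
placements = [layer.write(lba, bytes(PAGE_SIZE)) for lba in (9007, 12, 5150)]
# ...land in strictly sequential log positions striped across the SSDs.
print(placements)  # [(0, 0), (1, 0), (2, 0)]
```

Reads simply look up the remap table, which is why (as described above) only metadata needs to live in memory while data goes straight to and from the SSDs.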

Because the writes are sequential and writing in a way that is designed for NAND memory, they now occur much faster than in a RAID-based system (700,000 sustained write IOPS in a single appliance); there is no need to stage data to cache first, enabling a cache-less design.

Additionally, because the writes are spread across every SSD in the system, read I/O gets serviced faster as well (1.1M 4K mixed read/write IOPS).

Many RAID-based appliances use expensive SAS SSDs to support a shared storage architecture with dual controllers. However, RAID is unable to exploit the potential of SAS SSDs, on average achieving only 10% of the potential IOPS of each drive. FlexiRemap, combined with a 'shared nothing' cluster architecture, is able to outperform these systems using SATA SSDs at 90% of their potential. This means a lower TCO and higher ROI. Essentially, with FlexiRemap you get what you pay for.
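The effect of those utilisation figures can be made concrete with a little arithmetic. The per-drive IOPS ratings below are assumed purely for illustration (they are not vendor specifications); the point is that delivered performance is dominated by utilisation, not by the raw speed of the drives.

```python
# Illustrative arithmetic only: the per-drive IOPS ratings are assumptions
# made up for this example, not measured or published figures. Utilisation
# percentages (10% vs 90%) come from the article's claim above.

def delivered_iops(per_drive_iops, utilisation_pct, num_drives):
    # Integer arithmetic keeps the example exact.
    return per_drive_iops * utilisation_pct * num_drives // 100

# 24 expensive SAS SSDs behind RAID, at 10% utilisation:
raid_sas = delivered_iops(per_drive_iops=200_000, utilisation_pct=10, num_drives=24)

# 24 cheaper SATA SSDs behind a remapping layer, at 90% utilisation:
flexi_sata = delivered_iops(per_drive_iops=90_000, utilisation_pct=90, num_drives=24)

print(raid_sas)    # 480000 IOPS from the more expensive drives
print(flexi_sata)  # 1944000 IOPS from the cheaper drives
```

Under these assumed ratings, the cheaper SATA configuration delivers roughly four times the IOPS of the SAS configuration, which is the TCO argument the paragraph above is making.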

Finally, unlike spinning hard drives, SSDs wear out at a predictable rate based on the number of writes. So, unlike RAID-based systems, FlexiRemap technology writes data once and only once, increasing the longevity of each SSD by over 300% and keeping operating and support costs low as well.

FlexiRemap technology therefore unlocks the true potential of flash, allowing AccelStor customers to do more with less across all types of high I/O, low-latency applications like VDI.

AccelStor NeoSapphire AFAs are pre-configured and simple to manage. The web-based GUI allows users to create volumes easily and present them to hosts. There is no cache to manage, the system just works.

The NeoSapphire also provides full integration with VMware environments: VAAI block support, SRM, Horizon View, and the VMware vSphere plugin integration tools people have come to expect are all standard.

AccelStor runs a "zero licence" operation, meaning that all features such as Clone, Snapshot, Replication, Deduplication and more are included out of the box, free of charge. As well as these features, AccelStor offers helpdesk support for the lifetime of the system, at no additional cost, minimising TCO and ensuring ease of budgetary administration.

The AccelStor NeoSapphire AFAs come equipped with FlexiDedupe, a deduplication algorithm that provides up to 10:1 data reduction with a minimal impact to performance in VDI environments. With the use of linked clones and FlexiDedupe, storage capacity efficiency has been shown to be as high as 98% (see chart).
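The reason deduplication pays off so dramatically in VDI is that hundreds of cloned desktops share mostly identical OS blocks. The sketch below shows the general technique of content-addressed block deduplication; it is an illustration of the principle only, not AccelStor's FlexiDedupe algorithm, and the class and data are invented for the example.

```python
# Minimal sketch of content-addressed block deduplication (the general
# technique, not AccelStor's FlexiDedupe implementation). Blocks with
# identical content are stored once and referenced by their hash.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # sha256 digest -> block data (stored exactly once)
        self.refs = {}     # logical address -> digest

    def write(self, addr, block):
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block)  # store only if unseen
        self.refs[addr] = digest

    def ratio(self):
        # Logical blocks written vs unique blocks physically stored.
        return len(self.refs) / len(self.blocks)

store = DedupStore()
# A hundred VDI clones writing the same OS block...
for addr in range(100):
    store.write(addr, b"windows-system-block")
store.write(100, b"user-profile-block")  # ...plus one unique user block

print(f"{store.ratio():.1f}:1")  # 50.5:1 for this toy workload
```

Real reduction ratios depend on how much the desktops actually share, which is why the article quotes "up to 10:1" for typical VDI data rather than a fixed figure.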

At full desktop density on a NeoSapphire AFA, AccelStor's FlexiRemap technology, combined with thin provisioning and deduplication, reduces the storage cost to as low as US$30 per power-desktop across the entire NeoSapphire line.

The NeoSapphire product range has been tested and validated with Login VSI, the industry-standard tool for load-testing the performance and scalability of VDI environments.

In a test using 500 linked clones on VMware vSphere and Horizon View, the storage was put through VDI's heaviest and most resource-intensive workloads. The results showed that no matter the workload, the user experience remained unaffected: desktop application response times held steady, with average latencies well below the coveted 0.6 ms mark across the board (see chart).

When storage performance really matters and all-flash storage is under consideration for VDI, a system built on FlexiRemap technology will deliver the best performance for the price, bar none. With the ability to use enterprise SATA SSDs that last three times longer and provide consistently low latency, these are systems tuned and tested to serve the needs of VDI. Visit the website below for a free consultation and a no-obligation POC system to see how AccelStor's FlexiRemap-powered NeoSapphire products can eliminate your VDI challenges today.
More info: www.accelstor.com