Disruptive innovations: How storage is changing in the enterprise
Scott H. Davis, CTO, Infinio

Welcome!
Scott H. Davis:
• CTO, Infinio
• 25+ year IT veteran
• Former VMware EUC CTO & Chief Data Center/Storage Architect
• Founder, President, and CTO of Virtual Iron
• 16 patents in virtualization, storage, clustering, and EUC technologies
www.TalkingTechwithSHD.com | @shd_9

Agenda
• Storage overview
• Technology disruptions
• Storage landscape
  • All-flash arrays
  • Hybrid arrays
  • Hyper-converged infrastructure / SDS
  • Decoupled infrastructure (capacity and performance)
• Infinio's storage acceleration platform

Storage circa 2004
Traditional storage array. Innovations of the era:
• Unified block and file
• Storage tiering

Storage in 2015
[Diagram: four emerging architectures]
• Storage-side processing: all-flash arrays (an all-SSD array behind the hypervisor); hybrid arrays (controller software with an SSD write log / read cache in front of an HDD disk pool)
• Server-side processing: hyper-converged / software-defined (a controller VM or software managing local SSDs and HDDs in each server); decoupled infrastructure (a server-side I/O optimization layer in front of existing HDD capacity)

Technology disruptions

Disruption: Desktop virtualization
• Extreme workload consolidation is necessary for the economics to work
• More workloads on fewer drives
• Workload mobility
• Blender effect: a mix of read/write ratios
• Impact of client OS-specific caching
• Impact of synchronized peaks (e.g., boot storms, login storms, virus scans)

Disruption: Hardware advances
[Chart: DRAM, flash, hard drives, and networking plotted by IOPS vs. latency]

The complexities of flash
• Reads vs. writes
• Write amplification
• Garbage collection
• Endurance / wear-out
• Consumer grade vs. enterprise grade
• Applicability of traditional RAID / filesystems
• Tiering within the storage system

Disruption: Hardware advances for performance
• Memory channel storage, e.g., NVDIMM
• Speed comparison: memory ~5 μsec; PCIe flash ~50 μsec; SAS SSD ~300 μsec
• Non-volatile characteristics of classic flash; an interface challenge for the OS/hypervisor
• NVMe (e.g., PCIe solid-state drives) replaces the AHCI stack at roughly 1/3 the CPU utilization

Disruption: Hardware advances for capacity
• Capacity-optimized drives: shingled magnetic recording (SMR)
  • e.g., Seagate's 8 TB SMR drive at $0.03/GB (1/500th that of flash!)
  • Non-symmetric read/write characteristics
• Cloud storage
  • Network speeds are making it possible
  • Can be as inexpensive as $0.03/GB/month

Disruption: Hardware advances in networking
• Networking speeds continue to improve: 10GbE is typical; 40GbE and 100GbE are coming soon
• Inter-node communication clocked at 50 μsec, including the TCP/IP stack
• More predictable speed than flash
• Enables scale-out storage architectures (a latency sketch follows below)
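The speed-comparison figures above (memory at ~5 μsec, PCIe flash at ~50 μsec, SAS SSDs at ~300 μsec, and ~50 μsec inter-node round trips) are why RAM on a neighboring host can compete with local flash. A minimal back-of-the-envelope sketch in Python, using only the ballpark numbers from these slides (illustrative assumptions, not measurements):

```python
# Rough latency budget (in microseconds) for a cached block read, using the
# ballpark figures quoted on the hardware-advances slides.
# These are illustrative assumptions, not benchmark results.
LATENCY_US = {
    "local DRAM": 5,            # memory channel storage
    "local PCIe flash": 50,
    "local SAS SSD": 300,
    "network round trip": 50,   # inter-node hop, including the TCP/IP stack
}

def remote_ram_hit_us() -> int:
    """Estimated cost of fetching a block cached in a peer host's RAM."""
    return LATENCY_US["network round trip"] + LATENCY_US["local DRAM"]

if __name__ == "__main__":
    print(f"remote RAM hit : ~{remote_ram_hit_us()} usec")
    print(f"local SAS SSD  : ~{LATENCY_US['local SAS SSD']} usec")
    # ~55 usec vs. ~300 usec: a remote RAM hit can beat a local SSD read,
    # which is the argument for scale-out, RAM-based performance layers.
```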
Disruption: Scale-out application architecture
Scarcity vs. abundance
[Diagram: workload scaled out across Node A, Node B, and Node C]
Consequences:
• Object storage
• Replicas instead of updating in place
• Minimized synchronization
• I/O performance should scale out with the application

The storage landscape

Storage in 2015
[Diagram repeated: storage-side processing (all-flash arrays, hybrid arrays) vs. server-side processing (hyper-converged / software-defined, decoupled architecture)]

Storage in 2015: All-flash arrays
• The "Porsche" of storage array performance
• Consistently high performance for all connected applications
• Comes at a steep price premium: all drives are flash, plus the proprietary upcharge (Dell server SSD = $3K; EMC storage SSD = $15K)

Storage in 2015: Hybrid arrays
• Better price/performance calculation than all-flash
• Most market share is held by existing vendors; the next "status quo"?
• Buyer beware: SSDs added as a tier in a legacy array vs. a purpose-built hybrid array (handling of flash and the architecture of the write log / read cache)

Storage in 2015: Hyper-converged (software-defined)
• Integrated building block for an entirely new datacenter architecture
• Commitment to scale everything together
• Inefficient with storage space because of data protection schemes
• More appropriate for greenfield (new) deployments because of new management tools and processes
• Typical in ROBO and SMB environments

Storage in 2015: Decoupled architecture
• Splits storage into a capacity layer and a performance layer
• The performance layer benefits from:
  • Hyper-locality: microseconds vs. milliseconds
  • Commodity pricing (Dell server SSD = $3K; EMC storage SSD = $15K)
• The capacity layer can be any storage platform, so existing tools and reporting are preserved

Infinio's storage acceleration platform

Infinio's storage acceleration platform
Software-based performance layer, optimized for RAM:
• Globally deduplicated
• Operationally transparent
• Simple to evaluate, implement, and use
"We noticed the results almost instantly, with a visible reduction of storage latency on the VDI desktops and decreased workload on our filers." --Nathan Manzi, Systems Engineer at Minara Resources

Infinio architecture
• No changes to guest VMs
• One accelerator VM and 8 GB of RAM per ESX host; one console VM per vCenter
• A kernel module on each host forms the layer for storage acceleration
• One solution for virtual servers and virtual desktops
• Communication runs over the vMotion network
"By better utilizing the existing infrastructure, I/O optimization can improve performance, and help control costs." --Gartner Hype Cycle 2014

Infinio's content-based architecture
• Global: all nodes share a single address space
• Deduplicated: inline deduplication across VMs and hosts
[Diagram: content fingerprints (e.g., 1A23, 2QQ3, 101H, 1DS4, ...) shared across hosts]
Using a 5:1 dedupe rate, an 8-node Infinio cluster starts with an effective cache size of 320 GB and can grow much larger.
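A minimal sketch of the content-addressed, deduplicated cache idea described on the slide above: blocks are keyed by a fingerprint of their contents, so identical blocks read on behalf of different VMs or hosts occupy a single cache entry, and effective capacity is roughly physical RAM times the dedupe ratio. The class and method names are invented for illustration; this is not Infinio's implementation.

```python
import hashlib

class ContentAddressedCache:
    """Illustrative content-addressed (deduplicated) block cache."""

    def __init__(self):
        self._blocks = {}   # fingerprint -> block data (stored once)
        self._refs = {}     # fingerprint -> number of logical references

    @staticmethod
    def fingerprint(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def put(self, block: bytes) -> str:
        key = self.fingerprint(block)
        if key not in self._blocks:     # only new content consumes RAM
            self._blocks[key] = block
        self._refs[key] = self._refs.get(key, 0) + 1
        return key                      # callers index by content, not by location

    def get(self, key: str):
        return self._blocks.get(key)


# Effective cache size using the slide's figures:
# 8 hosts x 8 GB of accelerator RAM, deduplicated at 5:1.
physical_gb = 8 * 8
print(physical_gb * 5)   # 320 (GB of effective cache)
```

Two VDI desktops booting from the same gold image hash to the same fingerprints, so the second boot is served from cache without consuming additional RAM, which is what makes the 5:1 ratio plausible for VDI workloads.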
Infinio's distributed cache architecture
[Diagram: a single global content address space (1-99) is range-partitioned across the cluster; Node A owns addresses 1-33, Node B owns 34-66, and Node C owns 67-99, while every host can look up any address] (a short partitioning sketch appears at the end of this deck)

Infinio's global deduplication in action
General enterprise mix:
• Common applications
• Application data
• OS files
• Boot images

Infinio's global deduplication in action
VDI:
• Gold images
• Common application executables
• Common user files
DevOps:
• Source code for slightly different versions
• Test automation on the same code
Exemplar data:
• Customer National Specialty Alloys saw sustained offload rates of 61%
• A large consumer goods company saw build times drop from 2 hours to 15 minutes

Operational transparency
Existing operations continue unchanged alongside the Infinio VM:
• Datastore configuration
• Snapshots and replication
• Backup scripts
• Patching
• vMotion, DRS, and maintenance mode
"Installing Infinio was fast and easy. You install it live, and you can start or stop accelerating without affecting production. There's no rebooting either." --Doug Soltesz, Vice President and CIO, Budd Van Lines

Simple to evaluate, deploy, and manage
• Install to results in 30 minutes
• Change the cache size
• Accelerate a new datastore

What's new in Infinio version 2
• Extension of the award-winning storage acceleration platform into SAN environments, supporting Fibre Channel and iSCSI
• VM-level statistics for a granular view into performance; choose specific applications to accelerate*
• Easily see performance improvements for up to two weeks of history
• Note the benefit of deduplication via the effective cache size
• Continued operational transparency, with no changes to storage tools, backup, or reporting scripts; complete integration with VMware VAAI
*v2.1

Learn more about Infinio today
• Accelerate response time by 10x
• Reduce reads from storage by 65-85%
• Achieve a better user experience from applications
• Extend the life of existing storage systems
VISIT US AT BOOTH #3
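The distributed-cache slide earlier partitions a single global content address space across the nodes of the cluster (Node A: 1-33, Node B: 34-66, Node C: 67-99). A minimal sketch of that range-partitioning idea; the node names and ranges are taken from the diagram, but the code itself is illustrative, not Infinio's implementation.

```python
# Illustrative range partitioning of a global content address space,
# following the three-node layout on the distributed-cache slide
# (Node A owns 1-33, Node B owns 34-66, Node C owns 67-99).
# This is a sketch of the general idea, not Infinio's implementation.
PARTITIONS = [
    ("Node A", range(1, 34)),
    ("Node B", range(34, 67)),
    ("Node C", range(67, 100)),
]

def owner(address: int) -> str:
    """Return the node responsible for caching a given content address."""
    for node, addr_range in PARTITIONS:
        if address in addr_range:
            return node
    raise ValueError(f"address {address} is outside the global address space")

# Any host can compute the owner locally and fetch the block from that node
# over the (vMotion) network, so the cluster behaves as one shared cache.
assert owner(10) == "Node A"
assert owner(50) == "Node B"
assert owner(99) == "Node C"
```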