A storage area network (SAN) is a dedicated, high-speed network that provides access to block-level storage. SANs were adopted to improve application availability and performance by segregating storage traffic from the rest of the LAN.
SANs enable enterprises to more easily allocate and manage storage resources, achieving better efficiency. “Instead of having isolated storage capacities across different servers, you can share a pool of capacity across a bunch of different workloads and carve it up as you need. It’s easier to protect, it’s easier to manage,” says Scott Sinclair, senior analyst with Enterprise Strategy Group.
What is in a SAN?
A SAN consists of interconnected hosts, switches and storage devices. The components can be connected using a variety of protocols. Fibre Channel is the original transport protocol of choice. Another option is Fibre Channel over Ethernet (FCoE), which lets organizations move Fibre Channel traffic across existing high-speed Ethernet, converging storage and IP protocols onto a single infrastructure. Other options include Internet Small Computer System Interface (iSCSI), commonly used in small and midsize organizations, and InfiniBand, commonly used in high-performance computing environments.
Vendors offer entry-level and midrange SAN switches for rack settings, as well as high-end enterprise SAN directors for environments that require greater capacity and performance. Key vendors in the enterprise SAN market include Dell EMC, Hewlett Packard Enterprise, Hitachi, IBM, NetApp, and Pure Storage.
“A SAN consists of two tiers: The first tier – the storage-plumbing tier – provides connectivity between nodes in a network and transports device-oriented commands and status. At least one storage node must be connected to this network.
The second tier – the software tier – uses software to provide value-added services that operate over the first tier,” says research firm Gartner in its definition of SAN.
How is NAS different from a SAN?
SAN and network-attached storage (NAS) are both network-based storage solutions. A SAN typically uses Fibre Channel connectivity, while NAS typically ties into the network through a standard Ethernet connection. A SAN stores data at the block level, while NAS accesses data as files. To a client OS, a SAN typically appears as a disk and exists as its own separate network of storage devices, while NAS appears as a file server.
SAN is associated with structured workloads such as databases, while NAS is generally associated with unstructured data such as video and medical images. “Most organizations have both NAS and SAN deployed in some capacity, and often the decision is based on the workload or application,” Sinclair says.
What is unified storage?
Unified storage – also known as multiprotocol storage – grew out of the desire to stop procuring SAN and NAS as two separate storage platforms and instead combine block and file storage in one system. With unified storage, a single system can support Fibre Channel and iSCSI block storage as well as file protocols such as NFS and SMB. NetApp is generally credited with the development of unified storage, though many vendors offer multiprotocol options.
Today, the majority of midrange enterprise storage arrays tend to be multiprotocol, Sinclair says. “Instead of buying a box for SAN storage and a box for NAS storage, you can buy one box that supports all four protocols – it could be Fibre Channel, iSCSI, SMB, NFS, whatever you want,” he says.
“The same physical storage can be allocated to either NAS or SAN.”
What’s new with enterprise SANs?
Storage vendors continue to add features to improve scalability, manageability and efficiency. On the performance front, a key innovation is flash storage. Vendors offer hybrid arrays that combine spinning disks with flash drives, as well as all-flash SANs.
In the enterprise storage world, flash so far is making greater inroads in SAN environments because the structured data workloads in a SAN tend to be smaller and easier to migrate than massive unstructured NAS deployments. Flash is impacting both SAN and NAS environments, “but it’s predominantly on the SAN side first, and then it’s working its way to the NAS side,” Sinclair says.
Artificial intelligence is also influencing SAN product development. Vendors are looking to ease management by building artificial intelligence for IT operations (AIOps) capabilities into their monitoring and support toolsets. AIOps uses machine learning and analytics to help enterprises with tasks such as monitoring system logs, streamlining storage provisioning, troubleshooting congestion, and optimizing workload performance.
In its most recent Magic Quadrant for Primary Storage, Gartner includes AIOps features among the key storage capabilities to consider when choosing a platform for structured data workloads. AIOps capabilities can target operational needs, “such as cost optimization and capacity management, proactive support, workload simulation and placement, forecast growth rates, and/or asset management strategies,” Gartner writes.
Impact of hyperconverged infrastructure
While converged arrays and appliances blurred the lines between SAN and NAS, hyperconverged infrastructure (HCI) took the consolidation of storage options even further.
HCI combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability.
Hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard, off-the-shelf servers.
HCI can contain any type of storage – block, object, and file storage can be combined in a single platform, and multiple nodes can be clustered to create pools of shared storage capacity. The benefits of shared storage are resonating with enterprises, particularly as many modern applications rely on file and object storage, and the growth of unstructured data continues to outpace the growth of structured data. HCI isn’t a replacement for all SAN deployments, but enterprises may opt for HCI depending on the cost, scalability and performance requirements of certain workloads.
Consumption-based IT is a growing trend
Another trend impacting the evolution of traditional SAN storage is the movement toward consumption-based IT. Pay-per-use hardware models are designed to deliver cloud-like pricing structures for on-premises infrastructure. Hardware is deployed on site, and it’s essentially rented from vendors via a variable monthly subscription that’s based on hardware utilization.
Enterprises are looking for alternatives to buying equipment outright, and research firm IDC reports that 61% of enterprises plan to aggressively shift toward paying for infrastructure on a consumption basis. By 2024, half of data-center infrastructure will be consumed as a service, IDC predicts.
Uptake of consumption-based IT is strongest in storage rather than compute. Gartner estimates that by 2025, more than 70% of corporate, enterprise-grade storage capacity will be deployed as consumption-based offerings. That’s up significantly from less than 40% in 2021.
Dell’s Apex line and HPE’s GreenLake platform are examples of consumption-based IT, and both include options for procuring storage on a pay-per-use basis.
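To make the pay-per-use idea concrete, here is a minimal sketch of how such a metered billing model might be computed: the customer commits to a capacity floor billed at a base rate, and metered usage above the floor is billed at a higher on-demand rate. The rates, the 50 TB floor, and the on-demand premium are hypothetical figures for illustration, not any vendor's actual pricing.

```python
# Sketch of a consumption-based storage billing model.
# All rates and the commitment level are hypothetical, not
# real Dell Apex or HPE GreenLake pricing.

def monthly_charge(used_tb: float,
                   committed_tb: float = 50.0,    # hypothetical capacity floor
                   base_rate: float = 20.0,       # $/TB/month on committed capacity
                   on_demand_rate: float = 28.0   # $/TB/month above the commitment
                   ) -> float:
    """Return the month's charge in dollars for metered usage."""
    base = committed_tb * base_rate                # the floor is billed even if unused
    overage_tb = max(0.0, used_tb - committed_tb)  # metered usage above the commitment
    return base + overage_tb * on_demand_rate

if __name__ == "__main__":
    print(monthly_charge(30.0))  # under the floor: pay the commitment, 1000.0
    print(monthly_charge(65.0))  # 15 TB over: 1000 + 15 * 28 = 1420.0
```

Real consumption offerings layer on performance tiers, term-length discounts and minimums, but this commit-plus-metered-overage calculation is the core of the model.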
Dell’s Apex Data Storage Services, for example, offer enterprises a choice of three performance tiers of block and file storage. Subscriptions are available in one- or three-year terms, and capacity starts as low as 50 terabytes.