Cloud computing remains one of the hot topics of conversation in the market. While there is certainly a lot of hype, there are also some fundamental changes occurring in how organizations design their architectures and deliver their services. This move to cloud-based environments is forcing a re-examination of every aspect of the IT infrastructure, and it’s become painfully obvious that storage has come up short.
The cloud model demands an entirely new level of flexibility, and Ethernet SAN designs are one of the most promising areas of innovation. They leverage massively parallel Layer 2 Ethernet networking and off-the-shelf array hardware to deliver a scale-out, dynamic, and efficient architecture that is ideal for the cloud service delivery approach. By eliminating the complexity and cost of mainframe-era designs, Ethernet SANs combine the simplicity of direct-attached storage with the benefits of shared storage. This approach enables a single elastic tier of storage to support a wide variety of workload demands.
Take a look at cloud computing giants like Google and Amazon in the era of Big Data and you won’t find cloud stacks built on legacy storage technology. These players developed their own cloud-scale “operating systems” over the last decade to aggregate massive amounts of commodity hardware into elastic compute and storage farms. The cloud players rejected Fibre Channel and “rolled their own” storage systems based on the key assumptions that cloud infrastructure must be scale-out, dynamic, and efficient.
Traditional enterprise storage arrays use “scale-up” designs, with proprietary storage controllers driving daisy-chained shelves of drives. As deployments grow, the processors and disk connectivity become performance bottlenecks, forcing forklift upgrades to handle growing capacity. In contrast, cloud platforms use massively parallel “scale-out” architectures that combine off-the-shelf hardware and virtualisation to deliver maximum scalability and elasticity. No forklift upgrades are required as data volumes grow: capacity is added just in time and performance scales linearly.
Legacy Fibre Channel storage networks are static, with rigid data connections between every server, switch, and storage array. This level of complexity was acceptable when companies ran an 8-port SAN, but data growth is pushing many companies to an 80-port or 800-port SAN. Whenever storage is added or reconfigured, storage administrators are forced to manage multiple layers of complexity, including multi-pathing, port bonding, switch zoning, controller load balancing, and array management across multiple tiers for different workloads. That’s not a cloud, and it’s not even remotely elastic. Cloud applications are mobile and fluid, with the relationships between applications, servers, and storage in constant change. Cloud storage needs to be dynamic by default.
Lastly, cloud business models work because they aggressively lower IT operating expenses. To do this, they demand cost-efficient technologies that are simple to deploy and operate. The acquisition cost of Fibre Channel storage systems is often ten times higher than that of commodity systems, and the complexity of managing them fundamentally affects operating cost and agility. In contrast, cloud architectures assemble inexpensive off-the-shelf Ethernet switches and storage arrays in ways that minimise the operating costs of configuration and replacement.
The server industry has already completed the shift to scale-out architectures, but enterprise storage is still stuck in the mainframe networking era. As customers scramble to keep up with data growth of 50-70 percent per year, it’s apparent that storage has become the single largest impediment to achieving the benefits of virtualisation and cloud.
As more companies transition their storage networks to cloud architectures, they need to question their assumptions. If they want all the benefits of cloud, upgrading from Fibre Channel could be the key.
Tags: Cloud Storage, Ethernet Storage