Network Convergence: the holy grail or networking mythology?
It sounds like a tall order, but it’s hardly surprising that many CIOs are taking a long, hard look at their current network infrastructures and finding them wanting. Today, their teams are typically managing two or three parallel networks: they have a storage network built for reliability, guaranteed data integrity and non-blocking performance. And they have a data network whose characteristics include best-effort performance and unpredictable bandwidth, along with often-frustrating levels of complexity.
Tomorrow’s networking environment will consolidate user-application traffic and storage-data traffic onto a single, high-performance, highly available network that has the built-in intelligence to identify different traffic types and handle them appropriately, according to pre-defined rules. The benefits of the concept are clear in terms of time, cost, skills and procurement, not to mention reduced cabling complexity.
And although the path to getting there may at first seem steep, an incremental approach can ensure that the journey needn’t cause unnecessary upheaval for the IT team and the end-users that they serve. The days when the majority of computing power was in the data centre are behind us. Today, we have incredibly smart end points with lots of computing power that are remote, distributed and mobile. Information and applications are virtualized and can reside anywhere within the infrastructure we refer to as the cloud.
This shift dramatically increases the importance of communications throughout the network: because compute power and information storage are now distributed, you have in essence distributed the data centre, which must be reliable and robust enough to cater for user demands.
A very established and well-understood technology lies at the heart of the network - Ethernet. It is already the predominant network choice for connecting servers to each other on the corporate local area network (LAN) for the purpose of transporting user-application traffic; convergence proposes that the separate Fibre Channel network, which typically transports storage-data traffic around a storage area network (SAN), be shifted to Ethernet too – specifically, to an enhanced form of Ethernet known as Data Centre Bridging (DCB).
But in order for this kind of SAN/LAN convergence to work, DCB must offer the reliability and latency characteristics associated with Fibre Channel technology. That’s because storage networks require data to be delivered in sequence and intact - in industry parlance, a “lossless” infrastructure is required.
Ethernet, by contrast, falls short in this respect, taking instead a ‘best effort’ approach, where data is not necessarily delivered in the right order and where some packets may be dropped altogether due to network congestion.
In response to these challenges, the essence of DCB lies in higher transmission speeds, based on 10-Gigabit Ethernet (GbE) technologies, and enhancements to the underlying Ethernet specifications in order to replicate the reliability and class-of-service features seen in today’s SAN environments.
Today, standards bodies representing some of the world’s foremost suppliers of storage and networking technologies are working on specifications to increase the performance of existing Ethernet networks and to tunnel Fibre Channel traffic securely and efficiently through Ethernet infrastructures.
One of the most important of these is the Fibre Channel over Ethernet (or FCoE) standard, an encapsulation protocol that wraps Fibre Channel storage data into Ethernet frames, enabling it to be transported over a new lossless Ethernet medium. Developed by the T11 technical committee of the International Committee for Information Technology Standards (INCITS), it relies on flow control to recognise when a buffer is almost full and to request that the sender stop transmission until the buffer has emptied and the transmission can start again.
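The buffer-watermark flow control just described can be sketched in a few lines of Python. This is an illustrative model only: the class names, buffer sizes and thresholds are invented for the example, not taken from the FCoE or DCB specifications.

```python
# Sketch of lossless flow control: the receiver signals PAUSE when its
# buffer crosses a high-water mark and RESUME once it drains below a
# low-water mark, so no frame is ever dropped.

class LosslessReceiver:
    def __init__(self, capacity=16, high=12, low=4):
        self.buffer = []
        self.capacity = capacity
        self.high = high          # ask the sender to pause at this fill level
        self.low = low            # let the sender resume below this level
        self.paused = False

    def receive(self, frame):
        assert len(self.buffer) < self.capacity, "lossless guarantee violated"
        self.buffer.append(frame)
        if len(self.buffer) >= self.high:
            self.paused = True    # buffer almost full: request a pause

    def drain(self, n=1):
        del self.buffer[:n]
        if self.paused and len(self.buffer) < self.low:
            self.paused = False   # buffer has emptied: transmission restarts

def run(ticks, drain_every=2):
    """Sender transmits one frame per tick unless paused; the receiver
    drains one frame every `drain_every` ticks (it is the bottleneck)."""
    rx, sent = LosslessReceiver(), 0
    for t in range(ticks):
        if not rx.paused:         # a well-behaved sender honours the pause
            rx.receive(t)
            sent += 1
        if t % drain_every == 0:
            rx.drain()
    return sent, len(rx.buffer)

sent, backlog = run(200)
print(f"sent {sent} frames, {backlog} still buffered, none dropped")
```

Because the sender stops before the buffer can overflow, the assertion never fires: every frame accepted is either delivered or still queued, which is exactly the "lossless" property storage traffic demands.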
One of its major advantages is that FCoE will use the same FC drivers, switches and management applications already in use today. That is allowing companies to start introducing concepts such as a unified fabric within the first few metres of their data centre networks today, using FCoE with DCB at 10Gbps speeds to send both data and storage traffic to the first access switch it encounters. That traffic can then diverge to the corporate LAN or to the existing Fibre Channel SAN accordingly, and the number of cables that need to be plugged into a server rack is immediately reduced.
These rack cables are now short twinax copper cables with embedded 10 Gigabit Ethernet SFP+ connectors, dramatically reducing cabling costs compared with fibre optic cables and SFPs.
The road ahead
In this way, a technology still in its infancy is already starting to take shape at the network edge in the data centres of early adopters. Others will follow their example. Richard Villars, vice president of storage systems at market analyst firm IDC, has predicted that 2010 will see an increase in converged networking pilot projects, with significant technology deployments expected in 2011 [2]. According to figures from analyst firm Dell’Oro Group, approximately 10,000 FCoE ports shipped in 2008, but the company’s analysts anticipate that this figure will rise to about 1 million in 2011 [3].
Of course, mainstream convergence is dependent on other factors, too. Storage device vendors must allow for FCoE adapters on their products. Network interface cards (NICs) for 10GbE must continue to drop in price and be incorporated into server motherboards. And within end-user organisations themselves, CIOs will need to reorganise.
They may wish to ensure that storage, server and network operations can be monitored by a single service desk, for example. Network and storage teams will need to work together far more closely than they do in today’s silos. And when data centre switches are ready for replacement, they will need to ensure that whatever new solutions are bought can support the move to a converged environment.
Some in the industry would have end users believe that converging storage and data networks is a relatively easy process that should be done as soon as possible. But this assertion must be challenged, as businesses will migrate in their own timeframe. The complexity of managing and deploying a network and data centre is reaching unacceptable levels.
With the advent of the Cloud and the drive to convergence underway, this paradigm shift enables businesses to selectively outsource or extend a significant portion of the IT infrastructure. Businesses can now adopt a wide range of models depending on their needs. Some may want to move rapidly to a “pay-as-you-go” model, while others may migrate slowly to a private model built internally.
Either way, businesses are taking advantage of the convergence revolution by adopting new operating models and creating virtual enterprises. In doing so, they become more agile and flexible, more efficient and less capital-intensive, and more competitive – but only if the architecture is affordable and reliable.
1. Both reports indicate strong interest among respondents in LAN/SAN convergence: ‘Benefits Of SAN/LAN Convergence’, published by Forrester Consulting, December 2009; ‘Future Datacenter Management Strength – Network Convergence’, by Reco Li & Jason Chen, published by IDC, October 2009
2. ‘Future Datacenter Management Strength – Network Convergence’, by Reco Li & Jason Chen, published by IDC, October 2009
3. ‘SAN 5-Year Forecast Report’, published by the Dell’Oro Group, August 2009
The FCoE promise is that there is one single network in the Data Center that transports all traffic, including storage, high-performance computing and IP, says OLIVIER VALLOIS, Data Center and Core Category Manager from HP Networking EMEA.
End users are confused about FCoE. Despite the obvious value of a converged network, FCoE is new, the transition from Fibre Channel to pure FCoE will be complex, and there seem to be some contradictory messages around multi-hop FCoE. Storage network connections take a decade to establish, and at least a decade to fade away, so no matter how successful FCoE is, enhancements for existing Fibre Channel customers, such as 16Gbit FC, will continue. While industry consensus was reached around Congestion Notification in the IEEE standards process, a number of individuals still had doubts about CN under real customer workloads.
As to 40GbE and 100GbE, each Ethernet speed evolves from exotic, to expensive/emerging, to affordable/mature. Today 100GbE is exotic, 40GbE is expensive/emerging, and 10GbE is almost affordable/mature. Over the next two years, 40GbE will move toward affordable but still be used primarily between switches, and 100GbE as 4 fibres of 25Gb/s will emerge the way 40GbE is today. While there is a prestige race here, this is ahead of the needs of most data center customers. Servers are in transition from GbE to 10GbE now; the transition from 10GbE to 40GbE will be later.
Again, these speeds and feeds are visible and measurable and make good headlines, but they are the easiest part of Converged Infrastructure: try orchestrating a whole infrastructure made of virtual resource pools and making it highly power-efficient. Likewise, it’s easy to get focused on a protocol like FCoE, but a vendor needs to make the best choice for each individual customer, which could include staying with Fibre Channel; using Fibre Channel with an FCoE edge; iSCSI; NAS; or DAS with application-specific storage.
Data center virtualization, LAN and SAN convergence, High Speed Ethernet (HSE) and automation enable cloud infrastructures to keep up with the ever increasing demand for cloud services, explains JURRIE VAN DEN BREEKEL , Product Marketing Manager at Spirent Communications.
Convergence of LAN and SAN, together with 10G Ethernet, is changing security concepts such as keeping SAN and LAN separate, with a dedicated physical Gigabit Ethernet connection per virtual machine. Recently announced servers with 16 to 48 cores are further driving scalability from thousands to tens of thousands of virtual machines in data centres supporting cloud computing.
The data centre cloud infrastructure can have multiple 10G Ethernet and 8G Fibre Channel connections providing high-performance LAN and SAN access to the virtual machines on servers. LAN and SAN convergence can be introduced at the server access layer, using FCoE over lossless 10G Ethernet (IEEE Data Center Bridging) to Top of Rack switches. Top of Rack switches then split LAN and SAN traffic towards the respective LAN core and SAN director switches. 40G Ethernet and 16G Fibre Channel will become available to interconnect Top of Rack switches with the network core. As server LAN and SAN I/O increases to over 40-80Gbps per server, and as dual-port 40G Ethernet and/or 16G Fibre Channel adapters become cost-effective compared with multiple 10G Ethernet and 8G Fibre Channel connections, servers will connect directly to the network at those speeds. Server I/O needs won’t require dual-port 100G server connectivity for some time; instead, thousands of servers, together with most LAN and SAN traffic staying inside a data centre, will drive the need for high-performance, low-latency, short-distance 40G Ethernet in the aggregation/core and for 100G Ethernet WAN links interconnecting the cloud.
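The adapter arithmetic behind that transition is easy to make concrete. A back-of-the-envelope sketch, using illustrative figures drawn from the 40-80Gbps per-server range quoted above:

```python
import math

def links_needed(demand_gbps, link_gbps):
    """Number of links of a given speed needed to carry the demand."""
    return math.ceil(demand_gbps / link_gbps)

demand = 80  # Gbit/s: per-server LAN+SAN I/O at the high end quoted above
for speed in (10, 40):
    print(f"{speed}G Ethernet: {links_needed(demand, speed)} links")
# An 80Gbps server needs eight 10G links but only two 40G ports, which
# is why dual-port 40G adapters become attractive at that point.
```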
The promise of convergence
The ‘clutter’ of multiple networks within an enterprise places a significant burden on IT department time and budgets. Add to this the demands placed on the corporate network infrastructure by virtualisation, and the anticipated progression to internal clouds, and it’s clear that CIOs expect tomorrow’s corporate networks to fulfil a wide range of sometimes conflicting demands. SNS Europe asked some key vendors for their thoughts on what happens next.
Is convergence happening? And if so, where? asks HENRIK HANSEN, Director of EMEA Marketing for QLogic.
Convergence of networking technologies is a long-held requirement for IT managers. The benefits gained from having a single infrastructure for data and storage networking include improved TCO through reduced hardware, cabling, power, cooling and management costs. The adoption of 10GbE into mainstream data centers makes it possible for Ethernet to match the raw performance characteristics of Fibre Channel SANs. Enhancements to the Ethernet standard are being implemented to allow Ethernet to match Fibre Channel’s high delivery reliability. New features eliminate the network congestion and dropped packets that storage networking traffic cannot tolerate, and this has led to the creation of Fibre Channel over Ethernet (FCoE). Additional information regarding these new technologies can be found at www.fcoe.com.
Two basic principles are guiding investment decisions for the next generation data center:
1) The leverage of virtualization technologies to achieve IT consolidation and lower TCO
2) I/O and network consolidation by minimizing the number of switching fabrics that must be supported by each host
Virtualization of applications, servers, storage, and networking resources is becoming the driving force behind consolidation efforts. Virtualization offers the potential to enhance the flexibility of the data center infrastructure, enabling it to adapt to both changing user requirements and transient workloads. 10GbE is the only technology that can be considered a viable basis for a truly unified data center switching fabric. 10GbE and future generations of Ethernet could over time become the core data center infrastructure technology upon which next-generation data centers are built. Over the past year, server and storage vendors alike have introduced FCoE offerings based on QLogic’s 8100 series Converged Network Adapter (CNA). Adoption has been driven primarily by the server refresh cycle plus the adoption of 10GbE. The initial FCoE deployments were for proof-of-concept evaluation, but in the past six months FCoE has been introduced into production environments.
QLogic CNAs and Intelligent Ethernet Adapters can reduce host CPU utilization for network processing to a negligible level, allowing for the support of numerous applications. With virtual I/O, each application is associated with its own virtual adapter, allowing a number of applications to share a single physical adapter. By matching I/O bandwidth to aggregate application demands for network bandwidth, I/O cost and efficiency can be improved, in conjunction with fewer adapters, cables, and switch ports. Servers, especially those employing multi-core, multi-socket processors, will make significant bandwidth demands on the network. These platforms will have the processing power to saturate multiple network interfaces in the system.
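The idea of matching I/O bandwidth to aggregate application demand can be illustrated with a small sketch. The application names and figures below are invented for the example; this is not a QLogic API:

```python
def partition(port_gbps, demands):
    """Give each virtual adapter its requested bandwidth if the physical
    port can carry the total; otherwise scale all requests down pro rata."""
    total = sum(demands.values())
    if total <= port_gbps:
        return dict(demands)  # everything fits at full rate
    scale = port_gbps / total
    return {app: round(rate * scale, 2) for app, rate in demands.items()}

# Three applications sharing one physical 10GbE port via virtual adapters.
demands = {"database": 4.0, "backup": 6.0, "web": 2.0}  # Gbit/s requested
print(partition(10.0, demands))
```

When the offered load exceeds the port speed, each virtual adapter is throttled proportionally, so a single physical adapter can stand in for several dedicated NICs and HBAs.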
Today, convergence is happening in and around the server. Top-of-Rack deployments have driven this, and more recently blade server environments, where IBM with its Virtual Fabric and HP with Virtual Connect FlexFabric are starting to push convergence. End users are fully aware of the benefits of FCoE and converged networking for TCP/IP, iSCSI and FCoE.
In the quarter April through June QLogic recognized $10M revenue from our Converged Networking products.
Where are end users on the road to convergence? Will everything meet at 10Gb Ethernet, including Fibre Channel over Ethernet? If so, why does the Fibre Channel roadmap already include 16Gb FC? And how quickly will 10Gb Ethernet be overtaken by 40Gb and 100Gb speeds? asks CHARLES FERLAND, Vice President and General Manager, EMEA, for BLADE Network Technologies.
Today, one of the hottest trends in the data centre is network convergence. Just as we saw many voice systems migrate to IP in the last decade, we are now seeing data and storage traffic converging over Ethernet. The benefits are clear – by fully utilising your Ethernet network, you can converge your separate networks into a single one to gain back space, reduce power and significantly decrease operational costs and complexity.
IP NAS, iSCSI and other similar protocols have matured and today provide excellent price-performance. FCoE is just another way to bridge your existing Fibre Channel systems with the Ethernet world; it is not a new implementation strategy per se, more a migration path towards Ethernet. We strongly believe that eventually systems will use Ethernet as a unified switching fabric. There will, of course, always be room for niche applications that require specialised switching fabrics, but the biggest trend is toward adoption of Ethernet for network convergence.
To connect servers with Ethernet storage using FCoE, iSCSI or NAS, a reliable data centre-class switch is required. Low latency, built-in Layer 3 support and lossless capabilities through support for the new Data Centre Bridging (DCB) protocols (also known as CEE, or Converged Enhanced Ethernet) are characteristics SAN administrators should look for.
The DCB protocols enable what is termed “lossless Ethernet” in the data centre. In fact, this is a case of Ethernet taking over some of the features of the FC SAN to ensure that frames will not be lost within the data centre, as lost frames can wreak havoc on storage transfers. Thus, DCB is a requirement to run FCoE. Essentially, DCB enables simple prioritisation of traffic types over the data centre’s 10 Gigabit (Gb) Ethernet connection, which is something that standard “lossy” Ethernet could not achieve.
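That prioritisation can be sketched roughly as a guaranteed-share scheme, in the spirit of DCB’s Enhanced Transmission Selection. The traffic classes, shares and offered loads below are illustrative examples, not values from any standard:

```python
def allocate(link_gbps, shares, offered):
    """Each class first gets up to its guaranteed share of the link;
    bandwidth a class leaves unused is handed to classes wanting more."""
    granted = {c: min(offered[c], link_gbps * shares[c]) for c in shares}
    spare = link_gbps - sum(granted.values())
    for c in shares:
        extra = min(offered[c] - granted[c], spare)
        granted[c] += extra
        spare -= extra
    return granted

shares  = {"fcoe": 0.5, "lan": 0.3, "iscsi": 0.2}  # guaranteed fractions
offered = {"fcoe": 6.0, "lan": 1.0, "iscsi": 4.0}  # Gbit/s offered load
print(allocate(10.0, shares, offered))
# FCoE receives its full 6Gbps: storage traffic keeps its required
# bandwidth even while the same 10Gb link carries LAN and iSCSI traffic.
```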
So, SAN administrators do not need to worry about moving the transport of storage traffic onto the same 10Gb Ethernet link as the LAN traffic; the FCoE traffic will always have the required bandwidth and will not lose frames. The support of the DCB protocols on BLADE Network Technologies’ Ethernet switches fulfils the requirements for running FCoE.
In addition, it is important to point out that DCB support is also a boon for IP-based SANs (iSCSI and NFS) which, while not absolutely requiring DCB to function, nevertheless benefit greatly from the prioritisation of their storage traffic over other types of LAN traffic.
The greatest advantage of Ethernet is that it is evolutionary; you can implement and manage 10Gb Ethernet today the same way you are handling your 1Gb Ethernet ports, and the same way you will handle your 40Gb Ethernet ports tomorrow.
10Gb Ethernet has been standardised since 2002, even though it has only recently been widely deployed, thanks to massive price reductions, the inclusion of standards such as DCB, and increases in bandwidth requirements. Eventually, 40Gb Ethernet and 100Gb Ethernet will become relevant in the data centre when servers can process enough data to fill the pipe.
40Gb and 100Gb Ethernet could also become more relevant as a high-performance clustering solution due to reduced latency and better price/performance than InfiniBand, especially as 40Gb throughput will take longer to become a requirement. But this is a niche market and not likely to drive mass adoption of 40Gb Ethernet.
Meanwhile, 10Gb Ethernet data centre switches are available and affordable today for network convergence, and are being widely adopted in enterprise data centres.