Storage Capacity Optimization in the Era of Green IT
According to this study, data centers in the United States consumed 56 BkWh, those in Western Europe accounted for 41.3 BkWh, Japan/Asia Pacific tallied 36 BkWh, and data centers in the rest of the world consumed 19.2 BkWh. The study forecast that the worldwide data center electricity growth rate would slow to 12% per year for the years 2005-2010, but that consumption would still grow another 76% during that period.
A similar study in 2007 by the US Environmental Protection Agency (EPA) concluded that data center electricity consumption in the United States alone was the equivalent of 5.8 million households. As a result of these and other studies, the EPA ENERGY STAR program and the EU Code of Conduct for Data Centres are two examples of active initiatives that address these concerns by providing guidelines to help data center managers ensure that they do not consume a disproportionate amount of power from an already strained energy grid.
Today, it is not uncommon for individual data centers to require a megawatt or more of continuous energy. In terms of costs, assuming a commercial utility rate of €0.10 per kilowatt-hour, the cost of providing a megawatt of electricity comes to €73,000 per month, or €876,000 per year. Payments to the utility company of this magnitude are raising eyebrows in our post-recession days of cost containment. In short: saving energy is not only good for the environment - it’s good for the bottom line.
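The arithmetic behind those figures is straightforward; a quick sketch, using the flat €0.10/kWh commercial rate assumed above:

```python
# Rough cost of powering a continuous 1 MW data center load, assuming
# a flat commercial tariff of EUR 0.10 per kWh (the rate used above).
RATE_PER_KWH = 0.10      # EUR per kilowatt-hour (assumed flat rate)
LOAD_KW = 1_000          # one megawatt of continuous draw
HOURS_PER_YEAR = 8_760

annual_cost = LOAD_KW * HOURS_PER_YEAR * RATE_PER_KWH
monthly_cost = annual_cost / 12

print(f"Annual:  EUR {annual_cost:,.0f}")   # EUR 876,000
print(f"Monthly: EUR {monthly_cost:,.0f}")  # EUR 73,000
```

Real tariffs vary by region and time of day, so treat this as an order-of-magnitude estimate rather than a billing model.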
Data Center Electricity Use
In the aforementioned EPA study, data center components were placed into six electricity use categories, as shown in the accompanying table.
The highest electrical use component, “Site Infrastructure,” was described by the EPA as: “Infrastructure systems necessary to support the operation of IT equipment (i.e., power delivery and cooling systems).”
Site infrastructure is sometimes measured using Power Usage Effectiveness, or PUE – which is the ratio of overall data center energy compared to data center equipment energy. For their study, the EPA assumed an average PUE of 2.0, in other words, each unit of energy required for IT equipment required another unit for infrastructure.
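The PUE ratio described above is simple enough to express directly; a minimal sketch (the function name and sample figures are illustrative, not from the EPA study):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy delivered to IT equipment. Lower is better; 1.0 would
    mean zero infrastructure overhead."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# The EPA's assumed average of 2.0: each unit of IT energy
# requires another unit for power delivery and cooling.
print(pue(2_000_000, 1_000_000))  # 2.0
```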
The remaining five components were comprised solely of IT equipment categories. To no great surprise, the EPA reported that volume servers (i.e. servers costing up to $25,000 each) used more data center electricity than any other category of IT equipment. More surprising, however, was the fact that data storage equipment ranked next in electricity use, and proved to be the fastest-growing of any component.
To address the growth of servers and their infrastructure costs (including electricity), the industry turned to server virtualization. This provided an immediate payback by allowing the removal of large numbers of physical servers and replacing them with virtual machines (VMs) running on a smaller population of servers.
The trend towards server virtualization, however, provided no real relief to the storage systems housing the VMs, and in fact created even higher storage capacity requirements as server sprawl was replaced by VM sprawl. In some respects, the quest to improve data center efficiency through server virtualization simply moved the energy problem from one category to another. The server population was shrinking, but storage demands were rising. Increased use of virtual servers and a steady creep of overall data growth brought the need for capacity optimization to new heights.
Because of the interaction between components as illustrated in the example of server virtualization, data center managers should take a holistic approach to achieving energy efficiency. Servers, networks, and storage components can all benefit from the latest technologies in order to reduce overall energy consumption. The remainder of this article will discuss some methods described by the Storage Networking Industry Association (SNIA) in promoting storage capacity optimization as a form of energy reduction in data centers.
Using Storage Capacity Optimization to Reduce Energy Needs
Beyond the physical components that comprise a storage system, perhaps one of the more interesting aspects of energy efficiency is the effect of software techniques. Using advanced software features to reduce the number of storage devices can have a profound effect on energy consumption. SNIA has done some research in this area, recommending the adoption of five specific technologies. Let’s take a closer look at these optimization methods:
Data compression algorithms reduce stored data by identifying numerical patterns within data streams and replacing those patterns with smaller data objects, which can then be decompressed by reversing the algorithm. Used for decades in tape drives and in file compression utilities, data compression has recently become popular in online data storage systems.
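A quick illustration of the idea, using Python's standard-library zlib (the same DEFLATE family of algorithms used in many file compression utilities; storage arrays use their own, often hardware-assisted, implementations):

```python
import zlib

# Repetitive data compresses extremely well. Compression is lossless:
# decompressing always recovers the original bytes exactly.
original = b"sensor_reading=42;" * 1_000
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original          # reversing the algorithm
ratio = len(original) / len(compressed)
print(f"{len(original)} -> {len(compressed)} bytes ({ratio:.0f}:1)")
```

Actual ratios depend entirely on the data: already-compressed or encrypted data gains almost nothing, while logs, databases, and VM images often shrink substantially.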
Unlike data compression, data deduplication does not “shrink” data; rather, it removes redundant copies and replaces them with markers that reference a single, identical stored data object.
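The marker-and-reference mechanism can be sketched as a toy block-level store, where identical chunks are detected by content hash and kept only once (class and method names here are hypothetical, for illustration only):

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: identical chunks are stored once;
    each file keeps only an ordered list of chunk-hash markers."""
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}     # hash -> unique chunk
        self.files: dict[str, list[str]] = {}  # name -> chunk markers

    def write(self, name: str, data: bytes) -> None:
        markers = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store each chunk once
            markers.append(digest)
        self.files[name] = markers

    def read(self, name: str) -> bytes:
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
payload = b"A" * 8192            # two identical 4 KiB chunks
store.write("copy1.bin", payload)
store.write("copy2.bin", payload)
assert store.read("copy2.bin") == payload
print(len(store.chunks))         # 1: a single chunk backs both files
```

Production deduplication adds variable-length chunking, hash-collision handling, and garbage collection of unreferenced chunks, but the principle is the same: store the data once, store the markers many times.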
Thin provisioning has a two-fold effect on storage capacity optimization:
1) Thin provisioned storage systems in effect provide “just-in-time” storage capacity. When a data container (usually a file-based Volume or a block-based LUN) is thinly provisioned, the storage system grants the capacity request but does not pre-allocate, or “carve out” the capacity. Only once data is written by the application does the container actually require any capacity. Since the containers only consume space as data is written, the assumption is that a thinly provisioned storage system can house more containers than a non-thinly provisioned system.
2) This leads to another effect of thin provisioning - the ability to oversubscribe the storage system. By oversubscribing, a storage system’s logical capacity can appear larger than its physical capacity. Since data and applications often grow at unpredictable rates, oversubscribing can be used to manage storage systems that contain many applications growing at varying rates. As the storage system receives data from applications, the system capacity “self balances” and gives the applications only the capacity they need.
In both cases, thin provisioning helps increase utilization of the available storage capacity. Thin provisioning, however, should be combined with tight system management, since it is possible to request more data be stored than the storage system itself can physically contain.
Threshold alarms, auto delete functions, and storage tiering are typical methods that can be used in conjunction with thin provisioning for optimal results.
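The mechanics described above - just-in-time allocation, oversubscription, and threshold alarms - can be sketched as a toy pool (the class, the 80% alarm threshold, and the capacities are all illustrative assumptions, not any vendor's implementation):

```python
class ThinPool:
    """Toy thin-provisioned pool: volumes are granted logical capacity
    up front, but physical space is consumed only as data is written."""
    def __init__(self, physical_gb: int, alarm_pct: float = 0.8):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.alarm_pct = alarm_pct          # assumed 80% alarm threshold
        self.volumes: dict[str, int] = {}   # name -> logical GB granted

    def provision(self, name: str, logical_gb: int) -> None:
        # Oversubscription: the grant reserves no physical space at all.
        self.volumes[name] = logical_gb

    def write(self, name: str, gb: int) -> None:
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add capacity or migrate")
        self.used_gb += gb                  # capacity consumed on write
        if self.used_gb >= self.alarm_pct * self.physical_gb:
            pct = 100 * self.used_gb / self.physical_gb
            print(f"ALARM: pool {pct:.0f}% full")

pool = ThinPool(physical_gb=100)
pool.provision("app1", 80)   # logical grants total 160 GB...
pool.provision("app2", 80)   # ...against only 100 GB of physical disk
pool.write("app1", 30)
pool.write("app2", 40)
print(pool.used_gb)          # 70 GB physically consumed of 160 granted
```

The `RuntimeError` branch is exactly the risk the article warns about: without monitoring, applications can collectively write more than the pool physically contains.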
As the name implies, snapshots are point-in-time images of data containers. Typically, a snapshot is accomplished through the creation of a second set of markers (i.e. pointers) that ‘freeze’ the container view as a snapshot image. As writes continue to the container, the original data in the snapshot is preserved, creating a delta between the frozen image and the live container.
Another application of delta snapshots is writeable snapshots, sometimes called virtual clones. With virtual clones, a snapshot of a data object is made, but instead of modifying the parent object, the snapshot (clone) copy is modified. The clone contains only the data not held by the parent object; all common data is shared between the parent and the clone. The end result is that slightly different copies of data can be created using very little capacity.
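The pointer-copying idea behind snapshots and clones can be sketched as follows (a toy redirect-on-write model; real arrays differ in when and where they relocate blocks, and every name here is hypothetical):

```python
import itertools

class BlockStore:
    """Shared store of immutable data blocks."""
    def __init__(self):
        self._ids = itertools.count()
        self.blocks: dict[int, bytes] = {}
    def put(self, data: bytes) -> int:
        block_id = next(self._ids)
        self.blocks[block_id] = data
        return block_id

class Volume:
    """A volume is just a set of pointers into the block store."""
    def __init__(self, store: BlockStore, pointers: dict[int, int]):
        self.store = store
        self.pointers = pointers  # logical block index -> block id
    def write(self, index: int, data: bytes) -> None:
        # Redirect-on-write: old blocks stay intact for any snapshots.
        self.pointers[index] = self.store.put(data)
    def read(self, index: int) -> bytes:
        return self.store.blocks[self.pointers[index]]
    def snapshot(self) -> "Volume":
        # A snapshot (or writeable clone) copies only the pointers,
        # not the data - common blocks are shared until overwritten.
        return Volume(self.store, dict(self.pointers))

store = BlockStore()
live = Volume(store, {})
live.write(0, b"base data")
clone = live.snapshot()        # shares block 0 with the live volume
clone.write(0, b"clone edit")  # delta: only the changed block is new
print(live.read(0), clone.read(0))  # b'base data' b'clone edit'
print(len(store.blocks))            # 2 blocks back 2 divergent copies
```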
Like the other forms of capacity optimization, RAID has continuously evolved since its origin. As disk drives grow larger, the consequences of a drive failure become more troublesome. To reduce the risk associated with drive failure, the traditional response has been disk mirroring, often referred to as RAID 1, RAID 1+0, or RAID 10. RAID mirroring does indeed provide a higher level of protection, but it carries a high capacity penalty. In response, single- and dual-parity RAID levels (often referred to as RAID 5 and RAID 6) offer adequate protection with a much lower capacity penalty.
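The capacity penalties compare roughly as follows (a simplified model that ignores hot spares and vendor-specific overheads; the eight-drive group size is just an example):

```python
def usable_fraction(scheme: str, drives: int) -> float:
    """Fraction of raw capacity left for data under common RAID levels.
    Simplified: ignores hot spares and metadata overhead."""
    if scheme == "raid10":     # mirroring: half the drives hold copies
        return 0.5
    if scheme == "raid5":      # single parity: one drive's worth lost
        return (drives - 1) / drives
    if scheme == "raid6":      # dual parity: two drives' worth lost
        return (drives - 2) / drives
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("raid10", "raid5", "raid6"):
    pct = usable_fraction(scheme, 8)
    print(f"{scheme}: {pct:.1%} of an 8-drive group is usable")
```

For an eight-drive group, mirroring leaves 50% of raw capacity usable, while RAID 5 leaves 87.5% and RAID 6 leaves 75% - which is exactly why parity RAID reduces the number of spinning drives (and watts) needed per usable terabyte.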
The energy consumption of data storage systems continues to be a significant portion of the total energy expenditures made by data center managers. Finding ways to increase storage optimization is of critical importance to IT organizations, and has even become a significant public policy issue. SNIA has been a leader in vendor collaboration to help users manage and monitor storage systems for optimal space and energy efficiency. SNIA and the EPA together are dedicated to addressing growing data center energy concerns by providing resources in the form of metrics, measurements, and certifications to reduce data storage energy consumption. SNIA and SNIA Europe also support the European Union Data Centre Code of Conduct program, in which storage optimization methods are integral, recommended best practices.