Optimised Data Services

The future of data protection in the virtual data centre, by Christopher L. Poelker, Vice President of Enterprise Solutions, FalconStor Software.

 

Date: 1 Feb 2010

Storage virtualisation for the sake of storage virtualisation is just not enough these days. Being able to pool heterogeneous resources and migrate data from point A to point B while the application is up and running is great, but what businesses really need are complete solutions – solutions that not only provision storage more efficiently, but that can virtualise, protect, migrate, dedupe, encrypt, replicate, recover and archive any data source in real-time via policy.

What I am seeing more and more is the need for a simple yet comprehensive solution that enables a more efficient IT infrastructure, leverages existing assets, policies and procedures, and reduces overall costs. This entails building an optimised suite of integrated data services on a common platform.

I call this holistic approach the "optimised data services" utility model or ODS utility.  An ODS utility is created by virtualising existing data sets, storage and servers where possible to enable physical abstraction and flexible data movement between compute and storage elements. Once virtualised, the ODS platform should allow the creation of policies that enforce specific service levels for explicit or pooled datasets. The grouping of data elements for consistency or recovery purposes should not be hampered by physical constraints such as volumes in the same array, or SAN versus non-SAN, or storage network-attached devices or hosts.

The ODS solution engine provides thin provisioning capabilities to enhance storage utilisation, and capacity expansion for running applications can occur in real time and on demand from the compute resources in question. This reduces the overall administrative burden and adds an element of automation to the design. All data is continually protected based on policy, and recovery time objectives (RTO) and recovery point objectives (RPO) are achievable at minimal cost, determined not by budget constraints but by the service level agreement (SLA) policy applied to the application. This unique capability can only be achieved if the solution also automatically applies efficiency to data storage and movement through de-duplication and sub-block-level monitoring of all stored data, ensuring that only unique data is stored and replicated.
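
To make the idea of SLA-driven provisioning and protection concrete, the sketch below models a policy with RTO/RPO targets and the data services it switches on for a dataset. All class and field names are hypothetical illustrations, not FalconStor's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an SLA-driven data services policy.
# None of these names come from a real product; they only illustrate
# protection being driven by policy rather than budget or hardware limits.

@dataclass
class SLAPolicy:
    name: str
    rpo_seconds: int          # max acceptable data loss (0 = continuous protection)
    rto_seconds: int          # max acceptable time to restore service
    thin_provisioned: bool = True
    deduplicate: bool = True
    encrypt: bool = True
    replicate_offsite: bool = True

@dataclass
class Dataset:
    name: str
    volumes: list = field(default_factory=list)   # may span arrays, SAN and non-SAN
    policy: SLAPolicy = None

# A critical database gets zero RPO (continuous protection) and a
# five-minute RTO; a file share gets looser targets at lower cost.
critical = SLAPolicy("tier-1", rpo_seconds=0, rto_seconds=300)
standard = SLAPolicy("tier-3", rpo_seconds=3600, rto_seconds=14400,
                     replicate_offsite=False)

erp_db = Dataset("erp-db", volumes=["array1/lun7", "array2/lun3"], policy=critical)
```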

A focus on application uptime and rapid recovery would be paramount in such a design, so the solution must also integrate at the application level and provide a simple means to monitor and recover any application, on any platform, from hardware or software failure or malicious intent. Protection from corruption and deletion is also very important, so continuous protection would need to be utilised to achieve a zero RPO for critical applications.
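
A zero-RPO continuous protection scheme essentially journals every write with a timestamp, so any prior state can be reconstructed. The minimal sketch below is a hypothetical, in-memory illustration of that idea, not a production design.

```python
class WriteJournal:
    """Toy continuous-protection journal: every write is recorded with a
    timestamp, so a volume can be rolled back to any prior point in time."""

    def __init__(self):
        self.entries = []                      # (timestamp, offset, data), in arrival order

    def record(self, offset: int, data: bytes, ts: float) -> None:
        self.entries.append((ts, offset, data))

    def restore_to(self, point_in_time: float) -> dict:
        """Replay all writes up to point_in_time, yielding the volume image."""
        image = {}
        for ts, offset, data in self.entries:
            if ts <= point_in_time:
                image[offset] = data
        return image

journal = WriteJournal()
journal.record(offset=0, data=b"first version", ts=100.0)
journal.record(offset=0, data=b"accidental overwrite", ts=200.0)

# Zero RPO: recover the state as it stood at t=150, before the bad write.
assert journal.restore_to(150.0)[0] == b"first version"
```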

Using such a solution for critical applications would eliminate the need for multiple management elements for protection and replication, such as log shipping and array-based replication. Also, since protection is continuous and policy based, there would be no requirement for backup applications, clients, servers, media or processes, which could save huge sums of money and time and allow companies to focus on the business rather than IT technologies. 

The engine must also be intelligent and able to seamlessly work with or even enhance other protection and virtualisation solutions such as VMware and VMware Site Recovery Manager, Microsoft Failover Clustering and Data Protection Manager, Oracle Real Application Clusters, SAP BRtools, PolyServe, Platform Computing, Virtual Iron, Citrix, Sybase Replication Server and others.

A comprehensive ODS utility would need to provide built-in encryption and off-site replication of all data sets for risk mitigation. And to reduce WAN costs, data optimisation over WAN links needs to be included. 
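
As a rough illustration of "encrypt it and optimise it before it crosses the WAN", the sketch below compresses a replication payload and then encrypts it; in a real ODS engine, de-duplication would run first so only unique data travels. It assumes the third-party `cryptography` package and is purely illustrative.

```python
import zlib
from cryptography.fernet import Fernet   # third-party package: pip install cryptography

def prepare_for_wan(payload: bytes, key: bytes) -> bytes:
    """Compress, then encrypt, a replication payload before it crosses the WAN."""
    return Fernet(key).encrypt(zlib.compress(payload))

def receive_from_wan(blob: bytes, key: bytes) -> bytes:
    """Reverse the pipeline at the DR site: decrypt, then decompress."""
    return zlib.decompress(Fernet(key).decrypt(blob))

key = Fernet.generate_key()
block = b"application data block" * 100
wire = prepare_for_wan(block, key)
assert receive_from_wan(wire, key) == block
print(f"sent {len(wire)} bytes over the WAN instead of {len(block)}")
```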

Since many organisations are obligated by law or regulation to provide removable media copies, the ability to integrate transparently with tape formats and tape-based archiving can also be beneficial. Since tape is also low cost and removable, it should be used for long-term archives, and data should move transparently to tape-based media based on policy.
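
A policy that transparently moves ageing data to tape can be as simple as the rule sketched below; the threshold and tier names are hypothetical and only illustrate the policy-driven placement decision.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical tiering rule: data untouched for longer than the archive
# threshold moves, by policy, to tape-based media.
ARCHIVE_AFTER = timedelta(days=180)

def target_tier(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    now = now or datetime.utcnow()
    return "tape-archive" if now - last_accessed > ARCHIVE_AFTER else "online-disk"

assert target_tier(datetime(2009, 1, 1), now=datetime(2010, 2, 1)) == "tape-archive"
assert target_tier(datetime(2010, 1, 15), now=datetime(2010, 2, 1)) == "online-disk"
```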

The media should be encrypted automatically by the solution, without requiring expensive tape hardware or libraries to enable encryption. Furthermore, data must be able to be stored in an immutable fashion for compliance, and it must be searchable for audit purposes. All datasets should also be de-duplicated so that only a single instance of every data object is stored for archive.
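
Single-instance archival storage is commonly achieved through content addressing: each object is stored under a hash of its contents, so identical objects are kept once and any change to the contents produces a different address. A minimal, purely illustrative sketch:

```python
import hashlib

class SingleInstanceArchive:
    """Toy content-addressed store: identical objects are kept once, and
    stored objects are write-once (immutable) for compliance purposes."""

    def __init__(self):
        self._objects = {}

    def archive(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Storing under the content hash gives de-duplication for free:
        # a second copy of the same object maps to the same key.
        self._objects.setdefault(digest, data)
        return digest

    def retrieve(self, digest: str) -> bytes:
        return self._objects[digest]

archive = SingleInstanceArchive()
a = archive.archive(b"quarterly report")
b = archive.archive(b"quarterly report")   # duplicate: no extra copy stored
assert a == b and len(archive._objects) == 1
```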

Data de-duplication, however, should not be used when storing data in native format ready for application use or when rapid recovery (within a couple of minutes) is required. Since data de-duplication usually implies hashing data into unique objects, a recovery process would need to be applied to reconstitute the data. Instead, data should simply be stored more efficiently by monitoring the data stream and eliminating any "white space" within the file system or data blocks written by the application.

During data replication, only these unique sectors of disk would need to be replicated and stored for recovery at the disaster recovery (DR) site.  By simply storing data more efficiently, companies gain the benefits of data de-duplication without the associated overhead or risk, and the datasets themselves are always instantly available for mounting to the same or a different application for recovery, testing or DR.  In fact, if data can be stored very efficiently, these space-efficient images can be utilised for retention of multiple data points for many days, providing the ability to recover applications very rapidly to any point in time while saving costs.
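
One way to picture "store only what the application actually wrote" is a per-volume map of written blocks: space-efficient images keep only those blocks, and each replication cycle sends only the blocks changed since the last one. The sketch below is a hypothetical, in-memory illustration of that mechanism.

```python
BLOCK_SIZE = 4096

class ThinVolume:
    """Toy volume that tracks only the blocks an application has written,
    eliminating file-system 'white space' from copies and replication."""

    def __init__(self):
        self.blocks = {}              # block number -> data actually written
        self.changed = set()          # blocks dirtied since the last replication cycle

    def write(self, block_no: int, data: bytes) -> None:
        self.blocks[block_no] = data
        self.changed.add(block_no)

    def replication_delta(self) -> dict:
        """Return only the blocks changed since the last cycle, then reset."""
        delta = {b: self.blocks[b] for b in self.changed}
        self.changed.clear()
        return delta

vol = ThinVolume()
vol.write(10, b"A" * BLOCK_SIZE)
vol.write(11, b"B" * BLOCK_SIZE)
dr_site = dict(vol.replication_delta())    # first cycle: two blocks cross the WAN

vol.write(11, b"C" * BLOCK_SIZE)
dr_site.update(vol.replication_delta())    # next cycle: only the changed block is sent
assert set(dr_site) == {10, 11}
```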

The ODS utility should be flexible enough to accommodate not only existing protocols such as Fibre Channel and iSCSI but also newer protocols such as Fibre Channel over Ethernet (FCoE) and InfiniBand, so that rapid obsolescence can be avoided. Technology refresh of any component should be transparent to running applications, and maintenance must be possible with minimal or no downtime. Scaling should simply be a matter of adding more compute resources, connections or ports in a modular fashion, not limited or hampered by technical issues or artificial resource limitations in the file system, capacity, connectivity or availability.

The ODS utility would be more cost effective if it could provide these capabilities using the same server and storage infrastructure currently in place. There would be no need to purchase proprietary disks or servers to create the solution.

The accumulated knowledge of the existing environment would not be wasted, and the learning curve would be greatly reduced.

Recovery from failure or disaster should be simple, fast, comprehensive and cost effective, and should provide automation capabilities where possible. At the very least, recovery should be simple enough that, at the time of failure, no one has to scramble to figure out how recovery actually works. This means the ability to test for DR should be intrinsic to the design and simplified to the point that following a wizard or script is all operations staff need to know. Since many applications also include data feeds from other applications, the ability to provide consistency grouping for recovery across platforms and storage tiers is also a requirement for the ODS utility.
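
Consistency grouping across applications and tiers amounts to briefly quiescing every member of the group and marking a single recovery point that spans them all, so interdependent data feeds recover to the same moment. The sketch below is a hypothetical illustration of that sequencing, not any vendor's implementation.

```python
from contextlib import contextmanager

class AppWriter:
    """Stand-in for an application whose writes can be briefly quiesced."""
    def __init__(self, name):
        self.name = name
        self.quiesced = False
    def quiesce(self):  self.quiesced = True
    def resume(self):   self.quiesced = False
    def snapshot(self): return f"{self.name}@snap"

@contextmanager
def consistency_group(apps):
    """Quiesce every member, snapshot them all, then resume, so every
    snapshot represents exactly the same point in time."""
    for app in apps:
        app.quiesce()
    try:
        yield [app.snapshot() for app in apps]
    finally:
        for app in apps:
            app.resume()

erp, feed, warehouse = AppWriter("erp"), AppWriter("feed"), AppWriter("dwh")
with consistency_group([erp, feed, warehouse]) as recovery_point:
    print("consistent recovery point:", recovery_point)
```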

This is all a pretty tall order; but, once achieved, it would make it easy for IT staff to stay at home spending time with family or friends, rather than coming into work at 3am to recover from a stupid mistake. That being said, the ODS utility should be able to be implemented intuitively and rapidly, without requiring weeks or months of professional services to make it work.

In fact, it would be beneficial if all you needed to do to create a node that operated within the ODS platform was take an existing server, grab a USB memory stick that includes all the self-installable software you need, place the stick in an open USB port, and reboot the server. Do that to as many servers as you need for the required performance, and you could build your own platform for optimised data services – or PODS – which provides the critical data services and abstraction. Simply attach a PODS to your storage network, zone it in, and you're done. Attach two PODS together across the WAN at two sites, and you have DR – while only unique and encrypted data traverses the connection between the PODS. You could even create a "mini PODS" for your remote locations by using small, low-cost servers with internal storage; or install the solution on a VMware virtual server and do it all virtually!

Companies looking to optimise their data services, create a more service-oriented architecture for their applications and data resources, or move to a cloud computing model should take a hard, critical look at the solutions currently available in the market. You do not want to have to tie together software from multiple vendors into a Frankenstein-like science project that would be a support nightmare.

Be sure to look for a platform that provides all the capabilities mentioned above, so you can implement simply, quickly and with peace of mind, knowing that everything is certified, supportable and can be managed globally from a single console.

www.falconstor.com
