Wednesday 26 October 2011

Meeting your Company's Storage Needs with Directory Virtualization

Virtualization

by Don MacVittie

The one constant with enterprise storage is that you always need more of it. Since the 1990s, storage has seen steady double-digit annual growth in most industries, and much faster growth in enterprises that rely heavily on video for communications. Even the worldwide economic turmoil of 2008 and 2009 did not stop this need for more storage; it merely forced IT management to make tough decisions about targeting their limited dollars wisely.

Of course, with the growth of storage comes the growth of security concerns. Seventeen racks of storage present a larger threat surface than ten do, and managing access rights to an ever-growing pool of storage can be intimidating and fraught with opportunities for error.

Enter directory virtualization, the technology that places a device between users and the various NAS devices on the network. While directory virtualization has been around for a good long while, the growing pressure of budget controls and increasing storage demands are just now bringing this technology to the fore.

The purpose of directory virtualization is to put a strategic point of control between users and the storage they require for daily operations. That control point becomes a platform that allows several things to happen. First, resource utilization can be evened out: on the user side, directory virtualization devices present a single directory tree for all the various devices behind them, so while the user saves files to the same place in the tree, the actual physical location of each file can be moved to meet the needs of IT. Second, this movement can be automated by setting criteria to move files between storage tiers (storage tiering). For example, IT can say “if a file is used a lot, keep it on the really fast storage; if it is never accessed, roll it off to the slowest.” Furthermore, if the device behind a directory is reaching capacity, some or all of the files in that directory can be moved to a new device while the user sees no change – the file appears in the same place while IT has “expanded” the space behind the directory tree.
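The namespace indirection described above can be sketched in a few lines of Python. Everything here – the class, device names, and paths – is invented for illustration; a real appliance implements this mapping inside the device, not in user code:

```python
class VirtualDirectory:
    """Toy model of a directory-virtualization namespace: users see one
    virtual path, while IT is free to move the file between back-end
    NAS devices behind the scenes."""

    def __init__(self):
        # virtual path -> (backend device, physical path)
        self._map = {}

    def add(self, vpath, device, ppath):
        self._map[vpath] = (device, ppath)

    def resolve(self, vpath):
        """What the appliance does on every access: translate the
        user-visible path into a physical location."""
        return self._map[vpath]

    def migrate(self, vpath, new_device, new_ppath):
        """Move the file to another device (e.g. a different storage
        tier); the user-visible path does not change."""
        self._map[vpath] = (new_device, new_ppath)


ns = VirtualDirectory()
ns.add("/shared/reports/q3.pdf", "nas-fast", "/vol1/q3.pdf")
ns.migrate("/shared/reports/q3.pdf", "nas-archive", "/tier3/q3.pdf")
# The user still opens /shared/reports/q3.pdf; only the backend changed.
```

The key design point is that users and applications only ever hold the virtual path, so migrations are invisible to them.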

Finally, a directory virtualization device that recognizes your cloud storage provider of choice – or a cloud storage gateway presented to the virtualization device as just another NAS device – makes it possible to roll the least frequently accessed files out to cloud storage without expanding your storage infrastructure (though it will cost you a monthly per-gigabyte fee, usually a small one). This moves rarely accessed files completely out of your building, but because they still show up in the directory tree, they can be retrieved relatively quickly if needed. And once retrieved, they can work their way back up the hierarchy of tiers.
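The tiering policy in the last two paragraphs amounts to a simple age-based mapping, with the cloud as the final, cheapest rung. The tier names and day cutoffs below are assumptions made up for the sketch; real thresholds are tuned per organization in the appliance's policy engine:

```python
def choose_tier(age_days: float) -> str:
    """Map a file's last-access age to a storage tier.
    Thresholds here are illustrative, not a vendor default."""
    if age_days <= 7:
        return "tier1-fast"   # hot data stays on the fastest storage
    if age_days <= 90:
        return "tier2-bulk"   # warm data moves to capacity storage
    return "cloud"            # cold data rolls off-site entirely

def on_retrieve(age_days: float) -> str:
    """Retrieval pulls a cloud-resident file back into the local
    hierarchy, from which it can age back out again."""
    tier = choose_tier(age_days)
    return "tier1-fast" if tier == "cloud" else tier
```

A real policy engine would also weigh access frequency and file size, but the shape of the decision is the same.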

The other major benefit of this strategic point of control is one of security. Since everything is presented to users as a single directory tree, security can be moved from the large number of NAS devices into the directory virtualization device. The effect is that the various NAS devices can be locked down so that they only communicate through the directory virtualization device, and all security can happen in one place. A group does not have to be maintained at multiple locations/levels/devices, and storage/security administrators need only learn a single UI for day-to-day operations.
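Centralizing security at the control point means one ACL table guards every backend, so a group is maintained in exactly one place. A toy sketch of the idea (paths and group names are invented for the example):

```python
# One ACL table at the virtualization layer, checked before a request
# is ever forwarded to a backend NAS device.
ACL = {
    "/shared/finance": {"finance-group", "audit-group"},
    "/shared/public":  {"all-staff"},
}

def authorize(vpath: str, user_groups) -> bool:
    """Check the request once, at the appliance. Backends are locked
    down to talk only to the appliance, so this is the sole gate."""
    for prefix, allowed in ACL.items():
        if vpath.startswith(prefix):
            return bool(allowed & set(user_groups))
    return False  # default deny for unlisted paths
```

Because the backends accept traffic only from the appliance, a change to this one table takes effect across every NAS device at once.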

With the growing amount of data being transferred to remote locations, there is also a security risk in Internet communications. Knowing that the cloud storage communication device you deploy – be it part of a directory virtualization tool, a WAN optimization tool, or a stand-alone cloud storage gateway – can encrypt the data it sends to the cloud will resolve many issues for you. First and foremost, it allows IT to send any data it needs to cloud storage without worrying about how well protected that data is in transit. Of nearly equal importance, encrypting outgoing data at the gateway lets you offload encryption from your servers. Since encryption is a heavily CPU-intensive operation, such offloading extends the useful life of servers by allowing them to service applications instead of ciphers. While machines have continued to follow Moore’s law and gotten faster, virtualization has added to the encryption burden by multiplying the number of encryption requests competing for the same physical CPUs. Moving this functionality off of VMs and physical servers and onto a device designed to handle encryption saves CPU cycles and either increases VM density or lets your servers do more.

Even though storage growth continues practically unabated, the options for dealing with both the increasing volume and securing the resulting storage against unauthorized eyes are expanding too. The advent of directory virtualization for NAS has improved security by making it possible to lock down storage access at the virtualization appliance, before the physical storage is even touched. Directory virtualization also allows organizations to put off costly storage purchases by spreading data more evenly across the available infrastructure. Cloud storage brought with it enhanced encryption to protect data sent to the cloud, and the various deduplication/compression schemes implemented at every layer of the storage network obfuscate, by default, the data they act upon. While obfuscation is not the same as security, it can protect against casual prying eyes, and when applied to data at rest, it can make deciphering data on physically stolen disks more difficult for would-be attackers.

The same is true with SAN encryption and compression of course; offloading these functions from the servers and onto purpose-built hardware available from most SAN vendors and many third-party vendors allows servers to focus on what they’re best at: serving up applications and data. While SAN virtualization has more caveats than NAS virtualization, it does allow for a certain amount of optimization and load balancing between SAN devices, which theoretically improves performance.

There is a lot going on in the storage space that can help resolve some of the more problematic issues of today – protecting data at rest, locking down data access, securely transferring some data to cloud storage vendors, and simplifying NAS infrastructure, to name a few. Taking advantage of some or all of this functionality will help you to better serve your customers, making your storage, and by extension your applications, more secure, fast, and available.
