Businesses run on data. With more data collection sources and opportunities than ever, coupled with advanced analytics that turn raw data into actionable information in real time, protecting data storage resources has never been more critical.

Data must be easy to access but stored securely enough to protect it from malicious attacks, machine failures or human errors that could jeopardize its integrity. Under any circumstances, those requirements might be tough to satisfy, but given the deluge of data that most companies must contend with, the task can seem insurmountable.

The good news is that the tools available to build an effective data storage management practice have improved considerably as data capacities have grown to petabyte scales. But it’s unlikely that any one technology or methodology will be enough, so a toolbox approach to storage management is often the best way to find the right fit for a specific environment.

Whatever components comprise the storage strategies in administrators’ toolkits, they must address two levels of management:

  • Physical layer. This layer includes all the physical devices that make up an organization’s data storage infrastructure, including arrays, drives, tape libraries, host bus adapters/network interface cards and storage switches. Some concerns related to storage hardware include capacity, performance and durability.
  • Data layer. At this level, it’s the data itself that must be managed, according to its importance to the business, any vulnerabilities and how to ensure its availability.

1. SRM software

Storage resource management (SRM) apps have been around for decades, but earlier iterations were too complex and unwieldy and often ended up as shelfware. Today, many of these applications have slimmed down and become less expensive and easier to install and use.

In some cases, storage array vendors have purchased SRM companies so they can add SRM functionality to their OSes. Standalone SRM apps are still available; examples include QStar Storage Reporter, IntelliMagic Vision for SAN and ManageEngine OpManager.

SRM is especially useful in large, mixed-vendor environments, where keeping track of many varied system components is essential to ensure they’re operating efficiently and capacity isn’t being wasted.
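Even without a full SRM suite, a lightweight capacity report can catch waste early. The sketch below is a minimal example, assuming a Linux-style host and a hand-maintained list of mount points backed by the arrays of interest; it isn't tied to any particular SRM product.

```python
import shutil

# Hypothetical mount points backed by the arrays we care about.
MOUNT_POINTS = ["/", "/mnt/array01", "/mnt/array02"]

# Flag anything above this utilization so capacity can be added or reclaimed.
ALERT_THRESHOLD = 0.80

def capacity_report(mount_points):
    """Print used/total capacity per mount point and flag hot spots."""
    for path in mount_points:
        try:
            usage = shutil.disk_usage(path)
        except FileNotFoundError:
            print(f"{path}: not mounted, skipping")
            continue
        pct_used = usage.used / usage.total
        flag = "  <-- over threshold" if pct_used >= ALERT_THRESHOLD else ""
        print(f"{path}: {usage.used / 1e12:.2f} TB of "
              f"{usage.total / 1e12:.2f} TB used ({pct_used:.0%}){flag}")

if __name__ == "__main__":
    capacity_report(MOUNT_POINTS)
```

A script like this is only a stopgap; a real SRM tool adds trending, forecasting and per-array health reporting on top of raw capacity numbers.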


2. System consolidation

Perhaps the biggest factor complicating storage management is the array sprawl that occurs as new storage systems are added to handle growing capacity needs. Storage systems have limited capacities, and it’s often easier to just add another unit than to replace the existing array with a higher-capacity model.

However, managing many separate systems can be burdensome, especially NAS systems, because file data is the fastest-growing type of data in most shops. Consolidating several storage systems into one larger unit makes management much easier, but it’s likely to require purchasing data migration tools or professional services, especially if the systems are from multiple vendors.


3. Multiprotocol storage arrays

Some vendors still make their customers choose between SAN arrays, which are best suited for block storage applications such as databases, and NAS systems, which handle unstructured data well.

But more vendors now offer multiprotocol arrays that can support both SAN and NAS connectivity and protocols and can be divvied up between the two according to specific needs. Multiprotocol is an excellent storage option, as having both types of storage in a single box can cut costs significantly and make block and file storage easier to manage.


4. Storage tiering

The concept of tiered storage has been around for a while. It never really caught on under its earlier names, hierarchical storage management and then information lifecycle management, but the idea is simple and can make managing data and storage systems much easier.

Tiering means putting data on the type of storage that’s appropriate to its importance to the company. Less important files can be stored on slower, less expensive disk systems, while frequently accessed, critical data could be kept on fast SSDs. Assigning tiers to different types of data and applying appropriate levels of data protection makes managing all kinds of data and storage systems easier.
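As a rough illustration of how a tiering policy might classify data, the sketch below sorts files into hypothetical tiers by last access time. The tier names and age thresholds are assumptions for illustration; a real deployment would rely on the array's or a data management tool's own tiering policies rather than a standalone script.

```python
import os
import time

# Hypothetical tier definitions: (tier name, maximum age in days since last access).
TIERS = [
    ("tier1-ssd", 30),        # hot data, accessed within the last month
    ("tier2-disk", 365),      # warm data, accessed within the last year
    ("tier3-archive", None),  # everything older is a candidate for archive
]

def classify(path):
    """Return the tier a file would land in based on its last access time."""
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    for tier, max_age in TIERS:
        if max_age is None or age_days <= max_age:
            return tier

def tier_summary(root):
    """Walk a directory tree and count bytes per tier."""
    totals = {tier: 0 for tier, _ in TIERS}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                totals[classify(full)] += os.path.getsize(full)
            except OSError:
                continue  # skip files that vanish or can't be read
    return totals

if __name__ == "__main__":
    # Placeholder path; note that access times may be unreliable on noatime mounts.
    for tier, size in tier_summary("/mnt/array01/projects").items():
        print(f"{tier}: {size / 1e9:.1f} GB")
```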


5. Deploy SSDs strategically

One of the knottiest storage management problems is how to configure a storage array for optimal performance. Over the years, techniques have been developed to squeeze every last bit of performance out of disk-based systems.

Short-stroking involves using only the fastest outer tracks of hard disks and spreading the data over many short-stroked drives. That approach delivered more performance, but it was hard to manage and wasted a lot of disk capacity.

Those tradeoffs can be avoided by adding SSDs to an array to handle the demands of high-performance apps. Although SSDs are still much more expensive than disk storage, strategic deployment can save money and make managing for performance nearly painless.


6. Hybrid cloud storage

Admins can use cloud storage to reduce on-site storage capacity and provide some management relief. Data stored in a cloud storage service requires very little oversight and practically no management. By moving less frequently accessed and lower value data to the cloud, storage admins will have more time to focus on higher priority data that requires high-performance storage.

Splitting storage between on-prem systems and the cloud isn’t complicated; there are several tools designed to manage hybrid environments and data migrations, such as NetApp Cloud Manager.
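To illustrate the data movement side, the sketch below uses the AWS boto3 SDK to push a local file into an S3 bucket in an infrequent-access storage class. The bucket name and file paths are placeholders, and dedicated hybrid cloud tools such as NetApp Cloud Manager handle this kind of movement, plus tracking and tiering data back, at much larger scale.

```python
import boto3  # AWS SDK for Python; pip install boto3

# Placeholder names; substitute a real bucket and source path.
BUCKET = "example-cold-data-bucket"
LOCAL_FILE = "/mnt/array01/reports/2019-sales.csv"
OBJECT_KEY = "cold/reports/2019-sales.csv"

s3 = boto3.client("s3")

# Upload directly into an infrequent-access storage class so rarely read
# data doesn't consume on-prem capacity or the more expensive S3 Standard tier.
s3.upload_file(
    LOCAL_FILE,
    BUCKET,
    OBJECT_KEY,
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
print(f"Moved {LOCAL_FILE} to s3://{BUCKET}/{OBJECT_KEY}")
```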


7. Opt for scale-out storage systems

If the company is considering a new storage array purchase, choosing a scale-out system can streamline management of that array and the rest of the storage environment.

Scale-out arrays let admins add capacity as needed, but admins can also add new storage controllers as they add capacity, which helps maintain performance levels. In contrast, scale-up systems might let admins add capacity, but they can’t add controllers, so as capacity grows, performance often dips.


8. Archive older data

In most organizations, much of the data stored on pricey storage arrays is old and rarely accessed. It’s expensive and adds management chores for data that doesn’t have much current utility. Archiving infrequently accessed data to cheaper media saves money and eliminates management headaches.

Old data can be archived to less expensive arrays that use high-capacity disk drives, tape or cloud archive services, which offer capacity at extremely low prices. An active archive using a cloud service or Linear Tape File System offers easy access to archived data if it’s needed.
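The sketch below shows the basic mechanics of an active archive in miniature: files untouched for a given period are bundled into a compressed tarball, and a manifest is written so the archive stays searchable. The cutoff, paths and manifest format are assumptions for illustration; production archives would typically rely on purpose-built archive software, LTFS or a cloud archive tier, and would verify copies before removing originals.

```python
import csv
import os
import tarfile
import time

SOURCE_DIR = "/mnt/array01/projects"               # placeholder source tree
ARCHIVE_TAR = "/mnt/archive/projects-2020.tar.gz"  # cheaper archive target
MANIFEST = "/mnt/archive/projects-2020.csv"
CUTOFF_DAYS = 3 * 365  # archive anything unmodified for roughly three years

def find_stale_files(root, cutoff_days):
    """Yield (path, size) for files whose modification time is older than the cutoff."""
    cutoff = time.time() - cutoff_days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            if st.st_mtime < cutoff:
                yield path, st.st_size

def archive(root, tar_path, manifest_path, cutoff_days):
    """Bundle stale files into a tarball and write a searchable manifest."""
    with tarfile.open(tar_path, "w:gz") as tar, \
         open(manifest_path, "w", newline="") as mf:
        writer = csv.writer(mf)
        writer.writerow(["path", "size_bytes"])
        for path, size in find_stale_files(root, cutoff_days):
            tar.add(path)
            writer.writerow([path, size])
            # A real tool would verify the archived copy before deleting the original.

if __name__ == "__main__":
    archive(SOURCE_DIR, ARCHIVE_TAR, MANIFEST, CUTOFF_DAYS)
```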


9. Find and eliminate orphan VMs

Server virtualization has transformed most data centers by enabling new server instances to be spun up as needed. Unfortunately, the ease with which virtual machines can be created often results in a lot of VMs that are no longer in use or have been abandoned. Those orphaned VMs still have an associated data store and add to management tasks, particularly data protection activities.

There are several ways to discover and discard orphaned VMs; for example, admins can use vCenter for VMware virtual servers or freely available scripts for Hyper-V and other virtual server environments.
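For VMware shops, a short script can surface candidates. The sketch below uses the open source pyVmomi SDK to list powered-off VMs and the datastore space they hold. The vCenter address and credentials are placeholders, and "powered off" is only a starting heuristic, so confirm a VM is truly abandoned before reclaiming its datastore.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect  # pip install pyvmomi
from pyVmomi import vim

# Placeholder connection details for a vCenter server.
VCENTER_HOST = "vcenter.example.local"
USERNAME = "administrator@vsphere.local"
PASSWORD = "change-me"

def list_powered_off_vms():
    """Print powered-off VMs and the datastore capacity they consume."""
    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host=VCENTER_HOST, user=USERNAME, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.summary.runtime.powerState == "poweredOff":
                committed_gb = vm.summary.storage.committed / 1e9
                print(f"{vm.summary.config.name}: {committed_gb:.1f} GB committed")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_powered_off_vms()
```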


10. Backup and deduping

Backup is probably the biggest data storage management headache in most shops. Use the backup app’s monitoring and logging features to ensure backups include all relevant data. Those same tools will also help find dormant servers, folders or files that are no longer in use and don’t need to be backed up.

Make sure all backup data is deduplicated — this can save a lot of disk capacity and make managing the backup data a little less onerous.
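To illustrate why deduplication saves so much backup capacity, the sketch below implements a toy content-hash store: data is split into fixed-size chunks, each chunk is hashed, and a chunk already seen is stored only once. Real backup dedupe engines use variable-length chunking and far more robust indexing; this is only a minimal model of the idea, and the file paths are placeholders.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB fixed-size chunks for this toy example

class DedupeStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}          # sha256 digest -> chunk bytes
        self.logical_bytes = 0    # bytes presented to the store
        self.physical_bytes = 0   # bytes actually kept after dedupe

    def ingest(self, path):
        """Read a file in chunks, storing only chunks not seen before."""
        recipe = []  # ordered digests needed to rebuild this file
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                self.logical_bytes += len(chunk)
                if digest not in self.chunks:
                    self.chunks[digest] = chunk
                    self.physical_bytes += len(chunk)
                recipe.append(digest)
        return recipe

    def ratio(self):
        return self.logical_bytes / max(self.physical_bytes, 1)

if __name__ == "__main__":
    store = DedupeStore()
    for backup in ["/backups/mon/db.dump", "/backups/tue/db.dump"]:  # placeholder paths
        store.ingest(backup)
    print(f"Dedupe ratio: {store.ratio():.1f}:1")
```

Because nightly backups of the same servers change little from one run to the next, most chunks repeat, which is why deduplicated backup targets routinely report high reduction ratios.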


11. Ensure business continuity with DRaaS

Backup is often a headache to manage, but disaster recovery can be an even bigger challenge. For effective disaster recovery, an organization must maintain copies of critical data off site, in addition to maintaining a remote site with servers ready to run in the event of an emergency. It’s so difficult and expensive to set up and manage that many companies lack a disaster recovery plan.

Disaster recovery as a service (DRaaS) makes it possible for even small companies to put a solid plan in place. Critical data and the associated VMs are copied to the cloud DRaaS site, and when disaster strikes, the VMs are spun up with current data so the business can continue operating at normal or near-normal levels.


12. Consider object storage

Object storage is one of the newest array types available. Object storage is like file storage, except it doesn’t have the file system limitations that NAS systems typically do.

Object storage uses a flat namespace that can easily expand to accommodate millions, even billions, of objects, such as files, multimedia and other data elements. This makes object storage a good choice for storing a lot of data, such as an archive or a data warehouse. And the ability to add custom metadata to stored objects makes managing all that data much less complicated.
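As a small example of custom metadata in practice, the sketch below uses boto3 against S3 (or an S3-compatible object store) to tag an object with user-defined metadata at write time and read it back later without downloading the object. The bucket, key, file path and metadata fields are placeholders.

```python
import boto3  # works with S3 and most S3-compatible object stores

BUCKET = "example-research-data"   # placeholder bucket
KEY = "studies/2021/trial-042.parquet"

s3 = boto3.client("s3")

# Store an object with user-defined metadata describing what it is,
# so it can be managed later without parsing paths or file names.
with open("/mnt/array01/exports/trial-042.parquet", "rb") as body:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=body,
        Metadata={
            "project": "trial-042",
            "retention": "7y",
            "owner": "research-team",
        },
    )

# Later, read the metadata back without retrieving the object itself.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print(head["Metadata"])  # e.g. {'project': 'trial-042', 'retention': '7y', ...}
```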

There are many other technologies and techniques that can be employed to help relieve the stress of managing a data storage infrastructure. Depending on the environment, these might include network upgrades and monitoring systems, hyper-converged storage systems and container management.

And if the company operates in a regulated environment, tools to help manage and ensure compliance with data-related rules could be essential.

Chi Corporation partners with a variety of storage manufacturers and vendors, and our experience and certified engineering team can help you decide on the best storage strategy for your organization. Contact Chi today.

Originally published on TechTarget.com, by Rich Castagna, May 17, 2021. 
