I like Mexican food. But several years ago, I started noticing the same distribution trucks delivering ingredients to all of the local restaurants: the same cheese, rice, tortillas, beans, etc. So, despite the rearranging of ingredients (“The #22 has beans/beef/cheese on top of the tortilla, but the #39 puts them inside it.”), my dining experience at most of those restaurants was destined for “sameness.”

What in the world does this have to do with data storage?

Well, my experience with Mexican restaurants seems to resemble what’s going on with data center storage manufacturers. Over the years, they have introduced a LOT of new and improved “ingredients” that eventually became part of every vendor’s menu:

  • 10k RPM hard drives, then 15k; then SSDs, NVMe, etc.
  • Networks running at 100 Mb → 1 Gig → 10 Gig → 40 Gig → 100 Gig (and the parallel Fibre Channel arms race)
  • Countless variations of CPU speeds and core counts
  • Deduplication and compression
  • Hot- and cold-tiering
  • Compute and storage bundled together
  • Write coalescing

These are just a handful of the “unique features” that manufacturers have heralded as The Thing that solves those perennial IT headaches and makes our end-users rejoice.

Most of these features – storage ingredients, if you will – have provided some degree of temporary performance gains, but storage administration has largely remained the same: teams still spend days or weeks planning and installing new systems in data centers; hours (sometimes days) working on performance problems; reactively dealing with application changes by coordinating with everyone from DBAs and developers to network and security teams; and wasting endless hours planning for future growth.

In other words, not one of the features listed above (or any combination of them) has fundamentally improved the processes that IT uses to deliver storage services to end-users or the overall experience for either group.

  • “Yes, we can clone a VM in 30 seconds, but we’ll need 3 weeks to figure out where to put it.”
  • “The test VM on our storage proof of concept did 50,000 IOPS! But no, we don’t know why your database is slow 6 months later. Sorry.”

You might be thinking: “OK, maybe cloud-based analytics holds the key to simplified storage management.” Indeed, Waze offers cloud-based analytics, too, but I’d be a damn fool to take my hands off the wheel while doing 80 mph down the highway. In the same way, cloud-based analytics provide better insights in a lot of cases, but IT is still responsible for acting on them.

“Yeah, but what about automation and orchestration platforms?” To be sure, those technologies have made processes easier to a degree, but they require calibration and training and are sensitive to changes within the infrastructure (i.e., they’re brittle, so policies can easily break) – and none of them are truly no-cost, even if they’re “free.” Nor are they native to the storage platforms themselves (think “smarter mortar” vs. “smarter building blocks”).

Keep in mind that the baseline expectations for IT have also shifted dramatically in the last 10+ years: high customer (co-worker) satisfaction is the bare minimum, and services must be delivered with the speed, consistency, and ease of public cloud or SaaS offerings – all while optimizing costs. These are today’s enterprise IT table stakes. In addition, more mature technology teams are viewed as active innovation partners, aligned with organization-level initiatives to drive revenue.

The bottom line is that IT has to deliver services more effectively to be seen as a capable innovation partner within the organization. Given these requirements, we invariably see an IT shortfall, even with “new and improved” storage ingredients. “Faster storage” has provided only incremental, temporary gains, with no significant impact on the processes that support those services or the user experience.

Well, here’s some food for thought: what if you went to sleep tonight and woke up tomorrow knowing that your storage was optimizing itself? That your team wasn’t going to be taken off-task today by vague, anecdotal user reports of “application slowness”? That your volume of performance-related tickets would drop by 90%? That your mean-time-to-resolution for the remaining tickets would drop to minutes, instead of hours or days? That your next storage system could be installed in minutes by a single IT generalist – not in weeks by a team of specialists?

Tintri designed the VMstore Intelligent Infrastructure system to deliver the same operational improvements to storage that VMware and its hypervisor siblings have provided to CPU and memory for the last 20 years. You could even say that VMstore is a storage hypervisor – transformational technology with enormous differentiation in terms of intellectual property and business value. VMstore integrates with hypervisors and database servers and uses groundbreaking innovations like “proportional queuing” to proactively, autonomously, and continuously optimize performance (think “traffic cop” – remind you of anything else? *cough* hypervisors *cough*). These innovations enable both auto-QoS and manual QoS (min & max) at a per-VM and per-DB level, independent of location.
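Tintri doesn’t publish the internals of proportional queuing, but the “traffic cop” idea is easy to picture. Here is a minimal, hypothetical sketch – not Tintri’s actual algorithm – of a QoS allocator that divides a system’s total IOPS among VMs: every VM is first guaranteed its configured minimum, and leftover capacity is then shared out without letting anyone exceed its maximum or its actual demand. All names and numbers below are illustrative assumptions.

```python
# Hypothetical sketch of per-VM QoS allocation with min/max IOPS bounds.
# This is NOT Tintri's proportional-queuing implementation; it only
# illustrates the general "traffic cop" concept described above.

def allocate_iops(total_iops, vms):
    """vms: dict of name -> {"min": floor, "max": ceiling, "demand": requested}.
    Returns a dict of name -> granted IOPS."""
    # Step 1: guarantee every VM its configured minimum.
    grants = {name: cfg["min"] for name, cfg in vms.items()}
    remaining = total_iops - sum(grants.values())

    # Step 2: hand out leftover capacity in equal shares to VMs that
    # still want more, capped by each VM's max and its actual demand.
    while remaining > 0:
        hungry = {n: min(c["max"], c["demand"]) - grants[n]
                  for n, c in vms.items()
                  if min(c["max"], c["demand"]) > grants[n]}
        if not hungry:
            break  # everyone is satisfied or capped
        share = remaining / len(hungry)
        gave = 0
        for n, gap in hungry.items():
            extra = min(gap, share)
            grants[n] += extra
            gave += extra
        if gave == 0:
            break
        remaining -= gave
    return grants


# Illustrative workload mix (made-up numbers):
demo = {
    "db-prod":  {"min": 5000, "max": 20000, "demand": 15000},
    "vdi-pool": {"min": 1000, "max": 8000,  "demand": 12000},
    "test-vm":  {"min": 0,    "max": 2000,  "demand": 500},
}
print(allocate_iops(30000, demo))
```

The point of the sketch is the contract, not the arithmetic: a noisy neighbor (here, `vdi-pool` demanding 12,000 IOPS) gets clipped at its ceiling, while `db-prod` never falls below its floor – which is exactly the per-VM/per-DB min-and-max behavior the paragraph above describes.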

What’s the business benefit? VMstore is proven to save up to 95% of your IT team’s storage administration time, with autonomous process and SLA management – not to mention real-time and predictive AI-driven analytics – that are native to the storage platform itself.

So the next time you’re considering a storage vendor, just remember—the #22 and the #39 are probably the same thing. Tintri, on the other hand, makes it really easy to order off-menu and experience something truly different – and better.

Watch this video to learn more.


Originally published on the Tintri Blog by Frank Shepard, November 2, 2020