How to Avoid the Storage Money Pit
Within the enterprise, about a quarter of all IT spend goes toward data storage (NaviSite 2013). Everyone knows the story by now: an unprecedented explosion of data growth over the last decade has driven more investment in shared storage than ever before, and organizations are fighting to keep that part of the data center from becoming a money pit. Every organization is different, and whether you're a Director of IT, CIO, CTO, VP of IT, or Manager, you have your own obstacles and variables to weigh when deciding how to save money on storage. That said, one factor you should certainly consider is whether you're on the right storage platform.
You could be in a few different positions when it comes to your storage platform and the technologies that are already in place.
Maybe you’re on the ideal storage platform, but the cost of growing is too much for the business to bear. Or, worse, maybe you’ve inherited a platform that isn’t ideal and certainly wouldn’t have been your first choice, but the numerous hard and soft costs of switching make doing so impossible.
This can be frustrating, especially each time you have to make another purchase, but you definitely have options to give yourself some cost relief.
The first option we’d recommend is manual tiering. We’ll use EMC as an example, because they are the market leader and typically one of the most expensive providers. Say you have a VNX or VNX2 storage solution, and you’re taking advantage of FAST (EMC’s Fully Automated Storage Tiering) by investing in NL-SAS, 15K SAS, and SSD capacity. FAST moves the hottest data to the top SSD tier for you, so growth can be as simple as buying a tray of “cheap” NL-SAS disks, right? We don’t necessarily think so. In this case, you’re still paying EMC pricing for slow, fat disks that give you capacity but no performance increase.
Instead, what if you purchased another system as part of a manual tiering approach? Did you know there are storage solutions out there that can provide 72 terabytes raw in a 2U controller, for under $25,000? This could give you a target for non-performance data and even disk-based backups, and it would free up expensive space on your EMC.
Of course, it’s possible that your team simply loves the management interface of their EMC, NetApp, or IBM storage device, and they don’t want to learn another technology. You could leverage the pre-owned market to buy genuine-branded storage that is produced by these manufacturers but sold outside of the standard supply chain, for functions like Tier 2 and Tier 3 data, test/dev, disk-based backup, or even DR. In many cases, you can also add capacity to your existing storage arrays by buying pre-owned expansion shelves and disks.
Finally, don’t forget the benefits of third-party maintenance and when it makes sense to use it. When you buy a machine, you usually get three years of manufacturer support with it. After that, the price to renew support is astronomically higher, because the manufacturer wants to make it more attractive to replace your three-year-old system with the newest technology. This approach is good for them, but painful for you.
You can extend the life of that storage device by another 1, 2, or 3 years by using third-party maintenance. Avoid the hard cost of paying for manufacturer support and the soft cost (albeit painful) of migrating to another system. Also, while manufacturers stop supporting systems entirely when they become “legacy” or “end of life,” third-party maintenance providers continue to support these environments.
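To make the maintenance trade-off concrete, here is a minimal sketch of the comparison described above. Every dollar figure is a hypothetical placeholder, not a real OEM or third-party quote; substitute the numbers from your own renewal proposals.

```python
# Illustrative comparison of support options after the initial three-year
# manufacturer warranty expires. All dollar figures are hypothetical
# assumptions for illustration only.

OEM_RENEWAL_PER_YEAR = 30_000   # assumed OEM post-warranty support quote
TPM_PER_YEAR = 12_000           # assumed third-party maintenance quote
MIGRATION_SOFT_COST = 50_000    # assumed staff time/risk of a forced refresh

def renewal_cost(per_year: float, years: int) -> float:
    """Total support cost over the extension period."""
    return per_year * years

years_extended = 3
oem = renewal_cost(OEM_RENEWAL_PER_YEAR, years_extended)
tpm = renewal_cost(TPM_PER_YEAR, years_extended)
savings = oem - tpm

print(f"OEM renewal for {years_extended} more years: ${oem:,.0f}")
print(f"Third-party maintenance:                 ${tpm:,.0f}")
print(f"Hard-cost savings:                       ${savings:,.0f}")
print(f"Plus avoided migration soft cost:        ${MIGRATION_SOFT_COST:,.0f}")
```

With these assumed figures, the third-party route saves $54,000 in hard support costs over three years, before counting the avoided soft cost of a forced migration.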
Maybe you’re a smaller organization that’s looking for your first SAN. You know it will be the foundation of your infrastructure, which means there’s a lot of pressure to get it right on the front end.
Many people in this situation go to EMC and NetApp, the two market leaders, and have them duke it out price-wise until one gives away enough to win their business. Honestly, it’s not a horrible idea, but it’s not always the best idea. For one, it’s short-sighted, and the TCO could be much greater than anticipated. Secondly, it doesn’t necessarily take into account where storage is heading.
When choosing a new storage solution, keep the following in mind:
Your growth plan: How much will it cost to grow with each solution? How scalable is each solution, and how easy is it to scale?
Your flash requirements: Does your enterprise require flash (or will it), and how well does each solution incorporate flash? Is it built into the design, or bolted onto an older architecture?
Suppose you took the bidding-war approach mentioned above. You may get a wonderful price from NetApp, but once the system is operating in your data center, the cost of additional capacity and renewed maintenance is almost always much, much higher.
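The growth-plan question lends itself to a simple model. The sketch below compares two hypothetical vendors over five years: one with a steep up-front discount but expensive expansion and renewals, and one with a higher sticker price but cheaper growth. All figures are invented assumptions, there only to show why the discounted acquisition price alone can mislead.

```python
# Rough five-year TCO sketch. A deep up-front discount can be outweighed by
# higher per-TB expansion pricing and post-warranty maintenance renewals.
# Every number here is a hypothetical assumption for illustration only.

def five_year_tco(acquisition, expansion_per_tb, tb_added_per_year,
                  annual_maintenance_after_warranty,
                  warranty_years=3, horizon=5):
    """Acquisition + capacity growth + post-warranty support over the horizon."""
    expansion = expansion_per_tb * tb_added_per_year * horizon
    maintenance = annual_maintenance_after_warranty * max(0, horizon - warranty_years)
    return acquisition + expansion + maintenance

# Vendor A: aggressive discount up front, expensive to grow (assumed figures)
vendor_a = five_year_tco(acquisition=80_000, expansion_per_tb=1_500,
                         tb_added_per_year=20,
                         annual_maintenance_after_warranty=25_000)

# Vendor B: higher sticker price, cheaper expansion and renewals (assumed)
vendor_b = five_year_tco(acquisition=110_000, expansion_per_tb=600,
                         tb_added_per_year=20,
                         annual_maintenance_after_warranty=10_000)

print(f"Vendor A 5-year TCO: ${vendor_a:,.0f}")
print(f"Vendor B 5-year TCO: ${vendor_b:,.0f}")
```

Under these assumptions, the vendor that "lost" the initial bidding war by $30,000 ends up $90,000 cheaper over five years, which is exactly the short-sightedness the bidding-war approach invites.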
EMC and NetApp have been around for a while, but that longevity has a downside. As flash drives (solid-state drives) became increasingly necessary in the enterprise, those two juggernauts responded first by adding flash drives to their existing products as a “hybrid” approach, and later by buying all-flash technologies.
Many companies can benefit from hybrid players like Nimble Storage, one of the fastest-growing storage products in history, as well as all-flash companies like Pure Storage and SolidFire. One of my reps told me the other day that he talked to a customer who had replaced NetApp with Nimble. When the rep asked him how he liked Nimble, he responded with, “It saved our lives.” This had nothing to do with cost (although it did save them money) and everything to do with improved performance in their Oracle environment.
Again, there’s no question that storage can be a money pit. If you haven’t avoided the pit altogether, you’re like most organizations. However, through third-party maintenance, pre-owned hardware, or strategic investments in new technology, you can help ensure that your IT spend never falls into one.