Tiered storage has traditionally been thought of in terms of the media the data is stored on, with the highest value data kept on the fastest storage media, such as flash SSDs (solid-state drives), and the lowest value data on media like tape drives. But in a hybrid IT world, we can also think of storage in terms of where the data is stored, with the most important data in a high-performance environment such as a highly connected colocation data centre and the least critical and active data in a public cloud. Either approach has the same outcome: the highest value data is stored in the most performant and therefore most expensive media or location, and the least valuable data in the least performant and lowest cost option.

Data classification has tended to focus on three tiers of data storage: primary, secondary and tertiary – otherwise known as hot, warm and cold. Some organisations have gone a step further by splitting primary into mission-critical and primary data. Complex systems can encompass five or more tiers, while future storage systems could have just two: a flash tier for primary data, with all backup and archive data in the cloud. But it's not the number of tiers the system has that matters. What's important is that the data in the system is accurately classified, so it is assigned to the storage tier that best fits the business tasks it performs. Data needs to be categorised by its value to the business, and ranked by how quickly and frequently it needs to be accessed by users and applications.

Let's take a four-tier system as an example.

Mission-critical (Tier 0) data supports critical, high-performance workloads that require zero downtime and minimal latency. Performance demands outweigh cost considerations; it is the fastest and most expensive layer in the storage hierarchy.

Primary (Tier 1, or hot) data is in constant use to support applications that are essential to the organisation's everyday operations. The data needs to be continually accessible and so requires a high-performance storage environment, but it can tolerate higher latency and lower throughput than Tier 0 workloads. Cost is balanced against the data's needs.

Secondary (Tier 2, or warm) data is seldom used but needs to remain accessible, like old emails or historical financial information. Tier 2 storage can also support reporting and analytics, and serve as a backup for business continuity and disaster recovery. It needs large amounts of capacity over a long period of time, and must be highly reliable and secure, but latency and quick data access are not an issue. Storage cost is more important than performance.

Tertiary (Tier 3, or cold) data is an archive tier that sits behind the backup, holding data that is used rarely, if ever. The data will have some form of strategic value, for example to support regulatory requirements such as compliance or for historical analysis. Cost is the overriding factor for storage.

The four tiers have wildly different capacity, performance and cost characteristics, but they work together to deliver a system that meets the organisation's specific needs, optimising performance while keeping a lid on costs. Application performance improves because primary storage is freed up for the most demanding applications, while moving secondary and tertiary data – typically 80-90% of an organisation's data – onto cheaper storage media lowers costs. In a well thought out and implemented tiered storage system, costs and storage performance requirements are perfectly aligned.
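To make the classification step concrete, here is a minimal sketch of how a simple tiering rule might look in code. The field names, thresholds and the classify_record function are illustrative assumptions, not any particular product's policy engine; a real system would weigh business value, access patterns, compliance obligations and cost models in far more detail.

```python
# Illustrative sketch of a tiering rule. The thresholds and field names
# below are assumptions chosen for the example, not a reference implementation.
from dataclasses import dataclass


@dataclass
class DataRecord:
    name: str
    business_critical: bool    # does downtime directly stop the business?
    accesses_per_day: float    # how often users and applications read it
    retention_only: bool       # kept purely for compliance or archive


def classify_record(record: DataRecord) -> str:
    """Assign a storage tier based on business value and access frequency."""
    if record.retention_only:
        return "Tier 3 (cold/archive)"       # cost is the overriding factor
    if record.business_critical and record.accesses_per_day > 1000:
        return "Tier 0 (mission-critical)"   # performance outweighs cost
    if record.accesses_per_day > 10:
        return "Tier 1 (hot/primary)"        # in constant everyday use
    return "Tier 2 (warm/secondary)"         # accessible, but rarely used


if __name__ == "__main__":
    examples = [
        DataRecord("order-processing database", True, 50_000, False),
        DataRecord("CRM application data", False, 500, False),
        DataRecord("last year's emails", False, 0.2, False),
        DataRecord("seven-year compliance archive", False, 0.0, True),
    ]
    for rec in examples:
        print(f"{rec.name}: {classify_record(rec)}")
```

The point of the sketch is that the rule is explicit and repeatable: once the value and access criteria are agreed, every piece of data lands in a tier by policy rather than by guesswork.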
Of course, wherever the data is stored it will ultimately reside in a data centre, whether that is mission-critical data hosted in a secure, efficient and highly connected facility, or lower-tier cloud-based storage that ultimately runs on virtualised servers in a physical building. However, while cloud-based storage and backup can be efficient and low cost, it may not be the lowest-cost option for archive data, especially in large volumes. The best fit for that could be tape-based storage in a colocation data centre. Determining the best location for each data tier, and continuously monitoring performance and costs, are as important to the storage system as classifying and allocating the data itself.
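As a rough illustration of why large archives can work out cheaper on tape, the sketch below compares an estimated monthly bill for the two options. Every figure (the per-GB cloud price, retrieval fee, tape media cost and colocation charge) is a placeholder assumption used purely for the arithmetic, not a quote from any provider; real prices vary widely and should be checked against current rate cards.

```python
# Back-of-the-envelope archive cost comparison. All prices below are
# placeholder assumptions for illustration only, not real provider rates.

ARCHIVE_TB = 500                     # volume of cold/archive data to store
RESTORE_FRACTION_PER_MONTH = 0.01    # share of the archive restored each month

# Assumed cloud archive-tier pricing (illustrative figures)
CLOUD_STORAGE_PER_GB_MONTH = 0.004   # $/GB-month
CLOUD_RETRIEVAL_PER_GB = 0.02        # $/GB retrieved

# Assumed tape-in-colocation pricing (illustrative figures)
TAPE_MEDIA_PER_GB_MONTH = 0.001      # amortised media and library, $/GB-month
COLO_FIXED_PER_MONTH = 800.0         # rack space, power and handling


def monthly_cloud_cost(tb: float) -> float:
    gb = tb * 1000
    storage = gb * CLOUD_STORAGE_PER_GB_MONTH
    retrieval = gb * RESTORE_FRACTION_PER_MONTH * CLOUD_RETRIEVAL_PER_GB
    return storage + retrieval


def monthly_tape_cost(tb: float) -> float:
    gb = tb * 1000
    return gb * TAPE_MEDIA_PER_GB_MONTH + COLO_FIXED_PER_MONTH


if __name__ == "__main__":
    print(f"Cloud archive: ${monthly_cloud_cost(ARCHIVE_TB):,.0f} per month")
    print(f"Tape in colo:  ${monthly_tape_cost(ARCHIVE_TB):,.0f} per month")
```

With these assumed rates the fixed colocation cost dominates at small volumes and cloud wins, while at hundreds of terabytes tape pulls ahead; the crossover point depends entirely on the figures plugged in, which is exactly why continuously monitoring cost and performance matters as much as the initial classification.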