Filed under: Convergence, Data Center, Disaster Recovery Planning, Outsourcing, Risk Mitigation | Tags: Uptime Institute
The Uptime Institute developed a tiered classification approach to data center site infrastructure functionality and high availability, addressing the need for a common benchmarking standard in an area that, up to that point, had largely been judged by opinion and conjecture. The system has been in practice since 1995 and is often cited by enterprises and co-location/managed hosting service providers to tout the robustness of their data centers. The tiers range from Tier I (Basic Data Center), where there is a single path for power and cooling distribution without redundant components, providing 99.671% availability, to Tier IV (Fault Tolerant), where the site infrastructure has the capacity and capability to permit any planned or unplanned activity without disruption to the critical load, providing 99.995% site availability.
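To make those availability percentages concrete, the following sketch converts them into annual downtime. The Tier I and Tier IV figures come from the text above; the Tier II and Tier III figures are the commonly cited Uptime Institute values, added here for context and flagged as assumptions.

```python
# Convert tier availability percentages into annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

tiers = {
    "Tier I":   99.671,  # from the tier definitions quoted above
    "Tier II":  99.741,  # commonly cited figure (assumption)
    "Tier III": 99.982,  # commonly cited figure (assumption)
    "Tier IV":  99.995,  # from the tier definitions quoted above
}

for tier, availability in tiers.items():
    downtime_min = (100 - availability) / 100 * MINUTES_PER_YEAR
    print(f"{tier}: {availability}% availability -> "
          f"~{downtime_min:.0f} min (~{downtime_min / 60:.1f} h) downtime/year")
```

Run this way, the gap between the extremes is stark: Tier I allows roughly 28.8 hours of downtime per year, while Tier IV allows only about 26 minutes.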
The Uptime Institute has recently asserted two things in its leadership role in this area: there is no such thing as "almost Tier III" or "Tier II+"; you are either Tier III or you are not adhering to the strict definition. And you must be "certified" by the Uptime Institute's certification body, or by an Uptime Institute trained and certified consultant, to refer to your data center as adhering to one of these classification levels.
I tend to think that the Uptime Institute's tier classification has become a de facto standard, and it is a little late and disingenuous to assert control now over using the term to describe a data center. It is telling that the Uptime Institute reports that only "two dozen" data centers have had their tier rating certified.
This type of rating should be within the purview of an international standards body, not an organization, even a not-for-profit one, that stands to benefit financially from certification. Failing the strict definition of a particular tier level, such as the difference between Tier II and Tier III, does not necessarily mean the data center falls short on redundancy in other important areas. There is no weighting of fail-over and redundancy measures; it is all or nothing.
While I am not advocating 'shades of gray' when it comes to building a robust infrastructure, many factors come into play when evaluating the availability of that infrastructure. And all of it is for naught if the application architecture, the sole reason for having a data center in the first place, is not built with sufficient resiliency and failover. No Tier IV data center is going to save a poorly architected application.