It's a little-known fact, but software developers are costing enterprises millions of dollars, and I don't think in many cases either party realizes it. I am not referring to the actual purchase cost of the programs and applications, or even the resulting support costs. Those are easily calculated and can be hard-bounded by budgets. But what of the resulting costs of the facility in which the software resides?
The Tier System introduced by the Uptime Institute was an important step for our industry in that it gave us a common language, or nomenclature, in which to begin having a dialog about the characteristics of the facilities being built. It created formal definitions and classifications from a technical perspective that grouped redundancy and resiliency targets, and ultimately defined a hierarchy in which to talk about facilities designed to those targets. For its time it was revolutionary, and to a large degree the body of work is still relevant today.
There is a lot of criticism that its relevancy is fading fast due to the model's greatest weakness: its lack of significant treatment of the application. The basic premise of the Tier System is essentially to take your most restrictive and constrained application requirements (i.e., the application that's least robust) and augment that resiliency with infrastructure and what I call big iron. If only 5% of your applications are this restrictive, then the other 95% of your applications, which might be able to live with less resiliency, will still reside in the castle built for the minority of needs. But before you call out an indictment of the Uptime Institute or this "most restrictive" design approach, you must first look at your own organization. The Uptime Institute was coming at this from a purely facilities perspective. The mysterious workload and wizardry of the application is a world mostly foreign to them. Ask yourself this question: 'In my organization, how often do IT and facilities talk to one another about end-to-end requirements?' My guess, based on asking this question hundreds of times of customers and colleagues, ranges from not often to not at all. But the winds of change are starting to blow.
In fact, I think the general assault on the Tier System really represents a maturing of the industry, a willingness to look at our problem space with more combined wisdom. I often laughed at the fact that human nature (or at least management human nature) used to hold a belief that a Tier 4 data center was better than a Tier 2 data center, effectively because the number was higher and it was built with more redundancy. More redundancy essentially equaled a better facility. A company might not have had the need for that level of physical systems redundancy (if one were to look at it from an application perspective), but Tier 4 was better than Tier 3, therefore we should build the best. It's not better, just different.
By the way, that's not a myth that the design firms and construction firms were all that interested in dispelling, either. Besides Tier 4 having the higher number and more redundancy, it also cost more to build, required significantly more engineering, and took longer to work out the kinks. So the myth of Tier 4 being the best has propagated for quite a long time. I'll say it again: it's not better, it's just different.
One of the benefits of the recent economic downturn (there are not many, I know) is that the definition of 'better' is starting to change. With capital budgets frozen or shrinking, the willingness of enterprises to re-define 'better' is also changing significantly. Better today means a smarter, more economical approach. This has given rise to the boom in the modular data center approach, and it's not surprising that this approach begins with what I call an application-level inventory.
This application-level inventory looks specifically at the makeup and resiliency of the software and applications within the data center environments. Does this application need the level of physical fault tolerance that my enterprise CRM needs? Do servers that support testing or internal labs need the same level of redundancy? This is the right behavior, and the one that I would argue should have been used from the beginning. The data center doesn't drive the software; it's the software that drives the data center.
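To make the idea concrete, an application-level inventory can be as simple as tagging each application with the resiliency tier its own requirements actually justify, then looking at the mix. Here is a minimal sketch in Python; the application names and tier assignments are purely illustrative assumptions, not a real inventory.

```python
from collections import Counter

# Hypothetical inventory: each application tagged with the tier its
# own requirements justify (names and numbers are illustrative only).
inventory = {
    "enterprise-crm":  4,  # revenue-critical, needs full fault tolerance
    "financials":      4,
    "intranet-portal": 2,
    "reporting":       2,
    "test-lab":        1,  # can tolerate outages entirely
    "build-servers":   1,
}

def tier_mix(apps):
    """Return the fraction of applications that require each tier."""
    counts = Counter(apps.values())
    total = len(apps)
    return {tier: counts[tier] / total for tier in sorted(counts)}

mix = tier_mix(inventory)

# A "most restrictive" build sizes the entire facility for the maximum
# tier, even when only a minority of applications actually need it.
most_restrictive = max(inventory.values())
tier4_share = mix[most_restrictive]
```

Even this toy version makes the castle-for-the-minority problem visible: if only a third of the portfolio justifies Tier 4, the other two-thirds are paying for redundancy they don't need.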
One interesting and good side effect of this is that enterprise firms are now pushing harder on the software development firms. They are beginning to ask some very interesting questions that the software providers have never been asked before. For example, I sat in one meeting where an end customer asked their financial systems application provider a series of questions on the inter-server latency requirements and transaction timeout lengths for database access across their solution suite. The reason behind this line of questioning was a setup for the next series of questions. Once the numbers were provided, it became abundantly clear that this application would only truly work from one location, from one data center, and could not be redundant across multiple facilities. This led to questions about the provider's intentions to build more geo-diverse, extra-facility capabilities into their product. I am now even seeing these questions in official Requests for Information (RFIs) and Requests for Proposal (RFPs). The market is maturing and is starting to ask an important question: why should your sub-million dollar (or euro) software application drive tens of millions in capital investment by me? Why aren't you architecting your software to solve this issue? The power of software can be brought to bear to solve this issue, and my money is on this becoming a real battlefield in software development in the coming years.
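The arithmetic behind that line of questioning is straightforward, which is exactly why it is so damning. A rough sketch of the feasibility check, with all numbers being illustrative assumptions rather than any vendor's actual figures:

```python
def can_span_sites(app_max_latency_ms: float,
                   txn_timeout_ms: float,
                   inter_dc_rtt_ms: float,
                   round_trips_per_txn: int) -> bool:
    """Can an application be stretched across two facilities?

    The inter-data-center round-trip time must fit inside the
    application's tolerated inter-server latency, and the network
    waits accumulated over a transaction's chatter must fit inside
    its database transaction timeout.
    """
    if inter_dc_rtt_ms > app_max_latency_ms:
        return False
    return round_trips_per_txn * inter_dc_rtt_ms <= txn_timeout_ms

# A chatty financial application that tolerates 2 ms between servers
# cannot span two facilities 40 ms apart: it is pinned to one data
# center, and no amount of facility redundancy changes that.
pinned_to_one_site = not can_span_sites(
    app_max_latency_ms=2.0,
    txn_timeout_ms=500.0,
    inter_dc_rtt_ms=40.0,
    round_trips_per_txn=20,
)
```

Once the customer has the latency and timeout numbers in hand, this is the two-line calculation that tells them whether geo-redundancy is even on the table.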
Blending software expertise with operational and facility knowledge will, in my opinion, be at the center of a whole new train of software development, one that really doesn't exist today. Given the dollar amounts involved, I believe it will be a very impactful and fruitful line of development as well. But it has a long way to go. Most programmers coming out of universities today rarely question the impact of their code outside of the functions they are providing, and the number of colleges and universities that teach a holistic approach can be counted on one hand worldwide. But that's up a finger or two from last year, so I am hopeful.
Regardless, while there will continue to be work on data center technologies at the physical layer, there is a looming body of work yet to be tackled facing the development community. Companies like Oracle, Microsoft, SAP, and hosts of others will be thrust into the fray to solve these issues as well. If they fail to adapt to the changing face and economics of the data center, they may just find themselves as an interesting footnote in data center texts of the future.