May 2002

The DBA Corner
by Craig S. Mullins  

How Much Availability is Enough?

Availability is usually discussed in terms of the percentage of total time that a service needs to be up and running. For example, a system with 99% availability will be up and running 99% of the time and down -- or unavailable -- 1% of the time. DBAs are constantly being pushed to increase database availability. Availability is crucial for databases because applications rely on the data stored in them -- if the database is down, so are all of the applications that read and modify that database.

Another term used to define availability is mean time between failures, or MTBF. More accurately, MTBF describes reliability rather than availability, but reliability has a definite impact on availability: in general, a more reliable system will also be more available. Certain aspects of reliability are beyond the control of technicians. A shoddily built hardware or software component will eventually fail even with diligent monitoring and maintenance. But other aspects of reliability can be controlled. Proper system and database administration techniques and solutions can be deployed to increase reliability. For example, a system properly secured with a firewall and automated anti-virus protection is much more reliable than one that does not use such software and techniques.

But just how much availability is enough? In this Internet age the push to provide never-ending uptime continues unabated. Stretched to the ultimate, never-ending uptime translates into an entire year of uptime: 365 days a year, 24 hours a day. At 60 minutes an hour, that means 525,600 minutes of uptime a year. Clearly it is a laudable goal to achieve 100% availability, but just as clearly it is unreasonable to assume that 100% availability can be achieved. Frequently the term “five nines” is used to describe highly available systems. It refers to 99.999% uptime and is used to describe what is essentially 100% availability, but with the understanding that things fail and some downtime is unavoidable. Refer to the accompanying table for a better understanding of how to correlate percentages to actual annual downtime.

Table. Availability vs. Downtime

Availability Percentage    Approximate Downtime Per Year
                           In Minutes        In Hours
99.999%                    5 minutes         .08 hours
99.99%                     53 minutes        .88 hours
99.95%                     262 minutes       4.37 hours
99.9%                      526 minutes       8.77 hours
99.8%                      1,052 minutes     17.5 hours
99.5%                      2,628 minutes     43.8 hours
99%                        5,256 minutes     87.6 hours
98%                        10,512 minutes    175.2 hours (or 7.3 days)
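The figures in the table follow directly from the arithmetic of a 525,600-minute year. A minimal Python sketch of the calculation (the function name is illustrative, not from the column):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a year

def downtime_per_year(availability_pct):
    """Return annual downtime as (minutes, hours) for a given availability %."""
    down_fraction = (100.0 - availability_pct) / 100.0
    minutes = MINUTES_PER_YEAR * down_fraction
    return minutes, minutes / 60.0

# "Five nines" versus a more typical service-level target:
for pct in (99.999, 99.99, 99.9, 99.0):
    minutes, hours = downtime_per_year(pct)
    print(f"{pct}% availability -> {minutes:,.0f} min ({hours:.2f} hr) down per year")
```

Running this reproduces the table's figures: five nines allows only about 5 minutes of downtime a year, while 99% availability allows over 5,000 minutes.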

Even though it is unlikely that 100% availability can be achieved, some systems are approaching five nines of availability. DBAs can take measures to design databases and build systems that achieve high availability. Taking advantage of non-disruptive utilities and the ability to change DBMS configuration parameters on the fly enhances database availability.

But just because high availability can be built into a system does not mean that every system should be built with a high availability design. That is because a highly available system can cost many times more than a traditional system designed with some window of downtime built into it. The DBA needs to negotiate with the end users and clearly explain the costs associated with a highly available system.

Whenever high availability is a goal for a new system, database, or application, careful analysis is required to determine how much downtime users can really tolerate and what the impact of an outage would be. High availability is an alluring requirement, and end users typically will request as much as they think they can get. As a DBA, your job is to investigate the reality of the requirement as opposed to the initial desire of the end user.



From Database Trends and Applications, May 2002.

© 2002 Craig S. Mullins. All rights reserved.