On September 19, 1980, near the small town of Damascus, Arkansas, someone dropped a socket, and it caused a breach. In terms of breaches, it was nuclear! Paradoxical as it may seem, the story of the 1980 Damascus Titan II explosion shows how a simple error can parallel a significant breach of a company’s security infrastructure. Was the nuclear threat in 1980 preventable? Is a breach of a security environment preventable? Let’s deconstruct those questions and come up with a reasonable answer, but before we do, a history lesson:
The 1980 Damascus Titan II Explosion
The Titan family of rockets was part of the United States Air Force’s intercontinental ballistic missile and space launcher fleet from 1959 to 2005. This rocket family successfully launched many intelligence-gathering satellites, interplanetary scientific probes, and military payloads into space.
In the 1960s, the Titan I was replaced because it could not be stored with its fuel loaded, which made it slow to launch. For a Titan I to launch, the missile had to be raised from its silo to ground level, fueled, and then readied for takeoff. The process was long and laborious.
A larger missile, the Titan II, could be housed with storable propellants (fuel ready) and launched from its silo in 58 seconds. It was topped with a 9-megaton W-53 nuclear warhead, which could reach a target more than 6,000 miles away in less than 30 minutes. The W-53 was three times more powerful than all the bombs used in WWII combined, including the atomic bombs.
If detonated, the W-53 warhead would have caused fatal burns to anyone within 20 miles of the detonation site, and exposure to radiation or toxic chemicals from the blast could have injured or killed people across a quarter of the United States.
On September 18, 1980, the ground crew at Silo 374-7, located near Damascus, Arkansas, was alerted that the oxidizer tank pressure on the Titan II was low. A specialized unit from Little Rock Air Force Base was called out to perform maintenance on the missile’s tank.
Per military policy, a two-man team would be sent into the silo wearing specialized hazmat suits. A torque wrench and socket were required to remove the cap of the tank. The torque wrench was a recent change to the checklist; in the past, the men used a socket wrench or ratchet to remove caps on oxidizer tanks.
That evening, the airman in charge of the tools took a socket wrench with him into the silo, leaving the required torque wrench in the truck.
At approximately 6:30 p.m., the two-man team radioed the ground crew to say they were ready to begin the checklist. During the task, the airmen struggled to get the socket to seat correctly on the cap of the tank. With a slight twist of the hand, the 8-pound socket came loose and began its freefall into the silo. Falling around 70 feet, it bounced off a platform and punctured the missile’s fuel tank. The silo began to fill with fuel.
Sirens alerted the ground team to the fuel leak, and they immediately worked through the military checklist covering fuel-leak procedures.
Once the ground crew had run out of checklist items, the Air Force assembled a team comprising three major groups: the Missile Potential Hazard Team, the Strategic Air Command (SAC), and a group from Martin Marietta (the company that designed and built the Titan II). Together, they were charting an unknown course. Even with hundreds of nuclear incidents occurring each year in the U.S., there was no manual or checklist for an accident of this magnitude.
For eight and a half hours, the fear of not knowing what was going to happen and the potential unintended consequences paralyzed many in the Air Force.
Nearly nine hours later, at approximately 3:00 a.m. on September 19, 1980, the Titan II exploded, killing Senior Airman David Livingston and injuring 23 people. The W-53 warhead – the largest thermonuclear device in the American arsenal – was found intact several hundred feet away. A safeguard prevented it from detonating.
Was the incident in Damascus, Arkansas, preventable? No, but it could have been contained. It is rare for the responsibility for an accident or a breach to fall on one person or group. However, poor communication, improper planning, and a few bad decisions made by one or two people can spiral into something “nuclear,” costing a company revenue and trust.
Is a security environment breach preventable?
The Online Trust Alliance (OTA) found that 93% of breaches in Q3 of 2017 were avoidable if “simple steps had been taken such as regularly updating software, blocking fake email messages using email authentication, and training people to recognize phishing attacks.” In information security, we have a communication and education gap. Security without communication is fruitless.
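Those “simple steps” can also be verified programmatically. As a rough illustration (not from the OTA report), the sketch below assumes the third-party dnspython package and a made-up domain, and checks whether the domain publishes SPF and DMARC records, the basic email-authentication controls that help block spoofed messages.

```python
# A rough, hypothetical check for email-authentication records. Assumes the
# third-party dnspython package (pip install dnspython); the domain is made up.
import dns.resolver

def check_email_auth(domain: str) -> dict:
    """Return any SPF and DMARC TXT records published for a domain."""
    results = {"spf": None, "dmarc": None}

    # SPF is a TXT record on the domain itself, beginning with "v=spf1".
    try:
        for record in dns.resolver.resolve(domain, "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("v=spf1"):
                results["spf"] = text
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass

    # DMARC is a TXT record at _dmarc.<domain>, beginning with "v=DMARC1".
    try:
        for record in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("v=DMARC1"):
                results["dmarc"] = text
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass

    return results

if __name__ == "__main__":
    print(check_email_auth("example.com"))  # hypothetical domain
```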
One of the most significant lessons from the Titan missile explosion is that, at a certain point, there were no additional checklist items for the ground team to execute. Sadly, there was no “nuclear” disaster plan in place, so they ran the operation blind. In context, it would be like a company not having a comprehensive disaster recovery plan as part of its overall business continuity plan.
It was foreseeable that a launch-ready missile could suffer a major fuel leak, so why was there not a more thorough plan for when the first set of controls failed?
Every security professional knows the common refrain: “it isn’t if a breach will happen, but when it will happen.” One breach could be “nuclear” to a company’s reputation and livelihood.
The safety of human life is an organization’s first security priority, followed by protecting data. Setting aside the “top secret” and “national security” aspects of the nuclear missile, there came a point when leadership should have implemented an evacuation plan, at the very least to safeguard the public. Today, best practice holds that it is better to be transparent when a breach occurs than to hide it. In the Titan’s case, thousands of American lives could have been impacted.
If a company does not have such a plan in place, security professionals must advocate for one. Information security is ever-changing, and the security professional’s role is without borders: one duty is to reasonably secure the company’s environment, and another is to educate and advocate for the importance of security.
With breaches hitting the headlines weekly, now is the best time to approach the board and discuss the risks the company faces. Security should partner with the board to prepare a one-year roadmap for developing policies and procedures that make sense for the company. Getting the board’s buy-in increases the chances of success, while also showing why the security budget should be increased, not cut.
A great way to start is by advocating for a business continuity plan (BCP). The goal of this project is to forecast potential disasters and risks, assess their impacts, and develop a thorough recovery strategy to mitigate them.
Part of a BCP is the Business Impact Analysis (BIA), a document that identifies potential risks, the consequences that disruptions would have on business operations, and the assets essential to core operations. This roadmap takes into consideration topics such as: redundancy; loss of business functions and interruptions; recovery strategies; the recovery point objective (the amount of data your company can afford to lose during an outage); the recovery time objective (the amount of time it should take to restore your critical systems after an outage); the use of remote sites; and the costs associated with each step. A BIA should be a living document: each time a professional learns of a new risk or gets pricing on services, software, or hardware, that information should be added to the BIA.
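One way to keep a BIA “living” is to capture each entry as structured data rather than prose. The sketch below is a minimal, hypothetical example; the field names and figures are illustrative assumptions, not part of any standard BIA template.

```python
# A minimal, hypothetical sketch of a BIA entry as structured data.
from dataclasses import dataclass, field

@dataclass
class BIAEntry:
    asset: str                   # system or business function
    business_impact: str         # consequence of losing the asset
    rto_hours: float             # recovery time objective: maximum tolerable downtime
    rpo_hours: float             # recovery point objective: maximum tolerable data loss
    recovery_strategy: str       # e.g., restore from backup, fail over to a remote site
    estimated_cost_usd: float    # cost of the recovery controls
    dependencies: list[str] = field(default_factory=list)

# Example entry for a made-up order-processing system.
orders_db = BIAEntry(
    asset="Order-processing database",
    business_impact="All online sales stop; revenue is lost every hour",
    rto_hours=4.0,     # must be restored within four hours
    rpo_hours=1.0,     # can afford to lose at most one hour of transactions
    recovery_strategy="Fail over to a replicated database at a remote site",
    estimated_cost_usd=25_000.0,
    dependencies=["Payment gateway", "Customer identity service"],
)
print(orders_db)
```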
A Disaster Recovery Plan (DRP) identifies the processes needed to recover critical operations. A DRP can have many parts; key elements are: 1) an alternative worksite (if damage is done to infrastructure); 2) a hierarchical inventory of critical systems and assets; 3) a priority list of systems to restore after an outage; 4) redundancy implementation and the testing of systems before they are brought back online; 5) a communication plan that governs the after-action review and documentation. The after-action report is vital for analyzing what happened, the source of the incident, and what lessons can be learned to improve the processes. Any lessons should be incorporated into the BIA.
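Key element 3, the restore-priority list, falls out naturally from the recovery objectives recorded in the BIA. The snippet below is a minimal sketch with made-up systems and numbers: it simply restores the systems with the tightest recovery time objectives first.

```python
# A hypothetical DRP restore-priority list derived from recovery objectives.
# The systems and numbers are illustrative assumptions, not from the article.
critical_systems = [
    {"asset": "Order-processing database", "rto_hours": 4,  "rpo_hours": 1},
    {"asset": "Corporate email",           "rto_hours": 24, "rpo_hours": 12},
    {"asset": "Public marketing site",     "rto_hours": 48, "rpo_hours": 24},
]

# Restore the most time-critical systems first (smallest RTO, then smallest RPO).
restore_plan = sorted(critical_systems, key=lambda s: (s["rto_hours"], s["rpo_hours"]))

for rank, system in enumerate(restore_plan, start=1):
    print(f"{rank}. Restore {system['asset']} within {system['rto_hours']}h "
          f"(maximum acceptable data loss: {system['rpo_hours']}h)")
```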
Tabletop and functional exercises are difficult to coordinate, but both are vital to surviving a breach or an interruption of service. A tabletop exercise is a group discussion that examines different scenarios; collectively, the group decides whether alternatives exist beyond what “looked good on paper.” A functional exercise gives employees the opportunity to simulate attacks or incidents. This hands-on exercise can be costly, but the potential reward is invaluable.
In conclusion, machines fail, people make mistakes, and it is widely understood, yet undervalued, that it is not a question of if a breach of a company’s security landscape will occur, but when. With that in mind, and knowing that thousands of breaches go reported and unreported each year, can breaches be prevented? No, not likely; however, the impact of a “nuclear” breach can be mitigated by advocating for and educating a company and the community at large.