Security through obscurity


Abstract

In cryptography and computer security, security through obscurity (sometimes security by obscurity) is a controversial principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them. The technique stands in contrast with security by design, although many real-world projects include elements of both strategies.

The principle of security through obscurity was more generally accepted in the days when all cryptographers were employed by national intelligence agencies such as the NSA. Now that cryptographers often work in universities (where researchers freely publish and test one another's findings) or in private industry (where findings are more often protected by patents and copyrights than by secrecy), the argument has lost some of its former popularity. An example is Pretty Good Privacy, which was released as source code and is generally regarded (when properly used) as a military-grade cryptosystem. The wide availability of high-quality cryptography was disturbing to the US government, which appears to have used a security-through-obscurity analysis to support its opposition to such work.

Arguments Against

As mentioned above, in cryptography the argument against security through obscurity holds that the design of a cryptographic system should not require secrecy and should cause no inconvenience if it falls into the hands of the enemy.

This principle has been paraphrased in several ways:

  • System designers should assume that the entire design of a security system is known to all attackers, with the exception of the cryptographic key.
  • The security of a cipher resides entirely in the cryptographic key.


Since any secret piece of information constitutes another point of potential compromise, a system with fewer secrets is more secure. Systems that rely on the secrecy of details other than the cryptographic key are therefore less secure: vulnerabilities hiding in those secret details will render the choice of key, simple or complex, largely irrelevant.
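
To make this concrete, the following minimal sketch in Python (assuming the third-party cryptography package is installed) shows a system in the spirit of the principle above: the algorithm, the protocol, and this very code can all be published without weakening anything, because the only secret is the key.

    # Minimal sketch: everything here is publishable; security rests entirely in `key`.
    from cryptography.fernet import Fernet  # assumes `pip install cryptography`

    key = Fernet.generate_key()              # the single secret in the system
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at noon")  # the ciphertext is safe to publish
    assert cipher.decrypt(token) == b"meet at noon"  # only holders of `key` can do this

Publishing the source and the algorithm specification reveals nothing useful to an attacker who lacks the key; losing the key, by contrast, compromises everything, which is exactly where the secrecy is meant to be concentrated.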

The related full disclosure philosophy holds that security flaws should be disclosed as soon as possible, because once flaws have been discovered the protection provided by keeping the cryptographic key secret has already become weaker: there is now effectively more than one key that provides access, the old cryptographic key and a "key" composed of the newly discovered flaws.

For example, someone who stores a spare key under the doormat in case they are locked out of the house is relying on security through obscurity. The theoretical security vulnerability is that anybody could break into the house by unlocking the door with the spare key. Furthermore, since burglars often know the likely hiding places, the owner actually incurs a greater risk of burglary by hiding the key in this way. In effect, the owner has added another key to the system: the fact that the entry key is stored under the doormat. The cryptographic key is no longer simply possession of the physical key that opens the door; it now also includes knowledge of that physical key's location.

In the past, several algorithms and software products with secret internal details have seen those details become public. Furthermore, vulnerabilities have been discovered and exploited in software even while its internal details remained secret. Taken together, these examples suggest that keeping the details of systems and algorithms secret is difficult, and often ineffective:

  • The A5/1 cipher for mobile telephones became public knowledge, partly through reverse engineering.
  • Details of the RSADSI (RSA Data Security, Inc.) cryptographic algorithm software were revealed through the probably deliberate publication of alleged RC4 source code on Usenet.
  • Vulnerabilities in various versions of Microsoft Windows, its default web browser Internet Explorer, and its mail applications Microsoft Outlook and Outlook Express have caused worldwide problems when computer viruses, Trojan horses, or computer worms have exploited them.
  • Details of Diebold Election Systems voting machine software were published on an official Web site, apparently intentionally.
  • Portions of the source code of Microsoft Windows were revealed after apparently deliberate penetration of a corporate development network.
  • Kernel Patch Protection (also called PatchGuard) in Microsoft Windows Vista relies on this approach; it was cracked and exploited within one week of the Vista launch, rendering it useless until fixed.
  • Cisco Systems router operating system software was accidentally exposed on a corporate network.
  • ZDaemon, a once open-source Doom port, became known for security through obscurity: after binary cheats were released, its source code was closed in response.


Linus's Law, which states that many eyes make all bugs shallow, also suggests improved security for algorithms and protocols whose details are published: more people can review the details, identify flaws, and fix them sooner. We would thus expect security compromises to be less frequent and less severe for open software than for proprietary or secret software.

Finally, operators, developers, and vendors of systems that rely on security by obscurity may keep the fact that their system is broken secret, to avoid destroying confidence in their service or product and thus its marketability; this may amount to fraudulent misrepresentation of the security of their products. Application of the law in this respect has been less than vigorous, in part because vendors impose terms of use as part of licensing contracts in order to disclaim their apparent obligations under statutes and common law that require fitness for use or similar quality standards.

Arguments For

Perfect or "unbroken" solutions provide security, but absolutes may be difficult to obtain. Although relying solely on security through obscurity is a very poor design decision, keeping secret some of the details of an otherwise well-engineered system may be a reasonable tactic as part of a defense in depth strategy. For example, security through obscurity may (but cannot be guaranteed to) act as a temporary "speed bump" for attackers while a resolution to a known security issue is implemented. Here, the goal is simply to reduce the short-run risk of exploitation of a vulnerability in the main components of the system.
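
As an illustration (not a recommendation), the sketch below uses only the Python standard library and hypothetical names to show obscurity as such a speed bump: an admin endpoint is temporarily moved to an unguessable path to slow down casual scanning, while the real control remains token authentication.

    # Obscurity as a temporary "speed bump" inside defense in depth:
    # the random path slows casual scanning; the token check is the real control.
    import hmac
    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer

    OBSCURE_PATH = "/admin-" + secrets.token_urlsafe(8)  # obscurity layer (stopgap)
    ADMIN_TOKEN = secrets.token_urlsafe(32)              # the actual secret

    class AdminHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            supplied = self.headers.get("X-Admin-Token", "")
            if self.path != OBSCURE_PATH:
                self.send_error(404)                     # endpoint hidden from casual scans
            elif not hmac.compare_digest(supplied, ADMIN_TOKEN):
                self.send_error(403)                     # real security: authentication
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"admin console\n")

    if __name__ == "__main__":
        print("serving admin console at", OBSCURE_PATH)
        HTTPServer(("127.0.0.1", 8080), AdminHandler).serve_forever()

If the path leaks, nothing is lost that the token was not already protecting; the obscurity only buys time against automated scanning while the underlying issue is being fixed.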

Software that is deliberately released as open source cannot be said, in theory or in practice, to be relying on security through obscurity (its design being publicly available), but it can nevertheless experience security debacles (e.g., the Morris worm of 1988 spread through vulnerabilities that were obscure, though plainly visible to anyone who bothered to look). An argument sometimes used against open-source security is that developers tend to be less enthusiastic about performing deep reviews than about contributing new code. Such work is sometimes seen as less interesting and less appreciated by peers, especially if an analysis, however diligent and time-consuming, does not turn up much of interest. Combined with the fact that open source is dominated by a culture of volunteering, security sometimes receives less thorough treatment than it might in an environment in which security reviews are part of someone's job description (see http://www.eweek.com).

Security through obscurity can also be used to create risks that detect or deter potential attackers. For example, consider a computer network that appears to exhibit a known vulnerability. Lacking knowledge of the target's security layout, the attacker must decide whether or not to attempt to exploit that vulnerability. If the system is set up to detect such an attempt, it will recognize that it is under attack and can respond, either by locking the system down until administrators have a chance to react, by monitoring the attack and tracing the assailant, or by disconnecting the attacker. The essence of this principle is that, because the attacker is denied the information required to make a solid risk-reward decision, the time and risk involved in attacking at all are raised.
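
A minimal sketch of this tripwire idea, using only the Python standard library and hypothetical decoy paths, follows: the decoy paths look like a known vulnerability but are never used by legitimate clients, so any request touching them is logged and its source is blocked.

    # Decoy "vulnerability" as a tripwire: legitimate clients never touch these
    # paths, so any request to them is treated as hostile.
    import logging
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DECOY_PATHS = {"/phpmyadmin/", "/cgi-bin/test.cgi"}  # look exploitable, do nothing
    blocked = set()

    class TripwireHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            src = self.client_address[0]
            if src in blocked:
                self.send_error(403)                      # "disconnecting the attacker"
            elif self.path in DECOY_PATHS:
                blocked.add(src)
                logging.warning("decoy probe of %s from %s", self.path, src)
                self.send_error(404)                      # keep up appearances
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        logging.basicConfig(level=logging.WARNING)
        HTTPServer(("127.0.0.1", 8081), TripwireHandler).serve_forever()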

A variant of the defense described in the previous paragraph is to use two layers of detection for the exploit, both kept secret, with one deliberately allowed to be "leaked". The idea is to give the attacker a false sense of confidence that the obscurity has been uncovered and defeated. An example of where this would be used is a honeypot. In neither of these cases is there any actual reliance on obscurity for security; these are perhaps better termed obscurity bait in an active security defense.
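
The fragment below sketches the "leaked layer" variant with hypothetical header names: the existence of an X-Debug-Bypass header that supposedly disables detection is allowed to leak, so an attacker who presents it believes the obscurity is defeated, while doing so is precisely what the second, undisclosed detector watches for.

    # Obscurity bait: the "leaked" bypass is itself the detection signal.
    import logging

    def inspect(headers: dict, src: str) -> bool:
        """Return True if the request should be served."""
        if headers.get("X-Debug-Bypass") == "1":
            # Hidden second layer: using the leaked bypass trips the alarm.
            logging.warning("bait bypass header used from %s", src)
            return False
        if headers.get("X-Scanner", ""):
            # First layer, whose existence (and "bypass") was allowed to leak.
            logging.info("scanner fingerprint blocked from %s", src)
            return False
        return True

    # An attacker who believes the leak and sets the bypass reveals themselves:
    print(inspect({"X-Debug-Bypass": "1"}, "203.0.113.5"))  # False, and an alert fires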

However, it can be argued that a sufficiently well-implemented system based on security through obscurity simply becomes another variant on a key-based scheme, with the obscure details of the system acting as the secret key value.
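
One rough way to quantify that observation (the figures below are illustrative assumptions, not measurements) is to treat the obscure detail, say a hidden eight-character lowercase path segment, as a key and compare its search space with that of a 128-bit random key.

    # Back-of-the-envelope comparison of "obscurity as a key" vs. a real key.
    import math

    hidden_path_bits = math.log2(26 ** 8)  # ~37.6 bits, if attackers know the format
    random_key_bits = 128                  # a conventional symmetric key

    print(f"hidden path: ~{hidden_path_bits:.1f} bits of secrecy")
    print(f"random key:   {random_key_bits} bits of secrecy")

Viewed this way, the question is not whether the obscure detail is a key, but whether it is a strong one and how easily it can be changed once guessed.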

There is a general consensus, even among those who argue in favor of security through obscurity, that it should never be used as a primary security measure. It is, at best, a secondary measure, and disclosure of the obscurity should not result in a compromise.

Historical Notes

There are conflicting stories about the origin of this term. Fans of the Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more of an issue than on ITS. Within the ITS culture the term referred both to the fact that by the time a tourist figured out how to make trouble he had generally got over the urge to make it, because he felt part of the community, and (self-mockingly) to the poor coverage of the documentation and the obscurity of many commands. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode control-R) echoed as ##^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got it right.

In the early days of Apple Computer, Jef Raskin coined the term TIC (Total Internal Confusion): If people inside a group don't know what's going on, outsiders will never find out. This idea is closely related to security through obscurity.
