This page uses content from Wikipedia and is licensed under CC BY-SA.
Information leakage happens whenever a system that is designed to be closed to an eavesdropper nonetheless reveals some information to unauthorized parties. For example, when designing an encrypted instant messaging network, a network engineer without the capacity to crack the encryption could still see when messages are transmitted, even without being able to read them. During the Second World War, the Japanese used secret codes such as PURPLE for a time; even before such codes were cracked, some basic information could be extracted about the content of messages by looking at which relay stations forwarded them. As another example of information leakage, GPU drivers do not erase GPU memory, so in shared, local, and global memory, data values persist after deallocation and can be retrieved by a malicious agent.
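The messaging example above can be illustrated with a minimal sketch: even when the payload is encrypted (modeled here with random bytes as a stand-in for real encryption), an eavesdropper on the wire still observes the metadata. The party names and record fields are hypothetical.

```python
import os
import time

def observe(sender: str, receiver: str, plaintext: bytes) -> dict:
    """What a passive eavesdropper sees for one message on the wire."""
    ciphertext = os.urandom(len(plaintext))  # stand-in for real encryption
    return {
        "from": sender,          # routing information is visible
        "to": receiver,
        "time": time.time(),     # timing is visible
        "size": len(ciphertext), # message length is visible
        # note: the ciphertext itself is opaque, but everything above leaks
    }

record = observe("alice", "bob", b"attack at dawn")
```

The eavesdropper learns who talked to whom, when, and how much was said, without reading a single byte of plaintext; this is the same principle behind the relay-station analysis of PURPLE traffic.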
Designers of secure systems often forget to take information leakage into account. A classic example is a mechanism the French government designed to aid encrypted communications over an analog line, such as at a phone booth. It was a device that clamped onto both ends of the phone, performed the encrypting operations, and sent the signals over the phone line. Unfortunately for the French, the rubber seal that attached the device to the phone was not airtight. It was later discovered that although the encryption itself was solid, anyone listening carefully could hear the speaker, since the phone was picking up some of the speech. Information leakage can subtly or completely destroy the security of an otherwise secure system.
A modern example of information leakage is the leakage of secret information via data compression: variations in the compression ratio reveal correlations between known (or deliberately injected) plaintext and secret data combined in a single compressed stream. Another example is the key leakage that can occur in some public-key systems when the cryptographic nonce values used in signing operations are insufficiently random.
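The compression side channel can be sketched with Python's standard zlib module. The `token=` field name and secret value below are hypothetical; the point is that an attacker-supplied guess compressed in the same stream as the secret compresses better when it matches, so the output length leaks information about the secret.

```python
import zlib

SECRET = "token=hx7"  # hypothetical secret embedded in the stream

def compressed_len(guess: str) -> int:
    """Length of the compressed stream when the attacker's guess is
    combined with the secret in one compression context."""
    payload = ("token=" + guess + ";" + SECRET).encode()
    return len(zlib.compress(payload, 9))

# A guess matching the secret repeats earlier bytes exactly, so DEFLATE
# replaces it with a back-reference and the output is shorter than for a
# non-matching guess, even though the attacker never sees the plaintext.
matching = compressed_len("hx7")
wrong = compressed_len("zq9")
```

By iterating this comparison one character at a time, an attacker can recover the secret incrementally; this is the principle behind real attacks such as CRIME against compressed TLS traffic.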
Information leakage can sometimes be deliberate: for example, an algorithmic converter may be shipped that intentionally leaks small amounts of information, in order to provide its creator with the ability to intercept the users' messages, while still allowing the user to maintain an illusion that the system is secure. This sort of deliberate leakage is sometimes known as a subliminal channel.
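A subliminal channel can be sketched as follows: each message carries an ostensibly random nonce, but the sender chooses nonces whose low bit encodes one bit of secret data. The encoding scheme here is a deliberately simple illustration, not a description of any real deployed system.

```python
import secrets

def covert_nonce(secret_bit: int) -> int:
    """Return a random-looking 32-bit nonce whose low bit carries the secret."""
    n = secrets.randbits(32)
    return (n & ~1) | (secret_bit & 1)  # overwrite the low bit

def leak(secret_bits: list[int]) -> list[int]:
    # Sender: one nonce per message, each smuggling one bit.
    return [covert_nonce(b) for b in secret_bits]

def recover(nonces: list[int]) -> list[int]:
    # Receiver in the know: read the low bit back out.
    return [n & 1 for n in nonces]

bits = [1, 0, 1, 1, 0]
recovered = recover(leak(bits))  # equals bits
```

To any observer without knowledge of the scheme, the nonces are statistically indistinguishable from random values, which is what lets the user maintain the illusion that the system is secure.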
Generally, only very advanced systems employ defenses against information leakage.
The following countermeasures are commonly implemented: