Cryptanalysis


Cryptanalysis (from the Greek kryptós, 'hidden', and analýein, 'to untie') is the branch of cryptology devoted to studying cryptographic systems in order to find weaknesses in them and break their security without knowledge of the secret information. In non-technical language, this practice is known as breaking or cracking the code, although this expression has a specific meaning within technical jargon. People who engage in cryptanalysis are called cryptanalysts.

Cryptanalysis methods and techniques have changed drastically throughout the history of cryptography, adapting to increasing cryptographic complexity. Cryptographic systems have evolved from the pen-and-paper methods of the past, through machines like Enigma, used by the Nazis during World War II, to the computer-based systems of the present. As available computing power has increased, so has the complexity of cryptographic schemes. In the mid-1970s, a new class of cryptography was invented: asymmetric cryptography. The methods used to break these systems are usually radically different from the previous ones, and usually involve solving a carefully constructed problem in the realm of pure mathematics, the best-known example being the factorization of integers.

The results of cryptanalysis have changed as well: it is no longer possible to have unlimited success in breaking a code, and there is a hierarchical classification of what constitutes an attack in practice.

Cryptanalytic technique is based on looking for flaws or errors in the system in order to penetrate it and compromise its security.

Objective

The objective of cryptanalysis is to find weaknesses in cryptographic systems that allow attacks (cryptanalytic attacks) to be mounted that break their security without knowledge of the secret information. To do this, cryptanalysts study in depth the design and properties of cryptographic systems.

For example, for an encryption system, a cryptanalytic study may consist of obtaining the secret key, or simply of accessing the plaintext without even having that key. However, cryptanalysis does not deal only with encryption; its scope is more general, studying cryptographic systems with the aim of circumventing the security of other types of cryptographic algorithms and protocols as well.

However, cryptanalysis usually excludes attacks that do not primarily target weak points in the cryptography itself; for example, security attacks that rely on bribery, physical coercion, theft, keylogging and so on, although these types of attacks are a growing risk to computer security and are gradually becoming more effective than traditional cryptanalysis.

Fields of study

To achieve this objective, namely developing cryptanalytic attacks that 'break' the security of cryptographic systems, cryptanalysts study those systems in order to discover weaknesses that can be exploited. To do this, they study the systems from different approaches.

Information theory

Information theory provides tools for evaluating the security of cryptographic systems. For example, in encryption systems the entropy of the key, of the cryptograms and of the plaintext messages is studied. As plaintext is usually expressed in a human language, the study of its entropy, and especially of its entropy rate, is also of interest.
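As a rough illustration of these quantities, the following Python sketch (an addition for illustration, not part of the original article) estimates the single-character entropy of a text and a crude entropy-rate estimate based on character pairs; real estimates of the entropy rate of a human language require much larger models and corpora.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits/symbol) of the empirical distribution of `symbols`."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def bigram_entropy_rate(text):
    """Crude entropy-rate estimate: H(X1, X2) / 2 over overlapping character pairs."""
    pairs = [text[i:i + 2] for i in range(len(text) - 1)]
    return entropy(pairs) / 2

# Tiny illustrative corpus; any plaintext sample could be used instead.
sample = "the quick brown fox jumps over the lazy dog " * 50
print(f"unigram entropy : {entropy(sample):.2f} bits/char")
print(f"bigram estimate : {bigram_entropy_rate(sample):.2f} bits/char")
```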

Cryptanalysts also study the secrecy of cryptographic systems. For example, in encryption systems they study the degree of secrecy, characterizing those systems that offer perfect secrecy at the theoretical level. From this study it follows that perfect secrecy requires the number of keys to be at least as large as the number of messages. This is impractical except for the so-called one-time pad ciphers, and in practice most systems have finite keys. To characterize the security of these systems, cryptanalysts have developed the concept of unicity distance: the minimum number of ciphertext characters needed for there to be only one possible key that could have produced that cryptogram. For this, the conditional entropy of the key given the known ciphertext is used.
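As a hedged illustration with standard textbook figures (not taken from this article), the unicity distance $U$ is often approximated from the key entropy $H(K)$ and the redundancy $D$ of the language, in bits per character:

$$U \approx \frac{H(K)}{D}$$

For a simple substitution cipher over a 26-letter alphabet, $H(K) = \log_2 26! \approx 88.4$ bits and, taking $D \approx 3.2$ bits per character for English, $U \approx 88.4 / 3.2 \approx 28$ characters: beyond roughly 28 ciphertext characters, only one key is expected to remain consistent with the cryptogram.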

For a cipher there are a number of conditional entropies that are interesting from the cryptanalyst's point of view:

Suppose that:

  • A message $M_1$ is encrypted with the key $K_1$, obtaining the cryptogram $E_{K_1}(M_1)=C_1$.
  • $P_C(K)$ denotes the conditional probability of the key K given the received cryptogram C. It is also sometimes written $P(K|C)$.
  • $P_C(M)$ denotes the conditional probability of the message M given the received cryptogram C. It is also sometimes written $P(M|C)$.

So:

  • We can measure the uncertainty (entropy) about the key once the ciphertext is known, that is, the key equivocation, $H_C(K)$, also denoted $H(K|C)$, through the formula:
$$H_C(K) = -\sum_{E,K} P(E,K)\,\log P_E(K) = -\sum_{E} P(E) \sum_{K} P_E(K)\,\log P_E(K)$$
The first equality follows from the definition of conditional entropy and the second from Bayes' theorem.
Note that $H_C(K)=0$ means that the cipher can be broken, since there is no longer any uncertainty about the key. This vanishing leads to the concept of unicity distance.
  • We can measure the uncertainty (entropy) about the message once the ciphertext is known, that is, the message equivocation, $H_C(M)$, also denoted $H(M|C)$, through the formula:
$$H_C(M) = -\sum_{E,M} P(E,M)\,\log P_E(M) = -\sum_{E} P(E) \sum_{M} P_E(M)\,\log P_E(M)$$
The first equality follows from the definition of conditional entropy and the second from Bayes' theorem.
  • We can measure the uncertainty (entropy) about the key once both the ciphertext and the plaintext are known, that is, the key appearance equivocation, $H_{C,M}(K)$, also denoted $H(K|M,C)$, through the formula:
$$H_{C,M}(K) = -\sum_{E,M,K} P(E,M,K)\,\log P_{E,M}(K)$$
  • We can measure the uncertainty (entropy) about the message once the ciphertext and the key are known, denoted $H_{C,K}(M)$ or $H(M|K,C)$. Given a key, the relationship between ciphertext and plaintext is one-to-one, and therefore $H_{C,K}(M)=0$.

It has been shown that the following relationship between the different entropies holds:

$$H_{C,M}(K) = H_C(K) - H_C(M)$$

From this relationship we can draw a conclusion:

Whoever designs or uses a cipher wants $H_{C,M}(K)$ to be high, so that the system is as strong as possible when the attacker has both the ciphertext and the plaintext (known-plaintext attack). However, by the equation above, this requires $H_C(M)$ to be small. But a small value of $H_C(M)$ means that there is little uncertainty about the plaintext once the ciphertext is known (ciphertext-only attack), which runs against the interests of whoever encrypts a message. A compromise is therefore necessary for the system to have acceptable strength against both types of attack. A toy numerical check of these quantities is sketched below.
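The following Python sketch (an illustrative addition; the toy cipher and the message distribution are hypothetical choices) computes these conditional entropies for a miniature shift cipher and checks the relation above numerically.

```python
from collections import defaultdict
from math import log2

# Toy cipher for illustration: two-letter messages over the alphabet {0,1,2},
# encrypted with a single uniformly chosen shift K, i.e. C_i = (M_i + K) mod 3.
P_M = {(0, 0): 0.4, (0, 1): 0.3, (1, 2): 0.2, (2, 2): 0.1}   # hypothetical plaintext distribution
P_K = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}                          # uniform key

# Joint distribution P(M, K, C)
joint = defaultdict(float)
for m, pm in P_M.items():
    for k, pk in P_K.items():
        c = tuple((x + k) % 3 for x in m)
        joint[(m, k, c)] += pm * pk

def cond_entropy(target, given):
    """H(target | given), with target/given extracting values from (m, k, c)."""
    p_tg, p_g = defaultdict(float), defaultdict(float)
    for (m, k, c), p in joint.items():
        p_tg[(target(m, k, c), given(m, k, c))] += p
        p_g[given(m, k, c)] += p
    return -sum(p * log2(p / p_g[g]) for (t, g), p in p_tg.items())

H_K_C = cond_entropy(lambda m, k, c: k, lambda m, k, c: c)          # key equivocation H(K|C)
H_M_C = cond_entropy(lambda m, k, c: m, lambda m, k, c: c)          # message equivocation H(M|C)
H_K_MC = cond_entropy(lambda m, k, c: k, lambda m, k, c: (m, c))    # H(K|M,C)

print(f"H(K|C)   = {H_K_C:.4f}")
print(f"H(M|C)   = {H_M_C:.4f}")
print(f"H(K|M,C) = {H_K_MC:.4f}  vs  H(K|C) - H(M|C) = {H_K_C - H_M_C:.4f}")
```

Because decryption is deterministic here, $H_{C,K}(M)=0$, and the printed values satisfy $H_{C,M}(K) = H_C(K) - H_C(M)$.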

Mathematical basis and computing power

For example, asymmetric cryptography relies on 'hard' mathematical problems as the basis for its security, so an obvious point of attack is to develop methods for solving those problems. Asymmetric algorithms are designed around the assumed difficulty of solving certain mathematical problems. If an improved algorithm is found that can solve the problem, the cryptosystem is weakened. Examples:

  • The security of the Diffie-Hellman protocol depends on the difficulty of computing a discrete logarithm. In 1983, Don Coppersmith found a faster way to compute discrete logarithms (within certain groups), forcing cryptographers to use larger groups, or different types of groups.
  • The security of the RSA protocol depends in part on the difficulty of factoring integers, so progress in factoring has a clear impact on the security of RSA. In 1980, a 50-digit number could be factored at a cost of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to the point where a 75-digit number could be factored with the same 10^12 operations. Advances in computing technology have also meant that these operations can be performed in much less time. Moore's Law empirically predicts that computing speeds will continue to increase. Factoring techniques may show similar development, but they will most likely depend on the ability and creativity of mathematicians, neither of which has ever been satisfactorily predictable. 150-digit numbers, of the kind once used in RSA, have been factored; the effort was greater than the one mentioned above, but was not beyond the reach of a modern computer. At the beginning of the 21st century, 150-digit numbers were no longer considered large enough as keys for RSA. Numbers of several hundred digits were still considered too difficult to factor in 2005, although methods will probably continue to improve over time, forcing key sizes to keep pace or new algorithms to be developed. (A toy factoring sketch follows this list.)
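As a very small-scale illustration of how a better algorithm reduces the cost of factoring (a sketch under simplifying assumptions: Pollard's rho is far simpler, and far weaker, than the number field sieve actually used against RSA-sized moduli, and the number factored here is tiny), consider:

```python
import math
import random

def pollard_rho(n):
    """Return a non-trivial factor of composite n using Pollard's rho (Floyd cycle detection)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                        # on failure, retry with new random parameters
            return d

n = 104729 * 999983                       # product of two known primes, for demonstration only
p = pollard_rho(n)
print(f"{n} = {p} * {n // p}")
```

Pollard's rho finds a factor in roughly n^(1/4) operations on average, versus about n^(1/2) for naive trial division; it is this kind of algorithmic progress, scaled up enormously, that forces RSA key sizes to grow.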

Another distinctive feature of asymmetric algorithms is that, unlike attacks on symmetric cryptosystems, any cryptanalyst has the opportunity to use the knowledge obtained from the public key.

Cryptanalytic attacks

Cryptanalytic attacks consist of applying cryptanalytic studies to exploit weaknesses in cryptographic systems and thus 'break' their security.

Cryptanalytic attacks vary in power and in their ability to threaten cryptographic systems. An attack is said to exploit a "certificational weakness" if it is a theoretical attack that is unlikely to be applicable in any realistic situation; many of the results demonstrated in modern cryptanalytic research are of this type.

Each attack has its own properties, which characterize it and make it more or less feasible.

Not all cryptanalytic attacks aim to completely break the system. The objective of a cryptanalytic attack is to obtain unknown information about the cryptographic system in such a way that its security is gradually weakened.

Classification

Cryptanalytic attacks can be classified based on their characteristics.

Classification according to the attitude of the attacker

Attacks can be classified according to the attacker's way of acting

Passive Attacks

In passive attacks the attacker does not alter the communication; he only listens to or monitors it in order to obtain information. These types of attacks therefore usually rely on packet-sniffing and traffic-analysis techniques. They are difficult to detect, since they do not involve any alteration of the data. In some cases this type of attack can be hindered by encrypting the information that could be the target of eavesdropping.

Active Attacks

They involve some modification of the data flow or the creation of false flows. There are many techniques used in these types of attacks. Examples:

  1. Spoofing: the attacker impersonates a legitimate party
  2. Message modification: packets are captured and then deleted (dropping attacks), manipulated or modified (tagging attacks), or reordered
  3. Replay: packets are captured and retransmitted
  4. Degradation: techniques to degrade the service

Classification according to prior knowledge

Cryptanalysis can be performed under a number of assumptions about how much can be observed or discovered about the system in question before carrying out the attack. As a basic starting point it is assumed that, for the purposes of the analysis, the general algorithm is known; this is Shannon's Maxim, "the enemy knows the system". This is a reasonable assumption in practice - throughout history, there are countless examples of secret algorithms that were discovered through espionage, treachery, and reverse engineering. (On a few occasions, some codes have been reconstructed by pure deduction, for example, the Lorenz code and the PURPLE code, as well as a number of classical codes.)

Other cases can be categorized as follows:

  • Ciphertext-only attack: the cryptanalyst has access only to a collection of ciphertexts or encoded texts.
  • Known-plaintext attack: the attacker has a set of ciphertexts for which he knows the corresponding plaintext.
  • Chosen-plaintext attack (chosen-ciphertext attack): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of his own choosing.
  • Adaptive chosen-plaintext attack: like a chosen-plaintext attack, but the attacker can choose subsequent plaintexts based on the information obtained from previous encryptions. Analogously, there is an adaptive chosen-ciphertext attack.
  • Related-key attack: like a chosen-plaintext attack, but the attacker can obtain ciphertexts encrypted under two different keys. The keys themselves are unknown, but the relationship between them is known; for example, two keys that differ by one bit.

These types of attacks obviously differ in the plausibility of their occurrence in practice. Although some are more probable than others, cryptographers often take a conservative approach and assume the worst case imaginable when designing algorithms, reasoning that if a system is secure even against such unrealistic threats, then it should also withstand real-world cryptanalysis.

The assumptions on which these attacks are based are often more realistic than they might appear at first glance. To mount a known-plaintext attack, the cryptanalyst might well know, or be able to infer, a portion that is likely to be part of the plaintext, such as the header of an encrypted letter ("Dear Sir"), or the fact that a computer session starts with the letters "LOGIN". A chosen-plaintext attack is less likely, but in some cases it can be plausible: for example, if you convince someone to forward a message that you yourself sent them beforehand, but in encrypted form. Related-key attacks are basically theoretical, although they can be realistic in certain situations, for example when block ciphers are used to build cryptographic hash functions. A sketch of a simple known-plaintext attack follows.
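As a toy illustration of how a known fragment of plaintext can be leveraged, the following Python sketch (a hypothetical example; the repeating-key XOR 'cipher' is deliberately weak and is not any algorithm named in this article) recovers the key from a ciphertext whose beginning is known to be "LOGIN: ". It only works here because the known fragment covers at least one full period of the short key.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'cipher' (insecure; used only to illustrate the attack)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Attacker's view: an intercepted ciphertext, plus the knowledge that the
# session starts with the crib "LOGIN: ".
ciphertext = xor_cipher(b"LOGIN: alice password=hunter2", b"K3Y")
crib = b"LOGIN: "

# Known-plaintext attack: XORing the crib with the ciphertext prefix leaks the
# keystream, and the repeating pattern in it reveals the (short) key.
keystream = bytes(c ^ p for c, p in zip(ciphertext, crib))
for key_len in range(1, len(keystream)):
    candidate = keystream[:key_len]
    if all(keystream[i] == candidate[i % key_len] for i in range(len(keystream))):
        break

recovered = xor_cipher(ciphertext, candidate)
print("recovered key :", candidate)
print("plaintext     :", recovered.decode())
```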

Classification according to the objective in cryptanalysis

The results of a cryptanalysis can also vary in usefulness. For example, cryptographer Lars Knudsen (Knudsen, 1998) classified various types of block cipher attacks according to the quantity and quality of secret information that could be discovered:

  • Total break - the attacker deduces the secret key.
  • Global deduction - the attacker discovers a functionally equivalent algorithm for encryption and decryption of messages, but without obtaining the key.
  • Local (or instance) deduction - the attacker discovers additional plaintexts or ciphertexts beyond those previously known.
  • Information deduction - the attacker obtains some information, in the Shannon sense, that was not previously known.
  • Distinguishing algorithm - the attacker can distinguish the encrypted information from a random permutation.

These categories can be applied to attacks on other types of algorithms.

Classification according to cost

Attacks can be categorized by the amount of resources they require. These can take the form of:

  • Time - the number of "primitive operations" that must be performed. This category is quite loose; primitive operations could be basic computer instructions, such as additions, XOR operations, bitwise shifts and so on, or entire encryption operations.
  • Memory - the amount of storage required to perform the attack.
  • Data - the amount of clear and encrypted texts needed.

In academic cryptography, a weakness or break in an algorithm is defined rather conservatively. Bruce Schneier summarizes this position as follows: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with less complexity than brute force. Never mind that brute force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break... put simply, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised" (Schneier, 2000). A rough sense of these numbers is given in the calculation below.
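To put the numbers in that quotation into perspective, the following small Python calculation (the attacker speed of 10^12 trial encryptions per second is an arbitrary assumption) shows that an attack of complexity 2^110, although a "break" in the academic sense, remains utterly impractical:

```python
# Rough scale of the numbers in Schneier's example (hypothetical attacker speed).
ops_per_second = 10**12            # one trillion trial encryptions per second (assumed)
seconds_per_year = 31_557_600

for exponent in (128, 110):
    years = 2**exponent / ops_per_second / seconds_per_year
    print(f"2^{exponent} encryptions ~ {years:.2e} years at {ops_per_second:.0e} ops/s")
```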

Examples

There are a multitude of cryptanalytic attack methods. They can be classified according to whether they specialize in a particular type of cryptography or are more general. The main ones are the following:

  • Specialized in classical ciphers:
    • Frequency analysis
    • Kasiski method
    • Index of coincidence
    • Mutual index of coincidence
  • Specialized in symmetric cryptography:
    • Differential cryptanalysis
    • Linear cryptanalysis
    • Integral cryptanalysis
    • Statistical cryptanalysis
    • Mod n cryptanalysis
    • XSL attack (eXtended Sparse Linearisation)
    • Slide attack
  • General (applied in different areas):
    • Birthday attack
    • Man-in-the-middle attack
    • Meet-in-the-middle attack
    • Brute-force attack
    • Gardening (cryptanalysis)
    • Power analysis

Quantum computers

Quantum computers are potentially useful for cryptanalysis. Because quantum states can exist in superposition (and can become entangled), a new computational paradigm is possible, in which a qubit represents not just the states 0 and 1, but any linear combination of them. Peter Shor of Bell Laboratories demonstrated the possibility, and various teams have demonstrated one aspect or another of quantum computing in the years since. So far, only very limited proofs of possible designs have been demonstrated. There was, as of 2006, no credible prospect of a real, usable quantum computer.

However, if a quantum computer were built, many things would change. Parallel computing would likely be the norm, and various aspects of cryptography would change.

In particular, since a quantum computer would be capable of extremely fast brute-force key lookups, key sizes considered today beyond the resources of any brute-force attacker would be within reach of this attack. The key sizes required to be beyond the capacity of a quantum computer would be considerably larger than today. Some popular writers have stated that no encryption would remain secure if quantum computers were available. Others claim that simply adding bits to key lengths will prevent brute force attacks, even with quantum computers.
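One common way to make the last claim precise, added here as an illustration and relying on Grover's quantum search algorithm (which the text above does not name), is the following: searching an unstructured space of $2^n$ keys takes a quantum computer on the order of

$$T \approx \sqrt{2^{\,n}} = 2^{\,n/2}$$

steps, so a quantum brute-force attack on an $n$-bit key costs roughly what a classical brute-force attack on an $(n/2)$-bit key costs; against this particular attack, doubling the key length restores the original security margin.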

A second possibility is that increased computational capacity could make possible other key-search attacks, beyond simple brute force, against one or more of the currently impregnable algorithms. For example, not all progress in prime factorization has been due to improved algorithms; some of it is due to the increasing computational power of computers, and the existence of a working quantum computer could speed up factoring tasks considerably. This aspect is fairly predictable, though not precisely. What cannot be anticipated is a breakthrough in the theoretical field that quantum computing requires, which could make currently unfeasible or even unknown attacks achievable. In the absence of any method for predicting such advances, we can only wait.

It is unknown if there is a polynomial-time encryption method that requires exponential time to crack, even for a quantum computer.

History of cryptanalysis

Cryptanalysis has co-evolved with cryptography, and the competition between the two can be traced throughout the history of cryptography. New keys were designed to replace already broken schemes, and new cryptanalysis techniques were developed to unlock the improved keys. In practice, both are considered to be two sides of the same coin: to create a secure cryptographic system, it is necessary to take into account the discoveries of cryptanalysis. In fact, today the scientific community is often invited to try to crack new cryptographic keys, before considering that a system is secure enough for use.

Classical cryptanalysis

First page of Al-Kindi's Manuscript for Deciphering Cryptographic Messages.

Although the expression cryptanalysis is relatively recent (it was coined by William F. Friedman in 1920), the methods for breaking codes and ciphers are much older. The first known explanation of cryptanalysis is due to the 9th century Arab scholar, Yusuf Yaqub ibn Ishaq al-Sabbah Al-Kindi, in his Manuscript for Deciphering Cryptographic Messages. This treatise includes a description of the frequency analysis method (Ibraham, 1992).

Frequency analysis is the basic tool for breaking classical ciphers. In all known languages, certain letters of the alphabet appear more frequently than others; for example, in Spanish, vowels are very frequent, occupying around 45% of the text, with E and A appearing most frequently, while the combined frequency of F, Z, J, X, W and K does not reach 2%. Likewise, statistics on the appearance of pairs or triples of letters can be collected. Frequency analysis will reveal the original content if the encryption used is not capable of hiding these statistics. For example, in a simple substitution cipher (in which each letter is simply substituted for another), the most frequent letter in the ciphertext would be a likely candidate to represent the letter "E". A sketch of this procedure appears below.
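The following Python sketch (an illustrative addition using approximate English letter frequencies) applies this idea in its simplest form, recovering the shift of a Caesar cipher by choosing the decryption whose letter distribution best matches the expected frequencies:

```python
from collections import Counter
import string

# Approximate English letter frequencies (percent); illustrative values.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7, 's': 6.3,
    'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8, 'u': 2.8, 'm': 2.4,
    'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0, 'p': 1.9, 'b': 1.5, 'v': 1.0,
    'k': 0.8, 'j': 0.15, 'x': 0.15, 'q': 0.1, 'z': 0.07,
}

def shift(text, k):
    """Caesar-shift letters by k positions (non-letters unchanged)."""
    out = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') + k) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

def score(text):
    """How closely the letter distribution of `text` matches English (lower is better)."""
    letters = [c for c in text if c in string.ascii_lowercase]
    counts = Counter(letters)
    n = len(letters)
    return sum((100 * counts.get(c, 0) / n - f) ** 2 for c, f in ENGLISH_FREQ.items())

ciphertext = shift("frequency analysis will reveal the original content", 7)
best_key = min(range(26), key=lambda k: score(shift(ciphertext, -k)))
print("recovered shift:", best_key)
print("plaintext      :", shift(ciphertext, -best_key))
```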

Frequency analysis relies on both linguistic knowledge and statistics, but as ciphers became increasingly complicated, mathematics gradually became the predominant approach in cryptanalysis. This change was particularly evident during World War II, when efforts to break the Axis codes required new levels of mathematical sophistication. Furthermore, automation was applied to cryptanalysis for the first time in history, in the form of the Bomba and Colossus devices, one of the first computers.

Modern Cryptanalysis

Replica of a Bombe device.

Although computing was used with great success during World War II, it also made possible new cryptographic methods that were orders of magnitude more complex than those used to date. Taken as a whole, modern cryptography has become far more impervious to the cryptanalyst than the pen-and-paper methods of the past, and it now seems to have an advantage over the methods of pure cryptanalysis. Historian David Kahn wrote: "Too many cryptosystems for sale today by hundreds of commercial companies cannot be broken by any known method of cryptanalysis. In fact, on certain systems even a chosen cleartext attack, in which a selected cleartext fragment is compared with its encrypted version, does not reveal the code to break other messages. In a sense, then, cryptanalysis is dead. But this is not the end of the story. Cryptanalysis may be dead, but, mixing my metaphors, there is more than one way to skin a cat." (Remarks on the 50th Anniversary of the National Security Agency, November 1, 2002). Kahn then mentions the increased possibilities for interception, "bugging", side-channel attacks, and quantum cryptography as substitutes for traditional cryptanalysis methods.[1]

Kahn might have been too hasty in declaring cryptanalysis dead; weak ciphers are far from extinct. In academia, new designs are regularly presented, and they are also frequently broken: the 1984 Madryga block cipher proved vulnerable to a ciphertext-only attack in 1998; FEAL-4, proposed as a replacement for the standard DES data encryption algorithm, was demolished by a spate of attacks from the academic community, many of which were not entirely feasible under practical conditions. In industry, too, encryption is not free of flaws: for example, the A5/1, A5/2 and CMEA algorithms, used in the mobile phone industry, can be broken in hours, minutes or even in real time using widely available computer equipment. In 2001, the WEP algorithm, used to protect Wi-Fi networks, was shown to be susceptible to a related-key attack.

The results of cryptanalysis

The Zimmermann Telegram, deciphered.

Successful cryptanalysis has undoubtedly influenced history. The ability to read the supposedly secret thoughts or plans of others can be a decisive advantage, and never more so than in times of war. For example, during World War I, the decryption of the Zimmermann Telegram was crucial to the entry of the United States into the war. In World War II, the cryptanalysis of German codes, including the Enigma machine and the Lorenz code, has been credited with everything from barely shortening the war in Europe by a few months to being a crucial element that determined the final outcome (see ULTRA). The United States also benefited from the cryptanalysis of the Japanese PURPLE code during the war (see MAGIC).

All governments have long been aware of the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and have frequently established organizations dedicated exclusively to breaking the codes of other nations, for example GCHQ in the United Kingdom and the NSA in the United States, organizations that are still very active today. In 2004, the news broke that the United States had broken the codes used by Iran.[2]
