Cryptography and entropy

In many cryptographic applications, one needs to distinguish a "friend" from an "enemy".  A friend has capacities concerning the processing of data which an enemy is not supposed to have (for instance, turning a cipher text into a clear text).  This is achieved by giving the friend access to a piece of knowledge, a key, which is unknown to the enemy.  The distinction between friend and enemy hence relies on knowledge the friend has and the enemy doesn't.  In other words, the key has to have entropy, and the friend has the corresponding knowledge while the enemy doesn't.  One can compare this to a random process: the enemy is in the situation of an observer before the trial, while the friend is in the situation of that observer after the trial.

The distinction between a friend and an enemy can hence be measured by the quantity of information that the friend possesses; in other words, the quantity of information in the key.  The amount of entropy determines the maximum possible security of the cryptographic system.  The real security can be much lower because of other weaknesses, but in any case, the security of the system cannot be better than what the entropy of the key allows.  Depending on the specific application, the quantity of entropy that gives sufficient security can be very different.
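
To make this quantitative: a key chosen uniformly among N possibilities carries log2(N) bits of entropy.  A minimal sketch (the 8-character lowercase password is an illustrative choice, not an example from the text):

```python
import math

# A key chosen uniformly among N possibilities carries log2(N) bits of entropy.
# Hypothetical example: an 8-character password over 26 lowercase letters.
keyspace = 26 ** 8
entropy_bits = math.log2(keyspace)
print(f"{keyspace} possible keys -> {entropy_bits:.1f} bits of entropy")
# about 37.6 bits: the ceiling on the security such a key can provide
```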

A way for an enemy to obtain the key is by guessing and trying.  If the enemy receives a response that indicates whether his guess is right or wrong, then the system itself is giving information to the enemy through this response.  As such, the enemy can extract sufficient information from the system to compensate for the entropy of the key.

If the enemy can only use a few trials (for instance, as with a password on an interactive system that only allows a few attempts), a relatively modest amount of entropy in the key (here, the password) can be sufficient, because the amount of information the system will give is limited too.  On the other hand, if the enemy has a way to try as many keys as he can afford, then in principle, any enemy can extract all the information from the system concerning the key.  What is limiting here is the work needed per trial and the amount of information needed.  In other words, a high amount of entropy in the key will require a large amount of work on the side of the enemy; so large an amount of work that for all practical purposes, the attack is not feasible.

The method of guessing keys and obtaining a response from the system that indicates whether the key is right or wrong is called a brute force attack.

Inasmuch as the system can provide, through its answer to a key trial, a response that indicates whether the key was right or not, the system can always provide enough information to destroy the entropy of any key entirely.   But it can also be that the system doesn't give that answer.  That is, the system does provide an answer, but it is not clear from the answer whether the key was right or not.  Such systems are the only ones that can preserve the key entropy indefinitely.  In any other case, a brute force attack is always possible, and the only limitation on it is the huge amount of work that is needed to extract a given amount of entropy.

In all these cases, the genuine entropy of the key is the determining factor of the security of the system.

Consider for instance an entropy of 32 bits.   With about 4 billion trials, the key will be found with certainty.  If an enemy can try as many keys as he wants, and if a trial takes a microsecond, the key will be found in about an hour.  On the other hand, consider a key entropy of 1024 bits.  This time, the number of trials is about 10^308.  Even if each of the roughly 10^80 elementary particles in the universe tried one key for every pair of particles in the universe, that would cover only about 10^240 keys, still nowhere near 10^308.  Even at a microscopic amount of work per trial, this is not feasible.  Against a key entropy of 1024 bits, if the only possible attack is brute force, that attack is totally impossible to execute in practice, and hence such a system can be totally secure.  On the other hand, if the enemy has only, say, 10 trials, then even a 32 bit key will give you a security that corresponds to a probability of about one in 400 million of being broken.
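
The arithmetic above can be checked directly; a small sketch, assuming as in the text one microsecond per trial:

```python
import math

TRIAL_TIME = 1e-6  # seconds per trial, as assumed in the text

# 32-bit key: exhaustive search
trials_32 = 2 ** 32
hours = trials_32 * TRIAL_TIME / 3600
print(f"32-bit key: {trials_32} trials, about {hours:.1f} hours at 1 us/trial")

# 1024-bit key: the trial count alone is astronomical
print(f"1024-bit key: about 10^{1024 * math.log10(2):.0f} trials")

# 10 guesses against a 32-bit key: probability of success
p_break = 10 / 2 ** 32
print(f"10 guesses vs 32-bit key: about 1 in {1 / p_break:,.0f}")
```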

The true entropy of the key is hence paramount.  But in order for the entropy to be true, there must not be any way to find out the key from the way it has been determined.  This is the difference between true random generation and pseudo-random generation.

Statistical randomness and pseudo-randomness

Consider a (long) table of numbers.  One can calculate statistical properties of these numbers.   For instance, one can calculate the averages over successive series of, say, 100 numbers.  One can calculate the average correlation between a number and the number that follows it, or the correlation between a number and the number that occurs 5 places further along.
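
As an illustration, here is a sketch of such statistics computed on a stand-in table of numbers, using Python's standard library (statistics.correlation requires Python 3.10 or later):

```python
import random
import statistics

numbers = [random.random() for _ in range(10_000)]  # stand-in table of numbers

# averages over successive series of 100 numbers
block_averages = [statistics.fmean(numbers[i:i + 100])
                  for i in range(0, len(numbers), 100)]

# correlation between each number and the one k places further along
def lag_correlation(xs, k):
    return statistics.correlation(xs[:-k], xs[k:])

print("first three block averages:", [round(a, 3) for a in block_averages[:3]])
print(f"lag-1 correlation: {lag_correlation(numbers, 1):+.4f}")
print(f"lag-5 correlation: {lag_correlation(numbers, 5):+.4f}")
```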

It is possible that for certain tables of numbers, these statistical properties are compatible with those of ensembles of random numbers with prescribed properties.   In that case, that table of numbers can be used as if it were a trial drawn from such a probability ensemble.

Of course, the best way to guarantee these statistical properties is to make such a table with a genuine process that is supposed to exhibit the desired statistical properties.  But after the fact, the only thing one can say about the table is whether it has the statistical properties or not.  The "true origin" of the table cannot be verified by just looking at the table.  If we are hence able to find a way to compute a table with such statistical properties, then that would be equivalent to obtaining that table from a genuine trial, as far as the statistical properties are concerned.

There are algorithms that can generate tables of numbers whose statistical properties correspond rather closely to those of simple and useful ensembles of random variables and processes.  Most of the time, these algorithms are inspired by chaotic dynamics.   Of course, such series will always fail certain very sophisticated statistical tests, namely those that test exactly for the correlations the algorithm induces.  But for most applications where one might need random numbers, these highly contrived correlations do not matter.  Except for one application of course: cryptography.
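
One of the simplest classic examples of such an algorithm (not chaos-based, but the standard textbook illustration) is the linear congruential generator; the constants below are the well-known Numerical Recipes parameters, chosen here purely for illustration:

```python
# A linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
# Entirely deterministic, yet its output passes many simple statistical tests.
def lcg(seed, n, m=2**32, a=1664525, c=1013904223):
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x / m  # scale to [0, 1)

table = list(lcg(seed=42, n=5))
print(table)
# running the algorithm again with the same seed reproduces the table exactly
assert table == list(lcg(seed=42, n=5))
```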

The standard terminology is that series of numbers that are supposed to be random, but that are generated by a computational algorithm, are called pseudo-random numbers, while random numbers that emerge from the measurement of some kind of natural process are called true random numbers.

I would like to alter that definition somewhat.  The reason is that there is no way to distinguish a table of pseudo-random numbers from true random numbers if one is just given the table, apart from trying to guess the algorithm that generated the pseudo-random table; but it is always possible to devise an algorithm that will generate any given table, so this is of limited utility as a definition.

I would like to extend the class of pseudo-random numbers to any potentially publicly known "table of random numbers", whether produced by algorithm or by publicly available publication or measurement.  I think the reason is obvious: in cryptography, the ignorance of the enemy has to be genuine.  If the numbers are publicly known or potentially available, then one cannot claim that they are genuinely unknown to the enemy, even if they originated from a "true random process".

The reason I make this distinction is that there are publicly available tables of "true random numbers".  Before the advent of powerful and cheap computers, these tables were useful for people needing numbers with good statistical properties (to do Monte Carlo simulations, for example).   There are people publishing true random numbers, like random.org, but for cryptographic use, it should be obvious that these publicly available numbers should be called pseudo-random, even though their source may be "a true random process".  The very fact of making them public reduces the enemy's ignorance of them, their entropy, to zero.

I would even say that relying on an external third party to give you entropy-rich random numbers is in a sense a logical contradiction, as there is no means for the customer to know to what extent his random numbers are unknown to others.  At least he already knows that their entropy for the manufacturer of the numbers is zero!

Hence: true cryptographic entropy can only be generated locally and privately.

This may, at first sight, sound at odds with the golden rule of cryptography of only using known and published techniques.  The reason is of course that the power of cryptography lies in the entropy of the keys, which should be totally private, while the secrecy doesn't reside in the method, which should be robust against attacks (and hence tested by a lot of people).

Cryptographic pseudo-random number generators

As said before, there exist algorithms that can produce tables of numbers that have the statistical properties of random variables from a simple ensemble.  But of course, anybody running the algorithm again will obtain the same table.   The entropy of such a table is hence zero.  Nevertheless, most of these generators have the possibility to generate several different tables, depending on a key which is called a seed.  You could think of the seed as the number of the table that has to be generated.   Each table is incredibly long of course, and our algorithm can generate several of these tables; the seed is the number of the table that we want generated.  If our algorithm can generate, say, 128 different tables of 4 billion numbers each, then if our seed is 23, we ask the algorithm to compute the 23rd table for us.  We can hence say that if the seed is unknown, our random numbers do have some entropy to them.  Indeed, we don't know from which table they are generated.  In our case, the entropy of such a random number would be 7 bits (as they are drawn from one of 128 possible tables).  However, chances are that from the moment I have 2 or 3 successive random numbers from the generator, I can find out which of the 128 tables we are using.  Once I know this, I fully know all the other random numbers that will be generated, and their entropy is zero again.
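
A toy sketch of this seed-recovery argument, using Python's built-in (non-cryptographic) generator as a stand-in and a 7-bit seed:

```python
import random

# 7-bit seed: the generator can produce one of 128 possible "tables"
def outputs(seed, n):
    rng = random.Random(seed)            # stand-in (non-cryptographic) PRNG
    return [rng.randrange(256) for _ in range(n)]

observed = outputs(seed=23, n=3)         # the enemy sees 3 successive numbers

# the enemy simply tries all 128 seeds and keeps those that match
candidates = [s for s in range(128) if outputs(s, 3) == observed]
print("seeds consistent with the observation:", candidates)
# almost surely only seed 23 remains: the 7 bits of entropy are spent,
# and every future output of the generator is now predictable
```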

So even if a seed re-introduces entropy (the seed's entropy) into the random numbers from a pseudo-random number generator, if one isn't careful, the first few random numbers from the generator can reveal the seed, and hence render the entropy of the next random numbers zero.  There are certain "cryptographically secure" pseudo-random number algorithms which are computationally hardened against this problem.   That means that although, of course, a few random numbers still determine the table that is generated and hence all subsequent random numbers, finding that table, and the seed that goes with it, is made computationally so costly that it is in practice not feasible.   As such, cryptographically secure pseudo-random number generators can generate random numbers that have more or less the entropy of the seed, without revealing it.
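
The principle can be sketched with a toy construction that derives each output block by hashing the seed together with a counter; recovering the seed from the outputs would require inverting SHA-256.  This is only an illustration of the idea, not a vetted DRBG; real systems should use standardized, well-reviewed generators:

```python
import hashlib

# Toy sketch: output block i = SHA-256(seed || i).  Deducing the seed from
# the outputs would require inverting SHA-256, which is computationally
# infeasible.  NOT a vetted construction; use standardized DRBGs in practice.
def csprng_sketch(seed: bytes, n_blocks: int):
    for counter in range(n_blocks):
        yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

seed = b"a seed that should carry genuine entropy"
for block in csprng_sketch(seed, 2):
    print(block.hex())
```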

The standard golden rule also holds for cryptographic pseudo-random number generators: only use well-known algorithms.

As we see, (cryptographic) pseudo-random number generators are not a source of entropy.  They are at best entropy transformers.  They need a source of entropy (the seed) in order to be able to generate other random numbers which have, at best, the same amount of entropy.   So where does the essential entropy in a cryptographic system come from, then?

What entropy sources to use?

One shouldn't forget what the essential function of entropy in a cryptographic system is: it is necessary ignorance on the part of the enemy.  As such, anything that could reduce this necessary ignorance (entropy) is to be avoided.  This is why we said already that cryptographic entropy has to be obtained locally and privately.  Next, it has to be kept private.  These are the two main challenges of cryptographic entropy.

No computation can generate entropy; believing otherwise is a common conceptual mistake.  Entropy can only come about from measurement or observation.  Inasmuch as one needs a guarantee that this will remain true entropy (true ignorance) for enemies, one needs to make sure that the enemy has no access to the same source of entropy as the one that is used.  This is why any publicly available process, no matter how rich and powerful, is a bad idea as a source of entropy.  The physical processes on which the entropy-generating measurements are based shouldn't be accessible externally.

The best sources of entropy are:

  • thermodynamic entropy
  • quantum entropy

Apart from those sources, one can also use complex local processes as a source of entropy.  In fact, the Linux kernel uses the timing of user input (keyboard strokes and mouse movements) as a source of entropy.   Although the timing of specific key strokes and specific mouse movements done by hand by a user is the result of a complex physiological process in the brain, muscles and nerves of the user, this is only a very modest source of entropy.  It can be sufficient to obtain, say, an entropy of a few thousand bits, which can be enough to generate a secure cryptographic key, but it is open to some critique: how far are these movements truly unpredictable?  Couldn't it be that specific users have specific timing patterns when they type certain phrases, which are quite reproducible?  How much true uncertainty remains in their timings?  That said, for most practical applications, where only a modest amount of entropy is necessary, this Linux entropy is good enough.
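
In practice, one taps this kernel entropy pool through the operating system interface rather than reading input timings oneself; in Python, for instance:

```python
import os
import secrets

# os.urandom draws from the kernel's entropy pool (fed on Linux by, among
# other things, the timing of interrupts and user input)
key_material = os.urandom(32)       # 32 bytes = 256 bits
token = secrets.token_hex(16)       # convenience wrapper over the same source
print(key_material.hex())
print(token)
```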

But we can do much better.  The best sources are thermodynamic and quantum-mechanical local entropy, because they are believed to be fundamental.  If it were possible to find them out externally, it would mean that the second law of thermodynamics is violated, or that quantum mechanics has been violated.   Although there are philosophical musings about these possibilities, for all practical purposes, these are true sources of entropy.

Quantum-mechanical phenomena, such as radioactive decay, or the reflection or transmission of a single photon on a half-silvered mirror, are fundamental sources of entropy.  Nobody can, even in principle, know their outcomes in advance.

But phenomena based on thermodynamic entropy, such as fundamental electronic noise, are just as fundamental as sources of entropy.  In practice, it is much cheaper and much easier to use fundamental electronic noise as an entropy source than it is to set up a quantum-mechanical measurement.  Moreover, the entropy flux from an electronic noise source can, with simple means, be much larger than the entropy flux from quantum sources.  Quantum sources are perfect entropy sources, but so are thermodynamic entropy sources.  Some people claim that quantum sources are better (in order to sell you expensive quantum-mechanical sources): this is simply not true.  Thermodynamic entropy sources are just as good as quantum-mechanical entropy sources: both are perfect for all practical considerations.   One can even add that most often, thermodynamic sources are cheaper and can deliver a much higher entropy flux.

If it were externally possible to perform a calculation that deduces this local thermodynamic entropy, it would mean that people could deduce, from non-local measurements after the fact, the specific local micro state your entropy generator must have had.  If that were true, then people would also be able to calculate just any micro state of any physical system in the past (necessarily so, because if our resistor has interacted thermally with such a system, both systems are linked and have a common dynamics).  It means that people could calculate the micro state of any system with which your source has interacted, in other words, the entire environment.  If people can do so, that means they have enough information to make Maxwell's demon work, and hence the second law is not valid any more.  As such, the claim that thermodynamic entropy sources are less reliable (that is, more predictable) than quantum entropy sources is equivalent to claiming that one can violate the second law of thermodynamics for the entire environment of the source.

Cryptographic pseudo-random generators or physical entropy sources?

The problem with physical entropy sources is that they need some dedicated hardware, which is, however, not expensive, and entrop-x can provide you with such devices.  It is only in those situations where it is impractical to use these devices that the option of pseudo-random generators should be considered.  In all other cases, physical entropy sources are to be considered superior for cryptographic applications, although in many cases it is true that the superior quality of physical noise sources is not really needed.  That said, in matters of security, why take a risk when avoiding the risk is low cost?

Entropy sources based upon fundamental electronic noise are perfect sources of cryptographic entropy, as we argued before.  Their entropy flux is in fact only limited by the need to guarantee that one is using the fundamental noise, and not pick-up noise (which can be much more predictable, and which is in principle also externally available).   As such, the signal to noise ratio of the noise source must be considered, where the "noise" is this time an upper limit on a possible pick-up signal, and the "signal" is the power of the fundamental noise we want to use.  The problem with fundamental electronic noise is that it usually has a small power, and hence the guaranteed pick-up noise ceiling may very well not be very much below that power level.  To remain on the safe side, one mustn't hope for more than a few powers of two in this signal to noise ratio.  Using Shannon's formula, this means that a fundamental electronic noise source which is white and has a bandwidth of B should be able to generate an entropy flux of the order of 10 B or so.
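
As a sketch of that estimate, using Shannon's capacity formula C = B log2(1 + S/N) with illustrative numbers (the 1 MHz bandwidth is an assumption, not a figure from the text):

```python
import math

# Shannon's formula C = B * log2(1 + S/N): an upper bound on the entropy
# flux extractable from a white noise source of bandwidth B, where S/N is
# the ratio of the fundamental noise power to the pick-up ceiling.
def entropy_flux(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

B = 1e6                              # 1 MHz bandwidth (illustrative)
for snr in (2 ** 3, 2 ** 10):
    print(f"S/N = 2^{int(math.log2(snr))}: "
          f"{entropy_flux(B, snr) / 1e6:.1f} Mb/s")
```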

As such, entropy sources with fluxes of a few Mb/s or a few tens of Mb/s are very easily constructed with little hardware; somewhat more expensive hardware can go up to several Gb/s.   For most if not all cryptographic applications, such fluxes are more than sufficient, and the efficiency increase from using cryptographic pseudo-random number generators can be questioned whenever the use of such hardware is feasible.

Tampering with cryptographic entropy sources

As said before, in many systems the cryptographic security is totally dependent on the entropy of the keys.  Inasmuch as this entropy is zero for some enemies, the cryptographic system is worthless, no matter how sophisticated it is otherwise.  The entropy source is hence a crucial element in the security of a cryptographic system, and if the entropy source is tampered with, this can render any system that depends on its entropy totally ineffective and insecure.

The most obvious way to tamper with an entropy source is to make it generate random numbers which are known to the tamperer, for instance by using pseudo-random generators with a very limited set of seeds.   One technique is, for instance, to take a user-provided seed (which can be entropy-rich), to hash that seed into a small number, and to use that small hash as the seed of a well-known pseudo-random number generator.  The entropy of the result is then limited to the very small entropy in the hash.   For the tamperer, it will be relatively easy to run through the limited set of keys that can be generated with this system, while the user has the illusion of having a very secure and entropy-rich system.  It is very difficult to detect this behaviour in a black-box entropy generator.
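
A sketch of that tampering technique (the 16-bit truncation is an illustrative choice): whatever entropy the user's seed carries, at most 2^16 different output streams can ever be produced:

```python
import hashlib
import random

# The user supplies an entropy-rich seed, but it is hashed down to 16 bits,
# so at most 2**16 = 65536 different output streams can ever be produced.
def tampered_generator(user_seed: bytes, n: int):
    small = int.from_bytes(hashlib.sha256(user_seed).digest()[:2], "big")
    rng = random.Random(small)       # well-known PRNG seeded with the small hash
    return [rng.randrange(256) for _ in range(n)]

stream = tampered_generator(b"a long, genuinely random user-provided seed", 8)
print(stream)
# the tamperer only has to enumerate 65536 candidate streams to find the
# one in use: seconds of work on any machine
```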

If one acquires an entropy source which is said to be based upon a physical noise source, but whose actual data are just the output of an internal pseudo-random generator, then exactly the same security hole is present in every cryptographic system that depends on this entropy source.  This is why it can be important to be able to verify for oneself to what extent the random numbers coming out of the source are indeed obtained from the physical noise.

An example.