The challenges of very low frequencies

By very low frequencies we mean here the range below a few Hz, down to, say, a few mHz. A signal at 1 mHz has a period of about a quarter of an hour; at 10 mHz, the period is a little under two minutes. For most people we are now talking about "static" values, but there are circumstances where the dynamics of signals with such periods matters, such as in the study of seismic vibrations.

At a certain point, the acquisition of low-frequency signals blends into what one calls "instrument stability and drift": the signal is then considered a constant DC level, and successive acquisitions are considered different measurements of different DC levels. For instance, if you want to measure the temperature variation during the day by taking a measurement every hour, you might even consider switching off your electronic thermometer between measurements; and whether you are measuring "the same" temperature signal hour after hour, or "different" temperature signals (on top of the mountain, at the shore, in the house, in the corn field...) with the same thermometer, is largely a matter of perspective. The errors committed are then described as calibration (drift) errors and as the accuracy of the thermometer: its imprecision from one measurement to the next. However, we might expect the thermometer to have a higher relative accuracy between two measurements taken the same day than between two taken 6 months apart: we would accept that all the calibration elements "have drifted" somewhat more over 6 months than over a few hours. But these "independent DC measurements" are not what we are talking about here. We are talking about the twilight zone where we still consider the signal as "dynamic", but where the periods become so long that we could, on the other hand, start to think of them as "independent DC measurements", were it not that we want to, or have to, handle their dynamics electronically.

With most electronics design oriented towards higher and higher frequencies, and faster and faster data processing and data flows, one might think that very low frequencies are particularly easy. On the digital side this is usually true: it is not much of a challenge. But on the unavoidable analogue side there are unexpected difficulties which are very hard to solve:

  • AC coupling is extremely hard to achieve in moderate-impedance conditions
  • many electromagnetic compatibility techniques don't work at very low frequencies
  • 1/f noise is omnipresent and can reach levels that perturb even moderately accurate applications
  • external influences such as temperature variations and vibrations start making important noise contributions
  • power supply stability can be an issue at these frequencies, and filtering the supply is nearly impossible

As we will see, these challenges are quite daunting, on the one hand because most electronics designers are not used to seeing them as problems and are not armed with the tools needed to solve them, and on the other hand because there are severe limits to the solutions available.

AC coupling

One of the core difficulties in very low frequency design is AC coupling at relatively low impedances. Indeed, suppose the ohmic resistance of the load is 1 kΩ; then in order to couple 0.1 Hz signals, one needs a coupling capacitor of at least ten times 1.6 mF (1.6 mF being the value that puts the RC cut-off right at 0.1 Hz). If the load is 50 Ω, this becomes totally impossible, with a needed capacitance of more than ten times 30 mF.
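
To put numbers on this, here is a minimal sketch (plain Python, using the resistances of the example; the factor-of-ten margin is the one used in the text) of the coupling capacitance needed for a given load and cut-off frequency:

```python
import math

def coupling_cap(r_load_ohm, f_cut_hz):
    """Capacitance that places the RC high-pass cut-off at f_cut_hz."""
    return 1.0 / (2 * math.pi * r_load_ohm * f_cut_hz)

# Cut-off at 0.1 Hz; a "comfortable" coupling takes roughly 10x that value.
for r_load in (1e3, 50.0, 100e3):
    c = coupling_cap(r_load, 0.1)
    print(f"R = {r_load:8.0f} ohm -> C at cut-off: {c*1e6:9.1f} uF, "
          f"comfortable: {10*c*1e6:10.1f} uF")
```

This reproduces the 1.6 mF and ~30 mF figures above, and shows that only around 100 kΩ and beyond do the required values (tens of μF) fall into an easily available capacitor range.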

The only way to have genuine AC coupling is to use resistive loads of over 100 kΩ. The nice thing is that one does not have to take into account the imperfections of electrolytic capacitors: their series resistance, leakage and self-inductance have no significant influence at very low frequencies, so one can use them without problems, and the 10 to 100 μF range is perfectly usable. One shouldn't use resistors over a few tens of MΩ either, because at a certain point parallel leakage may become significant.

In the face of this difficulty of AC coupling, one might wonder whether it isn't much better to use DC coupling instead. The problem with DC coupling is that one incorporates all the offsets of the circuit elements. Of course, one can filter them out at a certain point, as long as they don't saturate the signal chain, and this can be an issue when high amplification levels are needed. Also, even without saturation, the accumulated and amplified offsets can make the working point of the circuit uncontrollable, which is a quality-control problem: some circuits may work well, and others, because of the variability of their working point, may not work so well.
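
To make the saturation risk concrete, here is a minimal sketch with assumed, illustrative numbers (two DC-coupled gain stages and typical-looking input offsets, not values from the text):

```python
# Each stage: (voltage gain, input-referred offset in volts).
# The offsets are assumed for illustration; real parts scatter around
# their datasheet typicals, which is exactly the quality-control problem.
stages = [(100.0, 200e-6),   # first amplifier: gain 100, 200 uV offset
          (100.0, 500e-6)]   # second amplifier: gain 100, 500 uV offset

v = 0.0                      # no useful DC signal here, only offsets
for gain, offset in stages:
    v = gain * (v + offset)  # the offset is amplified along with the signal

print(f"accumulated DC level at the output: {v:.2f} V")
# about 2.05 V of pure offset -- most of a +/-2.5 V output range is gone,
# and the figure changes from unit to unit.
```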

So, what can one do?  The toolbox of solutions to the AC coupling problem at low frequencies includes the following techniques:

  • using high-resistance circuits.  In the middle of the signal chain, when one is applying amplifier and filter blocks and the signal is relatively well controlled, one should opt for circuit topologies with guaranteed high input resistance.  One can then use electrolytic capacitors of reasonable value for AC coupling.  This should be taken into account when choosing the circuit topologies of, for instance, active filters.
  • using active DC-elimination feedback.   At some point in such a circuit, one still needs a passive AC coupling, but that coupling can be done in high-resistance conditions.
  • using differential signals.  In as much as we want to keep DC coupling, the only way to have a significant compensation of offset contributions is to use matched differential amplifiers.
  • using choppers.  The idea is to mix the low-frequency signal with a moderate-frequency signal (say, of a few kHz).  The super-heterodyne principle is then used in the opposite way: a low-frequency signal is up-converted to a higher frequency band.  The useful signal is now in the kHz range, and one can use normal band-limited amplifiers and filters in that range.   Chopping used to be more favoured in the past than now, because the artefacts it introduces are not always much better than the direct performance of today's components on the market.   Although the principle of chopping is very attractive, its actual implementation can be very tricky (a minimal numerical sketch of the principle is given after this list).
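
To illustrate the chopper principle numerically, here is a minimal sketch with assumed values (a 0.5 Hz, 1 mV input, a 1 kHz chopping frequency and an ideal gain of 1000); it shows only the up-conversion and synchronous demodulation, not the practical difficulties such as switch charge injection or residual chopper ripple:

```python
import numpy as np

fs = 100_000.0                     # simulation sample rate, Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

f_sig, f_chop = 0.5, 1000.0        # low-frequency input and chopper frequency
x = 1e-3 * np.sin(2 * np.pi * f_sig * t)        # 1 mV, 0.5 Hz input signal
chop = np.sign(np.sin(2 * np.pi * f_chop * t))  # +/-1 square wave at 1 kHz

up = x * chop        # up-converted: the signal energy now sits around 1 kHz,
                     # where an AC-coupled amplifier is free of 1/f noise
amp = 1000.0 * up    # idealised band-limited amplification (gain assumed)
down = amp * chop    # synchronous demodulation back to baseband

# average over one chopper period to remove the residual ripple
n = int(fs / f_chop)
recovered = np.convolve(down, np.ones(n) / n, mode="same")
print("peak of recovered signal:", round(float(recovered.max()), 3), "V")  # ~1.0
```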

The problem with all these solutions is that they introduce noise sources which can be quite significant, especially if they are prone to 1/f noise.  High resistances introduce significant voltage noise; it is true that with high-quality resistors one can limit the 1/f contribution, but a 10 MΩ resistor already introduces 400 nV of thermal noise per square root of Hz, which is getting close to 1 μV.   A 100 kΩ resistor is preferable, as it only introduces an unavoidable 40 nV per square root of Hz.
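
The noise densities quoted follow directly from the Johnson-Nyquist formula √(4kTR); a minimal check, assuming room temperature:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed temperature, K

def thermal_noise_density(r_ohm):
    """Thermal (Johnson) voltage noise density in V per sqrt(Hz)."""
    return math.sqrt(4.0 * k_B * T * r_ohm)

for r in (10e6, 100e3):
    print(f"R = {r:10.0f} ohm -> {thermal_noise_density(r)*1e9:6.1f} nV/sqrt(Hz)")
# 10 MOhm -> ~407 nV/sqrt(Hz), 100 kOhm -> ~40.7 nV/sqrt(Hz)
```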

Electromagnetic compatibility at low frequencies

EMC is different at low frequencies, and this can trick people: certain issues one was used to at higher frequencies do not exist any more, and certain protections one relies on at higher frequencies no longer work.

A first point is that the inductive impedance, which made equipotentiality impossible to guarantee at higher frequencies, does not play a role here.  But now we have other issues: thermoelectric contacts and triboelectricity can also make equipotentiality hard to obtain.

Capacitive coupling effects are absent at low frequencies, but capacitive filtering is also essentially impossible to achieve (it is the same problem as with AC coupling).   Shielding against electric fields works very well but is hardly needed, and shielding against magnetic fields is totally ineffective, especially at low frequencies.

Most "Faraday cage" protections have limited utility.  In fact, the only couplings that have an effect at low frequencies are conductive couplings with the perturbation source and magnetic-field couplings, and neither of these is much taken care of by shielding.  However, shielding remains important to protect our circuits from higher-frequency perturbations, which can nevertheless have an effect on a low-frequency circuit.   So yes, one should shield, but shielding is not going to protect us much from the specific low-frequency perturbations our circuits are especially sensitive to.

Most inductive and capacitive filters at the point where the external cables enter the box are totally ineffective against low-frequency common-mode perturbations.

The usual advice in EMC protection is to wire all grounds together, because ground loops are unavoidable through capacitive coupling and we should prefer many small loops over a few big ones.   Also, as the main problem in normal EMC is the inductive impedance, a geographically extended mesh gives rise to a much lower impedance than a few line-like links.    These considerations apply much less at low frequencies: there are no capacitive couplings, the inductive impedance is negligible, and thermoelectric couplings don't get better if we spread them out, on the contrary, because temperature variations across a big mesh are larger than across just a few contacts.  The meshing of grounds is thus much less important and much less effective at low frequencies than at normal frequencies, and can even be problematic in a way.  We have to keep in mind that low-frequency circuits can still be perturbed by high-frequency sources, so shielding remains important, but it is not as effective as one is used to.

As such, the most important EMC advice is to work as much as one can with differential signals and twisted-pair links, and to make all metallic connections on the two wires as symmetric as possible (if they can be at the same temperature, so much the better).  In as much as one can have floating sources, this is better than having sources grounded on one side.  Indeed, at low frequencies the "transformer effect" is totally absent and there is no compensation by magnetic coupling between the ground/shield and the signal wires: the shield does not induce any impedance for common-mode signals.   If one can avoid ground loops altogether, this is a much better protection against common mode; if this implies that one cannot connect a shield on one side, then so be it.  The usual method of rendering a source floating, namely with a signal transformer, doesn't work at low frequencies.

At low frequencies, one should not take the ground as a common reference, and one should avoid as much as possible galvanic ground loops which are connected in one way or another to a sensitive signal.    In other words, if the source is floating within a shielded environment, one can connect the shield (this is the preferred situation).  But if the source is connected to its shield on one side, it might be preferable, if possible, not to connect that shield (which then acts as ground) to the shield of the cable on that side, because the ground loop, and the asymmetry, introduced by this connection will induce larger common-mode signals than leaving the source "floating" and having the shield connected on one side only (which still protects against electrical influence).  It can be a good idea to connect the shield capacitively, so that the shield is still connected on both sides for high frequencies while having a galvanic separation at low frequencies.  However, this capacitive coupling usually also introduces some inductive effect, making its protection at very high frequencies much less effective than a direct connection.

On a single board, one may convert the differential signal to a single-sided, ground-referenced signal unless one is in a very difficult environment, where it is better to remain in differential mode all the way.  However, the differential mode is always noisier than the single-sided mode, so there is a trade-off to be found.

Coaxial links are to be avoided at very low frequencies.   There is no coaxial effect (the coaxial transformer doesn't work at low frequencies), and the confusion between shield and ground destroys most of the advantages of a differential link.  One should by far prefer (shielded) twisted pairs.

1/f noise

1/f noise seems to occur in almost all electronic components, though some are more prone to it than others.   But one should first understand what is meant by 1/f noise: it is noise whose power spectral density has a 1/f behaviour.

This is different from the power spectral density of white noise that has gone through an integrator: an integrator has a gain proportional to 1/f, so that spectral density has a 1/f² dependence.  In fact, a 1/f spectral density is impossible to obtain by sending white noise through a lumped filter function.

The physical origin of 1/f noise is in most cases poorly understood, and it seems to be such a universal phenomenon that it is hard to imagine that a single mechanism in, say, a semiconductor is responsible for it.  1/f spectra are also related to certain chaotic phenomena, such as intermittency.

1/f noise has equal contributions to the rms noise over identical logarithmic frequency decades: the rms noise between, say, 1 mHz and 1 Hz is identical to the rms noise between 1 μHz and 1 mHz, which is again the same as between 1 nHz and 1 μHz, and so on.  As such, 1/f noise has an infinite rms value between true DC and any frequency, and seems unphysical as such.  But in fact, one never measures true DC noise, because the lifetime of any circuit, and of any measurement, is finite.

Imagine that a certain 1/f noise gives rise to 20 mV of rms noise between 1 mHz and 1 Hz.  In that case, we have another contribution of 20 mV between 1 μHz and 1 mHz, and also 20 mV between 1 nHz and 1 μHz.  In other words, if we measure over a period of 15 minutes, we expect 20 mV of rms variation.  If we measure for 11 days, we expect 28 mV of noise.   If we measure for 32 years, we expect 35 mV of noise.  And if we measure for 32 000 years, we expect 40 mV of rms noise (the successive bands are independent, so their contributions add in quadrature: √2 × 20 ≈ 28 mV, √3 × 20 ≈ 35 mV, √4 × 20 = 40 mV).
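
These figures follow from the fact that a 1/f power spectral density integrates to a variance proportional to the logarithm of the frequency ratio; a minimal sketch, calibrated on the 20 mV per three decades of the example:

```python
import math

# 1/f noise: the variance between f1 and f2 is K * ln(f2/f1).
# Calibrate K so that the band 1 mHz .. 1 Hz contains 20 mV rms.
K = 0.020**2 / math.log(1.0 / 1e-3)

def rms_up_to_1Hz(f_low):
    return math.sqrt(K * math.log(1.0 / f_low))

for label, f_low in [("15 minutes (~1 mHz)", 1e-3),
                     ("11 days    (~1 uHz)", 1e-6),
                     ("32 years   (~1 nHz)", 1e-9),
                     ("32000 yrs  (~1 pHz)", 1e-12)]:
    print(f"{label}: {rms_up_to_1Hz(f_low)*1e3:5.1f} mV rms")
# prints roughly 20, 28, 35 and 40 mV
```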

So yes, it tends to infinity, but very, very, very slowly.  There have been very long measurements of 1/f noise, and the spectrum does not seem to level off even at the lowest frequencies reached.

There are two ways to specify 1/f noise.  One is to indicate the corner frequency below which the 1/f noise becomes larger than the "white" noise.  In most cases, for "good" components, this is in the neighbourhood of 10 Hz or so.  That means that below 10 Hz, 1/f noise is dominant, and above 10 Hz, we can consider the noise power spectral density to be essentially flat.  The other way of indicating the 1/f noise is to specify the rms value, say, between 0.1 Hz and 10 Hz, or between 0.01 Hz and 1 Hz, where the range is implicitly assumed to be dominated by 1/f noise.
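
The two specifications are easily related if one models the total density as a white floor multiplied by (1 + f_corner/f); a minimal sketch with assumed, illustrative values (10 nV/√Hz floor, 10 Hz corner, not taken from any datasheet):

```python
import math

e_white = 10e-9     # white noise floor, V/sqrt(Hz) (assumed)
f_corner = 10.0     # 1/f corner frequency, Hz (assumed)

# PSD model: S(f) = e_white^2 * (1 + f_corner / f)
def rms_in_band(f1, f2):
    variance = e_white**2 * ((f2 - f1) + f_corner * math.log(f2 / f1))
    return math.sqrt(variance)

print(f"rms noise, 0.1 Hz .. 10 Hz : {rms_in_band(0.1, 10.0)*1e9:5.1f} nV")
print(f"rms noise, 0.01 Hz .. 1 Hz : {rms_in_band(0.01, 1.0)*1e9:5.1f} nV")
```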

Power supply

The power supply is a serious headache in low-frequency circuits.  The reason lies in the difficulty of AC coupling in low-impedance environments, and power supplies are usually low-impedance situations.  In fact, one can forget all passive filtering of power supplies for low frequencies.  Mind you, you do need passive filtering of power supplies, but to filter out the high-frequency components which might perturb low-frequency circuitry; it doesn't help at all against low-frequency noise of the supply itself.  In practice, only very good quality regulators can be used.  Switched power supplies can actually perform very well in the low-frequency domain.   Stable references can be useful if one wants to protect against low-frequency, temperature-induced noise.

Digital signal processing

There is in fact no doubt that most signal processing in this domain should be done digitally: the sample rates are low, and even a modest controller can apply sophisticated algorithms without using many resources (including power consumption).   In fact, the only analogue signal processing needed is bringing the interesting bandwidth into the right dynamic range.  This by itself can be somewhat challenging.  For instance, if one wants to do dynamometric measurements of low-frequency vibrations, the mechanics of the dynamometer is usually such that its transfer function goes as f² at low frequencies.  This means that low frequencies are quadratically suppressed: a vibration at 0.01 Hz will give a 100 times smaller electrical sensor signal than a vibration at 0.1 Hz of the same mechanical amplitude, which in turn gives a 100 times smaller electrical signal than a vibration at 1 Hz.    If one is interested in mechanical vibrations between 0.01 Hz and 1 Hz, the signal range at 0.01 Hz is hence 10 000 times smaller than the signal range at 1 Hz.  If digitised directly, the significant bits at 0.01 Hz sit about 13 bits lower than the significant bits of the 1 Hz range.

It can be useful to compensate this f² dependence of the signal amplitude with an amplifier that has a 1/f² behaviour in the range 0.01 Hz to 1 Hz (that is, with two poles in the transfer function somewhere below 0.01 Hz).   However, this means that we have a circuit with a gain of about 10 000 at 0.01 Hz relative to its gain at 1 Hz: 1 microvolt of noise at the input becomes 10 mV at the output, and it is hard to do much better.
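
A minimal sketch of such an equalising stage, modelled as two coincident real poles at an assumed 2 mHz (i.e. "somewhere below 0.01 Hz") and normalised to unity gain at 1 Hz:

```python
import numpy as np

f_pole = 0.002                        # assumed pole frequency, Hz (< 0.01 Hz)
w_p = 2 * np.pi * f_pole

def H_eq(f):
    """Two coincident real poles at f_pole: falls as 1/f^2 above the poles."""
    s = 1j * 2 * np.pi * f
    return (w_p / (s + w_p)) ** 2

norm = abs(H_eq(1.0))                 # normalise the gain to 1 at 1 Hz
for f in (1.0, 0.1, 0.01):
    print(f"{f:5.2f} Hz : relative gain ~ {abs(H_eq(f)) / norm:7.0f}")
# roughly 1, 100 and just under 10 000 (the last value falls slightly short
# because the poles are only a factor of five below 10 mHz)
```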

Our circuit should also have a high-pass filter of at least order 3 below 0.01 Hz, because we don't want to amplify the 0.001 Hz range even more; if we did, we would have a gain of 1 000 000 there, and our 1 microvolt of noise would become 1 volt of noise at the output, which would start giving saturation problems.   But mind you, no sophisticated filter is needed.  There is no need for a steep cut-off filter (say, a Cauer filter of order 4): we only need to cut away the low frequencies enough that we don't saturate any amplifier with the noise, or with the uninteresting signals, in that range.  The strong filtering can be done numerically.
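
The effect of this high-pass filter is easy to check numerically. Extending the previous sketch with an assumed third-order high-pass whose corner is placed at 5 mHz (three coincident poles, again only an illustration):

```python
import numpy as np

f_pole, f_hp = 0.002, 0.005     # equaliser poles and high-pass corner (assumed)
w_p, w_hp = 2 * np.pi * f_pole, 2 * np.pi * f_hp

def H(f):
    s = 1j * 2 * np.pi * f
    equaliser = (w_p / (s + w_p)) ** 2     # 1/f^2 equalising stage
    high_pass = (s / (s + w_hp)) ** 3      # third-order high-pass
    return equaliser * high_pass

norm = abs(H(1.0))                          # unity gain at 1 Hz
for f in (1.0, 0.01, 0.001):
    print(f"{f:6.3f} Hz : relative gain ~ {abs(H(f)) / norm:7.0f}")
# Without the high-pass, the relative gain at 1 mHz would be about 10^6;
# with it, the gain at 1 mHz stays below the gain in the band of interest,
# so 1 uV of low-frequency noise no longer threatens to saturate anything.
```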

There also needs to be an anti-aliasing filter, of course, but this one is very easy.   The sampling frequency will be much, much higher than the interesting domain: in our case, there is no reason to sample much below 1 kHz (and even at 1 kHz, a simple controller can do all the needed computations).  As such, in our case a first-order filter might even be sufficient (if we are not interested in more than 10 significant bits, or 60 dB), and a simple second-order filter is totally adequate: it suppresses any aliasing at 1 kHz by a factor of a million, or 120 dB.  In any case, the dynamic range would hardly be better than 40 dB if we already have a noise level of 10 mV due to the 1/f noise...
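
The attenuation figures above are just the asymptotic roll-off of a simple low-pass; a minimal check, assuming the cut-off sits at the 1 Hz top of the useful band:

```python
import math

f_cut = 1.0        # assumed anti-aliasing cut-off, Hz
f_samp = 1000.0    # sampling frequency of the example, Hz

for order in (1, 2):
    attenuation = (f_samp / f_cut) ** order    # asymptotic roll-off factor
    print(f"order {order}: attenuation at {f_samp:.0f} Hz ~ "
          f"{attenuation:9.0f}x ({20 * math.log10(attenuation):.0f} dB)")
# order 1: ~1000x (60 dB);  order 2: ~1 000 000x (120 dB)
```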

Note that in our specific example, we are in a particularly uncomfortable (but very often encountered) situation: the 1/f² amplitude equalisation corresponds to a 1/f⁴ power transfer function.  Combined with 1/f input noise, this gives rise to a 1/f⁵ noise power spectral density at the output!

So the essential analogue part of a low-frequency chain consists of:

  • In many cases a differential receiver, as ideally the signal is transported over a differential twisted pair, floating from ground/shield
  • A very low frequency high-pass filter (to limit the 1/f noise and prevent it from causing saturation)
  • A stage where the dynamics of the signal is equalised (in our example, we needed a 1/f² dependence)
  • A simple low-pass anti-aliasing filter

All more sophisticated treatment should be done digitally, because each analogue stage will introduce more 1/f noise.  In many cases, the differential receiver is the most damaging noise source, but it is essential for EMC reasons.  There are special constructions with discrete components that can seriously limit this noise source, and ENTROP-X can advise you on this topic.