Linear filters

Linear filters are linear systems (digital or analogue) which serve much the same purpose as a filter in chemistry: to let pass those parts of the input signal that we want to keep, and to reject those that we don't.  As linear, time-invariant systems have sine and cosine functions as eigenfunctions, the effect of a linear filter is simply to change the amplitude and/or phase of each such eigenfunction; this is exactly what the transfer function of the filter describes.  In fact, as long as the transfer function is nowhere exactly equal to zero, a linear filter is entirely reversible.  This means that the entropy of a signal is not modified by any linear filtering that doesn't eliminate frequency components entirely, as the filter is assumed noiseless (it cannot contribute any entropy) and reversible (the original signal can be restored).

So at first sight, the only thing a linear filter can do is let certain frequencies pass and stop others.  Although this is an important class of filters (frequency selective filters), there is more than meets the eye: several other types of filters serve other purposes.  Although linear filters cannot change the information content of a signal, they can make that information more readily accessible to subsequent non-linear operations.

In this contribution, we limit ourselves to filters which can be implemented equally well in analogue or in digital form.  As such, we won't discuss adaptive filters, Kalman filters, least mean square filters and the like, which can only realistically be implemented digitally, and which are, in certain respects, not linear.

Frequency filters

Although the main aim here is to make the reader aware that there is more to filtering than just frequency filtering, frequency filtering remains one of the most important applications of filtering.

Ideally, a frequency filter has a transfer function equal to 0 where we don't want the frequencies to pass, and equal to 1 where we want them to pass.  Classically, one has:

  • low pass
  • band pass
  • band reject
  • high pass

filters, although one can of course combine them to make more sophisticated selections.  Usually, one studies the low pass filter, as there are standard transformations that turn any low pass filter into a band pass or a high pass filter.
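As a minimal sketch of such transformations (in Python with SciPy; the prototype order, cutoff frequency and bandwidth below are arbitrary illustrative choices, not values from the text), one can start from an analogue low pass prototype and map it to a high pass or band pass response:

```python
import numpy as np
from scipy import signal

# Illustrative 3rd order analogue Butterworth low pass prototype, cutoff 1 rad/s.
b_lp, a_lp = signal.butter(3, 1.0, analog=True)

# Classical frequency transformations of that prototype:
w0 = 2 * np.pi * 1000                                              # 1 kHz, arbitrary choice
b_hp, a_hp = signal.lp2hp(b_lp, a_lp, wo=w0)                       # low pass -> high pass
b_bp, a_bp = signal.lp2bp(b_lp, a_lp, wo=w0, bw=2 * np.pi * 200)   # low pass -> band pass
```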

The ideal low pass filter has the following transfer function:

H(ω) = 1 for ω < ω0 = 2 π f0

H(ω) = 0 for ω > ω0 = 2 π f0

The corresponding impulse response is the famous sinc function:

h(t) = 2 f0 · sin(2 π f0 t) / (2 π f0 t) = (ω0 / π) · sin(ω0 t) / (ω0 t)

This transfer function presents both theoretical and practical difficulties for implementation.  The most obvious one is that the filter needs the entire future evolution of the input signal in order to compute the instantaneous output value; in other words, this filter is not causal.  Another problem is that there is no way to implement such a filter with analogue components.

This is why there exist several approximations of this ideal low pass filter which can be realised with analogue components, and/or which solve the problem of non-causality.  The price to pay, of course, is that some frequencies in the pass band will suffer amplitude and phase variations, and that some frequencies outside the pass band will nevertheless appear with finite amplitude in the output.  The filter will hence not be perfectly selective, and will introduce some distortion in the signal to be transmitted.  Nevertheless, these imperfections can be made arbitrarily small.
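As an illustration of this trade-off, here is a minimal numerical sketch (in Python with NumPy; the sample rate, cutoff frequency and number of taps are arbitrary illustrative values) of the standard windowed-sinc approach: the ideal sinc impulse response above is truncated, delayed by half its length to restore causality, and windowed to tame the resulting ripple.

```python
import numpy as np

# Illustrative values, not taken from the text: 1 kHz cutoff, 48 kHz sample rate.
fs = 48_000.0      # sample rate [Hz]
f0 = 1_000.0       # cutoff frequency [Hz]
N = 101            # number of taps (odd -> integer group delay of (N-1)/2 samples)

n = np.arange(N)
t = (n - (N - 1) / 2) / fs                 # time axis shifted by half the filter length
h = 2 * f0 / fs * np.sinc(2 * f0 * t)      # sampled ideal response; np.sinc(x) = sin(pi x)/(pi x)
h *= np.hamming(N)                         # window to reduce the truncation (Gibbs) ripple

# The result is a causal FIR approximation of the ideal low pass,
# delayed by (N-1)/2 samples.
x = np.random.default_rng(0).standard_normal(4096)   # white-noise test input
y = np.convolve(x, h, mode="same")                   # filtered output
```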

A filter type usually defines a whole family of filters, parameterised by an order.  When the order increases, the filter complexity increases, but the filter can meet ever more stringent limits on these imperfections.  In the limit of infinite order, in principle, every filter type converges in some sense to the ideal filter (at which point it becomes unrealisable).

The standard frequency filter families are in fact limited to transfer functions which are rational functions with all their poles in the left half of the complex plane.  The order of the rational function gives the order of the filter, and there is a representative of each order in each family.  The different families then differ by their optimisation criterion, and sometimes by an extra mathematical condition on the rational function.
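As a small illustration of this pole condition (in Python with SciPy; the order is an arbitrary choice), one can ask for an analogue Butterworth prototype in zero-pole-gain form and check that all poles indeed lie in the left half plane:

```python
from scipy import signal

# A 4th order analogue Butterworth prototype (cutoff 1 rad/s), in zero-pole-gain form.
z, p, k = signal.butter(4, 1.0, analog=True, output="zpk")

print("zeros:", z)                      # empty array: a pure-pole filter
print("poles:", p)                      # all real parts are negative
print("all poles in left half plane:", all(pole.real < 0 for pole in p))
```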

The best-known families are:

  1. Butterworth filters.  These are pure-pole filters (no finite zeros) designed such that, in the low pass version, the amplitude response is maximally flat at low frequencies.  That is, for a given order, the Butterworth filter is the one that affects the very low frequencies the least.
  2. Chebyshev filters.  The Chebyshev filters use a property of the Chebyshev polynomials: the Chebyshev polynomial of order n is the polynomial of order n which, when confined to the "box" from -1 to 1 in x and -1 to 1 in y, rises the most rapidly beyond x = 1.  In its "normal" version, the Chebyshev filter of order n allows, over the entire pass band, an amplitude ripple of a given size (say 3 dB, to be chosen by the user), and is then the steepest possible filter at the cutoff frequency.  That is, no other pure-pole filter of the same order whose pass band ripple stays within what the user specified will cut away the frequencies in the stop band more strongly than the Chebyshev filter.
  3. Inverse Chebyshev filters.  With inverse Chebyshev filters, we no longer have pure-pole filters: there are finite zeros.  The inverse Chebyshev filters are also based on the Chebyshev polynomials, but one now works with 1/s instead of s: the "confinement box" lies in the stop band, and the fast transition is towards the pass band.
  4. Elliptic filters (also called Cauer filters).  Elliptic filters are "double Chebyshev" filters, with a confinement in the pass band as well as in the stop band.  In the limit where the confinement ripple in the pass band goes to 0, the Cauer filter tends to the inverse Chebyshev filter; in the limit where the confinement ripple in the stop band goes to 0, the Cauer filter tends to a (normal) Chebyshev filter.  In the limit where both confinements go to zero, the Cauer filter tends to the Butterworth filter.  In fact, the Cauer filter is based upon the inverse Chebyshev filter theory, but instead of a Chebyshev polynomial, a Chebyshev rational function is used.  (A small design sketch comparing these four families follows this list.)
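As announced above, here is a minimal design sketch (in Python with SciPy; order, cutoff, ripple and attenuation values are arbitrary illustrative choices) that builds one representative of each of the four families and compares how much attenuation each achieves at twice the cutoff frequency:

```python
import numpy as np
from scipy import signal

# Hypothetical specification: 5th order, cutoff at 0.2 x Nyquist,
# 1 dB pass band ripple, 40 dB stop band attenuation (where applicable).
order, wc, rp, rs = 5, 0.2, 1, 40

filters = {
    "Butterworth":       signal.butter(order, wc, output="sos"),
    "Chebyshev I":       signal.cheby1(order, rp, wc, output="sos"),
    "Inverse Chebyshev": signal.cheby2(order, rs, wc, output="sos"),
    "Elliptic (Cauer)":  signal.ellip(order, rp, rs, wc, output="sos"),
}

for name, sos in filters.items():
    w, h = signal.sosfreqz(sos, worN=2048)
    # Attenuation reached at twice the cutoff: steeper families attenuate more there.
    att = -20 * np.log10(np.abs(h[np.searchsorted(w, 2 * wc * np.pi)]))
    print(f"{name:18s} attenuation at 2*wc: {att:5.1f} dB")
```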

In the same spirit, there are two classical filter types which are frequency filters, but where the emphasis is not so much on selectivity in frequency as on how well-behaved a pulse remains after passing through the filter.  They are:

  1. The Bessel filter.  The Bessel filter is a pure-pole filter such that, for a given order, the phase response is as linear as possible.  This means that a pulse gets distorted as little as possible (the timing information is preserved as well as possible); a small numerical sketch after this list illustrates the point.
  2. The Gaussian filter.  The Gaussian filter is a pure-pole filter whose impulse response is the best possible approximation to a Gaussian.  The Gaussian filter is the filter that can transmit a pulse of the smallest width (second order moment in time) for a given bandwidth.
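The sketch announced in the list (in Python with SciPy; order and cutoff are arbitrary illustrative values) compares the group delay of a Bessel filter with that of a Butterworth filter of the same order; a nearly constant group delay in the pass band is just another way of saying that the phase response is nearly linear:

```python
import numpy as np
from scipy import signal

# Same illustrative specification for both: 5th order, cutoff at 0.2 x Nyquist.
b_bes, a_bes = signal.bessel(5, 0.2, norm="phase")   # optimised for linear phase
b_but, a_but = signal.butter(5, 0.2)                 # optimised for flat amplitude

w, gd_bes = signal.group_delay((b_bes, a_bes), w=1024)
_, gd_but = signal.group_delay((b_but, a_but), w=1024)

# In the pass band the Bessel group delay stays nearly constant, so a pulse
# keeps its shape; the Butterworth delay varies much more near the cutoff.
band = w < 0.2 * np.pi
print("group-delay spread (samples), Bessel     :", np.ptp(gd_bes[band]))
print("group-delay spread (samples), Butterworth:", np.ptp(gd_but[band]))
```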

Signal detection filters

If we know some properties of the signal we want to detect (of course, we don't know the signal entirely, because otherwise it wouldn't carry any information!) and we know some properties of the noise with which this signal is contaminated, then there are optimal filters that allow us to recognise, as well as possible and in the time domain, those properties of our signal that are unknown.

The most spectacular filter in this area is probably the matched filter.  The matched filter is spectacular because it can detect a signal that is entirely buried in noise: "by eye" nobody can tell where the signal is, while the matched filter picks it out with no problem.  It is often used to impress people during signal processing demonstrations, but it can also be very useful technically.  The spectacular feat of the matched filter comes about because, in fact, the signal doesn't carry much information.  The matched filter can be used when we know in advance the exact shape of the signal in time.  The only two things we don't know (and hence the only two things that carry information) are the moment in time when the signal will arrive, and the amplitude of the signal.  We are also supposed to know the spectral density Sn(f) of the noise by which this signal is contaminated.  The matched filter then maximises the ratio of the output amplitude when the signal is present to the RMS output amplitude when the signal is not present.  In other words, the output will be some noise, with a large spike when the signal is found.  The more "involved" the signal shape is, the better the matched filter works.  Its initial application was in radar.  The emitter of a radar sends out a known signal shape, which is reflected by the objects to be detected.  The returning reflection is very small, often swamped in noise.  The information resides essentially in the time of arrival of the reflection (indicating the distance to the object) and its amplitude (indicating the size of the object), while the shape of the reflection is the same as the emitted waveform (up to some distortion).  The matched filter plays important roles in digital communication too.
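A minimal numerical sketch of the matched filter idea (in Python with NumPy; the pulse shape, noise level, arrival time and amplitude are all invented for the demonstration): for white noise, the matched filter reduces to correlating the input with the known pulse shape.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical known pulse shape (a short windowed chirp); its arrival time and
# amplitude are the only unknowns, i.e. the only information carriers.
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * (5 + 20 * t) * t) * np.hanning(200)

x = rng.standard_normal(5000)                      # white noise, sigma = 1
arrival, amplitude = 3210, 1.0                     # the "unknowns" to recover
x[arrival:arrival + 200] += amplitude * template   # pulse at roughly the noise level

# For white noise the matched filter is the time-reversed template,
# i.e. correlation of the input with the known pulse shape.
y = np.correlate(x, template, mode="valid")

print("estimated arrival index:", int(np.argmax(np.abs(y))))   # close to 3210
```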

The Wiener filter is somewhat akin to the matched filter, except that this time we don't know the exact time shape of the signal to be detected, but only its spectral density description.  We can say that this time there is also information in the phases of the frequency components.  As the signal carries much more information, the resulting filter has much less spectacular properties.  Nevertheless, the Wiener filter is the filter that brings out the signal to be detected as strongly as possible relative to the RMS value of the noise at the output when the signal is not present.  The Wiener filter hence has two elements in its problem description: the spectral density Ss(f) of the signal and the spectral density Sn(f) of the noise.  When the spectral densities of the signal and of the noise occupy separate frequency bands, the Wiener filter reduces to an ordinary frequency filter.
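In its standard estimation form (non-causal smoothing of a signal in additive, uncorrelated noise), the Wiener filter weights each frequency by Ss(f) / (Ss(f) + Sn(f)).  The sketch below (in Python with NumPy; both spectral densities are invented for the illustration) shows how this gain tends to 1 where the signal dominates and to 0 where the noise dominates:

```python
import numpy as np

# Illustrative spectral densities (not from the text): a low-frequency signal
# in noise whose power rises with frequency.
f = np.linspace(0, 0.5, 512)                 # normalised frequency
Ss = 1.0 / (1.0 + (f / 0.05) ** 4)           # signal PSD, concentrated at low f
Sn = 0.01 + (f / 0.5) ** 2                   # noise PSD, growing with f

# Non-causal Wiener (smoothing) filter: weight each frequency by the fraction
# of the total power there that belongs to the signal.
H = Ss / (Ss + Sn)

# Where Ss >> Sn the gain is ~1, where Sn >> Ss it is ~0: with well separated
# spectra the Wiener filter indeed degenerates into an ordinary frequency filter.
print("gain at f = 0   :", H[0].round(3))
print("gain at f = 0.5 :", H[-1].round(3))
```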

Implementations: digital versus analogue filtering.

The classical filter families mentioned above can be implemented digitally, but have their origins in analogue signal processing.  Although the domain of analogue signal processing is shrinking steadily, as digital counterparts become easier and easier to implement, there are nevertheless a few domains where analogue signal processing is still to be considered, or even cannot be replaced by digital signal processing.  These applications are essentially:

  • anti-aliasing before analogue-to-digital conversion.  Before any digitisation step, Nyquist's theorem requires that all frequency components be removed except those in one single band (usually the base band).  This filtering has of course to be analogue.  Anti-aliasing filtering will therefore always remain an analogue filter application.
  • high frequency filtering.  High frequencies imply high sample rates and high data processing rates.  The cost and complexity of such a digital implementation can be considerably higher than an analogue equivalent, especially if the filter function to be implemented is relatively simple.  For instance, if we want to limit a received radio signal to a specific band around, say, 440 MHz, doing so digitally would require advanced ADC and processing electronics, while a simple passive filter with a few coils and capacitors would have the same effect.
  • low signal level filtering.  A relatively weak signal (say, in the mV range) needs to be amplified before it can be digitised.  If the aim is, for instance, to remove noise and parasites before amplification, the filter doing so obviously needs to be analogue.  An analogue filter has no "lower bound" on the signal level it can treat.  Imagine a weak sensor signal of 5 microvolts that is potentially polluted with 50 or 60 Hz hum of several tens of mV from the mains.  A simple notch filter at 50 Hz would eliminate this hum before the signal reaches the sensitive amplifier; without the notch, one might saturate the amplifier or generate harmonic distortion due to the high interference level.
  • high power or high dynamic range filtering.  Passive analogue filtering has a potentially much higher dynamic range than active analogue and digital filters.  Filtering very small signals in the presence of very strong components is better done with passive analogue filters.
  • low power applications.  If power consumption matters, especially at high frequencies, analogue-to-digital conversion and digital data processing can be huge power hogs compared to an analogue filter.

Nevertheless, digital technology keeps improving, and where one would, for sure, have opted for an analogue filter 10 or 15 years ago, the digital option should be considered today, especially if it can be implemented in a relatively cheap FPGA.  Entrop-x can help, advise and design your solution, whether digital or analogue.

Analogue filters.

The classical filters have in fact been devised to be implementable as analogue filters.  Analogue filters can be divided into two classes: passive and active filters.  Passive filters are made up solely of passive elements: resistors, capacitors, inductors and transformers.  Active filters also include amplifiers (nowadays, semiconductor amplifiers).

Passive filters are harder to design and have more constraints, but they also have some advantages over active filters: they can be essentially noiseless, they are always stable, and they can reach very high bandwidths.  Moreover, they don't consume any power, and they are very robust and reliable.  Usually, passive filters are limited to fairly simple transfer functions, but where they can be used, they should really be considered.

Active filters are much easier to design, because one can split the filter into smaller, independent sections.  The amplifiers make it possible to "forget" the impedance at the output of a section.  As such, it is easier to build complex filters as active filters.  Tuning is also easier with active sections, as each section can be tuned independently.  The disadvantages of active filters come from the fact that one uses active elements.  These need a power supply, they are more fragile and somewhat less reliable than passive components, and they can show non-ideal behaviour, like extra noise, non-linearity, saturation and the like.  This means that one needs to know the properties of the input signal much better when using an active filter than when using a passive filter.  It can in fact be interesting to combine both techniques, with a first passive section followed by active sections.  The passive section can already eliminate many unwanted components of the input signal before they can wreak havoc in the active sections.

There are different ways to build passive filters, and there are also different ways to build active filters.  It requires some expertise to pick the best technology for the application at hand. 

Digital filters

Digital filters are essentially algorithms, and there are many different ways to perform a given calculation.  Nevertheless, there are two broad classes of approaches, called FIR and IIR filters.  FIR (Finite Impulse Response) filters are conceptually much simpler and much more flexible, but require more computational resources than IIR (Infinite Impulse Response) filters.  IIR filters are conceptually much closer to the classical filters.  As they require far fewer computational resources, they can be very powerful: an IIR filter can easily be implemented in a small FPGA and run at 100 MHz or more.  Wiener and matched filters, on the other hand, are often only easily implementable as FIR filters.  One might well consider a digital IIR filter instead of a sophisticated analogue active filter if power consumption is not an absolute constraint.  Digital filters don't age or drift, and they can easily be changed by modifying the software/firmware, which cannot be said of an analogue filter.
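A minimal sketch of the two approaches (in Python with SciPy; sample rate, cutoff and filter lengths are arbitrary illustrative values): the IIR filter needs only a handful of recursive coefficients, while a linear-phase FIR design needs many more taps for a comparable cutoff.

```python
import numpy as np
from scipy import signal

fs = 1000.0                     # illustrative sample rate [Hz]
fc = 100.0                      # illustrative cutoff [Hz]

# IIR: a classical 4th order Butterworth, only a handful of coefficients.
sos = signal.butter(4, fc, fs=fs, output="sos")

# FIR: a linear-phase windowed-sinc design needing far more taps.
taps = signal.firwin(101, fc, fs=fs)

x = np.random.default_rng(2).standard_normal(10_000)
y_iir = signal.sosfilt(sos, x)          # recursive, cheap per sample
y_fir = signal.lfilter(taps, 1.0, x)    # 101 multiply-accumulates per sample

print("IIR coefficients:", sos.size, "   FIR taps:", taps.size)
```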

Conclusion

A filtering problem should take many aspects of the problem to be solved into account in order to decide what type of filter to use, whether to use digital or analogue filtering, and which technology to use.  One might also consider a mix of different technologies.  For instance, one might first have a simple passive filter that protects the rest of the circuit against totally unwanted components; next, a more sophisticated active filter that also serves as an anti-aliasing filter, followed by a digital filter implemented as an IIR filter in an FPGA, where the most sophisticated processing happens.