
T312_1 Electronic applications
About this free course
This free course is an adapted extract from the Open University course T312 Electronics: signal processing, control and communications  http://www.open.ac.uk/courses/modules/t312 .
This version of the content may include video, images and interactive content that may not be optimised for your device.
You can experience this free course as it was originally designed on OpenLearn, the home of free learning from The Open University –
Electronic applications
There you’ll also be able to track your progress via your activity record, which you can use to demonstrate your learning.
Copyright © 2020 The Open University
Intellectual property
Unless otherwise stated, this resource is released under the terms of the Creative Commons Licence v4.0 http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_GB . Within that The Open University interprets this licence in the following way: www.open.edu/openlearn/about-openlearn/frequently-asked-questions-on-openlearn . Copyright and rights falling outside the terms of the Creative Commons Licence are retained or controlled by The Open University. Please read the full text before using any of the content.
We believe the primary barrier to accessing high-quality educational experiences is cost, which is why we aim to publish as much free content as possible under an open licence. If it proves difficult to release content under our preferred Creative Commons licence (e.g. because we can’t afford or gain the clearances or find suitable alternatives), we will still release the materials for free under a personal end-user licence.
This is because the learning experience will always be the same high-quality offering, and that should always be seen as positive – even if at times the licensing is different to Creative Commons.
When using the content you must attribute us (The Open University) (the OU) and any identified author in accordance with the terms of the Creative Commons Licence.
The Acknowledgements section is used to list, amongst other things, third-party (Proprietary) licensed content which is not subject to Creative Commons licensing. Proprietary content must be used (retained) intact and in context to the content at all times.
The Acknowledgements section is also used to bring to your attention any other Special Restrictions which may apply to the content. For example there may be times when the Creative Commons Non-Commercial Share-Alike licence does not apply to any of the content even if owned by us (The Open University). In these instances, unless stated otherwise, the content may be used for personal and non-commercial use.
We have also identified as Proprietary other material included in the content which is not subject to Creative Commons Licence. These are OU logos, trading names and may extend to certain photographic and video images and sound recordings and any other material as may be brought to your attention.
Unauthorised use of any of the content may constitute a breach of the terms and conditions and/or intellectual property laws.
We reserve the right to alter, amend or bring to an end any terms and conditions provided here without notice.
All rights falling outside the terms of the Creative Commons licence are retained or controlled by The Open University.
Head of Intellectual Property, The Open University
9781473031395 (.kdl) 9781473031401 (.epub)
Introduction
The modern world would not be able to function without electronic systems. Using a variety of teaching material, including videos and interactive activities, this free course, Electronic applications, will show you how electronic systems can be found everywhere in communications, control and signal processing. It focusses on electronic filters, particularly digital filters.
Note that the interactive activities have been designed to work in the Firefox and Chrome browsers, so you will need to use one of these browsers if you want to access the interactive content.
This OpenLearn course is an adapted extract from the Open University course T312 Electronics: signal processing, control and communications
After studying this course, you should be able to:
understand the mathematical representations and techniques for manipulating signals in the time and frequency domains
explain the application, benefits and limitations of communications, control and signal processing techniques in real-world applications
select and apply appropriate techniques to the analysis of time-varying signals represented in both the time and frequency domains
use a digital filter to remove Gaussian noise from a signal.
1 Electronics everywhere
Begin by considering the following situation:
You are sitting at home watching the television, and a woman in a diving suit, surrounded by sharks, is speaking directly to you, live on television from the Caribbean (Figure 1).
Figure 1 Swimming with sharks
This figure is a screenshot taken from a live broadcast. It says BBC in the top left corner and Blue Planet LIVE in the top right corner. A woman in a diving suit is shown underwater with her hand touching a shark in the foreground. Several more sharks are shown swimming around behind her.
It is possible to take this type of situation for granted, but when thinking about it more closely you might wonder how this is possible. How can someone swimming underwater in the Caribbean be seen and heard by you in your house in the UK?
In this course you will start thinking about everything that’s involved in these electronic systems – from signal processing to control and communications – which allow you to watch a live broadcast from the Caribbean.
If you have studied some electronics before, you may be familiar with analogue components such as resistors, capacitors and inductors, as well as more complex components such as the operational amplifier. You may also know something about digital circuits, which include logic gates, microprocessors and even software. And you may be aware of some of the fundamental principles that describe the way that voltages and currents in a circuit are related, including Ohm’s law and Kirchhoff’s voltage and current laws.
Whilst all of this is necessary to understand electronics, it doesn’t really explain how electronics allows you to watch a live broadcast from the Caribbean. In part, this is due to the sheer scale of electronic systems. A typical computer in 2020 has around 10 000 000 000 (or 10 billion) transistors. Nobody could sit down and design such a computer armed with Ohm’s and Kirchhoff’s laws alone. Electronic systems have to be broken down into subsystems, and these subsystems in turn have to be broken down into further subsystems. Go far enough and you’ll find that you can explain what is going on using Ohm’s and Kirchhoff’s laws, but that’s a long way down.
At a higher level, when designing subsystems, descriptions like ‘the control subsystem’ or ‘the communications subsystem’ are used. These are designed using principles that describe the function of the subsystem. An electronics engineer would use these system-level principles to design the subsystem, then implement it electronically.
So, what are the systems and subsystems that allow you to watch someone swimming with sharks live on television?
1.1 Three key subsystems
In order to watch someone swimming with sharks live on television, somebody or something must be operating the camera and recording the sounds. The images and sounds are converted into electronic signals; these signals are then sent as electromagnetic waves, via satellite, to your satellite dish at home. This then delivers the electronic signals to your television, which converts them back into images and sounds. As you can see, there are several subsystems involved:
sensors – converting images and sounds into electronic signals
processors – converting the electronic signals into a form that can be stored and transmitted as radio waves
communication – actually transmitting the waves, and receiving them at the other end
processors – converting the waves back into images and sounds
display – your television.
These components can be broadly categorised into signal processing and communications.
Signal processing explains how signals are manipulated electronically to filter out noise and to alter the signals so that they can be communicated.
Communications looks at how the communication subsystems work, showing how the electronic signals are converted to radio waves for transmission and reception.
Are there any other possible subsystems?
Yes, there are many, but the one other major area of electronics is control. In the scenario above, it’s not obvious how control comes into it; however, there are probably control systems involved, carrying out tasks such as the autofocusing function on the cameras.
Another and perhaps more obvious way you could introduce control would be to make your viewing experience more interactive. What if you could control the camera remotely so that you decide what you want to see? Is that so far-fetched? There are already systems which allow you to control your home from your phone – you can adjust the heating, switch on lights, and even see who is at the door. So why not have the full interactive experience of seeing and hearing what you want to see and hear by controlling the live camera equipment through your television?
What has been described here so far is mainly for entertainment. However, the same principles can be applied to other systems, such as a Mars rover (Figure 2). This is a vehicle that travels around the surface of Mars semi-autonomously, collecting samples and analysing them, then sending the data back to Earth. Such a system is clearly making use of signal processing and communication, but it is also using control subsystems to move the vehicle around.
Figure 2 Mars Rover
This figure is an image of a Mars rover, which is a small robot with six rugged wheels and a body with what appear to be solar panels on top. A ‘neck’ extends from the top of the body, leading to a ‘head’ that appears to contain two cameras. An arm extends from the front of the rover and the body has what looks like an aerial sticking up from it. The rover is positioned on reddish rocky terrain with hills in the background under a pale pink sky.
Systems that use signal processing, control and communications are not just found on other planets. On Earth there are similar systems, including semi-autonomous delivery robots that can bring goods to homes or places of work (Figure 3).
Figure 3 Delivery robot
This figure is a photograph of a delivery robot moving along a pavement. The robot has six thin wheels and a white, rounded body that is large enough to contain a small amount of shopping. The top of the robot is black and gently domed, and it has a long aerial sticking up from one side with a flag on top. At the front of the robot are what look like cameras or sensors, and it also has illuminated headlights. People are walking back and forth on the pavement alongside the robot, and there is a row of shops in the background.
For the rest of this course you will focus on one of these three subsystems: signal processing. You will look at some of the basic principles of signal processing and how it is implemented.
2 Signal processing
Signal processing is a branch of electronics concerned not just with the properties of signals, but also with the properties of the devices and systems that carry the signals. The objective of signal processing is to optimise the recovery of some particular aspect of the signal that is of most interest, or to optimise the use of a communication medium.
Signal processing usually means filtering a signal. This could be to reduce or remove interference; it might mean changing the signal so that the communication channel can be used more economically or more efficiently; or it might mean processing a signal so that it can be sampled satisfactorily in an analogue-to-digital converter.
What is filtering?
In the context of electronic signals, filtering means altering the signal so that some aspects of the signal are removed while other parts of the signal remain. In this section you will learn about the difference between ‘ideal’ filters and real filters. You will also look at different types of filters and their characteristics and a type of graph used to show the frequencydependent gain of a filter.
One of the most common uses of filters is to reduce the intrusion of unwanted signals, or noise, into wanted signals. A common example of filtering is radio and television tuning. Here, the antenna picks up multiple transmissions being broadcast on different frequencies; the tuning circuit ideally passes the wanted broadcast to the output and blocks all the others, which are on different frequencies from the wanted broadcast.
Filters can be analogue or digital. Analogue filters use components such as inductors, capacitors, resistors, and sometimes operational amplifiers (op-amps). Digital filters are basically computers and achieve their filtering effects through mathematical operations on the sample values of a digitised signal. You will look at these more closely in Section 3. In this section, however, you will focus only on analogue filters.
2.1 Frequency-dependent gain
In Figure 4, the box in the middle represents a device with frequency-dependent gain $G\left(\omega \right)$ – in other words, a filter. Here the symbol $\omega $ is being used to represent angular frequency, measured in radians per second.
Figure 4 Filter with input amplitude ${V}_{\text{in}}$ and output amplitude ${V}_{\text{out}}$
This figure is a block diagram consisting of a single block with an input of ‘V subscript in’ and an output of ‘V subscript out’. The block itself is labelled ‘G as a function of omega’.
A sinusoidal input signal with amplitude ${V}_{\text{in}}$ is applied to the filter, and the output is a sinusoidal signal with amplitude ${V}_{\text{out}}$ . The voltage gain of the filter at the frequency in question is $\frac{{V}_{\text{out}}}{{V}_{\text{in}}}$ – in other words, it is the ratio of the output voltage amplitude to the input voltage amplitude. So, at any particular frequency,
$$\frac{{V}_{\text{out}}}{{V}_{\text{in}}}=G$$
The voltage gain can be expressed simply as a number or fraction (or decimal). For example, a gain of 2 means that the output amplitude is twice the input amplitude. A gain of $\frac{1}{4}$ (or 0.25) means that the amplitude of the output is onequarter that of the input.
The gain above is referred to as ‘voltage gain’ to distinguish it from gain expressed as a ratio of output power to input power. In this course this way of expressing gain will be referred to as power gain. As power ratios can be expressed in decibels, power gains are almost invariably given in decibels.
For the time being you will only consider sinusoidal inputs and outputs, as these have a single frequency. This limitation to a single frequency helps to clarify what a filter does. In practice, though, a filter would typically operate on a complex waveform consisting of many frequency components. In such a case, the inputs and outputs would themselves be functions of frequency.
Although two different types of gain have been mentioned (voltage gain and power gain), you will often just see the word gain used by itself. If the gain is given as an ordinary numerical value (such as 2, 10, 3000 or 0.001), voltage gain is almost invariably indicated. If the numerical value is in decibels, power gain is being represented.
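The relationship between the two kinds of gain can be sketched in a few lines of Python (an illustrative sketch only; the code and function names are not part of the course materials). Since power is proportional to the square of voltage, a power gain in decibels is 20 times the base-10 logarithm of the voltage gain:

```python
import math

def voltage_gain_to_db(gain):
    """Power gain in decibels for a given voltage gain (ratio).

    Power is proportional to voltage squared, so
    10*log10(gain**2) = 20*log10(gain)."""
    return 20 * math.log10(gain)

def db_to_voltage_gain(db):
    """Inverse conversion: decibels back to a voltage ratio."""
    return 10 ** (db / 20)

# A voltage gain of 2 is roughly +6 dB; a gain of 0.001 is -60 dB.
print(round(voltage_gain_to_db(2), 2))      # 6.02
print(round(voltage_gain_to_db(0.001), 2))  # -60.0
```

Note that a voltage gain below 1 (an attenuation) comes out as a negative number of decibels.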
The output of a filter differs from the input not only in amplitude but (usually) also in phase. You will look more closely at the question of phase later in the course. However, for now you will continue to focus on gain. Complete Activity 1 to test your understanding so far.
Activity 1
Allow about 5 minutes
A passive filter has an input signal of ${v}_{\text{in}}\left(t\right)=10\text{\hspace{0.17em}}\mathrm{sin}200t$ volts. The steady-state output is ${v}_{\text{out}}\left(t\right)=2\text{\hspace{0.17em}}\mathrm{sin}\left(200t-0.6\pi \right)$ volts. What is the gain as a voltage ratio?
The input to the filter in part (a) remains unchanged in amplitude, but its frequency changes. The steady-state output is now found to be ${v}_{\text{out}}\left(t\right)=5\text{\hspace{0.17em}}\mathrm{sin}\left(100t-0.2\pi \right)$ volts. What is the new gain as a voltage ratio?
Here ${V}_{\text{in}}$ , the amplitude of ${v}_{\text{in}}\left(t\right)$ , is 10 V and ${V}_{\text{out}}$ , the amplitude of ${v}_{\text{out}}\left(t\right)$ , is 2 V. Therefore the gain is $\frac{2\text{}\text{V}}{10\text{}\text{V}}$ , or 0.2.
${V}_{\text{in}}$ is still 10 V and ${V}_{\text{out}}$ is now 5 V. Therefore the gain is now $\frac{5\text{}\text{V}}{10\text{}\text{V}}$ , or 0.5.
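The calculation in this activity can be checked with a short sketch (illustrative only; the function name is ours):

```python
def voltage_gain(v_in_amplitude, v_out_amplitude):
    """Gain as the ratio of output amplitude to input amplitude."""
    return v_out_amplitude / v_in_amplitude

# Activity 1: input amplitude 10 V, output amplitudes 2 V and 5 V.
print(voltage_gain(10, 2))  # 0.2
print(voltage_gain(10, 5))  # 0.5
```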
In the next section you will discover the characteristics of an ideal filter.
2.2 Gain functions of ideal filters
Figure 5 shows some common types of ‘ideal’ filter. Ideal filters are sometimes characterised as ‘brick-wall’ filters because graphs of their gain functions have perfectly horizontal or vertical lines. In practice, such brick-wall gain functions can never be achieved.
Figure 5 Four types of ideal filter: (a) low-pass filter; (b) high-pass filter; (c) band-pass filter; (d) band-stop (notch) filter
This figure consists of four graphs of gain against frequency, representing four types of filter. In each case, the gain is either at 1 or at 0, depending on the frequency. Areas where the gain is 1 are labelled ‘passband’, while areas where the gain is 0 are labelled ‘stop band’. The transitions between 1 and 0 (or vice versa) are vertical, and labelled ‘cutoff frequency’. Part (a) is a low-pass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 1 (passband). At frequencies above the cutoff, the gain is 0 (stop band). Part (b) is a high-pass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 0 (stop band). At frequencies above the cutoff, the gain is 1 (passband). Part (c) is a band-pass filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 0 (stop band). At frequencies between f subscript c1 and f subscript c2, the gain is 1 (passband). Part (d) is a band-stop filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 1 (passband). At frequencies between f subscript c1 and f subscript c2, the gain is 0 (stop band). Because the stop band is a narrow region between two passbands, the stop band is also known as the notch in this type of filter.
Notice that each of the four types of filter has a name summarising what it does. For example, the low-pass filter (Figure 5(a)) passes all frequencies below the cutoff frequency ${f}_{\text{c}}$ and blocks all frequencies above it.
In all the filters, a frequency band where the signal is passed is called a passband , and a frequency band where the signal is blocked is called a stop band . All the filters in Figure 5 have one or more passbands, one or more stop bands, and one or more cutoff frequencies.
In all the passbands shown, the voltage gain is 1, but this is a convention for this type of diagram. The actual passband gain depends on various factors such as whether the filter is passive (that is, consists only of passive components, such as resistors, capacitors and inductors) or active (that is, includes amplification as well as passive components).
Similarly, all the stop bands are shown with a voltage gain of 0. In practice, the gain is likely to be above 0. However, provided the stopband gain is several orders of magnitude below the passband gain, the term ‘stop band’ is reasonable.
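The four brick-wall gain functions of Figure 5 can be written as simple piecewise functions. The sketch below is our own illustration (not part of the course), with the passband gain fixed at the conventional value of 1:

```python
def ideal_lowpass(f, f_c):
    """Ideal low-pass filter: gain 1 below the cutoff f_c, 0 above it."""
    return 1 if f <= f_c else 0

def ideal_highpass(f, f_c):
    """Ideal high-pass filter: gain 0 below the cutoff, 1 above it."""
    return 1 if f >= f_c else 0

def ideal_bandpass(f, f_c1, f_c2):
    """Ideal band-pass filter: gain 1 between the two cutoffs, 0 outside."""
    return 1 if f_c1 <= f <= f_c2 else 0

def ideal_bandstop(f, f_c1, f_c2):
    """Ideal band-stop (notch) filter: gain 0 between the two cutoffs."""
    return 0 if f_c1 <= f <= f_c2 else 1

# A low-pass filter with a 1 kHz cutoff passes 500 Hz but blocks 2 kHz.
print(ideal_lowpass(500, 1000), ideal_lowpass(2000, 1000))  # 1 0
```

Real filters, as the following sections show, never have such abrupt transitions between 1 and 0.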
In the next section you will see what is meant by interference and noise, which are what you are trying to remove from a signal using a filter.
2.3 Types of interference
In the same way that Figure 5 shows simplified models of filters, there exist simplified models of the type of signals and noise we might want to apply filters to.
A common type of interference is adjacent channel interference, in which the interfering signal is in a frequency band above or below that of the wanted signal, as shown in Figure 6.
Figure 6 Adjacent channel interference: (a) below wanted signal; (b) above wanted signal
This figure consists of two graphs of signal strength against frequency. Each one has a wider rectangle representing interference and a smaller rectangle of the same height representing the wanted signal; these two rectangles are shown side by side on the horizontal axis with a gap in between them. In graph (a), the interference is at a lower range of frequencies than the wanted signal, while in graph (b), the interference is at a higher range of frequencies than the wanted signal.
In each case, an appropriate filter can be used to reduce the interference. For example, you can see that in the case of Figure 6(b), where the adjacent channel interference is above the wanted signal, a low-pass filter can be used, with the passband coinciding with the wanted signal and the stop band coinciding with the interference. With this arrangement an ideal filter could, in principle, remove the interference altogether. (However, as you will see later, the reality is somewhat different.)
Life is trickier when the signal and interference overlap in frequency, as in the narrowband interference and wideband interference shown in Figure 7. Here, no form of filtering can give us what we would like: a noise-free output with no adverse effect on the wanted signal.
Figure 7 (a) Narrowband interference; (b) wideband interference
This figure consists of two graphs of signal strength against frequency. Each one shows two rectangles of the same height, a narrow one and a much wider one, with the frequency band of the narrow rectangle completely overlapping with part of the frequency band of the wider rectangle. In graph (a), the wanted signal has a wide bandwidth, and narrowband interference overlaps with part of it. In graph (b), there is a broad spectrum of wideband interference, and a narrow wanted signal overlaps with part of it.
Note that, just as brickwall filters are unachievable in practice, the brickwall frequency bands and signal strengths of these various types of interference are not achievable in practice. In realworld situations, the boundaries are less clearly defined.
Now complete Activity 2 and apply the correct filter to the interference type.
Activity 2
Allow about 5 minutes
For each of the following types of interference, suggest a suitable filter to improve the signal-to-noise ratio, and say how the passbands and stop bands should be arranged. Explain any drawbacks.
Narrowband interference
Wideband interference
For narrowband interference you can use a band-stop filter, with the stop band centred on the interference and of an equal width. This could in principle remove the interference. The drawback is that the stop band also removes some signal power.
When you have wideband interference, a suitable remedy is to use a band-pass filter with the passband centred on the wanted signal and equal in width to the bandwidth of the signal. However, although this gives the best signal-to-noise ratio, it cannot remove all the interference.
Having seen the characteristics of ideal filters and the sorts of interference that you want to remove, the next section will show you what real filters are like.
2.4 First-order filters
In addition to the filter categories already introduced (low-pass, band-pass, etc.), filters are categorised by their order . The order of a filter is determined by the form of the differential equation governing the filter’s behaviour. The simplest type of filter, with the simplest equation, is called a first-order filter. Higher-order filters are more complex than first-order filters, both in their circuitry and in the differential equation that governs them. The higher the order, the more effective the filter.
An example of a first-order filter is the simple circuit in Figure 8.
Figure 8 First-order filter
This figure is a circuit diagram in which the input voltage, V subscript in, is produced by an alternating source. The source is in series with a resistor, R , and a capacitor, C . The output voltage, V subscript out, is taken across the capacitor.
Activity 3
Allow about 5 minutes
Which of the four categories of filter shown in Figure 5 does the filter in Figure 8 belong to? You should be able to work it out from the behaviour of the capacitor at low frequencies and high frequencies. Explain your answer.
Figure 5 (repeated) Four types of ideal filter: (a) low-pass filter; (b) high-pass filter; (c) band-pass filter; (d) band-stop (notch) filter
This figure consists of four graphs of gain against frequency, representing four types of filter. In each case, the gain is either at 1 or at 0, depending on the frequency. Areas where the gain is 1 are labelled ‘passband’, while areas where the gain is 0 are labelled ‘stop band’. The transitions between 1 and 0 (or vice versa) are vertical, and labelled ‘cutoff frequency’. Part (a) is a low-pass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 1 (passband). At frequencies above the cutoff, the gain is 0 (stop band). Part (b) is a high-pass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 0 (stop band). At frequencies above the cutoff, the gain is 1 (passband). Part (c) is a band-pass filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 0 (stop band). At frequencies between f subscript c1 and f subscript c2, the gain is 1 (passband). Part (d) is a band-stop filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 1 (passband). At frequencies between f subscript c1 and f subscript c2, the gain is 0 (stop band). Because the stop band is a narrow region between two passbands, the stop band is also known as the notch in this type of filter.
It is a low-pass filter.
At 0 Hz, or DC, the capacitor is open circuit. Under those circumstances, all the input voltage would appear on the output. At high frequencies, the capacitor becomes increasingly like a short circuit, and the output voltage decreases as frequency increases. Therefore low frequencies are passed and high frequencies are (to some extent) blocked.
Low-pass and high-pass filters can be first-order, second-order, third-order, and so on. However, band-pass filters and band-stop filters must be second-order or higher, although it is possible to achieve their effect by combining two first-order filters.
Despite being the simplest type, a first-order filter is not simple to analyse mathematically, as its behaviour is governed by a first-order differential equation. This is what the voltage gain function of the filter in Figure 8 turns out to be:
$$G={\displaystyle \frac{{V}_{\text{out}}}{{V}_{\text{in}}}}={\displaystyle \frac{1}{\sqrt{1+{\left(\omega RC\right)}^{2}}}}$$
You could plot a graph of the gain function against frequency from this equation, but the details would depend on the choice of $R$ and $C$ . However, irrespective of the values chosen for $R$ and $C$ , the general shape and certain other properties of the gain function would be the same for any first-order low-pass filter. Therefore, rather than show how the gain function changes with frequency for particular values of $R$ and $C$ , in the next section you will look at a ‘normalised’ version of the gain function. (Normalisation is the presentation of information in a generalised way that can easily be adapted to specific cases.)
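As a quick check of the gain function above, the sketch below evaluates it for one illustrative choice of $R$ and $C$ (the component values are our own, not from the course):

```python
import math

def first_order_lowpass_gain(omega, R, C):
    """Voltage gain of the RC low-pass filter of Figure 8
    at angular frequency omega, in radians per second."""
    return 1 / math.sqrt(1 + (omega * R * C) ** 2)

# Illustrative values: R = 1 kOhm, C = 1 uF, so omega*R*C = 1
# (and hence the gain is 0.707) at omega = 1000 rad/s.
R, C = 1e3, 1e-6
print(round(first_order_lowpass_gain(10, R, C), 3))      # 1.0   (passband)
print(round(first_order_lowpass_gain(1000, R, C), 3))    # 0.707 (cutoff)
print(round(first_order_lowpass_gain(100000, R, C), 3))  # 0.01  (stop band)
```

Whatever values of $R$ and $C$ are used, the gain is close to 1 well below the frequency at which $\omega RC=1$ and falls away steadily above it.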
2.5 Normalised first-order low-pass filters
A graph of gain function against frequency is called a frequency response or a Bode plot . A normalised first-order low-pass frequency response (or Bode plot) is shown in Figure 9.
Figure 9 Normalised first-order low-pass frequency response
This figure is a graph of gain against normalised frequency. The horizontal axis shows normalised frequency on a logarithmic scale from 0.001 to 1000. Two vertical axes are used for the gain, one showing power gain in decibels from minus 60 to 10, and the other showing the equivalent voltage gain. The graph line starts at a power gain of 0 decibels (voltage gain of 1) for a normalised frequency of 0.001, and remains at this value up to a normalised frequency of around 0.3. The graph line then begins to curve gently downwards, reaching a power gain of minus 3 decibels (voltage gain of 0.707) at a normalised frequency of 1. This frequency is the cutoff frequency. Above the cutoff frequency, the graph line continues to curve downwards until it becomes a diagonally descending straight line. This line has a power gain of minus 20 decibels (voltage gain of 0.1) at a normalised frequency of 10 and a power gain of minus 40 decibels (voltage gain of 0.01) at a normalised frequency of 100, meaning that the slope of the line is minus 20 decibels per decade. Frequencies below the cutoff frequency of 1 are in the passband, while frequencies above the cutoff frequency are in the stop band.
You can see that on the horizontal axis of the graph, the frequency scale is logarithmic. This means that equally spaced points correspond not to 1, 2, 3 Hz and so on, but to 1, 10, 100 Hz and so on: the frequency increases by a factor of 10 at each interval.
Secondly, the vertical axis shows the power gain, measured in decibels. This again is a logarithmic measure, designed to express ratios of powers; it is probably most familiar from the measurement of sound.
The reason logarithmic axes are used is that the shape of the graph then becomes very nearly a horizontal straight line in the passband and a sloping straight line in the stop band.
Just from the shape of the graph, it is clear that this gain function of a real filter is very different from the idealised ones that you saw earlier. For example, there is no sharp distinction between the passband and the stop band, and consequently no distinct cutoff frequency. By convention, the frequency at which the power gain drops 3 dB below the passband gain (or, in terms of voltage ratio, falls to 0.707 times the passband gain) is called the cutoff frequency, and this is true also for an active filter where the passband gain is likely to be other than 1.
In this normalised graph, the frequency axis is the normalised part. Notice that the frequency axis has no units, and the numbers on it are relatively low. (In electronics, such apparently low frequencies are seldom of interest; frequencies ranging from hundreds of hertz to gigahertz are more typical.) The normalised frequencies have been calculated by dividing actual frequencies by the cutoff frequency. This is why the cutoff point sits at 1 on the normalised frequency axis.
Dividing one frequency by another results in a pure number (that is, one without units), which is why the normalised frequency axis has no units. Similarly, dividing an output voltage by an input voltage produces a gain figure that also has no units. If the gain is converted to a decibel value, then strictly speaking that too has no units, although in Figure 9 ‘dB’ has been added after the number as though it were a unit as a reminder of the logarithmic nature of the function used.
As an example of how to translate normalised frequencies to actual frequencies, suppose a practical low-pass filter had a cutoff frequency of ${10}^{4}$ radians per second. Simply multiplying all the numbers on the horizontal axis of the normalised graph by ${10}^{4}$ and giving the unit as ‘radians per second’ would transform the graph into one showing gain against actual frequency for that particular filter. If the cutoff frequency were ${10}^{4}$ Hz (rather than radians per second), the procedure would be just the same: multiply all the numbers on the axis by ${10}^{4}$ and give the unit as Hz.
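Translating a normalised frequency axis to actual frequencies is therefore just a multiplication by the cutoff frequency (a minimal sketch; the function name is ours):

```python
def denormalise(normalised_frequencies, f_c):
    """Scale normalised frequency-axis values by the cutoff frequency f_c.
    The results take whatever unit f_c is expressed in (Hz or rad/s)."""
    return [n * f_c for n in normalised_frequencies]

# Part of the normalised axis of Figure 9, rescaled for a cutoff
# of 10**4 rad/s, as in the example above:
axis = [0.01, 0.1, 1, 10, 100]
scaled = denormalise(axis, 10**4)
print(scaled[2])  # 10000 – the cutoff sits at 1 on the normalised axis
```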
All firstorder lowpass filters have a gain function of this shape and with these slopes. The passband gain might differ if the filter is active (that is, if it incorporates amplification). For firstorder passive filters (that is, those without amplification) the passband gain cannot exceed 1.
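The curve described above can be reproduced numerically. The sketch below uses the standard gain function of a first-order low-pass filter, |G| = 1/√(1 + (f/f_c)²); this formula is consistent with the behaviour the text describes but is not written out in the course, so treat it as an assumption, and the function name is my own.

```python
import math

def first_order_lowpass_gain(f_norm):
    """Voltage-gain magnitude of a unity-gain first-order low-pass
    filter at normalised frequency f_norm = f / f_cutoff
    (assumed formula: 1 / sqrt(1 + f_norm**2))."""
    return 1.0 / math.sqrt(1.0 + f_norm ** 2)

# Well below the cutoff the gain is essentially 1 (0 dB); at the cutoff
# frequency (normalised frequency 1) it falls to 1/sqrt(2), about 0.707,
# which is -3 dB: the conventional definition of the cutoff point.
print(first_order_lowpass_gain(0.01))                  # ~1.0
print(first_order_lowpass_gain(1.0))                   # ~0.707
print(20 * math.log10(first_order_lowpass_gain(1.0)))  # ~-3.01 dB
```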
Activity 4
Allow about 10 minutes
An alternating voltage source is connected to the input of a lowpass filter. What power is drawn from the output of the filter if the voltage source operates:
well below the cutoff frequency of the filter and supplies 0.2 W to the input of the filter?
at the cutoff frequency of the filter and supplies 0.1 W to the input of the filter?
Well below the cutoff frequency, the gain of the filter is 0 dB, which is a power ratio of 1:1. Hence the power drawn from the output of the filter is the same as that supplied at the input, 0.2 W.
At the cutoff frequency, the power gain is −3 dB or a half. Hence the output power is half the input power, or 0.05 W. The difference between the input power and the output power is also 0.05 W, which is dissipated as heat.
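The decibel arithmetic in this answer can be confirmed with a short sketch. The function name is my own; the conversion 10^(dB/10) is the standard definition of a decibel power ratio.

```python
def power_ratio_from_db(db):
    """Convert a decibel figure to a plain power ratio
    (standard definition: ratio = 10 ** (dB / 10))."""
    return 10 ** (db / 10)

# Passband (0 dB): the output power equals the input power.
print(0.2 * power_ratio_from_db(0))   # 0.2 W out for 0.2 W in
# Cutoff (-3 dB): the power ratio is 0.501..., i.e. almost exactly half.
print(0.1 * power_ratio_from_db(-3))  # ~0.05 W out for 0.1 W in
```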
Notice that as you move right along the frequency axis, the gain never reaches zero (that is, zero as a voltage ratio rather than as a decibel number). Zero gain would correspond to a negatively infinite number of decibels.
All real firstorder filters (as opposed to ideal ones) lack a sharp cutoff frequency; in addition, lowpass filters never fully cut off, although if the gain is low enough you can regard it as having fallen to 0. Higherorder filters can give a sharper cutoff than a firstorder filter, but the brickwall cutoff of an ideal filter can never be achieved in practice.
As you move left along the frequency axis, each division is onetenth of the one before, and this can continue indefinitely. A logarithmic frequency scale therefore never reaches a frequency of 0.
The steepness of the fall in gain in the stop band is referred to as the filter's roll-off. All first-order filters have a 20 dB/decade roll-off. The same roll-off can also be specified as 6 dB/octave. An octave is a term borrowed from music and represents a doubling of frequency. (It is so called because the frequency span in a doubling of frequency is divided into the eight notes of a musical scale.) Higher-order filters have a steeper roll-off: for second-order filters it is 40 dB/decade (or 12 dB/octave), and for third-order filters it is 60 dB/decade (or 18 dB/octave). Each successive order adds a further 20 dB/decade (or 6 dB/octave) to the roll-off.
Note that at the first decade above the cutoff frequency, the gain of a firstorder filter is 20 dB below the passband gain, not 20 dB below the gain at the cutoff frequency.
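The quoted roll-off figures can be checked numerically. This sketch again assumes the standard first-order low-pass gain function, 1/√(1 + (f/f_c)²), which matches the curve described in the text but is not quoted from it.

```python
import math

def gain_db(f_norm):
    """First-order low-pass gain in dB relative to a 0 dB passband,
    at normalised frequency f_norm = f / f_cutoff (assumed formula)."""
    return 20 * math.log10(1.0 / math.sqrt(1.0 + f_norm ** 2))

# Deep in the stop band the gain falls 20 dB per decade (a factor of 10
# in frequency) and about 6 dB per octave (a factor of 2 in frequency).
print(gain_db(10))                # ~-20 dB: one decade above the cutoff
print(gain_db(100))               # ~-40 dB: two decades above the cutoff
print(gain_db(20) - gain_db(10))  # ~-6 dB: one octave, within the stop band
```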
Now have a go at Activity 5.
Activity 5
Allow about 10 minutes
A firstorder lowpass filter has a cutoff frequency of 6.28 × 10^{3} radians per second.
What is the cutoff frequency in hertz? (Round your answer to 2 significant figures.)
What is the gain at 1 kHz? Express your answer in decibels.
What is the gain at 100 kHz? Express your answer in decibels.
Angular frequency $\omega =2\pi f$ , where $f$ is the frequency in hertz. So $\begin{array}{l}f={\displaystyle \frac{\omega}{2\pi}}\\ \phantom{f}={\displaystyle \frac{6.28\times {10}^{3}}{2\pi}}\text{Hz}\\ \phantom{f}=999.493\dots \text{Hz}\end{array}$ To 2 significant figures, $f$ = 10^{3} Hz. So the cutoff frequency is 10^{3} Hz or 1 kHz.
1 kHz is the cutoff frequency, so the gain here is −3 dB.
100 kHz is two decades above the cutoff frequency, so the gain is −40 dB.
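The arithmetic in this answer can be reproduced in a few lines. The gain function used for parts (b) and (c) is the standard first-order low-pass expression, assumed here rather than quoted from the text.

```python
import math

# Part (a): convert the angular cutoff frequency to hertz.
omega_c = 6.28e3              # cutoff frequency in rad/s
f_c = omega_c / (2 * math.pi)
print(f_c)                    # ~999.5 Hz, i.e. 1.0 kHz to 2 s.f.

# Parts (b) and (c): gain relative to the passband, using the assumed
# first-order low-pass gain function 1 / sqrt(1 + (f / f_cutoff)**2).
def gain_db(f, f_cutoff):
    return 20 * math.log10(1 / math.sqrt(1 + (f / f_cutoff) ** 2))

print(gain_db(1e3, 1e3))      # ~-3 dB at the cutoff frequency
print(gain_db(100e3, 1e3))    # ~-40 dB two decades above it
```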
As you saw earlier, filters affect the gain and phase of a signal. In the next section you will see how you can show this in the Bode plot.
2.6 The full Bode plot: gain and phase
In Section 2.5, you heard how a full Bode plot would show not only how the gain changes with frequency, but also how the phase difference between output and input changes with frequency. Conventionally the phase plot is put under the gain plot, with their respective frequency axes aligned, as in Figure 10. This example is for a firstorder lowpass filter.
Figure 10 Normalised firstorder lowpass frequency response showing gain and phase
This figure consists of two graphs, one above the other. They have the same horizontal axis, showing normalised frequency on a logarithmic scale from 0.01 to 100. The upper graph is gain G against normalised frequency, where the vertical axis for gain is on a logarithmic scale from 0.01 to 1. The graph line is the same shape as the one shown in Figure 9. It starts at a gain of 1 for a normalised frequency of 0.01 and remains at this value up to a normalised frequency of just over 0.1. The graph line then begins to curve gently downwards, until it becomes a diagonally descending straight line with a gain of 0.2 at a normalised frequency of 5 and a gain of 0.02 at a normalised frequency of 50. The lower graph is phase difference against normalised frequency, where the vertical axis for phase difference is on a linear scale from minus 90 degrees to zero. The graph line starts at a phase difference of zero for a normalised frequency of 0.01, then curves down at first gently and then more steeply until it becomes a diagonally descending straight line that reaches a phase difference of minus 45 degrees at a normalised frequency of 1. After this, the graph line continues to curve down at first steeply and then more gently, until it flattens out again at a phase difference of minus 90 degrees for a normalised frequency of 100. The overall shape of the graph line is a flattened, backwards S with rotational symmetry around the central point (the steepest part of the curve) at normalised frequency 1, phase difference minus 45 degrees.
Figure 10 shows that in the passband, the output is virtually in phase with the input. As frequency increases towards the cutoff frequency (1 on the normalised frequency axis), a phase difference opens between the output and the input, with the output lagging the input (the negative values of phase angle indicate lagging phase). At the cutoff frequency, the phase difference is −45°. This means that for a sinusoidal input, the output lags the input by 45°. By the time you are well into the stop band, the phase difference levels off at −90°.
That concludes the discussion of firstorder filters. To end Section 2 you will consider some commonly found higherorder filters and have a look at their characteristics.
2.7 Chebyshev and Butterworth filters
The design of higherorder filters is a specialist area, and mathematically complex, so in this section you will look at gain functions of just two celebrated types. The first is the Butterworth filter. The gain function of a Butterworth filter has the familiar flat passband and rolloff you would expect. However, this filter comes in various orders, such as 2, 4, 6, 8 and 10, depending on how steep the rolloff needs to be.
Figure 11 shows the normalised responses of firstorder, secondorder and eighthorder lowpass Butterworth filters. As the order of the filter increases, it approximates more closely the ideal brickwall response.
Figure 11 Normalised Butterworth magnitude response curves
This figure is a graph of gain G against normalised frequency. The ideal brickwall response is shown as a horizontal line at gain 1 to the left of the cutoff frequency, then a vertical line at the cutoff frequency, with the gain above the cutoff frequency being zero. Three graph lines are shown in comparison to this ideal, representing firstorder, secondorder and eighthorder filters. Each line starts at a gain of 1 for low frequencies, and curves down first gradually and then more steeply to reach a gain of 1 over root 2 at the cutoff frequency. After this, each line continues to descend steeply and then more gradually, until at high frequencies it approaches zero. The higher the order of the filter, the steeper the descent and the closer the graph line comes to the ideal.
Steeper rolloffs than those of a Butterworth filter can be had if other desirable features of a filter are compromised. The Chebyshev filter has a steeper rolloff, but at the price of a passband response that is not flat. Figure 12 compares seventhorder normalised Chebyshev and Butterworth filters. Notice the ripple in the Chebyshev filter’s response in the passband. (There is another type of Chebyshev filter that has a flat response in the passband and ripples in the stop band.)
Figure 12 Comparison of seventhorder Butterworth and Chebyshev gain
This figure is a graph of gain G against normalised frequency. Two graph lines are shown, representing the Butterworth and Chebyshev filters. The Butterworth line follows the shape seen in the previous figure: it starts at gain 1 for low frequencies, and curves down first gradually and then more steeply to reach a gain of 1 over root 2 at the cutoff frequency. After this, the line continues to descend steeply and then more gradually, until at high frequencies it approaches zero. Below the cutoff frequency, the Chebyshev line oscillates between a gain value of 1 and a gain value of halfway between 1 and 1 over root 2. At the cutoff frequency, it drops very steeply and approaches a gain of zero at lower frequencies than the Butterworth line.
That concludes the introduction to filters and, in particular, analogue filters. In Section 3 you will learn about digital filtering, which is currently a more popular approach to filtering.
3 Digital signal processing
Digital signal processing has developed rapidly over the last 50 years: digital signalprocessing circuits have become faster and cheaper, and memory storage capabilities have increased dramatically. One result of these developments is a migration from analogue to digital circuits for some types of signal processing, and digital filters are an example of this trend.
Digital filters have some advantages over analogue filters. They are programmable, so their operation is determined by a program stored in a processor’s memory. This means a digital filter can easily be changed without affecting the hardware. Also, digital filters are extremely stable with respect to both time and temperature. For complex filters, the hardware requirements are relatively simple and compact in comparison to the equivalent analogue circuitry.
The design of a digital filter is complicated and involves quite advanced mathematics, so software tools that produce a filter design from your specification of filter characteristics are commonly used. However, as you may know from the use of software tools such as circuit simulators, you need to be very wary of using a software design tool to create circuits without having a good understanding of the electrical characteristics of the circuit that you want to create and the parameters used in the design process. A good understanding of how digital filters work will help at every stage of filter design.
For the remainder of the course you will find out about various aspects of filtering a signal digitally. First you will see how a continuoustime signal is converted to produce the digital discretetime signal used as input to the filter. Then you will find out how mathematical operations applied to the discretetime signal can remove or diminish unwanted aspects of the signal.
To get started, Figure 13 shows the basic setup of a digital filter. Don’t worry too much about the detail at this point – it will be covered later. As part of the filtering process, the continuous input signal must be sampled and digitised using an analoguetodigital converter (ADC) to produce a sampled discrete signal. The resulting binary numbers, representing successive sampled values of the input signal, are transferred to the processor, which carries out numerical calculations on them. These calculations typically involve multiplying the input values by constants and adding the products together. If necessary, the results of these calculations, which now represent sampled values of the filtered signal, are output through a digitaltoanalogue converter (DAC) to convert the signal back to continuous form. Given that the continuous input signal and the filtered continuous output signal are continuous in time, they are often referred to as continuoustime signals. Similarly, the sampled discrete signal and the digitally filtered signal are often referred to as discretetime signals. (Here the word ‘discrete’ means ‘consisting of separate parts’ as opposed to the word ‘discreet’ which means ‘unobtrusive’).
Figure 13 Digital filtering of a sampled signal
This figure is a block diagram showing the four stages that a signal goes through during digital filtering. The signal starts as a continuous input signal. This has the overall shape of a sine wave but is more jagged, since it is subject to noise that causes it to constantly deviate by small amounts from the smooth curve of the sine wave. The midpoint of the sine wave crosses the horizontal axis. The input signal enters an analoguetodigital converter. The output is a sampled discrete signal, which consists of a series of evenly spaced vertical lines along the horizontal axis – some extending above the axis and some below. These represent the samples. Each line has a small filled circle at the end furthest from the axis. The height of each line is equal to the distance, at that point in time, from the horizontal axis to the input signal curve. Therefore if you were to join the filled circles, the resulting shape would be approximately the same as that of the input signal. The sampled signal enters a processor. The output is a digitally filtered signal, which again consists of a series of evenly spaced vertical lines at the same points along the horizontal axis as for the previous signal. Once again, each line has a small filled circle at the end furthest from the axis. However, the heights of the vertical lines have now been adjusted so that if you were to join the filled circles, the resulting shape would be a sine wave. The digitally filtered signal enters a digitaltoanalogue converter. The output is a filtered continuous signal that has the shape of the same sine wave as the original input signal, but unaffected by noise.
In the next section you will see an example of a digital filter being used as part of a medical system to illustrate the component parts.
3.1 Digital filters
You read about analogue filters in Section 2, so you know that they are electronic circuits made up of components such as resistors, capacitors and inductors connected together to produce the required filtering effect. In comparison, a digital filter uses a digital processor to perform numerical calculations on sampled values of the signal to be filtered. The processor could be a generalpurpose computer such as a PC, but it is much more likely to be a specialised digital signal processor (DSP) chip, which is designed to carry out the intensive mathematical operations used in digital signal processing quickly and with low power consumption. The low power consumption is important because it means that purposedesigned DSP chips can be used in mobile devices such as phones and tablets.
A potential use for a digital filter with low power consumption is described by Asgar and Mehrnia (2017), who propose using a digital filter in an electrocardiogram (ECG) heart-monitoring system. Figure 14 is a block diagram of the system, taken from their paper. It shows a sensor connected to the surface of the user's skin to monitor their heart function. The signal is then conditioned; typical actions here might be to amplify the signal and remove aliases. An analogue-to-digital converter (ADC) then changes the signal from analogue to digital form prior to filtering.
Figure 14 Block diagram of the ECG heartmonitoring system
This figure is a block diagram showing how signals from the heart can be detected, processed and displayed. A set of signals from the heart are picked up by an electrode sensor within the wearable ECG device. Still within the device, these signals are sent through an analogue interface with signal conditioning, then an analoguetodigital converter, then a highpass filter, and finally a notch filter. The signal then leaves the wearable ECG device and enters a wireless transmitter, from which it is sent to a wireless receiver attached to a computer. The computer carries out signal processing, after which the processed signal may be displayed or stored.
In the paper, the authors describe six different sources that can cause noise to contaminate the measured signal. Three of these sources are as follows:
electrode contact noise, which is due to electrode ‘popping’ or a loose contact with the skin
instrumentation noise, which is due to radiofrequency interference from other equipment (e.g. implanted devices such as pacemakers)
electromyographic (EMG) noise, which is induced by electrical activities of skeletal muscles during periods of contraction.
This illustrates the range of noise sources associated with particular applications, and the insight needed to understand the noise sources in any filtering application. The filtering solution proposed by the authors uses a low-complexity, linear-phase digital filter design.
The next section will describe in more detail the signals that are found in digital systems.
3.2 Characteristics of discrete-time and continuous-time signals
A continuous-time signal is shown in Figure 15(a). The signal is continuous because it has a value at any instant of time; that is, for any value of $t$ , it is possible to read a value of $x\left(t\right)$ from the graph. Most signals in the real world are continuous in time. For example, if you were monitoring the temperature of a room, you would be able to take a measured value of temperature at any time.
Figure 15 (a) Continuoustime signal; (b) discretetime signal
This figure consists of two graphs. Graph (a) shows x (as a function of t ) against t . The graph line fluctuates randomly up and down, above and below the horizontal axis. Graph (b) shows x [n ] against n . It consists of a series of vertical lines along the horizontal axis at integer values of n – some extending above the axis and some below. Each line has a small filled circle at the end furthest from the axis. The distance between two vertical lines is labelled T .
A discrete-time signal (sometimes referred to as a time-discrete signal or simply a discrete signal) is shown in Figure 15(b). In the rest of this course the standard convention of drawing the vertical lines in a discrete-time signal with a round dot on the end will be used; these lines-with-dots are often called ‘lollipops’. The signal in Figure 15(b) is discrete because it only has a value at fixed points, placed at discrete time intervals of $T$ seconds along the horizontal axis. $T$ is called the sampling interval. Values of $x\left[n\right]$ can be found for integer values of $n$ , such as $n=1$ , $n=2$ , etc., but there is no value for the signal at, say, $n=1.5$ . Thus $n$ represents the number of the sample.
It is hard to think of examples of realworld discretetime signals, since most realworld signals are continuous; however, if you took the temperature reading of a room every day at the same time, the result would be a discretetime signal. Most discretetime signals come from sampling continuoustime signals to get them into a digitised form that can be processed by digital computers.
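The idea of sampling a continuous-time signal at regular intervals can be sketched as follows; here x[n] is simply the value of the continuous signal at time nT. The 5 Hz sine wave and the 0.01 s sampling interval are illustrative values of my choosing, not taken from the course.

```python
import math

# A continuous-time signal x_c(t) has a value at every instant t;
# sampling it every T seconds gives the discrete-time signal
# x[n] = x_c(n * T). The signal below is an illustrative 5 Hz sine wave.
def x_c(t):
    return math.sin(2 * math.pi * 5 * t)

T = 0.01                            # sampling interval in seconds
x = [x_c(n * T) for n in range(8)]  # samples x[0] .. x[7]
print(len(x))                       # 8 samples
print(x[0])                         # 0.0 (the sine starts at zero)
```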
Activity 6
Allow about 5 minutes
State whether the following are discretetime signals or continuoustime signals, giving a reason for each answer:
the wind speed across the blades of a wind turbine
the position of a robotic arm as it picks items from a conveyor belt
the total distance travelled by the robotic arm each hour over a 24hour period.
The wind speed is a continuoustime signal, because you can take a reading at any time.
The robotic arm always has a position – even if it is in a resting position, you know where it is – so this is a continuoustime signal.
The total distance travelled by the robotic arm is recorded just once in each hour, so this is a discretetime signal. Over a 24hour period there will be 24 discrete values recorded.
In the next section you will learn how a continuous signal is converted to a discrete signal.
3.3 Sampling a continuous-time signal
To convert a continuous-time signal into its discrete-time equivalent, you need to sample the waveform. To ensure the discrete-time signal contains the full range of frequencies in the continuous-time signal, the continuous-time signal normally needs to be sampled at a rate greater than twice the highest frequency component contained in the signal. This lowest sampling rate from which the continuous-time signal can be fully reconstructed is called the Nyquist rate, after Harry Nyquist (1889–1976), a Swedish-born American engineer who made important contributions to communications theory.
What happens if the sample rate is equal to or less than twice the highest frequency in the signal – or, to express this another way, if the continuoustime signal is sampled twice or less than twice within each cycle of the highest frequency component contained in the signal? Figure 16(a) shows a sine wave (solid line) that is being sampled less than twice each cycle. The samples are represented by blobs. However, another sine wave with a lower frequency can be drawn through these samples. It is shown as a dashed sine wave. This lowerfrequency wave is called an alias of the original sine wave. The way to avoid aliases is to sample more frequently, as in Figure 16(b). Here there are twice as many samples as in Figure 16(a). Now the alias from (a), also shown dashed in (b), misses some of the samples in (b). There is no waveform of a lower frequency than the sampled waveform that can be fitted to all the samples.
Figure 16 (a) More than one sine wave can be drawn through these samples, so there is a lowfrequency alias of the original wave; (b) with more samples, there is no alias that can be fitted to all the samples
This figure consists of two graphs. Each one shows a sine wave being sampled. The sine wave is drawn above a horizontal axis. A series of evenly spaced vertical lines, representing the samples, extend from this axis to the sine wave. Each line has a small filled circle at the point where the line meets the wave. In graph (a), seven samples are taken for five cycles of the sine wave. A single cycle of a lowerfrequency sine wave is drawn through all seven filled circles, showing that there is an alias. In graph (b), thirteen samples are taken for five cycles of the sine wave. The lowerfrequency sine wave from graph (a) is shown again, but it does not pass through all the filled circles.
The way to eliminate the possibility of an alias is either to take more than two samples per cycle of the highest frequency present or, equivalently, to ‘band-limit’ the signal to make sure that it contains no frequencies equal to or higher than half the sampling frequency. This is done by including an analogue filter before the sampler in the analogue-to-digital converter to remove all frequencies above a certain frequency. Because such a filter is designed to stop any chance of aliasing occurring, it is sometimes called an anti-aliasing filter.
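The aliasing effect described above can be demonstrated numerically. The 1000 Hz sampling rate and 800 Hz tone below are illustrative values, not taken from the text; because 800 Hz is above half the sampling rate, its samples are indistinguishable from those of a 200 Hz alias.

```python
import math

fs = 1000.0            # sampling frequency in Hz (illustrative value)
f_high = 800.0         # tone above half the sampling rate, so it aliases
f_alias = fs - f_high  # the alias appears at 200 Hz

# Sampling both cosines at the same instants n / fs gives identical
# sample values: after sampling, the 800 Hz tone cannot be told apart
# from a 200 Hz tone.
n_samples = 20
high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(n_samples)]
alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(n_samples)]
print(max(abs(a - b) for a, b in zip(high, alias)))  # ~0: the same samples
```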
After sampling the signal, the samples need to be converted to a digital signal. The next section describes how this is done.
3.4 Quantisation of a signal
When a continuous-time signal is sampled, the amplitude of each sampled point undergoes quantisation, which means that it is forced to take only certain discrete values. The amplitude of each sample is represented by a digital binary code whose word length is a fixed number of bits. Representing the amplitude of the samples in this way means each value is rounded, or quantised, to its closest binary equivalent.
A 1bit binary number can represent two levels because it can only take a value of 0 or a value of 1. A 2bit binary number represents four levels, where each level takes one of the values 00, 01, 10 or 11. Figure 17 shows a discretetime signal whose values are limited to a 3bit binary number, which represents eight possible combinations of 0s and 1s – each possible combination is shown on the $y$ axis of the figure. Note that each sampled, digitised signal is a binary representation of the analogue value, so in conversion to these binary representations some rounding of the signal values has occurred: some sampled values are just above and some just below the continuoustime signal.
Figure 17 Sampled and quantised signal
This figure is a graph of x [n ] against n . The vertical axis shows eight evenly spaced levels consisting of threebit binary numbers ranging from 000 to 111, and horizontal lines are drawn across the graph to indicate these levels. A graph line representing the signal is shown fluctuating up and down. A series of evenly spaced vertical lines, representing the samples, extend from the horizontal axis to the horizontal level line to which the signal is nearest at that time. Each vertical line has a small filled circle at the point where the vertical line meets the horizontal level line.
Any binary word will always give an even number of quantisation levels. In general, a binary word with $n$ bits gives ${2}^{n}$ quantisation levels – hence a 3bit word gives 8 levels, a 4bit word gives 16 levels, a 5bit word gives 32 levels, etc.
Figure 18 shows a sine wave along with a set of quantisation levels. Here the 0 or midpoint of the sine wave occurs at a midpoint between the 011 and 100 levels. The gaps between the quantisation levels are called the quantisation intervals. Sometimes a quantisation level is assigned to the 0 or midpoint, which means that on one side of the 0 there is one more level than on the other side.
Figure 18 A sine wave and a set of quantisation levels
This figure is a graph of signal value against time. The horizontal axis extends from 0 to 1 units of time in increments of 0.1. The vertical axis is divided evenly into the same eight levels as in the previous figure, each represented by a threebit binary number from 000 to 111. The distance between two levels is the quantisation interval. One cycle of a sine wave is shown on the graph: it starts midway between the 011 and 100 levels at time 0, rises to reach the maximum 111 level at time 0.25, falls to cross the midway point again at time 0.5, falls further to reach the minimum 000 level at time 0.75, then rises back to the midway point at time 1. Overlaid on the sine wave is another line consisting of a series of steps, showing what the sine wave would look like if it were quantised to the eight levels. This stepped line remains at each level until the sine wave crosses the halfway point between that level and the one above or below it, at which point the stepped line rises or falls vertically to the new level.
The larger the quantisation intervals, the more error will be introduced into the sampled signal. The quantisation error for a sample is the difference between the value of the input signal and the resultant quantised signal, with the maximum quantisation error being half the quantisation interval. Quantisation errors can be reduced by increasing the number of levels; however, as the number of levels increases, so does the number of bits needed to represent each sample.
Activity 7
Allow about 15 minutes
An electrocardiogram (ECG) signal contains useful information in frequencies up to 400 Hz.
At what rate must the signal be sampled to ensure that no information is lost in sampling?
If a 3bit quantiser is used to represent the signal, how many bits of data are generated per second if the sampling rate is set to 1000 Hz?
If the signal range from the ECG extends from +7 V to −7 V, where 7 V equates to the highest quantisation level and −7 V to the lowest quantisation level, what is the quantisation interval and what is the maximum quantisation error in the system?
If the quantiser is changed to a 4bit system, what happens to the maximum quantisation error?
The signal must be sampled at greater than twice the maximum frequency of the signal, so the minimum sampling rate is 800 Hz.
With a 3bit quantiser each sample point uses three bits of data, so the number of bits of data generated is 3 × 1000 = 3000 bits per second.
With a 3bit quantiser there are a total of eight levels and seven quantisation intervals in the system. If the voltage range covers 14 V, then each quantisation interval is 2 V. The maximum quantisation error is half this quantisation interval, or 1 V.
A 4bit system has 16 quantisation levels and 15 quantisation intervals, so each quantisation interval is
$\frac{14\ \text{V}}{15}=0.933\dots \ \text{V}$
This reduces the maximum quantisation error to
$\frac{0.933\dots \ \text{V}}{2}=0.47\ \text{V}$ (to 2 s.f.)
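The quantisation arithmetic in this activity can be checked with a short sketch; the variable names are my own.

```python
# Activity 7 arithmetic. A binary word of b bits gives 2**b quantisation
# levels and (2**b - 1) quantisation intervals across the signal range.
bits = 3
levels = 2 ** bits            # 8 levels
intervals = levels - 1        # 7 intervals
v_range = 7.0 - (-7.0)        # 14 V from the lowest to the highest level
q_interval = v_range / intervals
max_error = q_interval / 2    # maximum quantisation error
print(q_interval, max_error)  # 2.0 V interval, 1.0 V maximum error

# Data rate with 3 bits per sample at 1000 samples per second:
print(bits * 1000)            # 3000 bits per second

# A 4-bit quantiser narrows the interval and therefore the error:
q4 = v_range / (2 ** 4 - 1)
print(q4, q4 / 2)             # ~0.93 V interval, ~0.47 V maximum error
```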
In the next section you will return to digital filters and see how they work in the time domain.
3.5 Digital filtering in the time domain
A simple form of digital filter is the three-term averaging filter, in which the output value is equal to the average of three successive signal sample values. In Figure 19(a), which shows a discrete-time signal applied to a digital three-term averaging filter, the output $y\left[n\right]$ is given by
$y\left[n\right]=\frac{1}{3}x\left[n\right]+\frac{1}{3}x\left[n-1\right]+\frac{1}{3}x\left[n-2\right]$
The signals $x\left[n\right]$ and $x\left[n-1\right]$ are spaced $T$ seconds apart in time, where $T$ is the sampling interval. Similarly, $x\left[n-1\right]$ and $x\left[n-2\right]$ are spaced $T$ seconds apart.
Figure 19(b) shows the values of the input signal $x\left[n\right]$ that will be applied to the filter input.
Figure 19 (a) Threeterm averaging filter; (b) input signal to the threeterm averaging filter
This figure consists of two parts. Part (a) is a simple block diagram showing a threeterm averaging filter with input x [n ] and output y [n ]. Part (b) is a graph of x [n ] against n . Input samples x [n ], represented as usual as vertical lines extending from the horizontal axis with filled circles at the end, are shown for integer values of n from minus 3 to 5. The values are as follows: when n is minus 3, x [n ] is 0 when n is minus 2, x [n ] is 0 when n is minus 1, x [n ] is minus 1 when n is 0, x [n ] is 2 when n is 1, x [n ] is 4 when n is 2, x [n ] is 6 when n is 3, x [n ] is 4 when n is 4, x [n ] is 0 when n is 5, x [n ] is 0.
The values of $n$ and $x\left[n\right]$ are listed in Table 1. Assume that $x\left[n\right]=0$ for any $n<-3$ and $n>5$ .
Table 1 Values of $n$ and $x\left[n\right]$ from Figure 19(b)
$n$
−3
−2
−1
0
1
2
3
4
5
$x\left[n\right]$
0
0
−1
2
4
6
4
0
0
For the output $y\left[n\right]$ to be calculated, the values of $x\left[n\right]$ , $x\left[n-1\right]$ and $x\left[n-2\right]$ must be stored in memory and accessible to the processor performing the calculation. The order of a digital filter is the number of previous inputs (stored in the processor’s memory) used to calculate the current output, so this three-term averaging filter is second-order.
For values of $n<-1$ the output will be 0, so the calculations below start at $n=-1$ .
For $n=-1$ you can substitute in values to give $y\left[-1\right]=\frac{1}{3}\times \left(-1\right)+\frac{1}{3}\times 0+\frac{1}{3}\times 0=-\frac{1}{3}$ .
For $n=0$ you get $y\left[0\right]=\frac{1}{3}\times 2+\frac{1}{3}\times \left(-1\right)+\frac{1}{3}\times 0=\frac{1}{3}$ .
For $n=1$ you get $y\left[1\right]=\frac{1}{3}\times 4+\frac{1}{3}\times 2+\frac{1}{3}\times \left(-1\right)=\frac{5}{3}$ .
Table 2 gives the results of all the calculations for $y\left[n\right]$ , as decimal values to 2 decimal places. Note that you can stop at $n=5$ , since above this the output will be 0 again.
Table 2 Results of calculations for $y\left[n\right]$
$n$
−3
−2
−1
0
1
2
3
4
5
$x\left[n\right]$
0
0
−1
2
4
6
4
0
0
$y\left[n\right]$
0
0
−0.33
0.33
1.67
4.00
4.67
3.33
1.33
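The calculations in Table 2 can be sketched in a few lines of Python. This is an illustration only, not part of the course materials, and the function name is invented:

```python
def three_term_average(x):
    """Three-term averaging filter y[n] = (x[n] + x[n-1] + x[n-2]) / 3,
    assuming x[n] = 0 before the start of the sequence."""
    y = []
    for n in range(len(x)):
        prev1 = x[n - 1] if n >= 1 else 0
        prev2 = x[n - 2] if n >= 2 else 0
        y.append((x[n] + prev1 + prev2) / 3)
    return y

# Input samples for n = -3 to 5, as in Table 1
x = [0, 0, -1, 2, 4, 6, 4, 0, 0]
print([round(v, 2) for v in three_term_average(x)])
# → [0.0, 0.0, -0.33, 0.33, 1.67, 4.0, 4.67, 3.33, 1.33]
```

The printed values match the row for $y\left[n\right]$ in Table 2.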
The resulting output discrete-time waveform is given in Figure 20.
Figure 20 Filter output in response to the input in Figure 19(b)
This figure is a graph of y[n] against n. Output samples y[n], represented as usual as vertical lines extending from the horizontal axis with filled circles at the end, are shown for integer values of n from minus 3 to 5. The values are as follows: when n is minus 3, y[n] is 0; when n is minus 2, y[n] is 0; when n is minus 1, y[n] is minus 0.33; when n is 0, y[n] is 0.33; when n is 1, y[n] is 1.67; when n is 2, y[n] is 4; when n is 3, y[n] is 4.67; when n is 4, y[n] is 3.33; when n is 5, y[n] is 1.33. A note states that the output values are zero beyond n equals 5.
Figure 21 shows the same filter applied to an input $x\left[n\right]$ that has more noise in the signal and a longer sequence.
Figure 21 Longer sequence filtered by the three-term averaging filter (Wickert, 2011, p. 7)
This figure consists of two graphs. In each case, the signal values are represented as vertical lines extending from the horizontal axis with filled circles at the end, and can take any value from minus 0.5 to 0.5. They are shown for integer values of n from minus 5 to 50. The first graph is x [n ] against n . The input values x [n ] vary from one to the next by quite large amounts. The second graph is y [n ] against n . The output values y [n ] vary less from one to the next; they appear to follow the same overall shape as the input values, but more smoothly and with less extreme values (the largest value is approximately 0.3 in magnitude, rather than 0.5). For example, on the input graph, the first five values are 0, the sixth is nearly 0.5, the seventh is close to 0 again, the eighth is about 0.4, the ninth is about minus 0.3, and the tenth is 0.5. On the output graph, the first five values are 0, the sixth and seventh are about 0.2, the eighth is about 0.3, the ninth is just above 0, and the tenth is just above 0.2.
Activity 8
Allow about 5 minutes
Looking at Figure 21, describe the effect that the three-term averaging filter has on the output. What kind of filter is this?
The three-term averaging filter has removed the short-term fluctuations in the signal to show the longer-term trend. This is akin to removing higher-frequency noise from a signal, so it is operating like a lowpass filter.
This digital filter is an example of a system that is both linear and time-invariant, sometimes referred to as an LTI system.
You can see that the three-term averaging filter is a lowpass filter, but it is difficult to characterise its response: for example, which frequencies is the filter removing from the signal? Filters are usually described in terms that make sense in the frequency domain – a lowpass filter, for instance, allows the low-frequency parts of the signal to pass. Earlier in this course you saw how analogue filters can be designed to give a desired frequency response; in the following section you will see how digital filters can also be designed in the frequency domain.
3.6 Designing a digital filter in the frequency domain
Most digital filters are designed in the frequency domain: input signals are characterised by their frequency spectrum, and filters are designed to modify that spectrum by, for example, removing high-frequency noise with a lowpass filter.
Figure 22 shows the four basic filter structures in the frequency domain. These ideal filters are identical for both analogue and digital filters (you have already seen them earlier in the course in Section 2.2).
Figure 22 Ideal filter responses: (a) lowpass; (b) highpass; (c) bandpass; (d) bandstop (repeat of Figure 5)
This figure consists of four graphs of gain against frequency, representing four types of filter. In each case, the gain is either at 1 or at 0, depending on the frequency. Areas where the gain is 1 are labelled ‘passband’, while areas where the gain is 0 are labelled ‘stop band’. The transitions between 1 and 0 (or vice versa) are vertical, and labelled ‘cutoff frequency’. Part (a) is a lowpass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 1 (passband). At frequencies above the cutoff, the gain is 0 (stop band). Part (b) is a highpass filter. There is a single cutoff frequency, f subscript c. At frequencies below the cutoff, the gain is 0 (stop band). At frequencies above the cutoff, the gain is 1 (passband). Part (c) is a bandpass filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 0 (stop band). At frequencies between f subscript c1 and f subscript c2, the gain is 1 (passband). Part (d) is a bandstop filter. There are two cutoff frequencies, f subscript c1 and f subscript c2. At frequencies below f subscript c1 and above f subscript c2, the gain is 1 (passband). At frequencies between f subscript c1 and f subscript c2, the gain is 0 (stop band). Because the stop band is a narrow region between two passbands, the stop band is also known as the notch in this type of filter.
In the design process, the aim is to produce a filter frequency response that best matches the ideal profile; however, compromises have to be made. You have already seen how the order of an analogue filter affects its roll-off, and similar design decisions are made to bring a filter as close as possible to the ideal response.
Figure 23 shows some of the characteristics of a typical lowpass digital filter. In comparison to the ideal shape (shown in red), there is a transition region between the passband and stopband sections, and also a ripple in both the passband and the stop band. These effects can be altered by changing various parameters in the design.
Figure 23 Typical lowpass digital filter response
This figure is a graph of gain against frequency. An ideal lowpass filter line is shown that is horizontal at a gain of 1 for low frequency values, then falls vertically at the cutoff frequency and remains horizontal at a gain of 0 for frequency values above this point. Another line is shown representing the actual response of the digital filter. At low frequencies, it oscillates by a small amount around a gain of 1; this is known as the passband ripple. At a frequency slightly below the cutoff, it descends diagonally until it reaches a frequency slightly above the cutoff; the distance between these two frequency values is known as the transition width. At frequencies above this, the response oscillates by a small amount around a gain of 0; this is known as the stopband ripple.
When designing a digital filter in the frequency domain, software tools are most often used to generate the mathematical expression for the filter. This expression is in the form of a difference equation – an equation involving combinations of samples at specific times. You have already seen a difference equation in this section: the equation $y\left[n\right]=\frac{1}{3}x\left[n\right]+\frac{1}{3}x[n-1]+\frac{1}{3}x[n-2]$ for the three-term averaging filter. As another example, the following difference equation contains five terms:
$y\left[n\right]=x\left[n\right]+2x[n-1]-0.5x[n-2]+2.5x[n-3]+x[n-4]$
In a digital filter design, the number of terms in the difference equation is often referred to as the number of taps , and is specified as a design parameter. A larger number of taps may give a filter design that more closely matches the desired specification; however, more taps will mean that it takes longer to compute the filter outputs.
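As an illustration, the general FIR computation can be sketched in Python. This is a hypothetical sketch (the function name is invented), using a five-tap filter with coefficients 1, 2, −0.5, 2.5 and 1, as in the difference equation above:

```python
def fir_filter(coeffs, x):
    """Apply an FIR filter: tap k contributes coeffs[k] * x[n-k].
    Inputs before the start of the sequence are taken to be zero."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * x[n - k]
        y.append(acc)
    return y

# With an impulse as input, the output is just the coefficients in
# turn, followed by zeros - hence 'finite impulse response'.
coeffs = [1, 2, -0.5, 2.5, 1]
print(fir_filter(coeffs, [1, 0, 0, 0, 0, 0]))
# → [1.0, 2.0, -0.5, 2.5, 1.0, 0.0]
```

Note that the impulse response dies away completely after five samples, one per tap.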
There are two main classes of LTI (linear time-invariant) digital filter: the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter. An IIR filter requires fewer computations to achieve the same performance as an FIR filter, so has a speed advantage. However, an IIR filter can have stability issues and also non-linear phase issues (where signals of different frequencies are delayed by different amounts, resulting in a distortion of the output signal). The difference between an FIR filter and an IIR filter is that the FIR filter uses only the filter inputs when generating its output, whereas an IIR filter uses both the filter inputs and past filter outputs – in other words, it uses feedback. The difference equation above is for an FIR filter. The difference equation below has four terms and the final term is a previous output value, so it is an IIR filter:
$y\left[n\right]=2x\left[n\right]+x[n-1]-3x[n-2]+3y[n-1]$
To calculate the output of an IIR filter, both previous input samples and previous output samples are stored in the processor’s memory. The order of the IIR filter is the larger of the number of input values stored and the number of output values stored. In the above example, the filter is secondorder, since two previous input values are stored but only one previous output value.
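A hypothetical Python sketch of this second-order IIR filter (with coefficients 2, 1, −3 and a feedback coefficient of 3, as in the equation above) shows how the output list itself is read back as the filter runs:

```python
def iir_example(x):
    """y[n] = 2x[n] + x[n-1] - 3x[n-2] + 3y[n-1], with zero
    initial conditions for the stored inputs and output."""
    y = []
    for n in range(len(x)):
        x1 = x[n - 1] if n >= 1 else 0
        x2 = x[n - 2] if n >= 2 else 0
        y1 = y[n - 1] if n >= 1 else 0   # feedback term
        y.append(2 * x[n] + x1 - 3 * x2 + 3 * y1)
    return y

# An impulse input produces a response that never dies away
# (here it actually grows, i.e. this example filter is unstable).
print(iir_example([1, 0, 0, 0, 0, 0]))
# → [2, 7, 18, 54, 162, 486]
```

Unlike the FIR case, the response continues indefinitely because of the feedback term, which is why such filters are called infinite impulse response filters.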
Before you look at a filter being designed, you need to know a little more about the relationship between the time domain and the frequency domain. You will cover this next.
3.7 Fourier transforms and the sinc pulse
You saw earlier (Figure 5) that the ideal frequency responses shown in Figure 22 are sometimes referred to as brickwall filters because of the sharp transitions between passbands and stop bands. In other words, they are rectangular functions. However, whilst it is possible to use a rectangular function in the frequency domain to specify the filter, you must perform the calculations to implement the filter in the time domain. To do this, you need to translate between the time and frequency domains; in particular, for a brickwall filter, you need to know what a rectangular function in the frequency domain looks like in the time domain. For a continuous-time system, you would use a Fourier transform to do this; for a discrete-time system, you use a corresponding discrete-time Fourier transform.
You can perform mathematical calculations on paper to work out the Fourier transform of a signal in either the time or the frequency domain. Under those circumstances, you would use the formula for either the discrete or the continuous transform, depending on the type of system you were dealing with. However, the majority of Fourier transforms will be carried out by a computer system – even if the system is dealing with continuous-time signals as input and output, the signal processing will be happening in the discrete-time domain of the computer. There is a particular algorithm called the fast Fourier transform (FFT) that is used to carry out Fourier transforms efficiently. Such was the need to perform these calculations at great speed that the FFT was included in a list of the top 10 algorithms of the twentieth century by the IEEE journal Computing in Science & Engineering in the year 2000 (Dongarra and Sullivan, 2000).
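As a small illustration of the FFT in use (assuming NumPy, which is not part of the course), the spectrum of a sampled sine wave shows a peak at the sine wave's frequency:

```python
import numpy as np

fs = 1000                               # sample rate in Hz (chosen for illustration)
t = np.arange(0, 1, 1 / fs)             # one second of sample times
signal = np.sin(2 * np.pi * 50 * t)     # a 50 Hz sine wave

spectrum = np.abs(np.fft.rfft(signal))  # magnitude spectrum via the FFT
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

print(freqs[np.argmax(spectrum)])       # frequency of the largest component
# → 50.0
```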
Using the discrete-time Fourier transform, you can see that the time-domain representation of a rectangular function in the frequency domain is the sinc pulse, as shown in Figure 24.
Figure 24 Fourier transform pair: a rectangular function in the frequency domain is represented as a sinc pulse in the time domain
This figure consists of two graphs with a double-headed arrow in between them. The graph on the left shows signal value against time. It is made up of a large number of vertical lines extending from the horizontal axis. If you were to join the ends of the lines, the shape would be a symmetrical curve consisting of a tall central peak with much smaller lobes on each side, alternating below and above the horizontal axis, and reducing in height the further away they are from the centre. This shape is known as the sinc pulse. The graph on the right shows spectrum against frequency. It starts as a horizontal line close to 0, then rises vertically to a new value, where it stays for a certain range of frequencies before descending vertically back to nearly 0. The overall shape is that of a rectangle.
Mathematically, a sinc pulse or sinc function is defined as sin(x)/x.
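A quick numerical check of this definition (plain Python; note the special case at x = 0, where the limiting value is 1 – and note that NumPy's np.sinc, by contrast, uses the normalised form sin(πx)/(πx)):

```python
import math

def sinc(x):
    """sin(x)/x, with the limiting value 1 at x = 0."""
    return 1.0 if x == 0 else math.sin(x) / x

print(sinc(0))                     # → 1.0
print(abs(sinc(math.pi)) < 1e-12)  # zero crossing at x = pi → True
```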
Figure 25(a) and Figure 25(b) show a sinc envelope producing an ideal lowpass frequency response. However, there is an issue because the sinc pulse continues to both positive and negative infinity along the time axis. Whilst mathematically you can readily take the Fourier transform of a sinc pulse, it can’t be computed because of the extension to infinity. The obvious solution is to truncate the sinc response as in Figure 25(c), so that the ripples no longer extend to infinity. Now that the pulse is finite, it can be shifted so that it only has positive sample numbers. The effects of this in the frequency domain are shown in Figure 25(d) – ripples in the passband and the stop band. In essence this shows why you can never have the perfect ideal or ‘brickwall’ filter.
Figure 25 (a) Ideal sinc function in the time domain; (b) frequency response of the ideal sinc function; (c) abruptly truncated sinc function in the time domain; (d) frequency response of the truncated sinc function
This figure consists of four graphs. Graphs (a) and (c) are time-domain graphs, showing signal value against sample number. Graphs (b) and (d) are frequency-domain graphs, showing spectrum against frequency. Graphs (a) and (b) are paired with a double-headed arrow, as are graphs (c) and (d). Graph (a) shows samples that can be joined to make the shape of the sinc pulse from the previous figure: a symmetrical curve consisting of a tall central peak with much smaller lobes on each side, alternating below and above the horizontal axis, and reducing in height the further away they are from the centre. The curve is centred on sample number 0. Graph (b), its frequency-domain equivalent, shows a graph line that takes a value of 1 at frequencies below f subscript c and a value of 0 at frequencies above f subscript c, with a vertical drop from 1 to 0 at f subscript c. Graph (c) has the same shape as in graph (a), but cut off abruptly after a couple of lobes on each side. It has then been shifted so that the left cutoff point is at sample number 0. All samples above the right cutoff point have a value of 0. Graph (d), its frequency-domain equivalent, has a similar shape to graph (b), but there are oscillations around 1 below f subscript c and oscillations around 0 above f subscript c. Also, the graph line no longer drops vertically at f subscript c, but falls diagonally from a frequency slightly below the cutoff to a frequency slightly above the cutoff.
A technique for dealing with the truncated sinc is to apply a window function that brings the endpoints of the truncated sinc to zero. Figure 26(a) shows a suitable shape of window, Figure 26(b) shows the effects of applying the window to the truncated sinc and Figure 26(c) shows the resultant frequency response.
Figure 26 Using windowing to compensate for a truncated sinc pulse: (a) suitable window shape; (b) window applied to the truncated sinc; (c) resultant frequency response
This figure consists of three graphs. Graphs (a) and (b) are time-domain graphs, showing signal value against sample number. Graph (c) is a frequency-domain graph, showing spectrum against frequency. Graphs (b) and (c) are paired with a double-headed arrow. Graph (a) shows samples that can be joined to make a smooth, symmetrical bell-shaped curve that starts at 0 for sample number 0, then rises smoothly to a peak at signal value 1, then falls smoothly back down to 0. To the right of the bell-shaped curve, the value is 0 for all sample numbers. Graph (b) shows samples that can be joined to make the shifted sinc pulse seen in Figure 25(c). Again, the left cutoff point is at sample number 0. However, the lobes have been reduced and smoothed out. The same thing has happened on the right-hand side, so that there is no longer an abrupt right cutoff point but a smooth transition from the reduced lobes into the zero values. The peak of the sinc pulse corresponds with the peak of the curve in graph (a). Graph (c), its frequency-domain equivalent, is at a value of 1 for frequencies below f subscript c, with no oscillations. The graph line then falls diagonally from a frequency slightly below the cutoff to a frequency slightly above the cutoff. It remains at a value of 0 for frequencies above f subscript c, again with no oscillations.
When a window function is applied, it is multiplied sample-by-sample with the truncated sinc function. (In the frequency domain, this multiplication corresponds to an operation called convolution, which is outside the scope of this course.) What remains is the part of the signal that the window lets through – you can think of this as a ‘view’ through the window function.
There are various shapes of window that can be applied. Two common shapes are the Blackman window and the Hamming window, although there are others (such as Kaiser, Bartlett and Hann). The Hamming window gives a better transition response, but the Blackman window has better stopband gain and lower passband ripple.
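The windowed-sinc construction described above can be sketched with NumPy. This is an illustration only (the filter designs later in this course were produced with Signal Wizard), and the sample rate of 44 100 Hz here is an assumption:

```python
import numpy as np

def windowed_sinc_lowpass(taps, cutoff, fs, window_fn):
    """Truncated, shifted sinc pulse multiplied by a window function,
    normalised for unit gain at 0 Hz."""
    n = np.arange(taps) - (taps - 1) / 2     # centre the sinc pulse
    fc = cutoff / fs                         # normalised cutoff frequency
    h = 2 * fc * np.sinc(2 * fc * n)         # truncated ideal sinc
    h *= window_fn(taps)                     # e.g. np.hamming or np.blackman
    return h / h.sum()

# 127 taps, 5000 Hz cutoff, Hamming window
h = windowed_sinc_lowpass(127, 5000, 44100, np.hamming)

# Check the frequency response: near 1 in the passband, near 0 in the stop band
H = np.abs(np.fft.rfft(h, 4096))
freqs = np.fft.rfftfreq(4096, 1 / 44100)
print(round(H[freqs < 2000].min(), 2))   # passband gain, close to 1
print(H[freqs > 9000].max() < 0.01)      # stop-band gain, close to 0 → True
```

Swapping np.hamming for np.blackman in the call lets you compare the two windows' passband ripple and transition widths numerically.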
You are now ready to design a digital filter and try it out.
3.8 A lowpass filter design
You will now look at a real example of lowpass filter design. For this, a software package called Signal Wizard has been used. You are not required to download Signal Wizard, but for information, it is a free package that can be installed on your computer. Details of Signal Wizard and where to find it can be found here: Installing Signal Wizard for use in offline mode .
Figure 27 shows a screenshot from Signal Wizard which shows the input signal waveform both in the time domain (the ‘Time Waveform’) and in the frequency domain (the ‘Frequency Spectrum’). The aim is to remove any frequencies in this waveform above 5000 Hz, so a lowpass filter with a 5000 Hz cutoff frequency is required.
Figure 27 Characteristics of the input signal
This figure is a screenshot from Signal Wizard. It shows two graphs, one labelled Time Waveform and one labelled Frequency Spectrum. The time waveform fluctuates randomly around a value of 0. Most of the time it stays within the range minus 0.25 to plus 0.25, only occasionally going outside this. The overall shape of the frequency spectrum is a diagonal line that descends steadily from minus 40 at a frequency of 0 to minus 100 at a frequency of just below 20 thousand, but again it fluctuates randomly by a small amount around this line.
The specification of an FIR lowpass filter with a gain of 1 and a cutoff frequency of 5000 Hz is entered into the filter design interface. The graphical interface windows in Figure 28 all show the ‘brickwall’ specification in red with the implementation in black. These designs all use a rectangular window, which gives an abruptly truncated sinc function. The first design uses 15 taps (Figure 28(a)), the second uses 63 taps (Figure 28(b)) and the third uses 127 taps (Figure 28(c)). All other parameters are unchanged. As the number of taps in the design increases from the top image to the bottom, the transition zone narrows and the designed filter more closely matches the filter specification. However, the amplitude of the ripples in the passband and the stop band remains unchanged, although their frequency increases with the number of taps.
Figure 28 Digital filter design with a rectangular window: (a) 15 taps; (b) 63 taps; (c) 127 taps
This figure consists of three parts, each showing a screenshot from Signal Wizard. In each case, the ideal filter response is shown as a horizontal line at 1 below the cutoff frequency and a horizontal line at 0 above the cutoff frequency, with the cutoff frequency represented as a vertical line at 5000. The actual response is shown as a second line. In screenshot (a), the actual response oscillates slowly around 1, then descends diagonally between frequencies of approximately 3700 and 6300, then oscillates slowly around 0. In screenshot (b), the actual response oscillates more rapidly around 1, then descends diagonally between frequencies of approximately 4700 and 5300, then oscillates more rapidly around 0. The oscillations die away more quickly than in screenshot (a). In screenshot (c), the actual response oscillates very rapidly around 1, then descends diagonally between frequencies of approximately 4900 and 5100, then oscillates very rapidly around 0. The oscillations die away more quickly than in screenshot (b).
The next set of designs, in Figure 29, keeps the number of taps at 127 and varies the window function used. The design implemented in Figure 29(a) uses a Blackman window, while the design implemented in Figure 29(b) uses a Hamming window. You can see that both have reduced the ripples in the passband.
Figure 29 Filter designs with different window functions: (a) 127 taps with Blackman window; (b) 127 taps with Hamming window
This figure consists of two parts, each showing a screenshot from Signal Wizard. In each case, the ideal filter response is shown as a horizontal line at 1 below the cutoff frequency and a horizontal line at 0 above the cutoff frequency, with the cutoff frequency represented as a vertical line at 5000. The actual response is shown as a second line. In screenshot (a), the actual response seems to follow the ideal response exactly in the passband and stop band – that is, it does not oscillate but remains horizontal. However, it does not drop vertically between one and the other at the cutoff frequency of 5000, but descends diagonally between frequencies of approximately 4200 and 5800. In screenshot (b), the actual response is very similar to that in screenshot (a). However, it appears to have a slight ripple in the passband, and to descend slightly more rapidly around the cutoff frequency.
Figure 30 zooms in on sections of the transition band and passband to see the effects in more detail. Comparing Figure 30(a) and Figure 30(c) shows that the transition zone of the Hamming window is narrower than that of the Blackman window. Comparing Figure 30(b) and Figure 30(d) shows that the Blackman window has reduced the ripples in the passband more than the Hamming window. These results confirm that the Hamming window gives a better transition response, while the Blackman window has lower passband ripple.
Figure 30 Details of the Blackman and Hamming windows: (a) 127 taps with Blackman window, transition zone; (b) 127 taps with Blackman window, passband; (c) 127 taps with Hamming window, transition zone; (d) 127 taps with Hamming window, passband
This figure consists of four parts, each showing a screenshot from Signal Wizard. In each case, the ideal filter response is shown as a horizontal line at 1 below the cutoff frequency and a horizontal line at 0 above the cutoff frequency, with the cutoff frequency represented as a vertical line at 5000. The actual response is shown as a second line. Screenshot (a) zooms in to the horizontal scale for the first filter in the previous figure, so that the transition zone around the cutoff frequency can be examined. The actual response deviates from the ideal response at a frequency of approximately 4200, descending at first slowly and then more steeply across the cutoff frequency, before easing off again to rejoin the ideal response at a frequency of approximately 5800. Screenshot (b) zooms in to the passband for the filter in screenshot (a). Almost no ripples can be seen. Screenshot (c) zooms in to the horizontal scale for the second filter in the previous figure, so that the transition zone can be compared to that of the first filter. This time the actual response deviates from the ideal response at a frequency of approximately 4400, descending at first slowly and then more steeply across the cutoff frequency, before easing off again to rejoin the ideal response at a frequency of approximately 5600. This shows that the transition zone is narrower. Screenshot (d) zooms in to the passband for the filter in screenshot (c). In this case, a slight ripple is present.
The input signal was filtered using the 127-tap Hamming window design, and the output response is shown in the ‘Time Waveform’ and ‘Frequency Spectrum’ charts in Figure 31. The frequency spectrum shows that the filter has indeed removed the unwanted frequencies above 5000 Hz.
Figure 31 Filter response output
This figure is a screenshot from Signal Wizard. It shows two graphs, one labelled Time Waveform and one labelled Frequency Spectrum. The time waveform fluctuates randomly around a value of 0. Most of the time it stays in the range minus 0.25 to plus 0.25, only occasionally going outside this. The overall shape of the frequency spectrum is a line that remains at minus 40 for frequencies between 0 and approximately 3000, then drops gradually to minus 60 over the range of frequencies between 3000 and 5000, then drops sharply down to minus 100 at a frequency just above 5000. Again, it fluctuates randomly by a small amount around this line.
When a digital filter such as the one above is being implemented, the mathematical calculations will be defined in a software program, which will then run on some digital hardware. The basic processes taking place in the digital hardware are multiplying and adding; these operations will occur thousands, probably millions of times in the filtering of a sampled signal. Digital signal processor (DSP) chips are designed specifically to carry out these calculations, and their internal structure (referred to as their architecture) has been optimised to do so – thus it is different from that of a general-purpose processor used in a computer. There are several major DSP chip manufacturers, including Texas Instruments and Analog Devices.
You’ve therefore seen that usually a digital filter is designed using a software package. To finish this course, you will have a chance to explore digital filtering using an interactive resource. The next section introduces this resource.
3.9 Digital filtering in practice
You are now going to use an interactive resource to add Gaussian noise to a noise-free (or ‘clean’) data signal. The noisy data signal is passed to a detector, which determines whether received samples are binary zeros or ones. The added noise causes the detector to make mistakes; hence there are bit errors. Using a finite impulse response (FIR) digital filter, you will ‘clean up’ the noisy data signal and thereby reduce the bit-error rate.
Right-click on the image or link below to open Interactive 1 in a new tab or window so you can continue to work through the course and activities with the interactive alongside. The interface consists of four equally sized zones: top left, top right, bottom left and bottom right. Note that depending on the browser you are using, the sliders and other components in the interface may have a different visual appearance from those shown in the screenshots in this document, but the functionality will be the same.
Interactive 1 FIR filter
The top left zone shows the signal, whether it is clean or noisy. When the interactive resource is launched, the noise level is at zero and the signal is clean. The data signal is shown as a single step from 0 to 1, but there are 10 000 samples for this data signal, half of which are from the 0 part of the signal and half of which are from the 1 part. Noise is added by sliding the Noise level (V) slider to the right and clicking on Apply .
The top right zone displays a histogram of the signal. As there is an equal number of zeros and ones in the signal, the probability of each is 0.5. The detected probabilities of zeros and ones in the received signal are shown by the vertical red lines situated at 0 and 1. Without added noise, perfect detection can be achieved.
The top right zone allows the decision level at the detector to be set using the Decision level (V) slider, and also the degree of lowpass filtering to be set using the FIR filter width (number of samples) slider. Finally, the top right zone also gives statistics about the number of true and false detections. Figure 32 explains how to interpret this data. Before noise is added, all samples are correctly identified, so False detections should be at 0 to start with.
Figure 32 Interpreting the statistics
This figure shows the four fields from the interactive interface that contain statistics about the number of true and false detections. They are arranged in a two by two grid, with the rows labelled Data 1 and Data 0, and the columns labelled Correct detections and False detections. The four possible combinations of these row and column values have the following meanings. Data 1, Correct detections: samples correctly identified as 1, as a percentage of all samples. Shown as 50 per cent in this example. Data 0, Correct detections: samples correctly identified as 0, as a percentage of all samples. Shown as 50 per cent in this example. Data 1, False detections: samples originally 0 but incorrectly identified as 1. Shown as 0 per cent in this example. Data 0, False detections: samples originally 1 but incorrectly identified as 0. Shown as 0 per cent in this example.
Note in Figure 32 that if the correct detection of data 1s were to drop below 50%, the deficit would appear as an increase in the false detection of data 0s. Similarly, if the correct detection of data 0s were to drop below 50%, the deficit would appear as an increase in the false detection of data 1s. Thus the diagonal pairs of statistics in Figure 32 should always add up to 50% – or approximately 50%, given the possibility of rounding errors in the calculation.
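This bookkeeping can be sketched as a small simulation (hypothetical Python, not the interactive itself; the noise level and decision level are arbitrary choices):

```python
import random

random.seed(1)                  # repeatable run
n = 10_000
noise_level = 0.3               # standard deviation of the Gaussian noise
decision_level = 0.5

data = [0] * (n // 2) + [1] * (n // 2)
received = [d + random.gauss(0, noise_level) for d in data]
detected = [1 if r > decision_level else 0 for r in received]

# A diagonal pair of statistics: correct detections of 1s plus
# false detections of 0s (1s wrongly read as 0s)
correct_1 = sum(d == 1 and det == 1 for d, det in zip(data, detected))
false_0 = sum(d == 1 and det == 0 for d, det in zip(data, detected))
print(f"correct 1s: {correct_1 / n:.1%}, false 0s: {false_0 / n:.1%}")
print((correct_1 + false_0) / n)
# → 0.5 (every transmitted 1 is detected as either a 1 or a 0)
```

The final line confirms the point above: the diagonal pair always sums to exactly 50% of the samples, however much noise is added.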
The bottom left zone shows the effect of filtering applied to a noisy signal. The bottom right zone is similar to the top right zone, showing a signal histogram and false detection statistics; however, these are after filtering, whereas the top right zone is before filtering.
In the next section you will have a go at adding noise to the data signal in the interactive.
3.10 Adding noise
After familiarising yourself with the interface in Interactive 1, try adding noise in the top left zone. The Noise level (V) slider is not calibrated, except for having a minimum value of 0 and a maximum of 1. Judging by eye, set the slider to around 0.1 V and click on Apply . You can hear what the noisy signal sounds like by going to Audible signal with noise and clicking on the Play (triangle) icon. The top part of the interface should resemble Figure 33.
Figure 33 Noise applied to the data signal
This figure is a screenshot of the top half of the interactive interface. On the left, the unfiltered data signal is shown as a noisy signal around 0 volts for 0.625 seconds, followed by a noisy signal around 1 volt for another 0.625 seconds. Below this are the ‘Audible signal with noise’ controls, which allow you to play the signal; a Zoom slider; and the ‘Noise level (V)’ slider, which goes from 0 to 1 and is set at approximately 0.1. On the right, the probability histogram of the unfiltered signal is shown as two similar overlapping histograms: one centred around 0 and spanning approximately minus 0.8 to plus 0.8, and one centred around 1 and spanning approximately 0.2 to 1.8. Below this is the ‘Decision level (V)’ slider, which is set at the middle of its range; the detection statistics, which will be described shortly; and the ‘FIR filter width’ slider, which is set near the lower end of a scale from 2 to 200. The detection statistics are as follows. Data 1, Correct detections: 49.99 per cent. Data 0, Correct detections: 24.95 per cent. Data 1, False detections: 25.05 per cent. Data 0, False detections: 0.01 per cent.
Activity 9
Allow about 5 minutes
Why has the display in the top right zone changed? What is the display now showing?
The effect of the added noise is to change the voltages representing 0 and 1 randomly around the mean values of 0 and 1. The display shows a histogram of the probabilities of particular signal voltages. Voltages close to the mean are most likely, so the histogram has peaks at 0 and 1; voltages further away are possible, but less likely the further you go from the mean values.
Next you will look at changing the decision level.
3.11 Changing the decision level
Using the interactive in Section 3.9 again, you will now look at changing the decision level.
The decision level is represented by the right-hand vertical edge of the grey area on the histogram display. Use the Decision level (V) slider to set the decision level midway between 0 and 1. You will have to judge its position by eye. Click on Apply to implement the decision level and look at the statistics.
Activity 10
Allow about 5 minutes
Why are there false detections?
Why are the detection rates (correct and false) practically the same for 0 and 1?
The effect of noise is to take the signal occasionally over the decision level, so that a 0 is detected as a 1 and vice versa.
The percentages of correct and false detections are virtually the same for 0s and 1s because of the symmetry of the situation. The noise affects the 0s and 1s equally, and the decision level is symmetrically placed between 0 and 1.
The symmetrical arrangement that you have used so far, with the decision level halfway between the two binary symbols, is typical of many practical implementations of binary signal detection, but it is instructive to look at an asymmetrical arrangement too.
Use the Decision level (V) slider to place the decision level asymmetrically between 0 and 1 (that is, much closer to 1 than to 0, or vice versa), then click on Apply .
Activity 11
Allow about 10 minutes
Explain, in general terms, the correct and false detection statistics that have resulted from your asymmetrical placement. You will not be able to give a precise account, but you might be able to explain the relative sizes of the statistics.
You may have got something similar to Figure 34, with the decision level fairly close to 1.
Figure 34 Asymmetrical decision level
This figure is a screenshot of the top right zone in the interactive interface. The probability histogram of the unfiltered signal is shown as two similar overlapping histograms: one centred around 0 and spanning approximately minus 0.75 to plus 0.75, and one centred around 1 and spanning approximately 0.25 to 1.75. Below this is the ‘Decision level (V)’ slider, which is set nearer to 1 than to 0, as indicated by the shading behind the histograms. The detection statistics are as follows. Data 1, Correct detections: 36.21 per cent. Data 0, Correct detections: 50 per cent. Data 1, False detections: 0 per cent. Data 0, False detections: 13.79 per cent.
Because the decision level is so close to 1, noise-affected 0 signals very rarely go beyond the decision level. This is why false detections of 1 are at 0%, as this statistic reflects 0s that are wrongly detected as 1s. Because no 0s are falsely detected as 1s, every transmitted 0 is detected correctly, which is why correct detections of 0 are shown at 50%.
With the decision level close to 1, noise-affected 1s quite often drop below the decision level and are detected as 0s. This is why the false detection rate for 0 is relatively high, at almost 14%. Correspondingly, the correct detection rate for 1 is relatively low, at just above 36%. These two statistics add up to 50%, since together they account for every transmitted 1.
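A short simulation can reproduce the shape of these statistics. The sketch below is not the interactive's own code; it assumes Gaussian noise and reports the four percentages the same way the interface does, each as a fraction of all bits sent.

```python
import random

random.seed(2)  # repeatable run

noise_level = 0.25  # assumed noise standard deviation, chosen so errors are visible
bits = [0, 1] * 5000  # equal numbers of 0s and 1s
volts = [b + random.gauss(0, noise_level) for b in bits]

def detection_stats(decision_level):
    """Correct/false detection percentages for 1s and 0s, out of all bits sent."""
    n = len(bits)
    detected = [1 if v >= decision_level else 0 for v in volts]
    correct_1 = 100 * sum(b == 1 and d == 1 for b, d in zip(bits, detected)) / n
    correct_0 = 100 * sum(b == 0 and d == 0 for b, d in zip(bits, detected)) / n
    false_1 = 100 * sum(b == 0 and d == 1 for b, d in zip(bits, detected)) / n
    false_0 = 100 * sum(b == 1 and d == 0 for b, d in zip(bits, detected)) / n
    return correct_1, correct_0, false_1, false_0

# Symmetric decision level: the statistics for 0s and 1s come out nearly equal.
sym = detection_stats(0.5)

# Asymmetric level close to 1: false 1s almost vanish, but many 1s drop
# below the level and are falsely detected as 0s.
asym = detection_stats(0.9)
print(sym, asym)
```

In both cases the correct detections of 1 and the false detections of 0 add up to exactly 50%, because together they account for every 1 that was sent.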
Now that you have explored how adding noise and changing the decision level affect the data signal, you will end this course by applying the FIR filter.
3.12 Applying the FIR filter
Use the Decision level (V) slider to return the decision level to midway between 0 and 1, then click on Apply . Confirm that the statistics are as would be expected; that is, correct detections of 1s and 0s are at approximately the same percentage. The correct detection percentages do not need to be identical, but try to get them to within about 1% of each other by adjusting the detection level.
You are now going to apply the FIR filter. In the top right zone, set the FIR filter width (number of samples) slider by eye to around 30–40 samples, then click on Apply .
Activity 12
Allow about 5 minutes
Compare the top right zone (histogram and statistics before filtering) to the bottom right zone (histogram and statistics after filtering). What effect has the filter had on the statistics for correct detection?
Explain any difference in detection statistics between those beneath the unfiltered signal and those beneath the filtered signal.
The filter increases the correct detection rate.
The filter reduces the spread of values around the mean, which is shown by the fact that the histogram peaks are narrower than in the unfiltered histogram. The reduced spread of values reduces the probability of sample values crossing the decision level.
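You can see the same narrowing effect with a moving-average filter, which is one particularly simple FIR filter (all taps equal). This sketch assumes Gaussian noise and is not the interactive's own implementation.

```python
import random
import statistics

random.seed(3)  # repeatable run

noise_level = 0.3  # assumed noise standard deviation
width = 32  # FIR filter width in samples, as suggested in the text

# A long run of 1-bits with added noise
signal = [1 + random.gauss(0, noise_level) for _ in range(20000)]

def moving_average(x, n):
    """n-tap averaging FIR filter: each output is the mean of the last n inputs."""
    out, acc = [], 0.0
    for i, v in enumerate(x):
        acc += v
        if i >= n:
            acc -= x[i - n]  # drop the sample that has left the window
        out.append(acc / min(i + 1, n))
    return out

filtered = moving_average(signal, width)

spread_before = statistics.stdev(signal)
spread_after = statistics.stdev(filtered[width:])  # skip the start-up samples
print(spread_after < spread_before)  # prints True: the histogram peak narrows
```

Averaging n independent noisy samples reduces the spread of the noise by a factor of about √n, which is why the filtered histogram peaks are so much narrower.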
Some points to consider as a result of your work with the FIR filter interactive resource are as follows:
Noise causes the actual voltages for 1s and 0s to be distributed around the intended voltage.
The distribution of voltages can cause errors when, for example, the voltage representing a 1 is closer to 0 than to 1.
The wider the distribution (that is, the noisier the signal), the more likely errors become.
Filtering can reduce the width of the distribution of voltages, thereby reducing the error rate.
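The points above can be checked numerically. The sketch below (again assuming Gaussian noise, which is a guess about the interactive's noise model) estimates the error rate for two noise levels with the decision level midway between 0 and 1.

```python
import random

random.seed(4)  # repeatable run

def error_rate(noise_level, n=20000):
    """Fraction of bits wrongly detected with a mid-way (0.5 V) decision level."""
    errors = 0
    for _ in range(n):
        bit = random.randrange(2)
        volt = bit + random.gauss(0, noise_level)  # intended voltage plus noise
        detected = 1 if volt >= 0.5 else 0
        errors += detected != bit
    return errors / n

# The wider the noise distribution, the more likely errors become
quiet, noisy = error_rate(0.2), error_rate(0.4)
print(quiet < noisy)  # prints True
```

Narrowing the distribution by filtering has the same effect as moving from the noisy case back towards the quiet one: the error rate falls.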
Conclusion
In this course you have learned about discrete-time signals and the discrete-time systems that use them. In doing so, you have focused on digital filtering and found out why advances in digital computer processing have allowed digital filtering to be used in scenarios where analogue filters would once have been the only viable solution. You have seen how relatively simple averaging filters can remove high-frequency noise, and also how more complex filters are designed.
This completes your study. You should now be able to:
understand how filtering of discrete-time signals can be achieved by mathematical processes such as averaging
understand how mathematical operations applied to a discrete-time signal in the time domain can result in the removal or reduction of unwanted aspects of the signal
explain why filters are designed in the frequency domain, and specify a digital filter to achieve a desired filtering effect.
This OpenLearn course is an adapted extract from the Open University course T312 Electronics: signal processing, control and communications
alias
An error appearing in a sampled signal when the bandwidth of the signal is greater than half the sampling frequency (that is, when the sampling frequency is lower than the Nyquist rate). Such effects are also referred to as artefacts or ghosts.
anti-aliasing filter
A low-pass filter applied before sampling to prevent aliasing by cutting all the spectral components that are greater than or equal to half the sampling frequency.
Bode plot
Loosely, a graph of the frequency response of a device or system. Strictly, a pair of graphs showing amplitude response and phase response over the same span of frequencies.
convolution
A mathematical operation that combines two signals to produce a third signal. When two signals are convolved, the resultant third signal expresses how the shape of one signal is modified by the other.
decibel
A logarithmic way of expressing a power ratio. For powers P1 and P2, their ratio in decibels is defined as 10 log10 (P1/P2). The symbol for decibels is dB. Strictly the decibel is not a unit, as any ratio must be a pure (that is, dimensionless) number. However, it is often regarded as a unit.
difference equation
An equation relating the values of variables sampled at fixed intervals, in which each sampled value is multiplied by a coefficient. It is the discrete-time counterpart of a differential equation.
differential equation
A mathematical equation in which one or more terms contain a derivative of a variable.
digital signal processor (DSP)
A semiconductor device similar to a microprocessor. Whilst a microprocessor is a general-purpose device, a DSP has been optimised to carry out the computations used for processing discrete signals.
first-order
As applied to a filter, the simplest type of filter, having in its passive form a single reactive element (a capacitor or an inductor) and a roll-off of 20 dB/decade, or 6 dB/octave.
As applied to a differential equation, such an equation in which the main variable is differentiated once. Any system that can be modelled with such a differential equation would be referred to as a first-order system.
Fourier transform
A transformation that extends the concept of the Fourier series to non-periodic signals. It allows us to estimate the spectrum of a signal and perform a frequency analysis.
frequency response
The response of a system (e.g. a filter) when we input sine waves at different frequencies (but equal amplitude). It tells us how the system will modify the spectrum of any input signal we feed to the system.
gain
In amplification, a measure of how many times the input signal amplitude is increased. It is generally measured as the ratio of the output signal amplitude to the input signal amplitude. If a gain value is given as just a number (i.e. with no units), the gain is likely to be a ratio of voltages; if the value is given in decibels, it is a ratio of powers. See also voltage gain and power gain.
octave
The span of frequencies covered by a doubling of frequency, or by a halving of frequency. For example, the frequency span from 500 Hz to 1000 Hz is an octave, as is the span from 500 Hz to 250 Hz. In music, the eight notes of a diatonic scale (that is, doh, re, me … ti, doh) cover an octave; hence the name ‘octave’ for this span of frequencies.
operational amplifier
A general-purpose analogue amplifier intended to be used as a component in other electronic circuits, and usually supplied as a multi-pin integrated-circuit device with two inputs and a single output. Typically an op-amp is a differential amplifier (that is, it amplifies the difference between its two inputs) and has an unusably high gain and extremely high input impedance. To give useful and predictable behaviour, external feedback circuitry must be applied. This circuitry determines essential parameters such as input impedance, output impedance, overall gain and frequency response, and also whether the circuit operates as a single-input amplifier or a differential amplifier.
order
A numerical classification for filters (e.g. ‘first order’, ‘second order’, ‘third order’, etc.). The order is determined by the differential equation of the filter. For a first-order filter, the highest differential coefficient in the equation is first-order (e.g. dv/dt); for a second-order filter, the highest differential coefficient is second-order (e.g. d²v/dt²). The higher the order, the steeper the roll-off and the sharper the cut-off between passband and stop band. Increasing the order by one adds 20 dB/decade to the filter’s roll-off.
passband
The band or bands of frequencies passed by a filter with least attenuation, or no attenuation. Frequencies outside the passband are cut off, or stopped, by the filter. Passband is the counterpart of stop band.
power gain
The ratio of output power to input power. It is usually expressed in decibels (dB). A power gain of 0 dB means that the output power is the same as the input power. A power gain of 3 dB (or, more exactly, 3.0103 dB) means that the output power is double the input power.
quantisation
Conversion of an analogue quantity, which could take any value within a range, to one of a set of discrete values.
roll-off
The steepness of a filter’s attenuation in a stop band. Also, the steepness of the attenuation of any device that produces attenuation (for example, a linear amplifier at the extremes of its operating-frequency range).
stop band
The band or bands of frequencies stopped, or cut off, by a filter. The counterpart of the passband.
taps
In a digital filter, the number of taps is the number of terms in the mathematical expression that describes the filter. This expression is given in the form of a difference equation. In digital filter design, the maximum number of taps to be used in the implementation is required as part of the design specification.
voltage gain
For a sinusoidal input and output, voltage gain is the ratio of the output voltage’s amplitude to that of the input voltage. It has no units.
window function
In signal processing, a mathematical function that is zero-valued outside some chosen interval. When a signal is multiplied by a window function, the product is also zero-valued outside the chosen interval, so it is the original signal viewed through the window function.
Asgari, S. and Mehrnia, A. (2017) ‘A novel low-complexity digital filter design for wearable ECG devices’, PLOS ONE, vol. 12, no. 4 [Online]. Available at https://doi.org/10.1371/journal.pone.0175139 (Accessed 25 March 2019).
Dongarra, J. and Sullivan, F. (2000) ‘Guest editors’ introduction: the top 10 algorithms’, Computing in Science & Engineering , vol. 2, no. 1, pp. 22–3.
Wickert, M. (2011) ‘Chapter 5 FIR Filters’, ECE 2610 Introduction to Signals and Systems [Online], University of Colorado Colorado Springs. Available at www.eas.uccs.edu/~mwickert/ece2610/lecture_notes/ece2610_chap5.pdf (Accessed 3 June 2019).
This free course was written by Allan Jones, Bernie Clarke and Phil Picton. It was first published in July 2020.
Except for third-party materials and where otherwise stated (see terms and conditions), this content is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence.
The material acknowledged below is Proprietary and used under licence (not subject to Creative Commons Licence). Grateful acknowledgement is made to the following sources for permission to reproduce material in this free course:
Figures
Course image: paulclee / www.pixabay.com
Figure 1: © BBC
Figure 2: © NASA
Figure 3: Courtesy Starship Technologies
Figure 14: Adapted from: https://doi.org/10.1371/journal.pone.0175139
Installing Signal Wizard for use in offline mode PDF, Figure 1: Taken from: http://www.signalwizardsystems.com/
Every effort has been made to contact copyright owners. If any have been inadvertently overlooked, the publishers will be pleased to make the necessary arrangements at the first opportunity.
Don't miss out
If reading this text has inspired you to learn more, you may be interested in joining the millions of people who discover our free learning resources and qualifications by visiting The Open University – www.open.edu/openlearn/freecourses .