Digital signal processing (DSP) has come a long way in 20 years. As late as the early 1980s, it still had to be done on mainframe computers. Research work was done by digitizing signals for analysis in complex computer runs that might return a result a day later. Scientists wrote their own programs or used libraries written specifically for such analysis. In fact, digital signal processing was used as a way to test filter designs for new electronic circuits. At that time, no one expected that computers would get faster and smaller as quickly as they did.

If we strip away the DIGITAL from Digital Signal Processing, we are left with something that we’ve been doing in electronics since it was first invented: Signal Processing! Signal processing is all about taking a signal, applying some change to it, and then getting a new signal out. That change might be amplification or filtering or something else, but nearly all electronic circuits can be considered to be signal processors. Looked at this way, the signal processor as a black box might be composed of discrete components like capacitors and resistors, or it could be a complex integrated circuit with many circuits to accomplish a more complex task, or it could be a digital system which accepts a signal on its input and outputs the changed signal. So long as it accomplishes its defined task, it doesn’t matter how the box works internally.

Digital signal processors require several things to work properly:

A processor fast enough and with enough precision to support the mathematics it needs to implement.

Supporting memory to store programming, samples, intermediate results, and final results.

Analog-to-Digital (A/D) and Digital-to-Analog (D/A) Converters to bring real signals into and out of the digital domain.

Programming to do the job.

Digital signal processors, even single chip DSP systems, are built from these elements. Twenty years ago, anyone using DSP had to be quite a mathematician to be able to implement and use the algorithms. Today, DSP can be incorporated into devices so simple that they can be mass-produced and operated with as little as the press of a button.

The internal programming of a DSP chip is far too complex to deal with here. It is generally proprietary as well, but the basics of how DSP works are simple enough to understand. While you won’t be able to implement your own algorithms after reading this, you will know a little more about how DSP works.

Sampling
Digital systems don’t work with continuous waveforms. Much work was done trying to create analog computers to handle calculations on continuous systems, but analog computers proved to be inflexible, slow, and hard to reconfigure for new tasks. In particular, it was hard to implement general algorithms on them. Had digital systems not developed so fast through the 1950s and 1960s, we might have solved those problems, but by the 1960s it was already rare to find an analog computer anywhere. Serious work was done on digital computers, where complex algorithms could be coded relatively easily. This meant that we had to get our data into digital form.

In the 1960s, digitizing data was often a manual task. As late as the 1980s, much data was read into digital computers from paper tape systems. However, increasingly computers were being harnessed directly to circuits that could produce a digital output when given an analog input. It became possible to take a signal such as the one below:

And pass it into the computer automatically as a stream of digital data that looked like this:

Once in the computer, the process could be reversed by playing it out through a D/A converter to produce a close approximation of the original waveform. Notice the words ‘close approximation’. No digital sampling system can perfectly reproduce the original signal because each sample is a single number representing the signal in some small, but finite interval. Modern techniques make it possible for that digital sample to be so good that you can’t tell the difference on your CD or DVD, but the difference is still there. Digital systems have errors just like any system. Minimizing these errors to make them unnoticeable occupies much of the time of the digital system designer.
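The sampling and quantization error described above can be sketched in a few lines of code. This is only an illustration, not any particular converter: the sample rate, tone frequency, and 8-bit resolution are all assumed values chosen for the example.

```python
import math

# Assumed, illustrative parameters: a 100 Hz sine sampled at 1000
# samples/second by a hypothetical 8-bit A/D converter (256 levels
# spanning -1.0 .. +1.0).
SAMPLE_RATE = 1000
TONE_HZ = 100
LEVELS = 256

def sample_and_quantize(n_samples):
    """Sample a sine wave and round each sample to the nearest A/D level."""
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        x = math.sin(2 * math.pi * TONE_HZ * t)        # "continuous" value
        level = round((x + 1.0) / 2.0 * (LEVELS - 1))  # map to 0..255
        xq = level / (LEVELS - 1) * 2.0 - 1.0          # map back to -1..1
        samples.append(xq)
    return samples

quantized = sample_and_quantize(1000)

# The reconstruction is close but never exact: each sample's error is
# bounded by half of one quantization step.
step = 2.0 / (LEVELS - 1)
errors = [abs(q - math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE))
          for n, q in enumerate(quantized)]
print(max(errors) <= step / 2 + 1e-12)
```

With more bits per sample the step shrinks and the approximation improves, which is why modern converters can sound indistinguishable from the original while still not being identical to it.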

Processing
Once the signal exists inside our digital system as a stream of samples, we can now process them in a variety of ways. This is where the math gets complicated and we have a real need to know about the mathematics. However, if what needs to be done is simple enough so that it can be done with existing DSP chips, we may not need to know any of the complex mathematics ourselves, leaving it to the chip designers to make the math work right. If you’re doing something special though, then even with DSP chips, you’ll need to be skilled enough to know what can be done and what cannot.

Consider though how a simple algorithm can work in the digital domain. One of the simplest possible filters is an averaging filter. Using a digital averaging filter on the signal above, which was generated from two pure tones and a random noise generator, we can smooth out the signal and make it less noisy:

It doesn’t perfectly get rid of the noise element, but it does smooth the signal and this may be enough for what we want to do. More complex filters can be constructed by weighted averages of samples taken from the digital stream, but now we run into another complex problem, time.
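An averaging filter like the one just described is easy to sketch. The window length, tone frequencies, and noise level below are assumptions made for the example, not values from the original signal.

```python
import math
import random

def moving_average(signal, window=5):
    """Averaging filter: each output is the mean of the last `window` inputs."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

random.seed(1)
rate = 8000  # assumed sample rate for illustration

# Two illustrative pure tones plus uniform random noise.
noisy = [math.sin(2 * math.pi * 300 * n / rate)
         + 0.5 * math.sin(2 * math.pi * 600 * n / rate)
         + random.uniform(-0.5, 0.5)
         for n in range(rate)]

smoothed = moving_average(noisy)
```

Averaging nearby samples cancels out much of the random noise (which changes sign from sample to sample) while largely preserving the slower-moving tones; the trade-off is that a long window also begins to smear the signal itself.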

Many of us don’t think much about the time it takes to do calculations. We may be frustrated by how slowly our word processor is working or how long it takes for our spreadsheet to calculate, but unless you have programmed at a very low level, microprocessor clock cycles have very little meaning. However, when working with DSP, processing time becomes all-important. Consider what we would have to do to average the last three digital samples together and output a signal at the same rate as we are sampling:

A new sample has to be taken and stored for use.

The new sample has to be averaged with the last two.

The result has to be output.

All of this has to take place before the next sample is taken. This is STREAM processing, processing the signal in real time. If we are sampling at 10,000 samples per second, then we have 1 ten-thousandth of a second to complete the calculation. This is a long time in the life of any real microprocessor, but it may not be enough if we want to use more complex algorithms.
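The three steps above can be sketched as a single stream-processing function, along with the time budget implied by a 10,000 samples/second rate. The timing numbers you see will of course depend on the machine; this is only a sketch of the idea.

```python
import time
from collections import deque

SAMPLE_RATE = 10_000          # 10,000 samples per second, as in the text
BUDGET = 1.0 / SAMPLE_RATE    # 100 microseconds to process each sample

window = deque([0.0, 0.0, 0.0], maxlen=3)  # holds the last three samples

def process_sample(x):
    """One stream-processing step: store the new sample, average the last three."""
    window.append(x)          # step 1: take and store the new sample
    return sum(window) / 3    # step 2: average it with the last two

# Step 3 would be outputting the result through the D/A converter.
start = time.perf_counter()
y = process_sample(0.5)
elapsed = time.perf_counter() - start
print(f"step took {elapsed * 1e6:.1f} us of a {BUDGET * 1e6:.0f} us budget")
```

A three-sample average fits the budget with ease on any modern processor; the point is that every algorithm, however complex, must finish inside that same fixed interval or samples will be lost.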

Block processing collects a large number of samples, say 1024, at a time and processes them while the next 1024 are being collected. Some algorithms, such as the Fast Fourier Transform, can only work in this mode, but even this may not be enough time for very complex calculations.

Consider again the signal at the beginning of this note. It was made from two pure tones plus a random noise element. With block processing, we can apply a Fast Fourier Transform to the digitized signal to get an output that looks like this:

A Fourier Transform is a special mathematical algorithm that transforms the signal into a representation we can think of as the energy in the signal vs. frequency. In this case, we can see that most of the energy is concentrated in two single frequencies and the rest is spread out randomly across the spectrum. The noise is that random element. An Inverse Fourier Transform can return the signal back to its original time-sampled form, called the time domain.
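The two-tones-plus-noise picture can be reproduced with a short sketch. For clarity this uses a plain Discrete Fourier Transform (the FFT computes exactly the same result, just much faster), and the block size, tone bins, and noise level are assumed values chosen for the example.

```python
import cmath
import math
import random

def dft(samples):
    """Discrete Fourier Transform (the FFT is a fast algorithm for this)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

random.seed(7)
N = 256  # one block of samples

# Two pure tones (placed at frequency "bins" 10 and 40) plus random noise.
block = [math.sin(2 * math.pi * 10 * t / N)
         + 0.5 * math.sin(2 * math.pi * 40 * t / N)
         + random.uniform(-0.2, 0.2)
         for t in range(N)]

# Magnitude spectrum: energy vs. frequency.
spectrum = [abs(c) for c in dft(block)]

# The two largest peaks in the first half of the spectrum land on the tones;
# the noise shows up as a low random floor across all the other bins.
peaks = sorted(range(N // 2), key=lambda k: spectrum[k], reverse=True)[:2]
print(sorted(peaks))  # prints [10, 40]
```

Zeroing the low-level bins and applying the inverse transform is exactly the "easy way of isolating the tones" described below: the tones tower over the noise floor in the frequency domain even though they are buried in it in the time domain.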

Why might we want to do a Fourier transform? Simply because we can apply algorithms in the frequency domain that we can’t apply in the time domain. Some things, like tones, stand out in the frequency domain. Other things, like noise, appear as a random element below some threshold in the frequency domain. Clearly, if the signal we wanted to process was our simple two-tone system, this would be an easy way of isolating the tones and eliminating most of the noise. However, real life is never that simple. Real signals, like SSB or CW Morse signals, vary over time. They change in what seem to be dynamic and almost random ways. There are patterns, or we wouldn’t be able to understand them, but DSP has not yet grown sophisticated enough to recognize those patterns the way a human listener can. We may be able to understand the signals, but even the most complicated processing yet devised can’t come close. This runs us right back up against the time wall.

Time is very important to the digital designer. Advances in the speed of DSP processors and microprocessors in general open up new things that a digital designer can do, but always we are limited by time and space requirements. Doing the complex algorithms is easy when you have as much space as you want to store data and as much time as you need for the calculations, but when a DSP processor has to do something in real time, there are severe limits on just how much can be done. To do practical work with DSP, we have to accept that the sophistication, while great, is not infinite.

Sophisticated understanding of DSP allows us to recognize a signal in noise by its characteristics. We can process out much of the noise, improving the ratio of signal to noise thereby making it more understandable. We can recognize interfering tones and process them out. Even more, we can adapt as the signal and the noise change over time. No algorithm is perfect, but compared to 20 years ago, the level of improvement in noise reduction is phenomenal. SGC’s ADSP2 takes advantage of these advances and makes digital noise reduction a reality for a wide variety of transceivers and receivers, allowing you to concentrate on the communication and not the noise.

For further reading about Digital Signal Processing, SGC has a small book available online for download as a PDF file which can be found on our publications page. For those who would like a more sophisticated understanding of DSP, we recommend Digital Signal Processing Technology, Doug Smith, American Radio Relay League, 2001.

SGC Inc. Tel: 425-746-6310 Fax: 425-746-6384 Email: sgc@sgcworld.com
SGC reserves the right to change specifications, release dates and price without notice.