The Implementation Of A Software Defined Radio Computer Science Essay


A software radio can be defined as a reconfigurable or reprogrammable radio that can provide different functionalities with the same hardware. Modulation and demodulation of the transmitted signal are performed in software on a digital signal processing platform. This differs from a conventional radio, where processing of the signal is usually accomplished in the analog domain by application-specific circuits.

A software defined radio is characterized by its adaptivity: its functionality can be changed by modifying or replacing the software, making it easy to upgrade or switch to new modes of operation with minimal or no change to the hardware. There is a big difference between a radio that uses software for some of its functions and a radio that can be completely redefined by modifying the software for different application needs; the latter is a software defined radio.

The rapid advancement of communication systems renders most hardware obsolete once new technologies come into place, hence the need for a software defined radio that easily adapts to new technologies through modification of the software that runs it.

This thesis explores the implementation of a software defined radio that receives AM and FM signals.

The problem

The rapid pace of day-to-day technological advancement creates the need for "future-proof" radios. If the radio functions carried out by hardware can be implemented in software, new functions and modes of operation can easily be adopted by updating or replacing the software that runs the radio.

Most wireless communication protocols are based on radio and have been undergoing rapid changes in search of cost-effective and reliable solutions to increasing consumer demand for fast and cheap communication. Cellphones, pagers, televisions, two-way radios, wireless LANs, and all other wireless devices use radio to communicate, hence the need for a solution that bypasses the costly affair of hardware changes and upgrades.

Implementing radios in software enables the use of multiple waveforms on the same receiver hardware; with this, one software radio can communicate with many different radios through a mere change of software parameters. This creates interoperability amongst different systems.

Background

Radio

Radios are devices which transmit and receive signals over a distance by transmitting and receiving radio waves. Radio waves do not rely on a physical medium for propagation; this property has made radios essential for modern communication. Radios accomplish this by radiating electromagnetic waves modulated by a message signal on the transmitting side and demodulating the received waves on the receiving side. The message signal can be anything from music, played on a local radio station, to digital weather maps transmitted from a weather satellite.

Radios differ mainly in two areas. The first is the wavelength of the electromagnetic energy that is radiated, which determines the part of the electromagnetic spectrum the radio operates in. The second is how this electromagnetic wave, called the carrier, is modulated to allow a message signal to be transmitted. For two radios to be compatible, both of these have to match. Commercial FM radio stations, for example, use the same modulation technique as traditional television broadcasts to transmit sound, but FM radios cannot play the sound from television broadcasts because the two systems operate in different parts of the radio spectrum. Similarly, even though Bluetooth and 802.11b wireless LAN devices operate in the same part of the radio spectrum (the 2.4 GHz ISM band), they are not interoperable at the physical layer because they employ different modulation techniques.

Modulation

Modulation is the process by which a message signal is encoded onto one of the characteristics of another signal (the carrier signal) to produce a third signal (the modulated signal) that matches the properties of the medium through which it is to be transmitted. The message signal is used to vary the carrier wave, changing the carrier's amplitude, frequency, or phase. There are various modulation schemes, a few of which are discussed below.

Amplitude Modulation: A voice signal's varying voltage is applied to a carrier. The amplitude of the carrier then changes according to the voice signal, with the carrier frequency itself unaltered, as illustrated in the figure below:


Fig 1. Amplitude Modulation

Frequency Modulation: Conveys information over a carrier wave by varying its frequency, unlike amplitude modulation, where the amplitude is varied. In frequency modulation the frequency of the carrier wave is varied according to the modulating signal while the amplitude remains constant.


Fig 2. Frequency Modulation
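As a concrete illustration of the two schemes, the short sketch below (our own, not part of the original design; the sample rate, carrier, message frequency, and 1 kHz deviation are arbitrary choices) generates one block of AM and FM samples from the same message tone:

// A minimal sketch of AM and FM: the same message tone varies either
// the amplitude or the instantaneous frequency of a carrier.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double fs = 44100.0;   // sample rate, Hz (arbitrary choice)
    const double fc = 5000.0;    // carrier frequency, Hz (arbitrary choice)
    const double fm = 200.0;     // message frequency, Hz (arbitrary choice)
    double phase = 0.0;          // running phase angle for the FM carrier
    for (int n = 0; n < 1000; ++n) {
        double t   = n / fs;
        double msg = std::sin(2.0 * PI * fm * t);                     // message signal
        double am  = (1.0 + 0.5 * msg) * std::cos(2.0 * PI * fc * t); // AM: amplitude varies
        phase += 2.0 * PI * (fc + 1000.0 * msg) / fs;                 // FM: frequency varies
        double fmOut = std::cos(phase);                               // constant amplitude
        std::printf("%f %f %f\n", msg, am, fmOut);
    }
    return 0;
}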

Superheterodyne Receiver

Most conventional radios use the superheterodyne receiver to receive and demodulate a radio signal before playing it as audio. The superheterodyne receiver is made up of three major parts: a local oscillator, a frequency mixer that mixes the received signal with the local oscillator signal, and a tuned amplifier. The design of a superheterodyne receiver is based on the process of mixing. An RF mixer multiplies two signals together; the output is the product of the instantaneous levels of the two input signals and contains frequencies other than those of the inputs. New signals occur at the sum and difference of the two input frequencies: assuming the inputs are at frequencies F1 and F2, the new signals occur at (F1 + F2) and (F1 - F2). The mixer has two inputs and one output; the received signal enters one input and the other input comes from a locally generated signal (from the local oscillator). The output is a newly generated signal that passes through a fixed-frequency intermediate frequency (IF) amplifier and filter. Any downconverted signals that fall within the passband of the IF amplifier are passed and amplified on to the next stages; the rest are rejected. Tuning is accomplished by varying the frequency of the local oscillator.
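The sum and difference products follow from a standard trigonometric identity (stated here for completeness, not taken from the original text):

sin(2π·F1·t) × sin(2π·F2·t) = ½·cos(2π(F1 − F2)·t) − ½·cos(2π(F1 + F2)·t)

so the mixer output contains exactly the two new frequencies (F1 + F2) and (F1 − F2).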


Fig 3. Block Diagram of Super Heterodyne Receiver

The figure above illustrates the block diagram of a basic superheterodyne receiver. Once the signal has been downconverted to the required frequency, the IF amplifier and filter pass it to the demodulator, where the signal is demodulated using analog circuitry according to the modulation scheme selected.

The next chapters of this thesis discuss the implementation of a PC-based software defined radio. Chapter 2 discusses the receiver hardware design used to downconvert an RF signal to baseband. In Chapters 3 and 4 we discuss how the baseband signal is fed to the PC sound card, and in Chapter 5 we discuss the necessary digital signal processing algorithms and the implementation of the software defined radio.

2. Hardware

An ideal software radio requires the A/D converter to be as close as possible to the RF antenna; the rest of the task is to design software to emulate the various hardware functions. This idea is at the moment impractical given the frequency coverage of global telecommunications, as it would require an A/D converter that works up to a range of giga-samples per second and beyond. Such converters exist but are very costly, hence the need for a front end: hardware that is able to select a desired portion of the RF spectrum and downconvert this signal to a frequency that the A/D converter can sample.

A 16-bit PC sound card is used to sample the downconverted signal; a standard PC sound card samples at 44,100 Hz. Our signal bandwidth should therefore be at most half that frequency, in line with the Nyquist sampling theorem. According to Harry Nyquist, to accurately recover all the components of a periodic waveform it is necessary to use a sampling frequency of at least twice the bandwidth of the signal being measured. This minimum sampling frequency is known as the Nyquist rate and is expressed as:

f_s ≥ 2B (eqn 1)

where f_s is the sampling frequency and B is the bandwidth. With f_s = 44,100 Hz, the usable bandwidth is thus at most 22.05 kHz.

Sampling at a frequency below 2B produces an alias of the signal that appears alongside the original signal; aliases can cause distortion, beat notes, and unwanted spurious images. Also available on the market are 24-bit sound cards with sampling rates of up to 96 kHz.
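The aliasing effect is easy to demonstrate numerically. In the sketch below (our own illustration; the 30 kHz and 14.1 kHz tones are arbitrary examples), a tone above f_s/2 produces exactly the same sample values as its alias inside the audio band, since 44.1 kHz − 30 kHz = 14.1 kHz:

// A minimal sketch of aliasing: a 30 kHz tone sampled at 44.1 kHz yields
// the same sample values as a 14.1 kHz tone, i.e. an in-band alias.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double fs = 44100.0;                          // sound card sample rate, Hz
    for (int n = 0; n < 5; ++n) {
        double t  = n / fs;
        double hi = std::cos(2.0 * PI * 30000.0 * t);   // tone above fs/2
        double lo = std::cos(2.0 * PI * 14100.0 * t);   // its alias below fs/2
        std::printf("n=%d  30 kHz: %+f   14.1 kHz: %+f\n", n, hi, lo);
    }
    return 0;
}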

Before sampling with the sound card, the RF signal must be converted to audio frequencies. The simplest way of achieving this while maintaining a wide dynamic range is to use a direct conversion technique that translates the modulated signal directly from RF to baseband. To downconvert the RF signal to a range within the A/D converter's sampling rates, we adopt the Tayloe quadrature product detector by Dan Tayloe. The beauty of the detector lies in its simplicity of design and its high performance. Another strong point is that the entire circuit requires only three or four ICs and a few resistors and capacitors to build.

The figure below illustrates the concept of the basic Tayloe detector:


Fig. 4 Tayloe detector concept diagram

An incoming RF signal passes through the common resistor R and a commutating multiplexer to one of the four sampling capacitors C. The design uses a 1:4 multiplexer that commutates at four times the desired detection frequency, so each capacitor sees only one quarter of the input cycle. This arrangement is a switching integrator that produces only a difference frequency: the input resistor R and the detector capacitance C together act as an integrator, averaging the signal over the quarter cycle during which it is connected to the capacitor.


Fig 5. Sine wave input to the detector

Figure 5 shows a sine wave at the same frequency as the detector, with the phase alignment that will produce a maximum positive voltage on the first capacitor, zero voltage on the second and fourth capacitors, and a maximum negative voltage on the third.

The four capacitors sample at 0°, 90°, 180° and 270° respectively.

The 180° and 270° outputs carry the same information as the 0° and 90° outputs. Therefore the 0° and 180° outputs can be summed differentially to produce an in-phase signal (I), and the 90° and 270° outputs to form a quadrature signal (Q). A low-noise op-amp can be used to differentially sum them, as illustrated below:


Fig. 6 Op-amp to sum the outputs

When the carrier frequency is shifted away from the sampling frequency, the phase outputs no longer remain DC values. The output frequency varies according to the difference between the carrier and the switch rotation frequency, providing a precise depiction of all the signal components converted to baseband.
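To make this behaviour concrete, the following numerical sketch (our own illustration with arbitrary parameter values, not the actual hardware or any code from this project) models the commutating switch and the R-C averaging, and shows I and Q emerging as differences of the four capacitor voltages. With the carrier and switch frequencies equal and the phase alignment of Figure 5, I settles to a large DC value and Q to roughly zero:

// A minimal numerical sketch of the Tayloe detector concept: a 1:4 commutator
// routes the RF input to four averaging capacitors; differencing opposite
// capacitors yields I and Q. All names and values here are illustrative.
#include <cmath>
#include <cstdio>

int main() {
    const double PI    = 3.14159265358979323846;
    const double fc    = 10.0e6;              // carrier frequency, Hz
    const double fsw   = 10.0e6;              // switch rotation frequency, Hz
    const double fstep = fsw * 64.0;          // simulation time resolution
    const double alpha = 0.01;                // R-C averaging factor per step
    double cap[4] = { 0.0, 0.0, 0.0, 0.0 };   // sampling-capacitor voltages

    for (long n = 0; n < 200000; ++n) {
        double t  = n / fstep;
        // Phase offset of pi/4 centres the positive peak in the first
        // capacitor's window, matching the alignment described for Fig. 5.
        double rf = std::sin(2.0 * PI * fc * t + PI / 4.0);
        // Which quarter of the switching cycle are we in (0..3)?
        int sel = (int)(std::fmod(t * fsw, 1.0) * 4.0) & 3;
        // The selected capacitor tracks toward the RF level (averaging)
        cap[sel] += alpha * (rf - cap[sel]);
    }
    double I = cap[0] - cap[2];   // 0 deg minus 180 deg outputs
    double Q = cap[1] - cap[3];   // 90 deg minus 270 deg outputs
    std::printf("I = %f  Q = %f\n", I, Q);  // DC values when fc == fsw
    return 0;
}

Shifting fsw slightly away from fc in this sketch makes I and Q oscillate at the difference frequency, which is exactly the downconversion behaviour described above.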

In this project a single balanced Tayloe detector was designed, built, and tested. The schematic design was adopted from the article series "A Software Defined Radio for the Masses, Part 1" by Gerald Youngblood. A great deal of modification was required, since the design at the time of the article was neither implemented nor simulated; the article was written almost a decade ago, and most of the ICs are discontinued and no longer in production.

A schematic diagram of the circuit is shown below:

Fig. 7 Schematic Diagram of the Tayloe detector

The circuit contents are listed in the table below:

Component                                                  Quantity
Dual 1-of-4 multiplexer/de-multiplexer (SN74CBT3253)       1
Dual D-type positive-edge-triggered flip-flop (74AC74)     1
Low-noise operational amplifier (LT1115)                   2
Resistors                                                  4
Capacitors                                                 15
Inductor                                                   1
BNC connectors                                             2
5 V and 12 V DC supplies                                   2

Next we briefly describe the three different ICs used in this project:

Dual 1-of-4 Multiplexer/De-multiplexer (SN74CBT3253)

In the original article the detector was implemented using the PI5B3253. This is quite an old IC and very hard to source, hence the need to find a replacement that does exactly the same job without any compromise in gain. The SN74CBT3253 was found to be the best substitute and was sourced from Farnell (a local supplier). The IC comes in a surface-mount package and was thus quite complicated to solder given no previous background in surface-mount soldering; with the help of the technical staff at the laboratory the component was mounted properly.

The SN74CBT3253 from Texas Instruments is a dual 1-of-4 high-speed TTL-compatible FET multiplexer/de-multiplexer.


Fig. 8 SN74CBT3253

The low on-state resistance of the switch allows connections to be made with minimal propagation delay. 1OE, 2OE, S0, and S1 select the appropriate B output for the A-input data. In this circuit it acts as the 1:4 de-multiplexer that switches the signal to each of the four sampling capacitors.


Fig. 9 Logic diagram of the 1:4 multiplexer


Fig. 10 Function table of the 1:4 Multiplexer

Dual D-type Flip-flop (74AC74)

The 74AC74 is a dual D-type flip-flop with asynchronous clear and set inputs and Q and Q̄ outputs. It is triggered by the positive edge of the clock pulse; information is only transferred to the outputs on the positive-going edge of the clock input. In this project the 74AC74 is connected as a divide-by-4 Johnson counter to provide the two-phase clock to the multiplexer chip.

The IC contains two D-type flip-flops; the Q output of the first flip-flop is connected to the data (D) input of the second, while the inverted Q output of the second flip-flop is connected back to the data input of the first. A short behavioural sketch of this counter is given after Fig. 11.


Fig. 11 Dual D-type flip flop showing the connections
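The divide-by-4 and two-phase behaviour of this wiring can be verified with a few lines of code. The sketch below (our own illustration, not project code) steps the two flip-flops through eight clock edges:

// A minimal behavioural sketch of the divide-by-4 Johnson counter: two D
// flip-flops, Q1 feeding D2 and inverted Q2 fed back to D1. Clocked at 4x
// the switching frequency, Q1 and Q2 are square waves 90 degrees apart.
#include <cstdio>

int main() {
    int q1 = 0, q2 = 0;
    for (int edge = 1; edge <= 8; ++edge) {   // eight clock edges = two cycles
        int d1 = !q2;                         // feedback from inverted Q2
        int d2 = q1;                          // Q1 drives D2
        q1 = d1; q2 = d2;                     // both update on the rising edge
        std::printf("edge %d: Q1=%d Q2=%d\n", edge, q1, q2);
    }
    // Q1 and Q2 each repeat every four clock edges (divide-by-4) and are
    // offset by one edge, i.e. a quarter of their own cycle: a 90-degree
    // two-phase clock, as required by the multiplexer.
    return 0;
}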

Low Noise Operational Amplifier (LT1115)

The LT1115 is a low-noise operational amplifier. Since we are dealing with audio circuits, this IC was chosen to avoid introducing any unnecessary noise that would compromise the audio output. The LT1115 is a very high performance op-amp exhibiting very low voltage noise and high gain. The op-amps are used to differentially sum the 0° and 180° outputs together and the 90° and 270° outputs together.


Fig. 12 LT1115 Op-amp

How it works

The antenna impedance and the sampling capacitors together form a resistor-capacitor low-pass filter when each of the four switches is on; the sample represents the average voltage of the signal, i.e. its integral, during that particular quarter cycle. Each capacitor forms an R-C track-and-hold circuit: as its switch turns on, the capacitor charges to the average value of the carrier during that one quarter cycle and holds that value through the next three quarter cycles.

Differentially summing the 0° and 180° outputs using the op-amp gives a DC voltage twice the value of the independently sampled signal when the carrier frequency and switch rotation frequency are equal; the same holds for the 90° and 270° outputs. By shifting the switching frequency away from the carrier frequency, the phase outputs no longer remain at DC levels; the output frequency varies according to the difference between the carrier and switching frequencies, providing a precise representation of the signal downconverted to baseband.

The two op-amps give the (I) and (Q) outputs respectively. I and Q are discussed in detail in chapter 3.

A PCB (printed circuit board) design of the circuit was created using OrCAD Capture, starting with the schematic entry and moving on to layout. The PCB was fabricated in the school laboratory, and all the components were then soldered onto the board, including the surface-mounted multiplexer IC.

A diagram of the PCB layout can be found in the appendix.

Testing

Upon completion the hardware was tested using a signal generator and an oscilloscope. A 10 MHz sine wave was input as the carrier, and a 10.1 MHz square wave was fed to the D-type flip-flop.

The D-type dual flip-flop is connected as a divide-by-four Johnson counter to provide the two-phase clock to the de-multiplexer; the square-wave input should therefore produce two square waves with a 90° phase difference going into the de-multiplexer chip. The same results were obtained in testing, as shown in the figure below:

Fig. 13 two-phase clock output from the D-type flip-flop

3. Quadrature Signal

Before we go further into the software part of the radio, we discuss a fundamental piece of theory that plays a big part in the demodulation process.


Fig. 14 Direct down conversion mixer

Looking at Figure 14 above, the direct down-conversion mixer downconverts the RF signal to baseband. The output is a signal varying in amplitude along a single axis, as illustrated below:


Fig. 15 Output signal plotted on a single axis

The output is a representation of the magnitude of the signal at a point in time t, but it provides no information regarding the phase of the signal. We can call this the in-phase signal (I). To effectively demodulate an amplitude modulated signal we only need the amplitude information, but for other modulation schemes, such as phase modulation, it is important to have the phase information in order to demodulate the signal.

Now consider having two identical receiver chains as shown in the figure below:


Fig. 16 Direct down conversion with 2 mixers

Looking at Figure 16 above, the circuit has a second mixer that takes the same filtered antenna signal as the first, with an identical local oscillator frequency feeding into it and an identical filter on its output. The output from the second receiver chain (we call this the quadrature, or out-of-phase, signal Q) might appear to be the same as the first, but it is not: looking closely at the figure, the second mixer has a 90° phase shift on its local oscillator input. Therefore the audio signals produced by the two mixers are equal in amplitude but 90° apart in phase. Comparing the outputs of the two mixers on an oscilloscope would produce something like this:

Fig 17. Comparing the two outputs on an oscilloscope
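Mathematically (a standard identity stated here for completeness, not from the original text): if the antenna signal is s(t) = A·cos(2π·f_c·t + φ) and the local oscillator runs at f_LO, then after low-pass filtering the two mixer outputs are

I(t) = (A/2)·cos(2π(f_c − f_LO)·t + φ) and Q(t) = (A/2)·sin(2π(f_c − f_LO)·t + φ)

(up to a sign set by the direction of the 90° shift), which are equal in amplitude and exactly 90° apart, as the oscilloscope comparison shows.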

The two signals can be plotted on a pair of axes, with the (I) signal on the x-axis and the (Q) signal on the y-axis, as shown below:

Fig. 18 signal representation on X-Y axis

To calculate the magnitude m(t) of the signal we use simple geometry of a right-angled triangle: by the Pythagorean theorem, the hypotenuse is the square root of the sum of the squares of the other two sides. This gives us the equation:

m(t) = √(I(t)² + Q(t)²) (eqn 2)

The phase of the signal can then be calculated with the inverse tangent, giving the equation:

φ(t) = tan⁻¹(Q(t)/I(t)) (eqn 3)

Therefore, having obtained the (I) and (Q) signal information, we can demodulate any type of signal: with equation 2 we can directly demodulate AM, and with equation 3 we can demodulate phase modulated signals.
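A minimal sketch of how these two equations translate into code is given below (our own illustration; the function names are hypothetical, not from the project source):

// Demodulation from I/Q sample blocks using eqn 2 and eqn 3.
#include <cmath>
#include <vector>

// Magnitude m(t) = sqrt(I^2 + Q^2) for each sample pair (eqn 2): AM envelope.
std::vector<double> demodulateAM(const std::vector<double>& I,
                                 const std::vector<double>& Q) {
    std::vector<double> m(I.size());
    for (std::size_t n = 0; n < I.size(); ++n)
        m[n] = std::sqrt(I[n] * I[n] + Q[n] * Q[n]);
    return m;
}

// Phase phi(t) = atan2(Q, I) (eqn 3); atan2 resolves all four quadrants,
// which a plain inverse tangent of Q/I cannot.
std::vector<double> instantaneousPhase(const std::vector<double>& I,
                                       const std::vector<double>& Q) {
    std::vector<double> phi(I.size());
    for (std::size_t n = 0; n < I.size(); ++n)
        phi[n] = std::atan2(Q[n], I[n]);
    return phi;
}

For FM, the message can then be recovered from the sample-to-sample change of this phase, since frequency is the rate of change of phase.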

With this we can now clearly understand why we have two op-amps in the RF circuit giving us the (I) and (Q) outputs.

The PC sound card used in this project is 16-bit stereo; the line-in channel has two inputs, left and right, to which I and Q are connected respectively. The hardware output was wired to a stereo jack plug to enable easy plug-in of the RF hardware to any PC sound card.

Software

In a software defined radio most of the radio's functionality is defined in software; the hardware is only used to downconvert the signal to baseband. The downconverted signal is fed to the line input and sampled by the sound card. To carry out digital signal processing on the PC, we sample the baseband signal arriving on the stereo inputs of the sound card, capture it as blocks of digitized (I) and (Q) data, process the blocks, and return them to the sound card output in pseudo real-time. This is known as full duplex operation. In this project the software is mostly written in C++; to enable direct access to the sound card we use DirectX.

DirectX

Microsoft DirectX is a collection of application programming interfaces consisting of various libraries that handle multimedia-related tasks. In this project we only use DirectSound, the part of the DirectX library that handles all tasks related to the sound card, such as passing audio to the sound card, recording, mixing, and various other effects. DirectSound acts as an interface between the program and the sound card. All sounds in DirectSound are contained in buffers. These buffers can live in the main memory of the computer or in the memory on the sound card, they can hold data of various formats, and they can play the data they hold back through the sound hardware.

DirectSound uses two kinds of buffer to capture and play back data: the DirectSoundCaptureBuffer and the DirectSoundBuffer.

A capture buffer reads audio from a capture object, which represents the input device, into a one-dimensional array with a specified wave format. It can be used as a streaming buffer, which when filled up returns to its starting point and starts overwriting samples, or as a static buffer, which stops when the buffer is full.

A DirectSound buffer is a one-dimensional array that represents sound waves in digital form. Data is laid out in a specified Microsoft.DirectX.DirectSound.WaveFormat, which specifies the format type, number of channels, number of samples per second, and so on. Each byte is part of a sample, and a sample represents the average amplitude of the sound wave's position at that point in time. There are two types of buffer used for output, primary and secondary; both send data to an output device, represented by a Microsoft.DirectX.DirectSound.Device object.

Primary Buffer: There is only one primary buffer per device, and the data it contains is exactly what will come out of that device. Accessing and changing the primary buffer is possible but usually unnecessary; it is better to let DirectX handle the mixing, or you must take responsibility for outputting the buffer in real time. You also lose the ability to use secondary buffers, and it prevents other applications from using buffers. To access the primary buffer, create a Microsoft.DirectX.DirectSound.Buffer object using a DirectSound Device with its cooperative level set to CooperativeLevel.WritePrimary.

Secondary Buffer: A secondary buffer is a buffer of a specified size containing sound data ready to be output. An application can have as many secondary buffers as it wants, within memory limits. When you call a buffer's Play function the content is mixed into the primary buffer and output by DirectX. If called with the BufferPlayFlags.Looping flag set, it will play the buffer from start to finish over and over again until stopped. By continually writing new data into the buffer just ahead of the current play position it is possible to output audio that would not have fitted into memory, or that did not even exist when the buffer was started. This is known as streaming. You can use either a Microsoft.DirectX.DirectSound.Buffer, or a Microsoft.DirectX.DirectSound.SecondaryBuffer, which inherits from the former and adds support for effects such as distortion, compression, reverb, and more.

The basic code for using DirectX and capturing input data is explained below.

Defining buffers and DirectX objects

In the general section we declare all the variables needed to capture the inputs on the sound card using DirectSoundCapture.

// DirectX variables
LPDIRECTSOUNDCAPTURE8 m_dsCap = NULL;
DSCCAPS m_dsCapCaps;
DSCBUFFERDESC m_dsCapBufDesc;
LPDIRECTSOUNDCAPTUREBUFFER m_dsCapBuf = NULL;
//WAVEFORMATEX m_dsWaveFormat = { WAVE_FORMAT_PCM, 1, 8000, 8000, 1, 8, 0 };
WAVEFORMATEX m_dsWaveFormat = { WAVE_FORMAT_PCM, 2, 44100, 176400, 4, 16, 0 };
//WAVEFORMATEX m_dsWaveFormat = { WAVE_FORMAT_PCM, 2, 22050, 88200, 4, 16, 0 };
// wFormatTag, nChannels, nSamplesPerSec, nAvgBytesPerSec, nBlockAlign, wBitsPerSample, cbSize
DSCBCAPS m_dsCapBufCaps;

Here we define the parameters of the input waveform: the format tag, the number of channels (since we have I and Q inputs on a stereo input, we select two channels), and the number of samples per second. The remaining fields follow from these: at 44,100 samples per second, 2 channels, and 16 bits per sample, the block alignment is 2 × 16/8 = 4 bytes and the average data rate is 44,100 × 4 = 176,400 bytes per second.

Testing the compatibility of the soundcard

if (!init_directsound_capture())
    DialogBox(ghInstance, MAKEINTRESOURCE(IDD_MAIN), NULL, (DLGPROC) MainDlg);
else
    MessageBox(NULL, "Initialization failed. This program requires DirectX and a sound card capable of capturing audio at 44KHz 16-bit stereo using DirectSoundCapture.", "Software Radio cannot be run!", MB_ICONERROR);

shutdown_directsound_capture();
return 0;

This tests the sound card hardware for compatibility; if the test fails, the software does not run.

Creating DirectX events and setting up the buffer

// Initialize DirectSoundCapture and all associated interfaces
int init_directsound_capture(void)
{
    // Create events
    quitReqEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    quitAckEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Initialize COM
    if (FAILED(CoInitialize(NULL)))
        return 1;

    // Create the DirectSoundCapture object
    if (FAILED(DirectSoundCaptureCreate(NULL, &m_dsCap, NULL)))
        return 1;

    // Get device caps
    m_dsCapCaps.dwSize = sizeof(m_dsCapCaps);
    if (FAILED(m_dsCap->GetCaps(&m_dsCapCaps)))
        return 1;

Creating the capture buffer and starting capture

    // Create capture buffer
    m_dsCapBufDesc.dwSize = sizeof(DSCBUFFERDESC);
    m_dsCapBufDesc.dwFlags = 0;
    m_dsCapBufDesc.dwBufferBytes = m_dsWaveFormat.nAvgBytesPerSec * 4;
    m_dsCapBufDesc.dwReserved = 0;
    m_dsCapBufDesc.lpwfxFormat = &m_dsWaveFormat;
    m_dsCapBufDesc.dwFXCount = 0;
    m_dsCapBufDesc.lpDSCFXDesc = NULL;
    if (FAILED(m_dsCap->CreateCaptureBuffer(&m_dsCapBufDesc, &m_dsCapBuf, NULL)))
        return 1;

    // Get capture buffer caps
    m_dsCapBufCaps.dwSize = sizeof(DSCBCAPS);
    if (FAILED(m_dsCapBuf->GetCaps(&m_dsCapBufCaps)))
        return 1;

    // Save the capture buffer length
    capBufLen = m_dsCapBufCaps.dwBufferBytes;

    // Start capturing!
    m_dsCapBuf->Start(DSCBSTART_LOOPING);
    return 0;
}

The code above shows how the capture buffer is started. The DSCBSTART_LOOPING flag starts the capture buffer in a continuous circular loop: each time a block of the buffer fills, a DirectX event is triggered, the captured data is read and passed to the DSP code through inbuffer(), and the processed result is placed in outbuffer() ready to be written to the DirectSoundBuffer for playback.
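A minimal sketch of that read loop is shown below. This is our own illustration, not the project's actual code: it assumes a notification event has been attached to the capture buffer via IDirectSoundNotify (not shown in the excerpt above), and inbuffer()/outbuffer() stand in for the project's DSP entry points. The globals m_dsCapBuf, capBufLen, quitReqEvent, and quitAckEvent are those declared in the earlier listings.

// Hypothetical capture-thread loop: wait for a block notification, lock the
// freshly filled region of the circular capture buffer, copy it out for DSP,
// then unlock and advance the read offset.
#include <windows.h>
#include <dsound.h>
#include <cstring>

extern void inbuffer(short*);    // hypothetical DSP input hook
extern void outbuffer(short*);   // hypothetical DSP output hook

void capture_loop(HANDLE blockReadyEvent, DWORD blockBytes)
{
    DWORD readOffset = 0;
    static short samples[65536];                 // working block, size illustrative

    while (WaitForSingleObject(quitReqEvent, 0) != WAIT_OBJECT_0)
    {
        WaitForSingleObject(blockReadyEvent, INFINITE);  // a block is ready

        LPVOID p1 = NULL, p2 = NULL;
        DWORD  b1 = 0,    b2 = 0;
        if (SUCCEEDED(m_dsCapBuf->Lock(readOffset, blockBytes,
                                       &p1, &b1, &p2, &b2, 0)))
        {
            memcpy(samples, p1, b1);                        // first region
            if (p2) memcpy((BYTE *)samples + b1, p2, b2);   // wrap-around region
            m_dsCapBuf->Unlock(p1, b1, p2, b2);

            inbuffer(samples);    // hand the captured I/Q block to the DSP code
            outbuffer(samples);   // collect processed audio for playback
            readOffset = (readOffset + blockBytes) % capBufLen;
        }
    }
    SetEvent(quitAckEvent);       // acknowledge the shutdown request
}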

Parsing the stereo buffer into I and Q signals

One imperative condition for processing the captured signal digitally is that the data be available as I and Q components (as explained in the previous chapter). Since the sound card input is stereo, the left and right channels are separated out as I and Q respectively, as illustrated by the code below:

// Extract the left channel from the mixed buffer, finding a peak value
j = 0;
maxamp = mixedBuffer[0];
for (i = 0; i < lockSamples; i += 2)
{
    leftChannel[j] = mixedBuffer[i];
    if (abs(leftChannel[j]) > maxamp)
        maxamp = abs(leftChannel[j]);
    j++;
}

// Extract the right channel from the mixed buffer, finding a peak value
j = 0;
for (i = 1; i < lockSamples; i += 2)
{
    rightChannel[j] = mixedBuffer[i];
    if (abs(rightChannel[j]) > maxamp)
        maxamp = abs(rightChannel[j]);
    j++;
}

Now that we have our signal split into (I) and (Q), we can perform digital signal processing on it.