The Adaptive Noise Cancellation Computer Science Essay

Published: November 9, 2015 Words: 5781

In this section, noises associated with the respiratory signals are taken into account. Noises such as motion artifact due to instruments, muscle contraction, electrode contact noise, and 50 Hz power line interference present in the respiratory signal are first filtered out using adaptive filters. Least Mean Square (LMS) and Recursive Least Square (RLS) are the two main adaptive filter algorithms used for analysis. The performance of the adaptive filters is then compared with an Adaptive Neuro-Fuzzy Inference System (ANFIS). The results show that the normalized LMS performs better than the other LMS algorithms, with an SNR improvement of 4.17 dB and an MSE of 0.0589. RLS provides the lowest MSE of 0.0105, but only at the highest filter order. Quantitative analysis reveals that ANFIS outperforms the normalized LMS and RLS algorithms. The results indicate that ANFIS is a useful artificial intelligence (AI) technique for cancelling nonlinear interference from the respiratory signal, with a very low mean squared error of 0.0122.

1. INTRODUCTION

The monitoring of the respiratory signal is essential since various sleep-related disorders such as sleep apnea (breathing is interrupted during sleep), insomnia (inability to fall asleep) and narcolepsy can be detected earlier and treated. Breathing disorders such as snoring, hypoxia (shortage of O2), hypercapnia (excess amount of CO2) and hyperventilation (over-breathing) can also be treated. The respiratory rate for newborns is 44 breaths/min; for adults it is 10-20 breaths/min. Various noises affecting the respiratory signal are motion artifact due to instruments, muscle contraction, electrode contact noise, 50 Hz power line interference, noise generated by electronic devices, baseline wander and electrosurgical noise. One way to remove the noise is to filter the signal with a notch filter at 50 Hz. However, due to slight variations in the power supply to the hospital, the exact frequency of the power supply might (hypothetically) wander between 47 Hz and 53 Hz. A static filter would need to remove all the frequencies between 47 and 53 Hz, which could excessively degrade the quality of the ECG since the heart beat would also likely have frequency components in the rejected range.

To circumvent this potential loss of information, an adaptive filter is used. The adaptive filter takes input both from the patient and from the power supply directly and is thus able to track the actual frequency of the noise as it fluctuates. Several papers have been presented in the area of biomedical signal processing where an adaptive solution based on various algorithms is suggested: performance studies and comparisons of the LMS and RLS algorithms for noise cancellation in the ECG signal; applications of LMS and its member algorithms to remove various artifacts from the ECG signal; and analyses of the mean square error behavior, convergence and steady-state performance of different adaptive algorithms.

This work describes the concept of adaptive noise cancelling, a method of estimating signals corrupted by additive noise. An adaptive filtering method is proposed to remove artifact signals from respiratory signals. The real-time artifact removal is implemented by a multi-channel Least Mean Square algorithm. The results show that the performance of the normalised LMS algorithm is superior to that of the conventional LMS algorithm, and that the performance of signed LMS and sign-sign LMS based realizations is comparable to that of LMS based filtering techniques in terms of signal-to-noise ratio and computational complexity. The NLMS algorithm extends the gradient-adaptive learning rate approach to the case where the signals are nonstationary. It is shown that the NLMS algorithm can work even for highly nonstationary interference signals, where previous gradient-adaptive learning rate algorithms fail. The use of two simple and robust variable step-size approaches in the adaptation process of the normalized least mean square algorithm for adaptive channel equalization is also investigated.

3. NOISES IN RESPIRATORY SIGNALS

Methods of respiration monitoring fall into two categories. Devices such as spirometers and nasal thermocouples measure air flow into and out of the lungs directly. Respiration can also be monitored indirectly, by measuring body volume changes; transthoracic inductance and impedance plethysmographs, strain gauge measurement of thoracic circumference, pneumatic respiration transducers, and whole-body plethysmographs are examples of indirect techniques. When doctors are examining the patient on-line and want to review the respiratory signal waveform in real time, there is a good chance that the signal has been contaminated by baseline wander (BW), power line interference (PLI), muscle artifacts (MA) and electrode motion artifacts (EM), mainly caused by patient breathing, movement, power line noise, bad electrodes and improper electrode site preparation. All these noises mask the tiny features of the signal and lead to false diagnosis. To allow doctors to view the best signal that can be obtained, we need to develop an adaptive filter to remove the artifacts in order to better obtain and interpret the respiratory signal data.

3.1 Motion Artifact

Motion artifacts cause false alarms during patient monitoring, which can reduce clinician confidence in monitoring equipment alarms and, consequently, slow response time. When motion artifact is introduced to the system, the information is skewed: motion artifact causes irregularities in the data. Motion artifact can be reduced by proper design of the electronic circuitry and set-up. The shape of the baseline disturbance caused by motion artifacts can be assumed to be a biphasic signal resembling one cycle of a sine wave. The peak amplitude and duration of the artifact are variables. Since the respiratory unit is a sensitive device, it can pick up unwanted electrical signals which may modify the actual respiratory signal.

3.2 Power line interference

Power line interference consists of 50 Hz pickup and harmonics, which can be modelled as sinusoids and combinations of sinusoids. Characteristics which might need to be varied in a model of power line noise include the amplitude and frequency content of the signal. These characteristics are generally consistent for a given measurement situation and, once set, will not change during a detector evaluation. Power line interference is often a nuisance in biopotential measurements, mostly because of the long wires between the subject and the amplifier, the separation between the measurement points (electrodes), capacitive coupling between the subject (a volume conductor) and power lines, and the low amplitude of the desired signals. High-resolution measurements searching for potentials as small as 1 µV further exacerbate the problem. It is a common interference source, with low frequency and weak amplitude, in signal detection and transmission.
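The sinusoid-plus-harmonics model above can be sketched as follows; the sampling rate, number of harmonics and amplitudes are illustrative assumptions, not values from this study.

```python
import numpy as np

def powerline_noise(n_samples, fs=500.0, f0=50.0, amps=(0.5, 0.1, 0.05)):
    """Power line interference as a 50 Hz fundamental plus harmonics.

    amps[k-1] is the amplitude of the k-th harmonic (illustrative values);
    per the text, these stay fixed for a given measurement situation.
    """
    t = np.arange(n_samples) / fs
    noise = np.zeros(n_samples)
    for k, a in enumerate(amps, start=1):
        noise += a * np.sin(2 * np.pi * k * f0 * t)
    return noise
```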

3.3 Electrode Contact Noise

Electrode contact noise occurs due to the loss of contact between electrode and skin. The measurement of bioelectric events is exposed to various sources of noise. The reactions that take place at the electrode make the electrode itself a source of noise. Electrode contact noise can be modelled as a randomly occurring rapid baseline transition (step) which decays exponentially to the baseline value and has a superimposed 50 Hz component. This transition may occur only once or may rapidly occur several times in succession. Characteristics of this noise signal include the amplitude of the initial transition, the amplitude of the 50 Hz component and the time constant of the decay.
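This noise model can be sketched as follows; the sampling rate, step amplitude, decay time constant and onset index are illustrative assumptions (in practice the onset occurs at random times and transitions may repeat in succession).

```python
import numpy as np

def electrode_contact_noise(n_samples, fs=500.0, step_amp=1.0, tau=0.04,
                            f_line=50.0, line_amp=0.1, onset=100):
    """Electrode contact noise modeled as a rapid baseline step at sample
    `onset` that decays exponentially (time constant tau, in seconds) back
    to the baseline value, with a superimposed 50 Hz component.
    """
    t = np.arange(n_samples) / fs
    noise = line_amp * np.sin(2 * np.pi * f_line * t)   # 50 Hz component
    noise[onset:] += step_amp * np.exp(-(t[onset:] - t[onset]) / tau)
    return noise
```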

3.4 Baseline Drift

The wandering of the baseline results from gross movements of the patient or from mechanical strain on the electrode wires. Baseline wander also occurs when jelly is not properly applied between the electrode and the skin. Respiration, muscle contraction, and electrode impedance changes due to perspiration or movement of the body are the important sources of baseline drift. The drift of the baseline with respiration can be represented as a sinusoidal component at the frequency of respiration. The amplitude and frequency of the sinusoidal component should be variables. The amplitude of the respiratory signal also varies by about 15 percent. This variation can be reproduced by amplitude modulation of the respiratory signal by the sinusoidal component which is added to the baseline.
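A minimal sketch of this baseline wander model, assuming an illustrative sampling rate, respiration frequency and drift amplitude; the 15 percent modulation depth follows the description above, while the other values are placeholders.

```python
import numpy as np

def baseline_drift(n_samples, fs=500.0, f_resp=0.3, drift_amp=0.2):
    """Baseline wander as a sinusoid at the respiration frequency."""
    t = np.arange(n_samples) / fs
    return drift_amp * np.sin(2 * np.pi * f_resp * t)

def apply_baseline_wander(signal, fs=500.0, f_resp=0.3,
                          drift_amp=0.2, mod_depth=0.15):
    """Amplitude-modulate the signal by +/- mod_depth (about 15 percent)
    at the respiration frequency and add the sinusoidal baseline drift."""
    t = np.arange(len(signal)) / fs
    am = 1.0 + mod_depth * np.sin(2 * np.pi * f_resp * t)
    return am * signal + baseline_drift(len(signal), fs, f_resp, drift_amp)
```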

4. ADAPTIVE FILTERS

Consider the Wiener filtering problem within the context of nonstationary processes. Specifically, let wn(k) denote the unit sample response of the FIR Wiener filter that produces the minimum mean-square estimate of a desired process d(n),

d^(n) = Σ_{k=0}^{p} wn(k) x(n-k) (2.1)

In many respects, the design of a shift-varying (adaptive) filter is much more difficult than the design of a shift-invariant Wiener filter since, for each value of n, it is necessary to find the set of optimum filter coefficients wn(k), for k=0,1,…,p. However, the problem may be simplified considerably if we relax the requirement that wn(k) minimize the mean-square error at each time n and consider, instead, a coefficient update equation of the form

wn+1(k) = wn(k) + Δwn(k) (2.7)

where Δwn(k) is a correction that is applied to the filter coefficient wn(k) at time n to form a new set of coefficients, wn+1(k), at time n+1. This update equation is the heart of adaptive filtering.

Adaptive filtering can be considered as a process in which the parameters used for the processing of signals change according to some criterion, usually the estimated mean squared error or the correlation. The adaptive filters are time-varying since their parameters are continually changing in order to meet a performance requirement. In this sense, an adaptive filter can be interpreted as a filter that performs the approximation step on-line. Usually the definition of the performance criterion requires the existence of a reference signal, which is usually hidden in the approximation step of fixed-filter design. The general set-up of the adaptive filtering environment is shown in Fig. 1, where n is the iteration number, x(n) denotes the input signal, y(n) is the adaptive filter output, and d(n) defines the desired signal. The error signal e(n) is calculated as d(n)-y(n). The error is then used to form a performance function or objective function that is required by the adaptation algorithm in order to determine the appropriate updating of the filter coefficients. The minimization of the objective function implies that the adaptive filter output signal is matching the desired signal in some sense. Two types of adaptive algorithms are mostly used in communication systems. They are:

LMS-Least Mean Square.

RLS-Recursive Least Square.

4.1 LEAST MEAN SQUARE ALGORITHM

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is adapted based only on the error at the current time. The starting point is the steepest descent adaptive filter, which has a weight-vector update equation given by

wn+1 = wn + µ E{e(n) x*(n)} (3.1)

A practical limitation of this algorithm is that the expectation E{e(n)x*(n)} is generally unknown. Therefore, it must be replaced with an estimate such as the sample mean

Ê{e(n)x*(n)} = (1/L) Σ_{l=0}^{L-1} e(n-l) x*(n-l) (3.2)

Incorporating this estimate into the steepest descent algorithm, the update for wn becomes

wn+1 = wn + (µ/L) Σ_{l=0}^{L-1} e(n-l) x*(n-l) (3.3)

A special case of the above equation occurs if we use a one-point sample mean (L=1),

Ê{e(n)x*(n)} = e(n) x*(n) (3.4)

In this case, the weight-vector update equation assumes a particularly simple form,

wn+1 = wn + µ e(n) x*(n) (3.5)

and is known as the LMS algorithm. The simplicity of the algorithm comes from the fact that the update for the kth coefficient,

wn+1(k) = wn(k) + µ e(n) x*(n-k) (3.6)

requires only one multiplication and one addition (the value for µe(n) need only be computed once and may be used for all of the coefficients). Therefore, an LMS adaptive filter having p+1 coefficients requires p+1 multiplications and p+1 additions to update the filter coefficients. In addition, one addition is necessary to compute the error e(n) = d(n) - y(n) and one multiplication is needed to form the product µe(n). Finally, p+1 multiplications and p additions are necessary to calculate the output y(n) of the adaptive filter. Thus, a total of 2p+3 multiplications and 2p+2 additions per output point are required.
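The update (3.5)-(3.6) can be sketched as a direct implementation; the default filter order and step size mirror the values used in the simulations later (order 16, µ = 0.06), while the test signals are left to the caller.

```python
import numpy as np

def lms_filter(x, d, p=16, mu=0.06):
    """LMS adaptive filter, eq. (3.5): w_{n+1} = w_n + mu * e(n) * x(n),
    where x(n) = [x(n), x(n-1), ..., x(n-p)] is the tap-input vector.

    x: reference input, d: desired signal.
    Returns the filter output y, the error e = d - y, and the final weights.
    """
    w = np.zeros(p + 1)
    xbuf = np.zeros(p + 1)          # holds x(n), x(n-1), ..., x(n-p)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        y[n] = w @ xbuf             # p+1 multiplications, p additions
        e[n] = d[n] - y[n]          # one addition
        w = w + mu * e[n] * xbuf    # p+1 multiplications and additions
    return y, e, w
```

In the noise-cancellation configuration of Fig. 1, d is the corrupted signal and x the reference noise, so the error e approximates the cleaned signal.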

The different variations of the LMS algorithm include:

Normalized LMS

Leaky LMS

Sign Error LMS

Sign Data LMS

Sign Sign LMS

In the normalized LMS (NLMS) algorithm, the fixed step size µ is replaced by a step size normalized by the energy of the data vector, β/||x(n)||², which causes difficulty when ||x(n)||² becomes small. An alternative, therefore, is to use the following modification to the NLMS algorithm:

wn+1 = wn + (β/(ε + ||x(n)||²)) e(n) x*(n) (3.27)

where ε is some small positive number.

Compared with the LMS algorithm, the normalized LMS algorithm requires additional computation to evaluate the normalization term ||x(n)||². However, if this term is evaluated recursively as follows,

||x(n)||² = ||x(n-1)||² + |x(n)|² - |x(n-p-1)|² (3.28)

then the extra computation involves only two operations: one addition and one subtraction.
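A sketch of the NLMS update (3.27) with the recursive norm evaluation (3.28); the defaults for β and ε are illustrative.

```python
import numpy as np

def nlms_filter(x, d, p=16, beta=1.0, eps=1e-4):
    """Normalized LMS, eq. (3.27), using the recursive norm update (3.28):
    ||x(n)||^2 = ||x(n-1)||^2 + |x(n)|^2 - |x(n-p-1)|^2.
    """
    w = np.zeros(p + 1)
    xbuf = np.zeros(p + 1)
    norm_sq = 0.0
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        # eq. (3.28): before shifting, xbuf[-1] holds x(n-p-1)
        norm_sq += x[n] ** 2 - xbuf[-1] ** 2
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        y[n] = w @ xbuf
        e[n] = d[n] - y[n]
        w = w + (beta / (eps + norm_sq)) * e[n] * xbuf
    return y, e, w
```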

When the input process to an adaptive filter has an autocorrelation matrix with zero eigenvalues, the LMS adaptive filter has one or more modes that are undriven and undamped. Then,

E{un(k)}= u0(k) (3.29)

which does not decay to zero with n. Since it is possible for these undamped modes to become unstable, it is important to stabilize the LMS adaptive filter by forcing these modes to zero. One way to accomplish this is to introduce a leakage coefficient γ into the LMS algorithm as follows,

wn+1= (1-µγ)wn+µ e(n) x*(n) (3.30)

where 0 < γ << 1. The effect of this leakage coefficient is to force the filter coefficients to zero if either the error e(n) or the input x(n) becomes zero, and to drive any undamped modes of the system to zero.
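The leaky update (3.30) differs from plain LMS only in the (1 - µγ) factor; a minimal sketch, with illustrative defaults:

```python
import numpy as np

def leaky_lms_filter(x, d, p=16, mu=0.06, gamma=0.01):
    """Leaky LMS, eq. (3.30): w_{n+1} = (1 - mu*gamma) w_n + mu e(n) x(n).
    The leakage (0 < gamma << 1) drives undriven, undamped modes to zero."""
    w = np.zeros(p + 1)
    xbuf = np.zeros(p + 1)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        e[n] = d[n] - w @ xbuf
        w = (1 - mu * gamma) * w + mu * e[n] * xbuf
    return e, w
```

Note that the leakage slightly biases the steady-state solution toward zero, the price paid for stabilizing the undriven modes.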

Another set of simplifications to the LMS algorithm is found in the sign algorithms. In these algorithms, the LMS coefficient update equation is modified by applying the sign operator to the error e(n), the data x(n), or both the error and the data. For example, assuming that x(n) and d(n) are real-valued processes, the sign-error algorithm is

wn+1 = wn+ µ sgn{e(n)}x(n) (3.35)

where

sgn{e(n)} =  1,  e(n) > 0
             0,  e(n) = 0
            -1,  e(n) < 0

Note that the sign error algorithm may be viewed as the result of applying a two-level quantizer to the error. The computational requirements of the LMS algorithm may be simplified by using the sign of the data as follows,

wn+1=wn+µ e(n) sgn{x(n)} (3.36)

which is the sign data algorithm. Note that, unlike the sign error algorithm, the sign-data algorithm alters the direction of the update vector. As a result, the sign data algorithm is generally less robust than the sign-error algorithm. Quantizing both the error and the data leads to the sign-sign algorithm, which has a coefficient update equation given by

wn+1 = wn + µ sgn{e(n)} sgn{x(n)} (3.39)

In this algorithm, the coefficients wn(k) are updated by either adding or subtracting a constant µ. For stability, a leakage term is often introduced into the sign-sign algorithm giving an update equation of the form,

wn+1 = (1-µγ)wn + µ sgn{e(n)} sgn{x(n)} (3.40)

Generally, the sign-sign algorithm is slower than the LMS adaptive filter and has a larger excess mean square error.
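The three sign variants (3.35), (3.36) and (3.39) can be collected in one sketch; the step size here is an illustrative default.

```python
import numpy as np

def sign_lms_filter(x, d, p=16, mu=0.01, variant="sign_error"):
    """Sign variants of LMS: sign-error (3.35), sign-data (3.36),
    and sign-sign (3.39). np.sign matches the three-valued sgn above."""
    w = np.zeros(p + 1)
    xbuf = np.zeros(p + 1)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        e[n] = d[n] - w @ xbuf
        if variant == "sign_error":
            w = w + mu * np.sign(e[n]) * xbuf
        elif variant == "sign_data":
            w = w + mu * e[n] * np.sign(xbuf)
        else:  # "sign_sign"
            w = w + mu * np.sign(e[n]) * np.sign(xbuf)
    return e, w
```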

4.2 RECURSIVE LEAST SQUARES ALGORITHM

The recursive least squares (RLS) adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. This is in contrast to other algorithms, such as least mean squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence. Let us reconsider the design of an FIR adaptive Wiener filter and find the filter coefficients,

wn = [wn(0), wn(1), …, wn(p)]^T (4.4)

that minimize, at time n, the weighted least squares error

ξ(n) = Σ_{i=0}^{n} λ^(n-i) |e(i)|² (4.5)

where 0 < λ ≤ 1 is an exponential weighting factor and

e(i) = d(i) - y(i) = d(i) - wn^T x(i) (4.6)

Note that e(i) is the difference between the desired signal d(i) and the filtered output at time i, using the latest set of filter coefficients wn(k). Thus, in minimizing ξ(n) it is assumed that the weights wn are held constant over the entire observation interval [0, n]. To find the coefficients that minimize ξ(n), we proceed exactly as we have done many times before, by setting the derivative of ξ(n) with respect to wn*(k) equal to zero for k = 0, 1, …, p. Thus we have

∂ξ(n)/∂wn*(k) = Σ_{i=0}^{n} λ^(n-i) e(i) ∂e*(i)/∂wn*(k) = -Σ_{i=0}^{n} λ^(n-i) e(i) x*(i-k) = 0 (4.7)

for k = 0, 1, …, p. Substituting the expression for e(i) from (4.6) yields

Σ_{i=0}^{n} λ^(n-i) { d(i) - Σ_{l=0}^{p} wn(l) x(i-l) } x*(i-k) = 0 (4.8)

Interchanging the order of summation and rearranging terms, we have

Σ_{l=0}^{p} wn(l) [ Σ_{i=0}^{n} λ^(n-i) x(i-l) x*(i-k) ] = Σ_{i=0}^{n} λ^(n-i) d(i) x*(i-k) (4.9)

We may express these equations concisely in matrix form as follows:

Rx(n)wn=rdx(n) (4.10)

where Rx(n) is a (p+1)×(p+1) exponentially weighted deterministic autocorrelation matrix for x(n),

Rx(n) = Σ_{i=0}^{n} λ^(n-i) x*(i) x^T(i) (4.11)

with x(i) the data vector

x(i) = [x(i), x(i-1), …, x(i-p)]^T (4.12)

and where rdx(n) is the deterministic cross-correlation between d(n) and x(n),

rdx(n) = Σ_{i=0}^{n} λ^(n-i) d(i) x*(i) (4.13)

These equations are referred to as the deterministic normal equations. We may express the minimum error in vector form as follows,

ξmin(n) = ||d(n)||²_λ - rdx^H(n) wn (4.14)

where ||d(n)||²_λ is the weighted norm of the vector d(n) = [d(n), d(n-1), …, d(0)]^T.

Let P(n) denote the inverse of the autocorrelation matrix at time n,

P(n) = Rx^(-1)(n) (4.15)

and define what is referred to as the gain vector, g(n), as follows:

g(n) = λ^(-1) P(n-1) x*(n) / (1 + λ^(-1) x^T(n) P(n-1) x*(n)) (4.16)

Incorporating these definitions we have

P(n) = λ^(-1) [ P(n-1) - g(n) x^T(n) P(n-1) ] (4.17)

Note that the term multiplying x*(n) in (4.16) is P(n), and we have

g(n) = P(n) x*(n) (4.18)

Thus, the gain vector is the solution to the linear equations

Rx(n) g(n) = x*(n) (4.19)

Finally, consider the time-update equation for the coefficient vector wn. With

wn = P(n) rdx(n) (4.20)

the above equation can be written as

wn = wn-1 + α(n) g(n) (4.21)

where

α(n) = d(n) - wn-1^T x(n) (4.22)

is the difference between d(n) and the estimate of d(n) that is formed by applying the previous set of filter coefficients, wn-1, to the new data vector x(n). This sequence, called the a priori error, is the error that would occur if the filter coefficients were not updated. The a posteriori error, on the other hand, is the error that occurs after the weight vector is updated,

e(n) = d(n) - wn^T x(n) (4.23)

When α(n) is small, the current set of filter coefficients is close to its optimal values and only a small correction needs to be applied. On the other hand, when α(n) is large, the current set of filter coefficients is not performing well in estimating d(n) and a large correction must be applied to update the coefficients.

One final simplification may be realized if we note that, in the evaluation of both the gain vector g(n) and the inverse autocorrelation matrix P(n), it is necessary to compute the product

z(n) = P(n-1) x*(n) (4.24)

Therefore, we may explicitly evaluate this filtered information vector once and then use it in the calculation of both g(n) and P(n). The above equations form what is known as the exponentially weighted Recursive Least Squares (RLS) algorithm.
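The recursions (4.16)-(4.24) can be collected into the following sketch; the initialization P(0) = δ^(-1) I is a conventional choice (the δ = 0.001 used in the simulations later is taken as the default).

```python
import numpy as np

def rls_filter(x, d, p=16, lam=1.0, delta=0.001):
    """Exponentially weighted RLS for real-valued signals, eqs. (4.16)-(4.24)."""
    w = np.zeros(p + 1)
    P = np.eye(p + 1) / delta               # conventional initialization of (4.15)
    xbuf = np.zeros(p + 1)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        z = P @ xbuf                        # filtered information vector, (4.24)
        g = z / (lam + xbuf @ z)            # gain vector, (4.16)
        alpha = d[n] - w @ xbuf             # a priori error, (4.22)
        w = w + alpha * g                   # coefficient update, (4.21)
        P = (P - np.outer(g, z)) / lam      # inverse correlation update, (4.17)
        e[n] = d[n] - w @ xbuf              # a posteriori error, (4.23)
    return e, w
```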

5. ADAPTIVE NEURO-FUZZY INFERENCE SYSTEMS

The acronym ANFIS derives its name from Adaptive Neuro-Fuzzy Inference System. Using a given input/output data set, the toolbox function ANFIS constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either a back propagation algorithm alone or in combination with a least squares type of method. This adjustment allows your fuzzy systems to learn from the data they are modeling. Fuzzy inference systems incorporate human knowledge and perform inference and decision making. The basic idea of combining fuzzy systems and neural networks is to design an architecture that uses a fuzzy system to represent knowledge in an interpretable manner, in addition to possessing the learning ability of a neural network to optimize its parameters. ANFIS cancels out the interference and gives better performance even if the complexity of the signal is very high.

5.1 Adaptive Neural Fuzzy Inference System (ANFIS) Characteristics

Creates a fuzzy decision tree to classify the data into one of 2^n (or p^n) linear regression models to minimize the sum of squared errors (SSE):

SSE = Σ_j ej² (5.1)

where ej is the error between the desired and the actual output. Fuzzy logic has been widely used in the design and enhancement of a vast number of applications. It is conceptually simple and straightforward. However, its proper use is heavily dependent on expert knowledge, which may not always be available. The proper selection of the number, the type and the parameters of the fuzzy membership functions and rules is crucial for achieving the desired performance, yet in most situations it is difficult and has often been done through trial and error. This fact highlights the significance of tuning fuzzy systems. ANFIS are fuzzy Sugeno models put in the framework of adaptive systems to facilitate learning and adaptation. Such a framework makes fuzzy logic more systematic and less reliant on expert knowledge. There are many benefits to using ANFIS in pattern learning and detection as compared to linear systems and neural networks. These benefits are centered on the fact that ANFIS combines the capabilities of both neural networks and fuzzy systems in learning nonlinearities. Fuzzy techniques incorporate information sources into a fuzzy rule base that represents the knowledge of the network structure, so that structure learning techniques can easily be accomplished. Moreover, ANFIS architecture requirements and initializations are fewer and simpler compared to neural networks, which require extensive trial and error to optimize their architecture and initialization.

5.2 ANFIS architecture

Figure 5.1: ANFIS Architecture

To present the ANFIS architecture, let us consider two fuzzy rules based on a first-order Sugeno model:

Rule 1: if (x1 is A1) and (x2 is B1), then f1 = p1 x1 + q1 x2 + r1

Rule 2: if (x1 is A2) and (x2 is B2), then f2 = p2 x1 + q2 x2 + r2

One possible ANFIS architecture to implement these two rules is shown in Figure 5.1. Note that a circle indicates a fixed node whereas a square indicates an adaptive node (its parameters are changed during training).

Layer 1: Calculate Membership Value for Premise Parameter

All the nodes in this layer are adaptive nodes. The node output is the degree of membership of the input to the fuzzy membership function (MF) represented by the node:

O1,i = µAi(x1), for i = 1, 2

O1,i = µB(i-2)(x2), for i = 3, 4

Layer 2: Firing Strength of Rule

The nodes in this layer are fixed (not adaptive). These are labeled Π to indicate that they play the role of a simple multiplier. The outputs of these nodes are given by

O2,i = wi = µAi(x1) µBi(x2), for i = 1, 2

The output of each node in this layer represents the firing strength of the rule.

Layer 3: Normalize Firing Strength

Nodes in this layer are also fixed nodes. These are labeled N to indicate that they perform a normalization of the firing strengths from the previous layer. The output of each node in this layer is given by

O3,i = w̄i = wi / (w1 + w2), for i = 1, 2.

Layer 4: Consequent Parameters

All the nodes in this layer are adaptive nodes. The output of each node is simply the product of the normalized firing strength and a first-order polynomial:

O4,i = w̄i fi = w̄i (pi x1 + qi x2 + ri)

where pi, qi and ri are design parameters (consequent parameters, since they deal with the then-part of the fuzzy rule).

Layer 5: Overall Output

This layer has only one node, labeled Σ to indicate that it performs the function of a simple summer. The output of this single node is given by

O5 = Σ_i w̄i fi = (Σ_i wi fi) / (Σ_i wi)

The ANFIS architecture is not unique; some layers can be combined and still produce the same output. In this ANFIS architecture, there are two adaptive layers (1 and 4). Layer 1 has three modifiable parameters (ai, bi and ci) pertaining to the input MFs. These parameters are called premise parameters. Layer 4 also has three modifiable parameters (pi, qi and ri) pertaining to the first-order polynomial. These parameters are called consequent parameters.
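A minimal numeric forward pass through the five layers for the two-rule model above; the Gaussian membership functions and all parameter values are illustrative assumptions (the paper itself uses the MATLAB Fuzzy Logic Toolbox).

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x1, x2, premise, consequent):
    """Forward pass through the five ANFIS layers for two rules.

    premise: [(cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2)]
    consequent: [(p1, q1, r1), (p2, q2, r2)]
    """
    (cA1, sA1), (cA2, sA2), (cB1, sB1), (cB2, sB2) = premise
    # Layer 1: membership values
    muA = [gauss_mf(x1, cA1, sA1), gauss_mf(x1, cA2, sA2)]
    muB = [gauss_mf(x2, cB1, sB1), gauss_mf(x2, cB2, sB2)]
    # Layer 2: firing strengths w_i = muA_i * muB_i
    ws = [muA[0] * muB[0], muA[1] * muB[1]]
    # Layer 3: normalized firing strengths
    total = ws[0] + ws[1]
    wbar = [ws[0] / total, ws[1] / total]
    # Layer 4: weighted first-order consequents
    f = [p * x1 + q * x2 + r for (p, q, r) in consequent]
    # Layer 5: overall output
    return wbar[0] * f[0] + wbar[1] * f[1]
```

Training would then tune the premise parameters by backpropagation and the consequent parameters by least squares, as described in the next section.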

5.3 Computations in ANFIS

The basic steps used in the computation of ANFIS are given below.

Generate an initial Sugeno-type FIS using the MATLAB command genfis1. It will go over the data in a crude way and find a good starting system.

Give the parameters such as number of epochs, tolerance error, number of MFs, and type of MF for learning.

Start the learning process using the command anfis and stop when the goal is achieved or the epochs are completed. anfis applies the least squares method and backpropagation gradient descent to identify the linear and nonlinear parameters, respectively.

The evalfis command is used to determine the output of the FIS for a given input. In this paper, the reference signal is the respiratory signal. That signal acts as the training pair for ANFIS training.

Figure 5.2: 2 input ANFIS architecture

5.4 NOISE CANCELLATION

The method used in this paper is adaptive noise cancellation (ANC) based on a neuro-fuzzy technique. ANC is a process by which the interference signal can be filtered out by identifying a nonlinear model between a measurable noise source and the corresponding immeasurable interference. This is an extremely useful technique when a signal is submerged in a very noisy environment. Usually the noise is not steady; it changes from time to time, so noise cancellation must be an adaptive process: it should be able to work under changing conditions, and be able to adjust itself according to the changing environment. The basic idea of an adaptive noise cancellation algorithm is to pass the corrupted signal through a filter that tends to suppress the noise while leaving the signal unchanged. As mentioned above, this is an adaptive process, which means it does not require prior knowledge of the signal or noise characteristics. Figure 5.3 shows noise cancellation with ANFIS filtering.

Figure 5.3: ANC implementation

The principle used for the elimination of artifacts is ANC: the interference is filtered out by identifying a model between the measurable noise source (artifact) and the corresponding immeasurable interference.

In Figure 5.3, x(k) represents the respiratory signal which is to be extracted from the noisy signal, and n(k) is the noise source signal. The noise signal passes through unknown nonlinear dynamics (f) and generates a distorted noise d(k), which is then added to x(k) to form the measurable output signal y(k). The aim is to retrieve x(k) from the measured signal y(k), which consists of the required signal x(k) plus d(k), a distorted and delayed version of n(k), i.e. the interference signal. The function f(.) represents the passage dynamics that the noise signal n(k) goes through. If f(.) were known exactly, it would be easy to recover x(k) by subtracting d(k) from y(k) directly. However, f(.) is usually unknown in advance and could be time-varying due to changes in the environment. Moreover, the spectrum of d(k) may overlap substantially with that of x(k), invalidating the use of common frequency-domain filtering techniques. To estimate the interference signal d(k), we need to pick up a clean version of the noise signal n(k) that is independent of the required signal. However, we cannot access d(k) directly, since it is an additive component of the overall measurable signal y(k). ANFIS is used to estimate the unknown interference d^(k). When d^(k) and d(k) are close to each other, the two cancel, and we get the estimated output signal x^(k), which is close to the required signal. Thus, by this method, the noise is removed and the required signal is obtained.

6. SIMULATION RESULTS

This section presents the results of simulations using MATLAB to investigate the performance of various adaptive filter algorithms and ANFIS in a nonstationary environment, with a step size of 0.06 and a filter order of 16. The principal means of comparison is the error cancellation capability of the algorithms, which depends on parameters such as step size, filter length and number of iterations. A random noise is added to the respiratory signals. It is then removed using ANFIS and adaptive filter algorithms such as LMS, Sign LMS, Sign-Sign LMS, Signed Regressor, LLMS and NLMS. All simulations presented are averages over 1975 independent runs.
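The MSE and SNR-improvement figures reported in the tables below can be computed as follows; these are the standard definitions, since the paper does not spell out its exact formulas.

```python
import numpy as np

def mse(clean, estimate):
    """Mean squared error between the desired and recovered signals."""
    clean = np.asarray(clean)
    estimate = np.asarray(estimate)
    return float(np.mean((clean - estimate) ** 2))

def snr_db(clean, observed):
    """SNR in dB: signal power over the power of (observed - clean)."""
    clean = np.asarray(clean)
    observed = np.asarray(observed)
    return float(10 * np.log10(np.sum(clean ** 2) / np.sum((observed - clean) ** 2)))

def snr_improvement_db(clean, noisy_input, filtered_output):
    """SNR(o/p) - SNR(i/p), the improvement figure reported in the tables."""
    return snr_db(clean, filtered_output) - snr_db(clean, noisy_input)
```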

Figure 6.1: Desired signal

Figure 6.1 shows the respiratory signal, which is generated synthetically. This is the desired signal for the adaptive filter.

Figure 6.2: Input signal to the Adaptive filter

Figure 6.2 shows the synthetically generated respiratory signal with noise added. This is the input signal to the adaptive filter.

Figures 6.3 and 6.4 show the noise-cancelled output respiratory signal and the mean squared error using the LMS algorithm. A filter order of 16 and an adaptive step size parameter (µ) of 0.06 are used for LMS. The merits of the LMS algorithm are its low memory consumption and small amount of calculation.

Figure 6.3: LMS output signal

Figure 6.4: Convergence rate of LMS

Table 6.1 provides a comparison of the mean square error and the SNR (input and output) of the LMS algorithm. In this table the filter order is held constant, to find the step size at which the LMS algorithm gives the best result: the MSE should be minimum and the output SNR maximum. It is observed that for µ = 0.06 the LMS algorithm gives the best result, with an MSE of 0.0592 and an output SNR of 7.6220 dB. There is always a tradeoff between MSE and SNR, so the choice of algorithm depends on the parameter of greater concern for the system.

S. No | Order (M) | Step size (µ) | MSE | SNR i/p (dB) | SNR o/p (dB) | SNR improvement (dB)
1 | 16 | 0.004 | 0.1832 | 5.7406 | 5.0363 | 0.7043
2 | 16 | 0.006 | 0.0880 | 3.7884 | 4.9252 | 1.1368
3 | 16 | 0.008 | 0.0711 | 4.1372 | 6.1041 | 1.9669
4 | 16 | 0.02 | 0.0663 | 4.1115 | 6.7688 | 2.6579
5 | 16 | 0.04 | 0.0606 | 4.0886 | 7.3591 | 3.2705
6 | 16 | 0.06 | 0.0592 | 4.0348 | 7.6220 | 3.5872
7 | 16 | 0.08 | 0.0647 | 3.8506 | 7.3956 | 3.5450
8 | 16 | 0.1 | 0.1184 | 3.9477 | 5.6278 | 1.6801

Table 6.1: Determination of best step size value

Table 6.2 provides a comparison of the mean square error for different filter orders. The best step size value µ = 0.06 from Table 6.1 is kept constant and the filter order is varied. From this table it is inferred that a filter order of 16 gives the best result.

Table 6.2: Calculation of best filter order

S. No | Order (M) | Step size (µ) | MSE | SNR i/p (dB) | SNR o/p (dB) | SNR improvement (dB)
1 | 2 | 0.06 | 0.0954 | 4.0034 | 4.6999 | 0.6965
2 | 4 | 0.06 | 0.0762 | 3.8527 | 6.0385 | 2.1858
3 | 8 | 0.06 | 0.0760 | 3.6628 | 6.2085 | 2.5457
4 | 16 | 0.06 | 0.0592 | 4.0348 | 7.6220 | 3.5872
5 | 32 | 0.06 | 1.0871 | 3.8839 | 1.0253 | 2.8586

Figures 6.7 and 6.8 show the noise-cancelled output respiratory signal and the mean squared error using the NLMS algorithm. A filter order of 16 and an adaptive step size parameter (β) of 1 are used for NLMS. In the LMS algorithm, the correction applied to wn is proportional to the input vector x(n); therefore, when x(n) is large, the LMS algorithm experiences a problem with gradient noise amplification. With the step size normalized by ||x(n)||² in the NLMS algorithm, this noise amplification problem is diminished, and the ε term in (3.27) avoids difficulty when ||x(n)|| becomes too small. The NLMS algorithm thereby simplifies the selection of a step size that ensures the coefficients converge.

Figure 6.7: NLMS output signal

Figure 6.8: Convergence rate of NLMS

Table 6.3 compares the resulting mean square error when eliminating noise from the respiratory signal using various adaptive filter algorithms, with step size µ = 0.06 (β = 1 for NLMS) and filter order 16. Observing all cases, it can be inferred that LLMS has the minimum MSE of 0.0568 and NLMS has the maximum output SNR of 8.1676 dB.

| S. No | Algorithm | Order (M) | Step size (µ) | MSE    | SNR i/p (dB) | SNR o/p (dB) | SNR improvement (dB) |
|-------|-----------|-----------|---------------|--------|--------------|--------------|----------------------|
| 1     | LMS       | 16        | 0.06          | 0.0592 | 4.0348       | 7.6220       | 3.5872               |
| 2     | NLMS      | 16        | 1 (β)         | 0.0589 | 3.9943       | 8.1676       | 4.1733               |
| 3     | SLMS      | 16        | 0.06          | 0.0749 | 4.0394       | 6.8734       | 2.8340               |
| 4     | SSLMS     | 16        | 0.06          | 0.4416 | 3.9664       | 2.1068       | 1.8596               |
| 5     | SRLMS     | 16        | 0.06          | 0.2089 | 3.8297       | 3.8635       | 0.0338               |
| 6     | LLMS      | 16        | 0.06          | 0.0568 | 3.8662       | 6.5061       | 2.6399               |

Table 6.3: Comparison of various LMS algorithms

Figure 6.15 and Figure 6.16 show the noise-cancelled output respiratory signal and the mean squared error for the RLS algorithm. A filter order of 128, an exponential weighting (forgetting) factor λ = 1 and an initialization value δ = 0.001 are used for RLS. For a wide-sense stationary process the RLS algorithm converges much more rapidly than LMS.
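The faster convergence of RLS comes from recursively tracking the inverse input correlation matrix instead of following a stochastic gradient. A minimal sketch of the standard RLS recursion, using the λ and δ parameters mentioned above (illustrative names, not the study's implementation):

```python
import numpy as np

def rls_filter(x, d, order=16, lam=1.0, delta=0.001):
    """Recursive least squares noise canceller (illustrative sketch).

    lam is the exponential forgetting factor and delta the small constant
    used to initialize the inverse correlation matrix P = I / delta.
    """
    w = np.zeros(order)
    P = np.eye(order) / delta              # inverse correlation estimate
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        x_vec = x[n - order + 1:n + 1][::-1]
        Px = P @ x_vec
        k = Px / (lam + x_vec @ Px)        # gain vector
        e[n] = d[n] - w @ x_vec            # a priori error (cleaned signal)
        w = w + k * e[n]
        P = (P - np.outer(k, Px)) / lam    # update inverse correlation matrix
    return e
```

The O(M²) cost of updating P per sample is the computational price, relative to O(M) for LMS, that the conclusion refers to.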

Figure 6.15: RLS output signal

Figure 6.16: Convergence rate of RLS

Table 6.4 compares the MSE of the adaptive algorithms (NLMS and RLS) at a filter order of 16 and step size of 0.06. The table shows that NLMS outperforms RLS at this order; at a higher filter order, however, the RLS algorithm converges faster and achieves the lowest MSE.

| S. No | Algorithm | Order (M)         | Step size (µ) | MSE    |
|-------|-----------|-------------------|---------------|--------|
| 1     | NLMS      | 16                | 1             | 0.0589 |
| 2     | RLS       | 16                | 0.06          | 0.1299 |
| 3     | RLS       | Higher order (256)| 0.06          | 0.0105 |

Table 6.4: Comparison of LMS with RLS

NOISE CANCELLATION USING ANFIS

The parameters used for ANFIS training are:

Number of nodes: 21

Number of linear parameters: 12

Number of nonlinear parameters: 12

Total number of parameters: 24

Number of training data pairs: 1975

Number of checking data pairs: 0

Number of fuzzy rules: 4
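The parameter counts above (12 nonlinear and 12 linear parameters, 4 rules) are consistent with a 2-input first-order Sugeno network using two generalized-bell membership functions per input (3 parameters each). A forward pass of such a network can be sketched as follows; this is an assumed generic structure for illustration, not the MATLAB ANFIS implementation used in the study:

```python
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership function (3 nonlinear parameters)."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x1, x2, mf1, mf2, consequents):
    """One forward pass of a 2-input, 4-rule first-order Sugeno ANFIS.

    mf1, mf2     : two (a, b, c) triples per input -> 12 nonlinear params.
    consequents  : four (p, q, r) rows, one linear output per rule
                   f = p*x1 + q*x2 + r -> 12 linear params.
    """
    mu1 = [gbell(x1, *p) for p in mf1]              # layer 1: memberships
    mu2 = [gbell(x2, *p) for p in mf2]
    # layer 2: rule firing strengths (product of memberships)
    w = np.array([m1 * m2 for m1 in mu1 for m2 in mu2])
    w_norm = w / w.sum()                            # layer 3: normalization
    # layers 4-5: weighted sum of linear rule outputs
    f = np.array([p * x1 + q * x2 + r for (p, q, r) in consequents])
    return float(np.dot(w_norm, f))
```

During training, the linear consequent parameters are typically estimated by least squares and the membership parameters by gradient descent, which is the hybrid scheme reflected in the epoch log below.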

Figure 6.17: ANFIS output signal

Figure 6.17 shows the ANFIS output signal. It can be observed from the figure that the magnitude of the noise is greatly reduced.

SIMULATION ENVIRONMENT

ANFIS training log (epoch number followed by training RMSE):

Start training ANFIS ...
 1   0.654308
 2   0.646884
 3   0.641479
 4   0.636811
 5   0.632486
Step size increases to 0.220000 after epoch 5.
 6   0.628457
 7   0.624390
 8   0.620760
 9   0.617658
Step size increases to 0.242000 after epoch 9.
10   0.616479
Designated epoch numbers reached --> ANFIS training completed at epoch 10.

Table 6.5 compares the MSE of the best adaptive algorithm with that of ANFIS. The MSE of the estimated respiratory signal and the convergence time are both lower when the ANFIS technique is used, and the SNR is also better.

| S. No | Type                     | MSE    |
|-------|--------------------------|--------|
| 1     | NLMS (adaptive algorithm)| 0.0589 |
| 2     | ANFIS                    | 0.0122 |

Table 6.5: Comparison of adaptive filtering algorithms with ANFIS

CONCLUSION AND FUTURE WORK

This study has revealed useful properties of the LMS and RLS algorithms, and of ANFIS, for adaptive noise cancellation. It has been found that the RLS algorithm generally performs better irrespective of the nature of the signal and the noise. RLS is particularly useful for signals where abrupt changes of amplitude or frequency may occur, such as DC noise. This superior performance comes at a price: RLS takes more time to compute, especially when the filter length is large. A change in filter length has little effect on the convergence behaviour of RLS, whereas for LMS the increase in convergence time is quite substantial. It can therefore be stated that the RLS algorithm should be preferred over LMS for adaptive noise cancellation unless computation time is a matter of great concern. In this work the leaky LMS was also compared with ANFIS, and quantitative analysis reveals that ANFIS outperforms the leaky LMS. The result obtained indicates that ANFIS is a useful artificial intelligence (AI) technique for cancelling nonlinear interference from the respiratory signal.

Future work includes optimizing the algorithms for all kinds of noise and using the optimized algorithm in a DSP microcontroller and LabVIEW implementation that estimates the respiratory signal.