A Seismic Data Analysis Computer Science Essay

Published: November 9, 2015

INTRODUCTION:

Seismology is the study of the Earth's interior by measuring vibrations at the Earth's surface. Seismology can be either passive or active: passive seismology simply listens to vibrations caused by earthquakes and volcanic activity, whereas active seismology uses small explosive charges to send vibrations into the ground. Today, most seismograms are recorded digitally to make interpretation by computers easier. Seismograms are essential for calculating earthquake magnitudes on the Richter scale. High-fidelity digital accelerometers are used to obtain ground-motion data during earthquakes. Analogue instruments can be compared with digital ones: analogue instruments are considerably cheaper to manufacture and maintain, while digital instruments are expensive to maintain. To understand the dynamic behavior of structures during earthquakes, the data from analogue records need to be processed. The benefit of digital instruments is that they give more accurate results than analogue ones for use in earthquake studies.

An accelerometer is used on a seismograph to record acceleration as a function of time. The recorded data is really the response of the instrument to the ground motion rather than the precise motion of the ground itself. Moreover, the recorded data is usually noisy due to several factors such as ocean waves, heavy traffic, piling and wind, in both the low and high frequency ranges. In addition, most of the historical data that exist today come from records of analogue instruments of unidentified characteristics or questionable reliability. It is therefore very important to process the recorded data by means of digital filtering techniques to recover, to the extent feasible, the data that define the ground motion.

Earthquake engineering has developed a lot recently, and most complex designs now use special earthquake-protective elements either just in the foundation (base isolation) or distributed throughout the structure. The analysis depends on the structure of the building or object under consideration. There are various parameters that must be kept in mind while analyzing the seismic data for these structures. Analyzing these types of structures requires specialized explicit finite element computer code.

Structural analysis methods can be divided into the following five categories:

Equivalent Static Analysis

Response Spectrum Analysis

Linear Dynamic Analysis

Non-linear Static Analysis

Non-linear Dynamic Analysis

Equivalent Static Analysis

This method of analysis represents the effect of earthquake ground motion by a series of static forces acting on the building, typically defined by a seismic design response spectrum. It assumes that the building responds in its fundamental mode. For this to be true, the building must be low-rise and must not twist significantly when the ground moves. The response is read from a design response spectrum, given the natural frequency of the building (either calculated or defined by the building code). To account for effects due to "yielding" of the structure, many codes apply modification factors that reduce the design forces.
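As a rough sketch of how the equivalent static computation might look in practice (illustrative Python, not part of the essay's MATLAB work, with all numerical values assumed), the base shear is taken from a hypothetical design spectral acceleration, reduced by an assumed modification factor, and distributed over the storeys:

    import numpy as np

    # Assumed (hypothetical) storey data for a low-rise building
    weights = np.array([500.0, 500.0, 400.0])   # storey weights, kN
    heights = np.array([3.0, 6.0, 9.0])         # storey heights above the base, m
    Sa_over_g = 0.20                            # design spectral acceleration / g (assumed)
    R = 4.0                                     # assumed modification factor for "yielding"

    # Equivalent static base shear: V = (Sa/g) * W / R
    W = weights.sum()
    V = Sa_over_g * W / R

    # Distribute the base shear over the storeys in proportion to weight * height
    F = V * (weights * heights) / np.sum(weights * heights)
    print("Base shear V = %.1f kN" % V)
    print("Storey forces:", np.round(F, 1), "kN")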

Response Spectrum Analysis

This approach takes into account multiple modes of response of a building (in the frequency domain). It is required by many building codes for all except very simple or very complex structures. Computer analysis can be used to determine these modes for a structure. The response of a structure can be described as a combination of many modes, which in a vibrating string correspond to the "harmonics". For each mode, a response is read from the design spectrum, based on the modal frequency and the modal mass, and the modal responses are then combined to provide an estimate of the total response of the structure. The result of a response spectrum analysis using the response spectrum from a ground motion is typically different from that which would be calculated directly from a linear dynamic analysis using that ground motion directly, since phase information is lost in the process of generating the response spectrum.
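A minimal sketch of the modal-combination step in Python, with hypothetical modal frequencies, participation factors and a made-up design spectrum; the peak modal responses are combined here with the square-root-sum-of-squares (SRSS) rule, one common combination scheme:

    import numpy as np

    # Hypothetical modal properties of a simple two-mode model
    freqs_hz = np.array([1.5, 4.8])      # modal frequencies, Hz
    gammas = np.array([1.3, 0.4])        # assumed modal participation factors

    # A made-up design pseudo-acceleration spectrum Sa(f), m/s^2
    def design_spectrum(f_hz):
        return np.where(f_hz < 2.0, 2.5, 2.5 * (2.0 / f_hz))

    # Peak displacement of each mode: Sd = Sa / omega^2, scaled by participation
    omegas = 2.0 * np.pi * freqs_hz
    peak_modal_disp = gammas * design_spectrum(freqs_hz) / omegas**2

    # SRSS combination of the peak modal responses
    total_estimate = np.sqrt(np.sum(peak_modal_disp**2))
    print("Peak modal displacements (m):", np.round(peak_modal_disp, 5))
    print("SRSS estimate of total displacement (m): %.5f" % total_estimate)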

Linear Dynamic Analysis

Static procedures cannot capture higher-mode effects, so we have to use more advanced procedures such as linear dynamic analysis. The advantage of linear dynamic procedures over linear static procedures is that higher modes can be considered. The seismic input is modeled using either modal spectral analysis or time-history analysis. In both cases, the corresponding internal forces and displacements are determined using linear elastic analysis. The applicability of this analysis decreases with increasing nonlinear behavior. In linear dynamic analysis, the response of the structure to ground motion is calculated in the time domain, and all phase information is therefore maintained. Only linear properties are assumed.
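The sketch below illustrates, under assumed values, what a linear time-history calculation of a single-degree-of-freedom oscillator looks like in Python: the relative response to a synthetic ground acceleration is integrated in the time domain (so phase information is kept), using scipy's linear-system simulator.

    import numpy as np
    from scipy import signal

    # Assumed SDOF properties
    f_n = 2.0                    # natural frequency, Hz
    zeta = 0.05                  # damping ratio
    wn = 2.0 * np.pi * f_n

    # Relative-displacement model: x'' + 2*zeta*wn*x' + wn^2*x = -a_g(t)
    sdof = signal.TransferFunction([-1.0], [1.0, 2.0 * zeta * wn, wn**2])

    # Synthetic "ground acceleration": decaying random shaking (illustrative only)
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    rng = np.random.default_rng(0)
    a_g = 2.0 * rng.standard_normal(t.size) * np.exp(-0.3 * t)   # m/s^2

    # Linear dynamic (time-history) response in the time domain
    _, x_rel, _ = signal.lsim(sdof, U=a_g, T=t)
    print("Peak relative displacement: %.4f m" % np.max(np.abs(x_rel)))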

Non-linear Static Analysis

Non-linear static analysis is used to reduce uncertainty and conservatism. In general, the linear procedures are applicable when the structure remains almost elastic for the given level of ground motion or when the nonlinear response is distributed nearly uniformly throughout the structure. As the performance objective implies greater inelastic demands, the uncertainty of the linear procedures increases, and a high level of conservatism is needed to avoid unintended performance. Therefore, procedures incorporating inelastic analysis can reduce this uncertainty and conservatism.

The non-linear static approach is also known as "pushover" analysis. Here a pattern of forces is applied to a structural model that includes non-linear properties (such as steel yield), and the total force is plotted against a reference displacement to define a capacity curve. This can then be combined with a demand curve (typically in the form of an acceleration-displacement response spectrum, ADRS). This essentially reduces the problem to a single-degree-of-freedom system.
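A toy pushover sketch in Python with assumed numbers: a displacement-controlled push against an elastic-perfectly-plastic storey model, recording base shear against roof displacement to trace a capacity curve:

    import numpy as np

    # Assumed bilinear (elastic-perfectly-plastic) lateral behavior
    k = 20000.0        # elastic lateral stiffness, kN/m
    F_yield = 800.0    # yield force, kN

    # Monotonically increasing roof displacement (displacement-controlled pushover)
    roof_disp = np.linspace(0.0, 0.15, 151)          # m

    # Resisting base shear at each step: elastic up to yield, then constant
    base_shear = np.minimum(k * roof_disp, F_yield)  # kN

    # Print a few points of the resulting capacity curve
    for d, v in list(zip(roof_disp, base_shear))[::30]:
        print("disp = %.3f m  ->  base shear = %.0f kN" % (d, v))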

Non-linear Dynamic Analysis

Nonlinear dynamic analysis uses the combination of ground motion records with a detailed structural model, and is therefore capable of producing results with relatively low uncertainty. In nonlinear dynamic analysis, the detailed structural model subjected to a ground-motion record produces estimates of component deformations for each degree of freedom in the model, and the modal responses are combined using schemes such as the square-root-sum-of-squares. This approach is the most rigorous, and is required by some building codes for buildings of unusual configuration or of special importance.

ABSTRACT

The main objective of this project is the implementation of the TLS (Total Least Squares) algorithm and a discussion of a relatively straightforward approach in the context of a system identification problem. This project describes the correction, or recovery, of the original ground-motion acceleration time histories from digital accelerometer records. It deals specifically with the situation where the recording accelerometer instrument is unknown. Several corrections must be applied to the raw accelerogram data. To perform this operation an adaptive filter can be used, and MATLAB code plays an important role in performing the correction of the accelerogram data. The project also discusses the order in which the steps of the TLS algorithm should be applied. Total least squares is also known as errors-in-variables, rigorous least squares, or orthogonal regression. This least squares data modeling technique takes observational errors on both dependent and independent variables into account. It can be applied to both linear and non-linear models.

The total least squares (TLS) method is used to identify the unknown system (instrument) that must be used to deconvolve the recorded time histories. After comparing and contrasting this method with the recursive least squares (RLS) method and a standard second-order, single-degree-of-freedom, idealized-instrument deconvolution, it is shown that TLS provides a reasonable estimate of the instrument's characteristics from just the recorded historical data, without any assumed information about the instrument.

LITERATURE REVIEW

Adaptive filter:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing and adapt their performance based on the input signal. By way of contrast, a non-adaptive filter has static filter coefficients (which together form the transfer function). For some applications, adaptive coefficients are required because some parameters of the desired processing operation (for instance, the properties of some noise signals) are not known in advance. In these situations it is common to employ an adaptive filter, which uses feedback to refine the values of the filter coefficients and hence its frequency response. Generally speaking, the adaptation process involves the use of a cost function, which is a measure of the optimum performance of the filter (for example, minimizing the noise component of the input), to feed an algorithm, which determines how to modify the filter coefficients to minimize the cost on the next iteration.

The two most widely used adaptive techniques are least mean squares (LMS) and recursive least squares (RLS), and both are commonly applied to instrument correction of accelerogram data. The LMS algorithm adapts the filter coefficients at every iteration in order to reduce the cost function, which differs from one variant of LMS to another. Compared to the RLS method, the LMS algorithms are easier to implement and involve no matrix operations. With the RLS technique, the squared error is minimized with an absolute dependence on the values of the data itself, i.e. x(n). This means that for different sets of signal data we will get different filter coefficients, even if the statistics of the data sequences considered are the same. In other words, an RLS-minimized data set produces a set of filter coefficients which are optimal for the given set of data, rather than being statistically optimal over an ensemble. It is therefore considered that an RLS adaptive technique is best suited to non-stationary seismic events. [16, 17]
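To make the LMS side of the comparison concrete, here is a minimal LMS adaptive filter in Python (an illustrative sketch, not the project's MATLAB code, with a made-up 4-tap "instrument"): the coefficients are nudged along the negative gradient of the instantaneous squared error at every iteration, with no matrix operations.

    import numpy as np

    def lms_filter(x, d, n_taps=8, mu=0.01):
        """Adapt FIR coefficients w so that the filtered input approximates d."""
        w = np.zeros(n_taps)
        e = np.zeros(len(x))
        for n in range(n_taps - 1, len(x)):
            x_vec = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-n_taps+1]
            e[n] = d[n] - np.dot(w, x_vec)          # instantaneous error
            w += 2.0 * mu * e[n] * x_vec            # LMS coefficient update
        return w, e

    # Identify an assumed unknown FIR "instrument" from input/output records
    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    unknown = np.array([0.6, 0.3, -0.2, 0.1])
    d = np.convolve(x, unknown, mode="full")[:len(x)]
    w, e = lms_filter(x, d, n_taps=4, mu=0.005)
    print("Estimated coefficients:", np.round(w, 3))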

Data correction techniques based on minimizing some least-squares error cost function do not require information about the instrument characteristics [1, 3, 7, 8] and rely only on the recorded accelerogram data. This is the great advantage that makes deconvolution of seismic data possible. Many techniques for data correction assume a second-order SDOF instrument model, which is deconvolved with the instrument response to obtain an estimate of the actual ground motion. Reference [3] discusses the development of a recursive least squares (RLS) algorithm for system identification, where de-noising is done by means of the stationary wavelet transform (SWT) rather than high-pass filtering in order to reduce artifacts.

Chanerley et al. in [9] estimate the bispectra of seismic data using a nonlinear model. The parameters of an approximate linear model are first estimated using linear predictive coding, and the linear model is then transformed to the time domain to remove the linear component from the data. The authors in [10] describe an implementation of the Total Least Squares (TLS) algorithm minimizing ||Ax - b||, where A is the corrected data matrix and b is the error vector. This variant of TLS, which assumes that the part of the data obtained from the instrument is known exactly while the rest is noisy, is called partial TLS (PTLS) and is investigated in [12]. PTLS was used for de-noising and correcting the baseline error without any frequency-selective filters. SWT was used instead of band-pass filters in order to get a better estimate of the ground motion [7]. The recursive least squares (RLS) algorithm was used to evolve an inverse filter, which is applied to deconvolve the seismic data [3]. RLS was considered the preferred choice over LMS when instrument data is not available [8], as the resulting adaptive filter is optimal for the given set of data rather than for an ensemble average; it also converges faster than the LMS algorithm [3]. Chanerley and Alexander in [11] apply Lp optimization with an iterative least squares technique to determine the sensitivity of the algorithm to bursts of short-duration, large-amplitude noise, which typically indicate unstable instrument operation.

Correction techniques are used to digitize the seismic data recorded by analogue instruments. They also correct the data for the instrument characteristics, detrend and de-noise it, and resample it to an appropriate sampling rate.

Attempts to devise a correction procedure for accelerogram data were made in 1970 by Trifunac et al. [13]. Here a low-pass filter is applied first, and the instrument-corrected data are then high-pass filtered to remove the baseline error. They used an FIR (Ormsby) filter, which is known for phase distortion. Converse [14] introduced a computer program (BAP) in 1991, which resamples the data by interpolation to 600 Hz. The baseline error is removed first, then the instrument correction is performed, and the data are then passed through a high-pass bi-directional IIR Butterworth filter for de-noising. BAP runs the algorithm segment-wise on the data rather than on the whole record at once, which eases the computation [1].

There are several steps involved in correction procedures. Some of them, described in [1], are resampling, baseline correction, instrument correction, filtering and phase correction, and decimation. Resampling is done in order to make the accelerogram data evenly sampled. With current technology, even high sampling rates can be handled conveniently without segmentation. Even then, the dynamic response of the instrument is reflected in the seismic data, hence a correction is necessary in order to obtain a better estimate of the actual ground motion. Filtering plays a major role in eliminating noise from external sources. Decimation is used to down-sample the data in order to reduce processing time; it involves low-pass filtering followed by down-sampling, which prevents aliasing.
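A hedged Python sketch of these routine steps applied to a synthetic accelerogram using scipy; the sampling rate, pass band and test signal are all assumed values rather than those of any particular instrument or record.

    import numpy as np
    from scipy import signal

    fs = 200.0                                   # assumed original sampling rate, Hz
    t = np.arange(0.0, 40.0, 1.0 / fs)
    rng = np.random.default_rng(2)
    # Synthetic accelerogram: decaying shaking + slow baseline drift + wideband noise
    accel = (np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.1 * t)
             + 0.02 * t
             + 0.05 * rng.standard_normal(t.size))

    # 1. Baseline correction: remove mean and linear trend
    accel = signal.detrend(accel, type="linear")

    # 2. De-noising: zero-phase band-pass Butterworth filter (assumed 0.1-25 Hz band)
    sos = signal.butter(4, [0.1, 25.0], btype="bandpass", fs=fs, output="sos")
    accel = signal.sosfiltfilt(sos, accel)

    # 3. Decimation: low-pass anti-alias filter, then down-sample by a factor of 2
    accel_ds = signal.decimate(accel, 2, zero_phase=True)
    print("Samples before / after decimation:", t.size, accel_ds.size)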

The least mean squares (LMS) algorithms adjust the filter coefficients to minimize the cost function. Compared to recursive least squares (RLS) algorithms, the LMS algorithms do not involve any matrix operations, so they need fewer computational resources and less memory than the RLS algorithms, and their implementation is also less complicated. TLS may not be as robust as the QR-RLS in securing the instrument response. However, the eigenvalue spread of the input correlation matrix (the correlation matrix of the input signal) might affect the convergence speed of the resulting adaptive filter.

This paper explores the instrument correction of accelerogram data using a Total Least Squares (TLS) algorithm.
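As an illustration of what a TLS-based identification step can look like (a Python sketch with synthetic data and an assumed 4-tap FIR "instrument", not the dissertation's actual implementation), the code below builds a matrix of lagged samples of a noisy input record, appends the noisy output record, and reads the FIR coefficients off the right singular vector associated with the smallest singular value of the augmented matrix, which is the classical SVD solution of the TLS problem described in the next section.

    import numpy as np

    def tls_solve(A, b):
        """Total least squares solution of A x ~ b via the SVD of the augmented matrix [A | b]."""
        C = np.column_stack([A, b])
        _, _, Vt = np.linalg.svd(C, full_matrices=False)
        v = Vt[-1]                          # right singular vector of the smallest singular value
        return -v[:-1] / v[-1]              # TLS estimate of x

    # Synthetic errors-in-variables example: both input and output records are noisy
    rng = np.random.default_rng(3)
    n, taps = 2000, 4
    true_h = np.array([0.5, 0.25, -0.15, 0.05])          # assumed "instrument" FIR taps
    x_clean = rng.standard_normal(n)
    y_clean = np.convolve(x_clean, true_h, mode="full")[:n]
    x_obs = x_clean + 0.02 * rng.standard_normal(n)
    y_obs = y_clean + 0.02 * rng.standard_normal(n)

    # Matrix of lagged input samples: row k is [x[k], x[k-1], ..., x[k-taps+1]]
    A = np.column_stack([np.r_[np.zeros(i), x_obs[:n - i]] for i in range(taps)])
    h_tls = tls_solve(A, y_obs)
    print("Estimated FIR taps:", np.round(h_tls, 3))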

ALGORITHM

Total Least Squares algorithm

The total least squares algorithm, also known as errors-in-variables, rigorous least squares, or orthogonal regression, is a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It can be used in both linear and non-linear models because it is a generalization of Deming regression.

The Total Least Squares (TLS) algorithm is an extension of the usual least squares method which also allows one to deal with uncertainties in the sensitivity matrix.

Brief Introduction to Least Squares

In the least squares (LS) approach, the objective function can be expressed as a sum of squares. Such problems have a natural relationship to distances in Euclidean geometry, and the solutions may be calculated analytically using the tools of linear algebra.

Regression

Least squares regression is the most fundamental form of LS optimization problem. Suppose you have a set of measurements, $y_n$, collected for different values of a parameter $x_n$. The LS regression problem is to find the scale factor $p$ that minimizes the squared error:

$$\min_p \sum_n (y_n - p x_n)^2$$

We rewrite the expression in terms of column N-vectors as:

$$\min_p \|\vec{y} - p\vec{x}\|^2$$

Now we describe three ways of obtaining the solution. The traditional (non-linear-algebra) approach is to use calculus. If we set the derivative of the expression with respect to $p$ equal to zero and solve for $p$, we get:

$$p_{\mathrm{opt}} = \frac{\vec{x}^T \vec{y}}{\|\vec{x}\|^2}$$

Technically, one should verify that this is a minimum (and not a maximum or saddle point) of the expression. But since the expression is a sum of squares, we know the solution must be a minimum.

A second method of obtaining the solution comes from considering the geometry of the problem in the N-dimensional space of the data vector. We seek a scale factor, $p$, such that the scaled vector $p\vec{x}$ is as close as possible (in a Euclidean-distance sense) to $\vec{y}$. Geometrically, we know that the scaled vector should be the projection of $\vec{y}$ onto the line in the direction of $\vec{x}$. Thus, the solution for $p$ is the same as above.

A third method of obtaining the solution comes from the so-called orthogonality principle: the error vector for the optimal $p$ should be perpendicular to $\vec{x}$:

$$\vec{x}^T(\vec{y} - p\vec{x}) = 0$$

Solving for $p$ gives the same result as above.
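A small numerical check of this closed-form solution in Python, using assumed data with a slope of 2.5 plus noise: the scale factor $p = \vec{x}^T\vec{y} / \|\vec{x}\|^2$ matches what numpy's general least-squares routine returns.

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.standard_normal(100)
    y = 2.5 * x + 0.1 * rng.standard_normal(100)   # assumed "true" slope of 2.5 plus noise

    # Closed-form LS solution: p = x^T y / ||x||^2
    p_closed = np.dot(x, y) / np.dot(x, x)

    # The same problem solved with the generic least-squares routine
    p_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
    print(p_closed, p_lstsq)    # the two agree to numerical precision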

Total Least Squares (Orthogonal) Regression

In classical least-squares regression, errors are defined as the squared distance from the data points to the fitted function, measured along a particular axis direction. But if there is no clear assignment of "dependent" and "independent" variables, it makes more sense to measure errors as the squared perpendicular distance to the fitted function. The drawback of this formulation is that the fitted surfaces must be subspaces (lines, planes, hyperplanes).

Suppose one wants to fit the N-dimensional data with a subspace (line/plane/hyperplane) of dimensionality N − 1. The subspace is conveniently defined as the set of all vectors perpendicular to a unit vector $\hat{u}$, and the optimization problem may thus be expressed as:

$$\min_{\|\hat{u}\|=1} \|M\hat{u}\|^2$$

where M is a matrix containing the data vectors in its rows.

Performing a Singular Value Decomposition (SVD) on the matrix M allows us to find the solution more easily. In particular, let $M = USV^T$, with U and V orthogonal and S diagonal with positive decreasing elements. Then

$$\|M\hat{u}\|^2 = \|USV^T\hat{u}\|^2 = \|SV^T\hat{u}\|^2,$$

since U is orthogonal.

Since V is an orthogonal matrix, we can modify the minimization problem by substituting the vector $\vec{v} = V^T\hat{u}$, which has the same length as $\hat{u}$:

$$\min_{\|\vec{v}\|=1} \|S\vec{v}\|^2$$

The matrix S is square and diagonal, with diagonal entries $s_1 \ge s_2 \ge \dots \ge s_N > 0$. Because of this, the expression being minimized is a weighted sum of the squared components of $\vec{v}$, which must be greater than or equal to the square of the smallest (last) singular value, $s_N$:

$$\|S\vec{v}\|^2 = \sum_n s_n^2 v_n^2 \ge s_N^2 \sum_n v_n^2 = s_N^2,$$

where we have used the constraint that $\vec{v}$ is a unit vector in the last step. Furthermore, the expression becomes an equality when $\vec{v} = \hat{e}_N$, the standard basis vector associated with the Nth axis.

We can transform this solution back to the original coordinate system to get a solution for $\hat{u}$:

$$\hat{u}_{\mathrm{opt}} = V\vec{v}_{\mathrm{opt}} = V\hat{e}_N,$$

which is the Nth column of the matrix V. In summary, the minimum value of the expression occurs when we set $\hat{u}$ equal to the column of V associated with the smallest singular value.
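A Python sketch of this result with assumed two-dimensional data scattered about the line y = 0.5x: the unit vector returned as the last column of V is the normal to the total-least-squares line through the origin, and the minimized value equals the smallest singular value squared.

    import numpy as np

    rng = np.random.default_rng(5)
    # Assumed 2-D data lying near the line y = 0.5 x (rows of M are the data vectors)
    x = rng.standard_normal(200)
    M = np.column_stack([x, 0.5 * x + 0.05 * rng.standard_normal(200)])

    # SVD of the data matrix: M = U S V^T, singular values in decreasing order
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    u_hat = Vt[-1]                     # last column of V: unit normal of the TLS fit

    print("TLS normal vector:", np.round(u_hat, 3))
    print("min ||M u||^2 =", round(float(np.sum((M @ u_hat) ** 2)), 4),
          " smallest singular value squared =", round(float(S[-1] ** 2), 4))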

The formulation can easily be augmented to include a shift of origin. That is, suppose we wish to fit the data with a line/plane/hyperplane that does not necessarily pass through the origin:

$$\min_{\|\hat{u}\|=1,\, u_0} \|M\hat{u} - u_0\vec{1}\|^2,$$

where $\vec{1}$ is a column vector of ones. For a given $\hat{u}$, the optimal solution for $u_0$ is easily found to be $u_0 = \bar{m}^T\hat{u}$, where $\bar{m}$ is the vector whose components are the averages of the columns of M.

Suppose we wanted to fit the data with a line/plane/hyperplane of dimension N − 2. We could first find the direction along which the data vary least, project the data into the remaining (N − 1)-dimensional space, and then repeat the process. Because V is an orthogonal matrix, the secondary solution will be the second-to-last column of V (i.e., the column associated with the second-smallest singular value). In general, the columns of V provide a basis for the data space in which the axes are ordered according to variability. We can solve for a vector subspace of any desired dimensionality in which the data come closest to lying.
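A brief numerical check of the origin-shift step in Python, with assumed off-origin data and an arbitrary unit vector: the closed-form offset $u_0 = \bar{m}^T\hat{u}$ agrees with a brute-force search over candidate offsets, which is why subtracting the column means from M before the SVD handles the shift.

    import numpy as np

    rng = np.random.default_rng(6)
    M = rng.standard_normal((300, 3)) + np.array([5.0, -2.0, 1.0])   # data not centered at the origin
    u = np.array([1.0, 2.0, 2.0]) / 3.0                              # an arbitrary unit vector

    # Closed-form offset for this u: u0 = m_bar^T u, with m_bar the vector of column means
    m_bar = M.mean(axis=0)
    u0 = m_bar @ u

    # Brute-force check: u0 minimizes ||M u - c * 1||^2 over a grid of candidate offsets c
    candidates = np.linspace(u0 - 1.0, u0 + 1.0, 2001)
    costs = [np.sum((M @ u - c) ** 2) for c in candidates]
    print("closed form u0 = %.4f,  grid minimizer = %.4f" % (u0, candidates[int(np.argmin(costs))]))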

The total least squares problem may also be formulated as a pure (unconstrained) optimization problem using a form known as the Rayleigh quotient:

$$\min_{\vec{u}} \frac{\vec{u}^T M^T M \vec{u}}{\vec{u}^T\vec{u}}$$

The length of the vector does not change the value of the fraction, so one typically solves for a unit vector. As above, this fraction takes on values in the range $[s_N^2, s_1^2]$ and attains its minimum value when $\vec{u}$ is the last column of the matrix V (the column associated with the smallest singular value).

The TLS problem is an extension of the LS problem to the case where the sensitivity matrix itself is uncertain. The TLS solutions are less stable than the usual LS ones, the underlying reason being that while the LS problem always has a solution, the TLS problem may not. The reason why TLS is a suitable method for the analysis of linear problems (and, with an iterative approach, nonlinear ones) is that it provides a way to explore the validity of the model used, by giving a measure of the change in the sensitivity matrix needed to obtain the best solution. Therefore TLS can give the scientist hints for recognizing mistakes in the model, whereas with LS one is forced to accept the model as given. TLS is therefore more suitable, since it is conceptually more correct, in cases where the model used is largely empirical and its a priori scientific motivation is weak.

CONCLUSION

This dissertation "Interpretation of seismic data using TLS Algorithm" yields the results that the TLS algorithm is a useful tool for correcting seismic data when instrument parameters are not known. All that is required is the original recording from the seismograph and the algorithm can then produce the inverse filter with which we can de-convolve the instrument response. The algorithm was tested using data from various instruments and found that the TLS may not be as robust as the QR-RLS in securing the instrument response. The inverse FIR filter plots shown are credible responses and explain the utility of the approach. In fact the TLS performance in some cases has been better than that of the QR-RLS and the 2nd order SDOF with a standard filter, because it reflects the anti-alias filter whose details were in this case available in the record. In general the TLS algorithm demonstrates that it can be used effectively to deconvolve the instrument response from the seismic data, in particular where the instrument parameters are either not known or not available. Considerably the TLS algorithm requires a large amount of memory when working with large data sets and in double precision.

The TLS approach is an advance over the LS approach, although the TLS solutions are less stable than the usual LS solutions. The reason why TLS is a suitable method for the analysis of both linear and nonlinear problems is that it gives the flexibility to assess the validity of the model used. This is done by providing a measure of how much the sensitivity matrix must be changed to obtain the best solution. Therefore TLS helps scientists identify mistakes in the model, which is not the case with LS.