Analysis of the Mahalanobis-Taguchi System


Profit goes to the company that can accelerate the product development process. Traditional quality engineering defined quality as being within specifications, zero defects, or customer satisfaction. These definitions of quality, however, do not relate quality to cost. Dr. Genichi Taguchi, the father of Quality Engineering, introduced his approach for achieving robustness in product design to the US in the 1980s. Taguchi's systematic approach to quality focuses on product functionality, or product performance in the hands of the customer.

The Taguchi System of Quality Engineering contains four steps to ensure robust product performance including

1) product parameter design,

2) tolerance design,

3) process parameter design, and

4) on-line quality control.

The first and most important step in the Taguchi System of Quality Engineering (TSQE) is product parameter design. The concept of product parameter design is that the final product should be robust, or insensitive, to incoming variation or noise.

Robust design is achieved through a three-step process:

1) Define the objective,

2) Define the feasible options, and

3) Select the best option to meet the objective.

Taguchi Methods

Taguchi Methods is a system of cost-driven quality engineering that emphasizes the effective application of engineering strategies rather than advanced statistical techniques. It includes both upstream and shop-floor quality engineering. Upstream methods efficiently use small-scale experiments to reduce variability and find cost-effective, robust designs for large-scale production and the marketplace. Shop-floor techniques provide cost-based, real-time methods for monitoring and maintaining quality in production.

Taguchi Methods allow a company to rapidly and accurately acquire technical information to design and produce low-cost, highly reliable products and processes. Its most advanced applications allow engineers to develop flexible technology for the design and production of families of high quality products, greatly reducing research, development, and delivery time. In general, the farther upstream a quality method is applied, the greater leverage it produces on the improvement, and the more it reduces the cost and time. Most typical applications of Taguchi Methods thus far have centered around two main areas:

1) Improving an existing product

2) Improving a process for a specific product

Tremendous additional benefits can be derived from improving the robustness of generic technology (in R&D) so that it is applicable to a family of present and future products and processes. This application, called Robust Technology Development, is currently being practiced by only a few leading companies worldwide. Farther downstream, Taguchi's methods for what he terms "on-line" quality control (Manufacturing Process Control) can achieve a more cost-effective process control. Taguchi Methods require a new way of thinking about product development. These methods differ from others in that the methods for dealing with quality problems center on the design stage of product development, and express quality and cost improvement in monetary terms. The key to competitive leadership is the timely introduction of high quality products at the right price. Achieving maximum efficiency and effectiveness in the research and development process is critical to this effort.


It was this competitive crisis in manufacturing during the 1970s and 1980s that gave rise to the modern quality movement, leading to the introduction of Taguchi methods to the U.S. in the 1980s. While Deming's approach deals with management and Taguchi's is a system of design engineering, the two philosophies share a common goal: to increase quality. Taguchi's philosophy, moreover, is continually evolving. The ever-changing nature of the Taguchi methods is a natural and necessary extension of the concept called "Kaizen" by the Japanese. Simply defined, Kaizen means improvement, but it is more than that. It means ongoing improvement, collectively involving managers and workers. Taguchi methods seek to improve quality and, in light of Kaizen, are themselves subject to continual change and improvement.

Loss functions

Loss functions in statistical theory

Traditionally, statistical methods have relied on mean-unbiased estimators of treatment effects: Under the conditions of the Gauss-Markov theorem, least squares estimators have minimum variance among all mean-unbiased estimators. The emphasis on comparisons of means also draws (limiting) comfort from the law of large numbers, according to which the sample means converge to the true mean. Fisher's textbook on the design of experiments emphasized comparisons of treatment means.

Gauss proved that the sample mean minimizes the expected squared-error loss function (while Laplace proved that a median-unbiased estimator minimizes the absolute-error loss function). In statistical theory, the central role of the loss function was renewed by the statistical decision theory of Abraham Wald. However, loss functions were avoided by Ronald A. Fisher.

Taguchi's use of loss functions

Taguchi knew statistical theory mainly from the followers of Ronald A. Fisher, who also avoided loss functions. Reacting to Fisher's methods in the design of experiments, Taguchi interpreted them as being aimed at improving the mean outcome of a process. Indeed, Fisher's work had been largely motivated by programmes to compare agricultural yields under different treatments and blocks, and such experiments were done as part of a long-term programme to improve harvests.

However, Taguchi realized that in much industrial production, there is a need to produce an outcome on target, for example, to machine a hole to a specified diameter, or to manufacture a cell to produce a given voltage. He also realized, as had Walter A. Shewhart and others before him, that excessive variation lay at the root of poor manufactured quality and that reacting to individual items inside and outside specification was counterproductive.

He therefore argued that quality engineering should start with an understanding of quality costs in various situations. In much conventional industrial engineering, the quality costs are simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of non-conformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers, which are more interested in their private costs than social costs. Such externalities prevent markets from operating efficiently, according to analyses of public economics. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons), and that by working to minimize them, manufacturers would enhance brand reputation, win markets and generate profits.

Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterized the region within specification limits as that where we deny that losses exist. As we diverge from nominal, losses grow until the point where they are too great to deny, and there the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them statistically. Taguchi specified three situations:

Larger the better (for example, agricultural yield);

Smaller the better (for example, carbon dioxide emissions); and

On-target, minimum-variation (for example, a mating part in an assembly).

The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function for several reasons:

It is the first "symmetric" term in the Taylor series expansion of real analytic loss-functions.

Total loss is measured by the variance. Because variance is additive for uncorrelated random variables, the total loss is an additive measurement of cost.

The squared-error loss function is widely used in statistics, following Gauss's use of the squared-error loss function in justifying the method of least squares.
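To make the quadratic (nominal-the-best) loss concrete, the short Python sketch below evaluates L(y) = k(y - T)^2 for a single item and the corresponding average loss per item, k[(mean - T)^2 + variance]; the loss constant k and the numerical values are illustrative assumptions, not figures from the text.

```python
# Sketch of Taguchi's quadratic (nominal-the-best) loss function.
# The constant k and the example values below are illustrative assumptions.

def quadratic_loss(y, target, k):
    """Loss for a single item with measured value y."""
    return k * (y - target) ** 2

def expected_loss(mean, std_dev, target, k):
    """Average loss per item: k * [(mean - target)^2 + variance]."""
    return k * ((mean - target) ** 2 + std_dev ** 2)

# Example: a shaft with nominal diameter 10.0 mm; suppose a deviation of
# 0.5 mm costs $2.00 to society, so k = 2.00 / 0.5**2 = 8 $/mm^2.
k = 2.00 / 0.5 ** 2
print(quadratic_loss(10.2, target=10.0, k=k))       # loss for one item
print(expected_loss(10.05, 0.1, target=10.0, k=k))  # average loss per item
```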

Reception of Taguchi's ideas by statisticians

Though many of Taguchi's concerns and conclusions are welcomed by statisticians and economists, some ideas have been especially criticized. For example, Taguchi's recommendation that industrial experiments maximize some signal-to-noise ratio (representing the magnitude of the mean of a process compared to its variation) has been criticized widely.

Taguchi's rule for manufacturing

Taguchi realized that the best opportunity to eliminate variation is during the design of a product and its manufacturing process. Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages:

System design

Parameter design

Tolerance design

System design (primary design, functional design or concept design): This step involves the development of a prototype design which meets customer requirements, and the determination of materials, parts, components, assembly system, manufacturing technology, etc. The key emphasis here is on using the best available technology at the lowest cost to meet customer requirements obtained through quality function deployment (QFD). System design can play an important role in reducing sensitivity to noise factors as well as in reducing manufacturing costs.

Parameter design (secondary design): A parameter is the design variable or control factor which affects a product's functional characteristics. In this step, we determine the levels (values) of design variables (control factors) that minimize the effect of noise factors on the product's quality, minimize the manufacturing cost, and get the mean quality of the product on target. In order to find the optimum levels, fractional factorial designs using tables of orthogonal arrays are often used because there are too many experimental combinations to be tested.

Tolerance design (tertiary design): During parameter design, we assume that low-grade components and materials which allow some tolerances for noise factors will be used, while minimizing the sensitivity to noise and the quality variation. Tolerance design applies if the reduction in quality variation achieved by parameter design is insufficient. In tolerance design, a trade-off is made between reduction in the quality variation and increase in manufacturing cost. That is, we selectively specify higher-grade parts, materials or components to reduce tolerances in the order of their cost effectiveness. Since many choices are possible, experimental designs using tables of orthogonal arrays can also be effectively used in tolerance design.

Design of experiments

Taguchi developed his experimental theories independently. Taguchi read works following R. A. Fisher only in 1954. Taguchi's framework for design of experiments is idiosyncratic and often flawed, but contains much that is of enormous value. He made a number of innovations.

Outer arrays

Taguchi's designs aimed to allow greater understanding of variation than did many of the traditional designs from the analysis of variance (following Fisher). Taguchi contended that conventional sampling is inadequate here as there is no way of obtaining a random sample of future conditions. In Fisher's design of experiments and analysis of variance, experiments aim to reduce the influence of nuisance factors to allow comparisons of the mean treatment-effects. Variation becomes even more central in Taguchi's thinking.

Taguchi proposed extending each experiment with an "outer array" (possibly an orthogonal array); the "outer array" should simulate the random environment in which the product would function. This is an example of judgmental sampling. Many quality specialists have been using "outer arrays". Later innovations in outer arrays resulted in "compounded noise." This involves combining a few noise factors to create two levels in the outer array: first, noise factors that drive output lower, and second, noise factors that drive output higher. "Compounded noise" simulates the extremes of noise variation but uses fewer experimental runs than would previous Taguchi designs.
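As an illustration of how an inner control-factor array can be crossed with a compounded-noise outer array, the following Python sketch evaluates a hypothetical response at two compounded noise levels (N- and N+) for each run of a small L4 inner array and summarizes each run with a nominal-the-best S/N ratio. The response function, factor settings, and coefficients are assumptions made purely for illustration, not taken from the text.

```python
import numpy as np

# Inner array: a 4-run, 3-factor, 2-level orthogonal array (L4), levels coded 0/1.
inner_array = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Compounded noise: N- groups the noise settings that push output low,
# N+ groups those that push it high.
noise_levels = [-1.0, +1.0]

def response(controls, noise):
    """Hypothetical process output for given control settings and noise."""
    a, b, c = controls
    return 10 + 2 * a + 1.5 * b - c + noise * (1.0 - 0.6 * a)

for run, controls in enumerate(inner_array, start=1):
    y = [response(controls, n) for n in noise_levels]
    mean, var = np.mean(y), np.var(y, ddof=1)
    sn = 10 * np.log10(mean ** 2 / var)   # nominal-the-best S/N ratio
    print(f"run {run}: controls={controls.tolist()} mean={mean:.2f} S/N={sn:.1f} dB")
```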

Management of interactions

Interactions, as treated by Taguchi

Many of the orthogonal arrays that Taguchi has advocated are saturated arrays, allowing no scope for estimation of interactions. This is a continuing topic of controversy. However, this is only true for "control factors" or factors in the "inner array". By combining an inner array of control factors with an outer array of "noise factors", Taguchi's approach provides "full information" on control-by-noise interactions, it is claimed. Taguchi argues that such interactions have the greatest importance in achieving a design that is robust to noise factor variation. The Taguchi approach provides more complete interaction information than typical fractional factorial designs, its adherents claim.

Followers of Taguchi argue that the designs offer rapid results and that interactions can be eliminated by proper choice of quality characteristics. That notwithstanding, a "confirmation experiment" offers protection against any residual interactions. If the quality characteristic represents the energy transformation of the system, then the "likelihood" of control factor-by-control factor interactions is greatly reduced, since "energy" is "additive".

Inefficiencies of Taguchi's designs

Interactions are part of the real world. In Taguchi's arrays, interactions are confounded and difficult to resolve. Statisticians in response surface methodology (RSM) advocate the "sequential assembly" of designs: In the RSM approach, a screening design is followed by a "follow-up design" that resolves only the confounded interactions that are judged to merit resolution. A second follow-up design may be added, time and resources allowing, to explore possible high-order univariate effects of the remaining variables, as high-order univariate effects are less likely in variables already eliminated for having no linear effect. With the economy of screening designs and the flexibility of follow-up designs, sequential designs have great statistical efficiency. The sequential designs of response surface methodology require far fewer experimental runs than would a sequence of Taguchi's designs.

Analysis of experiments

Taguchi introduced many methods for analyzing experimental results including novel applications of the analysis of variance and minute analysis.

Robust Design

Robust Design method, also called the Taguchi Method, pioneered by Dr. Genichi Taguchi, greatly improves engineering productivity. By consciously considering the noise factors (environmental variation during the product's usage, manufacturing variation, and component deterioration) and the cost of failure in the field the Robust Design method helps ensure customer satisfaction. Robust Design focuses on improving the fundamental function of the product or process, thus facilitating flexible designs and concurrent engineering. Indeed, it is the most powerful method available to reduce product cost, improve quality, and simultaneously reduce development interval.

Why Use Robust Design Method?

Over the last five years many leading companies have invested heavily in the Six Sigma approach aimed at reducing waste during manufacturing and operations. These efforts have had great impact on the cost structure and hence on the bottom line of those companies. Many of them have reached the maximum potential of the traditional Six Sigma approach. What would be the engine for the next wave of productivity improvement?

Brenda Reichelderfer of ITT Industries reported on their benchmarking survey of many leading companies, "design directly influences more than 70% of the product life cycle cost; companies with high product development effectiveness have earnings three times the average earnings; and companies with high product development effectiveness have revenue growth two times the average revenue growth." She also observed, "40% of product development costs are wasted!"

These and similar observations by other leading companies are compelling them to adopt improved product development processes under the banner Design for Six Sigma. The Design for Six Sigma approach is focused on 1) increasing engineering productivity so that new products can be developed rapidly and at low cost, and 2) value based management.

Robust Design method is central to improving engineering productivity. Pioneered by Dr. Genichi Taguchi after the end of the Second World War, the method has evolved over the last five decades. Many companies around the world have saved hundreds of millions of dollars by using the method in diverse industries: automobiles, xerography, telecommunications, electronics, software, etc.

Classification of Parameters

In the basic design process, a number of parameters can influence the quality characteristic or response of the product. These can be classified into the following three classes for a product/process design. The response to be optimized in robust design is called the quality characteristic. The different classes of parameters that can influence this response are described below.

Signal Factors: These are parameters set by the user to express the intended value for the response of the product. For example, the speed setting of a fan is a signal factor specifying the amount of breeze, and the steering wheel angle specifies the turning radius of a car.

Noise Factors: Parameters that cannot be controlled by the designer, or whose settings are difficult or expensive to control in the field, are considered noise factors. Noise factors cause the response to deviate from the target specified by the signal factor and lead to quality loss.

Control Factors: Parameters that can be specified freely by the designer. The designer has to determine the best values for these parameters so that the response is least sensitive to the effect of noise factors.

The levels of noise factors change from unit to unit, from one environment to another, and from time to time. Only their statistical characteristics (mean and variance) can be known or specified.

Noise factors can be further classified into three types:

(a) External: The environment, the load, human error

(b) Unit to unit variation: Variation in the manufacturing process

(c) Deterioration: As time passes, performance deteriorates (aging-related variation).

Robust design addresses all these different types of noise factors. For a product or process with multiple functions, different noise factors can affect different quality characteristics.

Tasks to be performed in Robust Design

A great deal of engineering time is spent generating information about how different design parameters affect performance under different usage conditions. Robust design methodology serves as an "amplifier" - that is, it enables an engineer to generate the information needed for decision making with less than half the experimental effort. There are two important tasks to be performed in robust design, which can be considered the main tools used in the process of achieving robustness.

Measurement of quality during design and development: a leading indicator of quality by which the effects of changing a particular design parameter on performance can be evaluated.

Efficient experimentation to find dependable information about the design parameters, so that design changes during manufacturing and customer use can be avoided. The information should also be obtained with minimum time and resources. The estimated effects of design parameters must remain valid even when other parameters are changed during subsequent design efforts or when dimensions of related subsystems are changed. This can be achieved by employing the signal-to-noise ratio to measure quality and orthogonal arrays to study many design parameters simultaneously.

Signal-to-Noise Ratio

 The signal-to-noise concept is closely related to the robustness of a product design. Robustness has to do with a product's ability to cope with variation and is based on the idea that quality is a function of good design. A robust design or product delivers a strong "signal". It performs its expected function and can cope with variations ("noise"), both internal and external.

Since a good manufacturing process will be faithful to a product design, robustness must be designed into a product before manufacturing begins. According to Taguchi, if a product is designed to avoid failure in the field, then factory defects will be simultaneously reduced. This is one aspect of Taguchi Methods that is often misunderstood. There is no attempt to reduce variation, which is assumed to be inevitable, but there is a definite focus on reducing the effect of variation. "Noise" in processes will exist, but the effect can be minimized by designing a strong "signal" into a product.

This is antithetical to the "Zero Defects" policy that has been prevalent in American manufacturing. Under Zero Defects, strict on-line controls are imposed on manufacturing processes in order to minimize losses in the factory. The idea is that an effort to minimize process failure in the factory will lead to the minimization of product failure in the field. Quality losses are seen in terms of costs incurred in the factory due to products that cannot be shipped, costs of rework, etc. A product whose components exhibit wide variations within spec, and which is shipped but then fails to perform its function properly under varied field conditions, is not considered a loss. For Taguchi, such a product would be a loss.

The dimensionless signal-to-noise ratio is used to measure the performance of a design relative to the variation caused by noise, and it allows the controllable factors to be adjusted conveniently. Provided that a process is consistent, adjustments can be made using the signal-to-noise ratio to achieve the desired target.
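For reference, the three commonly quoted Taguchi S/N ratios (smaller-the-better, larger-the-better, and nominal-the-best) can be computed as in the following Python sketch; the sample measurements are assumed values used only to show the calculation.

```python
import numpy as np

# Common Taguchi S/N ratios (in decibels); larger S/N is better in all three cases.

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal_the_best(y):
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Illustrative data (assumed, not from the text): repeated measurements of a
# characteristic whose target is 50.
measurements = [49.2, 50.5, 50.1, 48.8, 51.0]
print(sn_nominal_the_best(measurements))
```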

Orthogonal Arrays

Given that a maximized signal-to-noise ratio is crucial, how do companies go about achieving it? Most world-class companies follow a three-step process.

They define and specify the objective, selecting or developing the most appropriate signal and estimating the concomitant noise.

They define feasible options for the critical design values, such as dimensions and electrical characteristics.

They select the option that provides the greatest robustness or the greatest signal to noise ratio.

Sounds simple, right? It really isn't. It has been said that in order to optimize the steering mechanism of a car using this method, a set of 13 design variables, each at three levels, must be analyzed. If you used the conventional method of comparing every combination of variable settings, you would have to make 1,594,323 experimental iterations to observe every possible combination. Clearly this is not acceptable in today's marketplace. What, then, can be done to reduce the total number of iterations necessary? Sir Ronald Fisher developed the solution: Orthogonal Arrays. "The orthogonal array can be thought of as a distillation mechanism through which the engineer's experiment passes." (Ealey, 1988) The array allows the engineer to vary multiple variables at one time and obtain the effects which that set of variables has on the average and the dispersion. From this the engineer can track large numbers of variables and determine:

The contribution of individual quality-influencing factors in the product design stage;

The best, or optimum, condition for a process or a product, so that good quality characteristics can be sustained; and

The approximate response of the product design parameters under the optimum conditions.

The benefits being abundantly clear, the next question that should come to mind is "How do I use this powerful tool?" While it is not possible to cover OAs in much detail in this paper, the key points for constructing an OA can be identified. First and foremost, one must remember what the main objective is: to determine the optimal condition of a system of variables. The procedure for using OAs can be broken down into seven main steps, as follows.

Identify the main function

The experimenter needs to first determine what the primary role of the system is. This can be cooling air from 95°C to 10°C, accelerating a car from 0 to 60 mph, producing high-speed IC chips, or allowing beam deflection of no more than a tenth of an inch. Each of these may have several parameters within which it must operate, for instance cost, size, weight, speed, etc.


Identify the noise factors

Once the main function has been established, the noise factors must be determined. Noise factors are uncontrollable, either by their nature or because of the cost of controlling them. Obviously a refrigeration system would operate well if the environmental temperature did not exceed 50°C or so. However, maintaining a home or industrial setting at such a temperature is very costly and is not ideal for the working conditions of employees. Some likely noise factors are external vibration, cost, temperature, environmental conditions, material quality and manufacturing quality.


Identify the quality characteristics to be observed

There are generally a few factors to be optimized, such as footprint size, cost, efficiency, etc. Each of these must be clearly identified and an objective function established. Once the function is established, the objective is to optimize it. Keep in mind that the engineer is generally not concerned with the specific values yielded by each experiment, but rather with distilling the effect that each of the various settings has on the system as a whole.


Identify the control factors and alternative levels

For each factor, two or three levels or settings may need to be observed: for instance, a slightly rich and a slightly lean fuel-to-air ratio in an automobile engine, the minimum and maximum input voltage in an IC circuit, or variation in soil condition for the placement of a foundation. It is important to identify at least the high and low values, taking the noise into consideration, and to have as few levels as possible.


Design the matrix experiment and define the data analysis

Having determined the levels for the control factors, the proper OA must be selected for both the main factors and the noise factors. OAs are identified according to the number of configurations and levels that can be accommodated. Table I identifies the common OAs, with their factors and levels, and the equivalent number of individual full factorial experiments.

Orthogonal Array    Factors and Levels        No. of Equivalent Full Factorial Experiments
L4                  3 factors at 2 levels     8
L8                  7 factors at 2 levels     128
L9                  4 factors at 3 levels     81
L16                 15 factors at 2 levels    32,768
L27                 13 factors at 3 levels    1,594,323
L64                 21 factors at 4 levels    4.4 × 10^12

TABLE I: Common orthogonal arrays, with the number of equivalent full factorial experiments given in the right column.
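The right-hand column of Table I is simply the number of levels raised to the power of the number of factors, i.e., the number of runs a full factorial experiment over the same factors would require. A short Python check of the figures in the table:

```python
# Each full factorial count is levels ** factors for the corresponding array.
common_arrays = {
    "L4":  (3, 2),    # (factors, levels)
    "L8":  (7, 2),
    "L9":  (4, 3),
    "L16": (15, 2),
    "L27": (13, 3),
    "L64": (21, 4),
}

for name, (factors, levels) in common_arrays.items():
    full_factorial_runs = levels ** factors
    print(f"{name}: {factors} factors at {levels} levels -> "
          f"{full_factorial_runs:,} full factorial runs")
```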

The noise and the control array can then be combined to form a simulation algorithm which allows the experimenter to study the control factors against the noise factors.

Conduct the matrix experiment

Now the actual experiment must be conducted. While it is possible to conduct actual physical experiments, this is often very costly. Hence, many manufacturers opt to use mathematical models which closely approximate the system. In this way a controlled matrix experiment can be conducted at little cost.

Analyze the data to determine the optimum levels of control factors

Once all of the data have been collected, an analysis of means (ANOM) or analysis of variance (ANOVA) can be used to determine the optimal signal-to-noise ratio and thus the optimized design parameters for the system.
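As a rough illustration of the ANOM step, the Python sketch below takes one S/N value per run of a standard L9 array, averages the S/N by level for each factor, and picks the level with the highest mean. The S/N values themselves are placeholders, not data from any real experiment.

```python
import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0..2.
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

# One S/N ratio (dB) per experimental run, assumed for illustration.
sn = np.array([12.1, 13.4, 11.8, 15.2, 14.7, 13.9, 10.5, 12.8, 11.2])

best_levels = {}
for factor in range(L9.shape[1]):
    level_means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
    best_levels[f"factor {factor + 1}"] = int(np.argmax(level_means))
    print(f"factor {factor + 1}: mean S/N by level = "
          f"{[round(m, 2) for m in level_means]}")

print("recommended levels (0-indexed):", best_levels)
```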

Steps in Robust Design

The detailed steps in robust design are explained here. The experimentation procedure is highlighted below.

Three phases of experimental design:

Planning Stage

areas of concern, objective

select response or quality characteristic

identify control and noise factors

select factor levels

select appropriate experimental design

identify interactions and assign factors to experimental set-up

Conducting Stage

Conduct tests as prescribed in experimental set-up

Analysis Phase

Analyze and interpret results

Conduct confirmation experiments

A Review and Analysis of the Mahalanobis-Taguchi System

The Mahalanobis-Taguchi system (MTS) is a relatively new collection of methods proposed for diagnosis and forecasting using multivariate data. The primary proponent of the MTS is Genichi Taguchi, who is very well known for his controversial ideas and methods for using designed experiments. The MTS results in a Mahalanobis distance scale used to measure the level of abnormality of "abnormal" items compared to a group of "normal" items. First, it must be demonstrated that a Mahalanobis distance measure based on all available variables on the items is able to separate the abnormal items from the normal items. If this is the case, then orthogonal arrays and signal-to-noise ratios are used to select an "optimal" combination of variables for calculating the Mahalanobis distances. Optimality is defined in terms of the ability of the Mahalanobis distance scale to match a prespecified or estimated scale that measures the severity of the abnormalities. In this expository article, we review the methods of the MTS and use a case study based on medical data to illustrate them. We identify some conceptual, operational, and technical issues with the MTS that lead us to advise against its use.

In stage 1, the variables that define the "healthiness" of an item are identified. Data are collected on the healthy or normal group. As described later, the variables are standardized and the Mahalanobis distances (MDs) calculated for the normal items. These values define the "Mahalanobis space" used as a frame of reference for the MTS measurement scale.

We refer to the variables collected on each item to determine its "healthiness" as $V_i$, $i = 1, 2, \ldots, p$. We denote by $V_{ij}$ the observation of the $i$th variable on the $j$th item, $i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, m$. Thus the $p \times 1$ data vectors for the normal group are denoted by $\mathbf{v}_j$, $j = 1, 2, \ldots, m$.

Each individual variable in each data vector is standardized by subtracting the mean of the variable and dividing by its standard deviation, with both statistics calculated using data on the variable in the normal group. Thus we have the standardized values

$$Z_{ij} = \frac{V_{ij} - \bar{V}_i}{S_i}, \qquad i = 1, 2, \ldots, p, \; j = 1, 2, \ldots, m, \qquad (1)$$

where

$$\bar{V}_i = \frac{1}{m} \sum_{j=1}^{m} V_{ij}$$

and

$$S_i = \sqrt{\frac{1}{m-1} \sum_{j=1}^{m} \left( V_{ij} - \bar{V}_i \right)^2}.$$

Next, the values of the MDs, $MD_j$, $j = 1, 2, \ldots, m$, are calculated for the normal items using

$$MD_j = \frac{1}{p} \, \mathbf{z}_j^{T} S^{-1} \mathbf{z}_j, \qquad (2)$$

where $\mathbf{z}_j^{T} = [Z_{1j}, Z_{2j}, \ldots, Z_{pj}]$ and $S$ is the sample correlation matrix calculated as

$$S = \frac{1}{m-1} \sum_{j=1}^{m} \mathbf{z}_j \mathbf{z}_j^{T}.$$

Taguchi and Rajesh (2000) stated that the $MD_j$ values in (2) have an average value of unity. For this reason, they also refer to the Mahalanobis space as the unit space.
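A minimal Python sketch of this stage-1 calculation, using randomly generated placeholder data for the normal group, is shown below. It follows equations (1) and (2) and confirms that the average MD of the normal items is close to unity, as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 50, 4                       # m normal items, p variables (assumed sizes)
V = rng.normal(size=(m, p))        # rows are items, columns are variables

V_bar = V.mean(axis=0)
S_i = V.std(axis=0, ddof=1)
Z = (V - V_bar) / S_i              # equation (1): standardized values

S = (Z.T @ Z) / (m - 1)            # sample correlation matrix
S_inv = np.linalg.inv(S)

MD = np.einsum("ij,jk,ik->i", Z, S_inv, Z) / p   # equation (2), one MD per item
print(MD.mean())                   # close to 1 for the normal group
```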

In stage 2, abnormal items must be selected. There is no uncertainty incorporated into the MTS regarding the status of each item used for determining the MTS measurement scale. As in discriminant analysis, it is assumed that each item is known to be either normal or abnormal.

The MDs of the abnormal items, with data vectors denoted by $\mathbf{v}_j$, $j = m+1, m+2, \ldots, m+t$, are calculated after the variables are standardized using the normal-group means and standard deviations. Thus we have $MD_j$, $j = m+1, m+2, \ldots, m+t$, with $MD_j$ defined in (2), where the $i$th element of $\mathbf{z}_j$ in (2), namely $Z_{ij}$, is calculated using (1), for $i = 1, 2, \ldots, p$ and $j = m+1, m+2, \ldots, m+t$.

According to the MTS, the resulting MD scale is good if the $MD_j$ values for the abnormal items are higher than those for the normal items.

In stage 3, OAs and S/N ratios are used to identify the most useful set of variables. An OA is a design matrix that contains the levels of various factors in the runs of an experiment to investigate the effects of the variables on a response of interest. Each factor of the experiment is assigned to a column of the OA, and the rows of the matrix correspond to the experimental runs. The MTS has p factors in the experiment, each with two levels. The level of a factor signifies the inclusion or exclusion of a variable in the MTS analysis. The p factors are assigned to the first p columns of the OA, with the other columns ignored. Thus the OA selected must initially have at least p columns. Each row of the OA determines which variables are included in any given experimental run. For each of these runs, the MD values are calculated for the abnormals as in stage 2, but using only the indicated variables. These MD values are then used to calculate the value of a S/N ratio, which becomes the response for the run.

Many different S/N ratios are used in Taguchi's analysis of designed experiments. These are defined in such a way that larger S/N ratio values are preferred. One option mentioned in the MTS is to use Taguchi's larger-is-better S/N ratio, defined as

$$-10 \log_{10} \left[ \frac{1}{t} \sum_{j=m+1}^{m+t} \frac{1}{MD_j^{2}} \right],$$

where $t$ is the number of abnormal items.
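A small Python sketch of this larger-the-better S/N calculation for a single experimental run is given below; the abnormal-item MD values are placeholders used only to show the arithmetic.

```python
import numpy as np

def larger_the_better_sn(md_abnormal):
    """-10 * log10[(1/t) * sum(1 / MD_j^2)] over the t abnormal items."""
    md = np.asarray(md_abnormal, dtype=float)
    return -10 * np.log10(np.mean(1.0 / md ** 2))

# Hypothetical abnormal-item MDs obtained with the variables included in one run.
print(larger_the_better_sn([4.2, 7.9, 3.5, 10.1]))
```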

Conclusions

An overview of the Taguchi method has been presented and the steps involved in the method were briefly described. Overall, the Taguchi method is a powerful tool which can offer simultaneous improvements in quality and cost. Furthermore, the method can aid in integrating cost and engineering functions through the concurrent engineering approach required to evaluate cost over the experimental design. The Taguchi method emphasizes pushing quality back to the design stage, seeking to design a product/process which is insensitive or robust to causes of quality problems. It is a systematic and efficient approach for determining the optimum experimental configuration of design parameters for performance, quality, and cost. Principal benefits include considerable time and resource savings; determination of important factors affecting operation, performance and cost; and quantitative recommendations for design parameters which achieve lowest cost, high quality solutions.

Robust design is applied here to a process of polysilicon deposition on thin wafers. The process set-up schematic is shown in Fig. 16. Silane and nitrogen gas are introduced at one end and pumped out at the other. The silane gas pyrolyzes, and a polysilicon layer is deposited on top of the oxide layer on the wafers. Two carriers, each carrying 25 wafers, can be placed inside the reactor at a time, so that polysilicon is simultaneously deposited on 50 wafers. The problems observed were (i) too many surface defects and (ii) too large a thickness variation, so robust design methodology was adopted to improve the performance, or quality, of the process.

The objective here is to achieve a uniform thickness and minimize surface defects. Based on engineering expertise, the non-uniform thickness and surface defects are attributed to:

- Variations in the parameters involved in the chemical reaction associated with the deposition process

- The concentration gradient along the length of the reactor

- The flow pattern (direction and speed) of the gases, which need not be the same at all positions

- Temperature variation along the length of the reactor

The goal in optimization for thickness is to minimize variance while keeping the mean on target. This is a constrained optimization problem, which can be very difficult to solve. When a scaling factor (a factor that increases thickness proportionally at all points on the wafers) exists, the problem can be simplified greatly.

Here the deposition time is a scaling factor, i.e., thickness = deposition rate × deposition time. The deposition rate may vary from one wafer to the next, or from one position to another, due to noise factors. However, the thickness at any point is proportional to the deposition time. So we maximize the signal-to-noise ratio and then adjust the deposition time so that the mean thickness is on target.
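A tiny numerical sketch of this two-step adjustment follows, with assumed (not measured) values for the target thickness, the observed mean thickness at the S/N-optimal settings, and the current deposition time.

```python
# Two-step optimization: first choose control settings that maximize the S/N
# ratio, then use the scaling factor (deposition time) to put the mean
# thickness on target. All numbers below are assumptions for illustration.

target_thickness = 3600.0          # desired mean thickness (angstroms, assumed)
observed_mean_thickness = 4100.0   # mean thickness at the S/N-optimal settings
current_deposition_time = 48.0     # minutes, assumed

# Thickness is proportional to deposition time, so scale the time linearly.
adjusted_time = current_deposition_time * target_thickness / observed_mean_thickness
print(f"adjusted deposition time: {adjusted_time:.1f} minutes")
```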