Parametric Optimization Of Microdrilling Operation Using Taguchi Method


Drilling is one of the most fundamental machining technologies and is moving toward high-precision, high-speed applications for productivity enhancement. One notable drilling technology, micro-hole drilling, is becoming increasingly prominent in a variety of precision industries, such as the production of automotive fuel injection nozzles, watch and camera parts, medical needles, air bearings, etc. Its applications in the electronics and computer industries, in particular, are expanding rapidly; it is mainly used in the machining of printed circuit boards (PCBs) and in IC masking. The increasing degree of integration demands improved technologies for the manufacture of smaller holes with larger aspect ratios for higher-density circuit boards. Furthermore, the growing competition in micro-part development adds further impetus to micro-hole manufacturing technologies.

Microdrilling refers to the drilling of holes smaller than 0.5 mm (0.020 in). Drilling holes at such small diameters presents greater problems, since coolant-fed drills cannot be used and high spindle speeds are required. Spindle speeds exceeding 10,000 RPM also require the use of balanced tool holders. Micro-electric discharge machining (micro-EDM) has evolved as one of the prominent processes for generating high-aspect-ratio, accurate micro-structures in many industrial applications. It has been shown that a depth of 5.0 mm can be achieved with a 200 μm diameter tool electrode by controlling the regular process parameters, but beyond this length the process is governed by a number of derived phenomena, such as secondary sparking and debris accumulation, rather than by the regular processing parameters. The optimum hole depth that could be achieved with good accuracy, i.e. minimum oversize, lies between 2.5 and 5.0 mm; the largest depth that could be achieved was 8.33 mm. The highest aspect ratio achieved in this experiment was 15.63.

CHAPTER 2

LITERATURE SURVEY

2. LITERATURE SURVEY

This chapter outlines some of the recent reports published in literature on microdrilling with special emphasis on Taguchi method.

Tokarev, Lopez, Lazare et al. [10] developed an analytical model of multipulse excimer laser drilling in polymers. It was shown that an adequate account of the mechanism of radiation propagation and absorption inside the keyhole was important for good agreement between theory and experiment. The controlling factors of drilling were revealed. Keyhole profile and depth versus incident fluence were calculated for a top-hat beam. The matching conditions for laser fluence, the parameters of the optical scheme and the material parameters were derived in an explicit analytical form, allowing the production of deep, narrow keyholes with practically parallel side walls and aspect ratios as high as 300-600.

Tsao, Hocheng et al. [11] predicted and evaluated the thrust force and surface roughness in drilling of composite material using a candle stick drill. The approach was based on the Taguchi method and an artificial neural network. The experimental results indicated that the feed rate and the drill diameter were the most significant factors affecting the thrust force, while the feed rate and spindle speed contributed the most to the surface roughness. The objective of their study was to establish a correlation of the feed rate, spindle speed and drill diameter with the induced thrust force and surface roughness in drilling a composite laminate.

Tsao, Hocheng et al. [12] also predicted and evaluated the delamination factor for the twist drill, candle stick drill and saw drill. The approach was based on Taguchi's method and the analysis of variance (ANOVA). An ultrasonic C-scan was used to examine the delamination of carbon fiber-reinforced plastic (CFRP) laminates. Experiments were conducted to study the delamination factor under various cutting conditions. The experimental results indicated that the feed rate and the drill diameter made the most significant contribution to the overall performance. The objective was to establish a correlation of feed rate, spindle speed and drill diameter with the induced delamination in a CFRP laminate. The correlation was obtained by multi-variable linear regression and compared with the experimental results.

Gaitonde, Karnik, Paulo Davim et al. [13] presented a Taguchi optimization methodology for the simultaneous minimization of the delamination factor at the entry and exit of holes drilled in SUPERPAN DECOR (melamine coating layer) MDF panels. Delamination in drilling of MDF was found to affect the aesthetic aspect of the final product, and hence it was essential to select the best combination of drilling process parameters to minimize it. The experiments were carried out as per an L9 orthogonal array, with each experiment performed under different conditions of feed rate and cutting speed. The analysis of means (ANOM) was performed to determine the optimal levels of the parameters, and the analysis of variance (ANOVA) was employed to identify the relative importance of the machining parameters with respect to the delamination factor. The investigations revealed that delamination can be effectively reduced in drilling of MDF materials by employing higher cutting speed and lower feed rate values.

Kishore, Tiwari, Dvivedi, Singh et al. [14] noted that drilling in composite materials is often required to facilitate the assembly of parts into the final product. However, they observed that drilling-induced damage drastically affects the residual tensile strength of the drilled components. They investigated the effect of the cutting speed, the feed rate and the drill point geometry on the residual tensile strength of drilled unidirectional glass fiber reinforced epoxy composites using the Taguchi method and suggested the optimal conditions for maximum residual tensile strength.

CHAPTER 3

MICRODRILLING

3.1 MICRODRILLING:

Microdrilling is characterized not just by small drills but also by a method for precise rotation of the microdrill and a special drilling cycle. In addition, the walls of a microdrilled hole are among the smoothest surfaces produced by conventional processes. This is largely due to the special drilling cycle called a peck cycle. The smallest microdrills are of the spade type. These drills do not have helical flutes as conventional drills do, which makes chip removal from the hole more difficult. Drills with a diameter of 50 micrometers and larger can be made as twist drills. Drills smaller than this are exclusively of the spade type because of the difficulty in fabricating a twist drill of that size.

Microdrills are typically made of either cobalt steel or micrograin tungsten carbide. The steel drills are less expensive and easier to grind but are not as hard or strong as the tungsten carbide drills. The drill point angle is based on the material to be drilled. The normal point angle is 118 degrees and 135 degrees is used for hard materials. The larger included point angle provides more strength at the drill point.

The recommended speeds and feeds for microdrilling are as varied as the materials that can be drilled. Microdrilling is not generally a high-speed process, since dwelling of the drill at the bottom of the hole can cause hardening of the workpiece, leading to increased drilling forces. For most metals, typical spindle speeds are in the 2000 to 4000 rpm range and feeds are on the order of a micrometer per revolution. Care must be taken when drilling plastics to avoid melting of the material, which can lead to adhesion of the plastic to the drill. This can cause drill breakage or poor sidewall smoothness.

The applicability of microdrilling as a complementary process to features produced by lithography and electroplating has been investigated. In a cross-section of a copper microgear made by lithography, the average roughness of the hub wall was 0.4 micrometers, while a microdrilled hole in the same material gave a roughness of 0.15 micrometers over a much longer bore length. Microdrilling can also be used to augment lithography for mesoscopic (millimeter and larger) sized components. Often the parallelism of deep holes is of concern. To determine typical values for the parallelism of microdrilled holes, glass fibers were inserted into a number of holes drilled with a very slow starting sequence; this is necessary to ensure that the drill does not walk on the surface of the part and that the hole axis aligns with the undeflected axis of rotation of the drill. Holes with a length-to-diameter ratio of 8 were drilled at 4000 rpm. The three-dimensional misalignment of the inserted fibers was measured to be 0.08 degrees (1.5 milliradians), which included skewing of the fiber in the hole due to hole oversize, estimated to be 0.5 micrometers.

Microdrilling has one major disadvantage arising from the drill geometry: because of the drill point, a flat-bottomed hole cannot be produced. If one is attempting to produce cylindrical cavities in a micromold, there must be a relatively thick plating base under the mold material, or the structural substrate of the mold could act as the plating base. To fully develop the diameter of the hole, projected onto a plane perpendicular to the drilling direction, the drill point must extend 30% of the drill diameter beyond the depth of the fully developed hole. For holes in the 100 micrometer range, this requires a thick plating base to be deposited.

3.2 Factors affecting microdrilling:

3.2.1 Vibration and sound:

Vibration is widely used for condition monitoring of rotating machinery. However, vibration has not been used to the same extent in tool condition monitoring, probably because as a method it is rather sensitive to the noise that is present in cutting processes. The advantages of vibration measurement include ease of implementation and the fact that no modifications to the machine tool or the workpiece fixture are required. The disadvantages reported in the literature include the dependency of the vibration signals on workpiece material, cutting conditions and machine structure. Vibration is measured in both the transverse and axial directions. The vibration signals are considered to contain reliable features for monitoring drill wear and breakage for the following reasons: the vibrating drill length in the transverse and axial modes does not change during drilling, thus maintaining a rather constant mode frequency; the natural frequencies of the transverse and axial modes of the workpiece-drill system are basically insensitive to the drill cross-sectional size, thus simplifying monitoring for a wide range of drill sizes; and vibrations in the Y and Z directions are influenced by the torque and thrust force, which are the major excitation sources in drilling. However, quite a number of factors influence how the mechanical vibration is transferred and at which frequencies it appears. A higher frequency range, from 0.5 to 40 kHz, has been tested for vibration measurements with very thin drills. The reason for looking at this frequency range is that the rotational natural frequencies fall into it: for a drill of 1 mm diameter the natural frequency could be about 25 kHz, and for a drill of 3 mm diameter it could be about 7 kHz.
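As a rough illustration of this kind of frequency-band analysis, the short NumPy sketch below estimates the vibration energy in a band around an assumed drill natural frequency. The sampling rate, band edges and synthetic signal are illustrative assumptions made here, not values taken from any cited study.

import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    # Summed spectral power of `signal` between f_lo and f_hi (Hz).
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spectrum[mask]) ** 2))

fs = 100_000                                  # assumed 100 kHz sampling, covering 0.5-40 kHz
t = np.arange(0, 0.1, 1.0 / fs)
# Synthetic transverse vibration: a 25 kHz component (roughly the natural
# frequency quoted above for a 1 mm drill) plus broadband noise.
signal = np.sin(2 * np.pi * 25_000 * t) + 0.2 * np.random.randn(t.size)

print(band_energy(signal, fs, 20_000, 30_000))  # energy near the assumed drill mode
print(band_energy(signal, fs, 5_000, 10_000))   # reference band for comparison

In a monitoring setup one would track how such band energies evolve from hole to hole and look for characteristic changes associated with wear or breakage.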

3.2.2 Acoustic emission and ultrasonic vibration:

The use of ultrasonic vibrations (UE) in the frequency range from 20 to 80 kHz for tool breakage detection has been tested in various metal cutting processes, including drilling. The practicality of using ultrasonic vibrations becomes apparent when they are compared with other vibration techniques. Acoustic emission (AE) suffers from severe attenuation and multi-path distortion caused by the bolted joints commonly found in machine tool structures, which restricts the mounting location of the AE transducer to somewhere very near the tool or workpiece. The lower-frequency signal used for UE analysis does not suffer such severe attenuation and distortion, so the transducer can be placed fairly far from the chip-forming zone. In the low vibration frequency range, i.e. below 20 kHz, structural modes are prominent. A common strategy is to compare the amplitudes of several frequency bands in this range; a particular variation in the relative strengths of vibration in these bands indicates process abnormalities such as tool breakage or tool wear. This method shares the advantage of remote transducer placement with the UE method but is unfortunately much more sensitive to machine and tooling variations. Since structural modes change in complex ways with machine movement, loading, temperature and tooling, this approach generally must be tuned empirically each time the process is changed. In contrast, in the frequency range used for UE analysis the structural modes are so closely spaced that they form a so-called pseudo-continuum: there are no individual resonances to shift out of the analysis band with machine movement, loading, and so on.

3.2.3 Spindle motor and feed drive current:

Spindle motor current is in principle a measure of the same quantity as torque: both indicate how much power is used in the cutting process and both also carry information about the dynamics of cutting. It is fair to claim that torque is a more sensitive measurement than spindle motor current, since the torque sensor is located close to the cutting tool and, for example, the dynamics of the electric motor do not influence it to the same extent that they influence the current measurement. However, measuring torque is more complicated than measuring the current of the spindle motor, and therefore current measurement has also been widely tested and used. It is generally not possible to apply these measurements for predictive tool monitoring, i.e. stopping the machining once one or several signals rise above a particular limit value before actual tool failure. However, the measurements can be used for tool-breakage detection, where the machining operation is interrupted after tool breakage.
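The minimal sketch below illustrates the kind of limit-value breakage detection described here: the spindle-current trace is compared against an initial baseline and flagged once it departs from that baseline by more than a fixed limit. The baseline length, limit factor and synthetic trace are arbitrary assumptions for illustration only.

import numpy as np

def detect_breakage(current, baseline_samples=200, factor=5.0):
    # Return the index of the first sample (after the baseline window) where the
    # spindle current deviates from the baseline mean by more than `factor`
    # baseline standard deviations, or None if no such sample exists.
    current = np.asarray(current, dtype=float)
    base = current[:baseline_samples]
    limit = factor * base.std()
    hits = np.where(np.abs(current[baseline_samples:] - base.mean()) > limit)[0]
    return int(hits[0]) + baseline_samples if hits.size else None

# Invented example: a current trace that jumps at sample 500, mimicking a breakage event.
trace = np.concatenate([np.ones(500), 2.0 * np.ones(100)]) + 0.01 * np.random.randn(600)
print(detect_breakage(trace))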

CHAPTER 4

TAGUCHI METHOD

4.1 TAGUCHI METHOD

4.1.1 Design of experiments:

A well planned set of experiments, in which all parameters of interest are varied over a specified range, is a much better approach to obtaining systematic data. Mathematically speaking, such a complete set of experiments ought to give the desired results. Usually, however, the number of experiments and the resources (materials and time) required are prohibitively large, and the experimenter often decides to perform only a subset of the complete set of experiments to save time and money. Such an approach does not easily lend itself to an understanding of the science behind the phenomenon: the analysis is not straightforward (though it may be easy for a mathematician or statistician), and thus the effects of the various parameters on the observed data are not readily apparent. In many cases, particularly those in which some optimization is required, the method does not point to the BEST settings of the parameters. A classic analogy illustrating this drawback is the planning of a world cup event, say in football: all matches are carefully arranged with respect to the different teams, venues and dates, and yet the planning does not account for the result of any match (win or lose). Obviously, such a strategy is not desirable for conducting scientific experiments (except for co-ordinating the various institutions, committees, people, equipment, materials, etc.).

4.1.2 Taguchi Method:

Dr. Taguchi of the Nippon Telegraph and Telephone Company, Japan, developed a method based on "ORTHOGONAL ARRAY" experiments, which gives a much reduced "variance" for the experiment with "optimum settings" of the control parameters. Thus the marriage of design of experiments with optimization of control parameters to obtain the BEST results is achieved in the Taguchi method. "Orthogonal Arrays" (OA) provide a set of well balanced (minimum) experiments, and Dr. Taguchi's Signal-to-Noise ratios (S/N), which are log functions of the desired output, serve as objective functions for optimization, help in data analysis and allow prediction of the optimum results.

The Taguchi method treats optimization problems in two categories:

[A] STATIC PROBLEMS:

Generally, a process to be optimized has several control factors which directly decide the target or desired value of the output. The optimization then involves determining the best control factor levels so that the output is at the target value. Such a problem is called a "STATIC PROBLEM".

This is best explained using a P-Diagram ("P" stands for Process or Product). Noise is present in the process but should have no effect on the output; this is the primary aim of the Taguchi experiments - to minimize variations in the output even though noise is present in the process. The process is then said to have become ROBUST.

[B] DYNAMIC PROBLEMS:

If the product to be optimized has a signal input that directly decides the output, the optimization involves determining the best control factor levels so that the "input signal / output" ratio is closest to the desired relationship. Such a problem is called a "DYNAMIC PROBLEM".

This is also best explained by a P-Diagram. Again, the primary aim of the Taguchi experiments - to minimize variations in the output even though noise is present in the process - is achieved by obtaining improved linearity in the input/output relationship.

[A] STATIC PROBLEM (BATCH PROCESS OPTIMIZATION):

There are 3 Signal-to-Noise ratios of common interest for the optimization of Static Problems:

(I) SMALLER-THE-BETTER:

n = -10 Log10 [ mean of sum of squares of measured data ]

This is usually the chosen S/N ratio for all undesirable characteristics like "defects", for which the ideal value is zero. Also, when an ideal value is finite and its maximum or minimum value is defined (for example, maximum purity is 100%, maximum Tc is 92 K, or the minimum time for making a telephone connection is 1 s), then the difference between the measured data and the ideal value is expected to be as small as possible. The generic form of the S/N ratio then becomes,

n = -10 Log10 [mean of sum of squares of {measured - ideal} ]
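As a quick computational sketch of these two forms (assuming NumPy; the repeated measurements below are invented for illustration):

import numpy as np

def sn_smaller_the_better(y, ideal=0.0):
    # n = -10*log10( mean of squares of (measured - ideal) ); ideal = 0 recovers
    # the basic smaller-the-better form.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean((y - ideal) ** 2))

# Invented example: three repeated measurements of hole oversize (mm).
print(sn_smaller_the_better([0.012, 0.010, 0.015]))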

(II) LARGER-THE-BETTER:

n = -10 Log10 [ mean of sum of squares of reciprocals of measured data ]

This case is converted to SMALLER-THE-BETTER by taking the reciprocals of the measured data and then taking the S/N ratio as in the smaller-the-better case.
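Correspondingly, a brief sketch of the larger-the-better ratio (again assuming NumPy; the sample data is invented):

import numpy as np

def sn_larger_the_better(y):
    # n = -10*log10( mean of squares of 1/y ), i.e. smaller-the-better applied
    # to the reciprocals of the measured data.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean((1.0 / y) ** 2))

# Invented example: three repeated measurements of material removal rate.
print(sn_larger_the_better([4.2, 3.9, 4.5]))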

(III) NOMINAL-THE-BEST:

n = 10 Log10 [ (square of mean) / variance ]

This case arises when a specified value is MOST desired, meaning that neither a smaller nor a larger value is desirable. A short computational sketch of this ratio is given after the examples below.

Examples are:

(i) Most parts in mechanical fittings have dimensions which are nominal-the-best type.

(ii) Ratios of chemicals or mixtures are of the nominal-the-best type, e.g. aqua regia (1:3 HNO3:HCl) or the ratio of sulphur, KNO3 and carbon in gunpowder.

(iii) Thickness should be uniform in deposition/growth/plating/etching...
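The sketch below computes the nominal-the-best ratio referred to above. Whether the variance is taken with the sample (n-1) divisor is a convention assumed here, and the measurement data is invented.

import numpy as np

def sn_nominal_the_best(y):
    # n = 10*log10( (square of mean) / variance )
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

# Invented example: repeated measurements of a hole diameter (mm) whose
# nominal value is the target.
print(sn_nominal_the_best([0.201, 0.198, 0.202, 0.199]))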

[B] DYNAMIC PROBLEM (TECHNOLOGY DEVELOPMENT):

In dynamic problems, we come across many applications where the output is supposed to follow the input signal in a predetermined manner. Generally, a linear relationship between "input" and "output" is desirable.

For example: the accelerator pedal in cars, the volume control in audio amplifiers, document copiers (with magnification or reduction), various types of mouldings, etc.

There are 2 characteristics of common interest in such "follow-the-leader" or "transformation" type applications:

(i) Slope of the I/O characteristics

(ii) Linearity of the I/O characteristics (minimum deviation from the best-fit straight line)

The Signal-to-Noise ratios for these 2 characteristics have been defined as:

(I) SENSITIVITY {SLOPE}:

The slope of the I/O characteristic should be at the specified value (usually 1). It is often treated as Larger-The-Better when the output is a desirable characteristic (as in the case of sensors, where the slope indicates the sensitivity).

n = 10 Log10 [square of slope or beta of the I/O characteristics]

On the other hand, when the output is an undesirable characteristic, it can be treated as Smaller-the-Better.

n = -10 Log10 [square of slope or beta of the I/O characteristics]
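A minimal sketch of this sensitivity ratio follows; the zero-intercept least-squares estimate of the slope beta is an assumed convention, the invented x/y data is purely illustrative, and the sign is chosen by whether the output is desirable, as described above.

import numpy as np

def sn_sensitivity(x, y, output_desirable=True):
    # beta estimated by least squares through the origin; S/N is
    # +/- 10*log10(beta^2) depending on whether the output is desirable.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta = np.sum(x * y) / np.sum(x ** 2)
    sign = 10.0 if output_desirable else -10.0
    return sign * np.log10(beta ** 2)

print(sn_sensitivity([1.0, 2.0, 3.0], [1.1, 2.0, 3.2]))   # invented I/O data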

(II) LINEARITY (LARGER-THE-BETTER):

Most dynamic characteristics are required to have direct proportionality between the input and output. These applications are therefore called "TRANSFORMATIONS". The straight-line relationship between input and output must be truly linear, i.e. with as little deviation from the straight line as possible.

n = 10 Log10 [ (square of slope or beta) / variance ]

Variance in this case is the mean of the sum of squares of deviations of measured data points from the best-fit straight line (linear regression).
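A corresponding sketch of the linearity ratio, using an ordinary least-squares fit (an assumption made here; some Taguchi texts force the fit through a reference point) and invented data:

import numpy as np

def sn_linearity(x, y):
    # n = 10*log10( beta^2 / Ve ), where Ve is the mean squared deviation of
    # the data from the best-fit straight line.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    beta, intercept = np.polyfit(x, y, 1)
    residuals = y - (beta * x + intercept)
    return 10.0 * np.log10(beta ** 2 / np.mean(residuals ** 2))

print(sn_linearity([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]))   # invented I/O data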

Taguchi method is a scientifically disciplined mechanism for evaluating and implementing improvements in products, processes, materials, equipment, and facilities. These improvements are aimed at improving the desired characteristics and simultaneously reducing the number of defects by studying the key variables controlling the process and optimizing the procedures or design to yield the best results.

The method is applicable over a wide range of engineering fields, including processes that manufacture raw materials, sub-systems and products for professional and consumer markets. In fact, the method can be applied to any process, be it engineering fabrication, computer-aided design, banking or the service sector. The Taguchi method is useful for 'tuning' a given process for 'best' results.

Taguchi proposed a standard 8-step procedure for applying his method to optimize any process; an illustrative sketch of steps 5 to 7 is given after the list.

8-STEPS IN TAGUCHI METHODOLOGY:

Step-1: IDENTIFY THE MAIN FUNCTION, SIDE EFFECTS, AND FAILURE MODES

Step-2: IDENTIFY THE NOISE FACTORS, TESTING CONDITIONS, AND QUALITY CHARACTERISTICS

Step-3: IDENTIFY THE OBJECTIVE FUNCTION TO BE OPTIMIZED

Step-4: IDENTIFY THE CONTROL FACTORS AND THEIR LEVELS

Step-5: SELECT THE ORTHOGONAL ARRAY MATRIX EXPERIMENT

Step-6: CONDUCT THE MATRIX EXPERIMENT

Step-7: ANALYZE THE DATA, PREDICT THE OPTIMUM LEVELS AND PERFORMANCE

Step-8: PERFORM THE VERIFICATION EXPERIMENT AND PLAN THE FUTURE ACTION
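As a hedged illustration of steps 5 to 7, the sketch below lays out the standard L9(3^4) orthogonal array, attaches invented S/N ratios to the nine trials, and performs an analysis of means (ANOM) to pick the level of each factor that maximizes the mean S/N. All response values are fabricated purely for illustration and do not come from the microdrilling experiments.

import numpy as np

L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])                                             # standard L9 layout: 9 trials, 4 factors at 3 levels

sn = np.array([-3.1, -2.5, -4.0, -1.9, -2.2, -3.6, -2.8, -1.5, -3.3])  # invented S/N ratios (dB)

for factor in range(L9.shape[1]):
    level_means = [sn[L9[:, factor] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(level_means)) + 1     # maximize S/N, per the Taguchi criterion
    print(f"factor {factor + 1}: level means {np.round(level_means, 2)}, best level {best}")

The combination of best levels predicted in this way would then be confirmed in the verification experiment of step 8.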

CHAPTER 5

PRINCIPAL COMPONENT ANALYSIS

What is a Principal Component?

How principal components are computed. Technically, a principal component can be defined as a linear combination of optimally-weighted observed variables. In order to understand the meaning of this definition, it is necessary to first describe how subject scores on a principal component are computed.

In the course of performing a principal component analysis, it is possible to calculate a score for each subject on a given principal component. For example, in the preceding study (a seven-item job satisfaction questionnaire), each subject would have scores on two components: one score on the satisfaction-with-supervision component and one score on the satisfaction-with-pay component. The subject's actual scores on the seven questionnaire items would be optimally weighted and then summed to compute their scores on a given component.

Below is the general form of the formula used to compute scores on the first component extracted (created) in a principal component analysis:

C1 = b11(X1) + b12(X2) + ... + b1p(Xp)

where

C1 = the subject's score on principal component 1 (the first component extracted)

b1p = the regression coefficient (or weight) for observed variable p, as used in creating principal component 1

Xp = the subject's score on observed variable p.

For example, assume that component 1 in the present study was the "satisfaction with supervision" component. You could determine each subject's score on principal component 1 by using the following fictitious formula:

C1 = .44 (X1) + .40 (X2) + .47 (X3) + .32 (X4) + .02 (X5) + .01 (X6) + .03 (X7)

In the present case, the observed variables (the "X" variables) were subject responses to the seven job satisfaction questions; X1 represents question 1, X2 represents question 2, and so forth. Notice that different regression coefficients were assigned to the different questions in computing subject scores on component 1: questions 1-4 were assigned relatively large regression weights that range from .32 to .44, while questions 5-7 were assigned very small weights ranging from .01 to .03. This makes sense, because component 1 is the satisfaction-with-supervision component, and satisfaction with supervision was assessed by questions 1-4. It is therefore appropriate that items 1-4 would be given a good deal of weight in computing subject scores on this component, while items 5-7 would be given little weight. Obviously, a different equation, with different regression weights, would be used to compute subject scores on component 2 (the satisfaction-with-pay component). Below is a fictitious illustration of this formula:

C2 = .01 (X1) + .04 (X2) + .02 (X3) + .02 (X4) + .48 (X5) + .31 (X6) + .39 (X7)

The preceding shows that, in creating scores on the second component, much weight would be given to items 5-7, and little would be given to items 1-4. As a result, component 2 should account for much of the variability in the three satisfaction-with-pay items; that is, it should be strongly correlated with those three items.

At this point, it is reasonable to wonder how the regression weights from the preceding equations are determined. The SAS System's PROC FACTOR solves for these weights by using a special type of equation called an eigenequation. The weights produced by these eigenequations are optimal weights in the sense that, for a given set of data, no other set of weights could produce a set of components that are more successful in accounting for variance in the observed variables. The weights are created so as to satisfy a principle of least squares similar (but not identical) to the principle of least squares used in multiple regression. It is now possible to better understand the definition that was offered at the beginning of this section. There, a principal component was defined as a linear combination of optimally weighted observed variables. The words "linear combination" refer to the fact that scores on a component are created by adding together scores on the observed variables being analyzed. "Optimally weighted" refers to the fact that the observed variables are weighted in such a way that the resulting components account for a maximal amount of variance in the data set.
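The text above refers to the SAS System's PROC FACTOR; as a language-neutral sketch of the same idea (not that program), the weights can be obtained from an eigendecomposition of the correlation matrix, for example with NumPy. The randomly generated 7-item data below is purely illustrative and does not reproduce the questionnaire example.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))                 # 100 "subjects" x 7 questionnaire items (random)
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each item

R = np.corrcoef(Z, rowvar=False)              # 7 x 7 correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)          # the eigenequation: R v = lambda v
order = np.argsort(eigvals)[::-1]             # order components by variance accounted for
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                          # C = b1*X1 + ... + bp*Xp for every subject
print("variance accounted for by each component:", np.round(eigvals, 2))
print("first subject's score on component 1:", round(float(scores[0, 0]), 3))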

Number of components extracted. The preceding section may have created the impression that, if a principal component analysis were performed on data from the 7-item job satisfaction questionnaire, only two components would be created. However, such an impression would not be entirely correct. In reality, the number of components extracted in a principal component analysis is equal to the number of observed variables being analyzed. This means that an analysis of your 7-item questionnaire would actually result in seven components, not two. However, in most analyses, only the first few components account for meaningful amounts of variance, so only these first few components are retained, interpreted, and used in subsequent analyses (such as in multiple regression analyses). For example, in your analysis of the 7-item job satisfaction questionnaire, it is likely that only the first two components would account for a meaningful amount of variance; therefore only these would be retained for interpretation. You would assume that the remaining five components accounted for only trivial amounts of variance. These latter components would therefore not be retained, interpreted, or further analyzed.
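Continuing the previous sketch under the same assumptions, the proportion of variance accounted for by each component can be used to decide how many components to retain. The 80% cut-off below is an arbitrary illustrative choice; with structured questionnaire data only the first few components would typically pass it, whereas the random data here exists only to make the code runnable.

import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 7))
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)                # standardized 7-item data (illustrative)

eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
explained = eigvals / eigvals.sum()                      # proportion of total variance per component
cumulative = np.cumsum(explained)
retained = int(np.searchsorted(cumulative, 0.80)) + 1    # smallest k whose cumulative share reaches 80%

print(np.round(explained, 3))
print("components retained:", retained)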

Characteristics of principal components. The first component extracted in a principal component analysis accounts for a maximal amount of total variance in the observed variables. Under typical conditions, this means that the first component will be correlated with at least some of the observed variables. It may be correlated with many. The second component extracted will have two important characteristics. First, this component will account for a maximal amount of variance in the data set that was not accounted for by the first component. Again under typical conditions, this means that the second component will be correlated with some of the observed variables that did not display strong correlations with component 1.

The second characteristic of the second component is that it will be uncorrelated with the first component. Literally, if you were to compute the correlation between components 1 and 2, that correlation would be zero. The remaining components that are extracted in the analysis display the same two characteristics: each component accounts for a maximal amount of variance in the observed variables that was not accounted for by the preceding components, and is uncorrelated with all of the preceding components. A principal component analysis proceeds in this fashion, with each new component accounting for progressively smaller and smaller amounts of variance (this is why only the first few components are usually retained and interpreted). When the analysis is complete, the resulting components will display varying degrees of correlation with the observed variables, but are completely uncorrelated with one another.

SUMMARY:

Every experimenter develops a nominal process/product that has the desired functionality demanded by users. Beginning with these nominal processes, the experimenter wishes to optimize the processes/products by varying the control factors at his disposal, such that the results are reliable and repeatable (i.e. show less variation).

In the Taguchi method, the word "optimization" implies "determination of the BEST levels of control factors". In turn, the BEST levels of control factors are those that maximize the Signal-to-Noise ratios. The Signal-to-Noise ratios are log functions of the desired output characteristics. The experiments conducted to determine the BEST levels are based on "Orthogonal Arrays", are balanced with respect to all control factors, and yet are minimum in number. This in turn implies that the resources (materials and time) required for the experiments are also minimal.

The Taguchi method divides all problems into 2 categories - STATIC or DYNAMIC. While dynamic problems have a SIGNAL factor, static problems do not have any signal factor. In static problems, the optimization is achieved by using 3 Signal-to-Noise ratios - smaller-the-better, larger-the-better and nominal-the-best. In dynamic problems, the optimization is achieved by using 2 Signal-to-Noise ratios - slope and linearity.