Introduction:
Fault diagnosis is the logical component of failure analysis; appropriately, its domain is that of the logical fault, or simply fault, which is an abstract representation of how an element in a defective circuit misbehaves. A description of the behavior and assumptions about the nature of a logical fault is referred to as a fault model.
Many early diagnostic systems used a simple matching process, in which the signature of a fault candidate either had to match the circuit's fault signature exactly, containing every error-carrying vector and output, or had to be a subset thereof. As diagnostic techniques matured, the matching process became more flexible; a good example of a simple generalization is the partial-intersection operation, which ranks matches by the size of the intersection. The matching algorithms employed by diagnostic techniques are often essential in translating from abstract fault models to defects, or from targeted fault models to untargeted faults, or in handling the vagaries of faulty circuit behavior.
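As an illustration of this partial-intersection idea, the short Python sketch below (added here for clarity; the set-of-failing-(vector, output)-pairs representation and all names are assumptions for illustration, not from any specific diagnosis tool) ranks candidate fault signatures by how much they overlap the observed fault signature.

```python
# Illustrative sketch: rank fault candidates by the size of the intersection
# between each candidate's simulated signature and the observed fault signature.
# A "signature" is assumed here to be a set of (failing_vector, failing_output) pairs.

def rank_candidates(observed_signature, candidate_signatures):
    """Return candidates sorted by |candidate ∩ observed|, largest overlap first."""
    ranked = []
    for name, signature in candidate_signatures.items():
        overlap = len(signature & observed_signature)   # partial-intersection size
        ranked.append((name, overlap))
    ranked.sort(key=lambda item: item[1], reverse=True)
    return ranked

# Hypothetical example data:
observed = {("v1", "o3"), ("v4", "o1"), ("v7", "o2")}
candidates = {
    "net_A_stuck_at_0": {("v1", "o3"), ("v4", "o1")},    # subset of the observed signature
    "net_B_stuck_at_1": {("v1", "o3"), ("v9", "o5")},    # partial match
    "net_C_stuck_at_0": {("v2", "o4")},                  # no overlap
}
print(rank_candidates(observed, candidates))
```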
The following sections describe previous approaches taken to the problem of fault diagnosis. As indicated above, most traditional (cause-effect) techniques involve two primary elements: a fault model, and a comparison or matching algorithm. The approaches described are primarily organized by the fault model used: stuck-at, bridging, or another model. Each technique is presented with a description of the matching algorithm used for diagnosis construction. Subsequent sections discuss other techniques that are not as easily categorized by fault model and matching algorithm.
The first steps towards bridging fault diagnosis retained the legacy of stuck-at signatures, using these readily-available fault descriptions to approximate or identify bridging fault behavior. Many simple approaches merely compared stuck-at signatures to the observed behavior, and implicated the (single) nodes which most closely matched.
In this paper, a selected SoC core is tested using random testing and IEEE 1500 standard testing. Random testing is further classified into Anti-Random Testing (ART), Pseudo-Random Testing (PRT), and Reed-Solomon codes for testing. These random testing methods target stuck-at faults and bridging faults. IEEE 1500 standard testing has two compliance levels, IEEE 1500 unwrapped and IEEE 1500 wrapped; in this work, the IEEE 1500 unwrapped level is used to test the selected core.
Antirandom testing is a process in which each new test pattern is chosen to be as different as possible from all patterns previously applied to the selected core; it is a variation of pure random testing. Error detection and correction codes, widely used in data communications and large-scale data storage, can likewise generate code words with an appropriate minimum Hamming distance. Random testing is usually chosen for circuit testing because of its merits: it is typically easy and quick to implement. Despite these merits, in most cases random test inputs have no associated explicit expected return value, and random testing can therefore produce a number of core failures, where a failure means that the SoC core either hangs or crashes. Several research studies report that pure random testing is less effective at discovering faults than other testing methodologies. In hardware testing, anti-random testing generates an input set of random values that are deliberately kept as dissimilar from one another as possible. If very similar test patterns are applied to a circuit under test, they tend to expose the same types of bugs; higher test effectiveness is obtained from an input set whose values are very different from each other. Anti-random testing does not require any knowledge of the internal implementation of the circuit under test, and is therefore a form of black-box testing.
To obtain eight different input patterns, 3 bits are needed; the possibilities are {000, 001, 010, 011, 100, 101, 110, 111}. Since applying all of them is sometimes not practical, an anti-random (AR) test set containing four inputs is generated instead. As usual, the first test is the all-zero vector {0,0,0}. The next input is chosen to be entirely different from the first, i.e., {1,1,1}. Using the same technique, the next input chosen is {0,0,1}. The final test vector must be selected from the remaining five patterns so that it maximizes the total distance to the vectors already chosen. Candidates such as 010 (Hamming distances 1 + 2 + 2 = 5 to 000, 111, and 001) or 101 (2 + 1 + 1 = 4) give smaller totals, whereas 110 (2 + 1 + 3 = 6) gives the maximum and is therefore selected. This is how the test patterns are enumerated. This method is used in this work because of the small input bit width involved. A short sketch that reproduces this enumeration is given below; the following definitions then introduce the terms used and examine the construction of antirandom sequences.
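The Python sketch below (an illustration added for clarity, not part of the original test tool) reproduces the enumeration above: given the partial test set {000, 111, 001}, it computes the total Hamming distance of every remaining candidate and confirms that 110 is the maximizing choice.

```python
# Illustrative sketch: choose the next anti-random vector for a 3-bit input space
# by maximizing the total Hamming distance to the vectors already chosen.
from itertools import product

def hamming(a, b):
    """Number of bit positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

chosen = ["000", "111", "001"]          # partial anti-random test set
candidates = ["".join(bits) for bits in product("01", repeat=3)
              if "".join(bits) not in chosen]

totals = {c: sum(hamming(c, t) for t in chosen) for c in candidates}
for c, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(c, total)                     # 110 comes out on top with a total of 6
```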
Binary Antirandom Sequences
Definition: An antirandom test sequence (ATS) is a test sequence in which each test t_i is chosen so that it satisfies some criterion with respect to all tests t_0, t_1, …, t_{i−1} applied before it. In this paper we use the two specific criteria introduced below.
Definition: Distance is a measure of how different two vectors t_i and t_j are. Here we use the two measures of distance defined below.
Definition: Hamming Distance (HD) is the number of bit positions in which two binary vectors differ.
Definition: Cartesian Distance (CD) between two vectors A = {a_1, a_2, …, a_N} and B = {b_1, b_2, …, b_N} is given by:

CD(A, B) = √((a_1 − b_1)^2 + (a_2 − b_2)^2 + … + (a_N − b_N)^2)    (1)

If all the variables in the two vectors are binary, then equation (1) can be written as:

CD(A, B) = √(|a_1 − b_1| + |a_2 − b_2| + … + |a_N − b_N|) = √HD(A, B)
Definition: Total Cartesian Distance (TCD) for any vector is the sum of its Cartesian distances with respect to all previous vectors.
Definition: A Maximal Distance Antirandom Test Sequence (MDATS) is a test sequence in which each test t_i is chosen so that its total distance to all previous tests t_0, t_1, …, t_{i−1} is maximal, i.e.

TD(t_i) = Σ_{j=0}^{i−1} D(t_i, t_j) is maximum over all possible choices of t_i.

Depending on whether the Hamming distance or the Cartesian distance is used as the measure D, the resulting sequences are called MHDATSs and MCDATSs respectively.
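The definitions above translate directly into a greedy generator. The Python sketch below (an illustration assuming a brute-force search over the full input space, which is practical only for small bit widths) builds an MHDATS or MCDATS by repeatedly selecting the candidate vector with the largest total Hamming or Cartesian distance to the vectors already chosen.

```python
# Illustrative greedy construction of a maximal-distance antirandom test sequence.
# Brute-force over all 2**n candidate vectors, so only practical for small n.
from itertools import product
from math import sqrt

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cartesian(a, b):
    # For binary vectors, CD(A, B) = sqrt(HD(A, B)).
    return sqrt(hamming(a, b))

def antirandom_sequence(n_bits, length, distance=hamming):
    """Greedy MDATS: each new test maximizes its total distance to all previous tests."""
    all_vectors = ["".join(bits) for bits in product("01", repeat=n_bits)]
    sequence = ["0" * n_bits]                      # start from the all-zero vector
    while len(sequence) < length:
        remaining = [v for v in all_vectors if v not in sequence]
        best = max(remaining, key=lambda v: sum(distance(v, t) for t in sequence))
        sequence.append(best)
    return sequence

print(antirandom_sequence(3, 4, distance=hamming))     # MHDATS, e.g. ['000', '111', '001', '110']
print(antirandom_sequence(3, 4, distance=cartesian))   # MCDATS for the same parameters
```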
Pseudorandom Test Method
The pseudorandom method can be applied to testing both linear and nonlinear circuits; its results are compared with the Volterra kernel coefficients used to test both kinds of circuits. The length of a maximum-length sequence is denoted by L, where

L = 2^m − 1

and m is an integer denoting the order of the sequence.
The output of a Linear Time-Invariant (LTI) system is

y(k) = x(k) * h(k),

where x(k) is the input signal and h(k) is the impulse response of the system. The input/output cross-correlation is
φ_xy(k) = y(k) * x(−k)
        = h(k) * (x(k) * x(−k))
        = h(k) * φ_xx(k)
⇒ φ_xy(k) ≈ h(k) if φ_xx(k) ≈ δ(k)
For discrete sequences, the cross-correlation operation is defined as

φ_xy(k) = (1/L) Σ_{j=0}^{L−1} x(j−k) y(j),

where the elements of x(k) are all ±1.
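As a concrete illustration of this identification procedure, the following Python sketch (added here for clarity; the LFSR tap polynomial and the example impulse response are arbitrary assumptions, not values from the paper) generates a ±1 maximum-length sequence with a linear feedback shift register, drives a small LTI system with it, and recovers an estimate of the impulse response from the circular cross-correlation.

```python
# Illustrative sketch: estimate an LTI impulse response by cross-correlating the
# output with a +/-1 maximum-length sequence (MLS) input.
import numpy as np

def mls(m, taps):
    """Generate a maximum-length sequence of length L = 2**m - 1 as +/-1 values.
    'taps' are the 1-indexed LFSR feedback positions; they must correspond to a
    primitive polynomial for the sequence to reach maximal length."""
    state = [1] * m
    seq = []
    for _ in range(2**m - 1):
        seq.append(1.0 if state[-1] else -1.0)
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq)

m = 7
x = mls(m, taps=[7, 6])                        # x^7 + x^6 + 1 is primitive, so L = 127
h_true = np.array([1.0, 0.5, 0.25, 0.125])     # example impulse response (assumed)

# Circular convolution of the periodic MLS input with the example system.
L = len(x)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true, L)))

# phi_xy(k) = (1/L) * sum_j x(j - k) y(j), which approximates h(k) for an MLS input.
phi_xy = np.array([np.dot(np.roll(x, k), y) / L for k in range(len(h_true) + 2)])
print(np.round(phi_xy, 3))                     # close to [1.0, 0.5, 0.25, 0.125, ~0, ~0]
```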
For some kinds of nonlinear systems the simple Hammerstein model is not suitable; such systems are called strongly nonlinear. In that situation the Volterra series modeling technique can be used to test the strongly nonlinear system using the following equation:

y(n) = Σ_{r=1}^{R} Σ_{m_1=0}^{M−1} … Σ_{m_r=0}^{M−1} h_r(m_1, m_2, …, m_r) Π_{i=1}^{r} x(n − m_i),

where x and y are respectively the input and output of the system, r is the nonlinearity order, M is the memory of the system, and h_r(m_1, m_2, …, m_r) represents a coefficient of the rth-order Volterra kernel h_r.
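The sketch below (an illustration with an arbitrary second-order kernel chosen here, not a model from the paper) evaluates a truncated Volterra series with nonlinearity order R = 2 and memory M = 3, which is enough to show how the kernel coefficients enter the output.

```python
# Illustrative evaluation of a truncated Volterra series (R = 2, M = 3).
import numpy as np
from itertools import product

M = 3                                   # system memory
h1 = np.array([1.0, 0.4, 0.1])          # 1st-order kernel coefficients (assumed)
h2 = 0.05 * np.ones((M, M))             # 2nd-order kernel coefficients (assumed)

def volterra_output(x, h1, h2):
    """y(n) = sum_m1 h1(m1) x(n-m1) + sum_m1 sum_m2 h2(m1,m2) x(n-m1) x(n-m2)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for m1 in range(M):
            if n - m1 >= 0:
                y[n] += h1[m1] * x[n - m1]
        for m1, m2 in product(range(M), repeat=2):
            if n - m1 >= 0 and n - m2 >= 0:
                y[n] += h2[m1, m2] * x[n - m1] * x[n - m2]
    return y

x = np.random.choice([-1.0, 1.0], size=16)    # pseudorandom +/-1 test stimulus
print(np.round(volterra_output(x, h1, h2), 3))
```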
Reed-Solomon Codes For Testing
Although it is not implemented in this paper, a few details about Reed-Solomon codes for testing are given here. Within random testing, Reed-Solomon Testing (RST) is another test generation scheme based on error-correcting codes, chosen for its minimum-Hamming-distance properties. The main difference between ART and RST is that ART exploits the sequence in which the vectors are applied, since all input combinations could eventually be applied, whereas in RST only a fraction of the input combinations, the code words, are used. RST has the advantage that algorithms and hardware designs for code-word generation are already available [10].
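As a rough illustration of how RST code words might be enumerated in software (a sketch assuming the third-party Python package reedsolo, which is not used in the paper), the following generates Reed-Solomon code words over GF(2^8) from short message bytes and treats each code word as a candidate test vector.

```python
# Illustrative sketch: generate Reed-Solomon code words to use as test vectors.
# Requires the third-party 'reedsolo' package (pip install reedsolo).
from reedsolo import RSCodec

rsc = RSCodec(4)                       # 4 parity symbols over GF(2^8)

test_vectors = []
for message in range(8):               # a few short, distinct message values
    codeword = rsc.encode(bytes([message]))      # message byte + 4 parity bytes
    bits = "".join(f"{byte:08b}" for byte in codeword)
    test_vectors.append(bits)

for v in test_vectors:
    print(v)                           # code words share a guaranteed minimum symbol distance
```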
IEEE 1500 Standard Testing
IEEE 1500 is a standard under development that aims to improve ease of test reuse and test integration for core-based system-on-chip (SoC) designs. This paper builds on the wrapper cell design for SoC testing defined in the IEEE 1500 standard for digital embedded cores. The digital cores used in the study were constructed from ISCAS 85 combinational and ISCAS 89 sequential benchmark circuits. The wrapper that separates the core under test from the other cores is assumed to be IEEE 1500-compliant. The test access mechanism (TAM) provides the connection between the test sources, the cores, and the test sinks, and plays an important role in transporting the test patterns to the desired core and the core responses to the output pins of the SoC; it is crucial in any SoC design. The faults are injected using a fault simulator that generates tests for the core. The cores and the TAM are described in VHDL. The outcome is the fault coverage of all the cores tested.
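To make the wrapper concept concrete, the following Python sketch is a highly simplified behavioral model (an assumption for illustration only, not the IEEE 1500 hardware or the VHDL used in this work) of a wrapper boundary register that can shift a test pattern in serially, apply it to the core inputs, and shift the captured response back out.

```python
# Highly simplified behavioral model of an IEEE 1500-style wrapper boundary register.
# Real wrappers are hardware (here described in VHDL); this Python model only
# mimics the shift-in / capture / shift-out flow for illustration.

class WrapperBoundaryRegister:
    def __init__(self, width):
        self.cells = [0] * width            # one wrapper cell per core terminal

    def shift_in(self, pattern_bits):
        """Serially shift a test pattern into the wrapper cells (WSI -> cells)."""
        for bit in pattern_bits:
            self.cells = [int(bit)] + self.cells[:-1]

    def apply_and_capture(self, core_function):
        """Drive the core inputs from the cells and capture its response."""
        response = core_function(self.cells)
        self.cells = list(response)
        return response

    def shift_out(self):
        """Serially shift the captured response out (cells -> WSO)."""
        out = []
        for _ in range(len(self.cells)):
            out.append(self.cells[-1])
            self.cells = [0] + self.cells[:-1]
        return out

# Example: a toy 4-input "core" that simply inverts its inputs.
wbr = WrapperBoundaryRegister(4)
wbr.shift_in("1010")
wbr.apply_and_capture(lambda ins: [1 - b for b in ins])
print(wbr.shift_out())
```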
Statistics for the selection of C880
Among several combinational benchmark circuits, C880 is chosen for the following reasons: C880 has 383 gates, 60 inputs, 26 outputs, and requires only 16 test cycles.
Bus Functions: With reference to Fig. 2, the functions of the I/O buses are tabulated in Table 1.
Tool Development
For the Anti-Random and Pseudo-Random testing, a dedicated software tool was developed and used to simulate the selected benchmark circuits in the software testing lab; for the IEEE 1500 standard testing, a wrapper for the selected benchmark circuit and a TAM were developed and used in the VLSI and Embedded lab at SASTRA University, Thanjavur.
Testing combinational circuits for stuck-at faults:
The following results were obtained while using the Pseudo-Random testing method.
Table 1: Stuck-at Fault coverage.
Conclusion
Modern experimental SoC testing techniques have been reviewed in this paper. Three main techniques, IEEE 1500 standard testing, Anti-Random testing, and Pseudo-Random testing, have been applied; the IEEE 1500 standard provides higher efficiency than Anti-Random testing, while the Pseudo-Random technique produced the worst results.
References:
10. W. A. Geisel, Tutorial on Reed-Solomon Error Correction Coding, NASA Technical
Memorandum 102162, 1990.