DFT Materials and Notes for TetraMAX

This blog notes basic DFT knowledge and gives a short summary of TetraMAX. It is not a complete DFT tutorial, but a reference.


  • Testing
Testing after an IC is fabricated is necessary because not every die on the wafer will work (fault-free), due to process variations and defects.
Testing is done twice: the first time before packaging (wafer-stage testing), so that only fault-free dice are packaged, and the second time after packaging, to ensure a fault-free IC.
Testing can contribute as much as 60% of the total cost, because this cost does not fall readily with production volume. Reducing it is therefore an important goal, which can be achieved by better design (good testability, i.e. a measure of how easily or with what difficulty a design can be tested) and/or an effective set of test vectors.


• Testing Stages
– Test Generation
testability measurement
generate efficient test vectors using a simple fault model
special structures designed to improve testability
– Test Evaluation
evaluate the performance of the generated tests (fault coverage)
fault simulation
– Test Application
ATE (automatic test equipment)
hardware limitations of ATE

TESTABILITY is an important concept in testing. Many approaches to measuring it exist, but all are based on a similar idea.
To test a module embedded within a large circuit, one must make sure that the inputs to this module are easily controllable through the Primary Inputs (PI) and that its outputs can readily be observed at the Primary Outputs (PO), with the rest of the circuit's activity being transparent. Each such input can be given a quantity called controllability; likewise, each such output can be given a quantity called observability. The testability of a node is the product of its controllability and observability, and the aggregate of all node testabilities gives the total testability metric for the circuit. It is very important to note that this metric can only be used to compare variants of the same design.
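
As a rough illustration, here is a minimal Python sketch of this metric, assuming each node already carries controllability and observability values in [0, 1]; the `Node` class and the value scale are illustrative, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    controllability: float  # how easily the node can be driven from the PIs
    observability: float    # how easily the node's value reaches a PO

def node_testability(n: Node) -> float:
    # Testability of a node = controllability * observability.
    return n.controllability * n.observability

def circuit_testability(nodes: list[Node]) -> float:
    # Aggregate of all node testabilities; only meaningful when comparing
    # variants of the same design.
    return sum(node_testability(n) for n in nodes)

nodes = [Node("a", 0.9, 0.8), Node("b", 0.4, 0.3), Node("c", 0.7, 0.6)]
print(circuit_testability(nodes))  # 1.26 for these made-up values
```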

  • Fault model
From a fabrication process engineer's viewpoint, the possible failure modes of a circuit are numerous: an open metal interconnection, shorted metal interconnections, incomplete opening of contact holes, a device source or drain node shorted to the device gate due to misaligned or overetched contacts, crystalline or dielectric defects resulting in anomalous device currents, and so on. These possibilities are too numerous to be addressed individually. The alternative is to use a fault model, which describes how physical faults express themselves logically.

Stuck-at Fault Model: For this model, it is also assumed that only a single fault occurs in the circuit, to simplify the algorithms used in automated test generation and test evaluation. This is acceptable in practice because a test set that detects all single faults also detects the large majority of multiple-fault combinations, and the location of a fault is of no consequence.
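
As a small illustration, the following sketch (gate and fault names are made up) injects a single stuck-at-0 fault on one input of an AND gate and searches for a vector that detects it.

```python
def and_gate(a: int, b: int) -> int:
    return a & b

def and_gate_faulty(a: int, b: int, stuck_line: str, stuck_val: int) -> int:
    # Force the faulty line to its stuck value before evaluating the gate.
    if stuck_line == "a":
        a = stuck_val
    elif stuck_line == "b":
        b = stuck_val
    return a & b

# Only vector (1, 1) detects "a stuck-at-0": good output 1, faulty output 0.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    if and_gate(a, b) != and_gate_faulty(a, b, "a", 0):
        print(f"vector ({a},{b}) detects a/0")
```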

Multiple Faults: The multiple stuck-fault model is a straightforward extension of the single stuck-fault model in which several lines can be stuck simultaneously. This model is necessary when fault masking is considered: the condition in which one fault is masked by the occurrence of another fault, so that the masked fault cannot be detected if tested for individually.

Bridging Faults: Bridging faults are caused by shorts between two (or more) normally unconnected signal lines. Since the lines involved in a short become equipotential, all of them have the same logic value. For a shorted line i, we have to distinguish between the value one can actually observe on i and the value of i as determined by its source. Unlike in the single stuck-fault model, in this model there is no need to distinguish between a stem and its fanout branches, since they always have the same values.
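
One common simplification is to assume the short resolves as a wired-AND (or wired-OR); the sketch below uses wired-AND, which is only one possible assumption about how the shorted lines settle.

```python
# Wired-AND bridging model: both shorted lines take the AND of the values
# their sources try to drive, so they stay equipotential.
def bridged(driven_i: int, driven_j: int) -> tuple[int, int]:
    v = driven_i & driven_j
    return v, v

# Line i's source drives 1, but the value observed on i is 0.
print(bridged(1, 0))  # (0, 0)
```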

AC Faults: Performance-related faults cause a device to malfunction at rated clock frequencies. They are not likely to be detected until system-level functional testing. For example, a shorted interconnection between the gate and source nodes of the external depletion-mode load device in a push-pull circuit will increase the circuit's 0-to-1 logic transition delay considerably; the logic output value is nevertheless correct.

  • AUTOMATIC TEST PATTERN GENERATION (ATPG)
D-ALGORITHM (developed by Roth at IBM in 1967)
An algorithm that sensitizes only a single path to propagate a fault to a primary output is not guaranteed to find a test vector even when one exists; the D-algorithm avoids this limitation by allowing multiple paths to be considered.
Procedure:
1. assign the D value to the given fault
2. set up the inputs to excite this fault
3. set up the activity vector A, which contains the numbers of the gates having a D value at their inputs
4. take the gate with the smallest number from vector A
5. set up the inputs to propagate the D value through this gate
6. go back to step 3, but stop as soon as a D value appears at a primary output
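
The procedure rests on the five-valued D-calculus, in which D means "1 in the good circuit, 0 in the faulty circuit" and D' means the opposite. Below is a minimal sketch of D propagation through an AND gate; it is not a full D-algorithm implementation, and the circuit fragment is illustrative.

```python
# Five-valued logic: 0, 1, D, D' and X (unknown).
D, DBAR, X = "D", "D'", "X"

def and2(a, b):
    # A D on one input propagates when the other input is non-controlling (1).
    if a == 0 or b == 0: return 0
    if a == 1: return b
    if b == 1: return a
    if a == b: return a                 # D & D = D
    if {a, b} == {D, DBAR}: return 0    # D & D' = 0
    return X

# Steps 1-2: excite "f stuck-at-0" by driving f's source to 1, so f = D.
f = D
# Steps 3-5: to push D through an AND gate on the path to a PO, set the
# gate's other input to the non-controlling value 1.
out = and2(f, 1)
print(out)  # "D": the fault effect has been propagated one gate further
```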

  • Fault simulation
Conceptually, fault simulation determines what a fault does to a logic network. Therefore, a logic simulator can be used to perform fault simulation by holding a faulty node at a permanent logic level. This is obviously inefficient because
1. a complete simulation run is necessary to detect the effect of each faulty node, and
2. a fault that has no effect on the network is also simulated.
One use of fault simulation is to evaluate a test set. The measurement is called fault coverage: the ratio of the number of faults detected by the test set to the total number of simulated faults. This measurement is of course valid only for the fault model used. Fault coverage greatly influences the quality of the shipped product.
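
A minimal sketch of this naive, one-fault-per-run approach on a made-up circuit y = (a AND b) OR c, including the fault-coverage computation:

```python
def simulate(a, b, c, stuck=None):
    n1 = a & b
    if stuck == "n1/0": n1 = 0   # hold the faulty node at a permanent 0
    if stuck == "n1/1": n1 = 1   # or at a permanent 1
    return n1 | c

tests = [(1, 1, 0), (0, 0, 0)]
faults = ["n1/0", "n1/1"]
detected = set()
for v in tests:
    good = simulate(*v)                      # one fault-free run per vector
    for f in faults:
        if simulate(*v, stuck=f) != good:    # one extra full run per fault
            detected.add(f)

print(f"fault coverage = {len(detected)}/{len(faults)}")  # 2/2 here
```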



The parallel fault simulator works like an event-driven logic simulator, but with a difference: the logic values of a node under several faults are stored in a data WORD together with the normal logic value of the fault-free circuit. The fault simulator always operates on a whole data word, so that when a node is evaluated, the effect of a number of faults on this node is evaluated in parallel. For example, a computer with a word length of n can simulate (n-1) faults in parallel.

Here is one question: why can a word of length n simulate only (n-1) faults in parallel? The reason is that one bit position must be reserved for the value of the fault-free circuit, leaving n-1 positions for faulty circuits. The sketch below illustrates this.
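
This sketch assumes a word of 4 bits: bit 0 carries the fault-free circuit and bits 1 to 3 each carry one faulty circuit.

```python
WORD = 4                    # assumed word length: 1 good + 3 faulty circuits

# Node values for y = a AND b under test vector a=1, b=1.
a = 0b1111                  # a is 1 in the good circuit and in all faulty ones
b = 0b1111
a &= ~0b0010                # fault 1 is "a stuck-at-0": clear a's bit 1
y = a & b                   # one AND evaluates all 4 circuits at once

good = y & 1                # bit 0 is the fault-free value
for i in range(1, WORD):    # any bit differing from bit 0 is a detected fault
    if (y >> i) & 1 != good:
        print(f"fault {i} detected")   # prints: fault 1 detected
```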


When the circuit contains more faults than one word can hold, several passes must be performed for each test vector; with F faults, roughly F/(n-1) passes are needed per vector.

The run time of a parallel fault simulator is found to grow roughly as the cube of the number of gates in the circuit. It can be greatly reduced if detected faults are periodically dropped from the simulation.



CONCURRENT FAULT SIMULATION

Concurrent fault simulation requires less computation than parallel fault simulation, and only one pass over the circuit is necessary. The trick is that a fault list is associated with each gate in the circuit. A fault list has entries for only those faulty circuits which cause a state difference at the inputs or output of the gate concerned. Each entry includes a description of the fault and the input and output states resulting from it.
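
A minimal sketch of the fault-list idea for a single AND gate; the representation and fault names are illustrative.

```python
good_inputs, good_output = (1, 1), 1        # fault-free states of this AND gate

fault_list = {
    # fault id: (inputs seen under that fault, resulting output) -- an entry
    # exists only because these states differ from the fault-free ones.
    "a/0": ((0, 1), 0),
    "b/0": ((1, 0), 0),
}
# A fault elsewhere in the circuit that leaves this gate's states unchanged
# never appears in this list, which is where the savings over parallel
# simulation come from.
for fid, (ins, out) in fault_list.items():
    print(f"{fid} diverges here: inputs {ins} -> output {out}")
```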


  • STRUCTURED DESIGN TECHNIQUE FOR TESTABILITY
PRINCIPLE OF SCAN PATH: Add a scan path through the storage elements by connecting them in series, as in a shift register. In addition, a multiplexer is added at the data input of each storage element to select between the normal input and the scan input.




In normal operation, SCAN SELECT is off. If SCAN SELECT is on, the storage elements together form a shift register. By clocking the appropriate sequence into the shift register at SCAN DATA IN, the storage elements can be set to any desired values. Similarly, if SCAN SELECT is first turned off and the next state is clocked into the storage elements, these values can be observed at SCAN DATA OUT by clocking the shift register while SCAN SELECT is on. As a result, the two combinational logic blocks can be tested in isolation, and test generation reduces to that of a combinational circuit.
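
A behavioral sketch of such a scan chain in Python; the class and signal handling are illustrative, not any particular library cell.

```python
class ScanChain:
    def __init__(self, n):
        self.ff = [0] * n                     # the storage elements

    def clock(self, scan_select, scan_in=0, data=None):
        if scan_select:                       # shift-register mode
            self.ff = [scan_in] + self.ff[:-1]
        else:                                 # normal mode: capture data inputs
            self.ff = list(data)

    def scan_out(self):
        return self.ff[-1]                    # SCAN DATA OUT is the last element

chain = ScanChain(3)
for bit in [1, 0, 1]:                         # shift in the desired test state
    chain.clock(1, scan_in=bit)
print(chain.ff)                               # [1, 0, 1]: state set via the scan path
chain.clock(0, data=[0, 1, 1])                # one normal clock captures the next state
observed = []
for _ in range(3):                            # shift out while SCAN SELECT is on
    observed.append(chain.scan_out())
    chain.clock(1, scan_in=0)
print(observed)                               # [1, 1, 0]: captured state, last element first
```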



RANDOM ACCESS SCAN: The storage elements in a random access scan design are individually addressable. Once a storage element is selected, it can be cleared or set by the common CLEAR and PRESET signals.
Random access scan does not contain a scan path as such, but it follows the other principles of a scan design.
To reduce the number of test pins, the SCAN DATA OUTPUTs of the addressable latches are wired-AND together. Since only a single addressable latch is selected at any one time, the other latches are deactivated and their SCAN DATA OUTPUTs are set high.
As a result, the wired-AND output reflects only the changes in the output of the selected latch.
Random access scan can sometimes reduce testing time because not every storage element has to be set up, and observed, for each test vector.
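
A small sketch of the wired-AND observation path, with made-up latch values:

```python
latch_q = [1, 0, 1, 1]                   # outputs of the addressable latches
selected = 1                             # address of the latch being observed

outs = [q if i == selected else 1        # deselected latches drive their
        for i, q in enumerate(latch_q)]  # SCAN DATA OUTPUT high
scan_data_out = all(outs)                # wired-AND of all the outputs
print(int(scan_data_out))                # 0: the line follows latch 1 only
```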



LEVEL SENSITIVE SCAN DESIGN (LSSD): In LSSD there is no multiplexer at the data input of the storage elements as in scan path. Instead, selection is effected implicitly within a special-purpose storage device called the Polarity-Hold Shift Register Latch (SRL), with the help of a special clocking scheme.
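
A behavioral sketch of an SRL, assuming the usual two-latch structure (L1 loads system data on system clock C or scan data on scan clock A; L2 copies L1 on clock B); the class is illustrative, and the clocks are assumed non-overlapping.

```python
class SRL:
    def __init__(self):
        self.l1 = 0
        self.l2 = 0               # L2 output feeds the next SRL's scan input

    def pulse(self, clk, d=0, scan_in=0):
        if clk == "C":
            self.l1 = d           # normal operation: no multiplexer needed
        elif clk == "A":
            self.l1 = scan_in     # scan shift, first half
        elif clk == "B":
            self.l2 = self.l1     # scan shift, second half

srl = SRL()
srl.pulse("A", scan_in=1)         # one shift step is an A pulse then a B pulse
srl.pulse("B")
print(srl.l1, srl.l2)             # 1 1
```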


IDDQ TEST: The quiescent current (IDDQ) test measures the steady-state current consumed by a CMOS IC as a means to detect physical defects, such as bridging faults, open faults, spot defects, and gate-oxide shorts, that are not properly modeled by stuck-at faults alone. This is possible because a CMOS IC consumes no current (apart from leakage) at steady state.

IDDQ testing is a method for enhancing the quality of IC tests by measuring the power supply current of a CMOS circuit. Defect-free CMOS circuits draw very low levels of current during a quiescent state, while IDDQ levels are typically an order of magnitude higher in the presence of a silicon defect. IDDQ testing targets physical defects that create a conduction path from the power supply to ground and result in excessive current draw.
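
A sketch of such a screen with an assumed threshold and made-up measurements:

```python
baseline_ua = 2.0                 # assumed leakage-only IDDQ, in microamps
threshold_ua = 10 * baseline_ua   # assumed pass/fail limit (an order of magnitude)

measurements = {"die1": 1.8, "die2": 2.3, "die3": 45.0}   # made-up data
for die, iddq in measurements.items():
    verdict = "fail: possible defect path" if iddq > threshold_ua else "pass"
    print(f"{die}: {iddq} uA -> {verdict}")
```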






