[previous] [next] [contents]    Short circuit detection among full BST interconnects

As soon as the open fault detection phase is concluded, and since the EXTEST instruction is already present in the IRs, the first vector for short-circuit fault detection may be shifted in. Since all input pins belonging to two short-circuited interconnects see the same voltage level, it is reasonable to expect that their BS cells will capture the same logic value. Test vector generation for the detection of short-circuit faults is therefore based on the principle of driving opposite logic values onto the interconnects under test. For interconnects with multiple driving pins (and since the previous step has shown that no open faults are present), only one of them is selected to drive the interconnect.

Short-circuit faults are harder to deal with, not only because of their potentially destructive effect, but also because the number of possible faults is much larger than for open or stuck-at faults. The number of possible short-circuit faults among N nodes is given by 2^N – (N+1), growing exponentially with the size of the circuit. It is true that some short-circuit faults will be less likely than others (a short-circuit among all interconnects is highly improbable), but this fact is hardly relevant for test vector generation purposes. On the other hand, complete short-circuit fault detection will be possible if opposite logic values are applied to all possible interconnect pairs, since any short-circuits involving 3 or more interconnects will be detected as well. This fact brings our attention to the distinction between fault detection and fault diagnosis, the former being much simpler to achieve.
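The 2^N – (N+1) figure counts every subset of two or more nodes (all 2^N subsets, minus the N single-node subsets and the empty set). A minimal sketch cross-checking the closed form against a direct enumeration (the function names are illustrative, not from any standard library):

```python
from itertools import combinations

def short_fault_count(n):
    """Closed-form count of possible short-circuit faults among n nodes:
    every subset of two or more nodes, i.e. 2**n - (n + 1)."""
    return 2**n - (n + 1)

def enumerate_faults(n):
    """Cross-check: directly enumerate all subsets of size >= 2."""
    return sum(1 for k in range(2, n + 1)
               for _ in combinations(range(n), k))

print(short_fault_count(8), enumerate_faults(8))  # both 247
```

Even for the 8-interconnect circuit of figure 1 there are 247 possible short-circuit faults, which illustrates why enumerating faults individually is not a practical basis for test generation.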

A large number of papers have been published describing test vector generation algorithms for the detection of short-circuit faults, trading off the number of vectors against diagnostic resolution. The first of these algorithms was proposed in 1974 by W. H. Kautz (not for boundary scan boards, which at the time did not exist) and uses the binary partition principle to guarantee complete fault detection with a minimum number of test vectors (and with minimum diagnostic resolution, as is to be expected).

Binary partition consists of starting with a test vector that applies a logic 0 to half the interconnects and a logic 1 to the other half, so that any short-circuit between interconnects in different halves will be detected. However, since short-circuits among interconnects in the same half will escape detection, the procedure is repeated, further dividing each half in two, and so on until the last partition, which isolates single interconnects. The application of this algorithm to the very simple case of a circuit with 8 interconnects is shown in figure 1.

Figure 1: The binary partition algorithm for short-circuit detection.
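The successive halvings amount to assigning each interconnect the bits of its binary index: vector k drives bit k of the index onto interconnect i, so any two distinct interconnects receive opposite values in at least one vector (their indices differ in at least one bit). A minimal sketch of this construction, with an illustrative function name of my own choosing:

```python
from math import ceil, log2

def binary_partition_vectors(n):
    """Binary partition test set for n interconnects.

    Vector k applies to interconnect i the k-th bit of i's index,
    which reproduces the successive halvings of the partition tree.
    """
    num_vectors = max(1, ceil(log2(n)))
    return [[(i >> k) & 1 for i in range(n)] for k in range(num_vectors)]

for vector in binary_partition_vectors(8):
    print(vector)
```

For 8 interconnects this yields the three vectors of figure 1 (possibly in a different order, which does not affect detection): 01010101, 00110011 and 00001111.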

The binary partition algorithm guarantees complete short-circuit fault detection with the smallest number of test vectors, given by ⌈log2(N)⌉, where N is the number of interconnects and ⌈X⌉ (ceiling of X) represents the smallest integer not lower than X (10 test vectors will therefore be able to provide 100% short-circuit fault detection in a board with 1000 interconnects).
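The 1000-interconnect claim can be checked exhaustively: with ⌈log2(1000)⌉ = 10 vectors, where vector k applies bit k of each interconnect's index, every pair of interconnects receives opposite values in at least one vector. A small verification sketch (the variable names are my own):

```python
from math import ceil, log2

n = 1000
num_vectors = ceil(log2(n))  # 10, since 2**9 < 1000 <= 2**10

# Every pair of distinct indices below 1024 differs in at least one
# of its 10 low-order bits, so some vector drives them apart.
all_pairs_detected = all(
    any((i >> k) & 1 != (j >> k) & 1 for k in range(num_vectors))
    for i in range(n) for j in range(i + 1, n)
)
print(num_vectors, all_pairs_detected)
```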