July 28, 2014

Timing Analysis: Graph Based v/s Path Based

Hello folks! In this post, I'm gonna talk about the difference between two commonly used Static Timing Analysis methodologies, namely Graph Based Analysis (GBA) and Path Based Analysis (PBA).

I shall explain the difference with the help of an example, shown below:

Now, we have two slews: fast and slow. In Graph Based Analysis, worst slew propagation is ON, and the timing engine computes the worst-case delays of all standard cells assuming the worst-case slew at the inputs of each gate. For example, assuming we need to compute the gate delays while doing setup analysis in a graph-based methodology for the path from FF1 to FF2:
  • The delay of the A->Z (output) arc of the OR gate (in brown) would be computed using the actual slew, i.e. the slew at pin A.
  • However, the slew propagated to the output pin of the OR gate would be the worst slew, which in this case would be computed taking into account the load at the output of the OR gate and the slew at pin B.
  • Similarly, the delay of the NAND gate (in blue) would be computed using the propagated slew coming from the previous stage, i.e. the slew derived from pin B, but the slew propagated to its output would correspond to the worst input slew, in this case the slew at pin A.
  • And so on and so forth…
While performing hold analysis in a graph-based methodology, the situation reverses: the delays of all cells would be computed assuming the best propagated slews (fast slews) at all nodes along the timing path!
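To make the bookkeeping concrete, below is a minimal Python sketch of GBA-style slew propagation. The delay model, pin names, and numbers are all made up for illustration; a real tool would look these values up from .lib tables.

    # Minimal sketch of GBA-style worst-slew propagation (illustrative only).
    # The arc delay uses the slew at the arc's own input pin, but the slew
    # stored at the output pin is derived from the WORST input slew.

    def gate_delay(input_slew, load):
        # Hypothetical linear delay model standing in for a .lib lookup.
        return 0.5 * input_slew + 2.0 * load

    def output_slew(input_slew, load):
        # Hypothetical slew-degradation model.
        return 0.8 * input_slew + 1.5 * load

    # Worst (slowest) slew seen so far at each pin -- one number per pin
    # is all GBA keeps. Pin B carries the slow slew in the example.
    pin_slew = {"OR.A": 0.10, "OR.B": 0.40}
    load_at_or_z = 0.05

    # The delay of the A->Z arc uses the slew at pin A...
    delay_a_to_z = gate_delay(pin_slew["OR.A"], load_at_or_z)

    # ...but the slew propagated to Z comes from the worst input (pin B).
    pin_slew["OR.Z"] = output_slew(max(pin_slew.values()), load_at_or_z)

    print(delay_a_to_z, pin_slew["OR.Z"])

For hold analysis, the max() above simply becomes a min(), propagating the fastest slew instead.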

This method of timing analysis is faster and has a lower memory footprint because the engine simply has to keep a tab of the worst propagated slew for every pin in the design. This is surely pessimistic, but again faster, and by bounding the problem it does not encumber the optimization tool. For example, for the OR gate, the slew propagated to its output is the worst slew, so the delays of the subsequent gates after the OR gate could be pessimistic. Path-based analysis comes to the rescue, at some cost.

In Path-based analysis, the tool takes into account the actual slew for each arc encountered while traversing any particular timing path. For example, for the path shown above from FF1 to FF2, the arcs encountered are: A->Z for the OR gate; B->Z for the NAND gate; B->Z for the XOR gate; and A->Z for the inverted AND gate.

The tool would therefore consider the actual slews, and this dispenses with the unnecessary pessimism!
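Continuing the toy model from the GBA sketch above, a path-based traversal would look something like this (again purely illustrative):

    # Sketch of PBA for one specific path: the slew is recomputed arc by
    # arc along the path, so no worst-case slew from a side input ever
    # leaks into the delays of this path.

    def gate_delay(input_slew, load):       # same toy model as above
        return 0.5 * input_slew + 2.0 * load

    def output_slew(input_slew, load):
        return 0.8 * input_slew + 1.5 * load

    def analyze_path(arcs, launch_slew):
        # arcs: list of (arc_name, load_at_output) along the path.
        slew, total_delay = launch_slew, 0.0
        for arc_name, load in arcs:
            total_delay += gate_delay(slew, load)   # delay from the ACTUAL slew
            slew = output_slew(slew, load)          # propagate the actual slew
        return total_delay

    # The FF1 -> FF2 path from the example above.
    ff1_to_ff2 = [("OR.A->Z", 0.05), ("NAND.B->Z", 0.04),
                  ("XOR.B->Z", 0.06), ("AND.A->Z", 0.03)]
    print(analyze_path(ff1_to_ff2, launch_slew=0.10))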

Why not use PBA instead of GBA? Who's stopping us?
The answer is the run-time and memory footprint. Since PBA needs to compute the delays of standard cells in the context of each particular timing path, it incurs a run-time penalty to compute the delays, as opposed to GBA, where the worst propagated slew is used to compute the delays once per pin. In a nutshell, PBA is more accurate at the cost of run-time.
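A rough way to see why: GBA's bookkeeping grows with the number of pins, whereas exhaustive PBA re-times individual paths, and the number of paths can grow exponentially with logic depth. A toy illustration (the topology is hypothetical):

    # In a reconvergent "ladder" of n stages with two parallel branches per
    # stage, pin count grows linearly but path count doubles every stage.
    for n in (10, 20, 30):
        pins = 2 * n        # roughly what GBA must track
        paths = 2 ** n      # what exhaustive PBA would have to re-time
        print(f"stages={n:2d}  pins~{pins}  paths={paths:,}")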

Typically, design engineers tend to use GBA for the majority of the analysis. However, paths with a small violation (maybe of the order of tens of picoseconds) may be waived off by running PBA on the top critical paths when the tape-out of the design is impending. One might argue that the extra effort spent in optimizing many other paths might have been saved had we used PBA earlier. And it is true! But like any engineering problem, there exists a trade-off, and one needs to take a call between fixing the timing and the potential risk of delaying the tape-out!


July 23, 2014

Small Delay Defect Testing

Small Delay Defect Testing is an important step in ATPG testing towards realizing the strategic goal of zero DPPM (Defective Parts Per Million). 

What is SDD? And why is it needed?  
With shrinking technology nodes, silicon is becoming increasingly susceptible to manufacturing defects like stuck-at faults, transition faults, etc. Variations in PVT and OCV make the silicon even more vulnerable to failure. While in stuck-at capture we test the device for manufacturing defects like shorts and opens, in At-Speed testing the device is tested for transition faults at the functional frequency.

Small delays are subtle variations in the delay of standard cells due to OCV. These small delays, when accumulated, have the potential to fail the timing of the critical paths at the rated frequency. The testing mechanism deployed to catch the faults arising from these small delays is referred to as Small Delay Defect testing.
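A back-of-the-envelope example of the accumulation effect (all numbers invented for illustration):

    # Accumulated small delays eating into setup slack.
    clock_period_ps = 1050.0
    nominal_path_delay_ps = 1000.0       # 50 ps of slack at the rated clock

    small_delay_per_cell_ps = 3.0        # subtle per-cell shift due to OCV
    cells_on_path = 20

    actual = nominal_path_delay_ps + small_delay_per_cell_ps * cells_on_path
    slack = clock_period_ps - actual     # = -10 ps: the path now fails
    print(f"slack = {slack} ps -> {'FAIL' if slack < 0 else 'PASS'}")

Each 3 ps shift is invisible on its own, but twenty of them wipe out the 50 ps of slack and push the path 10 ps past the clock edge.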

Sounds more like ATPG-Atspeed, right? Then where lies the difference? The difference lies in the intent. In At-Speed testing, the intent of DFT is to target fault simulation for each node by hook or by crook! With the focus of modern ATPG tools being on pattern reduction, and hence test time, they try to target each node via the most convenient path, which is typically the shortest path.

Consider the below use-case.

Path 3 is the shortest path to target the node X. ATPG-Atspeed would take Path 3 to generate the patterns in order to test node X. As evident from above, Path 1 is the most timing-critical path and is therefore more likely to violate timing on silicon. SDD targets such paths!
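The difference in intent can be sketched as a path-selection policy. Assuming a made-up list of candidate paths through node X with pre-computed delays:

    # Candidate paths through the target node X: (name, delay in ps).
    paths_through_x = [("Path 1", 980.0), ("Path 2", 640.0), ("Path 3", 310.0)]
    clock_period_ps = 1000.0

    # Classic at-speed ATPG: any sensitizable path will do, and the shortest
    # is usually the easiest to justify -- so Path 3 gets picked.
    atspeed_pick = min(paths_through_x, key=lambda p: p[1])

    # SDD-aware ATPG: exercise the path with the least slack, since that is
    # where an accumulated small delay will actually show up on silicon.
    sdd_pick = min(paths_through_x, key=lambda p: clock_period_ps - p[1])

    print(atspeed_pick[0], sdd_pick[0])   # Path 3 vs Path 1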

I have one question, and would request the readers to pour in their views regarding it:
  • Small Delay Defect testing is traditionally done for setup violations. But let's say, in case of significant clock skew between two interacting flops, even hold timing would be critical. Can one possibly use something similar to target hold violations due to small delays as well?