

Microelectronics and Wafer Scale Integration (WSI)

  • Reduced size
  • Reduced weight
  • Reduced assembly time
    • due to the concurrent pattern transfer of the photolithographic process
  • Reduced power requirements
    • due to not needing to drive so many wide, off-chip signal lines
  • Reduced heat dissipation
    • due to reduced power requirements
  • Increased operating speed
    • due to reduced size
  • Increased in-circuit communication speeds
    • due to reduced size
  • Increased connection reliability
  • Better parameter matching between the circuit components
  • Better parameter tracking between the circuit components

The advantages of implementing complete circuits on the semiconductor (as integrated circuits) rather than as discrete devices soldered together on the printed circuit board are manifold.

The advantages of implementing large scale integrated (LSI) circuits on the semiconductor, rather than as small scale integrated (SSI) circuits soldered together on the printed circuit board, are then the same. Thus, there is a constant drive towards larger and larger-scale integration. The ultimate aim would be to use the area of the whole semiconductor wafer for laying out the system. This is the goal of wafer scale integration (WSI). But even then, the drive would continue, to use larger and larger wafer diameters.

Perhaps there could now be a revived case for adopting WSI, in the delay before 450mm wafers take over from 300mm wafers.

Unfortunately, there are several limiting factors on the maximum size of an integrated circuit:

  • Photolithographic limits
    • due to the need to align multiple layers, to very fine tolerances, from one corner of the die across to the features at its other extremities
  • Power distribution limits
    • across the full area of the circuit, from limited entry points
  • Heat dissipation limits
    • across the full area of the circuit
  • Communication speed limits
    • due to the speed of light limit, and
    • due to synchronisation problems in synchronous logic

Integrated circuit yield

In many ways, the most serious of these limits is the first one (ball grid arrays, and asynchronous logic, can be used to address the other problems). It manifests itself particularly in the parameter called yield. If the area of the chip is made too large, the alignment tolerances will be exceeded at the extremities, and parts of the circuit will fail to work. Just a single fault is sufficient to render the complete chip useless. Similarly, reducing the feature size (to enable more transistors to be fabricated within the same chip area) tightens the tolerances, and hence increases the chance of faults, even within a small area. Indeed, to a first approximation, the yield is exponentially dependent on the product of the integrated circuit area and the defect density for the given fabrication technology and feature size.
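The severity of this dependence is easy to see numerically. The sketch below assumes the exponential relationship described above, with a purely illustrative defect density of 0.5 defects per square centimetre (not a figure from any particular process):

```python
import math

def yield_estimate(defect_density, area):
    """First-approximation yield: exponential in (defect density x die area)."""
    return math.exp(-defect_density * area)

# Illustrative process figure only: 0.5 defects per cm^2.
D = 0.5
for area_cm2 in (0.5, 1.0, 2.0, 4.0):
    print(f"die area {area_cm2:4.1f} cm^2 -> yield {yield_estimate(D, area_cm2):.1%}")
```

Doubling the die area squares the yield fraction, which is why each step towards larger integration is so hard-won.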

This is, in fact, the main reason why wafer scale integration, despite being initially researched in the 1970s, is still just a distant goal. The photolithographic, and associated, technology has been improving over those decades, but so has the technology to handle ever-larger semiconductor wafers.

Fault tolerance

One way of achieving larger integrated circuit areas is to allow faults within the integrated circuit to be tolerated. This is possible if redundancy is employed: so that parts of the circuit that are discovered as being faulty can be switched out, and replaced by spare parts that were fabricated at the same time as the original circuit.

However, this introduces a completely new set of trade-offs (Shute 1988):

  • Increasing the degree of redundancy increases the yield, but reduces the harvest (the proportion of working circuitry that is eventually used in the device, and not left as idle spare parts).
  • Decreasing the granularity of the replacement strategy (such as replacing memory faults on a cell-by-cell basis, rather than row-by-row, column-by-column, or block-by-block) might appear to increase the yield on a first analysis. But decreasing the granularity increases the amount of extra switching circuitry, which means increasing the area of semiconductor that has to be allocated for this use, and which is itself prone to faults.
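The first of these trade-offs can be sketched numerically. Assuming, purely for illustration, a block-replacement scheme in which the device needs n working blocks out of n+r fabricated ones, each block working independently with the same probability, the chip-level yield is a binomial tail, while the harvest is simply the used fraction of the fabricated blocks:

```python
import math

def block_yield(p_block, n_required, n_spare):
    """Probability that at least n_required of (n_required + n_spare) blocks
    work (binomial tail): the chip-level yield under block redundancy."""
    total = n_required + n_spare
    return sum(math.comb(total, k) * p_block**k * (1 - p_block)**(total - k)
               for k in range(n_required, total + 1))

def harvest(n_required, n_spare):
    """Proportion of fabricated blocks actually used in a working device."""
    return n_required / (n_required + n_spare)

p = 0.90  # illustrative per-block yield
for r in range(5):
    print(f"spares={r}: yield={block_yield(p, 16, r):.3f}, "
          f"harvest={harvest(16, r):.2f}")
```

Adding spares raises the yield but lowers the harvest, as the first trade-off states; capturing the second trade-off would mean reducing the per-block probability as the granularity gets finer, to account for the extra switching circuitry.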

Intuitive derivation of the Poisson model

The probability, on testing an integrated circuit chosen at random from a newly fabricated wafer, of finding it to be working is the same as the yield, Y, of those integrated circuits (the number that are working divided by the total number that were fabricated).

We would expect that, in general, large integrated circuits are more likely to contain faults than smaller ones. Indeed, if the number of defects per unit area, D, is small, we would expect the probability of any given integrated circuit containing one of these defects to be directly proportional to its area, A. Halving the defect density would then halve that probability still further; so, when the values are very small, we would expect the probability of an integrated circuit containing a defect to be equal to D.A, and hence the probability of it functioning correctly to be equal to 1–D.A (that is, Y=1–D.A). More generally, we would expect Y to be a monotonically falling function of D and A, Y(D,A), only becoming a linear function at the low values suggested above.

If we have a large integrated circuit, such as a microprocessor, composed of two major components, such as a block for the CPU and a block for the memory, we would expect the yields of those two components, Y1 and Y2, to be functions of D and of their respective areas, A1 and A2. The yield of the whole microprocessor will then be the product of the two component yields (since this is the probability that both the one component and the other are found to be working), while the area of the new device is the sum of the two areas:

Y(D,A1+A2) = Y(D,A1).Y(D,A2)

This suggests that the yield function, Y(D,A), is an exponential one. The argument of an exponential function must be dimensionless, as in Y(D,A)=b^(k.D.A), which can be rewritten as Y(D,A)=e^(k.ln(b).D.A).
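That any function of this exponential form satisfies the product rule above can be checked numerically. A minimal check, with arbitrary illustrative values for the base b and the constant k:

```python
def Y(D, A, b=2.0, k=-0.5):
    """Candidate yield function of the exponential form b**(k*D*A)."""
    return b ** (k * D * A)

# Adding areas on the left should multiply yields on the right.
D, A1, A2 = 0.4, 1.0, 2.5
lhs = Y(D, A1 + A2)
rhs = Y(D, A1) * Y(D, A2)
print(abs(lhs - rhs))  # agrees to floating-point precision
```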

As noted earlier, when the values of D and A are very small, the number of faults on the wafer, of area Aw, would be about D.Aw, and distributed so thinly that there is little likelihood of more than one falling within any given integrated circuit. But one fault is all that is required to wreck an integrated circuit completely. Thus, for small values, D.Aw also represents the number of faulty chips on the wafer. Since there are about Aw/A chips fabricated on the wafer (because A is small), the proportion of faulty chips to chips fabricated would tend to D.A, and the yield function, Y(D,A), would thus tend to 1–D.A. Comparing the Taylor expansion of the exponential function against this expression for small values of D.A shows that, at least to a first approximation, the constant, k.ln(b), must be –1, and hence that the final expression is:

Y(D,A)=e^(–D.A)

This is the Poisson model for integrated circuit yield. It does assume, somewhat simplistically, that defects are points of zero size, occurring completely independently of each other, with equal impact regardless of where they fall in the integrated circuit. Over the years, other models, such as the Seeds, Murphy and Bose-Einstein models, have been proposed to address this over-simplification.
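The closed-form expression can be compared against a direct simulation of the model's own assumptions. The sketch below scatters independent point defects uniformly over a wafer divided into dice, and counts the proportion of dice that receive no defect at all (the defect density and die area are illustrative values only):

```python
import math
import random

def simulated_yield(defect_density, die_area, n_dice=10_000, seed=1):
    """Scatter point defects uniformly over a wafer of n_dice dice and
    return the proportion of dice that receive no defect at all."""
    rng = random.Random(seed)
    wafer_area = n_dice * die_area
    n_defects = round(defect_density * wafer_area)  # expected defect count
    hit = set()
    for _ in range(n_defects):
        hit.add(rng.randrange(n_dice))  # the die struck by this defect
    return (n_dice - len(hit)) / n_dice

D, A = 0.5, 2.0
print(f"simulated yield: {simulated_yield(D, A):.3f}")
print(f"Poisson model:   {math.exp(-D * A):.3f}")
```

The two figures agree to within sampling noise, as expected when defects really are independent zero-size points; the more elaborate models mentioned above exist precisely because real defects cluster and vary in size.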

© Malcolm Shute, Valley d'Aigues Research, 2006-2016