Popis: |
Time-critical neural network applications that require fully parallel hardware implementations for maximal throughput are considered. The rich array of technologies being pursued is surveyed, with a focus on the analog CMOS VLSI medium. This medium is messy in that limited dynamic range, offset voltages, and noise sources all reduce precision. The authors examine how neural networks can be directly implemented in analog VLSI, giving examples of approaches pursued to date. Two important application areas are highlighted: optimization, because neural hardware may offer a speed advantage of orders of magnitude over other methods; and supervised learning, because of the widespread use and generality of gradient-descent learning algorithms as applied to feedforward networks.