Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, enabling the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, as well as more recent advances such as optical models, run-time reconfiguration (on FPGA and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network ...
Though there are several books on the Singapore economy, none have focused on the time series-based investigations. This book tries to address that gap and attempts to add to what we know from studies in the descriptive tradition. It is a compendium of twenty of the author's academic studies on the Singapore economy which have appeared previously as journal papers, book chapters, and feature articles. The papers share a common methodology of social scientific enquiry viz., time series econometrics, and are divided into three parts: macroeconomy, business cycles and forecasting. Each part brings together empirical essays that deal with particular aspects of these related fields. The book will be of interest to economists, policy-makers and students seeking a quantitatively informed understanding of the Singapore economy.
1.1 Background There are many paradigmatic statements in the literature claiming that this is the decade of parallel computation. A great deal of research is being devoted to developing architectures and algorithms for parallel machines with thousands, or even millions, of processors. Such massively parallel computers have been made feasible by advances in VLSI (very large scale integration) technology. In fact, a number of computers having over one thousand processors are commercially available. Furthermore, it is reasonable to expect that as VLSI technology continues to improve, massively parallel computers will become increasingly affordable and common. However, despite the significant progress made in the field, many fundamental issues still remain unresolved. One of the most significant of these is the issue of a general purpose parallel architecture. There is currently a huge variety of parallel architectures that are either being built or proposed. The problem is whether a single parallel computer can perform efficiently on all computing applications.
Over the past few years, the demand for high speed Digital Signal Processing (DSP) has increased dramatically. New applications in real-time image processing, satellite communications, radar signal processing, pattern recognition, and real-time signal detection and estimation require major improvements at several levels: algorithmic, architectural, and implementation. These performance requirements can be achieved by employing parallel processing at all levels. Very Large Scale Integration (VLSI) technology supports and provides a good avenue for parallelism. Parallelism offers efficient solutions to several problems which can arise in VLSI DSP architectures such as: 1. Intermediate data ...
"The main theme of the 1988 workshop, the 18th in this DARPA sponsored series of meetings on Image Understanding and Computer Vision, is to cover new vision techniques in prototype vision systems for manufacturing, navigation, cartography, and photointerpretation." P. v.
Parallelism in problems of low- and medium-level image processing and pattern recognition is the subject of this book. It covers the investigation of parallelism in algorithms and in fundamental methods of image processing and pattern recognition. Based on this, new concepts for parallel architectures are derived and their performance is evaluated. Different hardware structures such as SIMD, MIMD, data flow machines, transputer systems, neural networks and interconnection networks are described, including high-speed VLSI-implementations. Additional topics covered include software aspects and image processing systems.