The ability of parallel computing to process large data sets and handle time-consuming operations has resulted in unprecedented advances in biological and scientific computing, modeling, and simulations. Exploring these recent developments, the Handbook of Parallel Computing: Models, Algorithms, and Applications provides comprehensive coverage of the field's models, algorithms, and applications.
This book focuses on the mining aspects of data streams and is unique in that focus. It covers the subject comprehensively: each contributed chapter contains a survey of its topic, the key ideas in the field for that topic, and directions for future research. The book is intended for a professional audience of researchers and practitioners in industry, and is also appropriate for advanced-level students in computer science.
This book constitutes the refereed proceedings of the 24th International Conference on the Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2004, held in Chennai, India, in December 2004. The 35 revised full papers presented together with 5 invited papers were carefully reviewed and selected from 176 submissions. The papers address a broad variety of current issues in software science, programming theory, systems design and analysis, formal methods, mathematical logic, mathematical foundations, discrete mathematics, combinatorial mathematics, complexity theory, automata theory, and theoretical computer science in general.
This book constitutes the thoroughly refereed post-proceedings of the 15th International Workshop on Languages and Compilers for Parallel Processing, LCPC 2002, held in College Park, MD, USA, in July 2002. The 26 revised full papers presented were carefully selected during two rounds of reviewing and improvement from 32 submissions. All current issues in parallel processing are addressed, in particular memory-constrained computation, compiler optimization, performance studies, high-level languages, programming language consistency models, dynamic parallelization, parallelization of data mining algorithms, parallelizing compilers, garbage collection algorithms, and evaluation of iterative compilation.
The Sixth SIAM International Conference on Data Mining continues the tradition of presenting approaches, tools, and systems for data mining in fields such as science, engineering, industrial processes, healthcare, and medicine. The datasets in these fields are large, complex, and often noisy. Extracting knowledge requires the use of sophisticated, high-performance, and principled analysis techniques and algorithms, based on sound statistical foundations. These techniques in turn require powerful visualization technologies; implementations that must be carefully tuned for performance; software systems that are usable by scientists, engineers, and physicians as well as researchers; and infrastructures that support them.