Probabilistic databases are databases where the value of some attributes or the presence of some records is uncertain and known only with some probability. Applications in many areas, such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment, produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for representing large probabilistic databases by decomposing them into tuple-independent tables and block-independent-disjoint tables.
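As a rough illustration of the tuple-independent model mentioned above, the following Python sketch (not taken from the book; the relation, names, and probabilities are made up) computes the probability that a simple existential query is true when each tuple is present independently with its own marginal probability.

```python
# A hypothetical tuple-independent relation R(name, city):
# each tuple carries an independent probability of being present.
R = [
    (("alice", "paris"), 0.9),
    (("bob", "paris"), 0.5),
    (("carol", "rome"), 0.7),
]

def prob_city_nonempty(table, city):
    """P(at least one tuple with this city is present), assuming tuple independence."""
    p_all_absent = 1.0
    for (_, c), p in table:
        if c == city:
            p_all_absent *= (1.0 - p)  # independence: multiply absence probabilities
    return 1.0 - p_all_absent

print(prob_city_nonempty(R, "paris"))  # 1 - (1 - 0.9) * (1 - 0.5) = 0.95
```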
How do you answer queries when your data is stored in multiple databases that were designed independently by different people? This is the first comprehensive book on data integration, written by three of the most respected experts in the field. It provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instructions for their application and concrete examples throughout to explain the concepts. Data integration is the problem of answering queries that span multiple data sources (e.g., databases, web pages). Data integration problems surface in many contexts, including enterprise information integration, query processing on the Web, coordination between government agencies, and collaboration between scientists. In some cases, data integration is the key bottleneck to making progress in a field. The authors provide a working knowledge of data integration concepts and techniques, giving you the tools you need to develop a complete and concise package of algorithms and applications.
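To make the data integration problem concrete, here is a small Python sketch (the sources, schemas, and mappings are hypothetical, not the authors' examples) that answers a query spanning two independently designed sources by first translating both into a shared mediated schema and then joining the results.

```python
# Two independently designed sources with different schemas.
# Source A stores employees as (emp_name, dept); source B stores (department, building).
source_a = [("alice", "R&D"), ("bob", "Sales")]
source_b = [("R&D", "Building 7"), ("Sales", "Building 2")]

# Schema mappings: translate each source into the mediated schema
# Works(name, dept) and Located(dept, building).
works = [{"name": n, "dept": d} for n, d in source_a]
located = [{"dept": d, "building": b} for d, b in source_b]

def buildings_for(name):
    """Answer 'which building does this person work in?' by joining the
    mediated-schema views derived from both sources."""
    return [
        l["building"]
        for w in works
        for l in located
        if w["name"] == name and w["dept"] == l["dept"]
    ]

print(buildings_for("alice"))  # ['Building 7']
```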
This book presents the thoroughly refereed post-workshop proceedings of the International Workshop on the Web and Databases, WebDB'98, held in conjunction with EDBT'98 in Valencia, Spain, in March 1998. The 13 revised full papers presented were selected during two rounds of reviewing from initially 37 submissions. The book is divided into sections on Internet programming: tools and applications, integration and access to Web data, hypertext views on databases, and searching and mining the Web.
The volume of natural language text data has been rapidly increasing over the past two decades, due to factors such as the growth of the Web, the low cost of publishing, and progress in the digitization of printed texts. This growth, combined with the proliferation of natural language systems for searching and retrieving information, provides tremendous opportunities for studying some of the areas where database systems and natural language processing systems overlap. This book explores two interrelated and important areas of overlap: (1) managing natural language data and (2) developing natural language interfaces to databases. It presents the relevant concepts and research questions in both areas.
This book constitutes the refereed proceedings of the 13th International Colloquium on Structural Information and Communication Complexity, SIROCCO 2006, held in Chester, UK, in July 2006. It presents 24 revised full papers together with three invited talks, covering topics in distributed and parallel computing, information dissemination, communication complexity, interconnection networks, high speed networks, wireless and sensor networks, mobile computing, optical computing, autonomous robots, and related areas.