No detailed description is available for "Natural Language Processing and Speech Technology".
Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book “Modelling, Learning and Processing of Text-Technological Data Structures” deals with this challenging information unit. It focuses on the theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions on a wide range of topics in the processing of textual data: the learning of ontologies from natural language texts, the annotation and automatic parsing of texts, and the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.
The promise of the Semantic Web is that future web pages will be annotated not only with bright colors and fancy fonts, as they are now, but also with annotations extracted from large domain ontologies that specify, to a computer in a way it can exploit, what information is contained on the given web page. The presence of this information will allow software agents to examine pages and make decisions about content as humans are able to do now. The classic method of building an ontology is to gather a committee of experts in the domain to be modeled by the ontology, and to have this committee agree on which concepts cover the domain, on which terms describe which concepts, on what relations ...
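To make the idea of machine-readable page annotation concrete, here is a minimal sketch of what such markup might look like in practice, using JSON-LD with schema.org vocabulary. The specific types and property names are illustrative assumptions, not drawn from the book; a real Semantic Web annotation would use terms from the relevant domain ontology.

```
<!-- Hypothetical example: embedding machine-readable metadata in a web page.
     A software agent can parse this block and learn that the page describes
     a book, who wrote it, and what it is about, without interpreting the
     human-readable HTML around it. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Book",
  "name": "Modelling, Learning and Processing of Text-Technological Data Structures",
  "about": ["ontology learning", "text annotation", "topic detection"],
  "inLanguage": "en"
}
</script>
```

An agent that understands the ontology behind these terms can then reason over many such pages, for example collecting all books "about" a given concept, which is the kind of automated decision-making the paragraph above describes.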
Labelling data is one of the most fundamental activities in science, and has underpinned practice, particularly in medicine, for decades, as well as research in corpus linguistics since at least the development of the Brown corpus. With the shift towards Machine Learning in Artificial Intelligence (AI), the creation of datasets to be used for training and evaluating AI systems, also known in AI as corpora, has become a central activity in the field as well. Early AI datasets were created on an ad-hoc basis to tackle specific problems. As larger and more reusable datasets were created, requiring greater investment, the need for a more systematic approach to dataset creation arose to ensure in...
Principles & practice.