Analogical Modeling (AM) is an exemplar-based general theory of description that uses both neighbors and non-neighbors (under certain well-defined conditions of homogeneity) to predict language behavior. This book provides a basic introduction to AM, compares the theory with nearest-neighbor approaches, and discusses the most recent advances in the theory, including psycholinguistic evidence, applications to specific languages, the problem of categorization, and how AM relates to alternative approaches to language description (such as instance families, neural nets, connectionism, and optimality theory). The book closes with a thorough examination of the problem of exponential explosion, an inherent difficulty in AM (and in fact in all theories of language description). Quantum computing (based on quantum mechanics, with its inherent simultaneity and reversibility) provides a precise and natural solution to the exponential explosion in AM. Finally, an extensive appendix provides three tutorials for running the AM computer program (available online).
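To make the "neighbors and non-neighbors under homogeneity" idea concrete, here is a minimal, simplified sketch of AM-style prediction. It enumerates every supracontext of a test item (each subset of its feature positions), keeps only the homogeneous ones (all members share one outcome, or all members fall in a single subcontext), and counts quadratic pointers. This is an illustrative toy, not Skousen's full algorithm or the AM program mentioned above; the function name and data layout are assumptions for the example.

```python
from itertools import combinations
from collections import Counter

def analogical_prediction(exemplars, test):
    """Toy AM-style prediction (simplified homogeneity test).

    exemplars: list of (features, outcome) pairs; features is a tuple.
    test: feature tuple of the item whose behavior we predict.
    Returns a Counter of pointers per outcome.
    """
    n = len(test)
    pointers = Counter()
    # A supracontext fixes some subset of the test item's feature positions.
    for size in range(n + 1):
        for positions in combinations(range(n), size):
            members = [(f, o) for f, o in exemplars
                       if all(f[i] == test[i] for i in positions)]
            if not members:
                continue
            outcomes = {o for _, o in members}
            # Subcontext of an exemplar: the pattern of positions it shares
            # with the test item.
            subs = {tuple(f[i] == test[i] for i in range(n))
                    for f, _ in members}
            # Homogeneous if deterministic, or if all members occupy one
            # subcontext; heterogeneous supracontexts contribute nothing.
            if len(outcomes) == 1 or len(subs) == 1:
                # Quadratic pointer count: each member points to every member.
                for _, o in members:
                    pointers[o] += len(members)
    return pointers
```

Note how a non-neighbor can still influence the prediction: any exemplar inside a homogeneous supracontext contributes pointers, however distant it is from the test item.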
About half a century ago, AI pioneers like Marvin Minsky embarked on the ambitious project of emulating how the human mind encodes and decodes meaning. While today we have a better understanding of the brain thanks to neuroscience, we are still far from unlocking the secrets of the mind, especially when it comes to language, the prime example of human intelligence. “Understanding natural language understanding”, i.e., understanding how the mind encodes and decodes meaning through language, is a significant milestone in our journey towards creating machines that genuinely comprehend human language. Large language models (LLMs) such as GPT-4 have astounded us with their ability to generate...
This book constitutes the refereed proceedings of the First International Workshop on Machine Learning held in Sheffield, UK, in September 2004. The 19 revised full papers presented were carefully reviewed and selected for inclusion in the book. They address all current issues in the rapidly maturing field of machine learning, which aims to provide practical methods for data discovery, categorisation and modelling. The particular focus of the workshop was advanced research methods in machine learning and statistical signal processing.
This book constitutes the refereed proceedings of the 14th European Conference on Machine Learning, ECML 2003, held in Cavtat-Dubrovnik, Croatia, in September 2003 in conjunction with PKDD 2003. The 40 revised full papers presented together with 4 invited contributions were carefully reviewed and selected, along with a further 40 papers for PKDD 2003, from a total of 332 submissions. The papers address all current issues in machine learning, including support vector machines, inductive inference, feature selection algorithms, reinforcement learning, preference learning, probabilistic grammatical inference, decision tree learning, clustering, classification, agent learning, Markov networks, boosting, statistical parsing, Bayesian learning, supervised learning, and multi-instance learning.
Biography of Walter Daelemans, currently Professor at the University of Antwerp and previously researcher, lecturer, and professor at Tilburg University.
Adapting BLOOM to a new language: A case study for the Italian (Pierpaolo Basile, Lucia Siciliani, Elio Musacchio, Marco Polignano, Giovanni Semeraro)
U-DepPLLaMA: Universal Dependency Parsing via Auto-regressive Large Language Models (Claudiu Daniel Hromei, Danilo Croce, Roberto Basili)
Investigating Text Difficulty and Prerequisite Relation Identification (Chiara Alzetta)
Italian Linguistic Features for Toxic Language Detection in Social Media (Leonardo Grotti)
Publishing the Dictionary of Medieval Latin in the Czech Lands as Linked Data in the LiLa Knowledge Base (Federica Gamba, Marco Carlo Passarotti, Paolo Ruffolo)
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
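The core idea described above — reusing stored examples directly rather than abstracting rules away from them — can be sketched as nearest-neighbor classification over feature vectors with an overlap metric, the simplest of the metrics that memory-based approaches build on. This is a hedged toy illustration, not the TiMBL software itself; the function names and the optional per-feature weights are assumptions for the example.

```python
from collections import Counter

def overlap_distance(a, b, weights=None):
    """Count mismatching feature values, optionally weighted per feature."""
    w = weights or [1.0] * len(a)
    return sum(wi for ai, bi, wi in zip(a, b, w) if ai != bi)

def memory_based_classify(memory, query, k=1):
    """Classify by majority vote among the k stored exemplars nearest
    to the query; memory is a list of (features, outcome) pairs."""
    ranked = sorted(memory, key=lambda ex: overlap_distance(ex[0], query))
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]
```

Because every training example stays in memory, low-frequency sub-regularities and exceptions remain available at classification time — the advantage the book claims for memory-based methods over rule abstraction.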