The emergence of data science, in recent decades, has magnified the need for efficient methodology for analyzing data and highlighted the importance of statistical inference. Despite the tremendous progress that has been made, statistical science is still a young discipline and continues to have several different and competing paths in its approaches and its foundations. While the emergence of competing approaches is a natural progression of any scientific discipline, differences in the foundations of statistical inference can sometimes lead to different interpretations and conclusions from the same dataset. The increased interest in the foundations of statistical inference has led to many p...
The theory of belief functions, also known as evidence theory or Dempster-Shafer theory, was first introduced by Arthur P. Dempster in the context of statistical inference and was later developed by Glenn Shafer as a general framework for modeling epistemic uncertainty. These early contributions have been the starting points of many important developments, including the Transferable Belief Model and the Theory of Hints. The theory of belief functions is now well established as a general framework for reasoning with uncertainty, and has well-understood connections to other frameworks such as probability, possibility and imprecise probability theories. This volume contains the proceedings of the 2nd International Conference on Belief Functions, held in Compiègne, France, on 9-11 May 2012. It gathers 51 contributions describing recent developments both on theoretical issues (including approximation methods, combination rules, continuous belief functions, graphical models and independence concepts) and on applications in various areas including classification, image processing, statistics and intelligent vehicles.
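For readers new to the framework, the following is a minimal illustrative sketch of Dempster's rule of combination on a small frame of discernment. It is not drawn from the proceedings; the function and variable names are hypothetical, chosen only to show how two mass functions are pooled and how the conflicting mass is renormalized.

```python
# Illustrative sketch of Dempster's rule of combination (names are hypothetical).
# A mass function is a dict mapping focal sets (frozensets) to masses summing to 1.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions on the same frame by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    # Renormalize by the non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources of evidence over the frame {a, b, c}
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
print(dempster_combine(m1, m2))
```

Running the sketch on the two example sources yields a combined mass function on {a}, {b} and {a, b}, with the 0.18 of conflicting mass removed by normalization.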
An interview with Professor Yaoting Zhang / Qiwei Yao and Zhaohai Li -- Significance level in interval mapping / David O. Siegmund and Benny Yakir -- An asymptotic Pythagorean identity / Zhiliang Ying -- A Monte Carlo gap test in computing HPD regions / Ming-Hui Chen [et al.] -- Estimating restricted normal means using the EM-type algorithms and IBF sampling / Ming Tan, Guo-Liang Tian and Hong-Bin Fang -- An example of algorithm mining: covariance adjustment to accelerate EM and Gibbs / Chuanhai Liu -- Large deviations and deviation inequality for kernel density estimator in L[symbol]-distance / Liangzhen Lei, Liming Wu and Bin Xie -- Local sensitivity analysis of model misspecification...
Modern financial management is largely about risk management, which is increasingly data-driven. The problem is how to extract information from the data overload. It is here that advanced statistical and machine learning techniques can help. Accordingly, finance, statistics, and data analytics go hand in hand. The purpose of this book is to bring to the fore the state-of-the-art research in these three areas, especially research that juxtaposes the three.
The principal aim of this book is to introduce to the widest possible audience an original view of belief calculus and uncertainty theory. In this geometric approach to uncertainty, uncertainty measures can be seen as points of a suitably complex geometric space and manipulated in that space: combined or conditioned, for example. In the chapters in Part I, Theories of Uncertainty, the author offers an extensive recapitulation of the state of the art in the mathematics of uncertainty. This part of the book contains the most comprehensive summary to date of the whole of belief theory, with Chap. 4 outlining for the first time, and in a logical order, all the steps of the reasoning chain assoc...
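To make the geometric picture concrete: a belief function on a finite frame can be encoded as a vector with one coordinate per non-empty subset, i.e. as a point in a finite-dimensional space. The sketch below is not taken from the book; the encoding, ordering of subsets and names are assumptions made only for illustration.

```python
# Illustrative sketch: a mass function on a finite frame encoded as a
# vector of belief values bel(A), one coordinate per non-empty subset A.
# All names and the subset ordering are hypothetical.

from itertools import combinations

def nonempty_subsets(frame):
    """All non-empty subsets of the frame, in a fixed order."""
    elems = sorted(frame)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def belief_vector(mass, frame):
    """Coordinates bel(A) = sum of masses of subsets contained in A."""
    return [sum(w for s, w in mass.items() if s <= a)
            for a in nonempty_subsets(frame)]

frame = {"a", "b"}
mass = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}
print(list(zip(nonempty_subsets(frame), belief_vector(mass, frame))))
# prints pairs ({a}, 0.5), ({b}, 0.0), ({a, b}, 1.0)
```

In this toy encoding, each mass function corresponds to one such vector, so operations on belief functions can be read as movements of points in that space, which is the intuition the book's geometric approach builds on.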
Clustering remains a vibrant area of research in statistics. Although there are many books on this topic, relatively few are well grounded in the theoretical aspects. In Robust Cluster Analysis and Variable Selection, Gunter Ritter presents an overview of the theory and applications of probabilistic clustering and variable selection, synthesizing the key research results of the last 50 years. The author focuses on the robust clustering methods he found to be the most useful on simulated data and in real-time applications. The book provides clear guidance for the varying needs of both applications, describing scenarios in which accuracy and speed are the primary goals. Robust Clust...
Bayesian statistics is a dynamic and fast-growing area of statistical research, and the Valencia International Meetings provide the main forum for discussion. The resulting proceedings form an up-to-date collection of research.