Numerical Bayesian Methods Applied to Signal Processing
-10 %


In stock | Delivery time: 3-5 days

Our previous price: 277,13 €

Now 249,42 €*

All prices incl. VAT | plus shipping
Author: William J. Fitzgerald
Weight: 562 g
Dimensions: 242x162x20 mm

This volume is concerned with the processing of signals that have been sampled and digitized. It presents algorithms for the optimization, random simulation and numerical integration of probability densities for applications of Bayesian inference to signal processing.
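The "random simulation and numerical integration of probability densities" mentioned above usually amounts to Monte Carlo estimation of expectations. As a minimal illustration (not taken from the book), plain Monte Carlo approximates E[g(X)] by averaging g over random draws from the density; here E[X²] under a standard normal, whose true value is 1:

```python
import random

# Plain Monte Carlo: approximate the expectation E[g(X)] by averaging g
# over independent draws from the density. Here g(x) = x^2 and X is
# standard normal, so the true value of the integral is Var(X) = 1.
random.seed(42)
n = 100_000
draws = [random.gauss(0.0, 1.0) for _ in range(n)]
estimate = sum(x * x for x in draws) / n  # converges to 1 at rate O(1/sqrt(n))
```

The estimator's error shrinks only as 1/√n, which is why the book's later chapters on variance reduction (antithetic variates, control variates, stratified sampling) matter in practice.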
Table of contents:
1 Introduction
2 Probabilistic Inference in Signal Processing.- 2.1 Introduction.- 2.2 The likelihood function.- 2.2.1 Maximum likelihood.- 2.3 Bayesian data analysis.- 2.4 Prior probabilities.- 2.4.1 Flat priors.- 2.4.2 Smoothness priors.- 2.4.3 Convenience priors.- 2.5 The removal of nuisance parameters.- 2.6 Model selection using Bayesian evidence.- 2.6.1 Ockham's razor.- 2.7 The general linear model.- 2.8 Interpretations of the general linear model.- 2.8.1 Features.- 2.8.2 Orthogonalization.- 2.9 Example of marginalization.- 2.9.1 Results.- 2.10 Example of model selection.- 2.10.1 Closed form expression for evidence.- 2.10.2 Determining the order of a polynomial.- 2.10.3 Determining the order of an AR process.- 2.11 Concluding remarks
3 Numerical Bayesian Inference.- 3.1 The normal approximation.- 3.1.1 Effect of number of data on the likelihood function.- 3.1.2 Taylor approximation.- 3.1.3 Reparameterization.- 3.1.4 Jacobian of transformation.- 3.1.5 Normal approximation to evidence.- 3.1.6 Normal approximation to the marginal density.- 3.1.7 The delta method.- 3.2 Optimization.- 3.2.1 Local algorithms.- 3.2.2 Global algorithms.- 3.2.3 Concluding remarks.- 3.3 Integration.- 3.4 Numerical quadrature.- 3.4.1 Multiple integrals.- 3.5 Asymptotic approximations.- 3.5.1 The saddlepoint approximation and Edgeworth series.- 3.5.2 The Laplace approximation.- 3.5.3 Moments and expectations.- 3.5.4 Marginalization.- 3.6 The Monte Carlo method.- 3.7 The generation of random variates.- 3.7.1 Uniform variates.- 3.7.2 Non-uniform variates.- 3.7.3 Transformation of variables.- 3.7.4 The rejection method.- 3.7.5 Other methods.- 3.8 Evidence using importance sampling.- 3.8.1 Choice of sampling density.- 3.8.2 Orthogonalization using noise colouring.- 3.9 Marginal densities.- 3.9.1 Histograms.- 3.9.2 Jointly distributed variates.- 3.9.3 The dummy variable method.- 3.9.4 Marginalization using jointly distributed variates.- 3.10 Opportunities for variance reduction.- 3.10.1 Quasi-random sequences.- 3.10.2 Antithetic variates.- 3.10.3 Control variates.- 3.10.4 Stratified sampling.- 3.11 Summary
4 Markov Chain Monte Carlo Methods.- 4.1 Introduction.- 4.2 Background on Markov chains.- 4.3 The canonical distribution.- 4.3.1 Energy, temperature and probability.- 4.3.2 Random walks.- 4.3.3 Free energy and model selection.- 4.4 The Gibbs sampler.- 4.4.1 Description.- 4.4.2 Discussion.- 4.4.3 Convergence.- 4.5 The Metropolis-Hastings algorithm.- 4.5.1 The general algorithm.- 4.5.2 Convergence.- 4.5.3 Choosing the proposal density.- 4.5.4 Relationship between Gibbs and Metropolis.- 4.6 Dynamical sampling methods.- 4.6.1 Derivation.- 4.6.2 Hamiltonian dynamics.- 4.6.3 Stochastic transitions.- 4.6.4 Simulating the dynamics.- 4.6.5 Hybrid Monte Carlo.- 4.6.6 Convergence to canonical distribution.- 4.7 Implementation of simulated annealing.- 4.7.1 Annealing schedules.- 4.7.2 Annealing with Markov chains.- 4.8 Other issues.- 4.8.1 Assessing convergence of Markov chains.- 4.8.2 Determining the variance of estimates.- 4.9 Free energy estimation.- 4.9.1 Thermodynamic integration.- 4.9.2 Other methods.- 4.10 Summary
5 Retrospective Changepoint Detection.- 5.1 Introduction.- 5.2 The simple Bayesian step detector.- 5.2.1 Derivation of the step detector.- 5.2.2 Application of the step detector.- 5.3 The detection of changepoints using the general linear model.- 5.3.1 The general piecewise linear model.- 5.3.2 Simple step detector in generalized matrix form.- 5.3.3 Changepoint detection in AR models.- 5.3.4 Application of AR changepoint detector.- 5.4 Recursive Bayesian estimation.- 5.4.1 Update of position.- 5.4.2 Update given more data.- 5.5 Detection of multiple changepoints.- 5.6 Implementation details.- 5.6.1 Sampling changepoint space.- 5.6.2 Sampling linear parameter space.- 5.6.3 Sampling noise parameter space.- 5.7 Multiple changepoint results.- 5.7.1 Synthetic step data.- 5.7.2 Well log data.- 5.8 Concluding Remarks
6 Restoration of Missing
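Chapter 4 of the contents covers the Metropolis-Hastings algorithm. As a hedged sketch of the general idea (not code from the book), a random-walk Metropolis sampler for a one-dimensional unnormalized log-density could look like:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0):
    """Random-walk Metropolis sampler for a 1-D unnormalized log-density.

    log_target: function returning log p(x) up to an additive constant.
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)      # symmetric Gaussian proposal
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(random.random()) < log_alpha:   # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)                           # rejected moves repeat x
    return samples

# Example target: standard normal, log p(x) = -x^2/2 up to a constant.
random.seed(0)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20_000)
mean = sum(draws) / len(draws)  # should be close to the true mean, 0
```

Because the Gaussian proposal is symmetric, the Hastings correction term cancels and the acceptance ratio reduces to the ratio of target densities; this is the Metropolis special case noted in section 4.5.4 of the contents.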
This book is concerned with the processing of signals that have been sampled and digitized. The fundamental theory behind Digital Signal Processing has been in existence for decades and has extensive applications in the fields of speech and data communications, biomedical engineering, acoustics, sonar, radar, seismology, oil exploration, instrumentation and audio signal processing, to name but a few [87]. The term "Digital Signal Processing", in its broadest sense, could apply to any operation carried out on a finite set of measurements for whatever purpose. A book on signal processing would usually contain detailed descriptions of the standard mathematical machinery often used to describe signals. It would also motivate an approach to real-world problems based on concepts and results developed in linear systems theory, which make use of some rather interesting properties of the time and frequency domain representations of signals. While this book assumes some familiarity with traditional methods, the emphasis is altogether quite different: the aim is to describe general methods for carrying out optimal signal processing.
