
Showing posts from April, 2022

Seminar 28 April @ 12 pm

On arbitrarily underdispersed discrete distributions

Date: Thursday, 28 April 2022
Time: 12 pm AEDT
Speaker: Dr Alan Huang (UQ)
Contact the organizer: Andriy Olenko, a.olenko@latrobe.edu.au
Abstract: We review a range of generalized count distributions, investigating which (if any) can be arbitrarily underdispersed, i.e., have a variance that can be made arbitrarily small relative to the mean. A philosophical implication is that models failing this criterion should perhaps not be considered "statistical models" under the extendibility criterion of McCullagh (2002). Four practical implications will be discussed. We suggest that all generalizations of the Poisson distribution be tested against this property.
Zoom meeting link:
Join from a PC, Mac, iOS or Android: https://latrobe.zoom.us/j/98357628534
Or iPhone one-tap (Australia Toll): +61280152088,98357628534#
Or Telephone:
Dial: +61 2 8015 2088
Meeting ID: 983 5762 8534
International numbers available: https://l
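The underdispersion property described in the abstract is easy to probe numerically. The sketch below is not from the talk; it is a minimal illustration with hypothetical parameters. It computes the dispersion index (variance divided by mean) directly from a pmf: the Poisson index is always 1, whereas a binomial with the same mean has index 1 - p, which can be made arbitrarily close to zero.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def dispersion_index(pmf, support):
    # variance / mean, computed directly from the pmf over a finite support
    mean = sum(k * pmf(k) for k in support)
    var = sum((k - mean) ** 2 * pmf(k) for k in support)
    return var / mean

# Poisson: variance equals mean, so the index is 1 -- it can never be underdispersed.
di_pois = dispersion_index(lambda k: poisson_pmf(k, 2.0), range(60))

# Binomial(n=2, p=0.99): mean 1.98, variance 0.0198, index 1 - p = 0.01.
# Sending p -> 1 drives the index toward 0: arbitrarily underdispersed.
di_bin = dispersion_index(lambda k: binom_pmf(k, 2, 0.99), range(3))
```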

Seminar 22 April @ 4 pm

Twisted: Improving particle filters by learning modified paths

Date: Friday, 22 April 2022
Time: 4 pm AEDT
Speaker: Dr Joshua Bon (QUT)
Abstract: Particle filters, and sequential Monte Carlo (SMC) more generally, operate by propagating weighted samples (or particles) through a sequence of distributions. Such a sequence is characterised by a Feynman-Kac model (or path measure) and is chosen for the inferential task at hand. One can also define twisted Feynman-Kac models, which preserve the inferential target but provide a more efficient sequence of distributions (or path) for the SMC algorithm to use. Optimally twisted models define perfect Monte Carlo samplers and are therefore an important concept for SMC algorithms. We investigate how to learn and use twisted Feynman-Kac models in situations where the original model involves difficult or intractable transition dynamics. This extends existing work which relies on twisting the model analytically. We achieve twisting via Monte Carlo a
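For readers unfamiliar with the setting, the baseline that twisting aims to improve is the bootstrap particle filter, which propagates weighted particles through the (untwisted) transition dynamics and reweights them by the observation density. The sketch below is not the method of the talk; it is a minimal illustration on a hypothetical linear-Gaussian toy model with made-up parameters:

```python
import math
import random

def bootstrap_particle_filter(ys, n_particles=500, phi=0.9,
                              sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the toy state-space model
        x_t = phi * x_{t-1} + N(0, sigma_x^2),   y_t = x_t + N(0, sigma_y^2).
    Returns the filtering means E[x_t | y_{1:t}]."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma_x) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate each particle through the transition dynamics
        xs = [phi * x + rng.gauss(0.0, sigma_x) for x in xs]
        # reweight by the observation density (unnormalized Gaussian)
        ws = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling to combat weight degeneracy
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means
```

A twisted model would replace the blind transition step with a proposal that looks ahead at future observations; the reweight-and-resample skeleton stays the same.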

Seminar 14 April @ 4 pm

Statistical analysis of machine learning methods

Date: Thursday, 14 April 2022
Time: 4 pm - 5 pm
Speaker: Professor Johannes Schmidt-Hieber
Abstract: Recently, a lot of progress has been made on the theoretical understanding of machine learning methods. One very promising direction is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The talk surveys this field and describes future challenges.
Zoom link: Please contact Yanrong Yang (yanrong.yang@anu.edu.au) to obtain the Zoom link for this seminar.

Seminar 12 April @ 16:00 UTC (3:00 AEDT)

Statistical Challenges in Stellar Parameter Estimation from Theory and Data

Date: Tuesday, 12 April 2022
Time: 16:00 UTC (3:00 AEDT)
Speaker: Josh Speagle (University of Toronto, Canada)
Contact the organizer: Andriy Olenko, a.olenko@latrobe.edu.au
Abstract: Understanding how the Milky Way fits into the broader galaxy population requires studying the properties of other galaxies as well as our own. While it is possible to observe the structure of other galaxies directly, understanding the structure of our own Galaxy from within requires inferring the 3-D positions, velocities, and other properties of billions of stars. In this talk, I will discuss some of the statistical challenges in inferring stellar parameters from modern photometric surveys such as Gaia and SDSS, focusing in particular on issues with existing theoretical stellar models, the complex nature of parameter uncertainties, and scalability to large datasets. I will then describe some ongoing work trying t

Seminar 7 April @ 11 am

Online Estimation for Functional Data

Date: Thursday, 7 April 2022
Time: 11 am - 12 noon
Speaker: Professor Fang Yao
Abstract: Functional data analysis has attracted considerable interest and now faces new challenges from data that increasingly arrive in a streaming manner. In this work, we propose a new online method to dynamically update the local linear estimates of the mean and covariance functions of functional data, which are the foundation of subsequent analysis. The kernel-type estimates can be decomposed into two sufficient statistics depending on the data-driven bandwidths. We propose to approximate the future optimal bandwidths by a dynamic sequence of candidates and combine the corresponding statistics across blocks to make an updated estimation. The proposed online method is easy to compute based on the stored sufficient statistics and the current data block. Based on the asymptotic normality of the online mean and covariance function estimates, the relative e
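The idea of combining stored sufficient statistics with the current data block can be illustrated with a much simpler estimator than the paper's local linear one. This sketch (an assumption-laden toy, not the authors' method) updates a running mean and variance blockwise from three stored numbers, never revisiting raw data:

```python
def block_stats(xs):
    # sufficient statistics for one data block: (count, sum, sum of squares)
    return (len(xs), sum(xs), sum(x * x for x in xs))

def merge(s1, s2):
    # combine stored statistics with the current block -- no raw data needed
    return tuple(a + b for a, b in zip(s1, s2))

def mean_var(stats):
    n, s, ss = stats
    mean = s / n
    return mean, ss / n - mean * mean

stats = (0, 0.0, 0.0)
for block in ([1.0, 2.0, 3.0], [4.0, 5.0], [6.0]):
    stats = merge(stats, block_stats(block))

mean, var = mean_var(stats)  # identical to computing over the full stream at once
# mean == 3.5, var == 35/12
```

The paper's extra difficulty, absent here, is that its kernel statistics depend on bandwidths that must be chosen before future data arrive, hence the dynamic sequence of candidate bandwidths.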

Seminar 8 April @ 2 pm

Detection boundaries for sparse gamma scale mixture models

Time: 2-3 pm, 8 April
Location: Carslaw 829 (University of Sydney) or Zoom at https://uni-sydney.zoom.us/j/87417817957
Speaker: Michael Stewart (University of Sydney)
Abstract: Mixtures of distributions from a parametric family are useful for various statistical problems, including nonparametric density estimation as well as model-based clustering. In clustering, an enduringly difficult problem is choosing the number of clusters; when using mixture models for model-based clustering, this corresponds (roughly) to choosing the number of components in the mixture. The simplest version of this model selection problem is choosing between a known single-component mixture and a "contaminated" version to which a second, unknown component is added. Due to certain structural irregularities, many standard asymptotic results from hypothesis testing do not apply in these "mixture detection" problems, including those relating to p
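As a toy illustration of the "contaminated" setup (hypothetical parameters; not the detection-boundary analysis of the talk), one can simulate from a two-component gamma scale mixture in which a small fraction eps of draws comes from a rescaled component:

```python
import random

def sample_contaminated_gamma(n, shape=2.0, scale0=1.0, scale1=3.0,
                              eps=0.1, seed=0):
    """Draw n samples from a two-component gamma scale mixture:
    with prob 1 - eps a 'null' Gamma(shape, scale0) draw,
    with prob eps a contaminating Gamma(shape, scale1) draw."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        scale = scale1 if rng.random() < eps else scale0
        out.append(rng.gammavariate(shape, scale))
    return out

xs = sample_contaminated_gamma(10000)
# mixture mean: shape * ((1 - eps) * scale0 + eps * scale1) = 2 * (0.9 + 0.3) = 2.4
```

The detection problem is to decide from such draws whether eps = 0 (pure single component) or eps > 0, which is hard precisely when eps is small and scale1 is close to scale0.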