UNSW Seminar: Matthew K. Tam (UniMelb)

Title: Splitting Algorithms for Training GANs

Speaker: Matthew Tam (University of Melbourne)

Date: Thu, 14/05/2020 – 11:05am

Venue: Zoom meeting (connection details here)

Abstract: Generative adversarial networks (GANs) are an approach to fitting generative models over complex structured spaces. Within this framework, the fitting problem is posed as a zero-sum game between two competing neural networks which are trained simultaneously. Mathematically, this problem takes the form of a saddle-point problem, a well-known example of the type of problem on which the usual (stochastic) gradient descent-type approaches used for training neural networks fail. In this talk, we rectify this shortcoming by proposing a new method for training GANs that (i) has theoretical guarantees of convergence and (ii) does not increase the per-iteration complexity compared with gradient descent. The theoretical analysis is performed within the framework of monotone operator splitting.
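For orientation, the failure mode mentioned in the abstract can be seen on a toy bilinear game. The Python sketch below is a generic illustration, not the method from the talk: simultaneous gradient descent/ascent on min_x max_y xy drifts away from the saddle point at the origin, while the classical extragradient step (a standard remedy from the monotone-operator literature) converges. All names and step sizes are illustrative.

```python
# Toy illustration (not the method from the talk): on the bilinear saddle-point
# problem min_x max_y x*y, simultaneous gradient descent/ascent drifts away from
# the saddle point (0, 0), while the classical extragradient step converges.
import numpy as np

def grad(x, y):
    # Partial derivatives of f(x, y) = x*y: df/dx = y, df/dy = x.
    return y, x

eta = 0.1                  # step size (illustrative)
x_gda, y_gda = 1.0, 1.0    # iterate for simultaneous gradient descent/ascent
x_eg, y_eg = 1.0, 1.0      # iterate for extragradient

for _ in range(100):
    # Simultaneous GDA: descend in x, ascend in y, using gradients at the
    # current point only.
    gx, gy = grad(x_gda, y_gda)
    x_gda, y_gda = x_gda - eta * gx, y_gda + eta * gy

    # Extragradient: take a trial step, then update using the gradients
    # evaluated at the trial point.
    gx, gy = grad(x_eg, y_eg)
    x_half, y_half = x_eg - eta * gx, y_eg + eta * gy
    gx, gy = grad(x_half, y_half)
    x_eg, y_eg = x_eg - eta * gx, y_eg + eta * gy

print("GDA distance to saddle point:          ", np.hypot(x_gda, y_gda))  # grows
print("Extragradient distance to saddle point:", np.hypot(x_eg, y_eg))    # shrinks
```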

UNSW Seminar: Santiago Badia, Monash University

Title: Gridap: Grid-based PDE approximations in Julia

Speaker: Santiago Badia, Monash University

Date: Tue, 21/04/2020 – 11:00am

Venue: Zoom link: https://monash.zoom.us/j/579915360

Abstract: We present Gridap, a novel computational framework for the grid-based approximation of PDEs in the Julia programming language. The main motivation behind this library is to provide an easy-to-use framework for the development of complex PDE solvers in a flexible style, close to interpreted languages like Python, without sacrificing the performance of compiled languages.

Further details: https://www.maths.unsw.edu.au/seminars/2020-04/gridap-grid-based-pde-approximations-julia

UNSW Seminar: Lindon Roberts, ANU

Title: Derivative-free optimisation for least-squares problems

Speaker: Lindon Roberts, Australian National University

Date: Thu, 16/04/2020 – 11:05am

Venue: RC-4082, The Red Centre, UNSW (to request remote connection via Zoom, email a.schaeffer (at) unsw.edu.au)

Abstract: Least-squares problems (such as parameter estimation) are ubiquitous across quantitative disciplines. Optimisation algorithms for solving such problems are numerous and well-established. However, in cases where models are computationally expensive, black box, or noisy, classical algorithms can be impractical or even fail. Derivative-free optimisation (DFO) methods provide an alternative approach which can handle these settings. In this talk, Lindon will introduce a derivative-free version of the classical Gauss-Newton method, discuss its theoretical guarantees and software implementation, and describe applications of this technique to parameter estimation of global climate models and image reconstruction.
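As a rough sketch of the structure behind such methods, the Python snippet below runs a Gauss-Newton iteration in which the Jacobian is approximated purely from residual evaluations via forward differences. This is only an illustration: the derivative-free Gauss-Newton method discussed in the talk builds its Jacobian model by interpolation within a trust region rather than by finite differencing, and the helper names (fd_jacobian, gauss_newton_df) and the test problem are invented for this example.

```python
# Illustrative sketch only: a Gauss-Newton iteration in which the Jacobian is
# approximated purely from residual evaluations (forward differences). The
# method discussed in the talk instead builds its Jacobian model by
# interpolation within a trust region; helper names and the test problem below
# are invented for this example.
import numpy as np

def fd_jacobian(res, x, h=1e-6):
    """Forward-difference approximation of the Jacobian of res at x."""
    r0 = res(x)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (res(xp) - r0) / h
    return J

def gauss_newton_df(res, x0, iters=20):
    """Plain Gauss-Newton using only residual evaluations (no globalisation)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = res(x)
        J = fd_jacobian(res, x)
        # Linearised least-squares subproblem: min_d || J d + r ||^2.
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + d
    return x

# Example: fit the model a * exp(b * t) to noisy data generated with a=2, b=-1.5.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
print(gauss_newton_df(residual, [1.0, -1.0]))  # roughly [2.0, -1.5]
```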

UNSW Seminar: Boris Kashin, Russian Academy of Sciences

Title: Some theorems on the restriction of operators to coordinate subspaces and estimates of Ф-widths
Speaker: Boris Kashin, Steklov Mathematical Institute of the Russian Academy of Sciences
Date: Tue, 17/03/2020 – 11:05am
Venue: RC-4082, The Red Centre, UNSW (to request remote connection, email qlegia (at) unsw.edu.au)
Abstract: Theorems of various kinds concerning the restriction of operators to coordinate subspaces, and their applications in analysis and approximation theory, will be discussed.

RMITOpt Seminar: Matthew Tam, University of Melbourne

Speaker: Dr Matthew Tam, School of Mathematics and Statistics at the University of Melbourne

Title: Algorithms derived from dynamical systems

Date and Time: Friday, March 13th, 2020, 3.00pm – 4.00pm.

Location: AGR Building 15, level 03, room 10 (to request remote connection via Zoom, email andy.eberhard (at) rmit.edu.au)

Abstract: The study of continuous-time dynamical systems associated with iterative algorithms for solving optimisation problems has a long history which can be traced back at least to the 1950s. The relationship between the continuous and discrete versions of an algorithm provides a unifying perspective which gives insights into their behaviour and properties. In this talk, I will report on new algorithms for solving minimax problems which were discovered by exploiting this connection.
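The prototypical instance of this continuous/discrete correspondence is standard background rather than the new material of the talk: discretising the gradient flow with an explicit or implicit Euler step recovers gradient descent or the proximal point method, respectively, as sketched below.

```latex
% Gradient flow: the continuous-time system associated with minimising f,
\dot{x}(t) = -\nabla f\bigl(x(t)\bigr).
% Explicit (forward) Euler discretisation with step size \gamma > 0 recovers
% gradient descent,
x_{k+1} = x_k - \gamma \nabla f(x_k),
% while implicit (backward) Euler discretisation yields the proximal point
% iteration,
x_{k+1} = \operatorname{prox}_{\gamma f}(x_k)
        = \operatorname*{arg\,min}_{x} \Bigl\{ f(x) + \tfrac{1}{2\gamma}\lVert x - x_k \rVert^2 \Bigr\}.
```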

UNSW Seminar: Vladimir Temlyakov, Russian Academy of Sciences

Title: Remarks on numerical integration, discrepancy, and diaphony

Speaker: Vladimir Temlyakov, Steklov Mathematical Institute of the Russian Academy of Sciences

Date: Tue, 10/03/2020 – 11:05am

Venue: RC-4082, The Red Centre, UNSW (to request remote connection, email qlegia (at) unsw.edu.au)

Abstract: The goal of this talk is twofold. First, we present a unified way of formulating numerical integration problems from both approximation theory and discrepancy theory. Second, we discuss some upper and lower bounds for a recently developed type of discrepancy: the smooth discrepancy.
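For context, the classical bridge between numerical integration and discrepancy theory is the Koksma-Hlawka inequality, recalled below; the smooth discrepancy referred to in the abstract is a recently developed variant of this classical notion, and its precise definition is part of the talk.

```latex
% Star discrepancy of a point set x_1, ..., x_N in [0,1]^d:
D_N^*(x_1,\dots,x_N)
  = \sup_{b \in [0,1]^d}
    \biggl| \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{[0,b)}(x_i)
            - \prod_{j=1}^{d} b_j \biggr| .
% Koksma-Hlawka inequality: the equal-weight cubature error is bounded by the
% Hardy-Krause variation of the integrand times the star discrepancy:
\biggl| \int_{[0,1]^d} f(x)\,\mathrm{d}x - \frac{1}{N} \sum_{i=1}^{N} f(x_i) \biggr|
  \le V_{\mathrm{HK}}(f)\, D_N^*(x_1,\dots,x_N).
```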

RMITOpt Seminar: Patrick Johnstone, Rutgers Business School

Speaker: Dr. Patrick Johnstone, MSIS Department of the Rutgers Business School.

Title: Projective Splitting: A New Breed of First-Order Proximal Algorithms

Date and Time: Friday, February 28th, 2020, 3.30pm – 4.30pm (Talk & Q/A)

Location: AGR Building 15, level 03, room 10 (to request remote connection, email andy.eberhard (at) rmit.edu.au)

Abstract: Projective splitting is a proximal operator splitting framework for solving convex optimization problems and monotone inclusions. Unlike many operator splitting methods, projective splitting is not based on a fixed-point iteration. Instead, at each iteration a separating hyperplane is constructed between the current point and the primal-dual solution set. This gives more freedom in terms of stepsize selection, incremental updates, and asynchronous parallel computation. Despite these advantages, projective splitting had two important drawbacks which we have rectified in this work. First, the method uses calculations entirely based on the proximal operator of the functions in the objective. However, for many functions this is intractable. We develop new calculations based on forward steps – explicit evaluations of the gradient – whenever the gradient is Lipschitz continuous. This extends the scope of the method to a much wider class of problems. Second, no convergence rates were previously known for the method. We derive an O(1/k) rate for convex optimization problems, which is unimprovable for this algorithm and problem class. Furthermore, we derive a linear convergence rate under certain strong convexity and smoothness conditions.
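For orientation, the geometric step at the heart of the framework is a projection onto a half-space determined by the separating hyperplane; the sketch below shows only this update, in generic notation, and omits how the hyperplane is constructed from proximal and forward steps, which is the substance of the method.

```latex
% Generic form of the projection step. Suppose the affine function
% \varphi_k(z) = \langle a_k, z \rangle - \beta_k satisfies \varphi_k \le 0 on
% the primal-dual solution set (its construction from proximal and forward
% steps is the core of the method and is omitted here). The next iterate is
% the projection of z_k onto the half-space { z : \varphi_k(z) \le 0 }:
z_{k+1} = z_k - \frac{\max\{0,\ \varphi_k(z_k)\}}{\lVert a_k \rVert^{2}}\, a_k .
```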
