VA & Opt Webinar: Scott Lindstrom

Title: A primal/dual computable approach to improving spiraling algorithms, based on minimizing spherical surrogates for Lyapunov functions

Speaker: Scott Lindstrom (Curtin University)

Date and Time: June 2nd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization problems are frequently tackled by iterative application of an operator whose fixed points allow for fast recovery of locally optimal solutions. Under lightweight assumptions, stability is equivalent to the existence of a function—called a Lyapunov function—that encodes structural information about both the problem and the operator. Lyapunov functions are usually hard to find, but if a practitioner had a priori knowledge—or a reasonable guess—about its structure, they could equivalently tackle the problem by seeking to minimize the Lyapunov function directly. We introduce a class of methods that does this. Interestingly, for certain feasibility problems, the circumcentered-reflection method (CRM) is an existing member of this class. However, CRM may not lend itself well to primal/dual adaptation, for reasons we show. Motivated by the discovery of our new class, we experimentally demonstrate the success of one of its other members, implemented in a primal/dual framework.
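
For readers unfamiliar with CRM, the sketch below illustrates the basic circumcentering step for a two-hyperplane feasibility problem. This is a standard textbook instance, not code from the talk, and the hyperplane data and iteration count are made up for illustration.

```python
import numpy as np

def reflect_hyperplane(x, a, alpha):
    """Reflect x across the hyperplane {y : a.y = alpha}."""
    return x - 2.0 * (a @ x - alpha) / (a @ a) * a

def circumcenter(p0, p1, p2):
    """Point in the affine hull of p0, p1, p2 that is equidistant from all three."""
    V = np.column_stack([p1 - p0, p2 - p0])   # basis of the affine hull (relative to p0)
    G = V.T @ V                               # Gram matrix
    rhs = 0.5 * np.array([V[:, 0] @ V[:, 0], V[:, 1] @ V[:, 1]])
    coeffs, *_ = np.linalg.lstsq(G, rhs, rcond=None)  # least squares copes with degenerate triangles
    return p0 + V @ coeffs

def crm_step(x, a1, b1, a2, b2):
    """One circumcentered-reflection step for: find x in {a1.x = b1} ∩ {a2.x = b2}."""
    y1 = reflect_hyperplane(x, a1, b1)
    y2 = reflect_hyperplane(y1, a2, b2)
    return circumcenter(x, y1, y2)

# Toy example: two hyperplanes in R^3 (made-up data).
a1, b1 = np.array([1.0, 2.0, 0.0]), 1.0
a2, b2 = np.array([0.0, 1.0, -1.0]), 2.0
x = np.array([5.0, -3.0, 2.0])
for _ in range(20):
    x = crm_step(x, a1, b1, a2, b2)
print(x, a1 @ x - b1, a2 @ x - b2)  # both residuals should be ~0
```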

VA & Opt Webinar: Guoyin Li

Title: Proximal methods for nonsmooth and nonconvex fractional programs: when sparse optimization meets fractional programs

Speaker: Guoyin Li (UNSW)

Date and Time: May 26th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Nonsmooth and nonconvex fractional programs are ubiquitous and also highly challenging. This class includes the composite optimization problems studied extensively of late, and encompasses many important modern optimization problems arising from diverse areas, such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing, the robust Sharpe ratio optimization problems in finance, and the sparse generalized eigenvalue problem in discriminant analysis. In this talk, we will introduce extrapolated proximal methods for solving nonsmooth and nonconvex fractional programs and analyse their convergence behaviour. Interestingly, we will show that the proposed algorithm exhibits linear convergence for the sparse generalized eigenvalue problem with either cardinality regularization or sparsity constraints. This is achieved by identifying the explicit desingularization function of the Kurdyka-Łojasiewicz inequality for the merit function of the fractional optimization models. Finally, if time permits, we will present some preliminary encouraging numerical results for the proposed methods for sparse signal reconstruction and sparse Fisher discriminant analysis.
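
As general background (not the speaker's algorithm), the workhorse in such proximal methods is the proximal operator, which has a closed form for the sparse regularizers mentioned above: soft-thresholding for the ℓ1 norm and hard-thresholding for a cardinality (ℓ0) penalty. A minimal sketch:

```python
import numpy as np

def prox_l1(v, lam):
    """Prox of lam*||x||_1 (soft-thresholding):
    argmin_x 0.5*||x - v||^2 + lam*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l0(v, lam):
    """Prox of lam*||x||_0 (hard-thresholding):
    keeps entries whose magnitude exceeds sqrt(2*lam), zeros the rest."""
    return np.where(np.abs(v) > np.sqrt(2.0 * lam), v, 0.0)

v = np.array([0.3, -1.2, 0.05, 2.0])
print(prox_l1(v, 0.5))   # [ 0.  -0.7  0.   1.5]
print(prox_l0(v, 0.5))   # [ 0.  -1.2  0.   2. ]
```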

Postdoctoral Fellow – Mathematical Optimization

The Postdoctoral Fellow will undertake collaborative and self-directed research on an ARC-funded Discovery Project titled “Data-driven multistage robust optimization”. The primary research goals are to make a major contribution to the understanding of optimization in the face of data uncertainty, to develop mathematical principles for broad classes of multi-stage robust optimization problems, to design associated data-driven numerical methods for solving these problems, and to provide an advanced optimization framework for a wide range of real-life models of multi-stage technical decision-making under uncertainty.

For more information, please refer to

https://external-careers.jobs.unsw.edu.au/cw/en/job/501990/postdoctoral-fellow-mathematical-optimization

VA & Opt Webinar: Yura Malitsky

Title: Adaptive gradient descent without descent

Speaker: Yura Malitsky (Linköping University)

Date and Time: May 19th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk I will present some recent results for the most classical optimization method — gradient descent. We will show that a simple, zero-cost rule is sufficient to completely automate gradient descent. The method adapts to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. The presentation is based on joint work with K. Mishchenko; see arxiv.org/abs/1910.09529.
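
For context, the adaptive rule in the cited paper estimates the local smoothness from successive iterates and gradients. The sketch below, on a made-up quadratic, follows my reading of that rule; constants, initialization, and safeguards should be checked against the paper.

```python
import numpy as np

# Toy smooth objective: f(x) = 0.5*x'Ax - b'x (made-up data).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + np.eye(10)
b = rng.standard_normal(10)
grad = lambda x: A @ x - b

x_prev = np.zeros(10)
x = rng.standard_normal(10)
lam_prev, theta = 1e-6, 1e9        # tiny initial step, large initial step-size ratio

for _ in range(200):
    g, g_prev = grad(x), grad(x_prev)
    # Local smoothness estimate ||g - g_prev|| / ||x - x_prev|| (guarded against division by zero).
    L_est = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-12)
    lam = min(np.sqrt(1.0 + theta) * lam_prev,   # do not let the step grow too fast
              0.5 / max(L_est, 1e-12))           # stay within the locally estimated curvature
    x_prev, x = x, x - lam * g
    theta, lam_prev = lam / lam_prev, lam

print(np.linalg.norm(grad(x)))  # gradient norm after 200 adaptive steps
```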

VA & Opt Webinar: Hung Phan

Title: Adaptive splitting algorithms for the sum of operators

Speaker: Hung Phan (University of Massachusetts Lowell)

Date and Time: May 12th, 2021, 11:00 AEST (Register here for remote connection via Zoom)

Abstract: A general optimization problem can often be reduced to finding a zero of a sum of multiple (maximally) monotone operators, which is computationally challenging to handle as a whole. This motivates the development of splitting algorithms, which simplify the computations by dealing with each operator separately, hence the name “splitting”. Some of the most successful splitting algorithms in applications are the forward-backward algorithm, the Douglas-Rachford algorithm, and the alternating direction method of multipliers (ADMM). In this talk, we discuss some adaptive splitting algorithms for finding a zero of the sum of operators. The main idea is to adapt the algorithm parameters to the generalized monotonicity of the operators so that the generated sequence converges to a fixed point.
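
As a concrete (non-adaptive) reference point, here is a minimal Douglas-Rachford sketch for the sum of two proximable functions, applied to a toy ℓ1-regularized denoising problem with made-up data; the adaptive parameter choices discussed in the talk are not reproduced here.

```python
import numpy as np

# Toy problem: minimize 0.5*||x - d||^2 + mu*||x||_1
# (i.e., find a zero of the sum of the two subdifferential operators).
rng = np.random.default_rng(1)
d = rng.standard_normal(50)
mu, gamma = 0.3, 1.0          # regularization weight and DR step parameter

prox_f = lambda z: (z + gamma * d) / (1.0 + gamma)                        # prox of 0.5*||.-d||^2
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - gamma * mu, 0.0)   # prox of mu*||.||_1

z = np.zeros(50)
for _ in range(200):
    x = prox_f(z)              # "backward" step on the first operator
    y = prox_g(2.0 * x - z)    # reflected step on the second operator
    z = z + y - x              # Douglas-Rachford update of the governing sequence
x = prox_f(z)                  # x approaches a minimizer

# Sanity check against the closed-form solution (soft-thresholding of d).
print(np.max(np.abs(x - np.sign(d) * np.maximum(np.abs(d) - mu, 0.0))))
```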