VA & Opt Webinar: Christopher Price (University of Canterbury)

Title: A direct search method for constrained optimization via the rounded ℓ1 penalty function

Speaker: Christopher Price (University of Canterbury)

Date and Time: September 16th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: This talk looks at the constrained optimization problem in which the objective and constraints are Lipschitz continuous black box functions. The approach uses a sequence of smoothed and offset ℓ1 penalty functions. The method generates an approximate minimizer of each penalty function, and then adjusts the offsets and other parameters. The smoothing is steadily reduced, ultimately revealing the ℓ1 exact penalty function. The method preferentially uses a discrete quasi-Newton step, backed up by a global direction search. Theoretical convergence results are given for the smooth and non-smooth cases subject to relevant conditions. Numerical results are presented on a variety of problems with non-smooth objective or constraint functions; these results show the method is effective in practice.
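
For a concrete picture, here is a minimal numerical sketch of a smoothed, offset ℓ1 penalty of the kind described above; the particular smoothing of max(t, 0) and all names are illustrative assumptions, not the construction from the talk.

```python
import numpy as np

def smooth_hinge(t, mu):
    """Smooth approximation of max(t, 0); tends to the hinge as mu -> 0.
    This particular form is one common choice, assumed for illustration."""
    return 0.5 * (t + np.sqrt(t * t + mu * mu))

def smoothed_l1_penalty(x, f, gs, weights, offsets, mu):
    """f(x) + sum_i w_i * smooth_hinge(g_i(x) - o_i, mu): a smoothed, offset
    l1 penalty for the constraints g_i(x) <= 0. As mu and the offsets are
    driven to 0, it approaches the exact l1 penalty function."""
    return f(x) + sum(w * smooth_hinge(g(x) - o, mu)
                      for g, w, o in zip(gs, weights, offsets))

# Hypothetical example: minimize (x - 2)^2 subject to x <= 1.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0                      # feasible iff g(x) <= 0
for x in (0.0, 1.0, 1.5):
    print(x, smoothed_l1_penalty(x, f, [g], [10.0], [0.0], 1e-2))
```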

VA & Opt Webinar: Christiane Tammer (MLU)

Title: Subdifferentials and Lipschitz properties of translation invariant functionals and applications

Speaker: Christiane Tammer (MLU)

Date and Time: September 9th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk, we deal with translation invariant functionals and their application to deriving necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets.

Translation invariant functionals are a natural and powerful tool for the separation of not necessarily convex sets and scalarization. There are many applications of translation invariant functionals in nonlinear functional analysis, vector optimization, set optimization, optimization under uncertainty, mathematical finance as well as consumer and production theory.
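
For orientation, the prototypical translation invariant functional (often associated with Gerstewitz and Tammer–Weidner) can be written as follows; the setting of the talk is more general, and in particular the shifted set C need not be convex.

```latex
% Shifted set C \subseteq Y and direction k \in Y with C + [0,\infty)\,k \subseteq C:
\varphi_{C,k}(y) := \inf\{\, t \in \mathbb{R} \;:\; y \in t\,k - C \,\}.
% Translation invariance along k:
% \varphi_{C,k}(y + s\,k) = \varphi_{C,k}(y) + s  for all s \in \mathbb{R}.
```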

The primary objective of this talk is to establish formulas for basic and singular subdifferentials of translation invariant functionals and to study important properties such as monotonicity, the PSNC property, the Lipschitz behavior, etc. of these nonlinear functionals without assuming that the shifted set involved in the definition of the functional is convex. The second objective is to propose a new way to scalarize a set-valued optimization problem. It allows us to study necessary conditions for minimal solutions in a very broad setting in which the domination set is not necessarily convex or solid or conical. The third objective is to apply our results to vector-valued approximation problems.

This is a joint work with T.Q. Bao (Northern Michigan University).

VA & Opt Webinar: Gerd Wachsmuth (BTU)

Title: New Constraint Qualifications for Optimization Problems in Banach Spaces based on Asymptotic KKT Conditions

Speaker: Gerd Wachsmuth (BTU)

Date and Time: September 2nd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization theory in Banach spaces suffers from a lack of available constraint qualifications. Not only are very few constraint qualifications known, but those that exist are often violated even in simple applications. This is very much in contrast to finite-dimensional nonlinear programs, where a large number of constraint qualifications is known. Since these constraint qualifications are usually defined using the set of active inequality constraints, it is difficult to extend them to the infinite-dimensional setting. One exception is a recently introduced sequential constraint qualification based on asymptotic KKT conditions. This talk shows that this so-called asymptotic KKT regularity allows suitable extensions to the Banach space setting in order to obtain new constraint qualifications. The relation of these new constraint qualifications to existing ones is discussed in detail, and their usefulness is shown by several examples as well as an algorithmic application to the class of augmented Lagrangian methods.

This is a joint work with Christian Kanzow (Würzburg) and Patrick Mehlitz (Cottbus).
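
For context, the finite-dimensional prototype of the asymptotic (sequential) KKT conditions on which this approach builds can be sketched as follows; the talk's contribution is the extension of such conditions to the Banach space setting.

```latex
% Finite-dimensional prototype for  \min f(x)  s.t.  g(x) \le 0:
% x^* satisfies the asymptotic KKT conditions if there exist sequences
% x^k \to x^* and multipliers \lambda^k \ge 0 such that
\nabla f(x^k) + \nabla g(x^k)^{\top} \lambda^k \to 0
\quad \text{and} \quad
\min\{-g(x^k), \lambda^k\} \to 0 \quad \text{componentwise}.
```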

VA & Opt Webinar: Jein-Shan Chen (NTNU)

Title: Two approaches for absolute value equation by using smoothing functions

Speaker: Jein-Shan Chen (NTNU)

Date and Time: August 26th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk, we present two approaches for solving the absolute value equation. These two approaches are based on using smoothing functions. In particular, there are several systematic ways of constructing smoothing functions. Numerical experiments with comparisons are reported, which suggest what kinds of smoothing functions work well with the proposed approaches.
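
As a concrete illustration of the smoothing idea (not necessarily either of the approaches from the talk), the absolute value equation Ax − |x| = b can be attacked by replacing |x| with a smooth surrogate and applying Newton's method while the smoothing parameter is driven to zero; the surrogate sqrt(x² + μ²) below is one common choice, and all names are assumptions.

```python
import numpy as np

def solve_ave(A, b, mu=1.0, tol=1e-10, max_iter=100):
    """Solve the absolute value equation Ax - |x| = b by smoothing |x| with
    sqrt(x^2 + mu^2) and applying Newton's method while shrinking mu.
    The smoothing function here is one common choice, used for illustration."""
    x = np.zeros(len(b))
    for _ in range(max_iter):
        s = np.sqrt(x * x + mu * mu)        # smooth surrogate for |x|
        F = A @ x - s - b                   # smoothed residual
        if np.linalg.norm(F) < tol and mu < tol:
            break
        J = A - np.diag(x / s)              # Jacobian of the smoothed map
        x = x - np.linalg.solve(J, F)
        mu *= 0.5                           # steadily reduce the smoothing
    return x

# Tiny test: with A = 3*I and x* = [1, -2], set b = A x* - |x*|.
A = 3.0 * np.eye(2)
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
print(solve_ave(A, b))                      # should approach [1, -2]
```

The toy test is run in the regime where all singular values of A exceed 1, in which the absolute value equation is known to have a unique solution for every b.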

Invitation for Submissions to the Topical Collection “Mathematics of Computation and Optimisation” (MoCaO) in Advances in Computational Mathematics (ACOM)

Objective:
In an effort to promote interaction between researchers in Computational Mathematics and Optimization, we plan a collection of articles in ACOM whose topics span both research fields.

Submissions should be made via the ACOM submission system:

https://www.springer.com/journal/10444/updates/18256258

Guest Editors:
Jerome Droniou (Monash University, Melbourne, Australia)
Andrew Eberhard (RMIT, Melbourne, Australia)
Guoyin Li (University of New South Wales, Sydney, Australia)
Russell Luke (University of Goettingen, Germany)
Thanh Tran (University of New South Wales, Sydney, Australia)

The board is chaired by Thanh Tran.

Submission deadline: July 31, 2021.

VA & Opt Webinar: Hieu Thao Nguyen (TU Delft)

Title: Projection Algorithms for Phase Retrieval with High Numerical Aperture

Speaker: Hieu Thao Nguyen (TU Delft)

Date and Time: August 19th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We develop the mathematical framework in which the class of projection algorithms can be applied to high numerical aperture (NA) phase retrieval. Within this framework we first analyze the basic steps of solving this problem by projection algorithms and establish the closed forms of all the relevant prox-operators. We then study the geometry of the high-NA phase retrieval problem, and the obtained results are subsequently used to establish convergence criteria for projection algorithms. Making use of the vectorial point-spread function (PSF) is, on the one hand, the key difference between this work and the literature on the mathematics of phase retrieval, which mostly deals with the scalar PSF. The results of this paper, on the other hand, can be viewed as extensions of those concerning projection methods for low-NA phase retrieval. Importantly, the improved performance of projection methods over other classes of phase retrieval algorithms in the low-NA setting now also becomes applicable to the high-NA case. This is demonstrated by the accompanying numerical results, which show that all available solution approaches for high-NA phase retrieval are outperformed by projection methods.
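
For readers new to projection methods in this area, here is a minimal scalar (low-NA) alternating-projections sketch between a Fourier-magnitude constraint and a support constraint; the high-NA vectorial setting of the talk replaces these simple projections with the prox-operators derived there. All names and the toy data are assumptions.

```python
import numpy as np

def alternating_projections(meas_amp, support, iters=200, seed=0):
    """Scalar (low-NA) prototype of projection-based phase retrieval:
    alternate a projection onto the Fourier-magnitude constraint with a
    projection onto a known-support constraint."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(meas_amp.shape) * support    # random start
    for _ in range(iters):
        X = np.fft.fft2(x)
        X = meas_amp * np.exp(1j * np.angle(X))  # impose measured magnitudes
        x = np.real(np.fft.ifft2(X)) * support   # impose realness and support
    return x

# Toy data: recover a small binary image from its Fourier magnitudes,
# assuming the support is known exactly.
truth = np.zeros((32, 32))
truth[12:20, 10:22] = 1.0
support = (truth > 0).astype(float)
recovered = alternating_projections(np.abs(np.fft.fft2(truth)), support)
```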

VA & Opt Webinar: Xiaoqi Yang (Hong Kong PolyU)

Title: On error bound moduli for locally Lipschitz and regular functions

Speaker: Xiaoqi Yang (Hong Kong PolyU)

Date and Time: August 12th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We first introduce for a closed and convex set two classes of subsets: the near and far ends relative to a point, and give some full characterizations for these end sets by virtue of the face theory of closed and convex sets. We provide some connections between closedness of the far (near) end and the relative continuity of the gauge (cogauge) for closed and convex sets. We illustrate that the distance from 0 to the outer limiting subdifferential of the support function of the subdifferential set, which is essentially the distance from 0 to the end set of the subdifferential set, is an upper estimate of the local error bound modulus. This upper estimate becomes tight for a convex function under some regularity conditions. We show that the distance from 0 to the outer limiting subdifferential set of a lower C^1 function is equal to the local error bound modulus.
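
For reference, one standard normalization of the local error bound modulus discussed above is the following (the papers below give the precise setting and the subdifferential estimates):

```latex
% f locally Lipschitz, S := \{x : f(x) \le f(\bar x)\}, \bar x \in S:
\operatorname{Er} f(\bar x) :=
  \liminf_{\substack{x \to \bar x \\ f(x) > f(\bar x)}}
  \frac{f(x) - f(\bar x)}{d(x, S)},
% so a local error bound  d(x, S) \le c\,(f(x) - f(\bar x))_+  holds near
% \bar x if and only if \operatorname{Er} f(\bar x) > 0.
```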


References:
Li, M.H., Meng, K.W. and Yang, X.Q., On far and near ends of closed and convex sets, J. Convex Anal. 27 (2020) 407–421.
Li, M.H., Meng, K.W. and Yang, X.Q., On error bound moduli for locally Lipschitz and regular functions, Math. Program. 171 (2018) 463–487.

VA & Opt Webinar: Evgeni Nurminski (FEFU)

Title: Practical Projection with Applications

Speaker: Evgeni Nurminski (FEFU, Vladivostok)

Date and Time: August 5th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Projection of a point onto a given set is a very common computational operation in an endless number of algorithms and applications. However, except for the simplest sets, projection is itself a nontrivial operation, often complicated by large dimension, computational degeneracy, nonuniqueness (which can arise even for orthogonal projection onto convex sets in certain situations), and so on. This talk presents some practical solutions, i.e. finite algorithms, for projection onto polyhedral sets, among them simplices, polytopes, polyhedra, and finitely generated cones, with some discussion of “nonlinearities”, decomposition, and parallel computations. We also consider applications of the projection operation in linear optimization and an epi-projection algorithm for convex optimization.
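
As a flavour of what such finite projection algorithms look like, here is the classical sorting-based Euclidean projection onto the unit simplex, one of the simplest polyhedral cases; it is a well-known textbook routine offered for illustration, not one of the algorithms from the talk.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1} by the
    classical sorting-based finite algorithm: sort descending, find the
    last index where shifting keeps the entry positive, then shift and clip."""
    u = np.sort(y)[::-1]                         # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)       # optimal shift
    return np.maximum(y + theta, 0.0)

print(project_to_simplex(np.array([0.5, 1.2, -0.3])))  # nonnegative, sums to 1
```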

VA & Opt Webinar: Akiko Takeda (University of Tokyo)

Title: Deterministic and Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization

Speaker: Akiko Takeda (University of Tokyo)

Date and Time: July 29th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Our work focuses on deterministic/stochastic gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer. Research on stochastic gradient methods for this problem class is quite limited, and until recently no non-asymptotic convergence results had been reported. After presenting a deterministic approach, we introduce simple stochastic gradient algorithms, for finite-sum and general stochastic optimization problems, which have superior convergence complexities compared to the current state of the art. We also compare our algorithms' performance in practice on empirical risk minimization problems.


This is based on joint work with Tianxiang Liu, Ting Kei Pong and Michael R. Metel.
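
To fix ideas on the problem class, below is a minimal stochastic proximal-gradient sketch for a smooth finite-sum loss plus a non-smooth, non-convex regularizer, using the ℓ0 penalty (whose prox is hard thresholding) as a stand-in regularizer. This illustrates the setting only, not the speakers' algorithms; all names are assumptions.

```python
import numpy as np

def prox_l0(x, t_lam):
    """Prox of t*lam*||.||_0: hard thresholding (keep x_i iff x_i^2 > 2*t*lam)."""
    return np.where(x * x > 2.0 * t_lam, x, 0.0)

def prox_sgd(grad_f_i, x0, n_samples, lam=0.1, step=0.01, epochs=30, seed=0):
    """Minimal stochastic proximal-gradient sketch for
    min_x (1/n) sum_i f_i(x) + lam * ||x||_0, i.e. a smooth loss with a
    non-smooth, non-convex regularizer. grad_f_i(x, i) returns the
    gradient of f_i at x."""
    x = x0.copy()
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n_samples):
            x = prox_l0(x - step * grad_f_i(x, i), step * lam)
    return x

# Hypothetical usage: least-squares losses f_i(x) = 0.5 * (a_i @ x - b_i)^2.
A = np.random.default_rng(1).standard_normal((50, 10))
b = A @ (np.arange(10) == 3).astype(float)       # sparse ground truth
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(prox_sgd(grad, np.zeros(10), 50))
```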

VA & Opt Webinar: Oliver Stein (KIT)

Title: A general branch-and-bound framework for global multiobjective optimization

Speaker: Oliver Stein (KIT)

Date and Time: July 22nd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We develop a general framework for branch-and-bound methods in multiobjective optimization. Our focus is on natural generalizations of notions and techniques from the single objective case. In particular, after the notions of upper and lower bounds on the globally optimal value from the single objective case have been transferred to upper and lower bounding sets on the set of nondominated points for multiobjective programs, we discuss several possibilities for discarding tests. They compare local upper bounds of the provisional nondominated sets with relaxations of partial upper image sets, where the latter can stem from ideal point estimates, from convex relaxations, or from relaxations by a reformulation-linearization technique.

The discussion of approximation properties of the provisional nondominated set leads to the suggestion for a natural selection rule along with a natural termination criterion. Finally we discuss some issues which do not occur in the single objective case and which impede some desirable convergence properties, thus also motivating a natural generalization of the convergence concept.

This is joint work with Gabriele Eichfelder, Peter Kirst, and Laura Meng.
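
To make the discarding idea concrete, the sketch below discards a box when a componentwise lower bound of its image is dominated by a point already in the provisional nondominated set. This is a simplified variant of the tests described above (which use local upper bounds and sharper relaxations); all names are assumed for illustration.

```python
import numpy as np

def dominates(u, v):
    """u dominates v: u <= v componentwise and u < v in some component."""
    return bool(np.all(u <= v) and np.any(u < v))

def discard_box(ideal_point, provisional_nondominated):
    """Simplified discarding test: if an ideal-point lower bound of the
    box's image (e.g. from interval arithmetic or a convex relaxation) is
    dominated by a provisional nondominated point, every image point of
    the box is dominated too, so the box cannot improve the set."""
    return any(dominates(p, ideal_point) for p in provisional_nondominated)

# Example with two objectives and provisional points (1, 2) and (2, 1):
pns = [np.array([1.0, 2.0]), np.array([2.0, 1.0])]
print(discard_box(np.array([1.5, 2.5]), pns))  # True: box can be discarded
print(discard_box(np.array([0.5, 0.5]), pns))  # False: box must be kept
```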
