VA & Opt Webinar: Yboon Garcia Ramos

Title: Characterizing quasiconvexity of the pointwise infimum of a family of arbitrary translations of quasiconvex functions

Speaker: Yboon Garcia Ramos (Universidad del Pacífico)

Date and Time: March 31st, 2021, 11:00 AEDT (Register here for remote connection via Zoom)

Abstract: In this talk we will present some results concerning the problem of preserving quasiconvexity when summing quasiconvex functions, and we will relate it to the problem of preserving quasiconvexity when taking the infimum of a family of quasiconvex functions. To develop our study, we introduce the notion of a quasiconvex family and establish various characterizations of this concept.

Joint work with Fabián Flores (Universidad de Concepción) and Nicolas Hadjisavvas (University of the Aegean).
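
As a quick background note (standard material, not taken from the talk): a function f is quasiconvex when

f(\lambda x + (1-\lambda) y) \le \max\{ f(x), f(y) \}  for all x, y and \lambda \in [0, 1],

equivalently, when every sublevel set \{x : f(x) \le c\} is convex. Quasiconvexity is not preserved by sums in general: for instance, f_1(x) = x^3 and f_2(x) = -x are both quasiconvex on the real line (being monotone), yet (f_1 + f_2)(x) = x^3 - x is not, since its sublevel set \{x : x^3 - x \le 0\} = (-\infty, -1] \cup [0, 1] is not convex.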

VA & Opt Webinar: Huynh Van Ngai

Title: Generalized Nesterov’s accelerated proximal gradient algorithms with convergence rate of order o(1/k^2)

Speaker: Huynh Van Ngai (University of Quy Nhon)

Date and Time: March 24th, 2021, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: The accelerated gradient method initiated by Nesterov is now recognized as one of the most powerful tools for solving smooth convex optimization problems. This method significantly improves the convergence rate of the function values, from the O(1/k) of the standard gradient method down to O(1/k^2). In this paper, we present two generalized variants of Nesterov’s accelerated proximal gradient method for solving composite convex optimization problems, in which the objective function is the sum of a smooth convex function and a nonsmooth convex part. We show that, with suitable choices of the parameter sequences, the convergence rate of the function values of the proposed method is actually of order o(1/k^2). In particular, when the objective function is p-uniformly convex for p > 2, the convergence rate is of order O(\ln k / k^{2p/(p-2)}), and the convergence is linear if the objective function is strongly convex. As a by-product, we derive a forward-backward algorithm generalizing the one by Attouch-Peypouquet [SIAM J. Optim., 26(3), 1824-1834, (2016)], which produces a convergent sequence whose function values converge at the rate o(1/k^2).
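
For readers unfamiliar with the baseline the abstract builds on, the sketch below shows the classical FISTA-type accelerated proximal gradient scheme for minimizing f(x) + g(x); it is only a minimal illustration of that standard method, not the generalized variants of the talk, and the function names, the constant step size 1/L and the LASSO example are illustrative assumptions.

import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, lipschitz, num_iters=500):
    """Classical FISTA-type accelerated proximal gradient for min f(x) + g(x).

    grad_f   : gradient of the smooth convex part f
    prox_g   : prox operator of g, called as prox_g(v, step)
    x0       : starting point (numpy array)
    lipschitz: Lipschitz constant of grad_f (step size is 1 / lipschitz)
    """
    step = 1.0 / lipschitz
    x_prev = x0.copy()
    y = x0.copy()
    t_prev = 1.0
    for _ in range(num_iters):
        # Forward (gradient) step on the smooth part, backward (prox) step on g.
        x = prox_g(y - step * grad_f(y), step)
        # Nesterov extrapolation parameter; the generalized variants in the talk
        # allow other parameter sequences here.
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)
        x_prev, t_prev = x, t
    return x_prev

# Illustrative usage: LASSO, f(x) = 0.5*||Ax - b||^2 (smooth), g(x) = lam*||x||_1.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad_f
    x_star = accelerated_proximal_gradient(grad_f, prox_g, np.zeros(100), L)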

Congratulations! Vera Roshchina (UNSW) has been awarded the 2021 Christopher Heyde Medal

Dear MoCaO members,

I would like to share some exciting news: Dr. Vera Roshchina (UNSW) has been awarded the 2021 Christopher Heyde Medal:

https://www.science.org.au/supporting-science/awards-and-opportunities/honorific-awardees/2021-awardees#heyde

The award’s purpose is to recognise outstanding research in the mathematical sciences by researchers up to 10 years post-PhD. Vera is an excellent mathematician, an active MoCaO member and a great person to work with.

Congratulations, Vera!!!!!!!

VA & Opt Webinar: David Bartl

Title: Every compact convex subset of matrices is the Clarke Jacobian of some Lipschitzian mapping

Speaker: David Bartl (Silesian University in Opava)

Date and Time: March 17th, 2021, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: Given a non-empty compact convex subset P of m×n matrices, we show constructively that there exists a Lipschitzian mapping g : R^n → R^m such that its Clarke Jacobian satisfies ∂g(0) = P.
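
For background (a standard definition, not part of the abstract): by Rademacher’s theorem a Lipschitzian mapping g : R^n → R^m is differentiable almost everywhere, and its Clarke (generalized) Jacobian at a point x is

\partial g(x) = \mathrm{co}\,\{ \lim_{k \to \infty} Jg(x_k) : x_k \to x,\ g \text{ differentiable at } x_k \},

i.e. the convex hull of all limits of Jacobian matrices taken at nearby points of differentiability. Such a set is always a non-empty compact convex set of m×n matrices; the talk establishes the converse, namely that every non-empty compact convex set of m×n matrices arises as ∂g(0) for some Lipschitzian g.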

LION15: The 15th Learning and Intelligent Optimization Conference, Athens, Greece, June 20-25, 2021

Conference website: https://lion15.sba-research.org/index.html
Submission link: https://easychair.org/conferences/?conf=lion15
Extended submission deadline: March 15, 2021

The large variety of heuristic algorithms for hard optimization problems raises numerous interesting and challenging issues. Practitioners using heuristic algorithms for hard optimization problems are confronted with the burden of selecting the most appropriate method, in many cases through expensive algorithm configuration and parameter tuning. Scientists seek theoretical insights and demand a sound experimental methodology for evaluating algorithms and assessing strengths and weaknesses. This effort requires a clear separation between the algorithm and the experimenter, who, in too many cases, is “in the loop” as a motivated intelligent learning component. LION deals with designing and engineering ways of “learning” about the performance of different techniques, and ways of using past experience about the algorithm behavior to improve performance in the future. Intelligent learning schemes for mining the knowledge obtained online or offline can improve the algorithm design process and simplify the applications of high-performance optimization methods. Combinations of different algorithms can further improve the robustness and performance of the individual components.

This meeting explores the intersections and uncharted territories between machine learning, artificial intelligence, energy, mathematical programming and algorithms for hard optimization problems. The main purpose of the event is to bring together experts from these areas to discuss new ideas and methods, challenges and opportunities in various application areas, general trends and specific developments. We are excited to be bringing the LION conference to Greece for the third time.

VA & Opt Webinar: Alexander Kruger

Title: Error bounds revisited

Speaker: Alexander Kruger (Federation University Australia)

Date and Time: March 10th, 2021, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: We propose a unifying general framework of quantitative primal and dual sufficient error bound conditions covering linear and nonlinear, local and global settings. We expose the roles of the assumptions involved in the error bound assertions, in particular, those on the underlying space: general metric, Banach or Asplund. Employing special collections of slope operators, we introduce a succinct form of sufficient error bound conditions, which allows one to combine in a single statement several different assertions: nonlocal and local primal space conditions in complete metric spaces, and subdifferential conditions in Banach and Asplund spaces. In the nonlinear setting, we cover both the conventional and the ‘alternative’ error bound conditions.

This is joint work with Nguyen Duy Cuong (Federation University). The talk is based on the paper: N. D. Cuong and A. Y. Kruger, Error bounds revisited, arXiv:2012.03941 (2020).
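
For orientation, a standard special case (not taken from the paper): given a function f on a metric space X and a point x̄ with f(x̄) = 0, a linear local error bound holds at x̄ if there exist c > 0 and \delta > 0 such that

d(x, S) \le c\,[f(x)]_+  whenever d(x, x̄) < \delta,  where S := \{x : f(x) \le 0\} and [f(x)]_+ := \max\{f(x), 0\}.

Primal sufficient conditions of the kind discussed in the talk typically keep a (nonlocal or local) slope of f bounded away from zero near x̄, while dual conditions impose analogous bounds on subgradients in Banach or Asplund spaces.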