
Stochastic Optimization Problems Homework

Title: Local Asymptotics for Stochastic Optimization: Optimality, Constraint Identification, and Dual Averaging

Authors: John Duchi, Feng Ruan

(Submitted on 16 Dec 2016 (v1), last revised 3 Aug 2017 (this version, v3))

Abstract: We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems, and giving efficient procedures based on Nesterov's dual averaging that (often) adaptively achieve optimal convergence guarantees. Our results provide function-specific lower bounds and convergence results that make precise a correspondence between statistical difficulty and the geometric notion of tilt-stability from optimization. We show how variants of dual averaging, a stochastic gradient-based procedure, guarantee finite-time identification of constraints in optimization problems, while stochastic gradient procedures provably fail. Additionally, we highlight a gap between optimization problems with linear and nonlinear constraints: standard stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints.
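To get a feel for the dual-averaging update the abstract refers to, here is a minimal Python sketch of Nesterov-style dual averaging with a Euclidean prox term, constrained to the nonnegative orthant. This is an illustrative toy, not the paper's code: the oracle, step schedule, and target `x_star` are assumptions chosen only to show constraint identification.

```python
import numpy as np

def dual_averaging(grad_oracle, x0, steps, alpha=1.0):
    """Nesterov-style dual averaging with Euclidean prox (1/2)||z||^2,
    constrained to the nonnegative orthant z >= 0."""
    g_sum = np.zeros_like(x0)        # running sum of stochastic gradients
    x = x0.copy()
    for k in range(1, steps + 1):
        g_sum += grad_oracle(x)      # one stochastic gradient sample at x
        beta = np.sqrt(k) / alpha    # standard O(sqrt(k)) prox weighting
        # argmin_{z >= 0} <g_sum, z> + (beta/2)||z||^2 has a closed form:
        x = np.maximum(-g_sum / beta, 0.0)
    return x

# Toy stochastic objective: f(x) = ||x - x_star||^2, observed through
# noisy gradients. The second coordinate of x_star is negative, so the
# constraint x >= 0 is active there and should be identified exactly.
rng = np.random.default_rng(0)
x_star = np.array([1.0, -0.5])
oracle = lambda x: 2.0 * (x - x_star) + rng.normal(scale=0.1, size=2)
print(dual_averaging(oracle, np.zeros(2), steps=20_000))
# roughly [1.0, 0.0], with the second coordinate exactly zero
```

Because the projection acts on the averaged gradient sum rather than on a single noisy step, the active coordinate is pinned exactly to zero after finitely many iterations; a plain projected stochastic-gradient update tends to keep re-entering the interior because of noise, which is the contrast the paper makes precise.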

Submission history

From: Feng Ruan
[v1] Fri, 16 Dec 2016 19:54:22 GMT (69kb,D)
[v2] Wed, 2 Aug 2017 09:08:42 GMT (80kb,D)
[v3] Thu, 3 Aug 2017 02:32:45 GMT (80kb,D)


Spring 2018 – IE 597: Stochastic Optimization

  • Time: MW 2:30 – 3:45

  • Office Hours: M 3:45 – 4:45

Course Description

This course is designed to give students the ability to model optimization problems in uncertain settings and to develop and analyze the convergence properties of the associated algorithms. Several weeks of the course will be devoted to stochastic optimization problems arising in machine learning. The course consists of four parts: 1) models for decision-making under uncertainty; 2) stochastic programming (theory, decomposition methods, and Monte-Carlo sampling methods); 3) an introduction to robust optimization; and 4) machine learning problems, with an emphasis on convergence and rate analysis for a broad class of smooth and nonsmooth learning problems.

The course will be offered in a lecture format, and homework will be used to reinforce and supplement the material in each section; solutions to the homework assignments will be provided. The course will also include a comprehensive final exam and a project. Apart from students in IME, the course should be of interest to students from math, engineering, computer science, statistics, economics, and the operations management program in the business school. Students are required to have some background in optimization and stochastic processes; some background slides are provided on the website. The following book will serve as a reference text, and a tentative course outline is provided below. The course grade will be based on homework (30%), a final examination (40%), and a course project (30%).
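As a small taste of the Monte-Carlo sampling methods in part 2, the sketch below applies sample average approximation (SAA) to a newsvendor problem. The demand model and cost parameters are hypothetical, chosen only for illustration.

```python
import numpy as np

def saa_newsvendor(demand_samples, cost=1.0, price=2.0):
    """Sample average approximation (SAA) of the newsvendor problem
        min_x  E[ cost * x - price * min(x, D) ],
    whose SAA solution is the (1 - cost/price) empirical demand quantile."""
    critical_ratio = 1.0 - cost / price
    return np.quantile(demand_samples, critical_ratio)

rng = np.random.default_rng(1)
demand = rng.exponential(scale=10.0, size=10_000)  # hypothetical demand model
print(saa_newsvendor(demand))  # approaches the true quantile as samples grow
```

As the sample size grows, the empirical quantile converges to the true critical-ratio quantile of the demand distribution, which is the standard SAA consistency guarantee covered in the stochastic programming part of the course.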

Course Syllabus

Lecture Notes

Homework Assignments