
STOC: Annual ACM Symposium on the Theory of Computing


Presentation Transcript


  1. STOC: Annual ACM Symposium on the Theory of Computing Ivan Jovetić

  2. Conference summary • 49th edition • June 19th to June 23rd, 2017 in Montreal, Canada • 103 papers accepted and presented, as well as 8 invited paper talks • Typical topics of interest for STOC papers: optimization problems, approximation algorithms, machine learning, etc.

  3. Finding Approximate Local Minima Faster than Gradient Descent • FastCubic algorithm • FastCubic finds approximate local minima, and does so faster than first-order methods, even though those methods are only guaranteed to find critical points • Applies to non-convex objectives arising in machine learning, e.g. training a neural network
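The claim is easier to parse once "approximate local minimum" is pinned down: the gradient is nearly zero and the Hessian has no direction of strongly negative curvature, which rules out the saddle points that a mere critical point could be. The sketch below only illustrates that condition; the tolerance eps and the dense eigenvalue computation are assumptions made for readability, and this is not the FastCubic algorithm itself, which works with Hessian-vector products rather than an explicit Hessian.

```python
import numpy as np

def is_approx_local_min(grad, hess, eps=1e-3):
    """Illustrative check, not the paper's method: a point is treated as an
    approximate local minimum if its gradient is small and its Hessian has
    no eigenvalue much below zero (no strong negative curvature)."""
    grad_small = np.linalg.norm(grad) <= eps
    min_eig = np.linalg.eigvalsh(hess).min()   # smallest Hessian eigenvalue
    return grad_small and min_eig >= -np.sqrt(eps)
```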

  4. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods • In large-scale machine learning the number of training examples is very large, so optimization relies on stochastic gradient iterations • Stochastic gradient methods only need one example per iteration to form an estimator of the full gradient • Nesterov’s momentum trick does not necessarily accelerate methods in this stochastic setting • Katyusha is a direct, primal-only stochastic gradient method that uses “negative momentum” to fix the issue
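The “negative momentum” idea can be sketched as a variance-reduced stochastic gradient loop in which every iterate is coupled back toward a snapshot point x_tilde, retracting part of the extrapolation that plain Nesterov momentum would perform. The step-size choices below follow the simple settings stated in the paper's overview, but the function signatures, the plain (unweighted) snapshot average, and the epoch count are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def katyusha_like(grad_i, full_grad, x0, n, L, sigma, epochs=10, m=None):
    """Katyusha-style sketch: variance-reduced gradients plus a coupling
    step that keeps the iterates close to the snapshot x_tilde."""
    m = m or 2 * n                                   # inner iterations per epoch
    tau2 = 0.5                                       # weight of the negative-momentum term
    tau1 = min(np.sqrt(m * sigma / (3 * L)), 0.5)
    alpha = 1.0 / (3 * tau1 * L)
    x_tilde, y, z = x0.copy(), x0.copy(), x0.copy()
    for _ in range(epochs):
        mu = full_grad(x_tilde)                      # full gradient at the snapshot
        ys = []
        for _ in range(m):
            # coupling: the tau2 * x_tilde term pulls the iterate back toward the snapshot
            x = tau1 * z + tau2 * x_tilde + (1 - tau1 - tau2) * y
            i = np.random.randint(n)
            g = mu + grad_i(i, x) - grad_i(i, x_tilde)   # variance-reduced gradient estimator
            y = x - g / (3 * L)
            z = z - alpha * g
            ys.append(y)
        x_tilde = np.mean(ys, axis=0)                # paper uses a weighted average; plain mean here
    return x_tilde
```

The tau2 * x_tilde term is what the slide calls negative momentum: it keeps the inner iterates from drifting far away from the snapshot, which is what allows the acceleration argument to go through.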

  5. Trace Reconstruction with exp(O(n^{1/3})) Samples • In the trace reconstruction problem, the goal is to reconstruct an unknown bit string x from multiple noisy observations of x • The paper focuses on the case where the noise comes from passing x through the deletion channel • The deletion channel deletes each bit independently with probability q, resulting in a contracted trace x̃ • How many independent copies of x̃ are needed to reconstruct the original x with high probability?
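A small simulation makes the setup concrete: each observation, or trace, is what survives after every bit of x is deleted independently with probability q. The helper names and parameters below are illustrative only; the paper is about how many such traces suffice for reconstruction, not about this simulation.

```python
import random

def deletion_channel(x, q):
    """Pass bit string x through a deletion channel: each bit is deleted
    independently with probability q; surviving bits keep their order."""
    return [b for b in x if random.random() > q]

def sample_traces(x, q, num_traces):
    """Draw independent noisy copies (traces) of x; these are the
    observations from which x must be reconstructed."""
    return [deletion_channel(x, q) for _ in range(num_traces)]

# Example: a random 20-bit string observed through a channel with q = 0.3
x = [random.randint(0, 1) for _ in range(20)]
traces = sample_traces(x, q=0.3, num_traces=5)
```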

  6. References • Zeyuan Allen-Zhu. 2017. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, Montreal, Canada, June 2017 (STOC’17). DOI: 10.1145/3055399.305544 • Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. 2017. Finding Approximate Local Minima Faster than Gradient Descent. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, Montreal, Canada, June 2017 (STOC’17). DOI: 10.1145/3055399.305546 • Fedor Nazarov and Yuval Peres. 2017. Trace Reconstruction with exp(O(n^{1/3})) Samples. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, Montreal, Canada, June 2017 (STOC’17). DOI: 10.1145/3055399.305549
