I am broadly interested in mathematics and theoretical computer science.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, Yujia Jin, Vidya Muthukumar, Aaron Sidford, Innovations in Theoretical Computer Science (ITCS 2023).
Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, Advances in Neural Information Processing Systems (NeurIPS 2022).
Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, Advances in Neural Information Processing Systems (NeurIPS 2022).
(Additional entries list venues only: FOCS, STOC, SODA, ITCS, ICALP, COLT, ICML, NeurIPS, AISTATS, ALT, and RANDOM, spanning 2016-2022.)
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods, International Colloquium on Automata, Languages, and Programming (ICALP), 2022.
Variance Reduction for Matrix Games, ICML Workshop on Reinforcement Learning Theory, 2021.
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM).
Management Science & Engineering. CV (last updated 01-2022): PDF. Contact.
The design of algorithms is traditionally a discrete endeavor. Here is a slightly more formal third-person biography, and here is a recent-ish CV. "Team-convex-optimization for solving discounted and average-reward MDPs!" If you see any typos or issues, feel free to email me.
With Yair Carmon, Kevin Tian and Aaron Sidford, Conference on Learning Theory (COLT), 2021. Towards Tight Bounds on the Sample Complexity of Average-reward MDPs.
One research focus is dynamic algorithms (i.e., algorithms that maintain a solution as the underlying input changes). My PhD dissertation, Algorithmic Approaches to Statistical Questions, 2012. Full CV is available here. If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together.
In Symposium on Theory of Computing (STOC 2020) (arXiv).
Constant Girth Approximation for Directed Graphs in Subquadratic Time, with Shiri Chechik, Yang P. Liu, and Omer Rotem.
Leverage Score Sampling for Faster Accelerated Regression and ERM, with Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli, in International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv).
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization, in Symposium on Discrete Algorithms (SODA 2020) (arXiv).
Fast and Space Efficient Spectral Sparsification in Dynamic Streams, with Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos, in Conference on Neural Information Processing Systems (NeurIPS 2019).
Complexity of Highly Parallel Non-Smooth Convex Optimization, with Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
A Direct Õ(1/ε) Iteration Parallel Algorithm for Optimal Transport, in Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv).
A General Framework for Efficient Symmetric Property Estimation, with Moses Charikar and Kirankumar Shiragur.
Parallel Reachability in Almost Linear Work and Square Root Depth, in Symposium on Foundations of Computer Science (FOCS 2019) (arXiv).
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
Deterministic Approximation of Random Walks in Small Space, with Jack Murtagh, Omer Reingold, and Salil P. Vadhan, in International Workshop on Randomization and Computation (RANDOM 2019).
A Rank-1 Sketch for Matrix Multiplicative Weights, with Yair Carmon, John C. Duchi, and Kevin Tian, in Conference on Learning Theory (COLT 2019) (arXiv).
Near-optimal method for highly smooth convex optimization.
Efficient profile maximum likelihood for universal symmetric property estimation, in Symposium on Theory of Computing (STOC 2019) (arXiv).
Memory-sample tradeoffs for linear regression with small error.
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications, with AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi, in Symposium on Discrete Algorithms (SODA 2019) (arXiv).
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression, in Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv).
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model, with Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow, in Symposium on Foundations of Computer Science (FOCS 2018).
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations, with Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao, in Symposium on Foundations of Computer Science (FOCS 2018) (arXiv).
Efficient Convex Optimization with Membership Oracles, in Conference on Learning Theory (COLT 2018) (arXiv).
Accelerating Stochastic Gradient Descent for Least Squares Regression, with Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners.
Eigenvalues of the Laplacian and their relationship to the connectedness of a graph (a short numerical sketch of this standard fact follows below). Stanford, CA 94305. I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS). Conference on Learning Theory (COLT), 2022, RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter!" "A general continuous optimization framework for better dynamic (decremental) matching algorithms."
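To ground the Laplacian fact referenced above: the multiplicity of the eigenvalue 0 of a graph Laplacian equals the number of connected components, so the second-smallest eigenvalue is positive exactly when the graph is connected. Below is a minimal numpy sketch of this standard fact; the example graph and the 1e-9 tolerance are illustrative choices, not taken from any of the papers above.

```python
import numpy as np

def laplacian(adj):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    return np.diag(adj.sum(axis=1)) - adj

# Two disjoint triangles -> two connected components.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1

eigvals = np.linalg.eigvalsh(laplacian(A))   # sorted ascending
print(np.sum(eigvals < 1e-9))                # 2 zero eigenvalues = 2 components
print(eigvals[1] > 1e-9)                     # False: lambda_2 > 0 iff the graph is connected
```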
Selected articles:
Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow.
Accelerated methods for nonconvex optimization.
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations.
A faster cutting plane method and its implications for combinatorial and convex optimization.
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems.
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time.
Uniform sampling for matrix approximation.
Near-optimal time and sample complexities for solving Markov decision processes with a generative model.
Single pass spectral sparsification in dynamic streams.
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification.
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization.
Accelerating stochastic gradient descent for least squares regression.
Efficient inverse maintenance and faster algorithms for linear programming.
Lower bounds for finding stationary points I.
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm.
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions.
Competing with the empirical risk minimizer in a single pass.
Variance reduced value iteration and faster algorithms for solving Markov decision processes.
Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation.
Journal of Machine Learning Research, 2017 (arXiv).
"About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data."
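As a rough illustration of the coordinate-method blurb above (and not the variance-reduced methods from the papers themselves): exact coordinate descent on least squares only needs to touch the nonzeros of one column of A per update, by maintaining the residual incrementally. A minimal sketch, assuming a scipy CSC matrix and synthetic data:

```python
import numpy as np
from scipy.sparse import random as sparse_random

def coord_descent_lstsq(A, b, epochs=50, seed=0):
    """Exact coordinate minimization for 0.5 * ||Ax - b||^2 with A in CSC format."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    r = -b.astype(float)                      # residual Ax - b at x = 0
    for _ in range(epochs):
        for j in rng.permutation(A.shape[1]):
            lo, hi = A.indptr[j], A.indptr[j + 1]
            rows, vals = A.indices[lo:hi], A.data[lo:hi]
            sq = vals @ vals
            if sq == 0:
                continue
            delta = -(vals @ r[rows]) / sq    # O(nnz(column j)) work per update
            x[j] += delta
            r[rows] += delta * vals           # update only the touched rows
    return x

A = sparse_random(200, 50, density=0.05, format="csc", random_state=1)
b = np.ones(200)
x = coord_descent_lstsq(A, b)
print(np.linalg.norm(A @ x - b))              # residual norm after a few passes
```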
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms, which I created. [pdf]
In particular, it achieves nearly linear time for DP-SCO in low-dimensional settings.
In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford.
Authors: Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford. Abstract: We show how to solve directed Laplacian systems in nearly-linear time.
Winter 2020: Teaching assistant for EE364a: Convex Optimization I, taught by John Duchi. Fall 2018 and Fall 2019: Teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis, taught by Greg Valiant.
This work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD); see the short sketch below.
Before joining Stanford in Fall 2016, I was an NSF post-doctoral fellow at Carnegie Mellon University; I received a Ph.D. in mathematics from the University of Michigan in 2014, and a B.A.
COLT, 2022. arXiv | code | conference pdf (alphabetical authorship). Annie Marsden, John Duchi and Gregory Valiant, Misspecification in Prediction Problems and Robustness via Improper Learning.
Our method improves upon the convergence rate of previous state-of-the-art linear programming algorithms.
[5] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian.
Allen Liu. Personal Website.
Fall '22: 8803 - Dynamic Algebraic Algorithms; a small tool to obtain upper bounds for such algebraic algorithms. With Yair Carmon, Aaron Sidford and Kevin Tian.
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford.
[pdf] [talk] [poster] Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)!
Yin Tat Lee and Aaron Sidford.
Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms.
The Journal of Physical Chemistry, 2015. pdf. Annie Marsden.
[pdf] arXiv | conference pdf (alphabetical authorship). Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales.
Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory.
Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(√n · L) iterations, each consisting of solving Õ(1) linear systems and additional nearly linear time computation.
International Conference on Machine Learning (ICML), 2021. Acceleration with a Ball Optimization Oracle.
Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for Convex Optimization.
Honorable Mention for the 2015 ACM Doctoral Dissertation Award went to Aaron Sidford of the Massachusetts Institute of Technology, and Siavash Mirarab of the University of Texas at Austin.
Yu Gao, Yang P. Liu, Richard Peng, Faster Divergence Maximization for Faster Maximum Flow, FOCS 2020.
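Here is the small sketch promised above for the SGD-averaging remark: tail-averaging the iterates of SGD on a synthetic least-squares problem typically gives a noticeably less noisy estimate than the last iterate. All problem parameters below (Gaussian features, noise level, step size) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 20, 5000, 0.01
w_star = rng.normal(size=d)

w, w_avg = np.zeros(d), np.zeros(d)
for t in range(1, steps + 1):
    a = rng.normal(size=d)                   # one random feature vector
    y = a @ w_star + 0.5 * rng.normal()      # noisy label
    w -= lr * (a @ w - y) * a                # stochastic gradient step for 0.5*(a.w - y)^2
    if t > steps // 2:                       # tail-average the second half of the iterates
        w_avg += (w - w_avg) / (t - steps // 2)

print("last iterate error:    ", np.linalg.norm(w - w_star))
print("averaged iterate error:", np.linalg.norm(w_avg - w_star))
```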
"Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains." (A simple baseline sketch appears at the end of this section.)
You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers).
[pdf] [poster] SHUFE, Oct. 2022; Algorithm Seminar, Google Research, Oct. 2022; Young Researcher Workshop, Cornell ORIE, Apr. 2022.
I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford. From 2016 to 2018, I also worked in the Operations Research group. arXiv preprint arXiv:2301.00457, 2023.
Many of my results use fast matrix multiplication.
Neural Information Processing Systems (NeurIPS, Oral), 2019. A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions. Publications by categories in reversed chronological order. ReSQueing Parallel and Private Stochastic Convex Optimization.
"A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."
I am an assistant professor in the department of Management Science and Engineering and the department of Computer Science at Stanford University. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
With Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, and David P. Woodruff.
CV; Theory Group; Data Science; CSE 535: Theory of Optimization and Continuous Algorithms.
[last name]@stanford.edu where [last name]=sidford.
Aaron Sidford, Gregory Valiant, Honglin Yuan. COLT, 2022. arXiv | pdf.
Improves stochastic convex optimization in the parallel and DP settings.
Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence, FOCS 2022.
Symposium on Foundations of Computer Science (FOCS), 2020. Efficiently Solving MDPs with Stochastic Mirror Descent. Anup B. Rao.
I received my PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology where I was advised by Professor Jonathan Kelner. sidford@stanford.edu.
[pdf] [slides] Simple MAP inference via low-rank relaxations. (ACM Doctoral Dissertation Award, Honorable Mention.) Sequential Matrix Completion, 2016. pdf.
We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points of smooth, non-convex high-dimensional functions using first-order methods. With Cameron Musco and Christopher Musco.
I am a fifth-year Ph.D. student in Computer Science at Stanford University co-advised by Gregory Valiant and John Duchi.
With Sepehr Assadi, Arun Jambulapati, Aaron Sidford and Kevin Tian. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems.
Research interests: Data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.
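For the matrix-games blurb above, here is a simple baseline sketch (plain simultaneous multiplicative weights, not the variance-reduced or coordinate methods from the papers) for the bilinear game \(\min_{x \in \Delta_m}\max_{y \in \Delta_n} x^\top A y\); the averaged iterates have duality gap roughly \(O(\sqrt{\log(mn)/T})\) times the largest entry of A. The test matrix and iteration count are illustrative assumptions.

```python
import numpy as np

def mw_matrix_game(A, T=2000):
    """Simultaneous multiplicative weights for min_x max_y x^T A y over simplices."""
    m, n = A.shape
    eta = np.sqrt(np.log(m * n) / T) / max(np.abs(A).max(), 1e-12)
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(T):
        gx, gy = A @ y, A.T @ x               # payoff gradients for each player
        x *= np.exp(-eta * gx); x /= x.sum()  # min player: exponentiated descent
        y *= np.exp(+eta * gy); y /= y.sum()  # max player: exponentiated ascent
        x_avg += x / T; y_avg += y / T
    return x_avg, y_avg

A = np.random.default_rng(0).uniform(-1, 1, size=(30, 40))
x, y = mw_matrix_game(A)
gap = (A.T @ x).max() - (A @ y).min()         # duality gap of the averaged strategies
print("duality gap:", gap)
```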
Aaron Sidford, Assistant Professor of Management Science and Engineering and of Computer Science. Contact information: Administrative Contact: Jackie Nguyen, Administrative Associate.
I am broadly interested in optimization problems, sometimes at the intersection with machine learning.
Given an independence oracle, we provide an exact \(O(nr \log r \cdot T_{\mathrm{ind}})\) time algorithm.
"We characterize when solving the max, \(\min_{x}\max_{i\in[n]}f_i(x)\), is (not) harder than solving the average, \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."
Yujia Jin.
My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.
I have the great privilege and good fortune of advising the following PhD students. I have also had the great privilege and good fortune of advising the following PhD students who have now graduated: Kirankumar Shiragur (co-advised with Moses Charikar), PhD 2022; AmirMahdi Ahmadinejad (co-advised with Amin Saberi), PhD 2020; Yair Carmon (co-advised with John Duchi), PhD 2020.
Yin Tat Lee and Aaron Sidford; An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations. 2021. By Aaron Sidford.
Neural Information Processing Systems (NeurIPS, Spotlight), 2019. Variance Reduction for Matrix Games.
Algorithms, Optimization, and Numerical Analysis.
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
Aleksander Mądry; Generalized preconditioning and network flow problems.
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.
There will be a talk every day from 16:00-18:00 CEST from July 26 to August 13. Towards this goal, some fundamental questions need to be solved, such as: how can machines learn models of their environments that are useful for performing tasks?
2022-current: Assistant Professor, Georgia Institute of Technology (Georgia Tech). 2022: Visiting researcher, Max Planck Institute for Informatics.
I enjoy understanding the theoretical ground of many algorithms that are ...
"How many \(\epsilon\)-length segments do you need to look at for finding an \(\epsilon\)-optimal minimizer of a convex function on a line?"
We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.
With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023).
Learning and Games Program, Simons Institute, Sept. 2021; Young Researcher Workshop, Cornell ORIE, Sept. 2021; ACO Student Seminar, Georgia Tech, Dec. 2019; NeurIPS Spotlight presentation.
I was fortunate to work with Prof. Zhongzhi Zhang. In each setting we provide faster exact and approximate algorithms.
"I am excited to push the theory of optimization and algorithm design to new heights!" Assistant Professor Aaron Sidford speaks at ICME's Xpo event.
With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli. With Yair Carmon, Aaron Sidford and Kevin Tian.
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
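For contrast with the Hessian-free accelerated method described above (which is not reproduced here), the following is a minimal sketch of the classical convex accelerated gradient method, which likewise uses only gradient evaluations; the quadratic test problem and step count are illustrative assumptions.

```python
import numpy as np

def nesterov_agd(grad, x0, L, steps=300):
    """Nesterov's accelerated gradient descent for an L-smooth convex function."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                          # gradient step at the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)    # momentum / extrapolation
        x, t = x_next, t_next
    return x

# Ill-conditioned quadratic f(x) = 0.5 * sum_i d_i * x_i^2, minimized at 0.
d = np.linspace(1.0, 100.0, 50)
x = nesterov_agd(lambda z: d * z, x0=np.ones(50), L=d.max())
print(np.linalg.norm(x))      # distance to the minimizer after 300 accelerated steps
```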
Instructor: Aaron Sidford. Winter 2018. Time: Tuesdays and Thursdays, 10:30 AM - 11:50 AM. Room: Education Building, Room 128. Here is the course syllabus.
I am currently a third-year graduate student in EECS at MIT working under the wonderful supervision of Ankur Moitra.
We organize regular talks; if you are interested and Stanford-affiliated, feel free to reach out (from a Stanford email).
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu. DOI: 10.1109/FOCS.2016.69.
With Yang P. Liu and Aaron Sidford.
My long term goal is to bring robots into human-centered domains such as homes and hospitals.
He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.
To appear in Neural Information Processing Systems (NeurIPS), 2022. Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching.
In Symposium on Discrete Algorithms (SODA 2018) (arXiv): Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes; Efficient Õ(n/ε) Spectral Sketches for the Laplacian and its Pseudoinverse; Stability of the Lanczos Method for Matrix Function Approximation.
Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms.
... in Chemistry at the University of Chicago.
Outdated CV [as of Dec '19]. Students: I am very lucky to advise the following Ph.D. students: Siddartha Devic (co-advised with Aleksandra Korolova).
Neural Information Processing Systems (NeurIPS), 2014.
This is the academic homepage of Yang Liu (I publish under Yang P. Liu).
A nearly matching upper and lower bound for constant error here!
Prior to that, I received an MPhil in Scientific Computing at the University of Cambridge on a Churchill Scholarship, where I was advised by Sergio Bacallado.
Aaron Sidford is an Assistant Professor of Management Science and Engineering at Stanford University, where he also has a courtesy appointment in Computer Science and an affiliation with the Institute for Computational and Mathematical Engineering (ICME).
Done under the mentorship of M. Malliaris.
resume/cv; publications.
COLT, 2022. Aaron Sidford.
International Conference on Machine Learning (ICML), 2020. Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.