Aaron Sidford

I am an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms, with a focus on graph theory, convex optimization, and high-dimensional geometry. I received my PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner. Here is a slightly more formal third-person biography, and here is a recent-ish CV.

Contact: Huang Engineering Center, Stanford University. If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together; I often do not respond to emails about applications. We organize regular talks, and if you are interested and Stanford affiliated, feel free to reach out (from a Stanford email).

Much of my work concerns data structures that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system. Many optimization algorithms are iterative and solve a sequence of smaller subproblems, whose solutions can be maintained via such dynamic algorithms; see, for example, Janardhan Kulkarni, Yang P. Liu, Ashwin Sah, Mehtaab Sawhney, and Jakub Tarnawski, Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao, FOCS 2021. Many advances in this area have come from a continuous viewpoint.

Course notes

Here are some lecture notes that I have written over the years, including Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization, which I used to accompany the course Discrete Mathematics and Algorithms. Some I am still actively improving, and all of them I am happy to continue polishing. One recurring topic is the eigenvalues of the Laplacian and their relationship to the connectedness of a graph.
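To make that last relationship concrete, here is a minimal Python sketch (the graph instances and helper names are illustrative, not from the notes): the multiplicity of the eigenvalue 0 of the graph Laplacian equals the number of connected components, so the second-smallest eigenvalue (the algebraic connectivity) is positive exactly when the graph is connected.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for a symmetric 0/1 adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue; positive iff the graph is connected."""
    return np.linalg.eigvalsh(laplacian(adj))[1]  # eigvalsh returns ascending order

# Path on 4 vertices (connected): lambda_2 > 0.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Two disjoint edges (disconnected): lambda_2 == 0.
two_edges = np.array([[0, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

print(algebraic_connectivity(path))       # ~0.586 > 0: connected
print(algebraic_connectivity(two_edges))  # ~0.0: disconnected
```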
Selected recent papers

- The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin and Vidya Muthukumar, In Innovations in Theoretical Computer Science (ITCS 2023) (arXiv)
- ReSQueing Parallel and Private Stochastic Convex Optimization, With Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, and Kevin Tian
- Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, In Advances in Neural Information Processing Systems (NeurIPS 2022). "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter!"
- With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, In Advances in Neural Information Processing Systems (NeurIPS 2022)
- Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian
- Efficient Convex Optimization Requires Superlinear Memory, With Annie Marsden, Vatsal Sharan, and Gregory Valiant, In Conference on Learning Theory (COLT 2022), Best Paper Award
- Minimum Cost Flows, MDPs, and ℓ1-Regression in Nearly Linear Time for Dense Instances, With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang
- Stochastic Bias-Reduced Gradient Methods, With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin (NeurIPS 2021). "A low-bias, low-cost estimator of the subproblem solution suffices for acceleration!"
- Spectrum Approximation Beyond Fast Matrix Multiplication, With Cameron Musco, Praneeth Netrapalli, Shashanka Ubaru, and David P. Woodruff, In Innovations in Theoretical Computer Science (ITCS 2018)

A recurring theme in this work is obtaining accelerated rates from weak oracle access. For example, one line of work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
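As a concrete illustration of the "gradient computations only" theme, here is a minimal sketch of classical Nesterov acceleration on a smooth convex least-squares problem. This is textbook acceleration, not the nonconvex method from the paper; the step size 1/L and the problem instance are illustrative assumptions.

```python
import numpy as np

def accelerated_gradient(grad, x0, step, iters=200):
    """Nesterov's accelerated gradient method: only gradient evaluations,
    no Hessian. `step` should be 1/L for an L-smooth objective."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - step * grad(y)                      # gradient step at y
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2        # momentum schedule
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x

# Least-squares example: f(x) = 0.5 * ||Ax - b||^2, grad f(x) = A^T (Ax - b).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2  # smoothness constant: top eigenvalue of A^T A
x = accelerated_gradient(lambda z: A.T @ (A @ z - b), np.zeros(10), 1.0 / L)
print(np.linalg.norm(A.T @ (A @ x - b)))  # gradient norm should be small
```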
Further recent papers:

- Efficiently Solving MDPs with Stochastic Mirror Descent, With Yujia Jin, In International Conference on Machine Learning (ICML 2020)
- Variance Reduction for Matrix Games, With Yair Carmon, Yujia Jin, and John C. Duchi, In Neural Information Processing Systems (NeurIPS 2019, Spotlight); an earlier version appeared at the NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019

A representative result from my work on linear programming: given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in \(\tilde{O}(\sqrt{n}\,L)\) iterations, each consisting of solving \(\tilde{O}(1)\) linear systems and additional nearly linear time computation.
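Schematically, the total work decomposes as iterations times per-iteration cost. The first factor is the bound stated above; the decomposition and the symbols \(\mathcal{T}_{\mathrm{solve}}\) (cost of one linear-system solve) and \(\mathrm{nnz}(A)\) (input sparsity) are our illustrative notation, not the paper's.

\[
\text{total work} \;=\;
\underbrace{\tilde{O}(\sqrt{n}\,L)}_{\text{iterations}}
\;\cdot\;
\underbrace{\Big(\tilde{O}(1)\cdot \mathcal{T}_{\mathrm{solve}} \;+\; \tilde{O}(\mathrm{nnz}(A))\Big)}_{\text{work per iteration}}.
\]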
Additional papers and notes

- Bipartite Matching in Nearly-Linear Time on Moderately Dense Graphs, With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Foundations of Computer Science (FOCS 2020), invited to the special issue (arXiv)
- Lower Bounds for Finding Stationary Points I, With Yair Carmon, John C. Duchi, and Oliver Hinder
- Accelerated Methods for Non-Convex Optimization, SIAM Journal on Optimization, 2018 (arXiv)
- Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification, With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli, Journal of Machine Learning Research, 18(223):1-42, 2018
- On the Sample Complexity of Average-reward MDPs, With Yujia Jin, BayLearn 2021. "A nearly matching upper and lower bound for constant error here!"
- Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence, FOCS 2022
- Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, In International Colloquium on Automata, Languages and Programming (ICALP 2022)
- RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, With Yair Carmon and Kevin Tian, In International Conference on Machine Learning (ICML 2022)
- PhD thesis, Massachusetts Institute of Technology, 2016 (ACM Doctoral Dissertation Award, Honorable Mention)
- In Symposium on Discrete Algorithms (SODA 2018) (arXiv): Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes; Efficient \(\tilde{O}(n/\epsilon)\) Spectral Sketches for the Laplacian and its Pseudoinverse; and Stability of the Lanczos Method for Matrix Function Approximation, With Cameron Musco and Christopher Musco (see the sketch following this list)
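For context on the last entry, here is a minimal Python sketch of Lanczos-based approximation of \(f(A)b\) for symmetric \(A\): build an orthonormal Krylov basis \(Q\) and tridiagonal \(T = Q^\top A Q\), then return \(\|b\|\,Q\,f(T)\,e_1\). This is the textbook variant with full reorthogonalization, assuming no breakdown; the paper's contribution is a stability analysis of such schemes in finite precision, which this toy code does not model.

```python
import numpy as np

def lanczos_fA_b(A, b, f, k):
    """Approximate f(A) @ b for symmetric A using k Lanczos iterations."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)           # assumes beta[j] > 0 (no breakdown)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])      # f(T) @ e_1 via eigendecomposition
    return np.linalg.norm(b) * (Q @ fT_e1)

# Example: approximate exp(A) @ b for a random symmetric matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 20
b = rng.standard_normal(200)
approx = lanczos_fA_b(A, b, np.exp, k=30)
```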
Teaching

- Optimization and Algorithmic Paradigms (CS 261): Winter '23
- Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22
- Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18. This class introduces the theoretical foundations of discrete mathematics and algorithms.
- Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19; Spring '19, '18, '17
- Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17

Earlier publications

- A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization, In Symposium on Foundations of Computer Science (FOCS 2015), Machtey Award for Best Student Paper (arXiv)
- Efficient Inverse Maintenance and Faster Algorithms for Linear Programming, In Symposium on Foundations of Computer Science (FOCS 2015) (arXiv)
- Competing with the Empirical Risk Minimizer in a Single Pass, With Roy Frostig, Rong Ge, and Sham Kakade, In Conference on Learning Theory (COLT 2015) (arXiv)
- Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization, In International Conference on Machine Learning (ICML 2015) (arXiv)

In Lower Bounds for Finding Stationary Points II: First-Order Methods (With Yair Carmon, John C. Duchi, and Oliver Hinder), we prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in \(\epsilon\) better than \(\epsilon^{-8/5}\), which is within \(\epsilon^{-1/15}\log\frac{1}{\epsilon}\) of the best known rate for such methods.
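The quoted gap is just a ratio of rates: if the best known deterministic rate scales as \(\epsilon^{-5/3}\log\frac{1}{\epsilon}\), then (our arithmetic, consistent with the exponents quoted above)

\[
\frac{\epsilon^{-5/3}\log\frac{1}{\epsilon}}{\epsilon^{-8/5}}
= \epsilon^{-(5/3 - 8/5)}\log\frac{1}{\epsilon}
= \epsilon^{-1/15}\log\frac{1}{\epsilon},
\qquad\text{since } \tfrac{5}{3} - \tfrac{8}{5} = \tfrac{25}{15} - \tfrac{24}{15} = \tfrac{1}{15}.
\]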
Relatedly, on minimizing the maximum loss: "We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."

- Uniform Sampling for Matrix Approximation, With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng, In Innovations in Theoretical Computer Science (ITCS 2015) (arXiv)
- Path-Finding Methods for Linear Programming: Solving Linear Programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) Iterations and Faster Algorithms for Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2014), Best Paper Award and Machtey Award for Best Student Paper (arXiv)
- Single Pass Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco
- An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations, With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia, In Symposium on Discrete Algorithms (SODA 2014)
- Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems, In Symposium on Foundations of Computer Science (FOCS 2013) (arXiv)
- A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time, With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu, In Symposium on the Theory of Computing (STOC 2013) (arXiv); SIAM Journal on Computing (arXiv before merge)
- Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, With Jack Murtagh, Omer Reingold, and Salil Vadhan, In Symposium on Foundations of Computer Science (FOCS 2017); book chapter in Building Bridges II: Mathematics of László Lovász, 2020 (arXiv). Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient).
- Coordinate Methods for Matrix Games, With Yair Carmon, Yujia Jin, and Kevin Tian, In Symposium on Foundations of Computer Science (FOCS 2020)
- Improved Lower Bounds for Submodular Function Minimization, With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM); applying this technique, we prove lower bounds on the query complexity of any deterministic SFM algorithm.
- "Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions, With Yair Carmon, John C. Duchi, and Oliver Hinder, In International Conference on Machine Learning (ICML 2017) (arXiv)
- Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu, In Symposium on Theory of Computing (STOC 2017)
- Subquadratic Submodular Function Minimization, With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong, In Symposium on Theory of Computing (STOC 2017) (arXiv)
- Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu, In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv), DOI: 10.1109/FOCS.2016.69
- With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016) (arXiv)
- With Alina Ene, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016)
- Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm, With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli, In Conference on Learning Theory (COLT 2016) (arXiv)
- Principal Component Projection Without Principal Component Analysis, With Roy Frostig, Cameron Musco, and Christopher Musco, In International Conference on Machine Learning (ICML 2016) (arXiv)
- Faster Eigenvector Computation via Shift-and-Invert Preconditioning, With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli, In International Conference on Machine Learning (ICML 2016) (see the sketch following this list)
- Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis, In International Conference on Machine Learning (ICML 2016)
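To illustrate the shift-and-invert idea behind the last two entries, here is a minimal Python sketch: power iteration applied to \((\lambda I - A)^{-1}\) converges to the top eigenvector of \(A\), with the eigengap amplified when the shift \(\lambda\) sits slightly above \(\lambda_1(A)\). This toy uses a dense exact solve; the papers' point is to replace that solve with fast approximate linear-system solvers, which this sketch does not do, and the shift here is computed from the true spectrum purely for illustration.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, iters=50, seed=2):
    """Power iteration on (shift*I - A)^{-1}; its top eigenvector is the
    top eigenvector of A when shift > lambda_max(A)."""
    B = shift * np.eye(A.shape[0]) - A
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(B, x)   # stand-in for a fast approximate solver
        x /= np.linalg.norm(x)
    return x

# Example: symmetric PSD matrix; shift slightly above the largest eigenvalue.
rng = np.random.default_rng(3)
M = rng.standard_normal((100, 20))
A = M @ M.T / 20
shift = 1.01 * np.linalg.eigvalsh(A)[-1]   # in practice the shift is estimated
v = shift_invert_top_eigvec(A, shift)
print(v @ A @ v)  # Rayleigh quotient, approximately the largest eigenvalue
```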
- Constant Girth Approximation for Directed Graphs in Subquadratic Time, With Shiri Chechik, Yang P. Liu, and Omer Rotem, In Symposium on Theory of Computing (STOC 2020) (arXiv)
- Leverage Score Sampling for Faster Accelerated Regression and ERM, With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli, In International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv)
- Near-optimal Approximate Discrete and Continuous Submodular Function Minimization, In Symposium on Discrete Algorithms (SODA 2020) (arXiv)
- Fast and Space Efficient Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos, In Conference on Neural Information Processing Systems (NeurIPS 2019)
- In Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv): Complexity of Highly Parallel Non-Smooth Convex Optimization, With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li; Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG, With Yujia Jin; and A Direct \(\tilde{O}(1/\epsilon)\) Iteration Parallel Algorithm for Optimal Transport
- A General Framework for Efficient Symmetric Property Estimation, With Moses Charikar and Kirankumar Shiragur
- Parallel Reachability in Almost Linear Work and Square Root Depth, In Symposium on Foundations of Computer Science (FOCS 2019) (arXiv)
- With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong, In Symposium on Foundations of Computer Science (FOCS 2019). This improves upon the previous best known running times of \(O(nr^{1.5}\,T_{\mathrm{ind}})\) due to Cunningham in 1986 and \(\tilde{O}(n^{2}T_{\mathrm{ind}} + n^{3})\) due to Lee, Sidford, and Wong in 2015, where \(T_{\mathrm{ind}}\) denotes the time per independence-oracle query.
- Deterministic Approximation of Random Walks in Small Space, With Jack Murtagh, Omer Reingold, and Salil P. Vadhan, In International Workshop on Randomization and Computation (RANDOM 2019)
- A Rank-1 Sketch for Matrix Multiplicative Weights, With Yair Carmon, John C. Duchi, and Kevin Tian, In Conference on Learning Theory (COLT 2019) (arXiv)
- Near-optimal Method for Highly Smooth Convex Optimization
- Efficient Profile Maximum Likelihood for Universal Symmetric Property Estimation, In Symposium on Theory of Computing (STOC 2019) (arXiv)
- Memory-Sample Tradeoffs for Linear Regression with Small Error
- Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications, With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi, In Symposium on Discrete Algorithms (SODA 2019) (arXiv)
- Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression, In Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv)
- Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model, With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye
- Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2018)
- Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations, With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao, In Symposium on Foundations of Computer Science (FOCS 2018) (arXiv). We show how to solve directed Laplacian systems in nearly-linear time.
- Efficient Convex Optimization with Membership Oracles, In Conference on Learning Theory (COLT 2018) (arXiv)
- Accelerating Stochastic Gradient Descent for Least Squares Regression, With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli
- Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners, With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams