## Posts

### Cheat Sheet: Hölder Error Bounds for Conditional Gradients [research]

*TL;DR: Cheat Sheet for convergence of Frank-Wolfe algorithms (aka Conditional Gradients) under the Hölder Error Bound (HEB) condition, or how to interpolate between convex and strongly convex convergence rates. Continuation of the Frank-Wolfe series. Long and technical.*

### Toolchain Tuesday No. 2 [random]

*TL;DR: Part of a series of posts about tools, services, and packages that I use in day-to-day operations to boost efficiency and free up time for the things that really matter. Use at your own risk - happy to answer questions. For the full, continuously expanding list so far see here.*

### Cheat Sheet: Linear convergence for Conditional Gradients [research]

*TL;DR: Cheat Sheet for linearly convergent Frank-Wolfe algorithms (aka Conditional Gradients). What does linear convergence mean for Frank-Wolfe, and how can it be achieved? Continuation of the Frank-Wolfe series. Long and technical.*

### Training Neural Networks with LPs [research]

*TL;DR: This is an informal summary of our recent paper Principled Deep Neural Network Training through Linear Programming with Dan Bienstock and Gonzalo Muñoz, where we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures by means of constructing (relatively) small LPs.*

### Toolchain Tuesday No. 1 [random]

*TL;DR: Part of a series of posts about tools, services, and packages that I use in day-to-day operations to boost efficiency and free up time for the things that really matter. Use at your own risk - happy to answer questions. For the full, continuously expanding list so far see here.*

### Cheat Sheet: Frank-Wolfe and Conditional Gradients [research]

*TL;DR: Cheat Sheet for Frank-Wolfe and Conditional Gradients. Basic mechanics and results; this is a rather long post and the start of a series of posts on this topic.*

### Tractability limits of small treewidth [research]

*TL;DR: This is an informal summary of our recent paper Limits of Treewidth-based tractability in Optimization with Yuri Faenza and Gonzalo Muñoz, where we provide almost matching lower bounds for extended formulations that exploit small treewidth to obtain smaller formulations. We also show that treewidth, in some sense, is the only graph-theoretic notion that appropriately captures sparsity and tractability in a broader algorithmic setting.*

### On the relevance of AI and ML research in academia [random]

*TL;DR: Is AI and ML research in academia relevant and necessary? Yes.*

### Collaborating online, in real-time, with math-support and computations [random]

*TL;DR: Using atom + teletype + markdown as a real-time math collaboration environment.*
