<h1 id="treewidth-abstract">Tractability limits of small treewidth</h1>

<p><em>September 22, 2018</em></p>

<p><em>TL;DR: This is an informal summary of our recent paper <a href="https://arxiv.org/abs/1807.02551">Limits of Treewidth-based tractability in Optimization</a> with <a href="https://ieor.columbia.edu/faculty/yuri-faenza">Yuri Faenza</a> and <a href="http://cerc-datascience.polymtl.ca/person/gonzalo-munoz/">Gonzalo Muñoz</a>, where we provide almost matching lower bounds for extended formulations that exploit small treewidth to obtain smaller formulations. We also show that treewidth is, in some sense, the only graph-theoretic notion that appropriately captures sparsity and tractability in a broader algorithmic setting.</em></p>
<h2 id="what-is-the-paper-about-and-why-you-might-care">What is the paper about and why you might care</h2>
<p>It is well known that many problems on graphs (e.g., optimization problems, but also problems in the context of graphical models) that are hard to solve in full generality admit fast algorithms if the underlying graph $G$ exhibits <em>small treewidth</em>. Without going into detail here, <em>treewidth</em> basically measures how close a graph is to a tree, and it has been used extensively as a concept to model and capture sparsity in problems. Typically, we can then obtain algorithms whose running time is super-polynomial in the treewidth but polynomial in the problem dimension, where the notion of <em>dimension</em> depends on the context. One such example is combinatorial problems on graphs, where <em>Dynamic Programming</em> can be used to obtain fast algorithms (with a non-polynomial dependence on the treewidth); we refer the interested reader to the actual paper for an extensive list of references. As such, one might think of <em>treewidth</em> as a complexity parameter leading to some form of parametrized complexity.</p>
<p>It is also known that such problems often admit linear programming or semidefinite programming formulations parametrized by treewidth (e.g., via Sherali-Adams and Lasserre hierarchy approaches). More recently, a very comprehensive result in [BM] by Bienstock and Muñoz extended this to the general case of mixed-integer polynomial optimization, basically showing the following:</p>
<p class="mathcol"><strong>Theorem (informal) [Bienstock and Muñoz].</strong> Consider a polynomial optimization problem of the form
<script type="math/tex">\min \{ c^\intercal x \mid f_i(x) \geq 0\quad \forall i \in [m], x \in \{0,1\}^p \times [0,1]^{n-p}\},</script>
where the $f_i(x)$ are polynomials of degree at most $\rho$. If the underlying constraint intersection graph $\Gamma$ (the graph that has a vertex for each variable of the problem and an edge between two vertices whenever the corresponding variables appear in a common constraint) has treewidth at most $\omega$, then there is a linear program of size $O((2\rho/\varepsilon)^{\omega + 1}n \log 1/\varepsilon)$ that computes an $\varepsilon$-approximate solution to the polynomial optimization problem.</p>
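<p>To get a feel for how the theorem’s size bound scales, here is a tiny back-of-the-envelope sketch in Python. The helper name <code>lp_size_bound</code> is mine, and all constants hidden in the $O(\cdot)$ are suppressed, so the numbers only illustrate growth behavior, not actual LP sizes:</p>

```python
import math

def lp_size_bound(n, treewidth, degree, eps):
    """Evaluate (2*rho/eps)^(omega+1) * n * log(1/eps), i.e., the size bound
    from the Bienstock-Munoz theorem with all hidden constants suppressed."""
    rho, omega = degree, treewidth
    return (2 * rho / eps) ** (omega + 1) * n * math.log(1 / eps)

# Polynomial in the number of variables n, but exponential in the treewidth:
small = lp_size_bound(n=1000, treewidth=3, degree=2, eps=0.1)
large = lp_size_bound(n=1000, treewidth=30, degree=2, eps=0.1)
```

<p>The point of the exercise: for fixed $\varepsilon$ and $\rho$, growing $n$ is cheap, while growing $\omega$ is what blows the formulation up.</p>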
<p>This basically matches the type of bounds that had been obtained before for other types of problems. With this, a couple of natural questions arise:</p>
<ol>
<li>Are these bounds of the form of $O(n 2^\omega)$ the best one can hope for, for linear or semidefinite programming formulations?</li>
<li>Are there other graph-theoretic concepts of sparsity apart from treewidth that can be used similarly to obtain parametrized complexity measures?</li>
</ol>
<h2 id="our-results">Our results</h2>
<p>We answer both of the questions above. It is important to point out, though, that the results for linear programming or semidefinite programming are somewhat different from the algorithmic ones: the algorithmic statements hold for general algorithms but are conditional on the usual $\operatorname{P} \neq \operatorname{NP}$ assumption (in fact, we use the complexity-theoretic assumption $\operatorname{NP} \not\subseteq \operatorname{BPP}$), whereas the linear programming and semidefinite programming statements hold unconditionally but only apply to those two optimization paradigms.</p>
<p>First, we show that there is no other graph-theoretic structure that yields tractability in the same way as treewidth does (for optimization problems). In a nutshell:</p>
<p class="mathcol">Unbounded treewidth can yield intractability.</p>
<p>This result relies on the commonly believed complexity-theoretic assumption $\operatorname{NP} \not\subseteq \operatorname{BPP}$ and the grid-minor hypothesis that was recently shown to be true in a breakthrough result in [CC] by Chekuri and Chuzhoy. Our proof works via a reduction and is the analog of a similar result known in graphical models; see Chandrasekaran et al. [CSH].</p>
<p>Second, we show that the upper bounds parametrized by treewidth obtained for linear programming formulations as well as semidefinite programming formulations are essentially optimal (up to minor losses):</p>
<p class="mathcol">Linear programming and semidefinite programming formulations of size of the form $O(n 2^\omega)$ are basically the best one can hope for.</p>
<p><em>Finally, I would also like to mention that independent of our work Aboulker et al. showed in [A+] a similar result for the linear extension complexity case based on analyzing faces of the correlation polytope.</em></p>
<h3 id="references">References</h3>
<p>[FMP] Faenza, Y., Muñoz, G., & Pokutta, S. (2018). Limits of Treewidth-based tractability in Optimization. arXiv preprint arXiv:1807.02551. <a href="https://arxiv.org/abs/1807.02551">arxiv</a></p>
<p>[BM] Bienstock, D., & Muñoz, G. (2018). LP Formulations for Polynomial Optimization Problems. SIAM Journal on Optimization, 28(2), 1121-1150. <a href="https://epubs.siam.org/doi/10.1137/15M1054079">journal</a> <a href="https://arxiv.org/abs/1501.00288">arxiv</a></p>
<p>[CC] Chekuri, C., & Chuzhoy, J. (2016). Polynomial bounds for the grid-minor theorem. Journal of the ACM (JACM), 63(5), 40. <a href="https://dl.acm.org/citation.cfm?id=2820609">journal</a> <a href="https://arxiv.org/abs/1305.6577">arxiv</a></p>
<p>[CSH] Chandrasekaran, V., Srebro, N., & Harsha, P. (2012). Complexity of inference in graphical models. arXiv preprint arXiv:1206.3240. <a href="https://arxiv.org/abs/1206.3240">arxiv</a></p>
<p>[A+] Aboulker, P., Fiorini, S., Huynh, T., Macchia, M., & Seif, J. (2018). Extension Complexity of the Correlation Polytope. arXiv preprint arXiv:1806.00541. <a href="https://arxiv.org/abs/1806.00541">arxiv</a></p>

<h1 id="ai-academia">On the relevance of AI and ML research in academia</h1>

<p><em>September 15, 2018</em></p>

<p><em>TL;DR: Is AI and ML research in academia relevant and necessary? Yes.</em></p>
<p>Over the last few months (and at our very recent faculty retreat) I had various discussions about the role of Artificial Intelligence and Machine Learning research (short: ML research) in academia and its relevance in light of various large companies, such as Google, Facebook, Microsoft, (Google’s) Deepmind, OpenAI, and Amazon, pursuing their own research efforts in that space, at an unprecedented scale and with a resource backing that no academic institution will ever be able to replicate. A naïve first assessment might lead to the conclusion: we are done; let the industry guys take it from here. However, a more realistic assessment points to a synergetic relationship between industry and academia, with the two located at very different stages of the <em>research-to-product</em> pipeline. Clearly, this post is (highly) biased; I am in academia after all (though I have worked in industry at various stages).</p>
<h2 id="industry-ml-research-is-valuable-and-important">Industry ML research is valuable and important</h2>
<p>ML research conducted in industry has had a huge impact over the last few years, with various high-profile examples including <a href="https://deepmind.com/research/alphago/">AlphaGo’s success</a> in playing Go as well as, more generally, autonomous vehicles—although the latter recently came under heightened scrutiny.</p>
<h3 id="transition-to-scale">Transition-to-scale</h3>
<p>Often these successes are not necessarily about <em>fundamental</em> advancements in the underlying methodology but impressive demonstrations of <em>transition-to-scale</em>. In fact, several of these recent high-profile successes are made possible by an <a href="https://blog.openai.com/ai-and-compute/">insane amount of compute power for training</a>, while the underlying methodology (e.g., policy gradients) has not fundamentally improved. This is good and bad at the same time. On the one hand, it demonstrates the capabilities of the methodology that we have <em>in the limit</em>. That is good, as it helps us understand whether there is an inherent shortcoming within the methodology or whether, e.g., it is just not scaling. On the other hand, it is bad, as it might negatively impact the perceived need for improving the underlying methodology, since we can somehow make it work.</p>
<h3 id="access-to-data-and-infrastructure">Access to data and infrastructure</h3>
<p>Another big advantage of ML research in industry is that industry often has access to data and infrastructure that is not available in an academic setting. This allows industry to build ML systems that cannot be realized within an academic context, e.g., large-scale machine translation or systems such as Google’s assistant. Moreover, these systems can be integrated into other large-scale systems, offering value to the user and society at large—not necessarily for free though. The impact of these systems on day-to-day life can be quite impressive.</p>
<h2 id="academic-ml-research-is-essential-to-society">Academic ML research is essential to society</h2>
<p>I believe that academic ML research does/should/can/will play a very different role and can serve societal needs that are beyond the scope and interest of industry-driven ML research, as they do not bear an immediate or mid-term profit opportunity. I would like to stress a few things first though:</p>
<ul>
<li>This applies to a large extent beyond the specifics of ML research; however, the current proximity of large global industry players to academic entities in ML is (arguably) unprecedented.</li>
<li>The following topics are not exclusive to academic research, although, in my experience, they have been much more strongly represented in academia. Moreover, these topics are <em>on top</em> of the <em>foundational research agendas</em> in ML and AI that are found throughout top academic institutions and that I deem essential for the academic pursuit as a whole.</li>
</ul>
<h3 id="conducting-curiosity-driven-high-risk-research">Conducting curiosity-driven high-risk research</h3>
<p>In general, academic research is situated very differently. Not having to serve a company’s agenda and ultimately a profit motive allows for exploration of fundamentally new methodologies and ideas that are higher risk but might ultimately lead to revolutionary approaches. After all, the basic ideas of the approaches that we are riding on right now date back to about the 1940s and 1950s, and back then these ideas were considered crazy, unrealistic, or simply crackpot. It is precisely this curiosity-driven research that academia can provide and that is essential to society. Andrew Odlyzko provided an interesting and nuanced perspective on this in 1995, when he was at Bell Labs, in <a href="http://www.dtc.umn.edu/~odlyzko/doc/decline.txt">“The decline of unfettered research”</a>:</p>
<blockquote>
<p>We are going through a period of technological change that is unprecedented in extent and speed. The success of corporations and even nations depends more than ever on rapid adoption of new technologies and operating methods. It is widely acknowledged that science made this transformation possible. At the same time, scientific research is under stress, with pressures to change, to turn away from investigation of fundamental scientific problems, and to focus on short-term projects. The aim of this essay is to discuss the reasons for this paradox, and especially for the decline of unfettered research.</p>
</blockquote>
<h3 id="open-transparent-and-falsifiable">Open, transparent, and falsifiable</h3>
<p>In contrast to industry research, which is often proprietary and only available in watered-down versions (lacking details, data, or both), academic research is typically made available to the public with enough detail to <em>falsify</em> a proposed approach. Staying true to Popper, this tiny detail is extremely important: it allows for scientific discourse in which a hypothesis can be tested and rejected, and as such it ultimately advances science. It is also highly relevant in the context of the current “alchemy discussion” in ML research (see here for <a href="https://www.youtube.com/watch?v=ORHFOnaEzPc">Ali Rahimi’s talk at NIPS</a>, <a href="https://www2.isye.gatech.edu/~tzhao80/Yann_Response.pdf">Yann LeCun’s response</a>, and some background <a href="https://syncedreview.com/2017/12/12/lecun-vs-rahimi-has-machine-learning-become-alchemy/">here</a>, <a href="http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy">here</a>, and <a href="http://www.argmin.net/2017/12/11/alchemy-addendum/">an Addendum on Ben’s blog</a>). I am with Ali, Ben, et al. on this one, especially if we really plan on deploying ML-based systems in the physical world and putting them at the center of life-and-death decisions, as, e.g., in autonomous vehicles… but that discussion is for some other time.</p>
<h3 id="tackling-societal-challenges">Tackling societal challenges</h3>
<p>Academic (ML) research allows for tackling societal challenges that I believe deserve our attention even if they do not bear a short or mid-term profit opportunity. These include:</p>
<ul>
<li><strong>Infrastructure management.</strong> E.g., improving power systems, transportation systems, etc., especially given that many of those systems are beyond end-of-life or highly strained. One example that comes to mind is <a href="https://robohub.org/talking-machines-machine-learning-and-the-flint-water-crisis-with-jake-abernethy/">Jake Abernethy’s ML/data approach</a> in the context of the Flint water crisis (see also <a href="http://www.mywater-flint.com/">here</a> and <a href="https://www.ic.gatech.edu/news/610023/using-data-science-fix-flint-water-crisis">here</a>)—this is also a great example of synergies between industry and academia, as Google <a href="https://news.umich.edu/google-funded-flint-water-app-helps-residents-find-lead-risk-resources/">funded the research with $150k</a>.</li>
<li><strong>Healthcare.</strong> Beyond longevity: e.g., support systems for elderly healthcare and systems for improving health-related outcomes in resource-limited settings. These can, for example, include systems for the early detection of cognitive decline, as done at Riken AIP’s <a href="http://www.riken.jp/en/research/labs/aip/goalorient_tech/cogn_behav_assist_tech/">Cognitive Behavioral Assistive Technology Team</a>.</li>
<li><strong>Human impact on the earth.</strong> This includes understanding the human impact on global climate change, as well as mitigating the effects of severe weather events (through AI-based early warning systems) and potentially reversing them through a holistic understanding of the causal chains.</li>
<li><strong>Broad societal challenges.</strong> Including mitigating the societal impact of unequal wealth distribution and the workforce impact of ever faster technology cycles.</li>
<li><strong>Education.</strong> We have reached a point where technology cycles are so short that life-long learning is a necessity; ML approaches in teaching might significantly improve and speed up learning outcomes. This goes hand in hand with the sustainability question of online education and the resulting challenges from such decentralized approaches.</li>
<li><strong>Protecting society against manipulation through data and ML.</strong> This includes things such as detecting <a href="http://fortune.com/2018/09/11/deep-fakes-obama-video/">deep fakes (now available as an app)</a> (<a href="https://www.iflscience.com/technology/deep-fake-videos-created-by-ai-just-got-even-more-terrifying/">here is the SIGGRAPH video—check it out!</a>), detecting manipulated news, detecting broader exposure to information bias, and many more.</li>
</ul>
<h3 id="working-with-smaller-noisy-data-sets-and-unbounded-losses">Working with smaller, noisy data sets, and unbounded losses</h3>
<p>I believe another important challenge in learning that has received relatively little consideration in industry ML research is working with small, noisy, and potentially unlabelled data sets. Working with such data, which is often at the core of real-world applications, requires new approaches that interpolate dynamically between model-based approaches (regularly incorporating deep domain knowledge) and model-free approaches, where the system dynamics are learned directly from data. For example:</p>
<ul>
<li><strong>Medical applications.</strong> Often it is hard (to impossible) to obtain the necessary amount of data for data-intensive learning approaches (such as, e.g., deep learning). Typically, obtaining or ‘generating’ such data follows very complex and time-intensive processes requiring IRB approvals, and even if all formal requirements are met, often, say, a condition one would like to obtain data about is so rare that the overall data availability and throughput is very limited.</li>
<li><strong>Physical systems.</strong> Here the main challenge is that physical systems are bound by the limitations of physics, and as such the generation of data is often slow and expensive. To make things a bit more tangible, say you would like to build a reinforcement learning-based system for inventory management in a highly dynamic environment. For the necessary data collection, you either have to wait a long time, as you actually have to observe the system to obtain the data (a <em>reality-in-the-loop</em> approach), quite apart from the fact that <em>testing</em> in such systems is nearly impossible, or you have to run a simulation, but then you are likely to run into model-mismatch issues as the simulation model does not quite match reality.</li>
<li><strong>Unbounded losses.</strong> A standard approach for many learning problems is based on (regularized) empirical risk minimization (ERM), where we solve problems of the form $\min_{\theta} \frac{1}{n} \sum_{i \in [n]} \ell(f(x_i,\theta),y_i) + R(\theta)$, and $\theta$ is then the parametrization of the learned model. ‘Getting it right on average’ (or some other form of probabilistic statement or risk measure), however, is often not good enough for real-world applications. A great example is (again) autonomous driving: we do not want to learn that crashing into a wall is bad by actually crashing into the wall; this is a typical scenario where losses are essentially unbounded but their contribution to the ERM objective would be limited. Such applications require either a very different learning approach (here is a nice example from the <a href="http://www.mpc.berkeley.edu/research/adaptive-and-learning-predictive-control">MPC Lab @ Berkeley</a> for safe learning; check out the video!) or, if an ERM formulation is desired or required, explicit consideration of the maximal loss (see this work of <a href="https://arxiv.org/abs/1602.01690">Shalev-Shwartz and Wexler</a>, which can be nicely incorporated into many ERM approaches).</li>
</ul>
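<p>The tension between the empirical average and the worst case in the last bullet can be made concrete with a toy computation (all numbers below are invented purely for illustration):</p>

```python
# Toy example: per-example losses of a model where one rare event
# carries an essentially unbounded cost.
losses = [0.1] * 999 + [1000.0]  # 999 benign examples, 1 catastrophic one

# The (unregularized) ERM objective looks at the empirical average,
# which stays modest despite the catastrophe...
avg_loss = sum(losses) / len(losses)  # about 1.1

# ...whereas explicitly tracking the maximal loss, in the spirit of the
# Shalev-Shwartz and Wexler reference above, makes it impossible to ignore.
max_loss = max(losses)  # 1000.0
```

<p>A learner minimizing the average can happily accept the model above; only a criterion that looks at the maximal loss flags it as unacceptable.</p>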
<h2 id="synergetic-relationship-between-academia-and-industry">Synergetic relationship between Academia and Industry</h2>
<p>So how does this all come together? I strongly believe that the relationship between Academia and Industry has to be synergetic. Never has an ‘academic skillset’ had such a direct translation into an industry context. While this direct translation is a root cause of the current debate on the relevance of academic ML research, at the same time it is also an opportunity for doing something together. Rather than outlining a very limited model of what one could do, I’d rather mention two current themes that I think are <em>not helpful</em> for achieving synergies—of course, as always, there are exceptions.</p>
<h3 id="the-false-god-of-co-employment">The false god of co-employment</h3>
<p>It is impossible to serve two masters with vastly different objectives. Some time back I talked to a researcher with a co-employment deal (similar to the one that Facebook would like to see) and I asked him about publishing. He told me about an argument that he had had with one of his superiors. Upon requesting time to publish (a pretty substantial methodological advance), he got the following answer (paraphrasing): “If it creates value for the company, why do you want to make it available to the public? If it does not create value, why do you waste time writing it up?” (For a much more nuanced and detailed discussion you should read <a href="http://www.argmin.net/2018/08/09/co-employment/">“You Cannot Serve Two Masters: The Harms of Dual Affiliation”</a>.)</p>
<h3 id="tapping-the-talent-pool-disguised-as-university-partnerships">Tapping the talent pool disguised as university partnerships</h3>
<p>Current interactions between academia and established industry players in the ML field are often reduced to treating the academic institution as a talent pool. This comes with many complications. Given the strong demand for ML talent, companies are vacuuming up whatever comes their way, including students that would have been brilliant academic researchers and that are much less suited for an industry R&D-type role. Often it is pure compensation numbers that lure students away, and while there might be a short-term benefit for industry, eventually this is akin to killing the golden goose. This is not to say that university partnerships with industry cannot be successful—I believe it is quite the opposite actually—but the <em>current predominant model</em> is harmful to academic institutions (and society at large) and does not satisfy the <em>equal partners</em> requirement; you know how the saying goes: <em>if you cannot spot the sucker in the room, it is you.</em></p>

<h1 id="atom-markdown">Collaborating online, in real-time, with math-support and computations</h1>

<p><em>August 20, 2018</em></p>

<p>One of the biggest challenges to overcome in (research) collaborations is often that people are not in the same place and do not have a common “space” available where one can exchange ideas and discuss, including being able to write math equations, in real-time: <em>a shared digital whiteboard</em>. Recently, I finally found a solution that works well (for my needs) and I thought it might be helpful for others as well.</p>

<p>I have been through numerous iterations, including OneNote, Overleaf, Rudel, Google Docs, Gobby-based chats, and many other tools and constructs, but all of them fell short in at least one of the following dimensions:</p>
<ol>
<li>Real-time collaboration</li>
<li>Support for LaTeX-like math equations</li>
<li>Minimal setup time and cost</li>
<li>Compatible with other tools (e.g., integration with git)</li>
<li>Easily parseable for computations (e.g., for verification purposes)</li>
</ol>
<p><em>Disclaimer.</em> Since nobody has time to read and even less time to write, the following is really only a quick summary of what is needed in terms of software and packages. Go out and explore - it is super easy to use… and feel free to ask questions!</p>
<h2 id="what-do-youget">What do you get?</h2>
<p>What you will get is a text editor with
Markdown support. Markdown in a nutshell:</p>
<blockquote>
<p>“Markdown is a lightweight markup language with plain text formatting syntax. It is designed so that it can be converted to HTML and many other formats using a tool by the same name” <a href="https://en.wikipedia.org/wiki/Markdown">[wikipedia]</a></p>
</blockquote>
<p>See <a href="https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet">[here]</a> and <a href="https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf">[here]</a> for a markdown cheatsheet and <a href="https://github.com/burodepeper/language-markdown/blob/master/FAQ.md">[here]</a> for some common questions.</p>
<p><strong>Real-time preview.</strong>
Think of instantaneous typesetting (see animation below) for all participants of the session.</p>
<p><strong>Math support.</strong> Simply type LaTeX code and have it typeset in real-time. The supported LaTeX commands etc. are limited but good enough for most applications; see <a href="https://github.com/atom-community/markdown-preview-plus/blob/master/docs/math.md">[here]</a> and <a href="http://docs.mathjax.org/en/latest/tex.html#supported-latex-commands">[here]</a> for details.</p>
<p><strong>Real-time collaboration.</strong> Share a file in real-time and work on it together, similar to google docs but with the interactivity of a jupyter notebook and the readability of a document with typeset formulae.</p>
<p><strong>Interactive code.</strong> Execute Python, Julia, R, etc. in place via hydrogen (see below). Highlight code. Hit shift-enter. And it runs. Right in your editor.</p>
<p><strong>Apart from that.</strong> Atom is also great for coding, LaTeX typesetting and much more… but that’s for some other time.</p>
<p class="center"><img src="http://www.pokutta.com/blog/assets/atomMD.gif" alt="Atom + markdown + math" class="align-center" /></p>
<h2 id="what-do-you-need-aka-installation">What do you need? aka Installation</h2>
<ol>
<li>Atom text editor from <a href="https://atom.io/">[atom.io]</a></li>
<li>Within Atom, install the following packages with its package manager:
<ul>
<li>markdown-preview-plus <a href="https://atom.io/packages/markdown-preview-plus">[link]</a></li>
<li>language-markdown <a href="https://atom.io/packages/language-markdown">[link]</a></li>
<li>teletype <a href="https://teletype.atom.io/">[link]</a></li>
<li>hydrogen <a href="https://atom.io/packages/hydrogen">[link]</a> (optional: for interactive code execution)</li>
</ul>
</li>
<li>Remarks:
<ul>
<li>You will need a github account to use teletype. Teletype uses github for data exchange between participants.</li>
<li>You might want to activate <em>“Enable Math Rendering By Default”</em> in the <em>markdown-preview-plus</em> package settings</li>
<li>You can activate the markdown preview with ctrl-shift-M</li>
<li>You can activate the (atom) command window with cmd/windows-shift-p, which is helpful to access <em>hydrogen</em> commands.</li>
</ul>
</li>
</ol>
<h2 id="adding-hydrogen-to-themix">Adding hydrogen to the mix</h2>
<p>What is <a href="https://github.com/nteract/hydrogen#multiple-kernels-inside-one-rich-document">hydrogen</a>:</p>
<blockquote>
<p>Hydrogen was inspired by Bret Victor’s ideas about the power of instantaneous feedback and the design of Light Table. Running code inline and in real time is a more natural way to develop. By bringing the interactive style of Light Table to the rock-solid usability of Atom, Hydrogen makes it easy to write code the way you want to.</p>
</blockquote>
<p>In short: run your code out of atom like jupyter (including plotting etc) and collaborate like a boss. Wanna verify computations of a limit or integral? Write it in sympy and run it <strong>right there</strong>.</p>
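<p>As a minimal, dependency-free sketch of this kind of inline verification (using only the Python standard library instead of sympy, which would give exact symbolic answers), here is what a quick sanity check might look like when run via hydrogen:</p>

```python
import math

# Check lim_{x -> 0} sin(x)/x = 1 by evaluating at ever smaller x;
# the error behaves like x^2/6, so it shrinks much faster than x.
for x in (1e-2, 1e-4, 1e-6):
    assert abs(math.sin(x) / x - 1) < x

# Check that the integral of x^2 over [0, 1] equals 1/3,
# using a composite midpoint rule with n subintervals.
n = 10000
midpoint_sum = sum(((i + 0.5) / n) ** 2 for i in range(n)) / n
assert abs(midpoint_sum - 1 / 3) < 1e-6
```

<p>With sympy the same checks become exact one-liners; the point here is just that any such verification can live right next to the prose in the shared document.</p>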
<p class="center"><img src="http://www.pokutta.com/blog/assets/hydrogen.gif" alt="from the hydrogen site for illustration only" class="align-center" /></p>
<p><a href="https://github.com/nteract/hydrogen#multiple-kernels-inside-one-rich-document">[Image from the hydrogen site for illustration only]</a></p>
<p>…and it works especially well together with the setup from above. <a href="https://blog.nteract.io/hydrogen-interactive-computing-in-atom-89d291bcc4dd">[Further reading]</a></p>