TL;DR: Part of a series of posts about tools, services, and packages that I use in day-to-day operations to boost efficiency and free up time for the things that really matter. Use at your own risk - happy to answer questions. For the full, continuously expanding list so far see here.
This is the second installment of the series; this time around it is about the Python environment that I use. Python has become my go-to language for rapid prototyping. In some sense these tools are among the most fundamental of the series, but at the same time they do not provide direct utility by solving a specific problem; rather, they accelerate problem solving in general.
PyCharm: an extremely powerful integrated development environment (IDE) for Python.
Learning curve: ⭐️⭐️⭐️
Excellent support for coding, from simple things such as syntax highlighting to more complex operations such as refactoring. It also supports managing different build/run environments, remote interpreters, etc., and is great for managing larger-scale projects.
Anaconda: a Python distribution geared towards scientific computing and data science applications.
Learning curve: ⭐️⭐️
Anaconda is a very comprehensive and well-maintained Python distribution geared towards scientific computing and data science applications. It ships with the conda package manager, which makes installing packages, as well as creating separate environments with different Python versions, exceptionally convenient. Learning curve only got ⭐️⭐️ as it is no harder to use than any other Python distribution.
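For example, creating an isolated environment with its own Python version is a one-liner (the environment name, Python version, and packages below are arbitrary placeholders, not a recommendation):

```shell
# create a new, isolated environment named "sandbox" with a pinned Python version
conda create --name sandbox python=3.11

# switch into it and install packages without touching other environments
conda activate sandbox
conda install numpy pandas

# list all environments to see what exists on the machine
conda env list
```

Because each environment is fully separate, you can keep one per project and never worry about conflicting dependency versions.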
Jupyter: interactive Python computing.
Learning curve: ⭐️⭐️⭐️
While PyCharm is great for more traditional development (write code, run, debug, iterate), Jupyter provides an interactive Python computing environment in a web browser (for those in the know: it is basically IPython on steroids). It essentially lets you work with data in real time and interactively, by allowing partial execution of code, direct review of results, plotting, etc., without having to fully re-run anything. It is great, for example, for exploratory data analysis. It allows for significantly faster tinkering with code and data, and once the code is stable it can easily be transferred into a more traditional Python project setup.
A typical process that I regularly follow is to first write a library that provides black-box functions for some task, and then use Jupyter for very high-level tinkering and modifications. In such a setup, the tools library does all the heavy lifting behind the scenes, and the Jupyter notebook is used only for very high-level manipulations and tests.
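A minimal sketch of this pattern (the library functions and data below are hypothetical stand-ins, not my actual tools library): the library hides the heavy lifting, so each notebook cell reduces to a one-liner that can be re-run and tweaked independently.

```python
from statistics import mean, stdev

# --- library part (in practice this lives in a separate module and is imported) ---
def load_data():
    """Stand-in for an expensive load/clean step; run once, kept in kernel memory."""
    return [12.1, 9.8, 11.4, 10.9, 13.2, 10.5]

def summarize(values):
    """Black-box analysis function exposed to the notebook."""
    return {
        "n": len(values),
        "mean": round(mean(values), 2),
        "stdev": round(stdev(values), 2),
    }

# --- notebook part: each line corresponds to one high-level cell ---
data = load_data()          # cell 1: load once
summary = summarize(data)   # cell 2: tweak and re-run without reloading
print(summary)
```

The point of the split is that cell 2 can be modified and re-executed as often as needed while the (potentially slow) result of cell 1 stays alive in the kernel.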