Tune is an abstraction layer for general parameter tuning built on top of Fugue. It can run hyperparameter tuning frameworks such as Optuna and Hyperopt on the backends supported by Fugue (Spark, Dask, Ray, and local). Besides typical machine learning libraries such as Scikit-learn and Keras, Tune can also be used for general scientific computing.
Tune has the following goals:
Provide the simplest and most intuitive APIs for major tuning cases.
Be scale agnostic and platform agnostic. We want you to worry less about distributed computing and focus on the tuning logic itself. Built on Fugue, Tune lets you develop your tuning process iteratively: you can test with small spaces on a local machine, then switch to larger spaces and run distributed with no code change.
Have questions? Chat with us on GitHub or Slack:
Tune is available through pip.
pip install tune
Tune does not come with any machine learning libraries because it can also be used to tune any objective function (as in the case of scientific computing). To use it with Scikit-learn and Bayesian optimization, you can install with extras.
pip install tune[hyperopt,sklearn]
Here we learn how to define the search space for hyperparameter tuning, and how Fugue Tune provides an intuitive and scalable interface for defining the hyperparameter combinations of an experiment. Tune's search space is decoupled from any specific framework.
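To make the idea concrete before touching any framework, a grid search space is just the cross product of the candidate values for each parameter. The sketch below is a conceptual pure-Python illustration of that idea, not Tune's actual API:

```python
from itertools import product

def grid_space(**params):
    """Expand a dict of candidate-value lists into the cross product
    of all parameter combinations (a conceptual grid search space)."""
    keys = list(params)
    for values in product(*params.values()):
        yield dict(zip(keys, values))

# 2 learning rates x 3 depths = 6 combinations
combos = list(grid_space(lr=[0.1, 0.01], depth=[3, 5, 7]))
```

Tune's own search space expressions build on the same cross-product idea while also supporting random and stochastic expressions that grid expansion alone cannot express.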
Next we apply the search space to non-iterative problems, where the model trains in a single run until it converges to a solution. Scikit-learn models fall under this category.
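A non-iterative objective is just a function that takes one parameter combination, evaluates it to completion in a single call, and returns a metric. The hedged sketch below uses a toy quadratic in place of actually training a Scikit-learn model, so it stays self-contained; the shape of the tuning loop is the same:

```python
from itertools import product

def objective(params):
    """A non-iterative objective: evaluates once and returns a metric.
    A toy quadratic stands in here for fitting a Scikit-learn model."""
    x, y = params["x"], params["y"]
    return (x - 3) ** 2 + (y + 1) ** 2

# a small grid of candidate configurations
space = [dict(zip(("x", "y"), v))
         for v in product([0, 1, 2, 3, 4], [-2, -1, 0, 1])]

best = min(space, key=objective)
# best -> {"x": 3, "y": -1}, with metric 0
```

With a real model, `objective` would fit the estimator on training data and return a validation score; the search loop around it is unchanged.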
Finally we apply the search space to iterative problems such as deep learning, where training proceeds over epochs rather than in a single run.
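What makes iterative problems different is that the objective reports a metric per epoch, so a tuner can allocate budget incrementally and prune weak candidates early. The sketch below illustrates one such strategy, successive halving, in pure Python; it is an assumption-laden stand-in for a deep learning training loop, not Tune's API:

```python
def make_trial(rate):
    """An iterative objective: loss improves a little each epoch.
    Stands in for a deep learning training loop we can resume."""
    loss = 1.0
    def step():
        nonlocal loss
        loss *= (1 - rate)  # one "epoch" of training
        return loss
    return step

# Successive halving: train every candidate for a small budget,
# keep the better half, then repeat with a doubled budget.
candidates = {rate: make_trial(rate) for rate in (0.1, 0.2, 0.3, 0.4)}
budget = 2
while len(candidates) > 1:
    scores = {rate: [trial() for _ in range(budget)][-1]
              for rate, trial in candidates.items()}
    survivors = sorted(scores, key=scores.get)[: len(scores) // 2]
    candidates = {rate: candidates[rate] for rate in survivors}
    budget *= 2
best_rate = next(iter(candidates))
```

Because trials keep their state between rounds, survivors resume training instead of restarting, which is exactly what makes early stopping and pruning worthwhile for expensive deep learning runs.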