Fugue Tune

Tune is an abstraction layer for general parameter tuning built on top of Fugue. It can run hyperparameter tuning frameworks such as Optuna and Hyperopt on the backends supported by Fugue (Spark, Dask, Ray, and local). Besides tuning models from typical machine learning libraries such as Scikit-learn and Keras, Tune can also be used for general scientific computing.

Tune has the following goals:

  • Provide the simplest and most intuitive APIs for major tuning cases.

  • Be scale agnostic and platform agnostic. We want you to worry less about distributed computing and focus on the tuning logic itself. Built on Fugue, Tune lets you develop your tuning process iteratively: you can test with small spaces on a local machine, then switch to larger spaces and run them in a distributed environment with no code change.

  • Be highly extensible and flexible at the lower-level abstractions, making it easy to integrate with libraries such as Hyperopt, Optuna, and Nevergrad.

Have questions? Chat with us on GitHub or Slack.

Installation

Tune is available through pip.

pip install tune

Tune does not come with any machine learning libraries because it can also be used to tune any objective function (as in the case of scientific computing). To use it with Scikit-learn and Bayesian optimization (provided by Hyperopt), install the extras:

pip install "tune[hyperopt,sklearn]"

Tune Tutorials

Search Space

Here we learn how to define the search space for hyperparameter tuning. Fugue Tune provides an intuitive and scalable interface for defining the hyperparameter combinations of an experiment, and its search space is decoupled from any specific framework.
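
To make this concrete, below is a small sketch using Tune's search space expressions (Space, Grid, Rand, Choice). The names follow Tune's documented API, but treat the exact signatures, the .sample call, and the space algebra shown here as assumptions that may vary by version; the tutorial is the authoritative reference.

from tune import Space, Grid, Rand, Choice

# Grid expressions enumerate every combination deterministically
grid_space = Space(
    model="rf", n_estimators=Grid(100, 200), criterion=Grid("gini", "entropy")
)

# Rand and Choice are stochastic; sampling draws concrete configurations
rand_space = Space(
    model="svm", C=Rand(0.1, 10.0), kernel=Choice("linear", "rbf")
).sample(5, seed=0)

# Spaces compose: + unions them, * takes a cross product
for conf in grid_space + rand_space:
    print(conf)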

Non-iterative Problems

Next we apply the search space to non-iterative problems: objectives that train to completion in a single run and report one final metric, with no per-iteration feedback. Most Scikit-learn models fall into this category.
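
As an illustration, here is a minimal non-iterative tuning run with suggest_for_noniterative_objective, where Tune binds the sampled parameters to the objective's arguments by name. This is a sketch assuming default settings (smaller return values treated as better); check the tutorial for the exact signature.

from tune import Space, Rand, suggest_for_noniterative_objective

def objective(x: float, y: float) -> float:
    # a simple quadratic bowl; the minimum is at x=1, y=-2
    return (x - 1) ** 2 + (y + 2) ** 2

# draw 30 random configurations from the space
space = Space(x=Rand(-5, 5), y=Rand(-5, 5)).sample(30, seed=0)

# run all trials locally and keep the single best report
reports = suggest_for_noniterative_objective(objective, space, top_n=1)
print(reports[0])

Because Tune is built on Fugue, the same call can be pointed at a distributed backend instead of the local machine; the tutorial shows how.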

Iterative Problems

Next we apply the search space to iterative problems, such as deep learning models, where training proceeds over steps or epochs and partial results can drive early stopping.
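
For intuition only, the plain-Python sketch below mimics the control flow of successive halving, a common budget-allocation strategy for iterative problems: every surviving trial trains for a few more epochs, the weaker half is pruned, and the budget doubles. It is deliberately not Tune's iterative API (the make_trial helper and the toy loss are hypothetical); see the tutorial for the real interface.

import random

random.seed(0)

def make_trial(conf):
    # a hypothetical iterative trial: state persists across budget rounds
    state = {"loss": 10.0}
    def run_one_epoch():
        # toy "training" step: a smaller lr decays loss faster in this fake model
        state["loss"] *= 0.8 + conf["lr"]
        return state["loss"]
    return run_one_epoch

trials = [{"lr": random.uniform(0.0, 0.15)} for _ in range(8)]
runners = {i: make_trial(c) for i, c in enumerate(trials)}
active = list(runners)
budget = 1
while len(active) > 1:
    # train each surviving trial for `budget` more epochs, record its latest loss
    scores = {i: [runners[i]() for _ in range(budget)][-1] for i in active}
    # keep the better half (lowest loss) and double the budget
    active = sorted(active, key=scores.get)[: len(active) // 2]
    budget *= 2
print("best config:", trials[active[0]])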