# Search Space

The search space is a central concept in parameter optimization. Grid search and random search are the most common tuning methods, and they may seem mutually exclusive. In fact, with a well-defined Space concept, both can be expressed in a single framework.

We haven't seen a satisfying space design in popular tuning frameworks, so we refined this concept and created a space language, keeping it as intuitive and minimal as possible.

## Core Classes & Concept

The core classes include the Space class itself plus Grid and stochastic expressions. In the following example, we import only the most commonly used ones.

```python
from tune import Space, Grid, Rand, RandInt, Choice
```


A space can be converted to a list of configurations (parameter combinations), and every configuration is independent and can be executed anytime, anywhere.

## Static Space

```python
space = Space(a=1, b=2)

list(space)  # this is how you get all combinations (configurations) in a defined space
```

```
[{'a': 1, 'b': 2}]
```


## Grid Search Space

Grid means every value must be present in the configurations, so if there are multiple grid expressions, we simply take their cross product.

```python
space = Space(a=1, b=Grid(2, 3), c=Grid("a", "b"))

list(space)
```

```
[{'a': 1, 'b': 2, 'c': 'a'},
 {'a': 1, 'b': 2, 'c': 'b'},
 {'a': 1, 'b': 3, 'c': 'a'},
 {'a': 1, 'b': 3, 'c': 'b'}]
```
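
The expansion above can be mimicked with only the standard library. This is a conceptual sketch of what Grid expansion does (a hypothetical `expand_grid` helper, not tune's implementation):

```python
from itertools import product

def expand_grid(static, **grids):
    # cross product of all grid value lists, each result merged with the static params
    keys = list(grids)
    return [{**static, **dict(zip(keys, values))}
            for values in product(*grids.values())]

configs = expand_grid({"a": 1}, b=[2, 3], c=["a", "b"])
# 2 x 2 = 4 configurations, matching the output above
```

The number of configurations grows multiplicatively with each grid expression, which is why pure grid search becomes expensive quickly.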


## Random Search Space

A stochastic expression randomly draws a value from its collection every time it is called, so there is no guarantee the final sample will cover all values. But you can control the total number of samples, and therefore the compute load.

```python
space = Space(a=1, b=Rand(0, 1), c=Choice("a", "b"))

list(space)
```

```
[{'a': 1, 'b': Rand(low=0, high=1, q=None, log=False, include_high=True), 'c': Choice('a', 'b')}]
```


So without a sampling instruction, stochastic expressions do not expand by themselves. You must be explicit about how many samples you want. You can also set a seed to make the result reproducible.

```python
space = Space(a=1, b=Rand(0, 1), c=Choice("a", "b"))

list(space.sample(3, seed=10))
```

```
[{'a': 1, 'b': 0.771320643266746, 'c': 'b'},
 {'a': 1, 'b': 0.0207519493594015, 'c': 'a'},
 {'a': 1, 'b': 0.6336482349262754, 'c': 'b'}]
```
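
Conceptually, seeded sampling just draws from each stochastic expression using a random generator initialized with the seed. Below is a standard-library sketch (a hypothetical `sample_space` helper, not tune's internals, so the drawn numbers differ from the output above):

```python
import random

def sample_space(n, seed, static, stochastic):
    # stochastic maps a param name to a function that draws from an RNG
    rng = random.Random(seed)
    return [{**static, **{k: draw(rng) for k, draw in stochastic.items()}}
            for _ in range(n)]

samples = sample_space(
    3, seed=10,
    static={"a": 1},
    stochastic={"b": lambda r: r.uniform(0, 1),
                "c": lambda r: r.choice(["a", "b"])},
)
# three independent configurations; the same seed always yields the same list
```

Because the seed fixes the generator state, rerunning the sampling reproduces the exact same configurations.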


So far you can see that the difference between grid and random search is only in how they are expressed. When converted to the Space class, both represent a collection of configurations that can run independently. And both are pre-determined: you know what you are going to run before actual execution.

Some newer tuning frameworks such as Optuna let you draw random variables while running a trial. That pushes the responsibility to runtime, giving you maximal flexibility. With this design, however, that is no longer necessary: you can keep the definition of the spaces at 'compile time'.

## Random Search Space without Sampling

Why do we let this happen?

```python
space = Space(a=1, b=Rand(0, 1), c=Choice("a", "b"))

list(space)
```

```
[{'a': 1, 'b': Rand(low=0, high=1, q=None, log=False, include_high=True), 'c': Choice('a', 'b')}]
```


This is because of another very popular search algorithm: Bayesian optimization. It requires keeping the stochastic expressions so the algorithm can decide what values to try at each iteration, using the history of previous iterations to determine the best next guess.

It is sequential, but it takes far fewer guesses than random search to achieve comparable results, so it needs much less compute. However, the total time taken is not necessarily less, because random search can be fully parallelized while Bayesian optimization can't.
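
The suggest-observe loop can be sketched as below. A real Bayesian optimizer fits a surrogate model over the history; here a naive "perturb the best guess so far" rule stands in, purely to show the sequential structure where each guess depends on all previous results:

```python
import random

def sequential_search(objective, low, high, iterations, seed=0):
    rng = random.Random(seed)
    history = []  # (value, score) pairs from previous iterations
    for _ in range(iterations):
        if not history:
            guess = rng.uniform(low, high)  # nothing known yet: explore
        else:
            # stand-in for a surrogate model: perturb the best value seen so far
            best = min(history, key=lambda h: h[1])[0]
            guess = min(high, max(low, best + rng.gauss(0, (high - low) / 10)))
        history.append((guess, objective(guess)))  # observe, then loop
    return min(history, key=lambda h: h[1])

best_x, best_score = sequential_search(lambda x: (x - 0.3) ** 2, 0, 1, 20)
```

Note the inherent serialization: iteration k cannot start before iteration k-1 has been scored, which is exactly why this approach cannot be parallelized the way random search can.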

To sum up, every search algorithm has pros and cons; do not stick with just one. That is why we combine them in the next step.

## Space Combination

What if, for a single training set, you want to try different models with different search algorithms? Let's write some pseudo code first:

```python
space1 = Space(model="model1", a=1, b=Grid(2, 3))       # Grid search
space2 = Space(model="model2", x=Rand(3, 4)).sample(2)  # Random search
space3 = Space(model="model3", y=Rand(3, 4))            # Bayesian Optimization
```
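
Conceptually, combining spaces is just taking the union of their configuration lists. Below is a standard-library sketch of that union semantics; the expanded configuration values are made up for illustration:

```python
from itertools import chain

# hypothetical expanded configurations from each space: grid expanded,
# random sampled, stochastic expression kept live for Bayesian optimization
space1_configs = [{"model": "model1", "a": 1, "b": 2},
                  {"model": "model1", "a": 1, "b": 3}]
space2_configs = [{"model": "model2", "x": 3.18},
                  {"model": "model2", "x": 3.95}]
space3_configs = [{"model": "model3", "y": "Rand(3, 4)"}]  # placeholder for the live expression

combined = list(chain(space1_configs, space2_configs, space3_configs))
# 2 + 2 + 1 = 5 configurations, mixing three search algorithms in one combined space
```

Because every configuration is self-contained, the combined collection can still be distributed and executed independently, regardless of which search algorithm each piece came from.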