hypernets.searchers package

Submodules

hypernets.searchers.evolution_searcher module

class hypernets.searchers.evolution_searcher.EvolutionSearcher(space_fn, population_size, sample_size, regularized=False, candidates_size=10, optimize_direction=<OptimizeDirection.Minimize: 'min'>, use_meta_learner=True, space_sample_validation_fn=None, random_state=None)[source]

Bases: hypernets.core.searcher.Searcher

Evolutionary Algorithm

Parameters:
  • space_fn (callable, required) – A search space function which when called returns a HyperSpace instance
  • population_size (int, required) – Size of population
  • sample_size (int, required) – The number of parent candidates selected in each cycle of evolution
  • regularized (bool, (default=False)) – Whether to use regularized evolution (aging evolution, as in the reference below), i.e. eliminate the oldest individual rather than the worst one.
  • candidates_size (int, (default=10)) – The number of candidate samples the meta-learner evaluates during roll-out.
  • optimize_direction ('min' or 'max', (default='min')) – Whether the search process seeks to maximize or minimize the reward value.
  • use_meta_learner (bool, (default=True)) – Whether to enable the meta-learner, which estimates the performance of unseen samples from previously evaluated ones. It provides a practical way to assess a search branch over many simulations without the actual training.
  • space_sample_validation_fn (callable or None, (default=None)) – Used to verify the validity of samples from the search space; it can also add constraint rules to the search space to reduce its size.

References

Real, Esteban, et al. “Regularized evolution for image classifier architecture search.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019.
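
Example

A minimal usage sketch. The toy search space follows the Hypernets quick-start pattern (Identity, Choice and Int are assumed to be importable as in the project examples), and evaluate stands in for a user-supplied evaluation function:

    from hypernets.core.ops import Identity
    from hypernets.core.search_space import Choice, HyperSpace, Int
    from hypernets.searchers.evolution_searcher import EvolutionSearcher

    def my_search_space():
        # A toy search space in the style of the Hypernets quick-start.
        space = HyperSpace()
        with space.as_default():
            Identity(p1=Choice(['a', 'b']), p2=Int(1, 100))
        return space

    searcher = EvolutionSearcher(
        my_search_space,
        population_size=30,
        sample_size=10,
        regularized=True,          # aging evolution, as in Real et al.
        optimize_direction='min',
    )

    # One manual search step: draw a sample, evaluate it, report the reward.
    space_sample = searcher.sample()
    reward = evaluate(space_sample)  # hypothetical evaluation function
    searcher.update_result(space_sample, reward)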

parallelizable
population_size
sample(space_options=None)[source]
summary()[source]
update_result(space_sample, result)[source]
class hypernets.searchers.evolution_searcher.Individual(space_sample, reward)[source]

Bases: object

mutate()[source]
class hypernets.searchers.evolution_searcher.Population(size=50, optimize_direction=<OptimizeDirection.Minimize: 'min'>, random_state=None)[source]

Bases: object

append(space_sample, reward)[source]
eliminate(num=1, regularized=False)[source]
initializing
length
mutate(parent_space, offspring_space)[source]
sample_best(sample_size)[source]
shuffle()[source]

hypernets.searchers.genetic module

class hypernets.searchers.genetic.Individual(dna, scores, random_state)[source]

Bases: object

class hypernets.searchers.genetic.Recombination(random_state)[source]

Bases: object

check_parents(ind1: hypernets.searchers.genetic.Individual, ind2: hypernets.searchers.genetic.Individual)[source]
do(ind1: hypernets.searchers.genetic.Individual, ind2: hypernets.searchers.genetic.Individual, out_space: hypernets.core.search_space.HyperSpace)[source]
class hypernets.searchers.genetic.ShuffleCrossOver(random_state)[source]

Bases: hypernets.searchers.genetic.Recombination

do(ind1: hypernets.searchers.genetic.Individual, ind2: hypernets.searchers.genetic.Individual, out_space: hypernets.core.search_space.HyperSpace)[source]
class hypernets.searchers.genetic.SinglePointCrossOver(random_state)[source]

Bases: hypernets.searchers.genetic.Recombination

do(ind1: hypernets.searchers.genetic.Individual, ind2: hypernets.searchers.genetic.Individual, out_space: hypernets.core.search_space.HyperSpace)[source]
class hypernets.searchers.genetic.SinglePointMutation(random_state, proba=0.7)[source]

Bases: object

do(sample_space, out_space, proba=None)[source]
class hypernets.searchers.genetic.UniformCrossover(random_state)[source]

Bases: hypernets.searchers.genetic.Recombination

do(ind1: hypernets.searchers.genetic.Individual, ind2: hypernets.searchers.genetic.Individual, out_space: hypernets.core.search_space.HyperSpace)[source]
hypernets.searchers.genetic.create_recombination(name, random_state, **kwargs)[source]
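
A short sketch of constructing these operators; they are normally passed to the multi-objective searchers below (e.g. the recombination parameter of NSGAIISearcher) rather than called directly. The seed is arbitrary:

    from numpy.random import RandomState
    from hypernets.searchers.genetic import SinglePointCrossOver, SinglePointMutation

    rs = RandomState(9527)
    recombination = SinglePointCrossOver(rs)       # cut the parents' DNA at one point
    mutation = SinglePointMutation(rs, proba=0.7)  # mutate one gene with probability 0.7

    # Both expose a do(...) method: Recombination.do combines two parent
    # Individuals into out_space, SinglePointMutation.do perturbs a sample.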

hypernets.searchers.grid_searcher module

class hypernets.searchers.grid_searcher.GridSearcher(space_fn, optimize_direction=<OptimizeDirection.Minimize: 'min'>, space_sample_validation_fn=None, n_expansion=5)[source]

Bases: hypernets.core.searcher.Searcher

export()[source]
get_best()[source]
parallelizable
reset()[source]
sample(space_options=None)[source]
update_result(space, result)[source]
hypernets.searchers.grid_searcher.test_parameter_grid(self)[source]
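
A minimal sketch of driving the grid searcher by hand; my_search_space and evaluate are the hypothetical helpers from the evolution-searcher example, and reading n_expansion as the number of values each continuous parameter is expanded into is an assumption based on the name:

    from hypernets.searchers.grid_searcher import GridSearcher

    searcher = GridSearcher(my_search_space, n_expansion=5)
    for _ in range(10):
        space_sample = searcher.sample()
        searcher.update_result(space_sample, evaluate(space_sample))
    print(searcher.get_best())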

hypernets.searchers.mcts_core module

class hypernets.searchers.mcts_core.BasePolicy[source]

Bases: object

back_propagation(node, reward)[source]
selection(node)[source]
class hypernets.searchers.mcts_core.MCNode(id, name, param_sample, parent=None, tree=None, is_terminal=False, random_state=None)[source]

Bases: object

add_child(param_sample)[source]
children
depth
expanded
expansion(param_space, max_space)[source]
info()[source]
is_leaf
is_terminal
random_sample()[source]
set_parent(parent)[source]
set_terminal()[source]
class hypernets.searchers.mcts_core.MCTree(space_fn, policy, max_node_space)[source]

Bases: object

back_propagation(node, reward, is_simulation=False)[source]
current_node
expansion(node)[source]
node_to_space(node)[source]
path_to_node(node)[source]
roll_out(space_sample, node)[source]
selection_and_expansion()[source]
simulation(node)[source]
class hypernets.searchers.mcts_core.UCT(exploration_bonus=0.6)[source]

Bases: hypernets.searchers.mcts_core.BasePolicy

back_propagation(node, reward, is_simulation=False)[source]
selection(node)[source]
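
UCT presumably scores children with the standard UCB1 rule \(\bar{r}_v + c \sqrt{\ln N / n_v}\), with exploration_bonus playing the role of the exploration constant c (an assumption based on the parameter name). Below is a sketch of one search iteration assembled from the MCTree methods documented above; the exact call pattern is inferred from the method names, and my_search_space and evaluate are hypothetical:

    from hypernets.searchers.mcts_core import MCTree, UCT

    tree = MCTree(my_search_space, policy=UCT(exploration_bonus=0.6), max_node_space=10)

    node = tree.selection_and_expansion()    # select with UCT and expand a leaf
    space_sample = tree.node_to_space(node)  # materialize the path as a space sample
    reward = evaluate(space_sample)          # hypothetical evaluation function
    tree.back_propagation(node, reward)      # update statistics along the path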

hypernets.searchers.mcts_searcher module

class hypernets.searchers.mcts_searcher.MCTSSearcher(space_fn, policy=None, max_node_space=10, candidates_size=10, optimize_direction=<OptimizeDirection.Minimize: 'min'>, use_meta_learner=True, space_sample_validation_fn=None)[source]

Bases: hypernets.core.searcher.Searcher

Parameters:
  • space_fn (Callable) – A search space function which when called returns a HyperSpace object.
  • policy (hypernets.searchers.mcts_core.BasePolicy, (default=None)) – The policy for the Selection and Backpropagation phases; UCT is used by default.
  • max_node_space (int, (default=10)) – The maximum number of child nodes when expanding a node.
  • candidates_size (int, (default=10)) – The number of candidate samples the meta-learner evaluates during roll-out.
  • optimize_direction ('min' or 'max', (default='min')) – Whether the search process seeks to maximize or minimize the reward value.
  • use_meta_learner (bool, (default=True)) – Whether to enable the meta-learner, which estimates the performance of unseen samples from previously evaluated ones. It provides a practical way to assess a search branch over many simulations without the actual training.
  • space_sample_validation_fn (Callable or None, (default=None)) – Used to verify the validity of samples from the search space; it can also add constraint rules to the search space to reduce its size.

References

[1] Wang, Linnan, et al. “AlphaX: exploring neural architectures with deep neural networks and Monte Carlo tree search.” arXiv preprint arXiv:1903.11059 (2019).

[2] Browne, Cameron B., et al. “A survey of Monte Carlo tree search methods.” IEEE Transactions on Computational Intelligence and AI in Games 4.1 (2012): 1-43.
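
Example

A minimal usage sketch mirroring the evolution-searcher example above (my_search_space and evaluate as before); omitting policy selects UCT by default:

    from hypernets.searchers.mcts_searcher import MCTSSearcher

    searcher = MCTSSearcher(my_search_space, max_node_space=10, optimize_direction='min')

    space_sample = searcher.sample()
    searcher.update_result(space_sample, evaluate(space_sample))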

export()[source]
get_best()[source]
max_node_space
parallelizable()[source]
reset()[source]
sample(space_options=None)[source]
summary()[source]
update_result(space_sample, result)[source]

hypernets.searchers.moead_searcher module

class hypernets.searchers.moead_searcher.Decomposition(**kwargs)[source]

Bases: object

static adaptive_normalization(F, ideal, nadir)[source]
do(scores: numpy.ndarray, weight_vector: numpy.ndarray, Z: numpy.ndarray, ideal: numpy.ndarray, nadir: numpy.ndarray, **kwargs)[source]
class hypernets.searchers.moead_searcher.MOEADSearcher(space_fn, objectives, n_sampling=5, n_neighbors=2, recombination=None, mutate_probability=0.7, decomposition=None, space_sample_validation_fn=None, random_state=None)[source]

Bases: hypernets.searchers.moo.MOOSearcher

An implementation of “MOEA/D”.

Parameters:
  • space_fn (callable, required) – A search space function which when called returns a HyperSpace instance
  • objectives (List[Objective], optional, (default to NumOfFeatures instance)) – The optimization objectives.
  • n_sampling (int, optional, default to 5.) –

    The number of samples along each objective; it determines the number of sub-problems (weight vectors) after decomposition, as shown in the worked example after this parameter list:

    \(N = \binom{samples + objectives - 1}{objectives - 1}\)

  • n_neighbors (int, optional, default to 2.) – The number of neighboring solutions considered during crossover.
  • recombination (Recombination, optional, default to instance of SinglePointCrossOver) –

    the strategy to recombine DNA of parents to generate offspring. Builtin strategies:

    • ShuffleCrossOver
    • UniformCrossover
    • SinglePointCrossOver
  • decomposition (Decomposition, optional, default to instance of TchebicheffDecomposition) –

    The strategy to decompose the multi-objective optimization problem and calculate scores for the sub-problems. Currently supported:

    • TchebicheffDecomposition
    • PBIDecomposition
    • WeightedSumDecomposition

    Because the objectives may differ in scale, the scores are normalized before decomposition:

    \(f_i' = \frac{f_i - z_i^*}{z_i^{nad} - z_i^* + \epsilon}\)

  • mutate_probability (float, optional, default to 0.7) – The probability of genetic variation for offspring; when the parents cannot recombine, a gene of the generated offspring is always mutated.
  • space_sample_validation_fn (callable or None, (default=None)) – Used to verify the validity of samples from the search space; it can also add constraint rules to the search space to reduce its size.
  • random_state (np.RandomState, optional) – Used to reproduce the search process.
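
A worked example of the population-size formula above; a minimal sketch using only the standard library (the function name is hypothetical):

    from math import comb

    def moead_population_size(n_sampling: int, n_objectives: int) -> int:
        # Number of weight vectors (sub-problems) produced by NBI decomposition.
        return comb(n_sampling + n_objectives - 1, n_objectives - 1)

    print(moead_population_size(5, 2))  # 6  = C(6, 1)
    print(moead_population_size(5, 3))  # 21 = C(7, 2)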

References

[1] Q. Zhang and H. Li, “MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition,” in IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, Dec. 2007, doi: 10.1109/TEVC.2007.892759.

[2] Das, I., and J. E. Dennis. “Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems.” SIAM Journal on Optimization 8.3 (1998): 631-657.

calc_euler_distance(vectors)[source]
distribution_number(n_samples, n_objectives)[source]

Generate uniformly distributed weight vectors; an implementation of normal-boundary intersection (NBI) [2].

export()[source]
get_best()[source]
get_historical_population() → List[hypernets.searchers.genetic.Individual][source]
get_ideal_point()[source]
get_nadir_point()[source]
get_nondominated_set()[source]
get_population() → List[hypernets.searchers.genetic.Individual][source]
get_reference_point()[source]

Calculate the reference point Z used in the Tchebicheff decomposition.

init_mean_vector_by_NBI(n_samples, n_objectives)[source]
init_population(weight_vectors)[source]
n_objectives
parallelizable
population_size
reset()[source]
sample(space_options=None)[source]
update_result(space, result)[source]
class hypernets.searchers.moead_searcher.PBIDecomposition(penalty=0.5)[source]

Bases: hypernets.searchers.moead_searcher.Decomposition

An implementation of the “boundary intersection approach based on penalty” (PBI).

Parameters: penalty (float, optional, default to 0.5) – Penalty applied as the solution F deviates from the weight vector; the larger the value, the faster the convergence.
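
For reference, the standard PBI scalarization from Zhang and Li [1], which this class presumably implements with penalty as \(\theta\): \(g^{pbi}(x \mid \lambda, z^*) = d_1 + \theta d_2\), where \(d_1 = \lVert (F(x) - z^*)^T \lambda \rVert / \lVert \lambda \rVert\) is the distance along the weight vector and \(d_2 = \lVert F(x) - (z^* + d_1 \lambda / \lVert \lambda \rVert) \rVert\) is the deviation from it.
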
do(scores: numpy.ndarray, weight_vector: numpy.ndarray, Z: numpy.ndarray, ideal: numpy.ndarray, nadir: numpy.ndarray, **kwargs)[source]
class hypernets.searchers.moead_searcher.TchebicheffDecomposition(**kwargs)[source]

Bases: hypernets.searchers.moead_searcher.Decomposition

do(scores: numpy.ndarray, weight_vector, Z, ideal: numpy.ndarray, nadir: numpy.ndarray, **kwargs)[source]
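
For reference, the standard Tchebycheff scalarization from [1], which this decomposition presumably follows: \(g^{te}(x \mid \lambda, z^*) = \max_{1 \le i \le m} \lambda_i \lvert f_i(x) - z_i^* \rvert\).
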
class hypernets.searchers.moead_searcher.WeightedSumDecomposition(**kwargs)[source]

Bases: hypernets.searchers.moead_searcher.Decomposition

do(scores: numpy.ndarray, weight_vector, Z: numpy.ndarray, ideal: numpy.ndarray, nadir: numpy.ndarray, **kwargs)[source]
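
For reference, the weighted-sum scalarization from [1]: \(g^{ws}(x \mid \lambda) = \sum_{i=1}^{m} \lambda_i f_i(x)\), which works well on convex Pareto fronts but cannot reach solutions in non-convex regions.
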
hypernets.searchers.moead_searcher.create_decomposition(name, **kwargs)[source]

hypernets.searchers.moo module

class hypernets.searchers.moo.MOOSearcher(space_fn, objectives: List[hypernets.core.objective.Objective], *, use_meta_learner=True, space_sample_validation_fn=None, **kwargs)[source]

Bases: hypernets.core.searcher.Searcher

check_plot()[source]
get_historical_population() → List[hypernets.searchers.genetic.Individual][source]
get_nondominated_set() → List[hypernets.searchers.genetic.Individual][source]
get_pareto_nondominated_set()[source]
get_population() → List[hypernets.searchers.genetic.Individual][source]
kind()[source]

Type of the searcher; one of soo or moo. This property is used to avoid importing MOOSearcher when detecting the searcher type.

plot_population(figsize=(6, 6), **kwargs)[source]

hypernets.searchers.nsga_searcher module

class hypernets.searchers.nsga_searcher.NSGAIISearcher(space_fn, objectives, recombination=None, mutate_probability=0.7, population_size=30, space_sample_validation_fn=None, random_state=None)[source]

Bases: hypernets.searchers.nsga_searcher._NSGAIIBasedSearcher

An implementation of “NSGA-II”.

Parameters:
  • space_fn (callable, required) – A search space function which when called returns a HyperSpace instance
  • objectives (List[Objective], optional, (default to NumOfFeatures instance)) – The optimization objectives.
  • recombination (Recombination, required) – The strategy to recombine the DNA of parents to generate offspring. Built-in strategies: ShuffleCrossOver, UniformCrossover, SinglePointCrossOver.
  • mutate_probability (float, optional, default to 0.7) – The probability of genetic variation for offspring; when the parents cannot recombine, a gene of the generated offspring is always mutated.
  • population_size (int, default to 30) – Size of the population.
  • space_sample_validation_fn (callable or None, (default=None)) – Used to verify the validity of samples from the search space; it can also add constraint rules to the search space to reduce its size.
  • random_state (np.RandomState, optional) – Used to reproduce the search process.

References

[1] K. Deb, A. Pratap, S. Agarwal and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” in IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, April 2002, doi: 10.1109/4235.996017.
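
Example

A construction sketch; my_search_space is the hypothetical search-space function from the earlier examples, and my_objectives stands in for a user-supplied list of hypernets.core.objective.Objective instances. The Pareto front can presumably be read back through the MOOSearcher interface:

    from numpy.random import RandomState
    from hypernets.searchers.genetic import ShuffleCrossOver
    from hypernets.searchers.nsga_searcher import NSGAIISearcher

    rs = RandomState(42)
    searcher = NSGAIISearcher(
        my_search_space,
        objectives=my_objectives,           # hypothetical List[Objective]
        recombination=ShuffleCrossOver(rs),
        mutate_probability=0.7,
        population_size=30,
        random_state=rs,
    )

    # After running trials:
    nondominated = searcher.get_nondominated_set()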

class hypernets.searchers.nsga_searcher.RNSGAIISearcher(space_fn, objectives, ref_point=None, weights=None, dominance_threshold=0.3, recombination=None, mutate_probability=0.7, population_size=30, space_sample_validation_fn=None, random_state=None)[source]

Bases: hypernets.searchers.nsga_searcher._NSGAIIBasedSearcher

An implementation of R-NSGA-II, a variant of the NSGA-II algorithm.

Parameters:
  • space_fn (callable, required) – A search space function which when called returns a HyperSpace instance
  • objectives (List[Objective], optional, (default to NumOfFeatures instance)) – The optimization objectives.
  • ref_point (Tuple[float], required) – User-specified reference point used to guide the search toward the desired region.
  • weights (Tuple[float], optional, default to uniform) – Weight vector; provides finer control over which Pareto-optimal solutions to converge to.
  • dominance_threshold (float, optional, default to 0.3) – Distance threshold; when two solutions are Pareto-equivalent, their distances are compared instead.
  • recombination (Recombination, required) – The strategy to recombine the DNA of parents to generate offspring. Built-in strategies: ShuffleCrossOver, UniformCrossover, SinglePointCrossOver.
  • mutate_probability (float, optional, default to 0.7) – The probability of genetic variation for offspring; when the parents cannot recombine, a gene of the generated offspring is always mutated.
  • population_size (int, default to 30) – Size of the population.
  • space_sample_validation_fn (callable or None, (default=None)) – Used to verify the validity of samples from the search space; it can also add constraint rules to the search space to reduce its size.
  • random_state (np.RandomState, optional) – Used to reproduce the search process.

References

[1] L. Ben Said, S. Bechikh and K. Ghedira, “The r-Dominance: A New Dominance Relation for Interactive Evolutionary Multicriteria Decision Making,” in IEEE Transactions on Evolutionary Computation, vol. 14, no. 5, pp. 801-818, Oct. 2010, doi: 10.1109/TEVC.2010.2041060.
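
Example

The R-NSGA-II variant additionally takes a reference point, and optionally weights, to bias the search toward a preferred region; a sketch with two objectives (the values are illustrative; my_search_space and my_objectives as before):

    from hypernets.searchers.nsga_searcher import RNSGAIISearcher

    searcher = RNSGAIISearcher(
        my_search_space,
        objectives=my_objectives,    # hypothetical List[Objective]
        ref_point=(0.2, 0.1),        # desired region, one value per objective
        weights=(0.5, 0.5),          # uniform preference over the objectives
        dominance_threshold=0.3,
    )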

hypernets.searchers.playback_searcher module

class hypernets.searchers.playback_searcher.PlaybackSearcher(history: hypernets.core.trial.TrialHistory, top_n=None, reverse=False, optimize_direction=<OptimizeDirection.Minimize: 'min'>)[source]

Bases: hypernets.core.searcher.Searcher

parallelizable
sample(space_options=None)[source]
update_result(space, result)[source]
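
A sketch of replaying recorded trials; history is a hypernets.core.trial.TrialHistory saved from a previous run, and reading top_n as "replay only the n best trials" is an assumption based on the parameter name:

    from hypernets.searchers.playback_searcher import PlaybackSearcher

    searcher = PlaybackSearcher(history, top_n=10)
    space_sample = searcher.sample()  # yields a previously recorded sample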

hypernets.searchers.random_searcher module

class hypernets.searchers.random_searcher.RandomSearcher(space_fn, optimize_direction=<OptimizeDirection.Minimize: 'min'>, space_sample_validation_fn=None)[source]

Bases: hypernets.core.searcher.Searcher

export()[source]
get_best()[source]
parallelizable
reset()[source]
sample(space_options=None)[source]
update_result(space, result)[source]

Module contents

hypernets.searchers.get_searcher_cls(identifier)[source]
hypernets.searchers.make_searcher(cls, search_space_fn, optimize_direction='min', objectives=None, **kwargs)[source]
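
A sketch of the factory helper; passing the searcher class itself as cls matches the parameter name, though get_searcher_cls suggests string identifiers may also be resolvable (an assumption). my_search_space is the hypothetical search-space function from the earlier examples:

    from hypernets.searchers import make_searcher
    from hypernets.searchers.random_searcher import RandomSearcher

    searcher = make_searcher(RandomSearcher, my_search_space, optimize_direction='min')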