leap_ec.executable_rep package

Submodules

leap_ec.executable_rep.cgp module

Cartesian genetic programming (CGP) representation.

The CGPDecoder does most of the work here: it converts a linear genome into a graph structure, and wraps the latter in a CGPExecutable (which knows how to execute the graph).

class leap_ec.executable_rep.cgp.CGPDecoder(primitives, num_inputs, num_outputs, num_layers, nodes_per_layer, max_arity, prune: bool = True, levels_back=None)

Bases: Decoder

Implements the genotype-phenotype decoding for Cartesian genetic programming (CGP).

A CGP genome is linear, but made up of one sub-sequence for each circuit element. In our version here, the first gene in each sub-sequence indicates the primitive (i.e., function) that node computes, and the subsequent genes indicate the inputs to that primitive.

That is, with a max_arity of 2, each node is specified by three genes [p_id, i_1, i_2], where p_id is the index of the node’s primitive, and i_1, i_2 are the indices of the nodes that feed into it.

The sequence [ 0, 2, 3 ] indicates an element that computes the 0th primitive (as an index of the primitives list) and takes its inputs from nodes 2 and 3, respectively.
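
For example, with a max_arity of 2 the node genes can be read three at a time. The slicing below is purely illustrative and not part of the CGPDecoder API:

>>> genome = [ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ]
>>> genes_per_node = 1 + 2                 # primitive index + max_arity input indices
>>> node_genes = genome[:-2]               # the final two genes select the outputs
>>> [ node_genes[i:i + genes_per_node] for i in range(0, len(node_genes), genes_per_node) ]
[[0, 0, 1], [1, 0, 1], [2, 2, 3], [0, 2, 3]]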

bounds()

Return the (min, max) values that every gene may assume, taking into account the levels structure.

These values should be used by initialization and mutation operators to ensure that CGP’s constraints are met.

>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, levels_back=1)
>>> decoder.bounds()
[(0, 2), (0, 1), (0, 1), (0, 2), (0, 1), (0, 1), (0, 2), (2, 3), (2, 3), (0, 2), (2, 3), (2, 3), (0, 5), (0, 5)]
check_constraints(next_individual: Iterator)

An operator that checks whether individuals’ genomes satisfy the CGP constraints.

For example, say we have the following population:

>>> from leap_ec import Individual
>>> genome0 = np.array([ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ])
>>> genome1 = np.array([ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 1, 4, 5 ])
>>> genome2 = np.array([ 0, 0, 1, 4, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ])
>>> genome3 = np.array([ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5, 3, 4, 5 ])
>>> genome4 = np.array([ 0.0, 0.0, 1.0, 1.0, 0, 1.0, 2.0, 2.0, 3.0, 0.0, 2.0, 3.0, 4.0, 5.0 ])
>>> population = iter([ Individual(genome0),
...                     Individual(genome1),
...                     Individual(genome2),
...                     Individual(genome3),
...                     Individual(genome4) ])

Then given this decoder:

>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, levels_back=1)

The first individual (genome0) satisfies the constraints:

>>> op = decoder.check_constraints
>>> next(op(population))
Individual<...>(...)

The next (genome1) fails, however, because it violates the levels_back constraint:

>>> next(op(population))
Traceback (most recent call last):
...
ValueError: CGP constraints violated by individual: expected gene at locus 11 to be between the values of (2, 3) (inclusive), but found a value of 1.

Then genome2 fails because it contains a cycle:

>>> next(op(population))
Traceback (most recent call last):
...
ValueError: CGP constraints violated by individual: expected gene at locus 3 to be between the values of (0, 2) (inclusive), but found a value of 4.

The next (genome3) fails because it has an incorrect genome length:

>>> next(op(population))
Traceback (most recent call last):
...
ValueError: CGP constraints violated by individual: genome of length 17 found, but expected 14 genes.

And the last (genome4) fails because the genes are of the wrong type:

>>> next(op(population))
Traceback (most recent call last):
...
ValueError: CGP constraints violated by individual: genome must contain only integers, but the gene at locus 0 has a non-integral value of 0.0.
decode(genome, *args, **kwargs)

Decode a linear CGP genome into an executable circuit.

>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2)
>>> genome = [ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ]
>>> decoder.decode(genome)
<leap_ec.executable_rep.cgp.CGPExecutable object at ...>
get_input_sources(genome, layer, node)

Given a linear CGP genome, return the list of all of the input sources (as integers) which feed into the given node in the given layer.

get_output_sources(genome)

Given a linear CGP genome, return the list of nodes that connect to each output.

get_primitive(genome, layer, node)

Given a linear CGP genome, return the primitive object for the given node in the given layer.
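
For example, using the genome from the decode() example below (a sketch; exact return types may differ, so expected values are given as comments rather than doctest output):

>>> import numpy as np
>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2)
>>> genome = np.array([ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ])
>>> sources = decoder.get_input_sources(genome, layer=0, node=0)   # expect input nodes 0 and 1
>>> outputs = decoder.get_output_sources(genome)                   # expect nodes 4 and 5
>>> f = decoder.get_primitive(genome, layer=1, node=0)             # expect primitives[2]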

initializer()

Convenience method that returns an initialization function for creating integer-vector genomes that obey this CGP representation’s constraints.
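
For example (a sketch, assuming the returned function takes no arguments and produces one genome per call):

>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, levels_back=1)
>>> create_genome = decoder.initializer()
>>> genome = create_genome()
>>> len(genome) == decoder.num_genes()
True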

num_cgp_nodes()

Return the total number of nodes that will be in the resulting CGP graph, including inputs and outputs.

For example, a 2x2 CGP individual with 2 outputs and 2 inputs will have $4 + 2 + 2 = 8$ total graph nodes.

>>> decoder = CGPDecoder([sum], num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, levels_back=1)
>>> decoder.num_cgp_nodes()
8
num_genes()

The number of genes we expect to find in each genome. This will equal the number of outputs plus the total number of genes needed to specify the nodes of the graph.

The number of inputs has no effect on the size of the genome.

For example, a 2x2 CGP individual with 2 outputs and a max_arity of 2 will have 14 genes: $3*4 = 12$ genes to specify the primitive and inputs (1 + 2) for each internal node, plus 2 genes to specify the circuit outputs.

>>> decoder = CGPDecoder([sum], num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, levels_back=1)
>>> decoder.num_genes()
14
static prune_graph(graph, num_inputs: int, num_outputs: int)

Prune parts of the graph that do not feed into any of the output nodes.

class leap_ec.executable_rep.cgp.CGPExecutable(primitives, num_inputs, num_outputs, graph)

Bases: Executable

Represents a decoded CGP circuit, which can be executed on inputs.

class leap_ec.executable_rep.cgp.CGPWithParametersDecoder(primitives, num_inputs: int, num_outputs: int, num_layers: int, nodes_per_layer: int, max_arity: int, num_parameters_per_node: int, prune: bool = True, levels_back=None)

Bases: CGPDecoder

A CGP decoder that takes a genome with two segments: an integer vector defining the usual CGP genome (functions and connectivity), and an auxiliary vector defining additional constant parameters to be fed into each node’s function.

Much like bias weights in a neural network, these parameters allow a slightly different computation to be performed at different nodes that use the same primitive function.

decode(genome, *args, **kwargs)

Decode a genome containing both a CGP graph and a list of auxiliary parameters.

>>> primitives=[
...                lambda x, y, z: sum([x, y, z]),
...                lambda x, y, z: (x - y)*z,
...                lambda x, y, z: (x*y)*z
...            ]
>>> decoder = CGPWithParametersDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, num_parameters_per_node=1)
>>> genome = [ [ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ], [ 0.5, 15, 2.7, 0.0 ] ]
>>> executable = decoder.decode(genome)
>>> executable
<leap_ec.executable_rep.cgp.CGPExecutable object at ...>

Now node #2 (i.e. the first computational node, skipping the two inputs #0 and #1) should have a parameter value of 0.5, and so on:

>>> executable.graph.nodes[2]['parameters']
[0.5]
>>> executable.graph.nodes[3]['parameters']
[15]
>>> executable.graph.nodes[4]['parameters']
[2.7]
>>> executable.graph.nodes[5]['parameters']
[0.0]
initialize(parameters_initializer)

Return an initializer for creating the two-segment genomes that this decoder expects as input.

The first segment will be initialized with our standard CGP initializer. The second will use the provided initializer.
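
For example (a sketch; create_real_vector from leap_ec.real_rep.initializers is assumed here as the parameter initializer, with one parameter for each of the four internal nodes):

>>> from leap_ec.real_rep.initializers import create_real_vector
>>> primitives = [ lambda x, y, z: sum([x, y, z]), lambda x, y, z: (x - y)*z ]
>>> decoder = CGPWithParametersDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2, num_parameters_per_node=1)
>>> initialize = decoder.initialize(create_real_vector(bounds=[(-1.0, 1.0)] * 4))
>>> genome = initialize()   # a two-segment genome: [ cgp_segment, parameter_segment ]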

class leap_ec.executable_rep.cgp.FunctionPrimitive(func, f_arity: int)

Bases: Primitive

A convenience wrapper that defines a generic primitive function for CGP from a function (e.g., a lambda). Basically, this lets us define a function whose arity we can also query.

>>> f = FunctionPrimitive(lambda x, y: x ^ y, 2)
>>> f(True, False)
True
>>> f.arity
2
property arity

How many args are used inside the __call__ function

class leap_ec.executable_rep.cgp.NAND

Bases: Primitive

Primitive NAND function for use in genetic programming.

>>> f = NAND()
>>> f(True, True)
False
>>> f(True, False)
True
property arity

How many args are used inside the __call__ function

class leap_ec.executable_rep.cgp.NotX

Bases: Primitive

Primitive NOT function for use in genetic programming.

>>> f = NotX()
>>> f(True)
False
>>> f(False)
True
property arity

How many args are used inside the __call__ function

class leap_ec.executable_rep.cgp.Primitive

Bases: ABC

Abstract class that primitive functions inherit from for CGP.

You don’t need to use this class to define primitives for CGP. But if you do, it allows CGP to know the arity of each function, which CGPDecoder can use to prune unneeded edges in the resulting graph. This sometimes leads to better performance or simpler graphs.

abstract property arity: int

How many args are used inside the __call__ function
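
For example, a hypothetical two-input Boolean AND primitive might be defined like this (a sketch, assuming a subclass only needs to implement __call__ and arity):

>>> class AndPrimitive(Primitive):
...     def __call__(self, x, y):
...         return x and y
...     @property
...     def arity(self):
...         return 2
>>> f = AndPrimitive()
>>> f(True, False)
False
>>> f.arity
2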

leap_ec.executable_rep.cgp.cgp_art_primitives()

Returns a standard set of primitives that Ashmore and Miller originally published in an online report on “Evolutionary Art with Cartesian Genetic Programming” (2004).

leap_ec.executable_rep.cgp.cgp_genome_mutate(cgp_decoder, expected_num_mutations: Optional[float] = None, probability: Optional[float] = None)
leap_ec.executable_rep.cgp.cgp_mutate(cgp_decoder, expected_num_mutations: Optional[float] = None, probability: Optional[float] = None)

A special integer-vector mutation operator that respects the constraints on valid genomes that are implied by the parameters of the given CGPDecoder.

Parameters
  • cgp_decoder – the Decoder, which informs us about the bounds genes should obey

  • expected_num_mutations – on average, how many mutations to perform (specify either this or probability, but not both)

  • probability – the probability of mutating any given gene (specify either this or expected_num_mutations, but not both)
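
For example, the operator might be wired into a pipeline as follows (a minimal sketch, assuming cgp_mutate follows LEAP’s usual pattern of returning an iterator-to-iterator operator):

>>> import numpy as np
>>> from leap_ec import Individual
>>> primitives = [ sum, lambda x: x[0] - x[1], lambda x: x[0] * x[1] ]
>>> decoder = CGPDecoder(primitives, num_inputs=2, num_outputs=2, num_layers=2, nodes_per_layer=2, max_arity=2)
>>> mutate = cgp_mutate(decoder, expected_num_mutations=1)
>>> parents = iter([ Individual(np.array([ 0, 0, 1, 1, 0, 1, 2, 2, 3, 0, 2, 3, 4, 5 ])) ])
>>> child = next(mutate(parents))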

leap_ec.executable_rep.cgp.create_cgp_vector(cgp_decoder)

leap_ec.executable_rep.executable module

This module provides executable object representations. An Executable in LEAP represents problem solutions as functions, agent controllers, etc.

A LEAP Executable is a kind of phenotype, typically constructed when we use a Decoder to convert a genotypic representation of the object into an executable phenotype.

Executables are also just callable functors, so you can use them in your code like any other function.

class leap_ec.executable_rep.executable.ArgmaxExecutable(wrapped_executable)

Bases: Executable

Wraps another Executable with logic that returns the index of the highest output.

For example, we can use this to convert the class selection distribution output by a softmax layer to an integer representing the index of the most likely class:

>>> executable = lambda x: [ x[0] ^ x[1], x[0] & x[1], x[0] + x[1] ]
>>> wrapped = ArgmaxExecutable(executable)
>>> executable([1, 1])
[0, 1, 2]
>>> wrapped([1, 1])
2
class leap_ec.executable_rep.executable.Executable

Bases: ABC

class leap_ec.executable_rep.executable.KeyboardExecutable(input_space, output_space, keymap=<function KeyboardExecutable.<lambda>>)

Bases: Executable

A non-autonomous Executable phenotype that allows users to control an agent via the keyboard.

Parameters
  • input_space – space of possible inputs (ignored)

  • output_space – the space of possible actions to sample from, satisfying the Space interface used by OpenAI Gym

  • keymap – a dict mapping keys to elements of the output space

key_press(key, mod)

You’ll need to assign this function to your environment’s key_press handler.

key_release(key, mod)

You’ll need to assign this function to your environment’s key_release handler.

class leap_ec.executable_rep.executable.RandomExecutable(input_space, output_space)

Bases: Executable

A trivial Executable phenotype that samples a random value from its output space.

Parameters
  • input_space – space of possible inputs (ignored)

  • output_space – the space of possible actions to sample from, satisfying the Space interface used by OpenAI Gym
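
For example (a sketch, assuming the executable ignores its input and returns a sample from the output space when called):

>>> from gymnasium import spaces
>>> import numpy as np
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> executable = RandomExecutable(in_, out_)
>>> action = executable([0.1, 0.5, 0.9])   # the observation is ignored
>>> int(action) in range(4)
True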

class leap_ec.executable_rep.executable.WrapperDecoder(wrapped_decoder, decorator)

Bases: Decoder

A decoder that takes an executable object output by the wrapped Decoder, and then wraps that Executable with an additional decorator function.

For example, if we have a Decoder that produces Executable objects whose output is governed by a softmax layer (i.e. a distribution), we can use this class to decorate them with an ArgmaxExecutable to transform their output into an integer.
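
For instance, we might wrap a SimpleNeuralNetworkDecoder so that its phenotypes output a single action index (a sketch; the network shape and genome here are illustrative):

>>> from leap_ec.executable_rep.neural_network import SimpleNeuralNetworkDecoder
>>> nn_decoder = SimpleNeuralNetworkDecoder([ 4, 3, 2 ])
>>> decoder = WrapperDecoder(nn_decoder, ArgmaxExecutable)
>>> genome = list(range((4 + 1)*3 + (3 + 1)*2))   # 23 weights, including bias weights
>>> decoder.decode(genome)
<leap_ec.executable_rep.executable.ArgmaxExecutable object at ...>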

decode(genome, *args, **kwargs)
Parameters

genome – a genome you wish to convert

Returns

the phenotype associated with that genome

leap_ec.executable_rep.neural_network module

Tools for decoding and executing a neural network from its genetic representation.

class leap_ec.executable_rep.neural_network.GraphPhenotypeProbe(modulo=1, ax=None, weights: bool = False, weight_multiplier: float = 1.0, context={'leap': {'distrib': {'non_viable': 0}, 'generation': 100}})

Bases: object

Visualize the graph for the best individual in the population.

This requires that the phenotypes of the individuals in the population have a graph attribute that provides a networkx graph object.

class leap_ec.executable_rep.neural_network.SimpleNeuralNetworkDecoder(shape: Tuple[int], activation=<function sigmoid>)

Bases: object

Decode a real-vector genome into a neural network by treating it as a sequence of weight matrices.

For example, say we have a linear real-valued genome made up of 29 values:

>>> genome = list(range(0, 29))

We can decode this into a neural network with 4 inputs, two hidden layers (of size 3 and 2), and 2 outputs like so:

>>> from leap_ec.executable_rep import neural_network
>>> dec = neural_network.SimpleNeuralNetworkDecoder([ 4, 3, 2, 2 ])
>>> nn = dec.decode(genome)
Parameters

shape ((int)) – the size of each layer of the network, i.e. (inputs, hidden nodes, outputs). The shape tuple must have at least two elements (inputs + bias weight and outputs): each additional value is treated as a hidden layer. Note also that we expect a bias weight to exist for the inputs of each layer, so the number of weights at each layer will be set to 1 greater than the number of inputs you specify for that layer.
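
Given the bias rule above, the shape (4, 3, 2, 2) from the earlier example therefore requires $(4+1)*3 + (3+1)*2 + (2+1)*2 = 15 + 8 + 6 = 29$ weights, which is why that genome holds 29 values.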

decode(genome, *args, **kwargs)

Decode a genome into a SimpleNeuralNetworkExecutable.

class leap_ec.executable_rep.neural_network.SimpleNeuralNetworkExecutable(weight_matrices, activation)

Bases: Executable

A simple fixed-architecture neural network that can be executed on inputs.

Takes a list of weight matrices and an activation function as arguments. Each weight matrix must have one more row than the previous layer has outputs, to support a bias node that is implicitly connected to each layer.

For example, here we build a network with 10 inputs, two hidden layers (with 5 and 3 nodes, respectively), and 5 output nodes, and random weights:

>>> import numpy as np
>>> from leap_ec.executable_rep import neural_network
>>> n_inputs = 10
>>> n_hidden1, n_hidden2 = 5, 3
>>> n_outputs = 5
>>> weights = [ np.random.uniform(size=(n_inputs + 1, n_hidden1)),
...             np.random.uniform(size=(n_hidden1 + 1, n_hidden2)),
...             np.random.uniform(size=(n_hidden2 + 1, n_outputs)) ]
>>> nn = neural_network.SimpleNeuralNetworkExecutable(weights, neural_network.sigmoid)
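
Once built, the network can be executed like any other Executable (a sketch, assuming __call__ accepts a single input array):

>>> x = np.ones(n_inputs)    # a hypothetical input vector
>>> outputs = nn(x)          # forward pass; one value per output node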
property graph

Create a graph representation of this neural network (e.g., for visualization).

property num_hidden_layers

The number of hidden layers in this network.

property num_inputs

The number of inputs the network receives.

property num_outputs

The number of outputs the network produces.

leap_ec.executable_rep.neural_network.relu(x)

A rectified linear unit (ReLU) activation function. Accepts array-like inputs and uses NumPy for efficient computation.

leap_ec.executable_rep.neural_network.sigmoid(x)

A logistic sigmoid activation function. Accepts array-like inputs, and uses NumPy for efficient computation.

leap_ec.executable_rep.neural_network.softmax(x)

A softmax activation function. Accepts array-like input and normalizes each element relative to the others.
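
For reference, these are the standard definitions that these activation functions are expected to implement: $\mathrm{relu}(x) = \max(0, x)$, $\mathrm{sigmoid}(x) = 1 / (1 + e^{-x})$, and $\mathrm{softmax}(x)_i = e^{x_i} / \sum_j e^{x_j}$.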

leap_ec.executable_rep.problems module

class leap_ec.executable_rep.problems.EnvironmentProblem(runs: int, steps: int, environment, fitness_type: str, gui: bool, stop_on_done=True, maximize=True)

Bases: ScalarProblem

Defines a fitness function over Executables by evaluating them within a given environment.

Parameters
  • runs (int) – The number of independent runs to aggregate data over.

  • steps (int) – The number of steps to run the simulation for within each run.

  • environment – A simulation environment corresponding to the OpenAI Gym environment interface.

  • behavior_fitness – A function

evaluate(phenome)

Run the environmental simulation using executable phenotype as a controller, and use the resulting observations & rewards to compute a fitness value.

property num_inputs

Return the number of dimensions in the environment’s input space.

property num_outputs

Return the number of dimensions in the environment’s action space.

static space_dimensions(observation_space) → int

Helper to get the number of dimensions (variables) in an OpenAI Gym space.

The point of this helper is that it works on simple spaces:

>>> from gymnasium import spaces
>>> discrete = spaces.Discrete(8)
>>> EnvironmentProblem.space_dimensions(discrete)
1

Box spaces:

>>> box = spaces.Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)
>>> EnvironmentProblem.space_dimensions(box)
12

And Tuple spaces:

>>> tup = spaces.Tuple([discrete, box])
>>> EnvironmentProblem.space_dimensions(tup)
13
class leap_ec.executable_rep.problems.ImageXYProblem(path, maximize=False)

Bases: ScalarProblem

A problem that takes a function that generates an image defined over (x, y) coordinates and computes its fitness based on its match to an externally-defined image.

evaluate(phenome)

Evaluate the given phenome.

Practitioners must override this member function.

Note that by default the individual comparison operators assume a maximization problem; if this is a minimization problem, then just negate the value when returning the fitness.

Parameters

phenome – the phenome to evaluate (this will not be modified)

Returns

the fitness value

static generate_image(executable, width, height)
class leap_ec.executable_rep.problems.TruthTableProblem(boolean_function, num_inputs, num_outputs, name: Optional[str] = None, pad_inputs=False, maximize=True)

Bases: ScalarProblem

Defines a fitness function over an Executable by evaluating it against each row of a given Boolean function’s truth table.

Both the executable we receive and the boolean_function we compare against should return a list of 1 or more outputs.

evaluate(phenome)

Say our objective function is $(x_0 \wedge x_1) \vee x_2$:

>>> problem = TruthTableProblem(lambda x: [ (x[0] and x[1]) or x[2] ], num_inputs=3, num_outputs=1)

The truth table for this Boolean function has eight entries:

x_0 x_1 x_2 | output
 F   F   F  |   F
 F   F   T  |   T
 F   T   F  |   F
 F   T   T  |   T
 T   F   F  |   F
 T   F   T  |   T
 T   T   F  |   T
 T   T   T  |   T

Now consider a different function, $(x_0 \wedge x_1) \oplus x_2$.

>>> executable = lambda x: [ (x[0] and x[1]) ^ x[2] ]

This function’s truth table differs from the first one by exactly one entry (in the second one, TTT=F). So we expect a fitness value of $7/8 = 0.875$:

>>> from leap_ec import Individual
>>> problem.evaluate(executable)
0.875

Note that our lambda functions above return a list containing the computed value, rather than the value directly. This is because the framework allows us to work with functions that have more than one output:

>>> problem = TruthTableProblem(lambda x: [ x[0] and x[1], x[0] or x[1] ], num_inputs=3, num_outputs=2)
>>> problem.evaluate(lambda x: [ x[0] and x[1], x[0] or x[1] ])
1.0

leap_ec.executable_rep.rules module

Pitt-approach rule systems are one of the two basic approaches to evolving rule-based programs (alongside Michigan-approach systems). In Pitt systems, every individual encodes a complete set of rules for producing an output given a set of inputs.

Evolutionary rule systems (also known as learning classifier systems) are often used to create controllers for agents (e.g. for reinforcement learning problems), or to evolve classifiers for pattern recognition (e.g. supervised learning).

This module provides a basic Pitt-approach system that uses the spaces API from OpenAI Gym to define input and output spaces for rule conditions and actions, respectively.

class leap_ec.executable_rep.rules.PittRulesDecoder(input_space, output_space, memory_space=None, priority_metric=None)

Bases: Decoder

A Decoder that constructs a Pitt-approach rule system phenotype (PittRulesExecutable) out of a real-valued genome.

We use the OpenAI Gym spaces API to define the types and dimensionality of the rule system’s inputs and outputs.

Parameters
  • input_space – an OpenAI-gym-style space defining the inputs

  • output_space – an OpenAI-gym-style space defining the outputs

  • priority_metric – a PittRulesExecutable.PriorityMetric enum value defining how matching rules are deconflicted within the controller

  • num_memory_registers – the number of stateful memory registers that each rule considers as additional inputs

If, for example, we want to evolve controllers for a robot that has 3 real-valued sensor inputs and 4 mutually exclusive actions to choose from, we might use a Box and Discrete space, respectively, from gym.spaces:

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)
property action_bounds

The bounds of permitted values on action genes within each rule.

For example, the following decoder

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.5, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

allows just one action gene in each rule, taking integer values from 0 to 3 (one for each of the four discrete actions).

Bounds are inclusive, so they look like this:

>>> decoder.action_bounds
[(0, 3)]
bounds(num_rules)

Return the (low, high) bounds that it makes sense for each gene to vary within.

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)
>>> decoder.bounds(num_rules=4)
[[(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0, 3)], [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0, 3)], [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0, 3)], [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0, 3)]]
property condition_bounds

The bounds of permitted values on condition genes within each rule.

For example, the following decoder

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.5, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

produces bounds that restrict the low and high value of each condition’s range between 0 and 1.5:

>>> decoder.condition_bounds
[(0.0, 1.5), (0.0, 1.5), (0.0, 1.5), (0.0, 1.5), (0.0, 1.5), (0.0, 1.5)]
decode(genome, *args, **kwargs)

Decodes a real-valued genome into a PittRulesExecutable.

For example, say we have a Decoder that takes continuous inputs from a 2-D box and selects between two discrete actions:

>>> import numpy as np
>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=np.array((0, 0)), high=np.array((1.0, 1.0)), dtype=np.float32)
>>> out_ = spaces.Discrete(2)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

Now we can take genomes that represent each rule as a segment of the form [low, high, low, high, action] and convert them into executable controllers:

>>> genome = [ [ 0.0,0.6, 0.0,0.4, 0],
...            [ 0.4,1.0, 0.6,1.0, 1] ]
>>> decoder.decode(genome)
<leap_ec.executable_rep.rules.PittRulesExecutable object at ...>
genome_to_rules(genome)

Convert a genome into a list of Rules.

Usage example:

>>> import numpy as np
>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=np.array((0, 0)), high=np.array((1.0, 1.0)), dtype=np.float32)
>>> out_ = spaces.Discrete(2)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

Now we can take genomes that represent each rule as a segment of the form [low, high, low, high, action] and convert them into Rule objects:

>>> genome = [ [ 0.0,0.6, 0.0,0.4, 0],
...            [ 0.4,1.0, 0.6,1.0, 1] ]
>>> decoder.genome_to_rules(genome)
[Rule(conditions=[(0.0, 0.6), (0.0, 0.4)], actions=[0]), Rule(conditions=[(0.4, 1.0), (0.6, 1.0)], actions=[1])]
initializer(num_rules: int)

Returns an initializer function that can generate genomes according to the segmented scheme that we use for rule sets—i.e. with the appropriate number of segments, inputs, outputs, and hidden registers.

For instance, if we have the following decoder:

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

Then we can get an initializer like so that creates genomes compatible with the decoder when called:

>>> initialize = decoder.initializer(num_rules=4)
>>> initialize()
[array(...), array(...), array(...), array(...)]

Notice that it creates four top-level segments (one for each rule), and that the condition bounds for each input within a rule are wrapped in tuple sub-segments.

mutator(condition_mutator, action_mutator)

Returns a mutation operator that properly handles the segmented genome representation used for rule sets.

This wraps two different mutation operators you provide, so that mutation can be configured differently for rule conditions and rule actions, respectively.

Parameters
  • condition_mutator – a mutation operator to use for the condition genes in each rule.

  • action_mutator – a mutation operator to use for the action genes in each rule.

For example, often we’ll apply a rule system to a real-valued observation space and an integer-valued action space.

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

These two spaces call for different mutation strategies:

>>> from leap_ec.real_rep.ops import genome_mutate_gaussian
>>> from leap_ec.int_rep.ops import individual_mutate_randint
>>> mutator = decoder.mutator(
...                     condition_mutator=genome_mutate_gaussian,
...                     action_mutator=individual_mutate_randint
... )
property num_genes_per_rule

This property reports the total number of genes that specify each rule.

For example, the following decoder

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 3), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

takes rule genomes that have 7 values in each segment: 6 to specify the condition ranges ((low, high) for each of 3 inputs), and 1 to specify the output action.

>>> decoder.num_genes_per_rule
7
property num_inputs

This property reports the number of dimensions in the system’s input space.

For example, the following decoder

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 12), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

has a 12-dimensional input space:

>>> decoder.num_inputs
12
property num_memory_registers
property num_outputs

This property reports the number of dimensions in the system’s output space.

For example, the following decoder

>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=0, high=1.0, shape=(1, 12), dtype=np.float32)
>>> out_ = spaces.Discrete(4)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

has a 1-dimensional output space:

>>> decoder.num_outputs
1
class leap_ec.executable_rep.rules.PittRulesExecutable(input_space, output_space, rules, priority_metric, init_mem=[])

Bases: Executable

An Executable phenotype that interprets a Pittsburgh-style ruleset and outputs the appropriate action.

Parameters
  • input_space – an OpenAI-gym-style space defining the inputs

  • output_space – an OpenAI-gym-style space defining the outputs

  • init_memory – a list of initial values for the memory registers

  • rules – a list of Rule objects

  • priority_metric – the rule prioritization strategy used to resolve conflicts

Rulesets are lists of rules. Rules are lists of the form [ c1 c1' c2 c2' … cn cn' a1 … am m1 … mr ], where (cx, cx') are the min and max bounds that the rule covers, a1 … am are the output actions, and m1 … mr are values to write to the memory registers.

For example, this ruleset has two rules. The first rule covers the box bounded by (0.0, 0.6) and (0.0, 0.4), returning the output action 0 if the input falls within that range:

>>> rules = [ Rule(conditions=[(0.0, 0.6), (0.0, 0.4)], actions=[0]),
...           Rule(conditions=[(0.4, 1.0), (0.6, 1.0)], actions=[1])
...         ]

The input and output spaces are defined in the style of OpenAI gym. For example, here’s how you would set up a PittRulesExecutable with the above ruleset that takes two continuous input variables on (0.0, 1.0), and outputs discrete values in {0, 1}:

>>> import numpy as np
>>> from gymnasium import spaces
>>> input_space = spaces.Box(low=np.array((0, 0)), high=np.array((1.0, 1.0)), dtype=np.float32)
>>> output_space = spaces.Discrete(2)
>>> executable = PittRulesExecutable(input_space, output_space, rules,
...                                  priority_metric=PittRulesExecutable.PriorityMetric.RULE_ORDER)
class PriorityMetric(value)

Bases: Enum

An enumeration.

GENERALITY = 2
PERIMETER = 3
RULE_ORDER = 1
class leap_ec.executable_rep.rules.PlotPittRuleProbe(decoder, plot_dimensions: (int, int) = (0, 1), ax=None, xlim=(0, 1), ylim=(0, 1), modulo=1, context={'leap': {'distrib': {'non_viable': 0}, 'generation': 100}})

Bases: object

A visualization operator that takes the best individual in the population and plots the condition bounds for each rule, i.e. as boxes over the input space.

Parameters
  • num_inputs (int) – the number of inputs in the sensor space

  • num_outputs (int) – the number of output actions

  • plot_dimensions ((int, int)) – which two dimensions of the input space to visualize along the x and y axes; defaults to the first two dimensions, (0, 1)

  • ax – the matplotlib axis to plot to; if None (the default), new Axes are created

  • xlim ((float, float)) – bounds for the horizontal axis

  • ylim ((float, float)) – bounds for the vertical axis

  • modulo (int) – the interval (in generations) to go between each visualization; i.e. if set to 10, then the visualization will be updated every 10 generations

  • context – the context object that the generation count is read from (should be updated by the algorithm at each generation)

This probe requires a decoder, which it uses to parse individual genomes into sets of rules that it can visualize:

>>> import numpy as np
>>> from gymnasium import spaces
>>> in_ = spaces.Box(low=np.array((0, 0)), high=np.array((1.0, 1.0)), dtype=np.float32)
>>> out_ = spaces.Discrete(2)
>>> decoder = PittRulesDecoder(input_space=in_, output_space=out_)

Now we can create the probe itself:

>>> probe = PlotPittRuleProbe(decoder)

If we feed it a population containing a single individual, we’ll see all of that individual’s rules visualized. Like all LEAP probes, it returns the population unmodified. This allows the probe to be inserted into an EA’s operator pipeline.

>>> from leap_ec.individual import Individual
>>> ruleset = np.array([[0.0, 0.6, 0.0, 0.5, 0],
...                     [0.4, 1.0, 0.3, 1.0, 1],
...                     [0.1, 0.2, 0.1, 0.2, 0],
...                     [0.5, 0.6, 0.8, 1.0, 1]])
>>> pop = [Individual(genome=ruleset)]
>>> probe(pop)
[Individual<...>(...)]

class leap_ec.executable_rep.rules.Rule(conditions, actions)

Bases: tuple

property actions

Alias for field number 1

property conditions

Alias for field number 0
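
Since Rule is a namedtuple, rules can be constructed and queried directly:

>>> r = Rule(conditions=[(0.0, 0.6), (0.0, 0.4)], actions=[0])
>>> r.conditions
[(0.0, 0.6), (0.0, 0.4)]
>>> r.actions
[0]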

Module contents