scml.oneshot.rl

Submodules

Package Contents

Classes

ActionManager

Manages the actions of an agent in an RL environment.

FlexibleActionManager

An action manager that matches any context.

OneShotRLAgent

A oneshot agent that can execute trained RL models in appropriate worlds. It falls back to the given agent type otherwise

OneShotEnv

The main Gymnasium class for implementing Reinforcement Learning Agents environments.

ObservationManager

Manages the observations of an agent in an RL environment

FlexibleObservationManager

An observation manager that can be used with any SCML world.

RewardFunction

Represents a reward function.

DefaultRewardFunction

The default reward function of SCML

Functions

model_wrapper(→ RLModel)

Wraps a stable_baselines3 model as an RL model

random_action(→ numpy.ndarray)

Samples a random action from the action space of the environment.

random_policy(→ numpy.ndarray)

Ends the negotiation or accepts with a predefined probability or samples a random response.

greedy_policy(→ numpy.ndarray)

A simple greedy policy.

Attributes

DefaultActionManager

The default action manager

RLState

We assume that RL states are numpy arrays

RLAction

We assume that RL actions are numpy arrays

RLModel

A policy is a callable that receives a state and returns an action

DefaultObservationManager

The default observation manager

__all__

class scml.oneshot.rl.ActionManager[source]

Bases: abc.ABC

Manages the actions of an agent in an RL environment.

context: scml.oneshot.context.BaseContext
continuous: bool = False
n_suppliers: int
n_consumers: int
n_partners: int
abstract make_space() gymnasium.Space[source]

Creates the action space

abstract decode(awi: scml.oneshot.awi.OneShotAWI, action: numpy.ndarray) dict[str, negmas.sao.common.SAOResponse][source]

Decodes an action from an array into a mapping from partner ID to the SAOResponse to send to that partner.

encode(awi: scml.oneshot.awi.OneShotAWI, responses: dict[str, negmas.sao.common.SAOResponse]) numpy.ndarray[source]

Encodes an action as an array. This is only used for testing, so it is optional.

class scml.oneshot.rl.FlexibleActionManager[source]

Bases: ActionManager

An action manager that matches any context.

Parameters:
  • n_prices – Number of distinct prices allowed in the action.

  • max_quantity – Maximum allowed quantity to offer in any negotiation. The number of possible quantities is one more than this because zero is allowed (to model ending the negotiation).

  • n_partners – Maximum number of partners allowed in the action.

Remarks:
  • This action manager will always generate offers that are within the price and quantity limits given in its parameters. When decoding them, it will scale them up so that the maximum corresponds to the actual maximum in the world in which it finds itself. For example, if n_prices is 10 and the current price issue has only two values, it will use any value less than 5 as the minimum price and any value above 5 as the maximum price. If, on the other hand, the current price issue has 20 values, it will scale the number given in the encoded action (ranging from 0 to 9) by 19/9, making it range from 0 to 19, which is what is expected by the world (see the scaling sketch after this list).

  • This action manager will adjust offers for different numbers of partners as follows:

    • If the true number of partners is larger than the n_partners used by this action manager, it will simply use n_partners of them and always end negotiations with the rest.

    • If the true number of partners is smaller than n_partners, it will use the first n_partners values in the encoded action and increase the quantities of any counter offers (i.e. ones in which the response is REJECT_OFFER) by the amount missing from the ignored partners in the encoded action, up to the maximum quantity allowed by the current negotiation context. For example, if n_partners is 4 and we have only 2 partners in reality, the received quantities from the partners were [4, 3], the maximum allowed quantity is 10, and the encoded action was [2, *, 3, *, 2, *, 1, *] (ignoring prices), then the encoded action will be converted to [(Reject, 5, *), (Accept, 3, *)]: the 3 extra units that were supposed to be offered to the last two partners are moved to the first partner. If the maximum allowed quantity were 4 in that example, the result would be [(Reject, 4, *), (Accept, 3, *)].
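The price-scaling remark above can be illustrated with a small sketch. This mirrors the description only and is not the library's implementation; scale_price_index is a hypothetical helper:

    # Illustration of the price-scaling remark (not the library implementation).
    # An encoded price index in [0, n_prices - 1] is mapped onto the world's actual
    # price issue by linear scaling to that issue's number of values.
    def scale_price_index(encoded: int, n_prices: int, n_world_prices: int) -> int:
        return round(encoded * (n_world_prices - 1) / (n_prices - 1))

    assert scale_price_index(9, 10, 20) == 19  # 0..9 stretched to 0..19
    assert scale_price_index(0, 10, 20) == 0
    assert scale_price_index(4, 10, 2) == 0    # below the midpoint -> minimum price
    assert scale_price_index(5, 10, 2) == 1    # at/above the midpoint -> maximum price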

capacity_multiplier: int = 1
n_prices: int = 2
max_group_size: int = 2
reduce_space_size: bool = True
extra_checks: bool = False
max_quantity: int
__attrs_post_init__()[source]
make_space() gymnasium.spaces.MultiDiscrete | gymnasium.spaces.Box[source]

Creates the action space

decode(awi: scml.oneshot.awi.OneShotAWI, action: numpy.ndarray) dict[str, negmas.sao.common.SAOResponse][source]

Generates offers to all partners from an encoded action. The default is to return the action as-is, assuming it is a dict[str, SAOResponse].

encode(awi: scml.oneshot.awi.OneShotAWI, responses: dict[str, negmas.sao.common.SAOResponse]) numpy.ndarray[source]

Receives offers for all partners and generates the corresponding action. Used mostly for debugging and testing.
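As a usage sketch, an action manager can be built from a context and asked for its Gymnasium space. The construction mirrors the default used by greedy_policy further down; ANACOneShotContext is assumed to live in scml.oneshot.context:

    from scml.oneshot.context import ANACOneShotContext
    from scml.oneshot.rl import FlexibleActionManager

    # Build an action manager for worlds generated by the given context and
    # inspect the Gymnasium space it produces.
    manager = FlexibleActionManager(ANACOneShotContext())
    space = manager.make_space()  # MultiDiscrete by default, Box if continuous=True
    print(space)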

scml.oneshot.rl.DefaultActionManager[source]

The default action manager

class scml.oneshot.rl.OneShotRLAgent(*args, models: list[scml.oneshot.rl.common.RLModel] | tuple[scml.oneshot.rl.common.RLModel, Ellipsis] = tuple(), observation_managers: list[scml.oneshot.rl.observation.ObservationManager] | tuple[scml.oneshot.rl.observation.ObservationManager, Ellipsis] = tuple(), action_managers: list[scml.oneshot.rl.action.ActionManager] | tuple[scml.oneshot.rl.action.ActionManager, Ellipsis] | None = None, fallback_type: type[scml.oneshot.agent.OneShotAgent] | None = GreedyOneShotAgent, fallback_params: dict[str, Any] | None = None, dynamic_context_switching: bool = False, randomize_test_order: bool = False, **kwargs)[source]

Bases: scml.oneshot.policy.OneShotPolicy

A oneshot agent that can execute trained RL models in appropriate worlds. It falls back to the given agent type otherwise

Parameters:
  • models – List of models to choose from.

  • observation_managers – List of observation managers. Must be the same length as models

  • action_managers – List of action managers of the same length as models or None to use the default action manager.

  • fallback_type – A OneShotAgent type to use as a fall-back if the current world is not compatible with any observation/action managers

  • fallback_params – Parameters of the fallback_type

  • dynamic_context_switching – If True, the world is tested each step (instead of only at init) to find the appropriate model

  • randomize_test_order – If True, the order at which the observation/action managers are checked for compatibility with the current world is randomized.

  • **kwargs – Any other OneShotPolicy parameters
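As a deployment sketch (the file name model.zip is hypothetical, FixedPartnerNumbersOneShotContext is assumed to live in scml.oneshot.context, and the managers are assumed to be the ones used during training):

    from stable_baselines3 import PPO
    from scml.oneshot.context import FixedPartnerNumbersOneShotContext
    from scml.oneshot.rl import (
        FlexibleActionManager,
        FlexibleObservationManager,
        OneShotRLAgent,
        model_wrapper,
    )

    # Wrap a trained model and the managers it was trained with so the agent can
    # recognise compatible worlds and fall back to GreedyOneShotAgent otherwise.
    context = FixedPartnerNumbersOneShotContext()
    agent_type = OneShotRLAgent
    agent_params = dict(
        models=[model_wrapper(PPO.load("model.zip"))],
        observation_managers=[FlexibleObservationManager(context)],
        action_managers=[FlexibleActionManager(context)],
    )
    # agent_type and agent_params can then be passed wherever a world or tournament
    # expects an agent specification.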

setup_fallback()[source]
has_no_valid_model()[source]
context_switch()[source]
init()[source]

Called once after the AWI is set.

Remarks:
  • Use this for any proactive initialization code.

encode_state(mechanism_states: dict[str, negmas.sao.common.SAOState]) scml.oneshot.rl.common.RLState[source]

Called to generate a state to be passed to the act() method. The default is the agent's whole AWI encoded as a OneShotState.

decode_action(action: scml.oneshot.rl.common.RLAction) dict[str, negmas.sao.common.SAOResponse][source]

Generates offers to all partners from an encoded action. The default is to return the action as-is, assuming it is a dict[str, SAOResponse].

act(state: scml.oneshot.rl.common.RLState) scml.oneshot.rl.common.RLAction[source]

The main policy. Generates an action given a state

propose(*args, **kwargs) negmas.outcomes.Outcome | None[source]

Called when the agent is asked to propose in one negotiation

respond(*args, **kwargs) negmas.gb.common.ResponseType[source]

Called when the agent is asked to respond to an offer

before_step()[source]

Called at the BEGINNING of every production step (day)

step()[source]

Called at the END of every production step (day)

on_negotiation_failure(*args, **kwargs) None[source]

Called when a negotiation the agent is a party of ends without agreement

on_negotiation_success(*args, **kwargs) None[source]

Called when a negotiation the agent is a party of ends with agreement

scml.oneshot.rl.RLState[source]

We assume that RL states are numpy arrays

scml.oneshot.rl.RLAction[source]

We assume that RL actions are numpy arrays

scml.oneshot.rl.RLModel[source]

A policy is a callable that receives a state and returns an action

scml.oneshot.rl.model_wrapper(model, deterministic: bool = False) RLModel[source]

Wraps a stable_baselines3 model as an RL model
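For example, a stable_baselines3 model saved to disk can be wrapped and then used as an ordinary policy callable (the file name model.zip is hypothetical):

    from stable_baselines3 import PPO
    from scml.oneshot.rl import model_wrapper

    # The wrapped policy is a plain callable mapping an observation array to an
    # action array, which is what OneShotRLAgent expects in its models list.
    policy = model_wrapper(PPO.load("model.zip"), deterministic=True)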

class scml.oneshot.rl.OneShotEnv(action_manager: scml.oneshot.rl.action.ActionManager, observation_manager: scml.oneshot.rl.observation.ObservationManager, reward_function: scml.oneshot.rl.reward.RewardFunction = DefaultRewardFunction(), context: scml.oneshot.context.BaseContext = FixedPartnerNumbersOneShotContext(), agent_type: type[scml.oneshot.agent.OneShotAgent] = Placeholder, agent_params: dict[str, Any] | None = None, extra_checks: bool = True, skip_after_negotiations: bool = True, render_mode=None, debug=False)[source]

Bases: gymnasium.Env

The main Gymnasium class for implementing Reinforcement Learning Agents environments.

The class encapsulates an environment with arbitrary behind-the-scenes dynamics through the step() and reset() functions. An environment can be partially or fully observed by single agents. For multi-agent environments, see PettingZoo.

The main API methods that users of this class need to know are:

  • step() - Updates the environment with an action, returning the next agent observation, the reward for taking that action, whether the environment has terminated or truncated due to the latest action, and information from the environment about the step, i.e. metrics, debug info.

  • reset() - Resets the environment to an initial state, required before calling step. Returns the first agent observation for an episode and information, i.e. metrics, debug info.

  • render() - Renders the environment to help visualise what the agent sees; example modes are “human”, “rgb_array”, and “ansi” for text.

  • close() - Closes the environment, important when external software is used, e.g. pygame for rendering or databases

Environments have additional attributes for users to understand the implementation

  • action_space - The Space object corresponding to valid actions, all valid actions should be contained within the space.

  • observation_space - The Space object corresponding to valid observations, all valid observations should be contained within the space.

  • reward_range - A tuple corresponding to the minimum and maximum possible rewards for an agent over an episode. The default reward range is set to \((-\infty,+\infty)\).

  • spec - An environment spec that contains the information used to initialize the environment from gymnasium.make()

  • metadata - The metadata of the environment, i.e. render modes, render fps

  • np_random - The random number generator for the environment. This is automatically assigned during super().reset(seed=seed) and when accessing self.np_random.

See also

For modifying or extending environments use the gymnasium.Wrapper class

Note

To get reproducible sampling of actions, a seed can be set with env.action_space.seed(123).
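A minimal usage sketch follows. It assumes both flexible managers accept the context as their first constructor argument (as FlexibleActionManager does) and that FixedPartnerNumbersOneShotContext lives in scml.oneshot.context:

    from scml.oneshot.context import FixedPartnerNumbersOneShotContext
    from scml.oneshot.rl import (
        FlexibleActionManager,
        FlexibleObservationManager,
        OneShotEnv,
        random_action,
    )

    # Build an environment whose managers match its context, then run a short
    # random rollout.
    context = FixedPartnerNumbersOneShotContext()
    env = OneShotEnv(
        action_manager=FlexibleActionManager(context),
        observation_manager=FlexibleObservationManager(context),
        context=context,
    )
    obs, info = env.reset(seed=42)
    for _ in range(10):
        obs, reward, terminated, truncated, info = env.step(random_action(obs, env))
        if terminated or truncated:
            obs, info = env.reset()

Because OneShotEnv is a standard gymnasium.Env, such an environment can also be passed directly to RL trainers such as stable_baselines3 (e.g. PPO("MlpPolicy", env).learn(...)).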

_get_obs()[source]
calc_info()[source]

Calculates info to be returned from step().

_render_frame()[source]

Used for rendering. Override with your rendering code

close()[source]

After the user has finished using the environment, close contains the code necessary to “clean up” the environment.

This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won’t raise an error.

render()[source]

Compute the render frames as specified by render_mode during the initialization of the environment.

The environment’s metadata render modes (env.metadata["render_modes"]) should contain the possible ways to implement the render modes. In addition, list versions for most render modes are achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.

Note

As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.

By convention, if the render_mode is:

  • None (default): no render is computed.

  • “human”: The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn’t need to be called. Returns None.

  • “rgb_array”: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.

  • “ansi”: Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).

  • “rgb_array_list” and “ansi_list”: List-based versions of render modes are possible (except “human”) through the wrapper gymnasium.wrappers.RenderCollection, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The frames collected are popped after render() or reset() is called.

Note

Make sure that your class’s metadata "render_modes" key includes the list of supported modes.

Changed in version 0.25.0: The render function was changed to no longer accept parameters; these parameters should instead be specified when the environment is initialised, i.e., gymnasium.make("CartPole-v1", render_mode="human")

reset(*, seed: int | None = None, options: dict[str, Any] | None = None) tuple[Any, dict[str, Any]][source]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For Custom environments, the first line of reset() should be super().reset(seed=seed) which implements the seeding correctly.

Changed in version v0.25: The return_info parameter was removed and now info is expected to be returned.

Parameters:
  • seed (optional int) – The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.

  • options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:

  • observation (ObsType) – Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

  • info (dict) – This dictionary contains auxiliary information complementing observation. It is analogous to the info returned by step().

step(action)[source]

Run one timestep of the environment’s dynamics using the agent actions.

When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.

Changed in version 0.26: The Step API was changed removing done in favor of terminated and truncated to make it clearer to users when the environment had terminated or truncated which is critical for reinforcement learning bootstrapping algorithms.

Parameters:

action (ActType) – an action provided by the agent to update the environment state.

Returns:

An element of the environment’s observation_space as the next observation due to the agent actions.

An example is a numpy array containing the positions and velocities of the pole in CartPole.

reward (SupportsFloat): The reward as a result of taking the action. terminated (bool): Whether the agent reaches the terminal state (as defined under the MDP of the task)

which can be positive or negative. An example is reaching the goal state or moving into the lava from the Sutton and Barton, Gridworld. If true, the user needs to call reset().

truncated (bool): Whether the truncation condition outside the scope of the MDP is satisfied.

Typically, this is a timelimit, but could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().

info (dict): Contains auxiliary diagnostic information (helpful for debugging, learning, and logging).

This might, for instance, contain: metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym <v26, it contains “TimeLimit.truncated” to distinguish truncation and termination, however this is deprecated in favour of returning terminated and truncated variables.

done (bool): (Deprecated) A boolean value for if the episode has ended, in which case further step() calls will

return undefined results. This was removed in OpenAI Gym v26 in favor of terminated and truncated attributes. A done signal may be emitted for different reasons: Maybe the task underlying the environment was solved successfully, a certain timelimit was exceeded, or the physics simulation has entered an invalid state.

Return type:

observation (ObsType)

class scml.oneshot.rl.ObservationManager[source]

Bases: Protocol

Manages the observations of an agent in an RL environment

property context: scml.oneshot.context.BaseContext
make_space() gymnasium.spaces.Space[source]

Creates the observation space

encode(awi: scml.oneshot.awi.OneShotAWI) numpy.ndarray[source]

Encodes an observation from the agent’s awi

make_first_observation(awi: scml.oneshot.awi.OneShotAWI) numpy.ndarray[source]

Creates the initial observation (returned from gym’s reset())

get_offers(awi: scml.oneshot.awi.OneShotAWI, encoded: numpy.ndarray) dict[str, negmas.outcomes.Outcome | None][source]

Gets the offers from an encoded awi

class scml.oneshot.rl.FlexibleObservationManager[source]

Bases: BaseObservationManager

An observation manager that can be used with any SCML world.

Parameters:
  • capacity_multiplier – A factor to multiply by the number of lines to give the maximum quantity allowed in offers

  • exogenous_multiplier – A factor to multiply maximum production capacity with when encoding exogenous quantities

  • continuous – If True, the observation space will be a Box; otherwise it will be a MultiDiscrete

  • n_prices – The number of prices to use for encoding the unit price (if not continuous)

  • max_production_cost – The limit for production cost. Anything above that will be mapped to this max

  • max_group_size – Maximum size used for grouping observations from multiple partners. This will be used if the number of partners in the simulation is larger than the number used for training.

  • n_past_received_offers – Number of past received offers to add to the observation.

  • n_bins – Number of bins to use for discretization (if not continuous)

  • n_sigmas – The number of sigmas used for limiting the range of randomly distributed variables

  • extra_checks – If given, extra checks are applied to make sure encoding and decoding make sense

capacity_multiplier: int = 1
n_prices: int = 2
max_group_size: int = 2
reduce_space_size: bool = True
n_past_received_offers: int = 1
extra_checks: bool = False
n_bins: int = 40
n_sigmas: int = 2
max_production_cost: int = 10
exogenous_multiplier: int = 1
max_quantity: int
_chosen_partner_indices: list[int] | None
_previous_offers: collections.deque
_dims: list[int] | None
__attrs_post_init__()[source]
get_dims() list[int][source]

Get the sizes of all dimensions in the observation space. Used if not continuous.

make_space() gymnasium.spaces.MultiDiscrete | gymnasium.spaces.Box[source]

Creates the observation space

make_first_observation(awi: scml.oneshot.awi.OneShotAWI) numpy.ndarray[source]

Creates the initial observation (returned from gym’s reset())

encode(awi: scml.oneshot.awi.OneShotAWI) numpy.ndarray[source]

Encodes the awi as an array

extra_obs(awi: scml.oneshot.awi.OneShotAWI) list[tuple[float, int] | float][source]

The observation values other than offers and previous offers.

Returns:

A list of tuples. Each gives an observation variable as a real number between zero and one together with the number of bins to use for discretizing that variable. If a single value is given, the number of bins will be self.n_bins

get_offers(awi: scml.oneshot.awi.OneShotAWI, encoded: numpy.ndarray) dict[str, negmas.outcomes.Outcome | None][source]

Gets offers from an encoded awi.
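A small sketch of building the manager and inspecting its observation space. It assumes that, like FlexibleActionManager, the context is the first constructor argument and that FixedPartnerNumbersOneShotContext lives in scml.oneshot.context:

    from scml.oneshot.context import FixedPartnerNumbersOneShotContext
    from scml.oneshot.rl import FlexibleObservationManager

    # Inspect the observation space used for training in worlds generated by the
    # given context.
    obs_manager = FlexibleObservationManager(FixedPartnerNumbersOneShotContext())
    space = obs_manager.make_space()   # MultiDiscrete by default, Box if continuous
    print(space)
    print(obs_manager.get_dims())      # per-dimension sizes (discrete case)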

scml.oneshot.rl.DefaultObservationManager[source]

The default observation manager

scml.oneshot.rl.random_action(obs: numpy.ndarray, env: scml.oneshot.rl.env.OneShotEnv) numpy.ndarray[source]

Samples a random action from the action space of the environment.

scml.oneshot.rl.random_policy(obs: numpy.ndarray, env: scml.oneshot.rl.env.OneShotEnv, pend: float = 0.05, paccept: float = 0.15) numpy.ndarray[source]

Ends the negotiation or accepts with a predefined probability or samples a random response.
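Continuing from the OneShotEnv sketch above (reusing its env and obs), a single step using random_policy instead of random_action might look like:

    # End negotiations with probability 0.1 and accept offers with probability 0.2;
    # otherwise a random response is sampled.
    action = random_policy(obs, env, pend=0.1, paccept=0.2)
    obs, reward, terminated, truncated, info = env.step(action)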

scml.oneshot.rl.greedy_policy(obs: numpy.ndarray, awi: scml.oneshot.awi.OneShotAWI, obs_manager: scml.oneshot.rl.observation.ObservationManager, action_manager: scml.oneshot.rl.action.ActionManager = FlexibleActionManager(ANACOneShotContext()), debug=False, distributor: Callable[[int, int], list[int]] = all_but_concentrated) numpy.ndarray[source]

A simple greedy policy.

Parameters:
  • obs – The current observation

  • awi – The AWI of the agent running the policy

  • obs_manager – The observation manager used to encode the observation

  • action_manager – The action manager to be used to encode the action

  • debug – If True, extra assertions are tested

  • distributor – A callable that receives a total quantity to be distributed over n partners and returns a list of n values that sum to this total quantity

Remarks:
  • Accepts the subset of offers with maximum total quantity under current needs.

  • The remaining quantity is distributed over the remaining partners using the distributor function

  • Prices are set to the worst value for the agent if the price range is small; otherwise they are set randomly
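A sketch of using greedy_policy inside a hand-written OneShotPolicy as a non-RL baseline. It assumes ANACOneShotContext lives in scml.oneshot.context, that the agent runs in a world compatible with that context, and that the default decode_action of OneShotPolicy passes a dict of SAOResponses through unchanged (as documented for OneShotRLAgent above):

    from scml.oneshot.context import ANACOneShotContext
    from scml.oneshot.policy import OneShotPolicy
    from scml.oneshot.rl import (
        FlexibleActionManager,
        FlexibleObservationManager,
        greedy_policy,
    )

    class GreedyBaseline(OneShotPolicy):
        """A hand-written baseline that drives negotiations with greedy_policy."""

        def init(self):
            super().init()
            context = ANACOneShotContext()
            self._obs_manager = FlexibleObservationManager(context)
            self._action_manager = FlexibleActionManager(context)

        def act(self, state):
            # Encode the AWI, run the greedy policy, then decode the resulting
            # array back into per-partner SAOResponse objects.
            obs = self._obs_manager.encode(self.awi)
            encoded = greedy_policy(obs, self.awi, self._obs_manager, self._action_manager)
            return self._action_manager.decode(self.awi, encoded)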

class scml.oneshot.rl.RewardFunction[source]

Bases: Protocol

Represents a reward function.

Remarks:
  • before_action is called before the action is executed for initialization and should return any info to be passed to __call__

  • __call__ is called with the awi (to get the state), action and info and should return the reward
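A sketch of a custom reward that follows this protocol, rewarding the (scaled) change in balance over a step. It assumes OneShotAWI exposes current_balance; adjust to whatever state you actually want to track:

    from negmas import SAOResponse
    from scml.oneshot.awi import OneShotAWI

    class ScaledBalanceReward:
        """Reward = scaled change in balance over one step (sketch)."""

        def __init__(self, scale: float = 1e-3):
            self.scale = scale

        def before_action(self, awi: OneShotAWI) -> float:
            # Remember the balance before the agent's action is executed.
            return awi.current_balance  # assumed attribute

        def __call__(self, awi: OneShotAWI, action: dict[str, SAOResponse], info: float) -> float:
            # The reward is the (scaled) balance gained during the step.
            return self.scale * (awi.current_balance - info)

Such a reward function can be passed to OneShotEnv through its reward_function parameter.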

before_action(awi: scml.oneshot.awi.OneShotAWI) Any[source]

Called before the RL agent's action is executed; its return value should contain any information required for calculating the reward.

Remarks:

The returned value will be passed as info to __call__() when it is time to calculate the reward.

__call__(awi: scml.oneshot.awi.OneShotAWI, action: dict[str, negmas.SAOResponse], info: Any) float[source]

Called to calculate the reward to be given to the agent at the end of a step.

Parameters:
  • awi – The OneShotAWI used to access the agent’s state

  • action – The action (decoded) as a mapping from partner ID to responses to their last offer.

  • info – Information generated from before_action(). You can use this to store baselines for calculating the reward

Returns:

The reward (a number) to be given to the agent at the end of the step.

class scml.oneshot.rl.DefaultRewardFunction[source]

Bases: RewardFunction

The default reward function of SCML

Remarks:
  • The reward is the difference between the balance before the action and after it.

before_action(awi: scml.oneshot.awi.OneShotAWI) float[source]

Called before the RL agent's action is executed; its return value should contain any information required for calculating the reward.

Remarks:

The returned value will be passed as info to __call__() when it is time to calculate the reward.

__call__(awi: scml.oneshot.awi.OneShotAWI, action: dict[str, negmas.SAOResponse], info: float)[source]

Called to calculate the reward to be given to the agent at the end of a step.

Parameters:
  • awi – The OneShotAWI used to access the agent’s state

  • action – The action (decoded) as a mapping from partner ID to responses to their last offer.

  • info – Information generated from before_action(). You can use this to store baselines for calculating the reward

Returns:

The reward (a number) to be given to the agent at the end of the step.

scml.oneshot.rl.__all__[source]