scml.std.rl

Submodules

Package Contents

Classes

ActionManager

Manages actions of an agent in an RL environment.

FlexibleActionManager

An action manager that matches any context.

StdEnv

The main Gymnasium class for implementing Reinforcement Learning Agents environments.

ObservationManager

Manages the observations of an agent in an RL environment

FlexibleObservationManager

An observation manager that can be used with any SCML world.

RewardFunction

Represents a reward function.

DefaultRewardFunction

The default reward function of SCML

Functions

model_wrapper(→ RLModel)

Wraps a stable_baselines3 model as an RL model

random_action(→ numpy.ndarray)

Samples a random action from the action space of the environment.

random_policy(→ numpy.ndarray)

Ends the negotiation or accepts with a predefined probability or samples a random response.

greedy_policy(→ numpy.ndarray)

A simple greedy policy.

Attributes

DefaultActionManager

The default action manager

StdRLAgent

RLState

We assume that RL states are numpy arrays

RLAction

We assume that RL actions are numpy arrays

RLModel

A policy is a callable that receives a state and returns an action

DefaultObservationManager

The default observation manager

__all__

class scml.std.rl.ActionManager[source]

Bases: abc.ABC

Manages actions of an agent in an RL environment.

context: scml.oneshot.context.BaseContext
continuous: bool = False
n_suppliers: int
n_consumers: int
n_partners: int
abstract make_space() → gymnasium.Space[source]

Creates the action space

abstract decode(awi: scml.oneshot.awi.OneShotAWI, action: numpy.ndarray) → dict[str, negmas.sao.common.SAOResponse][source]

Decodes an action from an array to a PurchaseOrder and a CounterMessage.

encode(awi: scml.oneshot.awi.OneShotAWI, responses: dict[str, negmas.sao.common.SAOResponse]) → numpy.ndarray[source]

Encodes an action as an array. This is only used for testing, so it is optional.

class scml.std.rl.FlexibleActionManager[source]

Bases: ActionManager

An action manager that matches any context.

Parameters:
  • n_prices – Number of distinct prices allowed in the action.

  • max_quantity – Maximum allowed quantity to offer in any negotiation. The number of possible quantities is one more than this because zero is allowed (it models ending the negotiation).

  • n_partners – Maximum number of partners allowed in the action.

Remarks:
  • This action manager will always generate offers that are within the price and quantity limits given in its parameters. When decoding them, it will scale them up so that the maximum corresponds to the actual value in the world it finds itself in. For example, if n_prices is 10 and the world has only two prices currently in the price issue, it will use any value less than 5 as the minimum price and any value above that as the maximum price. If, on the other hand, the current price issue has 20 values, it will scale by multiplying the number given in the encoded action (ranging from 0 to 9) by 19/9, which makes it range from 0 to 19, which is what the world expects (see the sketch after this class entry).

  • This action manager will adjust offers for different numbers of partners as follows:

    • If the true number of partners is larger than the n_partners used by this action manager, it will simply use n_partners of them and always end negotiations with the rest.

    • If the true number of partners is smaller than n_partners, it will use the first n_partners values in the encoded action and increase the quantities of any counter offers (i.e. ones in which the response is REJECT_OFFER) by the amount missing from the ignored partners in the encoded action, up to the maximum quantities allowed by the current negotiation context. For example, if n_partners is 4 and we have only 2 partners in reality, the received quantities from partners were [4, 3], the maximum quantity allowed is 10, and the encoded action was [2, *, 3, *, 2, *, 1, *] (ignoring prices), then the encoded action will be converted to [(Reject, 5, *), (Accept, 3, *)] where the 3 extra units that were supposed to be offered to the last two partners are moved to the first partner. If the maximum quantity allowed were 4 in that example, the result would be [(Reject, 4, *), (Accept, 3, *)].

capacity_multiplier: int = 1
n_prices: int = 2
max_group_size: int = 2
reduce_space_size: bool = True
extra_checks: bool = False
max_quantity: int
__attrs_post_init__()[source]
make_space() → gymnasium.spaces.MultiDiscrete | gymnasium.spaces.Box[source]

Creates the action space

decode(awi: scml.oneshot.awi.OneShotAWI, action: numpy.ndarray) → dict[str, negmas.sao.common.SAOResponse][source]

Generates offers to all partners from an encoded action. The default is to return the action as-is, assuming it is a dict[str, SAOResponse].

encode(awi: scml.oneshot.awi.OneShotAWI, responses: dict[str, negmas.sao.common.SAOResponse]) → numpy.ndarray[source]

Receives offers for all partners and generates the corresponding action. Used mostly for debugging and testing.
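
To make the price-scaling remark above concrete, the following sketch reproduces the described mapping in isolation. It is an illustration of the documented behaviour, not the library’s actual implementation, and scale_price_index is a hypothetical helper name:

    def scale_price_index(encoded: int, n_prices: int, n_world_prices: int) -> int:
        """Map an encoded index in [0, n_prices - 1] onto [0, n_world_prices - 1]."""
        return round(encoded * (n_world_prices - 1) / (n_prices - 1))

    # With n_prices=10 and a 20-valued price issue, index 9 maps to 19 and 0 maps to 0.
    assert scale_price_index(9, 10, 20) == 19
    assert scale_price_index(0, 10, 20) == 0
    # With only two prices in the issue, indices below 5 land on the minimum price.
    assert scale_price_index(4, 10, 2) == 0
    assert scale_price_index(5, 10, 2) == 1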

scml.std.rl.DefaultActionManager[source]

The default action manager

scml.std.rl.StdRLAgent[source]
scml.std.rl.RLState[source]

We assume that RL states are numpy arrays

scml.std.rl.RLAction[source]

We assume that RL actions are numpy arrays

scml.std.rl.RLModel[source]

A policy is a callable that receives a state and returns an action

scml.std.rl.model_wrapper(model, deterministic: bool = False) → RLModel[source]

Wraps a stable_baselines3 model as an RL model
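
A minimal usage sketch, assuming stable_baselines3 is installed and that env is an already constructed StdEnv (see the rollout sketch under StdEnv below):

    from stable_baselines3 import PPO

    from scml.std.rl import model_wrapper

    # Train any stable_baselines3 algorithm on the environment.
    sb3_model = PPO("MlpPolicy", env, verbose=0)
    sb3_model.learn(total_timesteps=10_000)

    # Wrap it as an RLModel: a callable mapping a state (observation) to an action.
    policy = model_wrapper(sb3_model, deterministic=True)
    obs, _ = env.reset()
    action = policy(obs)  # can be fed directly to env.step()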

class scml.std.rl.StdEnv(action_manager: scml.oneshot.rl.action.ActionManager, observation_manager: scml.oneshot.rl.observation.ObservationManager, reward_function: scml.oneshot.rl.reward.RewardFunction = DefaultRewardFunction(), render_mode=None, context: scml.oneshot.context.GeneralContext = FixedPartnerNumbersStdContext(), agent_type: type[scml.std.agent.StdAgent] = StdPlaceholder, agent_params: dict[str, Any] | None = None, extra_checks: bool = True, skip_after_negotiations: bool = True)[source]

Bases: scml.oneshot.rl.env.OneShotEnv

The main Gymnasium class for implementing Reinforcement Learning Agents environments.

The class encapsulates an environment with arbitrary behind-the-scenes dynamics through the step() and reset() functions. An environment can be partially or fully observed by single agents. For multi-agent environments, see PettingZoo.

The main API methods that users of this class need to know are:

  • step() - Updates an environment with actions, returning the next agent observation, the reward for taking that action, whether the environment has terminated or truncated due to the latest action, and information from the environment about the step, i.e. metrics, debug info.

  • reset() - Resets the environment to an initial state, required before calling step. Returns the first agent observation for an episode and information, i.e. metrics, debug info.

  • render() - Renders the environment to help visualise what the agent sees; example modes are “human”, “rgb_array”, and “ansi” for text.

  • close() - Closes the environment, important when external software is used, e.g. pygame for rendering or databases.

Environments have additional attributes for users to understand the implementation:

  • action_space - The Space object corresponding to valid actions, all valid actions should be contained within the space.

  • observation_space - The Space object corresponding to valid observations, all valid observations should be contained within the space.

  • reward_range - A tuple corresponding to the minimum and maximum possible rewards for an agent over an episode. The default reward range is set to \((-\infty,+\infty)\).

  • spec - An environment spec that contains the information used to initialize the environment from gymnasium.make()

  • metadata - The metadata of the environment, i.e. render modes, render fps

  • np_random - The random number generator for the environment. This is automatically assigned during super().reset(seed=seed) and when accessing self.np_random.

See also

For modifying or extending environments use the gymnasium.Wrapper class

Note

To get reproducible sampling of actions, a seed can be set with env.action_space.seed(123).
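
A minimal rollout sketch. The import path of the context class and the assumption that the default managers accept a context as their first argument are not confirmed by this reference; adjust them to your installation:

    from scml.std.rl import (
        StdEnv,
        DefaultActionManager,
        DefaultObservationManager,
        random_policy,
    )
    # Import path for the context is an assumption; any std-compatible context works.
    from scml.std.context import FixedPartnerNumbersStdContext

    context = FixedPartnerNumbersStdContext()
    env = StdEnv(
        action_manager=DefaultActionManager(context),            # assumed: context as first argument
        observation_manager=DefaultObservationManager(context),  # assumed: context as first argument
    )
    obs, info = env.reset(seed=42)
    terminated = truncated = False
    while not (terminated or truncated):
        action = random_policy(obs, env)  # swap in a trained policy via model_wrapper
        obs, reward, terminated, truncated, info = env.step(action)
    env.close()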

class scml.std.rl.ObservationManager[source]

Bases: Protocol

Manages the observations of an agent in an RL environment

property context: scml.oneshot.context.BaseContext
make_space() → gymnasium.spaces.Space[source]

Creates the observation space

encode(awi: scml.oneshot.awi.OneShotAWI) → numpy.ndarray[source]

Encodes an observation from the agent’s awi

make_first_observation(awi: scml.oneshot.awi.OneShotAWI) → numpy.ndarray[source]

Creates the initial observation (returned from gym’s reset())

get_offers(awi: scml.oneshot.awi.OneShotAWI, encoded: numpy.ndarray) → dict[str, negmas.outcomes.Outcome | None][source]

Gets the offers from an encoded awi
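
For illustration, a hypothetical minimal object that structurally satisfies this protocol (it ignores the AWI and always emits a fixed-size zero vector) could look like the following sketch:

    import gymnasium as gym
    import numpy as np


    class ZeroObservationManager:
        """Toy protocol-conforming observation manager (illustration only)."""

        def __init__(self, context):
            self._context = context

        @property
        def context(self):
            return self._context

        def make_space(self) -> gym.spaces.Space:
            return gym.spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)

        def encode(self, awi) -> np.ndarray:
            return np.zeros(4, dtype=np.float32)

        def make_first_observation(self, awi) -> np.ndarray:
            return self.encode(awi)

        def get_offers(self, awi, encoded):
            # No offers can be recovered from a constant observation.
            return {}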

class scml.std.rl.FlexibleObservationManager[source]

Bases: BaseObservationManager

An observation manager that can be used with any SCML world.

Parameters:
  • capacity_multiplier – A factor to multiply by the number of lines to give the maximum quantity allowed in offers

  • exogenous_multiplier – A factor to multiply maximum production capacity with when encoding exogenous quantities

  • continuous – If given, the observation space will be a Box; otherwise it will be a MultiDiscrete

  • n_prices – The number of prices to use for encoding the unit price (if not continuous)

  • max_production_cost – The limit for production cost. Anything above that will be mapped to this max

  • max_group_size – Maximum size used for grouping observations from multiple partners. This will be used if the number of partners in the simulation is larger than the number used for training.

  • n_past_received_offers – Number of past received offers to add to the observation.

  • n_bins – Number of bins to use for discretization (if not continuous)

  • n_sigmas – The number of sigmas used for limiting the range of randomly distributed variables

  • extra_checks – If given, extra checks are applied to make sure encoding and decoding make sense

capacity_multiplier: int = 1
n_prices: int = 2
max_group_size: int = 2
reduce_space_size: bool = True
n_past_received_offers: int = 1
extra_checks: bool = False
n_bins: int = 40
n_sigmas: int = 2
max_production_cost: int = 10
exogenous_multiplier: int = 1
max_quantity: int
_chosen_partner_indices: list[int] | None
_previous_offers: collections.deque
_dims: list[int] | None
__attrs_post_init__()[source]
get_dims() → list[int][source]

Get the sizes of all dimensions in the observation space. Used if not continuous.

make_space() → gymnasium.spaces.MultiDiscrete | gymnasium.spaces.Box[source]

Creates the observation space

make_first_observation(awi: scml.oneshot.awi.OneShotAWI) → numpy.ndarray[source]

Creates the initial observation (returned from gym’s reset())

encode(awi: scml.oneshot.awi.OneShotAWI) → numpy.ndarray[source]

Encodes the awi as an array

extra_obs(awi: scml.oneshot.awi.OneShotAWI) → list[tuple[float, int] | float][source]

The observation values other than offers and previous offers.

Returns:

A list of tuples. Each is some observation variable as a real number between zero and one together with the number of bins to use for discretizing this variable. If a single value is given, the number of bins will be self.n_bins.

get_offers(awi: scml.oneshot.awi.OneShotAWI, encoded: numpy.ndarray) → dict[str, negmas.outcomes.Outcome | None][source]

Gets offers from an encoded awi.

scml.std.rl.DefaultObservationManager[source]

The default observation manager

scml.std.rl.random_action(obs: numpy.ndarray, env: scml.oneshot.rl.env.OneShotEnv) → numpy.ndarray[source]

Samples a random action from the action space of the environment.

scml.std.rl.random_policy(obs: numpy.ndarray, env: scml.oneshot.rl.env.OneShotEnv, pend: float = 0.05, paccept: float = 0.15) → numpy.ndarray[source]

Ends the negotiation or accepts with a predefined probability or samples a random response.

scml.std.rl.greedy_policy(obs: numpy.ndarray, awi: scml.oneshot.awi.OneShotAWI, obs_manager: scml.oneshot.rl.observation.ObservationManager, action_manager: scml.oneshot.rl.action.ActionManager = FlexibleActionManager(ANACOneShotContext()), debug=False, distributor: Callable[[int, int], list[int]] = all_but_concentrated) → numpy.ndarray[source]

A simple greedy policy.

Parameters:
  • obs – The current observation

  • awi – The AWI of the agent running the policy

  • obs_manager – The observation manager used to encode the observation

  • action_manager – The action manager to be used to encode the action

  • debug – If True, extra assertions are tested

  • distributor – A callable that receives a total quantity to be distributed over n partners and returns a list of n values that sum to this total quantity (a minimal example follows this entry)

Remarks:
  • Accepts the subset of offers with maximum total quantity under current needs.

  • The remaining quantity is distributed over the remaining partners using the distributor function

  • Prices are set to the worst for the agent if the price range is small; otherwise they are set randomly
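
As noted in the distributor parameter above, any callable satisfying the documented contract can be passed in. A hypothetical example that sends the whole quantity to the first partner:

    def concentrate_on_first(total: int, n: int) -> list[int]:
        """Return n values that sum to total, giving everything to the first partner."""
        if n <= 0:
            return []
        return [total] + [0] * (n - 1)

    # It could then be used as: greedy_policy(..., distributor=concentrate_on_first)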

class scml.std.rl.RewardFunction[source]

Bases: Protocol

Represents a reward function.

Remarks:
  • before_action is called before the action is executed (for initialization) and should return info to be passed to __call__ (see the sketch after this class entry)

  • __call__ is called with the awi (to get the state), action and info and should return the reward

before_action(awi: scml.oneshot.awi.OneShotAWI) → Any[source]

Called before executing the action from the RL agent to save (in its return value) any information required for calculating the reward.

Remarks:

The returned value will be passed as info to __call__() when it is time to calculate the reward.

__call__(awi: scml.oneshot.awi.OneShotAWI, action: dict[str, negmas.SAOResponse], info: Any) → float[source]

Called to calculate the reward to be given to the agent at the end of a step.

Parameters:
  • awi – OneShotAWI to access the agent’s state

  • action – The action (decoded) as a mapping from partner ID to responses to their last offer.

  • info – Information generated from before_action(). You can use this to store baselines for calculating the reward

Returns:

The reward (a number) to be given to the agent at the end of the step.
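
As a sketch, a custom reward function only needs the two methods of this protocol. The example below mirrors the documented behaviour of DefaultRewardFunction (the balance difference caused by the action); awi.current_balance is assumed to be available on the AWI:

    from typing import Any

    from negmas import SAOResponse


    class BalanceDeltaReward:
        """Hypothetical reward function following the RewardFunction protocol."""

        def before_action(self, awi) -> Any:
            # Snapshot the balance before the action is executed.
            return awi.current_balance  # assumed AWI attribute

        def __call__(self, awi, action: dict[str, SAOResponse], info: Any) -> float:
            # Reward is the change in balance caused by the action.
            return float(awi.current_balance - info)

Such an object can be passed to StdEnv through its reward_function parameter.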

class scml.std.rl.DefaultRewardFunction[source]

Bases: RewardFunction

The default reward function of SCML

Remarks:
  • The reward is the difference between the balance before the action and after it.

before_action(awi: scml.oneshot.awi.OneShotAWI) → float[source]

Called before executing the action from the RL agent to save (in its return value) any information required for calculating the reward.

Remarks:

The returned value will be passed as info to __call__() when it is time to calculate the reward.

__call__(awi: scml.oneshot.awi.OneShotAWI, action: dict[str, negmas.SAOResponse], info: float)[source]

Called to calculate the reward to be given to the agent at the end of a step.

Parameters:
  • awi – OneShotAWI to access the agent’s state

  • action – The action (decoded) as a mapping from partner ID to responses to their last offer.

  • info – Information generated from before_action(). You can use this to store baselines for calculating the reward

Returns:

The reward (a number) to be given to the agent at the end of the step.

scml.std.rl.__all__[source]