
Class GoalBasedAgent


              Agent.Agent --+    
                            |    
RecursiveAgent.RecursiveAgent --+
                                |
                               GoalBasedAgent
An entity mix-in class whose reward function is based on the maximization/minimization of features and actions
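The reward model can be pictured as a weighted combination of goal terms; a minimal conceptual sketch (not the library's internal code, with invented attainment values for illustration):

    # Conceptual sketch only: reward as a weighted combination of goal
    # terms, each maximizing or minimizing some feature/action.
    weights = {'maximize power': 0.5, 'minimize cost': 0.5}    # sum to 1
    attained = {'maximize power': 0.8, 'minimize cost': -0.3}  # invented
    reward = sum(weights[g] * attained[g] for g in weights)
    # 0.5*0.8 + 0.5*(-0.3) = 0.25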

Nested Classes

Inherited from Agent.Agent: actionClass

Instance Methods

__init__(self, name='')
setGoals(self, goals)
    Sets the goals to the provided list, after normalizing weights
normalizeGoals(self)
    Scales all goal weights to sum to 1
setGoalWeight(self, goal, value, normalize=True)
    Assigns the weight of the specified goal
applyGoals(self, entity=None, world=None, debug=None) → Distribution over float
    Returns the expected reward of the entity in the current world
getGoals(self) → PWLGoal[]
getGoalWeight(self, goal) → float
    Returns the weight of the specified goal
getGoalVector(self) → KeyedVector
    Returns a vector representing the goal weights
getGoalTree(self) → teamwork.math.ProbabilityTree
    Returns the decision tree representing this entity's goal weights
actionValue(self, actions, horizon=1, state=None, debug=None)
    Computes the expected value of performing the given actions
expectedValue(self, horizon=1, start={}, goals=None, state=None, debug=None)
    Returns the expected reward from the current state
getNormalization(self, constant=False) → KeyedVector
    Returns the vector expressing the constraint that the goal weights sum to 1
generateConstraints(self, desired, horizon=-1, state=None) → dict[]
    Computes a set of constraints on possible goal weights for this agent that, if satisfied, will cause the agent to prefer the desired action in the given state
fit(self, desired, horizon=-1, state=None, granularity=0.01, label=None) → KeyedVector (or str)
    Computes a new set of goal weights for this agent that will cause the agent to prefer the desired action in the given state
__str__(self)
    Returns a string representation of this entity
__copy__(self, new=None)
__xml__(self)
parse(self, element)
    Extracts this agent's recursive belief structure from the given XML Element

Inherited from RecursiveAgent.RecursiveAgent: __deepcopy__, __eq__, __getitem__, __ne__, ancestry, applyChanges, applyPolicy, beliefDepth, findObservation, freeze, getActionKeys, getAllBeliefs, getBelief, getBeliefKeys, getDynamics, getEntities, getEntity, getEntityBeliefs, getNestedBelief, getObservation, getObservations, getSelfBelief, getState, getStateFeatures, hasBelief, incorporateMessage, initialStateEstimator, invalidateCache, multistep, observe, preComStateEstimator, resetHistory, saveObservations, setBelief, setEntity, setName, setObservation, setRecursiveBelief, setSelfBelief, setState, stateEstimator, step, toHTML, updateStateDict

Inherited from Agent.Agent: __cmp__, generateAllObservations, generateHistories, legalActions, legalMessages, observable, postComStateEstimator

Class Variables

  valueType = 'average'

Instance Variables

constraints (dict:str→KeyedPlane)
    the constraints on the goal weights already imposed
goals (MinMaxGoal→float)
    the goals of this agent
horizon (int)
    the horizon of this agent's lookahead

Inherited from RecursiveAgent.RecursiveAgent: dynamics, parent, relationships, state

Inherited from Agent.Agent: actions, name, omega

Method Details

__init__(self, name='')
(Constructor)

Parameters:
  • name - label for this instance
Overrides: Agent.Agent.__init__ (inherited documentation)

setGoals(self, goals)

Sets the goals to the provided list, after normalizing weights

Warning: Replaces any existing goals.
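A minimal usage sketch, assuming g1 and g2 are goal objects constructed elsewhere (the MinMaxGoal constructor is not documented on this page):

    from teamwork.agent.GoalBased import GoalBasedAgent

    agent = GoalBasedAgent(name='student')
    agent.setGoals([g1, g2])   # replaces any existing goals; the two
                               # weights are normalized to sum to 1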

setGoalWeight(self, goal, value, normalize=True)

Assigns the weight of the specified goal

Parameters:
  • goal (MinMaxGoal)
  • value (float)
  • normalize (bool) - if True, renormalizes weights across all goals to sum to 1
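The renormalization rule itself is simple scaling; a pure-Python illustration (not the library's internal code):

    weights = {'g1': 2.0, 'g2': 1.0, 'g3': 1.0}
    total = sum(weights.values())
    normalized = {g: w / total for g, w in weights.items()}
    # normalized == {'g1': 0.5, 'g2': 0.25, 'g3': 0.25}; sums to 1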

applyGoals(self, entity=None, world=None, debug=None)

Returns: Distribution over float
    the expected reward of the entity in the current world
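A one-line usage sketch, assuming `agent` is a configured GoalBasedAgent (see the sketch under setGoals); the defaults evaluate this agent in the current world:

    reward = agent.applyGoals()   # Distribution over float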

getGoalWeight(self, goal)

Returns: float
    the weight of the specified goal

getGoalTree(self)

Returns: teamwork.math.ProbabilityTree
    the decision tree representing this entity's goal weights

actionValue(self, actions, horizon=1, state=None, debug=None)

Computes the expected value of performing the given actions

Parameters:
  • actions (Action[]) - the actions whose effect we want to evaluate
  • horizon (int) - the length of the forward projection
  • state (teamwork.math.probability.Distribution) - the world state in which to evaluate the actions (defaults to the current world state)
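A hedged usage sketch; `someAction` stands in for an Action instance built elsewhere, and the return type is not documented on this page:

    value = agent.actionValue([someAction], horizon=2)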

expectedValue(self, horizon=1, start={}, goals=None, state=None, debug=None)

Parameters:
  • horizon (int) - the horizon for the lookahead when computing the expected value
  • start (dict:str→Action[]) - a dictionary of actions to be specified in the first time step
  • goals (GoalBasedAgent[]) - the agent(s) whose reward function should be used to compute the expectation (defaults to self)
  • state (teamwork.math.probability.Distribution) - the world state in which to evaluate (defaults to the current world state)
  • debug (Debugger)
Returns:
    the expected reward from the current state
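A hedged sketch of a three-step lookahead with the first step's action fixed; the agent-name key and `someAction` are illustrative placeholders following the documented dict:str→Action[] form:

    reward = agent.expectedValue(horizon=3,
                                 start={'teacher': [someAction]})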

getNormalization(self, constant=False)

Parameters:
  • constant (bool) - if True, include a column for the constant factor (which will be 1)
Returns: KeyedVector
    the vector expressing the constraint that the goal weights sum to 1
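A pure-Python illustration of the constraint that vector encodes (not KeyedVector code):

    weights = [0.5, 0.25, 0.25]     # goal weights
    norm = [1.0] * len(weights)     # the normalization vector
    assert sum(n * w for n, w in zip(norm, weights)) == 1.0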

generateConstraints(self, desired, horizon=-1, state=None)

Computes a set of constraints on possible goal weights for this agent that, if satisfied, will cause the agent to prefer the desired action in the given state. Each constraint is a dictionary with the following elements:

  • delta: the total difference that must be made up
  • slope: a dictionary of coefficients for each goal weight in the sum that must make up that difference
  • plane: the vector of weights, such that the product of this vector and the goal weight vector must exceed 0 for the desired action to be preferred

Parameters:
  • desired (Action[]) - the action that the agent should prefer
  • horizon (int) - the horizon of lookahead to use (if not provided, the agent's default horizon is used)
  • state (dict) - the current state of this agent's beliefs (if not provided, defaults to the result of getAllBeliefs)
Returns: dict[]
    a list of constraints
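A hedged sketch of consuming the returned constraint dictionaries; `someAction` is a placeholder, and the final comment restates the documented meaning of 'plane':

    for c in agent.generateConstraints(desired=[someAction]):
        print(c['delta'])   # total difference to be made up
        print(c['slope'])   # per-goal-weight coefficients
        # desired action preferred iff c['plane'] . goalWeights > 0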

fit(self, desired, horizon=-1, state=None, granularity=0.01, label=None)

Computes a new set of goal weights for this agent that will cause the agent to prefer the desired action in the given state.

Parameters:
  • desired (Action[]) - the action that the agent should prefer
  • horizon (int) - the horizon of lookahead to use (if not provided, the agent's default horizon is used)
  • state (dict) - the current state of this agent's beliefs (if not provided, defaults to the result of getAllBeliefs)
  • granularity (float) - the minimum movement of a goal weight (default is 0.01)
  • label (str) - the label under which to store the generated constraints, overwriting any previous constraints with the same label (default is None)
Returns: KeyedVector (or str)
    a goal vector (or an error message if no such vector exists)
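A hedged usage sketch; per the return spec above, a str result signals that no satisfying vector exists:

    result = agent.fit(desired=[someAction], label='scenario-1')
    if isinstance(result, str):
        print('no satisfying weights:', result)   # error message
    else:
        print('fitted goal vector:', result)      # KeyedVector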

__str__(self)
(Informal representation operator)

Returns a string representation of this entity

Overrides: RecursiveAgent.RecursiveAgent.__str__ (inherited documentation)

__copy__(self, new=None)

Overrides: Agent.Agent.__copy__

__xml__(self)

Overrides: Agent.Agent.__xml__

parse(self, element)

Extracts this agent's recursive belief structure from the given XML Element

Overrides: Agent.Agent.parse

Instance Variable Details

constraints

The constraints on the goal weights already imposed. Each constraint is a dictionary, with the key element being the 'plane' expressing the constraint.

Type: dict:str→KeyedPlane