Module teamwork.math.fitting


Functions
 
expandPolicy(entity, name, interrupt=None, keys=[], debug=0)
Returns: the dynamics that the given entity thinks govern the turn of the named other entity
 
getActionKeys(entity, sequence)
 
getLookaheadTree(entity, chosenAction, sequence, local=False, choices=None, goals=None, interrupt=None, debug=0)
Returns: a decision tree representing the dynamics of the given actions followed by the given sequence of agent responses.
 
getDiffTree(entity, action1, action2, sequence, debug=1)
Returns a decision tree representing the state difference between the two specified actions (i.e., S(action1)-S(action2)), subject to the provided turn sequence, following the format for getLookaheadTree.
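The state difference computed by getDiffTree can be illustrated with plain dictionaries. This is a hedged sketch of the arithmetic only, not the PsychSim implementation: the real function returns a decision tree over projected state vectors, and the feature names below are illustrative.

```python
# Illustrative sketch of the S(action1)-S(action2) idea behind
# getDiffTree: subtract the state projected under action2 from the
# state projected under action1, feature by feature. Features absent
# from the second state are treated as 0.
def state_difference(s1, s2):
    return {key: s1[key] - s2.get(key, 0.0) for key in s1}
```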
 
sign(value)
Returns 1 for a positive value, -1 for a negative value, and 0 for zero
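A minimal sketch of sign, assuming only the documented behavior above:

```python
# Sketch of sign(): 1 for positive input, -1 for negative, 0 for zero.
def sign(value):
    if value > 0:
        return 1
    elif value < 0:
        return -1
    else:
        return 0
```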
 
findConstraint(entity, goodAction, badAction, sequence, debug=None)
Returns a dictionary of possible singleton goal weight changes that satisfy the constraint that the specified entity prefer the goodAction over the badAction given the provided lookahead turn sequence.
 
findAllConstraints(entity, goodAction, sequence, debug=0)
Variables
  EMPTY_NAMESPACE = None
  EMPTY_PREFIX = None
  StringTypes = (<type 'str'>, <type 'unicode'>)
  XMLNS_NAMESPACE = 'http://www.w3.org/2000/xmlns/'
Function Details

expandPolicy(entity, name, interrupt=None, keys=[], debug=0)

Returns:
the dynamics that the given entity thinks govern the turn of the named other entity

getLookaheadTree(entity, chosenAction, sequence, local=False, choices=None, goals=None, interrupt=None, debug=0)

Parameters:
  • chosenAction (teamwork.action.PsychActions.Action[]) - the action being projected
  • sequence (str[][]) - the turns anticipated by this agent in its lookahead, as a list of turns, where each turn is a list of names
  • local (boolean) - if True, then the entity will compile a locally optimal policy, expecting itself to behave according to whatever mental model it has of itself; otherwise, it will plan over all of its turns in the sequence (more strategic). Default is False
  • choices (Action[][]) - the possible actions to be considered in this policy (if None, defaults to all available actions)
  • goals (ProbabilityTree) - if you want an expected value tree (as opposed to a reward-independent sum over state projections), then you can pass in a tree representing the reward function to use (default is to be reward-independent)
  • interrupt (Event) - a threading Event; the compilation process continually tests whether the event is set and, if it is, exits early. In other words, this is a way to interrupt the compilation
Returns:
a decision tree representing the dynamics of the given actions followed by the given sequence of agent responses. The sequence is a list of lists of agent name strings. The agents named in list i of the sequence apply their policy-driven actions at time i, where time 0 occurs in parallel with the given entity's performance of the chosen action.
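The turn sequence itself is plain data: a list of turns, each turn a list of agent name strings. A sketch of a hypothetical three-turn lookahead (the agent names are illustrative, not from the module):

```python
# Hypothetical turn sequence for getLookaheadTree: a list of turns,
# where each turn is a list of agent names acting in parallel.
# Time 0 runs alongside the projecting entity's own chosen action.
sequence = [
    ['teacher'],          # time 0: acts in parallel with chosenAction
    ['bully', 'victim'],  # time 1: two agents act in parallel
    ['teacher'],          # time 2
]
# A call would then look like (entity and chosenAction are PsychSim
# objects, so this line is shown only for shape):
# tree = getLookaheadTree(entity, chosenAction, sequence, local=False)
```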

findConstraint(entity, goodAction, badAction, sequence, debug=None)


Returns a dictionary of possible singleton goal weight changes that satisfy the constraint that the specified entity prefer the goodAction over the badAction given the provided lookahead turn sequence. If the constraint is satisfied by the current goal weights, then the returned dictionary is empty.
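The singleton-change idea can be sketched in isolation. This is an assumption-laden illustration, not the PsychSim implementation: it assumes the entity's expected value is linear in its goal weights, so preferring goodAction over badAction means sum(w[i] * delta[i]) > 0, where delta[i] is the difference in goal i's projected value between the two actions. Changing only one weight and solving for the boundary gives a per-goal threshold.

```python
# Sketch of the singleton goal-weight constraint: with value linear in
# the weights, the preference constraint is sum(w[i] * delta[i]) > 0.
def singleton_constraints(weights, delta):
    """Return {goal index: boundary weight} for each single-weight
    change that could satisfy the constraint; whether the new weight
    must lie above or below the boundary depends on the sign of
    delta[j]. Returns {} if the current weights already satisfy it."""
    slack = sum(w * d for w, d in zip(weights, delta))
    if slack > 0:
        return {}  # current weights already prefer goodAction
    changes = {}
    for j, d in enumerate(delta):
        if d == 0:
            continue  # this goal cannot affect the preference
        # Solve slack - weights[j]*d + x*d > 0 for the new weight x:
        # x > boundary if d > 0, x < boundary if d < 0.
        changes[j] = weights[j] - slack / d
    return changes
```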