tasks.decks module

Author: Dominic Hunt
Reference: Worthy, D. A., Maddox, W. T., & Markman, A. B. (2007). Regulatory fit effects in a choice task. Psychonomic Bulletin & Review, 14(6), 1125–1132. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18229485
class tasks.decks.Decks(draws=None, decks=array([[ 2, 2, 1, 1, 2, 1, 1, 3, 2, 6, 2, 8, 1, 6, 2, 1, 1, 5, 8, 5, 10, 10, 8, 3, 10, 7, 10, 8, 3, 4, 9, 10, 3, 6, 3, 5, 10, 10, 10, 7, 3, 8, 5, 8, 6, 9, 4, 4, 4, 10, 6, 4, 10, 3, 10, 5, 10, 3, 10, 10, 5, 4, 6, 10, 7, 7, 10, 10, 10, 3, 1, 4, 1, 3, 1, 7, 1, 3, 1, 8], [ 7, 10, 5, 10, 6, 6, 10, 10, 10, 8, 4, 8, 10, 4, 9, 10, 8, 6, 10, 10, 10, 4, 7, 10, 5, 10, 4, 10, 10, 9, 2, 9, 8, 10, 7, 7, 1, 10, 2, 6, 4, 7, 2, 1, 1, 1, 7, 10, 1, 4, 2, 1, 1, 1, 4, 1, 4, 1, 1, 1, 1, 3, 1, 4, 1, 1, 1, 5, 1, 1, 1, 7, 2, 1, 2, 1, 4, 1, 4, 1]]), discard=False)[source]

Bases: tasks.taskTemplate.Task

Based on the Worthy, Maddox, and Markman (2007) paper “Regulatory fit effects in a choice task”.

Many methods are inherited from the tasks.taskTemplate.Task class. Refer to its documentation for methods not described here.

Name

The name of the class, used when recording what has been used.

Type: string
Parameters:
  • draws (int, optional) – Number of cards drawn by the participant
  • decks (array of floats, optional) – The decks of cards
  • discard (bool) – Whether the card that was not chosen is discarded or kept.
feedback()[source]

Responds to the action from the participant

proceed()[source]

Updates the task after feedback

receiveAction(action)[source]

Receives the next action from the participant

Parameters: action (int or string) – The action taken by the model
returnTaskState()[source]

Returns all the relevant data for this task run

Returns: results – A dictionary containing the class parameters as well as the other useful data
Return type: dictionary
storeState()[source]

Stores the state of all the important variables so that they can be output later
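
As an illustration of how a trial might proceed, here is a minimal self-contained sketch of the draw loop (the MiniDecks class and its pointer logic are assumptions for illustration only, not the library's internals): each trial the participant picks one of the two decks and receives the top card of that deck as the reward. With discard=False the unchosen deck keeps its top card for later trials, while with discard=True both decks advance together.

```python
class MiniDecks:
    """Illustrative stand-in for tasks.decks.Decks (not the real class)."""

    def __init__(self, decks, draws=None, discard=False):
        self.decks = [list(deck) for deck in decks]
        self.draws = draws if draws is not None else len(self.decks[0])
        self.discard = discard
        self.pointers = [0] * len(self.decks)  # next card in each deck
        self.trial = 0
        self.action = None

    def receive_action(self, action):
        """Record the participant's deck choice for this trial."""
        self.action = action

    def feedback(self):
        """Return the card drawn for the recorded action."""
        if self.discard:
            # Both decks advance together: the unchosen card is thrown away.
            card = self.decks[self.action][self.trial]
        else:
            # Only the chosen deck advances; the other keeps its top card.
            card = self.decks[self.action][self.pointers[self.action]]
            self.pointers[self.action] += 1
        self.trial += 1
        return card
```

For example, with discard=False, drawing from deck 0 and then deck 1 returns the first card of each deck, because the unchosen deck does not advance.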

class tasks.decks.RewardDecksAllInfo(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks reward for models expecting the reward information from all possible actions

Parameters:
  • maxRewardVal (int) – The highest value a reward can have
  • minRewardVal (int) – The lowest value a reward can have
  • number_actions (int) – The number of actions the participant can perform. Assumes the lowest valued action is 0
Returns:

deckRew – The function expects to be passed a tuple containing the reward and the last action. The reward is a float and the action is in {0, 1}. The function returns an array of length (maxRewardVal - minRewardVal + 1) * number_actions.

Return type:

function

Name

The identifier of the function

Type: string

Examples

>>> rew = RewardDecksAllInfo(maxRewardVal=10, minRewardVal=1, number_actions=2)
>>> rew.processFeedback(6, 0, 1)
array([1., 1., 1., 1., 1., 2., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
>>> rew.processFeedback(6, 1, 1)
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 2., 1., 1., 1., 1.])
maxRewardVal = 10
minRewardVal = 1
number_actions = 2
processFeedback(reward, action, stimuli)[source]
Return type: modelFeedback
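
The reward vectors in the examples above can be reproduced with a short stand-alone sketch (an illustrative reimplementation inferred from the documented outputs, not the library code): the vector holds one slot per possible reward value per action, and the slot matching the observed (action, reward) pair is incremented.

```python
import numpy as np

def deck_reward_all_info(reward, action, max_reward_val=10,
                         min_reward_val=1, number_actions=2):
    """Illustrative stand-in for RewardDecksAllInfo.processFeedback."""
    # One slot per possible reward value, per action.
    span = max_reward_val - min_reward_val + 1
    rew_vec = np.ones(span * number_actions)
    # Bump the slot matching the observed (action, reward) pair.
    rew_vec[action * span + (reward - min_reward_val)] += 1
    return rew_vec
```

With the defaults, deck_reward_all_info(6, 0) marks slot 5 (reward 6 under action 0), matching the first example above; the vector has (maxRewardVal - minRewardVal + 1) * number_actions = 20 entries.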
class tasks.decks.RewardDecksDualInfo(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks reward for models expecting the reward information from two possible actions.

epsilon = 1
maxRewardVal = 10
processFeedback(reward, action, stimuli)[source]
Return type: modelFeedback
class tasks.decks.RewardDecksDualInfoLogistic(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks rewards for models expecting the reward information from two possible actions.

epsilon = 0.3
maxRewardVal = 10
minRewardVal = 1
processFeedback(reward, action, stimuli)[source]
Return type: modelFeedback
class tasks.decks.RewardDecksLinear(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks reward for models expecting just the reward

processFeedback(feedback, lastAction, stimuli)[source]
Return type: modelFeedback
class tasks.decks.RewardDecksNormalised(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks reward for models expecting just the reward, but in range [0,1]

Parameters: maxReward (int, optional) – The highest value a reward can have. Default 10

See also

model.OpAL

maxReward = 10
processFeedback(feedback, lastAction, stimuli)[source]
Return type: modelFeedback
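
A sketch of the presumed normalisation, assuming the reward is simply divided by maxReward (the function name here is hypothetical, not part of the library):

```python
def normalised_deck_reward(feedback, max_reward=10):
    # Map a raw card value in [0, max_reward] onto [0, 1].
    return feedback / max_reward
```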
class tasks.decks.RewardDecksPhi(**kwargs)[source]

Bases: model.modelTemplate.Rewards

Processes the decks reward for models expecting just the reward, but in range [0, 1]

Parameters: phi (float) – The scaling value of the reward
phi = 1
processFeedback(feedback, lastAction, stimuli)[source]
Return type: modelFeedback
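
A sketch of the presumed scaling, assuming the reward is multiplied by phi (hypothetical helper name; whether the result stays in [0, 1] depends on the phi chosen):

```python
def phi_scaled_reward(feedback, phi=1.0):
    # Scale the raw reward by phi.
    return feedback * phi
```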
class tasks.decks.StimulusDecksLinear(**kwargs)[source]

Bases: model.modelTemplate.Stimulus

processStimulus(observation)[source]

Processes the decks stimuli for models expecting just the event

Returns:
  • stimuliPresent (int or list of int) – The elements of the stimulus that are present
  • stimuliActivity (float or list of float) – The activity of each of the elements