tasks.decks module¶
Author: Dominic Hunt

Reference: Worthy, D. A., Maddox, W. T., & Markman, A. B. (2007). Regulatory fit effects in a choice task. Psychonomic Bulletin & Review, 14(6), 1125–32. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18229485
class tasks.decks.Decks(draws=None, decks=array([[ 2, 2, 1, 1, 2, 1, 1, 3, 2, 6, 2, 8, 1, 6, 2, 1, 1, 5, 8, 5, 10, 10, 8, 3, 10, 7, 10, 8, 3, 4, 9, 10, 3, 6, 3, 5, 10, 10, 10, 7, 3, 8, 5, 8, 6, 9, 4, 4, 4, 10, 6, 4, 10, 3, 10, 5, 10, 3, 10, 10, 5, 4, 6, 10, 7, 7, 10, 10, 10, 3, 1, 4, 1, 3, 1, 7, 1, 3, 1, 8], [ 7, 10, 5, 10, 6, 6, 10, 10, 10, 8, 4, 8, 10, 4, 9, 10, 8, 6, 10, 10, 10, 4, 7, 10, 5, 10, 4, 10, 10, 9, 2, 9, 8, 10, 7, 7, 1, 10, 2, 6, 4, 7, 2, 1, 1, 1, 7, 10, 1, 4, 2, 1, 1, 1, 4, 1, 4, 1, 1, 1, 1, 3, 1, 4, 1, 1, 1, 5, 1, 1, 1, 7, 2, 1, 2, 1, 4, 1, 4, 1]]), discard=False)[source]¶

    Bases: tasks.taskTemplate.Task

    Based on the Worthy, Maddox, & Markman (2007) paper “Regulatory fit effects in a choice task”.

    Many methods are inherited from the tasks.taskTemplate.Task class. Refer to its documentation for missing methods.
    Name¶

        The name of the class used when recording what has been used.

        Type: string

    receiveAction(action)[source]¶

        Receives the next action from the participant.

        Parameters: action (int or string) – The action taken by the model
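The task's draw mechanics can be illustrated with a minimal, hypothetical sketch (not the library's implementation): each action selects one of the two decks and the participant receives the next card value from that deck. The example decks below are truncated from the default `decks` argument shown in the class signature above.

```python
# Hypothetical sketch of the deck-drawing loop, not tasks.decks.Decks itself.
# Two decks of card values; each action draws the next unseen card from the
# chosen deck.
decks = [
    [2, 2, 1, 1, 2, 1],    # truncated example deck 0
    [7, 10, 5, 10, 6, 6],  # truncated example deck 1
]

def play(actions):
    """Return the card value received for each deck choice in turn."""
    positions = [0, 0]          # next card index per deck
    rewards = []
    for action in actions:      # action is 0 or 1, the chosen deck
        rewards.append(decks[action][positions[action]])
        positions[action] += 1  # that card has now been drawn
    return rewards
```

For example, `play([0, 1, 0])` yields `[2, 7, 2]`: two successive cards from deck 0 and one from deck 1, in the order chosen.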
class tasks.decks.RewardDecksAllInfo(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks reward for models expecting the reward information from all possible actions.

    Parameters:
        - maxRewardVal (int) – The highest value a reward can have
        - minRewardVal (int) – The lowest value a reward can have
        - number_actions (int) – The number of actions the participant can choose between

    Returns: deckRew – The function expects to be passed a tuple containing the reward and the last action. The reward is a float and the action is {0, 1}. The function returns an array of length (maxRewardVal - minRewardVal + 1) * number_actions.

    Return type: function

    Name¶

        The identifier of the function

        Type: string

    Examples

    >>> rew = RewardDecksAllInfo(maxRewardVal=10, minRewardVal=1, number_actions=2)
    >>> rew.processFeedback(6, 0, 1)
    array([1., 1., 1., 1., 1., 2., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
    >>> rew.processFeedback(6, 1, 1)
    array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 2., 1., 1., 1., 1.])
    maxRewardVal = 10¶

    minRewardVal = 1¶

    number_actions = 2¶
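The documented example above can be reproduced with a short pure-Python sketch (an illustration of the behaviour shown in the Examples, not the library code): the returned array holds one slot per (action, reward value) pair, initialised to 1, with the slot matching the observed action and reward incremented.

```python
def process_feedback(reward, last_action,
                     max_reward_val=10, min_reward_val=1, number_actions=2):
    """Sketch of the documented processFeedback output: one slot per
    (action, reward value) pair, baseline 1.0, with the observed pair's
    slot raised to 2.0. Hypothetical reimplementation for illustration."""
    n_values = max_reward_val - min_reward_val + 1   # 10 reward values: 1..10
    out = [1.0] * (n_values * number_actions)        # length 20 here
    out[last_action * n_values + (reward - min_reward_val)] += 1.0
    return out
```

`process_feedback(6, 0)` matches the first documented array, with the 2.0 at index 5; `process_feedback(6, 1)` places it at index 15.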
class tasks.decks.RewardDecksDualInfo(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks reward for models expecting the reward information from two possible actions.

    epsilon = 1¶

    maxRewardVal = 10¶
class tasks.decks.RewardDecksDualInfoLogistic(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks rewards for models expecting the reward information from two possible actions.

    epsilon = 0.3¶

    maxRewardVal = 10¶

    minRewardVal = 1¶
class tasks.decks.RewardDecksLinear(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks reward for models expecting just the reward.
class tasks.decks.RewardDecksNormalised(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks reward for models expecting just the reward, but in the range [0, 1].

    Parameters: maxReward (int, optional) – The highest value a reward can have. Default 10

    See also

    maxReward = 10¶
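A plausible reading of this normalisation, assuming it simply divides by maxReward (an assumption; the library's exact transformation may differ):

```python
def normalise_reward(reward, max_reward=10):
    # Assumed behaviour: map a reward in [0, max_reward] onto [0, 1] by
    # dividing by the maximum. Hypothetical sketch, not the library code.
    return reward / max_reward
```

With the default maxReward of 10, a card value of 10 maps to 1.0 and a card value of 5 maps to 0.5.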
class tasks.decks.RewardDecksPhi(**kwargs)[source]¶

    Bases: model.modelTemplate.Rewards

    Processes the decks reward for models expecting just the reward, but in the range [0, 1].

    Parameters: phi (float) – The scaling value of the reward

    phi = 1¶
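Assuming phi is applied as a straight multiplicative factor (an assumption based on the parameter description, not confirmed by the source), the transformation can be sketched as:

```python
def phi_reward(reward, phi=1.0):
    # Assumed behaviour: scale the raw reward by phi (hypothetical sketch,
    # not the library code). The result lies in [0, 1] only when phi is
    # chosen so that phi * max_reward <= 1.
    return reward * phi
```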
class
tasks.decks.
StimulusDecksLinear
(**kwargs)[source]¶ Bases:
model.modelTemplate.Stimulus