Some of our subjects showed stronger failure biases, whereas others showed stronger success biases. Neural recordings of reward processing in the rats’ basal ganglia show an asymmetry in the processing of rewards and failures (Ito and Doya, 2009). Similarly, people assign larger internal reward value to negative than to positive stimuli, such as emotional facial expressions (Katahira et al., 2011). Reinforcement learning, on the other hand, generally assumes that subjects value success and failure on the previous trial equally. It is therefore appealing to describe choice history biases as reinforcement learning algorithms gone wrong, operating on tasks where they should not be. However, because standard reinforcement learning neither accounts for sensory evidence strength nor allows for asymmetric weighting of failures and successes, it offers only an incomplete interpretation of choice history biases. Many daily activities require decision-making under conditions of perceptual uncertainty, such as driving in fog. Probabilistic choice and related models (Lau and Glimcher, 2005; Frund et al., 2014) provide a more encompassing description of perceptual decision making.
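To make this contrast concrete, the sketch below is a minimal, illustrative logistic choice model in the spirit of the probabilistic choice framework cited above (Frund et al., 2014). It is not a fitted model from any of the cited studies: the parameter names, values, and the single-trial history term are our assumptions. The decision variable combines signed stimulus strength with separate weights for the previous successful and failed choices, so history effects can be asymmetric and, as expected, matter most when the sensory evidence is weak.

```python
import numpy as np

def choice_probability(stimulus, prev_choice, prev_correct,
                       sensitivity=2.0, bias=0.0,
                       w_success=0.3, w_failure=-0.5):
    """Probability of choosing 'right' (+1) on the current trial.

    Illustrative logistic choice model: the decision variable sums
    signed stimulus strength, a fixed side bias, and a history term
    whose weight differs for previous successes and previous failures.
    All parameters here are hypothetical, not fitted values.
    """
    history_weight = w_success if prev_correct else w_failure
    decision_variable = sensitivity * stimulus + bias + history_weight * prev_choice
    return 1.0 / (1.0 + np.exp(-decision_variable))

# Example: after a failed rightward choice (+1), the negative failure weight
# pushes the next choice away from 'right'; the pull is strongest when the
# stimulus is weak, mimicking history biases under perceptual uncertainty.
for stim in (0.0, 0.1, 0.5):
    p = choice_probability(stimulus=stim, prev_choice=+1, prev_correct=False)
    print(f"stimulus={stim:+.1f}  P(choose right)={p:.2f}")
```

In this toy parameterization, setting w_success and w_failure to different magnitudes captures the asymmetric success and failure biases observed across subjects, something a symmetric reinforcement learning update cannot express.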