Model-based reinforcement learning under concurrent schedules of reinforcement in rodents.
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huh, N | - |
dc.contributor.author | Jo, S | - |
dc.contributor.author | Kim, H | - |
dc.contributor.author | Sul, JH | - |
dc.contributor.author | Jung, MW | - |
dc.date.accessioned | 2010-11-29T04:45:30Z | - |
dc.date.available | 2010-11-29T04:45:30Z | - |
dc.date.issued | 2009 | - |
dc.identifier.issn | 1072-0502 | - |
dc.identifier.uri | http://repository.ajou.ac.kr/handle/201003/332 | - |
dc.description.abstract | Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's knowledge or model of the environment in model-based reinforcement learning algorithms. To investigate how animals update value functions, we trained rats under two different free-choice tasks. The reward probability of the unchosen target remained unchanged in one task, whereas it increased over time since the target was last chosen in the other task. The results show that goal choice probability increased as a function of the number of consecutive alternative choices in the latter, but not the former task, indicating that the animals were aware of time-dependent increases in arming probability and used this information in choosing goals. In addition, the choice behavior in the latter task was better accounted for by a model-based reinforcement learning algorithm. Our results show that rats adopt a decision-making process that cannot be accounted for by simple reinforcement learning models even in a relatively simple binary choice task, suggesting that rats can readily improve their decision-making strategy through the knowledge of their environments. | - |
dc.format | text/plain | - |
dc.language.iso | en | - |
dc.subject.MESH | Algorithms | - |
dc.subject.MESH | Animals | - |
dc.subject.MESH | Decision Making | - |
dc.subject.MESH | Models, Neurological | - |
dc.subject.MESH | Models, Theoretical | - |
dc.subject.MESH | Rats | - |
dc.subject.MESH | Reinforcement (Psychology) | - |
dc.subject.MESH | Reward | - |
dc.title | Model-based reinforcement learning under concurrent schedules of reinforcement in rodents. | - |
dc.type | Article | - |
dc.identifier.pmid | 19403794 | - |
dc.identifier.url | http://www.learnmem.org/cgi/pmidlookup?view=long&pmid=19403794 | - |
dc.contributor.affiliatedAuthor | 허, 남정 | - |
dc.contributor.affiliatedAuthor | 정, 민환 | - |
dc.type.local | Journal Papers | - |
dc.identifier.doi | 10.1101/lm.1295509 | - |
dc.citation.title | Learning & memory (Cold Spring Harbor, N.Y.) | - |
dc.citation.volume | 16 | - |
dc.citation.number | 5 | - |
dc.citation.date | 2009 | - |
dc.citation.startPage | 315 | - |
dc.citation.endPage | 323 | - |
dc.identifier.bibliographicCitation | Learning & memory (Cold Spring Harbor, N.Y.), 16(5):315-323, 2009 | - |
dc.identifier.eissn | 1549-5485 | - |
dc.relation.journalid | J010720502 | - |
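
The abstract above contrasts a simple (model-free) reinforcement-learning update, in which only the chosen target's value is revised by trial and error, with a model-based one, in which the decision-maker's knowledge that the unchosen target's arming probability grows over time enters the valuation. The sketch below is purely illustrative and is not the authors' model from the paper; the learning rate `ALPHA`, the per-trial arming probability `P_ARM`, and the cumulative arming rule 1 − (1 − p)^n are assumptions chosen only to make the distinction concrete.

```python
# Illustrative sketch only: NOT the authors' model specification.
# ALPHA, P_ARM, and the cumulative arming rule below are assumptions.

import random

ALPHA = 0.1   # assumed learning rate for the simple (model-free) learner
P_ARM = 0.3   # assumed per-trial probability that an unarmed target becomes armed


def simple_rl_update(q_values, choice, reward):
    """Model-free update: only the chosen target's value changes,
    driven by the trial-and-error prediction error."""
    q_values[choice] += ALPHA * (reward - q_values[choice])
    return q_values


def model_based_value(trials_since_chosen, p_arm=P_ARM):
    """Model-based estimate: if the animal knows arming accumulates over
    time, the chance a target is armed after n unchosen trials is
    1 - (1 - p_arm)**n."""
    return 1.0 - (1.0 - p_arm) ** trials_since_chosen


if __name__ == "__main__":
    q = [0.5, 0.5]          # model-free values for targets 0 and 1
    trials_since = [1, 1]   # trials since each target was last chosen

    for t in range(5):
        # the model-based learner values the longer-unchosen target more highly
        mb_values = [model_based_value(n) for n in trials_since]
        choice = mb_values.index(max(mb_values))
        reward = 1 if random.random() < mb_values[choice] else 0

        q = simple_rl_update(q, choice, reward)
        trials_since = [1 if a == choice else n + 1
                        for a, n in enumerate(trials_since)]
        print(f"trial {t}: choice={choice}, reward={reward}, "
              f"model-free q={q}, model-based values={mb_values}")
```

Under this toy rule the model-based value of whichever target has gone unchosen for longer keeps rising, which is the alternation-dependent choice pattern the abstract describes, whereas the model-free values change only for the chosen target.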