
Item details


Released

Talk

Generative and discriminative reinforcement learning as model-based and model-free control

MPS-Authors

Dayan,  P
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Dayan, P. (2022). Generative and discriminative reinforcement learning as model-based and model-free control. Talk presented at Second International Conference on Error-Driven Learning in Language (EDLL 2022). Tübingen, Germany. 2022-08-01 - 2022-08-03.


Cite as: https://hdl.handle.net/21.11116/0000-000A-D612-0
Abstract
Substantial recent work has explored multiple mechanisms of decision-making in humans and other animals. Functionally and anatomically distinct modules have been identified, and their individual properties have been examined using intricate behavioural and neural tools. One critical distinction, which is related to many popular psychological dichotomies, is between model-based or goal-directed control, which is reflective and depends on prospective reasoning, and model-free or habitual control, which is reflexive and depends on retrospective learning. I will show how to see these two systems in generative and discriminative terms, respectively, and discuss their interaction and integration.
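To make the contrast in the abstract concrete, here is a minimal, illustrative sketch in a toy tabular MDP, not the speaker's formulation: a model-free learner caches action values retrospectively via temporal-difference updates, while a model-based controller evaluates actions prospectively by rolling a transition model forward at decision time. The environment, function names, and parameters below are assumptions chosen only for illustration.

```python
# Illustrative sketch only: contrasts model-free (retrospective, cached values)
# and model-based (prospective, model-consulting) control in a tiny tabular MDP.
# All names (step, q_learning_update, plan_value) are hypothetical, not from the talk.
import random

# A 3-state chain: action 0 moves left, action 1 moves right; reaching state 2 pays reward 1.
N_STATES, N_ACTIONS, GOAL = 3, 2, 2

def step(state, action):
    """Transition/reward model of the toy MDP."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# --- Model-free ("discriminative"/habitual) control: learn cached action values
# retrospectively from sampled experience; the model is never consulted at choice time.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma = 0.1, 0.9

def q_learning_update(s, a, r, s_next):
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

for _ in range(2000):
    s = random.randrange(N_STATES)
    a = random.randrange(N_ACTIONS)
    s_next, r = step(s, a)
    q_learning_update(s, a, r, s_next)

# --- Model-based ("generative"/goal-directed) control: evaluate actions prospectively
# by rolling the model forward at decision time (finite-horizon lookahead).
def plan_value(s, depth):
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in range(N_ACTIONS):
        s_next, r = step(s, a)
        best = max(best, r + gamma * plan_value(s_next, depth - 1))
    return best

start = 0
mf_choice = max(range(N_ACTIONS), key=lambda a: Q[start][a])
mb_choice = max(range(N_ACTIONS),
                key=lambda a: step(start, a)[1] + gamma * plan_value(step(start, a)[0], 3))
print("model-free choice:", mf_choice, "model-based choice:", mb_choice)
```

In this toy setting both controllers end up preferring the same action, but they arrive at it differently: the habitual system relies on values accumulated from past experience, whereas the goal-directed system re-derives values from the model each time it chooses.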