Modeling the formation of social conventions from embodied real-time interactions

Publication date

2020-06-22

Abstract

What is the role of real-time control and learning in the formation of social conventions? To answer this question, we propose a computational model that matches human behavioral data in a social decision-making game analyzed in both discrete-time and continuous-time setups. Furthermore, unlike previous approaches, our model takes into account the role of sensorimotor control loops in embodied decision-making scenarios. For this purpose, we introduce the Control-based Reinforcement Learning (CRL) model. CRL is grounded in the Distributed Adaptive Control (DAC) theory of mind and brain, where low-level sensorimotor control is modulated through perceptual and behavioral learning in a layered structure. CRL follows these principles by implementing a feedback control loop that handles the agent's reactive behaviors (pre-wired reflexes), along with an Adaptive Layer that uses reinforcement learning to maximize long-term reward. We test our model in a multi-agent game-theoretic task in which coordination must be achieved to find an optimal solution. We show that CRL is able to reach human-level performance on standard game-theoretic metrics such as efficiency in acquiring rewards and fairness in reward distribution.
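The abstract describes a two-layer architecture: a reactive feedback loop implementing pre-wired reflexes, modulated by an adaptive layer that learns from reward. The following is an illustrative sketch of that idea, not the authors' implementation; the class, its method names, and the override rule for layer arbitration are all hypothetical, and the adaptive layer is rendered here as plain tabular Q-learning.

```python
import random


class CRLAgent:
    """Sketch of a Control-based Reinforcement Learning (CRL) style agent:
    a reactive layer (pre-wired reflex) whose behavior is modulated by an
    adaptive layer (tabular Q-learning). Names and arbitration rule are
    illustrative assumptions, not the published model."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def reactive_action(self, stimulus_direction):
        # Pre-wired reflex: orient toward the perceived stimulus.
        return 0 if stimulus_direction < 0 else 1

    def adaptive_action(self, state):
        # Epsilon-greedy selection over learned Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        row = self.q[state]
        return row.index(max(row))

    def act(self, state, stimulus_direction):
        # Simplified arbitration: once the adaptive layer has accumulated
        # positive value estimates for this state, it overrides the reflex;
        # until then, the reactive layer drives behavior.
        if max(self.q[state]) > 0.0:
            return self.adaptive_action(state)
        return self.reactive_action(stimulus_direction)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[next_state])
        td_error = reward + self.gamma * best_next - self.q[state][action]
        self.q[state][action] += self.alpha * td_error
```

In the paper's multi-agent setting, two such agents would each call `act` every timestep and `update` on observed rewards, with conventions emerging from the interaction of their control loops; the hard-override arbitration above is only one simple way to couple the two layers.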

Document Type

Article


Version

Published version

Language

English

Publisher

Public Library of Science

Related items

Reproduction of the document published at: https://doi.org/10.1371/journal.pone.0234434

PLoS ONE, 2020, vol. 15, no. 6, p. e0234434

https://doi.org/10.1371/journal.pone.0234434

info:eu-repo/grantAgreement/EC/H2020/820742/EU//HR-Recycler

info:eu-repo/grantAgreement/EC/H2020/641321/EU//socSMCs


Rights

CC BY (c) Freire, Ismael T. et al., 2020

http://creativecommons.org/licenses/by/3.0/es/
