
Transactions of the Institute of Systems, Control and Information Engineers Vol. 35 (2022), No. 3

ONLINE ISSN: 2185-811X
PRINT ISSN: 1342-5668
Publisher: THE INSTITUTE OF SYSTEMS, CONTROL AND INFORMATION ENGINEERS (ISCIE)

Back Numbers

  1. Vol. 36 (2023)

  2. Vol. 35 (2022)

  3. Vol. 34 (2021)

  4. Vol. 33 (2020)

  5. Vol. 32 (2019)

  6. Vol. 31 (2018)

  7. Vol. 30 (2017)

  8. Vol. 29 (2016)

  9. Vol. 28 (2015)

  10. Vol. 27 (2014)

  11. Vol. 26 (2013)

  12. Vol. 25 (2012)

  13. Vol. 24 (2011)

  14. Vol. 23 (2010)

  15. Vol. 22 (2009)

  16. Vol. 21 (2008)

  17. Vol. 20 (2007)

  18. Vol. 19 (2006)

  19. Vol. 18 (2005)

  20. Vol. 17 (2004)

  21. Vol. 16 (2003)

  22. Vol. 15 (2002)

  23. Vol. 14 (2001)

  24. Vol. 13 (2000)

  25. Vol. 12 (1999)

  26. Vol. 11 (1998)

  27. Vol. 10 (1997)

  28. Vol. 9 (1996)

  29. Vol. 8 (1995)

  30. Vol. 7 (1994)

  31. Vol. 6 (1993)

  32. Vol. 5 (1992)

  33. Vol. 4 (1991)

  34. Vol. 3 (1990)

  35. Vol. 2 (1989)

  36. Vol. 1 (1988)


Centralized and Accelerated Multiagent Reinforcement Learning Method with Automatic Reward Setting

Kaoru Sasaki, Hitoshi Iima

pp. 39-47

Abstract

In multiagent environments, a centralized reinforcement learner can find optimal policies, but doing so is time-consuming. An existing method accelerates the search for optimal policies by combining the centralized learner with supplemental independent learners. To prevent learning from failing, the independent learners must be stopped at the right time, which is achieved by finely tuning a reward; this reward tuning, however, requires additional time and effort. This paper proposes a reinforcement learning method in which the reward is set automatically.
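
The abstract contrasts a centralized learner, which learns over joint actions, with independent learners that each treat the other agents as part of the environment. As a rough, generic illustration of that distinction only (not the paper's algorithm; the action sets, reward handling, and hyperparameters below are hypothetical placeholders), a tabular Q-learning sketch might look like this:

```python
# Generic sketch: centralized (joint-action) vs. independent tabular Q-learning.
# NOT the proposed method; all constants and action sets are assumptions.
import itertools
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1          # assumed hyperparameters
AGENT_ACTIONS = [0, 1, 2]                        # assumed per-agent action set
JOINT_ACTIONS = list(itertools.product(AGENT_ACTIONS, repeat=2))  # two agents

q_central = defaultdict(float)                   # Q[(state, joint_action)]
q_indep = [defaultdict(float), defaultdict(float)]  # one table per agent

def eps_greedy(q, state, actions):
    """Pick a random action with probability EPSILON, otherwise the greedy one."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def central_update(state, joint_action, reward, next_state):
    """Centralized learner: a single Q-table over the joint action space."""
    best_next = max(q_central[(next_state, a)] for a in JOINT_ACTIONS)
    td = reward + GAMMA * best_next - q_central[(state, joint_action)]
    q_central[(state, joint_action)] += ALPHA * td

def independent_update(i, state, action, reward, next_state):
    """Independent learner i: treats the other agent as part of the environment."""
    best_next = max(q_indep[i][(next_state, a)] for a in AGENT_ACTIONS)
    td = reward + GAMMA * best_next - q_indep[i][(state, action)]
    q_indep[i][(state, action)] += ALPHA * td
```

The centralized table grows with the joint action space (which is why it is slow to learn), while each independent table only covers one agent's actions; the paper's contribution concerns how the reward that stops the independent learners is set automatically, which this sketch does not attempt to reproduce.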


Utility Design via Egalitarian Non-Separable Contribution for Distributed Welfare Games

Ayumi Makabe, Takayuki Wada, Yasumasa Fujisaki

pp. 48-54

Abstract

A distributed welfare game is a game-theoretic model of a resource allocation problem in which an allocation is sought that maximizes the system operator's objective function. To determine an allocation in a distributed way, each agent is assigned an admissible utility function so that the resulting game possesses desirable properties, such as scalability, efficiency of pure Nash equilibria, and budget balance. To this end, a marginal-contribution-based utility design is proposed. The proposed utility function requires less computational effort than previous approaches, while achieving the same efficiency as the conventional utility design via the Shapley value.
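
For readers unfamiliar with utility design in distributed welfare games, the sketch below illustrates the two baseline rules the abstract refers to: a marginal-contribution utility and the Shapley-value rule, which reduces to an equal share for an anonymous per-resource welfare function. It is a generic textbook-style example with a hypothetical welfare function, not the egalitarian non-separable contribution rule proposed in the paper.

```python
# Generic sketch of utility design in a distributed welfare game.
# The per-resource welfare function is an assumption for illustration only.
from collections import Counter

def resource_welfare(k):
    """Assumed anonymous welfare of one resource covered by k agents."""
    return 0.0 if k == 0 else 1.0 - 0.5 ** k    # diminishing returns

def system_welfare(allocation):
    """Total welfare: sum of per-resource welfare, allocation[i] = agent i's resource."""
    counts = Counter(allocation)                 # resource -> number of agents
    return sum(resource_welfare(k) for k in counts.values())

def marginal_contribution_utility(i, allocation):
    """U_i = W(a) - W(a with agent i removed)."""
    without_i = allocation[:i] + allocation[i + 1:]
    return system_welfare(allocation) - system_welfare(without_i)

def shapley_utility(i, allocation):
    """Shapley-value rule for anonymous welfare: agent i receives W_r(k) / k
    at its chosen resource r, where k agents selected r."""
    counts = Counter(allocation)
    r = allocation[i]
    return resource_welfare(counts[r]) / counts[r]

# Example: three agents choosing between resources "A" and "B".
alloc = ["A", "A", "B"]
print([round(marginal_contribution_utility(i, alloc), 3) for i in range(3)])
print([round(shapley_utility(i, alloc), 3) for i in range(3)])
```

Both rules align individual incentives with the system welfare; the trade-offs among computational effort, efficiency of the resulting equilibria, and budget balance are exactly what the proposed design addresses.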

