
Transactions of the Institute of Systems, Control and Information Engineers Vol. 35 (2022), No. 3

Online ISSN: 2185-811X
Print ISSN: 1342-5668
Publisher: The Institute of Systems, Control and Information Engineers (ISCIE)

Centralized and Accelerated Multiagent Reinforcement Learning Method with Automatic Reward Setting

Kaoru Sasaki, Hitoshi Iima

pp. 39-47

Abstract

For multiagent environments, a centralized reinforcement learner can find optimal policies, but it is time-consuming. A method has been proposed that accelerates the search for optimal policies by using the centralized learner in combination with supplemental independent learners. To prevent learning from failing, the independent learners must be stopped in a timely manner, which is achieved by finely tuning a reward. This reward tuning, however, requires additional time and effort. This paper proposes a reinforcement learning method in which the reward is set automatically.
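
The following Python sketch illustrates the general setting the abstract describes: a centralized Q-learner over joint actions trained alongside supplemental independent learners on a toy two-agent matrix game. It is not the authors' method; the payoff matrix, the exploration schedule, and the fixed-step rule used here to stop the independent learners are assumptions made purely for illustration (the paper's contribution is precisely to set the stopping reward automatically rather than by hand).

# Illustrative sketch only (not the authors' algorithm): a centralized
# Q-learner over joint actions run alongside independent per-agent learners
# on an assumed two-agent, three-action cooperative matrix game.
import random

# Assumed joint reward table; rows are agent 1's actions, columns agent 2's.
PAYOFF = [
    [11, -30, 0],
    [-30, 7, 6],
    [0, 0, 5],
]
ACTIONS = (0, 1, 2)


def eps_greedy(q_values, actions, eps):
    """Pick a random action with probability eps, otherwise a greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    best = max(q_values[a] for a in actions)
    return random.choice([a for a in actions if q_values[a] == best])


def run(episodes=5000, alpha=0.1, eps=0.1, stop_independent_at=1000):
    # Centralized learner: one Q-value per joint action (a1, a2).
    q_central = {(a1, a2): 0.0 for a1 in ACTIONS for a2 in ACTIONS}
    # Independent learners: each agent keeps a Q-value per own action.
    q_ind = [{a: 0.0 for a in ACTIONS}, {a: 0.0 for a in ACTIONS}]

    for t in range(episodes):
        if t < stop_independent_at:
            # Early phase: the supplemental independent learners act.
            a1 = eps_greedy(q_ind[0], ACTIONS, eps)
            a2 = eps_greedy(q_ind[1], ACTIONS, eps)
        else:
            # Independent learners stopped; the centralized learner takes over.
            a1, a2 = eps_greedy(q_central, list(q_central), eps)
        reward = PAYOFF[a1][a2]
        # All tables learn from the same experience (stateless game,
        # so the update target is just the immediate reward).
        q_central[(a1, a2)] += alpha * (reward - q_central[(a1, a2)])
        q_ind[0][a1] += alpha * (reward - q_ind[0][a1])
        q_ind[1][a2] += alpha * (reward - q_ind[1][a2])

    return max(q_central, key=q_central.get)


if __name__ == "__main__":
    print("learned joint action:", run())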

Utility Design via Egalitarian Non-Separable Contribution for Distributed Welfare Games

Ayumi Makabe, Takayuki Wada, Yasumasa Fujisaki

pp. 48-54

Abstract

A distributed welfare game is a game-theoretic model of a resource allocation problem in which the goal is to find an allocation that maximizes the objective function of the system operator. In order to determine an allocation in a distributed way, each agent is assigned an admissible utility function such that the resulting game possesses desirable properties, for example, scalability, efficiency of pure Nash equilibria, and budget balance. To this end, a marginal contribution-based utility design is proposed. This utility function requires less computational effort than previous designs, while achieving the same efficiency as the conventional utility design via the Shapley value.
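
As background for the abstract above, the following Python sketch shows the generic marginal-contribution utility in a toy distributed welfare game; it is not the paper's egalitarian non-separable contribution rule. The resources, the submodular welfare function, and the sample allocation are assumptions made for illustration only.

# Illustrative sketch only: marginal-contribution utility U_i = W(a) - W(a_{-i})
# in an assumed toy distributed welfare game.

# Each resource has a base value; its contribution to the system welfare
# saturates as more agents select it (diminishing returns, assumed).
RESOURCE_VALUE = {"r1": 10.0, "r2": 6.0, "r3": 4.0}


def welfare(allocation):
    """System objective: sum over resources of v_r * (1 - 0.5 ** n_r),
    where n_r is the number of agents that selected resource r."""
    total = 0.0
    for resource, value in RESOURCE_VALUE.items():
        n = sum(1 for choice in allocation.values() if choice == resource)
        total += value * (1.0 - 0.5 ** n)
    return total


def marginal_contribution_utility(agent, allocation):
    """Welfare with the agent present minus welfare with the agent removed."""
    without_agent = {j: r for j, r in allocation.items() if j != agent}
    return welfare(allocation) - welfare(without_agent)


if __name__ == "__main__":
    allocation = {"agent1": "r1", "agent2": "r1", "agent3": "r2"}
    print("system welfare:", round(welfare(allocation), 3))
    for agent in allocation:
        utility = marginal_contribution_utility(agent, allocation)
        print(agent, "->", allocation[agent], "utility:", round(utility, 3))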
