# ma-gym

A collection of multi-agent environments based on OpenAI Gym.


## Installation

```bash
git clone https://github.com/koulanurag/ma-gym.git
cd ma-gym
pip install -e .
```

## Usage

```python
import gym
import ma_gym  # importing ma_gym registers the multi-agent environments with gym

env = gym.make('Switch2-v0')
done_n = [False for _ in range(env.n_agents)]
ep_reward = 0

obs_n = env.reset()
while not all(done_n):
    env.render()
    # step takes one action per agent and returns per-agent observations, rewards, and done flags
    obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
    ep_reward += sum(reward_n)
env.close()
```
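Every quantity in the loop above is per-agent (lists of length `env.n_agents`). The sketch below is illustrative only; it assumes nothing beyond the API shown above plus the standard Gym `observation_space`/`action_space` attributes, and inspects the per-agent spaces before running a few episodes without rendering:

```python
import gym
import ma_gym

env = gym.make('Switch2-v0')

# Per-agent spaces: one action/observation space entry per agent.
print('agents:', env.n_agents)
print('action space:', env.action_space)
print('observation space:', env.observation_space)

# Run a few episodes without rendering and report the total team reward per episode.
for episode in range(3):
    done_n = [False] * env.n_agents
    ep_reward = 0
    obs_n = env.reset()
    while not all(done_n):
        obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
        ep_reward += sum(reward_n)
    print('episode', episode, 'team reward:', ep_reward)

env.close()
```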

Please refer to the Wiki for complete usage details.

## Environments

- Checkers
- Combat
- PredatorPrey
- Pong Duel (two-player pong game)
- Switch
- Lumberjacks

Note: OpenAI Gym environments can also be used in multi-agent form by adding the prefix `ma_`, e.g. `ma_CartPole-v0`. This returns an instance of `CartPole-v0` wrapped in the multi-agent interface with a single agent, which is helpful for debugging (see the sketch below).
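As a rough illustration of that debugging workflow (a sketch, assuming the `ma_`-wrapped environment exposes the same `n_agents` and list-valued `reset`/`step` API as the native ma-gym environments):

```python
import gym
import ma_gym  # registers the ma_-prefixed single-agent wrappers as well

# CartPole-v0 exposed through the multi-agent interface: actions, observations,
# rewards, and done flags are all lists, here of length 1 (a single agent).
env = gym.make('ma_CartPole-v0')

done_n = [False] * env.n_agents   # assumed to be 1 for a wrapped single-agent env
obs_n = env.reset()
while not all(done_n):
    obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
env.close()
```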

Please refer to the Wiki for more details.

## Zoo!

Demo GIFs are available for each environment: Checkers-v0, Combat-v0, Lumberjacks-v0, PongDuel-v0, PredatorPrey5x5-v0, PredatorPrey7x7-v0, Switch2-v0, and Switch4-v0.

## Testing

- Install: `pip install pytest`
- Run: `pytest`

## Reference

Please use this BibTeX entry if you would like to cite it:

```bibtex
@misc{magym,
  author = {Koul, Anurag},
  title = {ma-gym: Collection of multi-agent environments based on OpenAI gym.},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/koulanurag/ma-gym}},
}
```

## Acknowledgement

This project was developed to complement my research internship at SAS.