
Thread: AlphaGo Zero Article

  1. #11
    Basic Member
    Join Date
    Sep 2017
    Posts
    56
    I suppose the only real way to get machine learning working for people who just download a workshop mod is to use HTTPRequest to talk to something you run locally, build a model/matrices/some set of variables there, write those to a file you put in your bot folder, then load that file. Clumsy, but it could work.
    Or, technically, you could do the learning in Lua code, dump some JSON to the console, and copy-paste it out into files.
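    A minimal sketch of the first workflow described above (all names here are hypothetical, not part of any real API): a local Python server the bot could reach via HTTPRequest, which accepts POSTed game state and can dump its current parameters to a file for the bot to pick up later.

    ```python
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Stand-in for real learned parameters (hypothetical structure).
    model = {"weights": [0.1, 0.2]}

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the game state the bot POSTed as JSON.
            length = int(self.headers.get("Content-Length", 0))
            state = json.loads(self.rfile.read(length) or b"{}")
            # ... update `model` from `state` here ...
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def dump_model(path):
        """Write the learned parameters where the bot scripts can find them."""
        with open(path, "w") as f:
            json.dump(model, f)

    def run(port=8000):
        """Start the local endpoint the bot's HTTPRequest would target."""
        HTTPServer(("127.0.0.1", port), Handler).serve_forever()
    ```

    You would call `run()` alongside the game, then copy or point the dumped file into your bot folder between sessions.
    
    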

  2. #12
    Basic Member
    Join Date
    Mar 2012
    Posts
    2,017
    Quote Originally Posted by Siesta Guru View Post
    ...write these to a file you put in your bot folder, then load that file.
    There is no io library in the D2 API, so you can't really load that file. And since you don't have GET requests, you can't use HTTPRequest to receive the data from an outside app either. This is one of the problems the others raised.
    Explanations on the normal, high and very high brackets in replays: here, here & here
    Why maphacks won't work in D2: here

  3. #13
    Basic Member
    Join Date
    Sep 2017
    Posts
    56
    There's require().
    Write the data to a .lua file containing a function that returns a huge formatted JSON string, or a table, or something along those lines.
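    A sketch of that require() workaround: since the bot API has no io library, you generate the .lua module outside the game (the file path and serializer below are my own assumptions), so that bot code can load it with a plain require().

    ```python
    def to_lua(value):
        """Serialize a Python bool/number/str/list/dict into a Lua literal."""
        if isinstance(value, bool):  # must come before int: bool is an int subclass
            return "true" if value else "false"
        if isinstance(value, (int, float)):
            return repr(value)
        if isinstance(value, str):
            return '"%s"' % value.replace('"', '\\"')
        if isinstance(value, list):
            return "{" + ", ".join(to_lua(v) for v in value) + "}"
        if isinstance(value, dict):  # assumes string keys
            return "{" + ", ".join('["%s"] = %s' % (k, to_lua(v))
                                   for k, v in value.items()) + "}"
        raise TypeError("unsupported type: %r" % type(value))

    def write_lua_module(path, data):
        """Write a .lua file that returns `data` as a table when require()d."""
        with open(path, "w") as f:
            f.write("return " + to_lua(data) + "\n")

    # e.g. write_lua_module("bots/model_data.lua", {"weights": [0.5, -1.25]})
    # ...then in bot Lua code:  local model = require("model_data")
    ```
    
    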

  4. #14
    Basic Member
    Join Date
    Dec 2016
    Posts
    731
    Quote Originally Posted by The Nomad View Post
    There is no io library in the D2 API, so you can't really load that file. And since you don't have GET requests, you can't use HTTPRequest to receive the data from an outside app either. This is one of the problems the others raised.
    You can do a GET by implementing the POST as a polling activity and having the server send data as part of the POST response.
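    A sketch of that GET-over-POST polling idea (the server shape and message format here are assumptions for illustration): the bot can only POST, so it polls regularly, and the server smuggles any pending data back in the body of the POST response.

    ```python
    import json
    from collections import deque
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Data waiting to be delivered to the bot on its next poll.
    outbox = deque()

    def queue_for_bot(message):
        outbox.append(message)

    def next_response():
        """Body for the bot's next poll: one pending message, or empty."""
        payload = outbox.popleft() if outbox else {}
        return json.dumps(payload).encode()

    class PollHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Consume whatever the bot reported (ignored in this sketch).
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)
            # Answer the poll with any queued data -- the "GET" half.
            body = next_response()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
    ```
    
    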

  5. #15
    Basic Member
    Join Date
    Dec 2015
    Posts
    1
    It's a bit sad: I assume there are a lot of people interested in RL who would like to get their hands on Dota (including me) but can't, because the API is not suited to the task.
    You need a completely simulated environment, which allows you to

    1) Speed games up
    2) Run multiple games in parallel (e.g for A3C, ES)
    3) Potentially re-run replays for supervised learning
    4) Start and end games as you see fit.
    5) Has native bindings, so you can integrate with whatever you see fit.
    6) Ideally is graphic-less or renders to an offscreen buffer

    Right now, you would need to reverse-engineer the game and implement the above-mentioned features on your own, something that is out of scope for anybody but the very high rollers (DeepMind, Google Brain, OpenAI, etc.) who can afford to spend some human resources on this for a few months. For everybody else the entry fee is just too high, especially since it's quite unclear whether one can actually produce any results (probably not).

    The workarounds suggested here (even though they come with quite a lot of work) are not sufficient at all. Deep RL is just way too data-inefficient for us to afford running a game for up to 60 minutes. I think everybody who's into RL and wants to tackle some outstanding challenges will skip Dota and move to StarCraft directly. It's just a hell of a lot more convenient.

  6. #16
    Basic Member
    Join Date
    Dec 2016
    Posts
    731
    Quote Originally Posted by Paranaix View Post
    1) Speed games up
    That exists in bot games, lobby games, and custom games via the host_timescale console parameter. Sure, you can't set it through the API, but you can set it once by hand and it stays set until the client is exited.

    Quote Originally Posted by Paranaix View Post
    2) Run multiple games in parallel (e.g for A3C, ES)
    You can do that with multiple accounts and VMs if you have the hardware; the entry barrier is there, though. My hope is that eventually replays will be able to dump CMsgBotWorldStates, allowing us to process as many games in parallel as our hardware can handle for training/supervised learning. Otherwise we are out of luck here, unless multiple enthusiasts group up, work on a common RL approach, and use their own numbers (and possibly volunteers) to run enough games to provide the needed data.

    Quote Originally Posted by Paranaix View Post
    3) Potentially re-run replays for supervised learning
    So, no, we can't really do that; however, we do have the world-state dumps, which we can re-parse as much as we want for frame-by-frame dissection and re-interpretation.

    Quote Originally Posted by Paranaix View Post
    4) Start and end games as you see fit.
    That is doable with the custom games API, not with bots/lobbies.

    Quote Originally Posted by Paranaix View Post
    5) Has native bindings, so you can integrate with whatever you see fit.
    This is too generic to really address. If you make a list of what you feel is missing, perhaps it will be addressed?

    Quote Originally Posted by Paranaix View Post
    6) Ideally is graphic-less or renders to an offscreen buffer
    I swear that a long, long time ago @ChrisC posted that there was a headless way to run Dota 2, but don't ask me to find that thread.

  7. #17
    Basic Member
    Join Date
    Dec 2016
    Posts
    76
    Quote Originally Posted by Paranaix View Post
    It's a bit sad: I assume there are a lot of people interested in RL who would like to get their hands on Dota (including me) but can't, because the API is not suited to the task.
    You need a completely simulated environment, which allows you to

    1) Speed games up
    2) Run multiple games in parallel (e.g for A3C, ES)
    3) Potentially re-run replays for supervised learning
    4) Start and end games as you see fit.
    5) Has native bindings, so you can integrate with whatever you see fit.
    6) Ideally is graphic-less or renders to an offscreen buffer

    Right now, you would need to reverse-engineer the game and implement the above-mentioned features on your own, something that is out of scope for anybody but the very high rollers (DeepMind, Google Brain, OpenAI, etc.) who can afford to spend some human resources on this for a few months. For everybody else the entry fee is just too high, especially since it's quite unclear whether one can actually produce any results (probably not).

    The workarounds suggested here (even though they come with quite a lot of work) are not sufficient at all. Deep RL is just way too data-inefficient for us to afford running a game for up to 60 minutes. I think everybody who's into RL and wants to tackle some outstanding challenges will skip Dota and move to StarCraft directly. It's just a hell of a lot more convenient.
    For my project: https://github.com/lenLRX/Dota2_DPPO_bots
    1) DONE
    2) DONE
    3) No replays yet, but I have live output for the simulated game; replays wouldn't be that tough to implement.
    4, 5) What does "see fit" mean?
    6) DONE
    https://github.com/lenLRX/Dota2_DPPO_bots ----My ML bot work in progress

  8. #18
    Basic Member
    Join Date
    Sep 2017
    Posts
    56
    So I'm kind of lost reading your code, lenlrx. I see you're using some kind of HTTP requests, which I'm assuming come from a bot using the Dota API. But how are you sending info back, and how are you doing the parallel/headless games?

  9. #19
    Basic Member
    Join Date
    Dec 2016
    Posts
    76
    Quote Originally Posted by Siesta Guru View Post
    So I'm kind of lost reading your code, lenlrx. I see you're using some kind of HTTP requests, which I'm assuming come from a bot using the Dota API. But how are you sending info back, and how are you doing the parallel/headless games?
    Take a look at the last part of main.py: cppSimulator handles the simulator actions, mp_sim the parallel simulators, and start_env the HTTP requests. The Lua code is in another repo; since I'm spending all my time on the simulator, I'm not sure whether it is broken now.

  10. #20
    Basic Member
    Join Date
    Sep 2017
    Posts
    56
    So what do you mean by simulator, exactly? Are you trying to recreate the game?
