
Thread: Dota2 - Web-based AI

  1. #1
    Basic Member
    Join Date
    Dec 2016
    Posts
    644

    Dota2 - Web-based AI

    With the recent fix to CreateHTTPRequest(), I have started developing a web-based Dota 2 botting framework.

    Github Repo

    What's the Plan?
    The plan is to have the web-based Python framework control all the high-level/meta decisions for the bots. It will ingest raw data about the game provided by the Dota 2 API (i.e., hero/unit information, building status information, game progression information, etc.) and decide what each bot on the team should be doing at a high level. Each bot will still have a Lua implementation in-game and knowledge of how to execute directives passed down from the web-based decision-making system.

    For example, the web-based system might tell a bot "farm BOT_LANE". The bot will know how to do this by issuing the appropriate move or teleport commands to get there, and will then handle last hitting and denying on its own; the web-based framework will not tell the bot how to last hit, when to last hit, when to deny, etc. At least not for now, maybe later if we truly move into Reinforcement Learning AI.
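    The split described above could look something like this on the Python side. This is a hypothetical sketch, not the project's actual API: the directive names ("FARM", "BOT_LANE"), the gold threshold, and the HeroState fields are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HeroState:
    """Illustrative slice of the per-hero data the server might ingest."""
    name: str
    gold: int
    level: int

def assign_directive(hero: HeroState) -> dict:
    """Return a coarse directive; the in-game Lua layer decides HOW to execute it
    (movement, teleports, last hitting, denying)."""
    if hero.gold < 2000:
        return {"hero": hero.name, "action": "FARM", "target": "BOT_LANE"}
    return {"hero": hero.name, "action": "PUSH", "target": "MID_LANE"}

print(assign_directive(HeroState("lina", 800, 6)))
```

    The point of the design is visible in the return value: the server emits only a mode and a target, never individual move or attack commands.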

    Eventually, I might even have a web-based GUI that shows you a replica of your game as represented by the world state of the web server, and allows you to manually issue commands (or toggle toggles) to get the bots to take certain actions. For example, the GUI might have a button that instructs the bots to go to Roshan.

    Where is the AI?
    It won't exist until all the plumbing is complete; meaning, until I have basic bot playability and hard-coded logic working as orchestrated by the web-based back-end framework. Once that is complete, I can start leveraging available AI/Machine Learning/Reinforcement Learning Python libraries to start learning over certain aspects of the game (like formations, fight priority, etc.). The plan is to eventually get there, but it won't be in the near future. However, the whole reason for and design of this project is TO GET THERE!

    Even for the possibility of future AI, I needed a back-end server implementation of the logic, so that I can later allow proxy servers to feed a final aggregator server (perhaps hosted in AWS). Then everyone using this bot can run Python scripts I will write that redirect their localhost servers to the main server so that it can learn "at scale". This is because no single instance is likely to have enough test data (i.e., games played) to really train on and analyze.
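    The proxy-to-aggregator idea above might be sketched like this. Everything here is an assumption for illustration: the aggregator URL is a placeholder and the record fields are invented, since the actual upload format doesn't exist yet.

```python
import json

# Placeholder; the real aggregator endpoint does not exist yet.
AGGREGATOR_URL = "https://example-aggregator.invalid/ingest"

def build_upload(games: list) -> bytes:
    """Serialize a batch of locally played game records for the central learner."""
    payload = {"count": len(games), "games": games}
    return json.dumps(payload).encode("utf-8")

# A local proxy would then POST this body to AGGREGATOR_URL, e.g. with
# urllib.request.urlopen(urllib.request.Request(AGGREGATOR_URL, data=...)).
body = build_upload([{"winner": "radiant", "duration_s": 2100}])
```
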

    What about your Dota2-FullOverwrite Project?
    It is not going away for now, but ultimately it will become this new project. I plan to leverage a lot of the code from that project in this one. There was a reason I did a "full overwrite": to allow this project to eventually happen. The way that project was coded allows for an easy transition of the control logic. Much of that code will become the in-game directive-execution API for the bots, as instructed by the web server.

    I needed to port the decision logic to Python to leverage multi-threading, the existence of many research-based third-party libraries, etc., in order to eventually reach the dream of Dota 2 AI. It could have been C/C++/Java, but honestly, it essentially will be anyway: most Python AI libraries leverage numpy or pandas, which is C/C++ code under the hood.

    Can I Contribute?
    Sure. As with my other project, you are welcome to help. All I ask is that you drop me a note saying what you are doing and when you expect to be done (and if you happen to decide you don't have time, that's fine too, just let me know). If you have no idea how to help, just ask.

    This is a learning experience for me and a fun one (hopefully) as I'm passionate about AI. I tend to be very active about things I like doing so I am typically around to answer questions, discuss strategy, or just even chat about life.

  2. #2
    Basic Member
    Join Date
    Dec 2016
    Posts
    644
    One more thing: I just started this today and have devoted maybe an hour of my time so far, so there isn't much there yet. For now I just have basic bi-directional information flow between Dota 2 and the web-based server, with the start of a world instance representing the world state in Python memory.
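    The bi-directional flow described here could be sketched as follows: the Lua side POSTs a JSON world-state snapshot via CreateHTTPRequest(), and the Python server merges it into an in-memory world instance and answers with a directive. The field names and the placeholder directive are assumptions for illustration, not the project's real message format.

```python
import json

WORLD = {}  # in-memory world state, keyed by hero name

def handle_update(raw_body: bytes) -> bytes:
    """Merge one posted snapshot into the world state and reply with a directive.
    A real server would wire this into an HTTP request handler."""
    snapshot = json.loads(raw_body)
    WORLD[snapshot["hero"]] = snapshot
    directive = {"action": "FARM", "target": "BOT_LANE"}  # placeholder logic
    return json.dumps(directive).encode("utf-8")

reply = handle_update(b'{"hero": "lina", "hp": 500}')
```

    The reply bytes are what the Lua callback registered with CreateHTTPRequest() would receive and decode in-game.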

  3. #3
    maybe later if we truly move into the Reinforcement Learning AI.

    That would be awesome, since I've never seen a learning AI in-game; plus it would be fun to watch or play with/against it.

  4. #4
    Basic Member
    Join Date
    Dec 2016
    Posts
    62
    There is already dota2comm, which is enough for a Reinforcement Learning AI.
    I hope HTTP can do better.

  5. #5
    Basic Member
    Join Date
    Jul 2012
    Posts
    19
    Do you plan to have real-time communication with the server during the game, or just get weights for different states from the server at the start of the game and send results back for RL?

  6. #6
    Basic Member
    Join Date
    Dec 2016
    Posts
    644
    Quote Originally Posted by SarCasm View Post
    Do you plan to have real-time communication with the server during the game, or just get weights for different states from the server at the start of the game and send results back for RL?
    For now I will have real-time comms. Eventually, when we get to RL, we will still have real-time comms, with the option of dumping transition weights to a Lua file that can be run on a LAN.

  7. #7
    Basic Member
    Join Date
    Dec 2016
    Posts
    644
    Did some initial comms testing, and I have no problem keeping up with a transmission frequency of 0.1 sec. At frame rate (60 FPS is a 0.016 sec period) I can see some issues with asynchronous socket delays (meaning we send a second message to the web server before the first reply from the web server has arrived). However, because we are doing high-level decision making at the web server (more like mode transitions and high-level concept directives), a 0.1 sec rate is plenty fast, perhaps even overkill.
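    One common way to cope with the out-of-order reply problem mentioned above is to tag each outgoing message with a sequence number and drop any reply older than the newest one already applied. This is a generic illustrative sketch, not code from the project.

```python
class CommsChannel:
    """Tracks sequence numbers so stale replies from earlier ticks are ignored."""

    def __init__(self):
        self.last_sent = 0
        self.last_applied = 0

    def next_message(self, payload: dict) -> dict:
        """Wrap an outgoing world-state update with a fresh sequence number."""
        self.last_sent += 1
        return {"seq": self.last_sent, "payload": payload}

    def accept_reply(self, reply: dict) -> bool:
        """Apply a reply only if it is newer than anything already applied."""
        if reply["seq"] <= self.last_applied:
            return False  # stale reply from an earlier tick; drop it
        self.last_applied = reply["seq"]
        return True
```

    At a 0.1 sec send rate this rarely triggers, but at frame rate it would let the bot always act on the freshest directive.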

  8. #8
    Quote Originally Posted by lenlrx View Post
    there are already dota2comm which is enough for Reinforcement Learning AI.
    i hope http can do better
    It would be awesome to watch a learning AI in action, like in an offline match of Team Fortress 2, Left 4 Dead 2, or even GTA 5 / Bully: SE. (It would be interesting to play a multiplayer online GTA game with all learning AIs, or even 'pro' AIs, and just watch all the police going nuts after them.)

  9. #9
    Fingers crossed... when I start doing bot stuff again I'll probably try to contribute to this rather than working on my own stuff, as this seems like a solid plan/foundation.

    Having to store the 'history' of the game so far in global Lua tables, which was necessary for some decisions, was kind of painful.

    This is very similar, idea-wise, to what lightbringer was doing/had kind of set up before the bot API existed:
    https://github.com/lightbringer/dota2ai

    but it would use/refer to the bot-API stuff rather than generic custom-game stuff.

    Python is kind of my bread and butter, so I would like to learn/practice something new. If you have no objections, I would enjoy trying a parallel version of this project in a lower-level language (I would copy the algorithms/structure, just a raw translation, but I would note that it is a copy of this).


    Also, I still think the best approach for ML Dota bots is going to be learning from pro players/real players via replay parsing.
    I'm going to investigate whether I can get some kind of 'sync' between clarity world states and bot-API world states for the inputs, as well as between player actions and bot-API actions for the outputs.


    Edit: also, although numpy releases the GIL for much of its work, watch out for the GIL elsewhere in your code if you try to use Python for multi-threading. Pretty sure Guido himself basically advocates multiprocessing rather than multi-threading for Python.
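    The multiprocessing point above in a nutshell: CPU-bound pure-Python code does not speed up with threads because of the GIL, so the usual answer is separate processes, each with its own interpreter. A minimal sketch (score_state is a made-up stand-in for a CPU-heavy game-state evaluation):

```python
from multiprocessing import Pool

def score_state(x: int) -> int:
    # Stand-in for a CPU-heavy evaluation of one game state.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    # Separate processes, each with its own GIL, so the work truly runs in parallel.
    with Pool(4) as pool:
        results = pool.map(score_state, [1000, 2000, 3000])
    print(results)
```
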
    Last edited by TheP1anoDentist; 05-24-2017 at 04:36 AM.
    https://github.com/ThePianoDentist/t...dentistdotabot Lina bot which pulls small camp when 'laning' (Aim to work on pulling and stacking bots initially)
    https://github.com/ThePianoDentist/dotabots-ml-tools Parsing data from bot games

  10. #10
    Quote Originally Posted by lenlrx View Post
    there are already dota2comm which is enough for Reinforcement Learning AI.
    i hope http can do better
    Also, I'd like to see some 1v1 solo mid bots, since I haven't seen any of them, tbh. Plus that would be a good way to start testing bots, since there's no need to worry about other heroes and other lanes.
