printz Posted June 16, 2016

mgr_inz_rafal said:
Currently it runs outside of the Doom engine - it's just a separate program which reads a map and creates a path that is later executed by the Doom engine (ZDoom, to be precise).

Well, your idea is already a step ahead of mine. Ideally a bot should only use the output accessible to the human user. We're lucky Doom is open source.
actinide2k9 Posted June 28, 2016

How about using a neural network in combination with a genetic algorithm for training? Create a source port that does not render graphics (for maximum speed when training) and try to train the NN for x generations.

There are several ways to realize this. One is to use only visual input to train the network; this would probably not work, or would take ages. Another way is to give monster and item locations as input alongside the visual input. Alternatively you could give the level data, player data, monster data and item data as inputs. I guess this would be the best way to create a NN that could still handle it. It would still take ages to train, though, and there is a chance it will never be good enough. Just throwing some random thoughts in here ;)

I still have some neural network code lying around from an autonomous robot I built for my Computer Engineering Bachelor's degree... Maybe I could try to do something with that :P The only thing I would not like to do is build a renderless source port for this... Does anyone know if Chocolate Doom is open source? Maybe I could use that.

EDIT: There is a very, very big chance it cannot handle traps and things like that, of course... I'm just curious how a NN would handle it.

Thinking about it even more... as Maes suggested, recursively solving tasks might be a very nice way to handle it. In combination with a NN for decision making, it would probably be possible to get a decent bot for a few simple levels.

MOAR EDIT: I looked at the source of Chocolate Doom... is there any C++ source? I'm more proficient with OO programming than with functional programming.
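A minimal sketch of the neuroevolution idea above, assuming a fixed-topology feedforward network whose weights are evolved by a genetic algorithm. Everything game-related is stubbed out: run_episode is a hypothetical placeholder for a headless rollout in a renderless source port, and the feature/action counts are made-up numbers, not anything a real port exposes.

```python
# Sketch only: evolve the weights of a small policy network with a GA.
# run_episode() is a placeholder; in a real setup it would play one headless
# episode and return a fitness score (e.g. exit progress + kills - deaths).
import numpy as np

N_INPUTS = 16    # e.g. player health/ammo, relative positions of nearest monsters/items
N_HIDDEN = 24
N_OUTPUTS = 5    # e.g. forward, backward, turn left, turn right, fire
POP_SIZE = 50
GENERATIONS = 100
MUTATION_STD = 0.1

def genome_size():
    return N_INPUTS * N_HIDDEN + N_HIDDEN + N_HIDDEN * N_OUTPUTS + N_OUTPUTS

def forward(genome, features):
    """Run one tick of the policy network on a feature vector."""
    i = 0
    w1 = genome[i:i + N_INPUTS * N_HIDDEN].reshape(N_INPUTS, N_HIDDEN); i += N_INPUTS * N_HIDDEN
    b1 = genome[i:i + N_HIDDEN]; i += N_HIDDEN
    w2 = genome[i:i + N_HIDDEN * N_OUTPUTS].reshape(N_HIDDEN, N_OUTPUTS); i += N_HIDDEN * N_OUTPUTS
    b2 = genome[i:i + N_OUTPUTS]
    h = np.tanh(features @ w1 + b1)
    return np.tanh(h @ w2 + b2)          # each output in [-1, 1], mapped to an input action

def run_episode(genome):
    """Placeholder fitness so the sketch runs on its own; replace with a
    headless game rollout scored by progress, kills, survival, etc."""
    return -float(np.sum(genome ** 2))

def evolve():
    rng = np.random.default_rng(0)
    population = rng.normal(0.0, 0.5, size=(POP_SIZE, genome_size()))
    for gen in range(GENERATIONS):
        fitness = np.array([run_episode(g) for g in population])
        order = np.argsort(fitness)[::-1]            # best first
        elite = population[order[:POP_SIZE // 5]]    # keep the top 20%
        children = []
        while len(children) < POP_SIZE - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(genome_size())
            child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            child += rng.normal(0.0, MUTATION_STD, child.shape)  # Gaussian mutation
            children.append(child)
        population = np.vstack([elite] + children)
        if gen % 10 == 0:
            print(f"generation {gen}: best fitness {fitness[order[0]]:.1f}")
    return population[0]
```

Evolving weights rather than backpropagating sidesteps the need for a differentiable objective, which fits the "train for x generations" framing; the hard part, as the post notes, is the renderless rollout itself.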
Nixx Posted August 3, 2023

Hello,

Sorry to resurrect this thread, but the subject interests me and I've been looking into it in a concrete way; it seemed silly to open a new thread just for the following.

I made a quick little project that collects data at each tick and saves it to a CSV, whether the data comes from a demo played back instantly (as fast as the CPU allows, with no rendering) or from a normal play session. I was able to recover everything: positions of vertices, linedefs and sectors, floor and ceiling heights, everything relating to "things" (position, health, "frame", i.e. mobj state), the player's inputs, and so on. I was able to do this fairly quickly thanks to the Managed Doom source port (by @Sinshu), written in C#, which is the language I know best and the most convenient one for doing this easily.

The CSV format isn't optimized, of course; in fact my format is a mess and almost unusable. That wasn't the point - it was just the quickest way to see what the data looks like. The output is about 60 MB (!!!) for one demo and takes 16 seconds to produce. Each row is: gametick, datatype, then the type-specific values (for things: mobj type, posX/Y/Z, target, health, frame). Mobj types MiscXX are items (health/armor bonuses, etc.). For linedefs and sectors, I can also extract the sector number, action number, direction (front and back sectors) and tags.

I haven't gone any further than what's been done so far - I'm a developer, not an AI engineer. I just wanted to verify and demonstrate that it's theoretically possible to train an AI with demo data as input, and fairly quickly I'd guess, obviously with an optimized method and data format. I'm quite surprised that no one has tried, apart from the ViZDoom project.

For the test I used a UV-Max demo of E1M1 by Simon Widlake, finished in 3:36.
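For illustration, a minimal sketch of reading such a dump back into per-tick records. The column layout (gametick, datatype, then type-specific values) is assumed from the description above; the actual file differs, so every field name here is hypothetical.

```python
# Sketch only: group rows of a hypothetical per-tick CSV dump by game tick,
# parsing "thing" rows and keeping other datatypes (linedef, sector, input)
# as raw fields. Column layout is assumed, not the real format.
import csv
from collections import defaultdict

def load_ticks(path):
    ticks = defaultdict(lambda: {"things": [], "other": []})
    with open(path, newline="") as f:
        for row in csv.reader(f):
            tick, datatype = int(row[0]), row[1]
            if datatype == "thing":
                mobj_type, x, y, z, target, health, frame = row[2:9]
                ticks[tick]["things"].append({
                    "type": mobj_type,
                    "pos": (float(x), float(y), float(z)),
                    "target": target,
                    "health": int(health),
                    "frame": int(frame),
                })
            else:
                ticks[tick]["other"].append(row[1:])
    return ticks

# Example: count the ticks dumped and the thing count of the busiest tick.
# ticks = load_ticks("e1m1_uvmax.csv")
# print(len(ticks), max(len(t["things"]) for t in ticks.values()))
```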
BOOPADOOPDOOP Posted August 4, 2023

I like AutoDoom, though it does get stuck a lot in some parts, which is kinda annoying.
andrewj Posted August 4, 2023

12 hours ago, Nixx said:
it's theoretically possible to train an AI with demo data as input

You need to define exactly what you are training, though. For example, one task may be deciding which monster the AI should target at any given time, using the information available to it (available weapons, ammo, health, relative positions of enemies). The input of the NN would be that information, and the output would be which enemy to target -- this is the function to train: you use the demo data to see what real players did, and train the NN to mimic that as closely as possible.

Other tasks are less amenable to a NN, e.g. deciding where to go in the map means analysing the map for locked doors, finding their keys, looking for remote switches which open doors or lower platforms, etc. AutoDoom has that logic already -- it is much easier to do with code than with some ML system.
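A minimal sketch of that target-selection task framed as supervised imitation: each candidate enemy gets a score from a shared linear model, a softmax over the scores gives targeting probabilities, and cross-entropy against the choice recorded in the demo trains the weights. The feature layout and the synthetic "demo" data below are assumptions for illustration only.

```python
# Sketch only: learn which enemy a player engages from (synthetic) demo data.
# Each candidate enemy is described by a small feature vector; a shared
# linear model scores candidates and a softmax turns scores into a choice.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 6   # e.g. distance, relative angle, enemy health, enemy type,
                 # player health, ammo for the current weapon

def softmax(scores):
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def train(samples, lr=0.05, epochs=200):
    """samples: list of (features, chosen), where features has shape
    (n_enemies, N_FEATURES) and chosen is the enemy index from the demo."""
    w = np.zeros(N_FEATURES)
    for _ in range(epochs):
        for feats, chosen in samples:
            p = softmax(feats @ w)          # probability of targeting each enemy
            onehot = np.zeros(len(p))
            onehot[chosen] = 1.0
            grad = feats.T @ (p - onehot)   # cross-entropy gradient
            w -= lr * grad
    return w

def pick_target(w, feats):
    return int(np.argmax(feats @ w))

# Synthetic demo "data": the recorded player tends to pick the closest enemy.
samples = []
for _ in range(500):
    n = rng.integers(2, 6)
    feats = rng.normal(size=(n, N_FEATURES))
    chosen = int(np.argmin(feats[:, 0]))    # feature 0 plays the role of distance
    samples.append((feats, chosen))

w = train(samples)
test_feats = rng.normal(size=(4, N_FEATURES))
print("learned weights:", np.round(w, 2))
print("picked target:", pick_target(w, test_feats))
```

The same framing works for other per-tick decisions (which weapon to switch to, whether to retreat), as long as the demo dump can label what the human actually did at that tick.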
Nixx Posted August 4, 2023

Yes, from what I have read, that way of proceeding looks like supervised learning, or at least semi-supervised. Ideally, I think learning without demo data (reinforcement learning) would be best, with the goal of reaching the end-of-level switch, and with extra score for 100% kills and secrets.

As for AutoDoom, I hope that @printz will get back to it soon to implement jumps and optimize its node builder (the bot sometimes tends to follow edges and gets stuck in a loop), as well as to support limit-removing maps, because the maps I create crash it on startup - though I think that comes from my node builder (ZDBSP - UDMF). Still, he has done a great job so far. If he improves it I'll use it as a testing tool; by the way, I recorded a title-screen demo with it for the WAD I'm working on with other people, and we'll see if we use it.
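A minimal sketch of the kind of reward shaping that goal implies: a large bonus for hitting the exit switch, smaller bonuses for kills and secrets, and a small per-tick penalty to discourage wandering. The GameState fields are hypothetical stand-ins for values a source port could expose each tick, not anything a real port provides.

```python
# Sketch only: per-tick reward for a learning agent, computed from the
# difference between the previous and current (hypothetical) game state.
from dataclasses import dataclass

@dataclass
class GameState:
    level_finished: bool
    kills: int
    total_kills: int
    secrets: int
    total_secrets: int
    player_dead: bool

def reward(prev: GameState, curr: GameState) -> float:
    r = -0.001                                    # per-tick time penalty
    r += 10.0 * (curr.kills - prev.kills)         # each new kill
    r += 25.0 * (curr.secrets - prev.secrets)     # each new secret found
    if curr.player_dead and not prev.player_dead:
        r -= 100.0
    if curr.level_finished and not prev.level_finished:
        r += 500.0                                # hitting the exit switch
        if curr.kills == curr.total_kills and curr.secrets == curr.total_secrets:
            r += 200.0                            # UV-Max style bonus
    return r
```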