inkoalawetrust Posted December 15, 2023

22 hours ago, Rudolph said:
With all the talks these days about whether ChatGPT and other algorithm-based stuff are not actual AI because it is not intelligence and yada yada, I have to ask: is it still appropriate to refer to the monster behaviours in Doom as "AI"? I admit that for the longest time, I have been using "enemy AI" to refer to them myself, but now, I am having doubts.

It's all AI, yes, even Doom's, but like others have said it's just very primitive AI, and of course it has no form of machine learning, so it doesn't self-improve in any capacity. Video game AI in general is definitely AI, but it's (mostly, depending on implementation) hardcoded, and pretty much always made to work only within the extremely narrow scope of its game; for example, Doom's AI wouldn't work in Duke Nukem because of how different the two programs are.

Essentially, Doom's AI (and all monster behavior, including actions you wouldn't consider AI, like dying animations) is handled by a state machine, where the two main AI functions are called in the monsters' states: A_Look() in the Spawn: state and A_Chase() in the See: state (a minimal sketch of the idea follows after this post). The major difference between video game AI, whether it's Doom's basic AI functions being run by a state machine or more advanced modern approaches like behavior trees, and the AI that's in the news so much nowadays is that game AI doesn't learn at all: it has to be pretty much purely hand-crafted, with manually written heuristics to check for conditions like whether the NPC is in danger, etc. Something like GPT-4, on the other hand, is run by neural networks, basically an approximation of the brain itself, which allows it to learn (machine learning).

Using ChatGPT as an example: it can converse because it has actually read god knows how much text and formed associations from it (e.g. water makes things wet), it knows written descriptions of what things in the real world look like (e.g. a car), and so on. But unlike a chat bot from, say, 5 years ago, it doesn't know those things because it was hardcoded with that knowledge or with how to recognize objects; it knows them because a very large neural network was given terabytes of text to read through and learn to write from. That being said, the knowledge is still actually kind of hardcoded, since so far these models CANNOT learn after their initial training period. For example, IIRC ChatGPT doesn't know about world events past roughly 2021 (when GPT-3 was probably trained, or the dataset acquired). I think it is allowed to access the internet now, but it can still only memorize and recall things within a local conversation, and only a very limited amount at that. Like a computer with gigabytes worth of stored ROM and only about 8 kilobytes of RAM.

TL;DR: Video game AI is hardcoded like every other aspect of a game. The AI models in the news are a big deal because they actually teach themselves through machine learning, using an approximation of how the brain works. As a result, though, they are also black boxes: like planting a seed, we know how to train them, but not how or why the fully grown models think or act the way they do. Guess that just makes them a bit TOO similar to the brain, lol.

21 hours ago, Andrea Rovenski said:
AI, in the modern sense, doesnt really mean anything and is just a buzzword for investors and consumers to get hyped over the same things we've had already

That's just flat out fucking wrong, and it's frankly getting annoying to keep seeing this shit.
This type of machine learning is absolutely fucking new. This approach of using GANs and transformers for AI and machine learning didn't exist until literally the 2010s. The only parts of it we have had before are that machine learning in particular (as opposed to trying to hard-code an AI) isn't a brand new concept, and that transformers technically date back to the 90s thanks to like 2-4 experimental projects. You might as well argue that electric cars aren't a big deal or anything new because technically they've existed since the start of the 20th century.
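A minimal sketch of the state-machine idea described in the post above: each monster sits in a named state (Spawn, See, and so on), and each tick runs a hand-written think function standing in for A_Look() and A_Chase(). This is not the engine's actual code; the class, thresholds, and state names are made up purely for illustration.

```python
# Minimal sketch of a state-machine monster "AI", loosely modelled on how
# Doom calls A_Look() in the Spawn state and A_Chase() in the See state.
# All names and numbers here are illustrative, not the engine's real ones.

import random

class Monster:
    def __init__(self):
        self.state = "Spawn"
        self.target = None

    def a_look(self, player_visible):
        # Stand around until a player is spotted, then wake up.
        if player_visible:
            self.target = "player"
            self.state = "See"

    def a_chase(self, distance_to_player):
        # Hand-written heuristics: close the distance, sometimes attack.
        if distance_to_player < 3:
            self.state = "Melee"
        elif random.random() < 0.2:
            self.state = "Missile"
        # otherwise keep chasing (stay in "See")

    def tick(self, player_visible, distance_to_player):
        # The state machine: which function runs depends purely on the
        # current state; nothing is learned or adjusted over time.
        if self.state == "Spawn":
            self.a_look(player_visible)
        elif self.state == "See":
            self.a_chase(distance_to_player)
        elif self.state in ("Melee", "Missile"):
            self.state = "See"   # attack finished, resume chasing

imp = Monster()
for _ in range(10):
    imp.tick(player_visible=True, distance_to_player=random.randint(1, 10))
    print(imp.state)
```

Everything the monster can do has to be written out by hand like this, which is exactly the contrast with learned models drawn in the post.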
plums Posted December 15, 2023

5 hours ago, Rudolph said:
How is ML for Machine Learning not an acronym? The Wikipedia article uses "FBI" and "NYPD" as examples of acronyms.

It's not wrong to say FBI (or ML) is an acronym, but it's more correct to say it's an initialism.

https://www.thefreedictionary.com/acronym

Usage Note: In strict usage, the term acronym refers to a word made from the initial letters or parts of other words, such as sonar from so(und) na(vigation and) r(anging). The distinguishing feature of an acronym is that it is pronounced as if it were a single word, in the manner of NATO and NASA. Acronyms are often distinguished from initialisms like FBI and NIH, whose individual letters are pronounced as separate syllables. While observing this distinction has some virtue in precision, it may be lost on many people, for whom the term acronym refers to both kinds of abbreviations.
LadyMistDragon Posted December 16, 2023

I mean, the pinkies always run in a zig-zag pattern so I assume so, although it's doubtful pathfinding is included in that :P
Lila Feuer Posted December 16, 2023

Untold fact: id actually summoned real demons during their D&D campaign and created Doom to contain them. That's why Ishii in Doom 3 says "The devil is real! I know...I built his cage." as a clever inside joke at the company.
OniriA Posted December 16, 2023

10 hours ago, Lila Feuer said:
Untold fact: id actually summoned real demons during their D&D campaign and created Doom to contain them. That's why Ishii in Doom 3 says "The devil is real! I know...I built his cage." as a clever inside joke at the company.

That's 2 deep 4 me.
erzboesewicht Posted December 16, 2023

On 12/14/2023 at 9:58 PM, RDETalus said:
It probably won't happen anytime soon though because game environments don't have enough freedom to provide for all the possible answers that a generative AI can create. If the AI says something crazy like, "distract the player with a fire extinguisher," your game isn't going to be able to accommodate that action because fire extinguishers don't exist as an interactable object.

Well, we could create a language model limited to texts that make sense in the Doom universe, or more precisely, the universe that can be created by the Doom engine. For example, train a neural network on the DWmegawad Club, the Cacowards, other WAD reviews, technical descriptions of the Doom engine, and so on. A very interesting text format to train on would probably be demos transcribed to text, with descriptions of the movements of the player and the monsters, stats like health/armor, and the environments. These texts would be quite large if they were precise, and they could use the "jargon" used by Doom players and reviewers, so there would be enough data for the AI to learn from. That would actually be a fun experiment :)
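Purely as an illustration of the "demos transcribed to text" idea above, here is a toy sketch that turns a stream of per-tic player inputs into natural-language lines a language model could train on. The (forward, strafe, turn, fire) tuples and thresholds are hypothetical simplifications, not the actual Doom demo byte format.

```python
# Toy sketch of transcribing demo-style input data into training text.
# The per-tic tuples below are made-up stand-ins for player commands,
# not the real Doom demo format.

def describe_tic(tic, forward, strafe, turn, fire):
    parts = []
    if forward > 0:
        parts.append("runs forward")
    elif forward < 0:
        parts.append("backpedals")
    if strafe:
        parts.append("strafes " + ("right" if strafe > 0 else "left"))
    if turn:
        parts.append("turns " + ("left" if turn > 0 else "right"))
    if fire:
        parts.append("fires")
    if not parts:
        parts.append("stands still")
    return f"tic {tic}: the player " + ", ".join(parts) + "."

demo = [
    (0, 50, 0, 0, False),
    (1, 50, -10, 0, True),
    (2, 0, 0, 640, False),
]

for tic, fwd, strafe, turn, fire in demo:
    print(describe_tic(tic, fwd, strafe, turn, fire))
```

A real transcription would also need the monster positions, health/armor, and map context mentioned in the post, but the basic shape would be the same: one structured game record in, one line of "Doom jargon" text out.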
Jakub Majewski Posted December 17, 2023

An excerpt from AI Game Engine Programming by Brian Schwab (2004). Despite the book's age, I feel it's relevant to the topic at hand.

I feel that just because the "AI" in Doom isn't very "intelligent", or doesn't involve neural networks like today's AI models such as ChatGPT, doesn't mean it cannot be referred to as AI. To me, intelligence is demonstrated by the ability to solve practical problems and adapt to various circumstances. The enemies in Doom behave differently depending on their distance from the player, whether they can see them, and the shape of the terrain they find themselves in. That's good enough for me.

Relating to the above, I like to think of my thermostat as intelligent. 🙃
roadworx Posted January 25, 2024

an ai posting in a thread about ai. funny.
Elendir Posted January 27, 2024

We still don't know what intelligence is. It's as old a question as what consciousness is. The current development of commercial Artificial Intelligence was definitely a huge leap forward. However, when we go back to the times of early games, like Mario for example, we could already see a certain phenomenon: a flat sprite behaving in a particular way that provokes us (players) to take action. Even if this action was as simple as "move horizontally until you hit the border, then reverse direction", it engaged us, and we started to apply the term intelligence even to primitive actions, as long as those actions were somehow interacting with our world. We need this interaction, and if we see it, even in the simplest form, it attracts our attention. Take a moment to realize that this built-in human behavior is also very simple and very algorithmic. That's why a simple thermostat, which was mentioned by one of you, was at some point described by people as intelligent, just because it was able to noticeably react to the world and to human comfort.

Those simple actions (the thermostat, for instance) shouldn't be tied to the term intelligence. They are nothing more than decision tree algorithms carefully planned by an engineer, and that's all. However, if we take this idea drastically further, to the point of creating an algorithm with an enormous decision tree, we would be facing a unique piece of machinery (it's a deep philosophical and physical topic).

The current AI has evolved into something else. There's something that can be viewed as a new kind of decision tree algorithm (although I'm oversimplifying it now): a neural network, mimicking to a certain extent a real brain. This time, a decision is made in a way that is not visible to us anymore. It's a tool similar to a real brain, which we still don't understand, and perhaps never will. Machine learning comes into play here: we need data, we examine it, and we get results. If the results are weak, we repeat the process. This is how we get more efficient. How this processing is performed deep inside our brains is still unknown, so we came up with the idea of imitating that kind of functionality, with this big question mark underneath.

In other words, you can imagine a Doom game upgraded to support current AI. Before you can play the game, the AI would need to play thousands of times as an Imp, as a Pinky, as a Baron, etc. It would play those games incredibly fast. In each playthrough it would learn how to play every character available in the game, including the powerful DoomGuy, and with every playthrough it would adapt its own tactics. After a few hours of machine learning, you as a player would have a chance to play a new Doom game with all characters behaving in ways you have never seen before. This is still, however, something we usually call "weak AI", because it's aimed at just one simple thing - getting better and better at Doom - and nothing else. Personally, I would love to see that kind of project. I'd like to see my favorite Doom characters behaving in a lethal, unpredictable way, adapting to all new circumstances.
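To make the "plays thousands of games and adapts its own tactics" idea above a little more concrete, here is a minimal tabular Q-learning sketch on a made-up, heavily abstracted encounter: an "imp" learning when to close in and when to attack. Real projects that learn to play shooters use far richer observations and deep networks; every state, action, and reward value here is invented purely for illustration.

```python
# Minimal tabular Q-learning sketch: an abstract "imp" learns by repetition
# when to advance and when to attack. All states, actions, and rewards are
# made up; the point is only the learn-through-many-playthroughs loop.

import random
from collections import defaultdict

ACTIONS = ["advance", "retreat", "attack"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = defaultdict(float)  # (distance, action) -> estimated value

def step(distance, action):
    """Toy environment: returns (new_distance, reward, done)."""
    if action == "advance":
        return max(distance - 1, 0), -0.1, False
    if action == "retreat":
        return min(distance + 1, 5), -0.1, False
    # attack: only pays off at close range
    if distance <= 1:
        return distance, 1.0, True      # hit the player
    return distance, -0.5, False        # whiffed from too far away

for episode in range(5000):
    distance = random.randint(0, 5)
    for _ in range(20):
        # Mostly pick the best-known action, sometimes explore at random.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(distance, a)])
        new_distance, reward, done = step(distance, action)
        best_next = 0.0 if done else max(q[(new_distance, a)] for a in ACTIONS)
        q[(distance, action)] += ALPHA * (reward + GAMMA * best_next - q[(distance, action)])
        distance = new_distance
        if done:
            break

# After training, the learned policy prefers advancing until close, then attacking.
for d in range(6):
    print(d, max(ACTIONS, key=lambda a: q[(d, a)]))
```

Nobody writes the tactic itself; it emerges from thousands of fast, repeated playthroughs, which is the contrast with the hand-planned decision trees discussed earlier in the thread.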
RataUnderground Posted January 27, 2024

Guys, are Doom monsters GAI?
MS-06FZ Zaku II Kai Posted March 15, 2024

On 12/15/2023 at 1:58 AM, RDETalus said:
None. "Enemy AI" in a video game is no different than any of the other code used in the rest of the video game; just simple lines of code written by a human to create certain conditional behaviors. "If player is behind cover, then throw a grenade." The modern generative / deep learning AI stuff is completely different because there isn't a human coder writing every line of code; instead the AI is fed enormous amounts of literature / artwork and left on its own to figure out the associations between input and output. I don't think anyone has attempted to apply this generative AI process to enemy AI in videogames yet. It probably won't happen anytime soon though, because game environments don't have enough freedom to provide for all the possible answers that a generative AI can create. If the AI says something crazy like, "distract the player with a fire extinguisher," your game isn't going to be able to accommodate that action because fire extinguishers don't exist as an interactable object.

Could it be possible (in theory) to create a generative AI in a fixed environment that acts like an NPC? Take for example a generative AI that has to function as an enemy soldier. It has a simple task: to locate the player character and kill him. You then give this AI limited resources, like a weapon and an environment to train in. Is such a tech possible, and if so, how would it differ from current game AI?
inkoalawetrust Posted March 16, 2024 (edited)

On 3/15/2024 at 4:13 PM, MS-06FZ Zaku II Kai said:
Could it be possible (in theory) to create a generative AI in a fixed environment that acts like an NPC? Take for example a generative AI that has to function as an enemy soldier. It has a simple task: to locate the player character and kill him. You then give this AI limited resources, like a weapon and an environment to train in. Is such a tech possible, and if so, how would it differ from current game AI?

Well, not generative AI; that's AI that creates media like text and images. But yes, there are lots of cases of hobbyists training machine learning algorithms to navigate video game environments and play games, though nothing as complex as training an AI (or perhaps multiple AIs handling different things, like one for moving and another for shooting, working in unison) to play a whole FPS game. Here's an example of an AI trained on Trackmania courses.

Edit: Lol, I forgot to actually write the rest of the text, so I never even specified what's "as complex as training an AI" - training an AI to do what?

Edited March 16, 2024 by inkoalawetrust
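For anyone wondering what "training a machine learning algorithm to navigate a game environment" usually looks like in code, most of these hobby projects wrap the game in an observe/act/reward loop along these lines. The environment and the random agent here are placeholders I made up for the sketch; a real setup plugs in the actual game (via screen capture, a memory hook, or a mod) and a learning policy instead.

```python
# Sketch of the observe -> act -> reward loop most game-playing ML projects
# are built around. Both the environment and the "policy" are placeholders;
# a real project would wrap the actual game and a trained model here.

import random

class ToyTrackEnv:
    """Stand-in for a game: the agent must stay near a target line."""
    def __init__(self):
        self.position = 0.0

    def reset(self):
        self.position = random.uniform(-1, 1)
        return self.position                      # observation

    def step(self, action):                       # action: -1 steer left, +1 steer right
        self.position += 0.1 * action + random.uniform(-0.05, 0.05)
        reward = -abs(self.position)              # closer to the center line is better
        done = abs(self.position) > 2
        return self.position, reward, done

def random_policy(observation):
    return random.choice([-1, 1])                 # a learned model would go here

env = ToyTrackEnv()
for episode in range(3):
    obs, total = env.reset(), 0.0
    for _ in range(100):
        obs, reward, done = env.step(random_policy(obs))
        total += reward
        if done:
            break
    print(f"episode {episode}: total reward {total:.2f}")
```

Splitting the work across multiple cooperating policies, as mentioned above (one for movement, one for aiming), would just mean several such loops sharing the same observations.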