Game AI: How to implement a basic deathmatch shooter

The Yuka team has finally developed a more complex showcase to demonstrate the features of the library. The following example shows an implementation of a basic shooter AI in a very simple deathmatch scenario. You can watch the game flow from an aerial perspective or click on “enable FPS controls” in the control panel to participate in the game.

Start Demo

The AI’s reasoning and decision-making are based on so-called “goal-driven agent design”. Weapon selection is implemented with fuzzy logic and path-finding with a navigation mesh. All of these features are provided by Yuka, so we were able to focus on the actual app-related logic.
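
To give a rough idea of the pattern: in goal-driven agent design, an agent owns a “brain” that arbitrates between top-level goals, with evaluators scoring how desirable each goal currently is. Below is a minimal sketch using Yuka’s Think/Goal/GoalEvaluator classes; the Enemy class, GetHealthGoal and the health-related fields are made up for illustration and are not the demo’s actual code.

```js
import { Vehicle, Think, Goal, GoalEvaluator } from 'yuka';

// Hypothetical goal: move to a health pack (path planning omitted).
class GetHealthGoal extends Goal {

	execute() {

		// ... steer along a nav-mesh path to the health pack ...
		if ( this.owner.health > 80 ) this.status = Goal.STATUS.COMPLETED;

	}

}

// Scores how desirable it is to pursue GetHealthGoal right now.
class GetHealthEvaluator extends GoalEvaluator {

	calculateDesirability( owner ) {

		return 1 - ( owner.health / 100 ); // low health => high desirability

	}

	setGoal( owner ) {

		if ( ( owner.brain.currentSubgoal() instanceof GetHealthGoal ) === false ) {

			owner.brain.clearSubgoals();
			owner.brain.addSubgoal( new GetHealthGoal( owner ) );

		}

	}

}

class Enemy extends Vehicle {

	constructor() {

		super();

		this.health = 100;

		this.brain = new Think( this ); // composite goal acting as the brain
		this.brain.addEvaluator( new GetHealthEvaluator() );

	}

	update( delta ) {

		super.update( delta );

		this.brain.execute(); // run the current (sub)goal
		this.brain.arbitrate(); // pick the most desirable top-level goal

	}

}
```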

If you play the game yourself, you start with a blaster, but you can collect a shotgun and an assault rifle as well as health packs. The controls are quite basic:

  • Movement: WASD
  • Reload: R
  • Shoot: Left Mouse
  • Weapon Selection: 1 (Blaster), 2 (Shotgun), 3 (Assault Rifle)

Please keep in mind that it’s not a rendering showcase^^. We’ve focused on AI programming and just provided a simple but (hopefully) clean visual appearance. Rendering is of course done with three.js. All models are glTF assets, and we use an unlit MeshBasicMaterial with a light map for the actual level. Most of the models are from Mixamo and Sketchfab; the level was designed by Abdou Bouam.

The project is open-source and MIT licensed, see https://github.com/Mugen87/dive. Of course, many features you would normally expect from a modern AAA shooter are missing. It is still a technical demo and not a real product/app. We hope to enhance it with additional iterations and also provide in-depth documentation and tutorials. However, whether this is doable in the future is a matter of time and money. In any event, the project should already be a useful resource for everyone who wants to implement more advanced game logic. Contributions and feedback are welcome!

10 Likes

That is awesome.

1 Like

Well, it’s not like the new Rage, but at least it’s open-source and rendered with three.js :grin:

2 Likes

Things I could see right off the top of my head (since you put it out there…) :slight_smile:

1.) Hit detection (maybe a raycaster modified as a particle that could be fired at an object and report back on hit? See the sketch further down.)
2.) Geometries that have varying damage modifiers when hit.
3.) Any sort of buildup of energy the more you do something.
4.) Cool particle/volumetric effect (otherwise known as an “Alt” or a “Super” ability) takes place after stats reach a certain level/death/respawn.
5.) Make it playable with a controller :slight_smile:
Although I know this is targeted mostly at laptop/PC GPUs, even if this were made into a game API for larger GPUs (aka console-type GPUs) this would be huge.
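
For point 1: hit detection along these lines is usually done with three.js’s built-in Raycaster rather than an actual particle. A rough sketch, assuming an existing `scene` and `camera`:

```js
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();

function fire( camera, scene ) {

	// cast a ray from the camera through the screen center (the crosshair)
	raycaster.setFromCamera( new THREE.Vector2( 0, 0 ), camera );

	const hits = raycaster.intersectObjects( scene.children, true );

	if ( hits.length > 0 ) {

		const hit = hits[ 0 ];

		// hit.point, hit.face and hit.object could drive per-geometry
		// damage modifiers or impact effects
		console.log( 'hit', hit.object.name, 'at distance', hit.distance );

	}

}
```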

Of course, if you make these changes to the demo, you will be hounded to keep adding things to it for everyone’s game development. You will have no time for general questions on three.js :worried:

There is always time for community support^^

Just wow. You guys rock!

1 Like

Looks great. A couple of small points from my side:

  • health bars. They would help in understanding what’s going on a bit more.
  • color-code weapon projectiles. I can’t tell the difference between the blaster and the AR.
  • enable drawing of back faces on the level mesh; it looks sad without that.
  • more aggressive blending of the strafe animation; it looks quite floaty right now.

a few more that might be low-hanging fruit:

  • particle effects for pickups
  • on-hit particle effects, like sparks on a level hit or blood spatter on a body hit
  • a couple more bots, the demo feels a bit anemic on action
  • view cone. It’s not obvious who can see whom at a glance. I think it’s super cool that bots physically turn, but it’s easy to miss I feel without seeing view cones/facing direction.

Many thanks for your feedback! I will put your points on a to-do list for further consideration.

If we ever develop a real product, many things like your suggestions would be important additions. In this particular case, however, we wanted to focus on the AI part and verify the features of Yuka with a more complex showcase. If you only develop very simple stuff, certain restrictions of a library won’t become visible. This project provided valuable input to make Yuka more stable.

Besides, it was important for us to demonstrate that you can implement something like this with good performance even on low-end devices. The app is single-threaded and we have made no modifications to three.js. One of our most important tasks for the next months is to further improve performance by implementing more software optimizations in Yuka.

BTW: The demo draws just the front faces so it’s easier to inspect what’s going on from the spectator view. In FPS mode, it is not an issue at all. Unfortunately, it’s hard for us to improve the animations of the bots since we have no designer on our team. We are using the assets 1:1 from Mixamo.

2 Likes

Looks great. Studying it now.

I initially tried it on a MacBook using Safari and got a black screen. The console shows this error: `ReferenceError: Can’t find variable: requestIdleCallback`.

Looks great in Chrome.

So Yuka is a Game Engine using THREE.js as the main render component. Is that correct?

Do you generate NavMeshes by hand or do you use a program to generate them?

1 Like

Safari is not supported so far since it does not support window.requestIdleCallback. This could be polyfilled, but Safari also does not support Ogg Vorbis. We decided to exclude Safari and invest our time in other stuff. Sometimes I have the feeling Safari is becoming the new Internet Explorer :unamused:
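
For anyone who needs Safari anyway: the missing function can be approximated with a timer-based shim. A minimal sketch (the 50 ms budget mimics the spec’s default idle window, but it’s only an approximation of the real deadline behavior):

```js
// rough fallback for browsers without requestIdleCallback (e.g. Safari)
if ( typeof window.requestIdleCallback !== 'function' ) {

	window.requestIdleCallback = function ( callback ) {

		const start = Date.now();

		return setTimeout( () => {

			callback( {
				didTimeout: false,
				timeRemaining: () => Math.max( 0, 50 - ( Date.now() - start ) )
			} );

		}, 1 );

	};

	window.cancelIdleCallback = ( id ) => clearTimeout( id );

}
```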

Yuka is actually just an AI engine, but it provides some game-engine-related logic so you can more easily represent your game entities as renderable components (e.g. meshes). In any event, our team is too small to develop a real game engine (we are only two guys right now^^), so we try to focus just on the AI part. It’s also important to note that there is no dependency on three.js. You could also use Yuka with any other 3D engine like Babylon.js.
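
The decoupling works via so-called render components: you register an arbitrary 3D object of your engine at the entity, and a sync callback copies the transform over each frame. A minimal sketch with three.js, following the pattern from the Yuka documentation (the box mesh is just a placeholder):

```js
import * as YUKA from 'yuka';
import * as THREE from 'three';

const entityManager = new YUKA.EntityManager();

const entity = new YUKA.Vehicle();

const mesh = new THREE.Mesh(
	new THREE.BoxGeometry(),
	new THREE.MeshBasicMaterial()
);
mesh.matrixAutoUpdate = false; // Yuka writes the matrix directly

entity.setRenderComponent( mesh, sync );
entityManager.add( entity );

// invoked for each entity when entityManager.update() runs
function sync( entity, renderComponent ) {

	renderComponent.matrix.copy( entity.worldMatrix );

}
```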

We generated them in Blender. However, a long-term goal of the project is to implement our own nav-mesh generator. The general approach for doing this is well documented in code and scientific papers. It’s just a matter of time to start with a JavaScript implementation. In this way, you could generate nav meshes on-the-fly if necessary.
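
Once a nav mesh is loaded (a loading snippet follows further down), a runtime path query is a single call. A sketch with made-up coordinates:

```js
import { Vector3 } from 'yuka';

// assuming `navMesh` was loaded with YUKA.NavMeshLoader (see below)
const from = new Vector3( - 3, 0, 2 ); // e.g. the bot's current position
const to = new Vector3( 4, 0, - 5 ); // e.g. the position of an item

// returns an array of Vector3 waypoints across the nav mesh
const path = navMesh.findPath( from, to );
```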

If you are interested in the general concepts, read Programming Game AI by Example. We worked extensively with this book^^. The basic AI architecture of the project is based on the shooter “Raven”, which is discussed in detail at the end of the book.

1 Like

Thanks for the advice. I’ve been playing with the Blender navigation-mesh builder. Afraid it isn’t as tidy as the Unity one. Looking for ways to export the Unity one as a 3D object. Might be easier to create one by hand.

If you can export it to glTF, you can load it with YUKA.NavMeshLoader since the library expects the data in glTF format. This project might be helpful in this context.
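
A minimal loading sketch (the file name is made up; the loader expects glTF/GLB):

```js
import { NavMeshLoader } from 'yuka';

const loader = new NavMeshLoader();

// resolves with a YUKA.NavMesh ready for queries like findPath()
loader.load( './navmesh.glb' ).then( ( navMesh ) => {

	// e.g. hand the nav mesh over to the game entities here

} );
```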

Thanks for the tip.