TACStrike: weeks of September 28th - October 11th

Published on Monday, 12 October 2015 11:08
Written by snake5

AI level data processing tech and asset compilation were the two things I found most important in these two weeks. There were some other things, though.


Direct3D11 port is finished

...and it has pretty much replaced the Direct3D9 renderer at this point.


Seems to be working fine too.


Fixed hitbox issues

The frequent bullet misses are no longer a problem


For quite some time I was wondering why it was so hard to hit the enemy. Found out that there were some issues with hitbox raycasts. That's solved now.

There are still plans to improve the controls, making aiming more appropriate for gamepads. Currently it's decent only for mouse users; gamepad users might find it a constant struggle against an ever-changing time/position reference frame. Some might enjoy it, but from my observations they appear to be a minority that spent their childhood playing hardcore games. And I'd like the game to be more inclusive than that.


Covers and their absence

If I were an AI soldier, where would I stand...


After some thinking, I came to the conclusion that the most efficient placement for enemies who want to attack is... simply the inverse of standing in places shadowed by covers. The picture shows these places, calculated against the player's position.

So I implemented that, as well as somewhat statistically correct point selection within those lines, taking distance into account, and line-sphere clipping to avoid stepping into positions that are already occupied.
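The line-sphere clipping step is simple enough to sketch in isolation. Below is a minimal 2D version (the names and representation are illustrative, not the engine's actual code): clipping a cover segment against a circle occupied by another character yields the parameter ranges that remain usable.

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Clip segment [a,b] against a circle (center c, radius r), returning the
// parts of the segment that lie OUTSIDE the circle as parameter ranges
// [t0,t1] along the segment - i.e. the unoccupied parts of a cover line.
std::vector<std::pair<float,float>> ClipSegmentOutsideCircle(
    Vec2 a, Vec2 b, Vec2 c, float r )
{
    Vec2 d = { b.x - a.x, b.y - a.y };
    Vec2 m = { a.x - c.x, a.y - c.y };
    float A = d.x*d.x + d.y*d.y;
    float B = 2.0f * ( m.x*d.x + m.y*d.y );
    float C = m.x*m.x + m.y*m.y - r*r;
    float disc = B*B - 4*A*C;
    if( disc <= 0 || A == 0 )
        return { { 0.0f, 1.0f } }; // line misses circle (or degenerate segment)
    float sq = std::sqrt( disc );
    float t0 = ( -B - sq ) / ( 2*A );
    float t1 = ( -B + sq ) / ( 2*A );
    std::vector<std::pair<float,float>> out;
    if( t0 > 0.0f ) out.push_back({ 0.0f, std::min( t0, 1.0f ) });
    if( t1 < 1.0f ) out.push_back({ std::max( t1, 0.0f ), 1.0f });
    return out; // empty result = segment fully inside the occupied circle
}
```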

This concluded AI tech development, which means that all I have to do now is to assemble the pre-made pieces into a proper virtual adversary.


Asset compiler

How many times should polishing transforms be manually applied to work-in-progress art?


This is not all done just yet; however, it already shows great promise, sometimes even more than that.

The goal of the asset compiler is to automate the redundant actions done by the artist (currently, that's only me) to make art editing more pleasant, thus allowing me to create more stuff in less time. This is done by creating a tool where it's possible to register all assets and the actions they require.


Currently supported texture filters: color [de]linearization, resizing, sharpening (3 types of kernels), range expansion, brightness/contrast/gamma.

Mipmaps can be pregenerated if the format supports it. For texture input, anything stb_image supports can be loaded. If the format supports extended attributes, those are saved as well.
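For illustration, here is roughly what two of the listed filters boil down to, sketched on a single float channel in the 0..1 range (the function names are mine, not the asset compiler's API):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Range expansion: remap the [lo..hi] part of the input to the full [0..1]
// range, clamping anything outside.
std::vector<float> ExpandRange( std::vector<float> px, float lo, float hi )
{
    for( float& v : px )
        v = std::clamp( ( v - lo ) / ( hi - lo ), 0.0f, 1.0f );
    return px;
}

// Gamma adjustment, the "gamma" part of brightness/contrast/gamma filtering.
std::vector<float> ApplyGamma( std::vector<float> px, float gamma )
{
    for( float& v : px )
        v = std::pow( v, 1.0f / gamma );
    return px;
}
```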

Here's a comparison between the generated texture and the original image:


The red tint was removed, the color range expanded and the image sharpened. And thanks to the texture duplication option, I can easily transfer this set of filters to another texture.

As for models, they're much simpler - data is loaded using Assimp, material parameters and some fixed filters are applied, and textures are linked to them.


One unusual little feature, the idea for which came from my breakable glass implementation and the way Assimp handles model parts - I can export the same model part more than once with different settings. This is useful in situations that require multipass rendering (like glass). For example, the first pass uses the "multiply" blend mode, rendering the glass tint. The second pass would render the dirt/crack layer, which needs the default blend mode.

Now, this isn't all fun and games - I still have to figure out how to handle asset revisions and caching in a way that deals with changes in source art, changes in script, relocated asset outputs, overwritten assets etc.
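One plausible approach to the staleness question, sketched here with hypothetical names (this is just one possible design, not the compiler's actual one): hash the source file's identity and modification time together with the compile settings, and rebuild whenever the stored hash no longer matches.

```cpp
#include <cstdint>
#include <string>

// FNV-1a, a simple stable hash usable for cache keys.
uint64_t HashBytes( const std::string& data,
                    uint64_t h = 14695981039346656037ull )
{
    for( unsigned char c : data )
    {
        h ^= c;
        h *= 1099511628211ull;
    }
    return h;
}

struct AssetRecord
{
    uint64_t cachedHash = 0;

    // Rebuild if source path, source mod time or compile settings changed.
    bool NeedsRebuild( const std::string& srcPath, uint64_t srcModTime,
                       const std::string& settings ) const
    {
        uint64_t h = HashBytes( settings, HashBytes( srcPath ) );
        h ^= srcModTime * 1099511628211ull;
        return h != cachedHash;
    }
};
```

Relocated outputs and overwritten assets would still need extra handling (e.g. also hashing the output path and verifying the output file exists), which is where most of the real complexity hides.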

It is also likely that I'll generate SVG fonts here and implement some basic file copying. Not sure what to do about maps, particle systems and characters but it's likely that they'll stay unchanged for now.


The current goal - prototype. Having most of my tech prepared for this, I should finally start putting things together. As they say in movies - this is the moment of truth...


TACStrike: weeks of September 14th - 27th

Published on Monday, 28 September 2015 06:32
Written by snake5

Renderer upgrades and miniLD62 were the activities of these 2 weeks, one each.


Renderer upgrades

Everything should look pretty much the same... but be better.



A lot of code was extracted from the D3D9 renderer and generalized to make it applicable to all renderers.

  • Render states
  • Render targets and depth/stencil surfaces
  • Drawable item processing
  • Core structure of the rendering code

This required the creation of some new systems but in the end, renderer code was reduced and made more reliable. These changes also opened some doors regarding options to override mesh data - all materials are now stored in the mesh instance, meshes only have texture paths/shader names/material flags now.

Another interesting product of these changes is the function GR_PreserveResource() - it takes any resource and stores it in a set for exactly 2 frames, preventing constant reloads without needing an explicit member variable (those aren't a good option in overridable interface callbacks).
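The idea behind GR_PreserveResource() can be sketched like this - note that the internals shown here are an assumption, not the actual implementation:

```cpp
#include <memory>
#include <set>
#include <utility>

// Keep a resource alive for exactly 2 frames: one set holds resources
// preserved this frame, another holds last frame's. Swapping at frame end
// means anything not re-preserved is released on the second swap.
struct ResourcePreserver
{
    using Handle = std::shared_ptr<void>;
    std::set<Handle> current, previous;

    void Preserve( Handle res ) { current.insert( std::move( res ) ); }

    // Called once per frame; resources survive this call once, then get
    // dropped unless Preserve() was called on them again.
    void OnFrameEnd()
    {
        previous = std::move( current );
        current.clear();
    }
};
```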

Some things still aren't extracted from renderers, such as shader uniform buffers and... I guess that's about it. Also, projectors haven't been fully ported yet. But I'm working on it so upgrades are expected before OpenGL support is finally implemented.



miniLD62

One jam per month seems to be the new standard lately.


Game can be found here.

So this is just a basic unfinished first person shooter. Whoever finds a mistake in it - rest assured that I already know about it, and many more.


I decided to build this as a focus shift experiment and it seems to have worked rather well for its purpose. I worked a lot on player action feedback and even implemented a help text renderer to create an intro tutorial. So far no one has said that the game is hard, so that's a good thing. Difficulty can be built up once the tools to overcome it have been properly introduced.


The project was also an art production experiment. I seem to have mastered two new ways to create textures. One is drawing a pixelart bumpmap, then scaling it and converting it to a normal map. It's an easy way to get fairly decent sci-fi metal surface normal maps.
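The bump-to-normal conversion is standard enough to sketch: central differences over the height field give the slope at each texel, which becomes the normal. (The names and the strength parameter are illustrative.)

```cpp
#include <cmath>
#include <vector>

struct Normal { float x, y, z; };

// Convert one texel of a tiling heightmap (row-major, values 0..1) into a
// normal; strength scales how steep the bumps appear.
Normal HeightToNormal( const std::vector<float>& height, int w, int h,
                       int x, int y, float strength )
{
    auto H = [&]( int px, int py ) {
        // wrap around the edges, since these textures tile
        px = ( px % w + w ) % w;
        py = ( py % h + h ) % h;
        return height[ py * w + px ];
    };
    float dx = ( H( x + 1, y ) - H( x - 1, y ) ) * strength;
    float dy = ( H( x, y + 1 ) - H( x, y - 1 ) ) * strength;
    float len = std::sqrt( dx*dx + dy*dy + 1.0f );
    return { -dx / len, -dy / len, 1.0f / len };
}
```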


The other method is to render textures in Blender using composition nodes to route all components to the necessary outputs. This allows geometric/tiled/special surfaces to be rendered easily.


You might also notice that dynamic point lights are now available in the editor. This is another fairly new thing. The editor support was there but the level format didn't support it... and it does now.



That's about all for this time. Unfortunately, no advances were made on the AI front. Scripting had some minor upgrades that I didn't get to use, and it looks like there are going to be more of those.

I'd like to keep working on this project just long enough to reach a technical-feature-complete version. I'd like to see that the engine can deal with everything I throw at it and more before proceeding.

After that, back to TACStrike AI and perhaps another game jam.


TACStrike: weeks of August 10th - 23rd

Published on Monday, 24 August 2015 07:29
Written by snake5

Even though my LD33 game is still unfinished, it's time to talk about progress. And, well, there's been quite a lot of it.


Incremental lightmap rendering

...it's all about not doing too much of it, all the time.


The goal here was to make it possible to skip the lightmapping step as often as possible (especially while doing minor edits to the level, configuring path points and things like that). Since I've already tested it while making the level for my LD33 game, the goal has been reached.

It took about 30h to implement - 3x my initial estimate of 10h. The system consists of a "level graphics container" that handles all the meshes, surfaces, lights, lightmaps, sample tree and some other things. Entities, patches, and blocks were updated to manage their graphical representations through LGC. This all seems very simple but it required quite a lot of new code, so bugs appeared, as expected.

This also didn't include the time spent on multithreading the lightmapper, which was required to run it in parallel to editing. I don't generally edit the level while my lightmaps are being compiled but I've made sure it's very much possible.

Another minor thing to mention - mesh lightmap size is no longer based on mesh size. That seemed like a good idea in the beginning but usually just leads to lightmaps that are incompatible with the texture coordinates made for them.


The loading screen

...because plain freezing up is no longer in fashion!


I wish I had more to show here but, looking on the bright side, this image is small enough to download and can be understood quickly. :)

Not sure what to put here yet, currently it just fades in, slightly animates the text and fades out in the end. A progress bar would be welcome, as would some description of expected adventures but all that can come later.


Scripted item physics constraints

Connecting all things scripted...


Two types are currently implemented - hinge and cone/twist constraints. No limits yet, but there are 4 constraint slots that can connect any of the (up to 4) bodies with each other or with the world.

This was mostly done as research towards ragdolls but who knows where else I could apply it someday. Swinging dynamic lights are quite cool on their own as well.
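When the limits do arrive, checking a cone/twist limit usually starts with a swing-twist decomposition - extract the rotation around the twist axis, leaving the "swing" as the remainder. A sketch (the quaternion layout and names are assumptions, not engine code):

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };

// Project a rotation onto the twist axis (here the X axis) and renormalize;
// the result is the pure twist component of the rotation.
Quat TwistAroundX( Quat q )
{
    float len = std::sqrt( q.w*q.w + q.x*q.x );
    if( len < 1e-6f )
        return { 0, 0, 0, 1 }; // 180-degree swing: no twist component
    return { q.x / len, 0, 0, q.w / len };
}

// Signed twist angle in radians, ready to be tested against a limit.
float TwistAngleX( Quat q )
{
    Quat t = TwistAroundX( q );
    return 2.0f * std::atan2( t.x, t.w );
}
```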


Character deformation

Not quite ragdolls but of similar importance.


As some could probably guess, this simulates the deformation created by external forces (gunshots, punches, explosions etc.). In the animation above, the force is applied in approximately the same orientation as the camera.

This was created to avoid the explosion of animation variations regarding response to external forces. Hopefully it will make gunfights more visually interesting as well.
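A minimal sketch of the core idea, assuming a simple vertex-displacement model (the real system is more involved): push vertices near the hit point along the force direction, with a smooth distance falloff.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Displace vertices within 'radius' of the hit point by the force vector,
// scaled by a smoothstep falloff so the dent blends into the mesh.
void ApplyImpactDeform( std::vector<Vec3>& verts, Vec3 hit, Vec3 force,
                        float radius )
{
    for( Vec3& v : verts )
    {
        float dx = v.x - hit.x, dy = v.y - hit.y, dz = v.z - hit.z;
        float dist = std::sqrt( dx*dx + dy*dy + dz*dz );
        if( dist >= radius )
            continue;
        float t = 1.0f - dist / radius;
        float fall = t * t * ( 3.0f - 2.0f * t ); // smoothstep falloff
        v.x += force.x * fall;
        v.y += force.y * fall;
        v.z += force.z * fall;
    }
}
```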

As for ragdolls, I've actually gotten quite close to having them - the character editor already supports joint editing, and the file format contains all the necessary structures. I just have to apply the final modifications to the editor, implement a basic generator, add the constraint limits and implement ragdoll behavior in the character system. All goals are very reasonable and within reach, though I'm not sure when I'll be able to get to them.


LD33 game

...is still in development, though I do have something to show:



In conclusion...

Since I've discovered lots of issues that should be addressed, such as...

  • insufficient entity script bindings
  • strong coupling between things, which forces me to #ifdef hacks into the code for every new project
  • missing editor transformation and grouping capabilities

...I'm probably going to address these first. Maybe I'll try to poke around embedded devices (Android perhaps, this time?) as well, try to make something work for them.

I've got so many games to rate as well, so the next update might be delayed due to there not being much to show after two weeks.


TACStrike: weeks of August 24th - September 13th

Published on Monday, 14 September 2015 08:28
Written by snake5

Ragdolls, AI and refactoring were the main points of interest during these 3 weeks, but there were other things I played with as well.


Most importantly, my LD33 game was finished at the beginning of the week and it looks rather serious.



There were also some attempts to make fog (volumetric) lighting.


It looks somewhat OK, but it needs some renderer improvements so I can make these effects faster and with less effort. Those, as well as other improvements, are scheduled to happen very soon. I'm also paving the way for cross-platform rendering, both GLES2 and D3D11.


As for refactoring, it can be easily described as moving all scripting and systems out of the game level. This consisted of two very obvious parts:

  • Extract systems (like level mesh data container, scripted sequence runner) from the level and make it possible to optionally attach them.
  • Implement scripting for systems and entities, so that they can provide a C++/BC (binding compiler) -based API for scripts.

I found these necessary when I wanted to implement a cutscene involving a character walking into a room from an elevator, with the elevator doors opening and closing. I didn't have cool scriptable stuff like this, so I had to cut the idea and implement basic camera movements and subtitles instead. Well, I still don't quite have it, but I'm much closer now.



Ragdolls

...'cause death animations are so 90s.


Seriously though, it's just another case of overwhelming asset creation costs being solved by technology. I intend to combine the previously created deformer and these ragdolls to produce something that looks as authentic as I can get it to be.

There are some things that had to be done to implement ragdolls:

  • implement visual transform (position, rotation) editing in character editor
  • implement body and joint editing and visualizations
  • convert old and half-done ragdoll animator code into new and fully working code

Some pictures and gifs from the process:



Cover system

There's nothing quite like a thick wall to protect you from gunfire.


The goal of this system is to provide the AI with information about where it could go to avoid gunfire. Inputs: the location of the attacker and a simplified version of the surroundings. Output: lines (and later points) where the character would be protected from gunfire.

Every somewhat advanced behavior needs lots of data to support it; this is my solution for movement with cover areas in mind. Beyond that, I don't think there's anything as data-heavy as covers. Teamwork, fact processing, psychological factors, dialogs - all of that can be done from within the enemy script. So when I'm done with covers, all these things should appear shortly after.

Things left to be done:

  • picking a random point from the generated lines
  • removing occupied positions from the generated lines (by convex intersection)
  • detecting if cover is low (crouch to hide, stand to shoot) or high (step inside to hide, step outside to shoot)
  • finding positions from which the attacker could be seen, with the option to find flanking paths
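The first item on the list can be sketched directly: treat all cover lines as one long segment and map a single random number u in [0,1) onto it, so that longer lines are proportionally more likely to be picked. (Names are illustrative; a real version would also fold in the distance weighting mentioned earlier.)

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct CoverPt { float x, y; };
struct CoverLine { CoverPt a, b; };

// Pick a point on the cover lines, length-weighted: u in [0,1) selects a
// position along the combined length of all lines.
CoverPt PickCoverPoint( const std::vector<CoverLine>& lines, float u )
{
    float total = 0.0f;
    std::vector<float> lens;
    for( const CoverLine& l : lines )
    {
        float dx = l.b.x - l.a.x, dy = l.b.y - l.a.y;
        lens.push_back( std::sqrt( dx*dx + dy*dy ) );
        total += lens.back();
    }
    float target = u * total;
    for( size_t i = 0; i < lines.size(); ++i )
    {
        if( target <= lens[ i ] || i == lines.size() - 1 )
        {
            float t = lens[ i ] > 0 ? std::min( target / lens[ i ], 1.0f ) : 0.0f;
            const CoverLine& l = lines[ i ];
            return { l.a.x + ( l.b.x - l.a.x ) * t,
                     l.a.y + ( l.b.y - l.a.y ) * t };
        }
        target -= lens[ i ];
    }
    return { 0, 0 }; // only reached with empty input
}
```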

Some work-in-progress pictures of cover queries:



As for future plans:

  • implement renderer upgrades
  • finish enemy reasoning helper systems
  • implement AI
  • implement one fighting level with arena-type scripting (respawns and score counter)
  • finish all this in a month or so, ending the weekly folders on a round number (this week it's "wk18", the last should be "wk20")

As for that last point, I'd like to move my publishing activities to a game developer forum once I have gameplay to show, and subsequently downgrade this website to a bunch of plain HTML pages (or minimal PHP at most). I want to make content, not administer websites, so I don't need this complex machine of things called a CMS that's slow, overly interactive and unsafe for no reason.

There are currently no plans to reboot Steam Greenlight; I'll have to see how fast content development goes, as I'd like to do it when I'm halfway there.


TACStrike: weeks of July 27th - August 9th

Published on Monday, 10 August 2015 06:36
Written by snake5

After another two weeks of mostly doing various fixes for existing tech, it's time to report. Let's start with an image:


Solid glass that cracks on the first shot and, well, bleeds bullets on the next. It's rendered in two layers, one is for darkening and the other is for damage (stains, cracks).


Scripted item

It was time to finish up, so editor support for scripted items was implemented.


What was necessary:

  • A dynamically reloading set of subproperties to be used in scripted item initialization
  • Scripted item enumeration and picking
  • Realtime preview

Additionally, some fixes were done for the whole system:

  • Convex hull generation from meshes
  • Full rigid body support (creating a moving rigid body from a convex hull, applying forces)
  • Upgrading hit event to include force info (position, velocity)

All of this was successfully implemented, so the only thing left is making a game entity that holds a scripted item. It should be done fairly soon, as there's just not much to do.


Objective item UI

...is more or less done, after some redesign and some added functions


Designed to support mouse scrolling, clicks, up/down buttons and some keyboard shortcuts. It uses the new multiline text rendering function that was ported from the sgs-sdl library, which was in turn ported from whatever previous engine I was working on. It has basic UTF-8 support - not that I need it yet, but it's nice to know it's there.


GFX (graphical effects)

Transparency, particles and decals.


Lots of things were done to make this look good.

  • Implemented decal system override for overlay (blood) decals on scripted items
  • Fixed decal lighting for scripted items
  • Fixed decal projection shader (using the depth fade texture) to avoid sharp edges
  • Particle collisions, with dynamics and event callbacks (example in first picture)
  • Implemented "multiply" blending mode for glass
  • ...and probably some more things I already forgot
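The depth-fade fix for sharp decal edges boils down to a small bit of shader math, shown here as plain C++ (names are illustrative - in the actual shader this feeds through the depth fade texture): compare the scene depth at the decal pixel with the decal surface depth and fade alpha out as they diverge.

```cpp
#include <algorithm>
#include <cmath>

// Returns a 0..1 alpha multiplier: fully visible when scene and decal depths
// match, fully faded once they differ by fadeDist or more.
float DecalDepthFade( float sceneDepth, float decalDepth, float fadeDist )
{
    float diff = sceneDepth - decalDepth;
    return std::clamp( 1.0f - std::abs( diff ) / fadeDist, 0.0f, 1.0f );
}
```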


Enemy AI

Facts, vision, hearing.


The enemy AI is now capable of storing bits of recent information about the state of the world. From that, it can partially extrapolate other data, such as the positions of other AI characters and the player.
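A minimal sketch of what such a fact might look like (the names are hypothetical): a remembered observation with a timestamp, from which a last-seen position can be extrapolated forward.

```cpp
struct FactVec3 { float x, y, z; };

// One remembered observation about a target; position can be projected
// forward in time using the last known velocity.
struct Fact
{
    FactVec3 position, velocity;
    float timeSeen;

    FactVec3 ExtrapolatePosition( float now ) const
    {
        float dt = now - timeSeen;
        return { position.x + velocity.x * dt,
                 position.y + velocity.y * dt,
                 position.z + velocity.z * dt };
    }
};
```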


Coming soon - lightmap upgrades

The reason for the delayed blog post.


Shadow generation was upgraded and optimized. High quality soft shadows are now unbelievably cheap, thanks to the idea of using raymarching for soft shadows and some hard work on my part implementing AABB trees for efficient distance sampling.
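The raymarched soft shadow idea, in its simplest form, looks like this - a single sphere stands in for the AABB-tree distance sampling here, and the names are illustrative: march toward the light and track how closely the ray grazes geometry, which gives the penumbra for free.

```cpp
#include <algorithm>
#include <cmath>

struct SV3 { float x, y, z; };

// Signed distance to a sphere - the stand-in distance field.
float SphereDist( SV3 p, SV3 c, float r )
{
    float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return std::sqrt( dx*dx + dy*dy + dz*dz ) - r;
}

// March from 'origin' along 'dir' toward the light; 0 = fully shadowed,
// 1 = fully lit, values in between form the penumbra. 'k' controls softness.
float SoftShadow( SV3 origin, SV3 dir, float maxT, float k,
                  SV3 sphereC, float sphereR )
{
    float res = 1.0f, t = 0.01f;
    while( t < maxT )
    {
        SV3 p = { origin.x + dir.x * t, origin.y + dir.y * t,
                  origin.z + dir.z * t };
        float d = SphereDist( p, sphereC, sphereR );
        if( d < 1e-4f )
            return 0.0f;              // ray hits geometry: fully shadowed
        res = std::min( res, k * d / t ); // penumbra from closest approach
        t += d;                           // sphere-trace step
    }
    return res;
}
```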

There are still some optimizations to be done, so it might take a bit more time to appear in game screenshots. But I'm getting there.

I should mention that this was not initially part of the plan, though. I was implementing the AABB tree and two-level raycasting (mesh instances/triangles - previously there was one big triangle soup) for the option to disable specific mesh instances when certain samples are rendered (in case I wanted to fix lighting for samples inside the meshes they're about to represent). But I do like the results and would like to have them in the game.


Some technical things

  • Asset compilation - needed a simple system for file transformation (texture packing, convex hull generation) that would generate all missing/outdated files with a single command
  • Load time optimization - used texture packing to avoid regenerating lightmaps all the time; (warm-start) load times for the test level are somewhere around ~0.5 seconds after the optimization


Since LD33 is coming soon, I'm inclined to participate (in the Jam this time, since I can't release the source yet - and hopefully by making an FPS). This means I have 1.5 weeks left to prepare. Here's a quick checklist of things I should really implement, ordered from most to least important, with the expected amount of time each could take.

  • Scripted item entity (1h) [done]
  • Basic enemy AI (3-8h)
  • Level transitions (3h, more with loading screens) [done]
  • Animation markers (2h) [done]
  • Weapon rendering (different depth range, same scene) (3h)
  • Incremental lightmap rendering (10h) [done, took ~30h]
  • Environment mapping (3-10h, depending on targeted feature set)
  • Ragdolls (10h)

With about 40h of free time left, it should be possible to do all of this - if that's all I do and there are no issues to deal with along the way. Realistically, the last one or two tasks will probably not get done.

We'll find out how it goes soon enough.


