AI level data processing tech and asset compilation were the two things I found most important in these two weeks. There were some other things, though.
...and it has pretty much replaced the Direct3D9 renderer at this point.
Seems to be working fine too.
The frequent bullet misses are no longer a problem
For quite some time I was wondering why it was so hard to hit the enemy. It turned out there were some issues with the hitbox raycasts. That's solved now.
There are still plans to improve the controls, making aiming more appropriate for gamepads. Currently it's decent only for mouse users; gamepad users might find it a constant struggle against an ever-changing time/position reference frame. Some might enjoy that, but from my observations they appear to be a minority who spent their childhood playing hardcore games - and I'd like the game to be more inclusive than that.
If I were an AI soldier, where would I stand...
After some thinking, I came to the conclusion that the most efficient placement for enemies that want to attack is... simply the inverse of standing in places shadowed by cover. The picture shows these places, calculated against the player's position.
So I implemented that, along with somewhat statistically correct point selection within those lines (taking distance into account) and line-sphere clipping to avoid stepping into positions that are already occupied.
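The line-sphere clipping step is simple enough to show. A minimal sketch (types and names here are illustrative, not the engine's): solve a quadratic for the parametric span where a placement segment passes through an occupied sphere, and keep the parts outside it.

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 operator - ( Vec3 a, Vec3 b ){ return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot( Vec3 a, Vec3 b ){ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Clip the parametric interval [0,1] of segment A->B against a sphere
// (an occupied position), appending the remaining sub-intervals to 'out'.
// Solves |A + t(B-A) - C|^2 = r^2 as a quadratic in t.
static void ClipSegmentBySphere( Vec3 A, Vec3 B, Vec3 C, float r,
    std::vector< std::pair< float, float > >& out )
{
    Vec3 D = B - A, M = A - C;
    float a = Dot( D, D );
    float b = 2 * Dot( M, D );
    float c = Dot( M, M ) - r * r;
    float disc = b * b - 4 * a * c;
    if( disc <= 0 || a == 0 ){ out.push_back({ 0, 1 }); return; } // no overlap at all
    float sq = sqrtf( disc );
    float t0 = ( -b - sq ) / ( 2 * a ), t1 = ( -b + sq ) / ( 2 * a );
    if( t1 <= 0 || t0 >= 1 ){ out.push_back({ 0, 1 }); return; }  // overlap outside the segment
    if( t0 > 0 ) out.push_back({ 0, t0 }); // piece before the sphere
    if( t1 < 1 ) out.push_back({ t1, 1 }); // piece after the sphere
}
```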
This concludes AI tech development, which means that all I have to do now is assemble the pre-made pieces into a proper virtual adversary.
How many times should polishing transforms be manually applied to work-in-progress art?
This is not all done just yet, but it already shows great promise - sometimes even more than that.
The goal of the asset compiler is to automate the redundant actions done by the artist (currently, that's only me), making art editing more pleasant and thus allowing more stuff to be created in less time. This is done with a tool where it's possible to register all assets and the actions they require.
Currently supported texture filters: color [de]linearization, resizing, sharpening (3 types of kernels), range expansion, brightness/contrast/gamma.
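To give an idea of the per-pixel math involved - the compiler's exact formulas may differ, this is just the textbook version, operating on normalized [0,1] channel values:

```cpp
#include <algorithm>
#include <cmath>

float ToLinear( float srgb ){ return powf( srgb, 2.2f ); }      // approximate sRGB -> linear
float FromLinear( float lin ){ return powf( lin, 1 / 2.2f ); }  // approximate linear -> sRGB

// range expansion: remap [lo,hi] to [0,1]
float ExpandRange( float v, float lo, float hi )
{
    return std::min( std::max( ( v - lo ) / ( hi - lo ), 0.0f ), 1.0f );
}

float BrightnessContrastGamma( float v, float brightness, float contrast, float gamma )
{
    v = ( v - 0.5f ) * contrast + 0.5f + brightness; // contrast around the midpoint, then offset
    v = std::min( std::max( v, 0.0f ), 1.0f );       // clamp to the valid range
    return powf( v, 1.0f / gamma );                  // gamma correction
}
```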
Mipmaps can be pregenerated if the format supports it. For texture input, everything stb_image supports can be loaded. If the format supports extended attributes, they're also saved.
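The pregeneration itself is a straightforward box filter. A sketch, assuming RGBA8 data and power-of-two sizes for brevity:

```cpp
#include <vector>
#include "stb_image.h"

// Average 2x2 blocks to produce the next mip level down.
std::vector< unsigned char > Downsample2x( const unsigned char* src, int w, int h )
{
    int nw = w / 2, nh = h / 2;
    std::vector< unsigned char > dst( nw * nh * 4 );
    for( int y = 0; y < nh; ++y )
    for( int x = 0; x < nw; ++x )
    for( int c = 0; c < 4; ++c )
    {
        int sum = src[ ( ( y * 2     ) * w + x * 2     ) * 4 + c ]
                + src[ ( ( y * 2     ) * w + x * 2 + 1 ) * 4 + c ]
                + src[ ( ( y * 2 + 1 ) * w + x * 2     ) * 4 + c ]
                + src[ ( ( y * 2 + 1 ) * w + x * 2 + 1 ) * 4 + c ];
        dst[ ( y * nw + x ) * 4 + c ] = (unsigned char)( sum / 4 );
    }
    return dst;
}

// usage: load level 0 with stbi_load( path, &w, &h, &n, 4 ),
// then call Downsample2x repeatedly until a 1x1 level is reached
```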
Here's a comparison between the generated texture and the original image:
The red tint was removed, the color range expanded and the image sharpened. And thanks to the texture duplication option, I can easily transfer this set of filters to another texture.
As for models, they're much simpler - data is loaded using Assimp, material parameters and some fixed filters are applied, and textures are linked to them.
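The Assimp side of that is fairly compact. A sketch - the postprocess flag choice here is illustrative, not necessarily what the compiler uses:

```cpp
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

const aiScene* LoadModel( Assimp::Importer& importer, const char* path )
{
    const aiScene* scene = importer.ReadFile( path,
        aiProcess_Triangulate | aiProcess_CalcTangentSpace | aiProcess_JoinIdenticalVertices );
    if( !scene )
        return nullptr; // importer.GetErrorString() has the details

    // each mesh links to a material, which in turn holds the texture paths
    for( unsigned i = 0; i < scene->mNumMeshes; ++i )
    {
        aiMaterial* mtl = scene->mMaterials[ scene->mMeshes[ i ]->mMaterialIndex ];
        aiString texPath;
        mtl->GetTexture( aiTextureType_DIFFUSE, 0, &texPath );
    }
    return scene;
}
```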
One unusual little feature - the idea for which came from my breakable glass implementation and the way Assimp handles model parts - is that I can export the same model part more than once with different settings. This is useful in situations that require multipass rendering (like glass). For example, the first pass uses the "multiply" blend mode to render the glass tint, while the second pass renders the dirt/crack layer, which needs the default blend mode.
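For reference, in Direct3D 9 terms those two blend setups would look roughly like this (the draw calls are placeholders):

```cpp
#include <d3d9.h>

void DrawGlassTwoPass( IDirect3DDevice9* device )
{
    device->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );

    // pass 1: glass tint, "multiply" blend (dst = src * dst)
    device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_ZERO );
    device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_SRCCOLOR );
    // ... draw the tint pass here ...

    // pass 2: dirt/crack layer, default alpha blend
    device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
    device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );
    // ... draw the dirt/crack pass here ...
}
```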
Now, this isn't all fun and games - I still have to figure out how to handle asset revisions and caching in a way that deals with changes in source art, changes in scripts, relocated asset outputs, overwritten assets and so on.
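One possible direction - just a sketch of an approach, not something I've settled on: key the cache on a hash of the source file contents combined with the serialized filter settings, and rebuild whenever the stored hash for an output path no longer matches.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct AssetCache
{
    std::unordered_map< std::string, uint64_t > entries; // output path -> combined hash

    // srcAndScriptHash covers both the source art and the asset's script entry,
    // so a change in either triggers a rebuild
    bool NeedsRebuild( const std::string& outPath, uint64_t srcAndScriptHash ) const
    {
        auto it = entries.find( outPath );
        return it == entries.end() || it->second != srcAndScriptHash;
    }
    void Update( const std::string& outPath, uint64_t srcAndScriptHash )
    {
        entries[ outPath ] = srcAndScriptHash;
    }
};
```

Keying on the output path would also catch relocated outputs, though overwritten output files would still need a direct check.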
It is also likely that I'll generate SVG fonts here and implement some basic file copying. I'm not sure what to do about maps, particle systems and characters, but it's likely that they'll stay unchanged for now.
The current goal - a prototype. Having most of my tech prepared for this, I should finally start putting things together. As they say in the movies - this is the moment of truth...
Renderer upgrades and miniLD62 were the activities of these two weeks - one each.
Everything should look pretty much the same... but be better.
A lot of code was extracted from the D3D9 renderer and generalized to make it applicable to all renderers.
This required the creation of some new systems, but in the end renderer code was reduced and made more reliable. These changes also opened some doors regarding options to override mesh data: all materials are now stored in the mesh instance, while meshes themselves only keep texture paths, shader names and material flags.
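The resulting split looks roughly like this (field names are illustrative, not the engine's actual ones):

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct MeshPart
{
    std::string textures[ 8 ];   // texture paths only - no live resources
    std::string shaderName;
    uint32_t    materialFlags;
};

struct Mesh
{
    std::vector< MeshPart > parts; // shared, loaded once
};

struct Material { /* live textures, shader handle, constants... */ };

struct MeshInstance
{
    Mesh* mesh;
    std::vector< Material > materials; // per-instance, freely overridable
};
```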
Another interesting product of these changes is the function GR_PreserveResource() - it takes any resource and stores it in a set for exactly two frames, preventing constant reloads without an explicit member variable (those aren't a good option in overridable interface callbacks).
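In simplified C++ the idea is roughly this (everything besides GR_PreserveResource() is illustrative shorthand):

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <utility>

struct Resource { /* texture, mesh, ... */ };
using ResourceHandle = std::shared_ptr< Resource >;

static std::unordered_map< Resource*, std::pair< ResourceHandle, uint32_t > > g_preserved;
static uint32_t g_frameIndex;

// keep a reference so the resource can't be unloaded between frames
void GR_PreserveResource( ResourceHandle res )
{
    g_preserved[ res.get() ] = { res, g_frameIndex };
}

// called once per frame; drops references that are 2+ frames old
void GR_UpdatePreservedResources()
{
    ++g_frameIndex;
    for( auto it = g_preserved.begin(); it != g_preserved.end(); )
    {
        if( g_frameIndex - it->second.second >= 2 )
            it = g_preserved.erase( it );
        else
            ++it;
    }
}
```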
Some things still aren't extracted from renderers, such as shader uniform buffers and... I guess that's about it. Also, projectors haven't been fully ported yet. But I'm working on it, so upgrades are expected before OpenGL support is finally implemented.
One jam per month seems to be the new standard lately.
So this is just a basic, unfinished first-person shooter. Whoever finds a mistake in it - rest assured that I already know about it, and many more.
I decided to build this as a focus shift experiment, and it seems to have worked rather well for its purpose. I worked a lot on player action feedback and even implemented a help text renderer to create an intro tutorial. So far no one has said that the game is hard, so that's a good thing. Difficulty can be built up once the tools to overcome it have been properly presented first.
The project was also an art production experiment. I seem to have mastered two new ways to create textures. One is drawing a pixelart bumpmap, then scaling it and converting it to a normal map - an easy way to get fairly decent sci-fi metal surface normal maps.
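The bumpmap-to-normal-map step is just central differences over the heightmap. A minimal sketch - the sign convention and strength factor are arbitrary choices here:

```cpp
#include <cmath>

// height: w*h heightmap values; rgb: w*h*3 output normal map
void HeightToNormal( const float* height, unsigned char* rgb,
    int w, int h, float strength )
{
    for( int y = 0; y < h; ++y )
    for( int x = 0; x < w; ++x )
    {
        // neighboring samples, wrapped so the result tiles
        float hl = height[ y * w + ( x + w - 1 ) % w ];
        float hr = height[ y * w + ( x + 1 ) % w ];
        float hu = height[ ( ( y + h - 1 ) % h ) * w + x ];
        float hd = height[ ( ( y + 1 ) % h ) * w + x ];
        float nx = ( hl - hr ) * strength;
        float ny = ( hu - hd ) * strength;
        float nz = 1.0f;
        float len = sqrtf( nx * nx + ny * ny + nz * nz );
        unsigned char* px = &rgb[ ( y * w + x ) * 3 ];
        px[ 0 ] = (unsigned char)( ( nx / len * 0.5f + 0.5f ) * 255 );
        px[ 1 ] = (unsigned char)( ( ny / len * 0.5f + 0.5f ) * 255 );
        px[ 2 ] = (unsigned char)( ( nz / len * 0.5f + 0.5f ) * 255 );
    }
}
```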
The other method is to render textures in Blender using composition nodes to route all components to the necessary outputs. This allows geometric/tiled/special surfaces to be rendered easily.
You might also notice that dynamic point lights are now available in the editor. This is another fairly new thing. The editor support was there, but the level format didn't support it... and now it does.
That's about all for now. Unfortunately, no advances were made in terms of AI this time. Scripting had some minor upgrades that I didn't get to use, and it looks like there are going to be more of those.
I'd like to keep working on this project just long enough to reach a technically feature-complete version. I'd like to see that the engine can deal with everything I throw at it, and more, before proceeding.
After that, back to TACStrike AI and perhaps another game jam.
Ragdolls, AI and refactoring were the main points of interest during these three weeks, but there were other things that I played with as well.
Most importantly, my LD33 game was finished at the beginning of the week and it looks rather serious.
There were also some attempts at making volumetric fog lighting.
It looks somewhat OK, but it needs some renderer improvements that would let me make these effects faster with less effort. Those, as well as other improvements, are scheduled to happen very soon. I'm also paving the way for cross-platform rendering, both GLES2 and D3D11.
As for refactoring, it can be easily described as moving all scripting and systems out of the game level. This consisted of two very obvious parts:
I found these necessary when I wanted to implement a cutscene involving a character walking into a room from an elevator, with the elevator doors opening and closing. I didn't have cool scriptable stuff like this, so I had to cut the idea and implement basic camera movements and subtitles instead. Well, I still don't quite have it, but I'm much closer to it now.
...'cause death animations are so 90s.
Seriously though, it's just another case of overwhelming asset creation costs being solved by technology. I intend to combine the previously created deformer and these ragdolls to produce something that looks as authentic as I can get it to be.
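The basic recipe is well known: a rigid body per major bone, plus a constraint to its parent. A sketch, assuming a Bullet-style physics API (an assumption for illustration - not necessarily the physics layer the engine actually uses):

```cpp
#include <btBulletDynamicsCommon.h>

// one capsule rigid body per major bone (ownership left to the caller)
btRigidBody* MakeBoneBody( float length, float radius, float mass,
    const btTransform& boneWorldTf )
{
    btCapsuleShape* shape = new btCapsuleShape( radius, length );
    btVector3 inertia( 0, 0, 0 );
    shape->calculateLocalInertia( mass, inertia );
    btRigidBody::btRigidBodyConstructionInfo info(
        mass, new btDefaultMotionState( boneWorldTf ), shape, inertia );
    return new btRigidBody( info );
}

// a cone-twist joint limiting swing/twist between a bone and its parent
btConeTwistConstraint* MakeBoneJoint( btRigidBody& parent, btRigidBody& child,
    const btTransform& frameInParent, const btTransform& frameInChild )
{
    btConeTwistConstraint* joint =
        new btConeTwistConstraint( parent, child, frameInParent, frameInChild );
    joint->setLimit( SIMD_PI * 0.25f, SIMD_PI * 0.25f, SIMD_PI * 0.1f ); // swing1, swing2, twist
    return joint;
}
```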
There are some things that had to be done to implement ragdolls:
Some pictures and gifs from the process:
There's nothing quite like a thick wall to protect you from gunfire.
The goal of this system is to provide the AI with information about where it could go to avoid gunfire. Inputs: the location of the attacker and a simplified version of the surroundings. Output: lines (and later, points) where the character would be protected from gunfire.
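A minimal 2D sketch of the core idea (types and the standoff parameter are mine, for illustration): pushing a cover edge away from the attacker along the rays through its endpoints yields the protected line behind it.

```cpp
#include <cmath>

struct Vec2 { float x, y; };
static Vec2 operator - ( Vec2 a, Vec2 b ){ return { a.x - b.x, a.y - b.y }; }
static Vec2 operator + ( Vec2 a, Vec2 b ){ return { a.x + b.x, a.y + b.y }; }
static Vec2 operator * ( Vec2 a, float s ){ return { a.x * s, a.y * s }; }
static Vec2 Normalize( Vec2 v )
{
    float len = sqrtf( v.x * v.x + v.y * v.y );
    return { v.x / len, v.y / len };
}

struct Segment { Vec2 a, b; };

// attacker: shooter position; cover: wall edge; standoff: how far behind
// the edge a character should stand; distance limits, occupancy clipping
// and point selection come on top of this
static Segment ShadowLine( Vec2 attacker, Segment cover, float standoff )
{
    Vec2 da = Normalize( cover.a - attacker );
    Vec2 db = Normalize( cover.b - attacker );
    return { cover.a + da * standoff, cover.b + db * standoff };
}
```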
For every somewhat advanced behavior, there has to be lots of data to support it. This is my solution for movement with cover areas in mind. Beyond covers, I don't think there's anything as data-heavy left. Teamwork, fact processing, psychological factors, dialogs - all of that can be done from within the enemy script. When I'm all done with covers, all these things should appear shortly after.
Things left to be done:
Some work-in-progress pictures of cover queries:
As for future plans:
As for that last point, I'd like to move my publishing activities to a game developer forum once I have gameplay to show, and subsequently downgrade this website to a bunch of plain HTML pages (or minimal PHP at most). I want to make content, not administer websites, so I don't need this complex interactive machine of a CMS that's slow and unsafe for no reason.
There are currently no plans to reboot Steam Greenlight - I'll have to see how fast content development goes, as I'd like to do it when I'm halfway there.