The goal is to have a bunch of different playable animals in We Of The Woods.
Each will have different goals and survival mechanics. Can’t get much more different than a giant spider 🙂
It’s quite fun to work on two games at once. It can provide some relief to switch projects when things are a bit busy. A palate cleanser, so to speak. Especially when switching from a VR game like SuperMegaMega.
So I’ve spent some more time working on another team project called We Of The Woods recently. It’s quite a fun project to play with. Procedural generation is one of my favourite things these days and the entire world is generated fresh each time it’s played.
One of the things the game has been lacking is any form of decent AI for the animals populating the world. The main reason is that the animals had no knowledge of the colliders or the paths available to them.
So it’s obvious what had to be done first:
The nav grid is regenerated whenever the player moves a specified distance. This should let the animals navigate around the colliders more easily, giving the appearance of intelligence. Hunting should also feel better if the animals react properly to encountering a wall. Currently they just turn around and run in the other direction without considering where the player is coming from.
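Here’s a minimal sketch of the distance-triggered rebuild, assuming a simple MonoBehaviour setup; the names (NavGridUpdater, RegenerateGrid, regenDistance) are mine for illustration, not the actual project code:

```csharp
using UnityEngine;

// Hypothetical sketch only: rebuild the nav grid once the player has
// moved far enough from the last generation point.
public class NavGridUpdater : MonoBehaviour
{
    public Transform player;            // assumed reference to the player
    public float regenDistance = 10f;   // how far the player moves before a rebuild
    private Vector3 lastRegenPosition;

    void Start()
    {
        lastRegenPosition = player.position;
        RegenerateGrid();
    }

    void Update()
    {
        // Regenerate whenever the player has moved the specified distance.
        if (Vector3.Distance(player.position, lastRegenPosition) >= regenDistance)
        {
            lastRegenPosition = player.position;
            RegenerateGrid();
        }
    }

    void RegenerateGrid()
    {
        // Placeholder: sample the colliders around the player to mark each
        // cell walkable or blocked, e.g. with Physics.CheckSphere per cell.
    }
}
```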
Now that the navigation works, it should just be a matter of tweaking some reaction behaviours for each of the animal types. The goal is to make them react convincingly when being chased by a wolf or bear… 🙂
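As a rough idea of what a smarter reaction could look like, here’s a hedged sketch that flees away from the threat’s position instead of blindly reversing; FleeReaction and its fields are hypothetical names, not the game’s code:

```csharp
using UnityEngine;

// Hypothetical sketch: run away from where the threat is coming from.
public class FleeReaction : MonoBehaviour
{
    public Transform threat;  // the wolf, bear or player chasing this animal
    public float speed = 5f;

    void Update()
    {
        if (threat == null) return;

        // Direction pointing directly away from the threat.
        Vector3 away = transform.position - threat.position;
        away.y = 0f;                              // stay on the ground plane
        if (away.sqrMagnitude < 0.0001f) return;  // threat is right on top of us
        away.Normalize();

        transform.rotation = Quaternion.LookRotation(away);
        transform.position += away * speed * Time.deltaTime;
        // In the real game this direction would be fed through the nav grid
        // so the animal also steers around walls instead of running into them.
    }
}
```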
fun fun!
-Ryan
So once again it’s been far too long since I wrote a devlog… mainly because I haven’t had anything to say! But also because I’ve been very busy on other projects and going to GDC and moving house/office.
Now things have settled down, development can kick back into gear. Super Mega Mega is coming up to a cool point in development. The core of the coding is done and the majority of work now lies in content generation.
I’m currently working on getting a more complete, playable version of the game ready to demo and get feedback on. I might even release this one publicly.
The game has evolved to be an interesting blend of games like MegaMan, Zelda and The Binding of Isaac. I want a highly replayable, short-session experience that works both with and without VR.
Now comes the point where the real game design work needs to be sorted out… I’m hoping to have a version working well for the upcoming Beer & Pixels event, but I’m not sure I’ll have the time. I really need to get some feedback on this soon! Game design is not my strong suit… yet! I’m going to have to focus and knuckle down to pull something cool out here. A challenge!
Cheers!
-Ryan
Revisiting randomness
My recent work on SuperMegaMega has revolved around figuring out a lot of the desired game play and story.
Part of that process has led me back to considering procedural generation again.
I want to have a game that won’t require massive play sessions but is still appealing each time it’s played. The sense of progression is also important.
For examples of this I have been playing a lot of the recent roguelike evolutions that have become popular.
Some of them use random content in a way that is still very controlled. For example, “The Binding of Isaac” uses a large set of hand-made rooms (over 1000) that are chosen from as building blocks for the larger level layouts. “Rogue Legacy” does a similar thing, while Spelunky uses a more fine-grained version of essentially the same idea.
So now SMM is using this pattern with its own twist. Pre-made rooms and structures give more control over the player experience. The main restriction is that all the rooms are based around a cylindrical structure. As long as there are a shit load of variants to select from, it will seem very random. I just need to make sure the quality level is maintained over the entire set of rooms.
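To illustrate the pattern, here’s a minimal sketch of picking a hand-made room variant by category; RoomTemplate, RoomSelector and the category strings are all my own hypothetical names, not the real SMM types:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: hand-made rooms act as building blocks, and the
// generator just picks a random variant of the kind of room it needs.
public class RoomTemplate
{
    public GameObject prefab;
    public string category; // e.g. "start", "combat", "treasure"
}

public static class RoomSelector
{
    // Pick a random variant of the requested category. With a big enough
    // pool of variants the result feels random to the player, while each
    // individual room stays hand-authored and quality-controlled.
    // (Assumes at least one variant exists per category.)
    public static RoomTemplate Pick(List<RoomTemplate> pool, string category, System.Random rng)
    {
        var matches = pool.FindAll(r => r.category == category);
        return matches[rng.Next(matches.Count)];
    }
}
```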
The actual gameplay loop is currently being experimented with. So far I’m leaning towards a permadeath system with unlockable weapons and events. I won’t really know what works until I get some more play testing done!
Lots of enemies, weapons, items and levels to make!
Also, my new phone is awesome! Note3 ftw
-Ryan
One thing I’ve been thinking about recently is motion sickness and VR. My wife tried the ‘Titans Of Space’ demo and got really sick in record time.
This really surprised me! I figured this demo was one of the easiest for first-timers. Perhaps my perception of what has the potential to make someone sick is a bit out of whack.
I think it’s important to try and minimise the barf inducing effects but I fear it’s (at least partially) unavoidable.
I think Super Mega Mega makes a pretty small percentage of people feel sick. The one part I’m worried about is the screen-shake effect.
There are lots of technical things I’ve read and acted upon, and I don’t want to repeat them here, but motion sickness is certainly a potential wall between VR tech and the mainstream. At least it seems the consumer version of the Rift will be targeting a much lower latency… which could fix most of the problems and make everyone happy. Games just need to keep up their end of the bargain.
Anyway, I think I’ve decided to have an options menu that allows the potential chunder busters to be switched off. That should hopefully cater to most people.
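Something as simple as this sketch would do; ComfortOptions and ScreenShake are illustrative names, not the game’s actual code:

```csharp
using UnityEngine;

// Hypothetical sketch: a comfort setting that gates the screen-shake effect.
public static class ComfortOptions
{
    public static bool ScreenShakeEnabled = true; // set from the options menu
}

public class ScreenShake : MonoBehaviour
{
    public float magnitude = 0.1f;
    private Vector3 basePosition;

    void Awake() { basePosition = transform.localPosition; }

    public void Shake()
    {
        // Skip the effect entirely for players who have opted out.
        if (!ComfortOptions.ScreenShakeEnabled) return;
        transform.localPosition = basePosition + Random.insideUnitSphere * magnitude;
    }

    void LateUpdate()
    {
        // Ease back toward the rest position each frame.
        transform.localPosition = Vector3.Lerp(transform.localPosition, basePosition, 10f * Time.deltaTime);
    }
}
```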
Back to work! Hoping to have a new playable demo available in the next 2-3 weeks. It all depends on how much time I get to work on it.
-Ryan
Now something a bit more technical and rambling… But not too bad 😉 … again this will help me more than anyone else 😛
With the initial prototype for Super Mega Mega, the world dynamically rotates around its centre point.
This was chosen to ensure the VR head tracking camera wouldn’t have any major issues with things like motion sickness.
Also, the player movement code becomes a lot simpler when the player isn’t actually moving and the level just rotates on a single axis.
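A minimal sketch of that idea, assuming a cylindrical level spinning on its vertical axis; LevelRotator and degreesPerSecond are hypothetical names:

```csharp
using UnityEngine;

// Hypothetical sketch of the "rotate the world, not the player" approach.
public class LevelRotator : MonoBehaviour
{
    public float degreesPerSecond = 90f;

    void Update()
    {
        float input = Input.GetAxis("Horizontal"); // -1..1 from the player
        // The player stays at a fixed point in front of the camera; the
        // whole cylindrical level spins around its axis underneath them.
        transform.Rotate(Vector3.up, -input * degreesPerSecond * Time.deltaTime, Space.World);
    }
}
```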
The problem now is performance. I’ve started to push the detail level of all the levels up a lot, and that has increased the draw call and vertex counts significantly.
Deferred rendering is being used to get some nice lighting and post effects but it really hammers the draw call count.
To reduce this I have a few options:
– Reduce the vertex count of the individual blocks significantly (Unity dynamic batching doesn’t allow more than 900 vertex attributes, which means about 200-300 vertices for me), which would make things look shitty… potentially only an option if the detail is replaced with normal maps. This would add a lot more work to my pipeline… not cool, but it has potential.
– Leave it where it is and just decrease the level sizes. Not cool either! Seeing the tall tower stuff is one of the coolest bits.
– Switch all the mesh blocks to static objects and change the movement code to adapt. This is the option I’m currently testing out (a rough sketch of the camera side follows below). Draw calls are reduced significantly, but now the camera has to be rotated to follow the player. This might screw with your head when wearing the VR headset… I must really test this ASAP. If it doesn’t cause any problems I’ll switch all the levels into this static mode. Re-writing the movement will be a pain, but it’s a one-off thing.
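Here’s what that camera follow could look like in its simplest form; this is only a sketch under my own assumptions (a vertical cylinder axis at the world origin, illustrative names like OrbitFollowCamera):

```csharp
using UnityEngine;

// Hypothetical sketch of option three: the level is static, so the camera
// orbits the cylinder's axis to stay behind the player.
public class OrbitFollowCamera : MonoBehaviour
{
    public Transform player;
    public float cameraDistance = 8f; // how far outside the cylinder to sit

    void LateUpdate()
    {
        // Direction from the cylinder's vertical axis out to the player.
        Vector3 axisPoint = new Vector3(0f, player.position.y, 0f);
        Vector3 outward = (player.position - axisPoint).normalized;

        transform.position = player.position + outward * cameraDistance;
        transform.LookAt(player.position);
        // The worry: in VR this extra camera rotation may fight the head
        // tracking, so it needs testing with the headset on.
    }
}
```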
If anyone has any pearls of wisdom here.. feel free to drop them on me @bluntgames
Anyway it’s good to be back in the middle of it all. Hoping to be able to share some awesome new stuff early in the new year!
-Ryan
p.s.
After writing this I think I’ve decided to embrace the ‘all normal-mapped future’. This will restrict the vertex count of the blocks, but I hope the normal-map detail will compensate nicely. My only concern is the loss of quality in the silhouettes. I’ve also solved the non-manifold mesh generation that was plaguing the tool used for the current models. Perhaps the best solution is a happy mix of dynamically batched models and auto-combined models.
The attraction of having everything in the world dynamic (destructible and movable) is far too appealing to ignore.
The word of the day is Random.
This is what the levels are going to be in Super Mega Mega.
The initial intention was to create hand-crafted levels with a story flowing through them. Instead, it’s now going to be a randomly generated cylindrical level, with story elements woven into the gameplay without directly controlling the flow.
A couple of games out there are already doing the randomly generated level thing… (not just a couple… lots) so it’s not an unsolved problem. I don’t see there being too many problems… but I’ve been wrong before.
The first thing to do is adapt the current level editor structures to allow me to create the smaller level building blocks to use in the generator. I think this is the way Spelunky handles the problem and it should be a good fit.
I haven’t decided on the block/level dimensions yet. It’s a bit hard to decide until I’ve tried it out. The nature of the 360-degree levels means my first try might not work at all. It’s just a matter of giving it a go. Time to code up the system, plug some numbers in with some test data, and see if it makes a decent level.
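To make the idea concrete, here’s a hedged first-pass sketch of tiling room blocks around a cylinder; every name and dimension in it (CylinderLevelGenerator, columns, radius and so on) is an illustrative guess, not the real generator:

```csharp
using UnityEngine;

// Hypothetical sketch: tile hand-made room blocks around a 360-degree cylinder.
public class CylinderLevelGenerator : MonoBehaviour
{
    public GameObject[] roomTemplates; // hand-made building blocks
    public int columns = 12;           // blocks around the circumference
    public int rows = 8;               // blocks up the tower
    public float radius = 20f;
    public float rowHeight = 4f;

    void Start()
    {
        var rng = new System.Random();
        float anglePerColumn = 360f / columns;

        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
            {
                var template = roomTemplates[rng.Next(roomTemplates.Length)];
                float angle = col * anglePerColumn * Mathf.Deg2Rad;
                var pos = new Vector3(Mathf.Sin(angle) * radius,
                                      row * rowHeight,
                                      Mathf.Cos(angle) * radius);
                // Face each block outward from the cylinder's axis so the
                // ring wraps seamlessly through the full 360 degrees.
                var rot = Quaternion.LookRotation(new Vector3(pos.x, 0f, pos.z));
                Instantiate(template, pos, rot, transform);
            }
        }
    }
}
```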
Needed to make a better logo, so here’s a new revision. I haven’t decided if it’s the final one yet.
I took a little development break these past couple of weeks, mainly to prepare for the birth of my new kid (it’s a boy, yay!)… but I’ve had some time to start getting back into some work…
– One of the problems I noticed when using ray casts for the “Is the player looking at something?” test is their high accuracy. Most of the time the player is not actually looking directly at their desired target, and it’s not obvious why it isn’t being highlighted; it can be frustrating moving your head around trying to get the exact positioning.
I’m trying to ditch the ray casting and use the Unity trigger system to filter the objects that enter/exit/stay in the player’s view. First tests say it will work well. The technical parts work fine this way (so did ray casting)… but now there’s more room for a fudge factor to play with. Previously I would have had to cast a bunch more rays to get a better indication of what the player could possibly be looking at. I won’t know for sure until I get some more play testing done.
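As a rough sketch of the trigger-based approach (assuming a trigger collider shaped like a view cone parented to the camera, plus a kinematic Rigidbody so the trigger events fire; LookCone and BestTarget are my own hypothetical names):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: a trigger volume on the camera collects candidate
// objects, then the most central one wins.
public class LookCone : MonoBehaviour
{
    private readonly HashSet<Transform> candidates = new HashSet<Transform>();

    void OnTriggerEnter(Collider other) { candidates.Add(other.transform); }
    void OnTriggerExit(Collider other)  { candidates.Remove(other.transform); }

    // Return whichever candidate is closest to the centre of the view,
    // within a fudge factor, instead of demanding a pin-point ray hit.
    public Transform BestTarget()
    {
        Transform best = null;
        float bestDot = 0.8f; // the fudge factor: lower = more forgiving
        foreach (var t in candidates)
        {
            Vector3 toTarget = (t.position - transform.position).normalized;
            float dot = Vector3.Dot(transform.forward, toTarget);
            if (dot > bestDot) { best = t; bestDot = dot; }
        }
        return best;
    }
}
```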
– A quick improvement for the demo is to make the player character visible when behind blocks… adding this will greatly improve the playability in a few parts of the game. It will also mean I don’t have to worry about hiding the player as much. Could maybe also use it for enemies and things… don’t know yet.
– The game still needs more VR specific game mechanics. I’m experimenting with a few different things still. I’ll have to add them in for the next play test for sure.
out!
-Ryan