VR DevLog 9 : Post Prototype

By SuperMegaMega | July 22, 2013 | Journal, VR

Prototype complete.

Now that that’s done, it’s time to start fleshing out more of the mechanics and level structures.

I’ll be avoiding anything theme/art related for a little while longer and just sticking to adding a bunch of the platformer ‘tools’ (moving platforms, variable physics, enemy types, bosses, etc.). This should provide a good base to build the rest of the game upon.

I created a few hacks during the prototype phase, and I’ll need to throw a bunch of them away and rebuild them in a nicer, more user-friendly way. For example, each Unity scene stands on its own with no shared data or configuration files. That makes it nice and easy to load/reset a level, but it also means every level needs to be updated if any of the global configs change. This is first on the list!
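To illustrate the idea of pulling globals out of the individual levels, here’s a minimal sketch, in generic Python rather than Unity C#. The file name, keys, and helper functions are all made up for the example; the point is just that globals live in one shared file and each level reads them at load time:

```python
import json

# Hypothetical shared config: global settings live in one place
# instead of being duplicated inside every scene/level.
GLOBAL_CONFIG = {
    "gravity": -9.81,
    "player_speed": 6.0,
    "respawn_delay": 1.5,
}

def save_config(path):
    with open(path, "w") as f:
        json.dump(GLOBAL_CONFIG, f, indent=2)

def load_level(level_data, config_path):
    """Each level pulls globals from the shared file at load time,
    so changing a global never requires touching the levels."""
    with open(config_path) as f:
        config = json.load(f)
    # Level-specific values may still override a global where needed.
    return {**config, **level_data}

save_config("globals.json")
level = load_level({"name": "level_1", "gravity": -4.0}, "globals.json")
```

With this shape, tweaking `player_speed` in one file updates every level, while a level can still override a single value (here, a low-gravity level) without duplicating the rest.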

The level editor will need to flow a lot better than it currently does. I’ve had to manually place a few pieces of the levels because I didn’t have time to create the editor support. Most of it is non-trivial, but it’s just a matter of getting in there. Manually placing tiles instead of painting them is some kind of hell.
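The painting-versus-placing distinction above boils down to a small bit of editor logic: instead of one click per cell, a single drag stamps every grid cell the brush sweeps over. A toy sketch in Python (the grid representation and function names are invented for illustration):

```python
# Toy tile "painting": a drag stamps every cell the brush passes over,
# rather than requiring one click per tile.
def paint_stroke(grid, cells, tile):
    """Apply `tile` to each (row, col) cell the drag passed through."""
    for r, c in cells:
        grid[r][c] = tile
    return grid

# An 3x8 grid of empty tiles.
grid = [["." for _ in range(8)] for _ in range(3)]

# One horizontal drag across row 1 paints six tiles in a single gesture.
paint_stroke(grid, [(1, c) for c in range(1, 7)], "#")
print("".join(grid[1]))  # -> .######.
```

In a real editor the `cells` list would come from the mouse positions sampled during the drag, snapped to the grid.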

During my short time at PAX AUS I managed to snag some time on the newer HD Oculus Rift. The first thing you notice is the massive improvement in clarity, especially in the effective viewing area directly in front of the eye. You have to look for the screen-door effect, and it’s quickly forgotten. Text is actually legible now, making it a bit easier to get some useful menu interfaces working.

I’ll be ordering the next dev kit whenever it’s ready. I don’t know what specs the consumer Rift will have, but I don’t think they’ll be far off. I’d be happy just to have positional tracking added to the HD screen. More improvements can come later!

So far I’d say any version of the Rift will have a limited session time for most people. The biggest factor here is the brain’s exhaustion from having everything in the scene in sharp focus all the time. We need eye-tracking support to allow dynamic depth-of-field calculations. Hopefully that’s not too far off.

Back to it!
-Ryan