23 March 2016

Trying out the HTC Vive

On the way back from a meeting in London yesterday I found time to queue up at the Currys on Tottenham Court Road to try out the HTC Vive. The queue was about a dozen people long (and longer by the time I left), and with a 15 minute demo each I had about an hour's wait. There was an amiable and knowledgeable HTC expert supervising the whole thing and ready to answer questions, and on his monitor (above) you could see what the Vive user was experiencing. The space was about 5m x 5m with the positional sensors in each corner.

So what was the experience like?

First, the graphics were actually underwhelming. There was a definite step change from DK1 to DK2, but DK2 is now worse than the best smartphone solutions, and the Vive really was no different from those. There was still a slight "gauze" over the screen, and the edges of the screen were visible (perhaps better headset adjustment would have helped?), so no real sense of higher resolution or a wider field of view. It may be that the HTC is a few percentage points better, but not by enough of a difference that you (or at least I) would notice. I also thought that the headset was heavier than the DK2, but apparently I was the only person who'd commented on its weight.

Where the HTC did score was in the integration of the peripherals. The handheld controllers were OK, if a little oddly shaped, but they meant that whatever you were holding in-world appeared where your hands/controllers were. The first demo is a simple shoot-em-up, and you can hold either a shield or a gun in each hand - just fling a hand back over your shoulder to change. Using the shield/guns was totally natural, just point and shoot. You could also move around the platform a bit (although there was little need), and a green fence appeared when you began to stray towards the limit of the area (no sign of the forward-facing camera's bleed-through of reality).

The second demo was Job Simulator. You're in a cartoony office cubicle just full of stuff to play with. The controllers now appear as white-gloved cartoony hands (no arms), and pulling the trigger causes them to grip an object (although it's then more a case of the object moving with the hands than being gripped by them). I had great fun trying to build a Jenga stack out of donuts and coffee cups. You could fill a mug with coffee and tip the coffee all over the place. Throw physics worked well, both for donuts and paper planes. There was a computer that you could access, but it showed just very large font text (I forgot to check the smallest font size I could read - a good test of resolution). When I dropped a donut on the floor I could kneel down and peer under the virtual desk, and the trackers did their job of keeping everything in sync - all very natural.

And that was it. I only realised later that the person before me (image above) had also had the chance to use the painting app, but I guess the guy thought the queue was getting too long. A pity, as I'd hoped to see if I could draw a solid-looking house.

Conclusions?

Graphics are OK but not wondrous, and the £200+ price hike over the Oculus probably won't be worth it for that (or possibly even over the Samsung or Cardboard with a decent phone - and I can't say I noticed that lag was significantly better). The peripheral integration was great though, but in reality you could build that into any system - it's just a hassle to do, and having it all working out of the box is pretty neat.

Thinking of use cases though, to get the fullest benefit you do need that 5m x 5m area and a minder to make sure you don't trip over the cable - so apart from the geekiest of gamer geeks I'm still not sure it's going to be a mass-market entertainment system, particularly at that price. And if you just want a sit-in-a-chair VR experience the other solutions are a lot cheaper.

Where I am excited though is in our core area of training. People spend a lot of money on CAVE-type 3D set-ups, but with a £700+ Vive you can have a fully portable spatial trainer. Just think of projects we've done, such as this nursing trainer:

With the HTC Vive a student could actually walk around the bed, place and adjust equipment, do basic medical inspections, etc. There would still be challenges with reading patient notes and talking to the patient (a real need for voice recognition), but it's all doable. We could probably even make it multi-user.

So, great to have tried the Vive out. A pity it's not another step beyond the current display quality, but the peripheral integration is really neat.

17 March 2016

Cardboard Carding Mill

By: Sean Vieira

Arriving a month into my employment, the DadenU day on the 29th of January came with uncertainty over what I was expected to produce on such a free-form working day. What should I do? What could I do? I had spent the week prior thinking up ideas to pursue, only to throw them away because their scope extended beyond the one-day deadline we had. Some discussion later, one suggestion stuck: “Why don’t you see if you can create a Google Cardboard version of our Carding Mill Valley scene?”.

This suggestion was relevant on both a personal and a professional level, so I decided to follow it up. Having an interest in virtual reality (VR) technology and having never developed for Cardboard before, I considered this a good opportunity to get hands-on with it.

SimpleCardboard.PNG

Getting out my phone and our Cardboard headset, I jumped onto the internet to read up on this particular brand of VR. Google Cardboard is an ingenious, low-end solution that allows anyone with an Android mobile phone to experience virtual reality. Google has adopted an open source stance on the viewer, meaning anyone can develop their own Cardboard-compatible headset (leading to a nice variety of available viewers). There are official viewers that are literally a cardboard box with some velcro (not forgetting the all-important lenses) that sell for as little as £10, meaning this is definitely a product for everyone.

The idea behind Cardboard is that you strap your phone into the viewer, rotated to landscape, and start up a VR app. The display is split into two halves, a left and a right, each lining up with one of the lenses in the viewer. This is what provides the ‘virtual reality’ illusion when the user looks into the headset.

My first step was to get an understanding of how the Google Cardboard SDK works. Fortunately, it comes with a Unity-integrated SDK, which gave me an easier way of moving our Carding Mill Valley scene (itself a scene within a Unity project) into the required format. So I downloaded the SDK and fired up the demo project provided.

Consisting of a cube and a menu on the floor, the demo doesn't look particularly impressive, but it was interesting to experience it first-hand and get a feel for the design. On inspection of the project, all of the important parts were bundled together in Unity prefabs, meaning it would be very easy to get a project up and running in a VR-enabled form.

A prefab named ‘CardboardMain’ does most of the work. Within it are two GameObjects, one named ‘Head’ and another named ‘Stereo Render’. Unsurprisingly, the ‘Head’ object acts as the player character's head, and contains the script that applies the device's rotation and position to the transform of the Unity GameObject, allowing the user to influence where the camera looks just by moving their head. The object contains the Main Camera, which is a standard Unity camera with a couple of scripts attached - one which controls whether we render in stereo or mono, and one that provides spatial enhancements to the audio listener. This camera is used to render the scene normally (i.e. in mono) if VR mode is disabled. It is the parent to two child cameras, a left and a right, which are used to render the scene in stereo when VR mode is enabled.
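
Stripped of the SDK's sensor fusion and drift correction, the core idea of the ‘Head’ script can be sketched in a few lines of Unity C#. This is my own illustration (the class name and the details are mine, not the SDK's actual code):

```csharp
using UnityEngine;

// Minimal sketch (my own naming, not the SDK's code) of the core idea behind
// the 'Head' object: read the device's orientation each frame and apply it to
// the transform so the camera looks where the user's head points. The real
// SDK also does sensor fusion and drift correction, which is omitted here.
public class SimpleHeadTracker : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;  // gyroscope input is off by default on mobile
    }

    void Update()
    {
        // Convert the right-handed gyro attitude into Unity's left-handed,
        // y-up coordinate frame before applying it to this GameObject.
        Quaternion q = Input.gyro.attitude;
        transform.localRotation =
            Quaternion.Euler(90f, 0f, 0f) * new Quaternion(q.x, q.y, -q.z, -q.w);
    }
}
```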

These cameras represent the left and right eyes of the user, and are offset ever so slightly from the main camera to provide the stereo separation necessary for the virtual reality trick to work. Each eye carries a script that alters that eye camera's projection and feeds the result into the stereo render handled by the controller script on the Main Camera.
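
Conceptually the eye setup amounts to something like this (again an illustration of my own, assuming a typical ~64mm interpupillary distance rather than whatever value the SDK actually uses):

```csharp
using UnityEngine;

// Illustration only: offset the two child eye cameras either side of the
// main camera by half an assumed interpupillary distance (IPD).
public class EyeSeparation : MonoBehaviour
{
    public Camera leftEye;                         // assigned in the Inspector
    public Camera rightEye;
    public float interpupillaryDistance = 0.064f;  // ~64mm, a typical adult IPD

    void Start()
    {
        float half = interpupillaryDistance * 0.5f;
        leftEye.transform.localPosition  = new Vector3(-half, 0f, 0f);
        rightEye.transform.localPosition = new Vector3( half, 0f, 0f);
    }
}
```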

BoxTest.PNG

CardboardMain also contains the ‘Stereo Render’ object, which is where the output from the script attached to the Main Camera goes. This object contains two cameras: a pre-render camera, which provides a solid black background, and a post-render camera, which renders in front of it. The post-render camera is an orthographic camera that displays the stereo output from the two eye cameras side by side; combined with the lenses in the physical viewer, this creates the VR effect.
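
Ignoring the lens-distortion pass, the side-by-side output boils down to giving each eye camera half of the landscape screen, something like this sketch of mine:

```csharp
using UnityEngine;

// Greatly simplified sketch of the side-by-side stereo output: each eye
// camera is given half of the (landscape) screen. The real SDK renders via
// textures and applies lens-distortion correction, which is omitted here.
public class SideBySideViewports : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;

    void Start()
    {
        // Camera.rect is in normalised screen coordinates: (x, y, width, height).
        leftEye.rect  = new Rect(0f,   0f, 0.5f, 1f);  // left half of the screen
        rightEye.rect = new Rect(0.5f, 0f, 0.5f, 1f);  // right half of the screen
    }
}
```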

Migrating this to the Carding Mill scene wasn't too difficult. After importing the asset package, the Main Camera in the scene was replaced with the CardboardMain prefab, and it was as easy as that to get it rendering in stereo. Testing on our development device confirmed that the switch was mostly successful, but the device was really struggling to handle the size of the landscape and the foliage, and the frame rate was suffering. The foliage was duly removed from the terrain, the landscape detail reduced in distant areas, and collision barriers erected so that the user couldn't wander into the hideous, barren lands. The frame rate was much more palatable when run on the device. Some tinkering in the future might find a nicer balance between efficiency and quality, but for now this would do.
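
For the record, the cuts were made in the Unity editor, but the same kind of terrain settings can be dialled down from a script; the values below are illustrative guesses rather than the ones we actually used:

```csharp
using UnityEngine;

// Illustrative only: dial down the active terrain's detail settings to claw
// back frame rate on a mobile device. The numbers are placeholder guesses.
public class TerrainDetuner : MonoBehaviour
{
    void Start()
    {
        Terrain terrain = Terrain.activeTerrain;
        terrain.detailObjectDensity = 0f;    // strip grass and other detail meshes
        terrain.treeDistance = 0f;           // stop drawing trees entirely
        terrain.heightmapPixelError = 20f;   // coarser terrain geometry in the distance
        terrain.basemapDistance = 200f;      // switch to the low-res basemap sooner
    }
}
```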

In the demo project provided by Google the user is stationary, whereas in our scene the user would be moving. How would I initiate movement for the player, considering that there is only one form of input when using our Google Cardboard viewer? Initially the idea was to let the user hold down the button (which presses the phone's screen) to move forwards. Unfortunately this didn't work out, as after the initial contact the held touch was never continuously registered by the device.

Until a better method was designed, I figured I'd make button presses toggle movement on and off as a stop-gap solution, so that I had a slightly more impressive demo to show off once the day was over. The direction of movement was determined by where the user was looking. This worked fairly well and wasn't too disorienting in use, though it could get annoying if you wanted to turn around to look at a view of the valley but forgot to press the button to stop.
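
The stop-gap control amounted to something along these lines (a simplified reconstruction rather than the exact code): the Cardboard button physically taps the screen, which Unity reports as mouse button 0, and the player simply walks along the head's forward direction while movement is toggled on.

```csharp
using UnityEngine;

// Simplified sketch of the stop-gap "toggle to walk" control described above.
// Attach to the player object that parents the Cardboard head/camera.
[RequireComponent(typeof(CharacterController))]
public class GazeToggleWalker : MonoBehaviour
{
    public Transform head;       // the Cardboard 'Head' transform, assigned in the Inspector
    public float speed = 1.5f;   // walking speed in metres per second

    private CharacterController controller;
    private bool walking;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Each press of the viewer's button (a screen tap, surfaced by Unity
        // as mouse button 0) toggles walking on or off.
        if (Input.GetMouseButtonDown(0))
            walking = !walking;

        if (!walking) return;

        // Walk in the direction the user is looking, flattened onto the ground
        // plane so that looking up or down doesn't launch or bury the player.
        Vector3 direction = head.forward;
        direction.y = 0f;
        direction.Normalize();
        controller.SimpleMove(direction * speed);  // SimpleMove also applies gravity
    }
}
```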

BasicValley.PNG

By the end of the DadenU day, I had successfully converted our Carding Mill Valley scene into a walkable virtual reality landscape for Google Cardboard. Most of this is down to the ease of use of the Unity SDK, which is a huge bonus for the prospects of Google Cardboard, now and in the future. This venture was so successful that we have decided to pursue it further. Let's hope this road will continue to be both interesting and fruitful!

Note: After Sean's work on this we decided to push ahead with it as an early release of FieldscapesVR. Sean has been doing a lot more work on the navigation aspects and we've further enhanced the terrain; the application should be on the Play Store by the end of March, and the iOS version on the App Store by the end of April.

1 March 2016

Daden U Day: The Swift Programming Language

By: Nash MBaya

swift.png

For our second Daden U Day I decided to look at the relatively new programming language Swift, developed by Apple. Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. Although Swift is meant to be a general-purpose language, it is mainly associated with iOS app development. Apple touts it as a ‘powerful and intuitive programming language for iOS, OS X, tvOS and watchOS’.

As a way of getting to know Swift I decided to write a small iOS application using one of Apple's tutorials for Swift. I am no stranger to iOS development or Xcode, the integrated development environment (IDE) used to develop software for Apple products. I have previously made attempts to develop iOS apps using Objective-C and Xcode. Admittedly these attempts have all failed, as I have found Objective-C a very different programming language to C#, the language I am adept in and my programming language of choice.

Before I began the DadenU project my aim was to discover whether Swift was a programming language I could learn easily and pick up quickly, so that I could switch between C# and Swift without too much effort, just as I often do with C# and Java. After spending a couple of hours writing Swift code the initial signs were promising. Although the syntax of C# and Swift is not as closely matched as I would have hoped, the code I was writing based on the example was easy enough to understand.

There were some odd quirks here and there, like how Swift doesn't require a terminator at the end of a statement, whereas in other popular languages statements are terminated by a semicolon.
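
To give a flavour, here is a tiny snippet of my own (made-up names, written against current Swift syntax, not code from Apple's tutorial) - note the complete absence of semicolons:

```swift
// Illustrative Swift only (not from Apple's tutorial).
// None of these statements needs a terminating semicolon.
let appName = "MealTracker"        // 'let' declares a constant
var launchCount = 0                // 'var' declares a variable

launchCount += 1

func welcome(to name: String, count: Int) -> String {
    // String interpolation, rather than C#-style concatenation or String.Format
    return "Welcome to \(name), launch number \(count)"
}

print(welcome(to: appName, count: launchCount))
```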

In conclusion, I would say that my experience using Swift was quite pleasant, though I only scratched the surface. Granted, if I were to continue developing iOS apps using Swift I would be learning an entirely new language. That being said, it wouldn't be an uphill struggle but rather a gentle incline up the hill towards iOS programming prowess.