30 August 2017

Gartner Hype Cycle 2017 - A Critique

Every year the Gartner Group (the well known tech analysts) publish their "hype cycle" - showing where emerging technologies are on the journey from first conception to productive tool. We've watched Virtual Worlds (and then Virtual Reality) work their way along the curve over the last decade, but this year's chart has a number of interesting features which we thought were worth discussing. We focus here only on the areas of keenest interest to us at Daden, namely AI/chatbots and 3D immersive environments.

First off, it's interesting to see that they have VR now pulling well out of the Trough of Disillusionment, with only 2-5 years to mainstream adoption. This seems reasonable, although a more detailed analysis (which we may do later) would probably put VR in different sectors at different points on the cycle - so whilst this position seems OK for gaming and training, I'd be tempted to put it still up on the Peak of Inflated Expectations when it comes to mass media entertainment or personal communications.

As a side-line it's interesting to look at these two Gartner charts from 2012 and 2013. Spot the difference?

2012 Hype Cycle

Clue - look at the Trough of Disillusionment....

2013 Hype Cycle

In 2012 Virtual Worlds (Second Life and its ilk) were at the bottom of the Trough; in 2013 (as the Oculus Rift hype started) they were replaced by Virtual Reality! Virtual Worlds (and SL) are still around - although often rechristened Social Virtual Realities - and we'd guess they are still lingering in the Trough, as their potential is still a long way from being realised.

One tech that was on the 2016 chart but is missing from 2017 is Virtual Personal Assistants. Now if we take this to mean Siri, Alexa and co that seems reasonable - I have Siri in my pocket and Alexa on my desk as I write. But they are a far cry from the virtual PAs that we were promised in the mobile phone videos of the 90s and 00s. In fact if we compare the 2012/2013 and 2017 charts we can see that "Virtual Assistants" 4-5 years ago were just over the Peak, but in 2017 "Virtual Assistants" is actually just approaching the Peak! So Gartner appear to have split the old Virtual Assistant into a simpler, now mainstream, Virtual Personal Assistant, and a new Virtual Assistant representing the still hard-to-do elements of the 1990s vision.

Back to 2017 - the new entrants of interest on the Hype Cycle since 2016 are Artificial General Intelligence and Deep Learning. Deep Learning is really just a development of Machine Learning, and it's interesting that they have them both clustered together at the Peak. In fact I'd have thought that Machine Learning is probably approaching the Plateau, as it appears to crop up everywhere and with good results, and Deep Learning is not far behind. Interestingly, neither appeared on the 2012/13 charts!

Artificial General Intelligence is far more interesting. It's been mooted for years - decades even - and progress is certainly slow. We'll be writing far more about it in coming months, but it is a lot closer to what most people call "AI" than the things currently being touted as "AI" (which are typically just machine learning algorithms). As its name suggests, it's an AI which can apply general intelligence (aka common sense) to a wide variety of problems and situations. Gartner have it about right on the chart, as it's still a back-room focus that hasn't yet hit the mainstream media in order to be hyped - and it still seems decades away from achievement.

There are some other technologies of interest on that initial slope too.

It's interesting that Speech Recognition has now gone off the chart as a mainstream technology - whilst it may not be 100% yet, it's certainly come on in leaps and bounds over the last 4-5 years. What is on the initial slope, though, is Conversational User Interfaces (aka chatbots) - divorcing what was seen as the technical challenge of speech recognition from the softer but harder challenge of creating a Turing-capable chatbot interface. I'd have thought that the Peak for CUI was probably some years ago (indeed Gartner had Natural Language Query Answering near the Peak in 2013) and that we've spent the last few years in the Trough. But intent-based CUIs of the sort we're seeing with Alexa and Messenger are now coming of age, and even free-text CUIs driven by technology such as ChatScript and even AIML are beginning to reach Turing-capable levels (see our recent research on a covert Turing Test where we achieved a 100% pass rate). So I'd put CUI as beginning to climb the slope up out of the Trough.

By the way, we got excited when we saw "Digital Twin" on the chart, as it's a subject that we have a keen interest and some involvement in. But reading their definition they are talking about Internet of Things "digital twins" - where a piece of physical equipment has a virtual simulation of itself which can be used to predict faults and ease maintenance and fault finding. Our interest is more in digital twins of real people - cyber-twins as they have been called - perhaps we'll see those on later charts!

The final technology of interest is Brain-Computer Interfaces. Putting them only just behind Conversational Interfaces reinforces the point that CUI should be a lot farther through the cycle! Useful brain interfaces (I'm not talking NeuroSky-type "brain-wave" headsets here - Gartner may differ!) still seem to be decades away, so the technology sits about right on the chart. In fact it's moved forward a bit since 2013, but is still at 10+ years to mainstream - can't argue with that.

So all this is pretty subjective and personal, but despite its flaws the hype cycle is a useful model. As mentioned, though, the same technology (e.g. VR) may have different cycles in different industries, and we also feel that each point on the curve is a bit of a fractal - composed of smaller versions of the cycle, as each step forward gets heralded as a great leap but then falls back as people actually get their hands on it!

We look forward to reviewing the 2018 chart!






25 August 2017

Project Sansar - First Impressions


I've been signed up to the Sansar Closed Beta for months, but other projects meant I never had the time to go and play. Now it's in Open Beta (and so we can talk about it) I thought it was about time I checked it out.

What Sansar doesn't offer (in comparison to SL) is a single shared world - this is far more a "build your space and let people visit" model. It also doesn't offer in-world building (just placement of imported or bought objects), or in-world scripting (scripting is in C#, and scripts need to be re-imported every time you make a change, so it looks like a very long development cycle!). What it does offer (as did SL) is multi-user (well, at least multi-avatar) support, and VR support out of the box. Avatar choices are limited but look OK, with some nice facial customisations, but only about 8 outfits (and no colour options!).

VR and using the teleport movement - see light beam

The navigation model is horrendous (in my view) - the camera usually gives a sideways view until you've been walking for ages. You can't use the cursor keys to rotate your view on the spot - it's very much built for gamers with keyboard in one hand, first-person view, and mouse in the other. I couldn't find a run or fly control, so walking around took ages. There is a nearby-teleport option where you can point to a place and jump there - but only over a very short range.



The actual spaces looked pretty good - but then they are just imports of 3D models, so there's no reason they shouldn't. Interactivity, though, was non-existent in the ones I saw - probably due to the complexity of the scripting. Almost all of them also seemed very dark - you are given lights for your scenes, but it seems many people aren't using them well.

The one location that was stunning (especially in VR) was the Apollo Museum - with a really nicely done earth-moon trajectory and little CM/LEM models all along it and audio to show you what was going on - a superb VR demo.


Having done a quick tour I decided to try building, so chose one of the ~8 base locations. Rather than buy from the store I decided to upload some FBX models - which was pretty smooth, except that it appears they only get textured if the textures are PNGs - and even then the ones I tried ended up all candy-striped!


One of the biggest issues for me, though, was that you have no avatar when building - so you lose all sense of scale. No issue if you're a 3D artist, but as an SL renegade I can never get on with building without myself as a reference. Once you've done the build you save and then "build" - which can take a minute or so - before you can play (another minute or so), so again a slow iterative build process (and the "professional" builds were taking ~5 mins each to download).

Finally I wanted to try scripting. Before I started this morning (as I tweeted) I thought I might be able to get a Sansar script talking to the bot I was working on, or even to one of our PIVOTE APIs. No chance! Sansar scripts are pure C#. It seems that at the moment you must edit outside Sansar (in Notepad or Visual Studio), then import, attach, build and run - it would take ages to do anything. The C# calls to interact with the environment also look non-trivial (subscribing to changes etc.), and only a small subset of Mono/C# functions is supported - not the range that Unity has - so no web calls! There's no way that people will have an easy transition from LSL to Sansar C# - it's a whole extra level up.
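To give a flavour of the workflow, here is a minimal sketch of the kind of script involved, written from memory of the Sansar script API - the class and method names here (SceneObjectScript, Init, ScenePrivate.Chat) should be treated as assumptions rather than checked documentation:

// Minimal Sansar-style script sketch - API names are recalled, not verified
using Sansar.Script;
using Sansar.Simulation;

public class HelloVisitors : SceneObjectScript   // assumed base class for scene scripts
{
    // Init() is (we believe) the entry point Sansar calls when the script starts
    public override void Init()
    {
        // Assumed chat call: broadcast a message to everyone in the scene
        ScenePrivate.Chat.MessageAllUsers("Hello from a Sansar script!");
    }
}

Even for something this trivial the loop is still: edit the file externally, import it, attach it to an object, build the scene and re-enter it to see the result.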



So overall - massively underwhelmed. High Fidelity certainly looks far more interesting from a technical standpoint and is closer to an SL#2 - but even that doesn't have the single-world thing. AltSpaceVR (if you added object import/placement) is far closer to what I thought Sansar would be - and its WebGL enclosure idea was/is a superb way to create interactive 3D/VR content with minimal effort. The whole Sansar experience felt like wading through treacle, whether exploring or building - although in first person in VR at least the exploring was quick.

What it did make me appreciate is what we've done with Fieldscapes. Using that has never felt slow. It's very quick to lay out a scene, add interactivity, test and explore - things just flow. And if people want to spend the time, there is no reason why you shouldn't have the same level of eye-candy as the Sansar spaces. But there is just no way that I can see Sansar being a training/education tool - you'd use native Unity, or Fieldscapes or something similar, for more power or greater ease of use, rather than Sansar, which appears to cripple both. And the spaces don't have the immediacy of the AltSpaceVR ones, or the ease of building of the SL ones, so I don't see more casual users taking to it in great numbers. Perhaps if I had loads of time, was a coder/3D artist and wanted to build some sort of fan-space it might be a place to do it, but somehow I doubt even that.


14 August 2017

James - Work Experience

James is a Year 10 student who worked at Daden for one week as part of his course. The following is an account of his time at Daden, written by James.

After a team meeting on Monday, I set to work getting to grips with Fieldscapes, using the tutorials to create a quiz that takes the user through the world answering various questions, which turned out to be useful later on (my geography knowledge was tested in a quiz mid-week, so knowing that Ulaanbaatar was the capital of Mongolia from my own project was very helpful!)

I was then set the task of importing files from Paint 3D into Fieldscapes, which prompted research into the numerous 3D file types available and their uses, as well as how to model a 3D object.

Some default models in Paint3D in 3D mode


Finally, I was able to export Paint 3D files as FBX into Unity, and then create an asset bundle to be imported into Fieldscapes; we encountered problems with offsets and colours along the way, which also proved to be a great learning experience. The asset bundle I made featured artistic marvels such as a coffee cup with 3D text and a rainbow.

Paint3D models imported into Fieldscapes


In addition, I was present at a meeting that showed me the many uses of virtual reality and 3D, as well as how business between two companies is carried out.

Then on Wednesday I made an inventory of all the computers in the office, prompting discussion about aspect ratios, computer specs and anti-virus software; it also meant using the computers' BIOSes and learning about the financial side of things through discussions about the cost of the computers.

Next, on Thursday, I was involved in testing, which gave me insight into how it is carried out, along with the gratifying feeling of discovering a funny bug - in this case props being placed in the sky and avatars floating into the air, seemingly ascending to heaven.

I then participated in the testing of a virtual mentor, which again showed the need for testing and the process behind it, as well as both the positives and negatives of using VR and 3D in the classroom. Next I tried programming a chatbot, adding an input box to it, which greatly improved my JavaScript and allowed me to practise HTML and CSS in a practical environment rather than just a classroom. Throughout the week I also had a go at C# programming, which I learned from scratch.

Finally, on Friday, I continued programming the chatbot, improving and optimising the existing code. I used JavaScript to present contacts, as well as CSS to improve the general appearance of the bot, adding an input area, an enter button and a scroll bar for when the chat overflows.


Delving into SpatialOS

SpatialOS is a cloud computing platform developed by the UK-based Improbable that can be used for running large-scale simulated worlds, such as a massively multiplayer game (MMO), a virtual city, or a model of the brain. I first heard of it in early 2016 and it has been on my radar since, so I decided to look into it on the most recent DadenU day by working through some of the tutorials to see what it was all about.

There are a few core concepts to SpatialOS that are essential to understanding how it works. The two main concepts are Entities and Workers.

Each object that is simulated in a SpatialOS world is represented by what is called an Entity. This could be a tree, a rock, a nerve cell, or a pirate ship. Each of these entities can be made up of components, which define persistent properties, events, and commands. An example would be a player character entity that defines a "health" component - this would have a value property, an event for what happens when the value reaches 0, and perhaps some commands that can modify the property in specific ways.

My ship in the watery world

All of the processing performed in the simulated world, such as visualising the world or modifying component properties, is done by Workers. These are services that can be scaled by SpatialOS depending on resource demands. There are both server-side workers, handled by SpatialOS, and client-side workers - the applications that users interact with.

You are able to develop, debug, and test applications developed on SpatialOS on your local machine, allowing small-scale messing around to be done fairly painlessly. My plan was to work through the tutorials in the documentation so that I could get a feel for how to use the technology. The first lesson in the Pirates Tutorial series focuses on setting up the machine to run a local instance of SpatialOS and the tutorial project itself.

A command line package manager called Chocolatey is used to install the SpatialOS command line interface (CLI), and the install location is stored in an environment variable. The source code for the tutorial includes a Unity Worker and a Unity Client. Included in the project is a scene with an empty ocean environment. All other objects, such as the islands and the fish, are generated by a worker when the project is launched, and the player ship is generated by a client when it connects. The CLI was used to build the worker and launch SpatialOS locally. With that the 'server-side' of the game was running, and all that was left was for a client to connect to it.

There are several ways that a client can be run, but the most useful for local development using Unity is to run through the editor interface. Pressing play will launch a local client that allows you to sail around an ocean as a ship. 


Observing pirate ships and fish using the Inspector tool

SpatialOS has an interesting web-based tool called the Inspector that lets you see all of the entities and workers in the running simulation. It displays the areas of the game world that each individual worker and client is currently processing. You even have the ability to remove a worker from the simulation; however, SpatialOS will start a new worker instance if it feels that it needs one - and as only one is required in the tutorial, a new one was launched whenever I deleted the existing worker.

All of the entity types listed can be colour-coded so that they are easier to follow in the 2D top-down view. There is a 3D option but I couldn't seem to get it to work in my browser. All of the components that make up an entity can be viewed as well, which leads me to believe that the Inspector could be a fairly useful monitoring tool during development. The Inspector is available for cloud deployments as well as local ones.

Other lessons in the tutorial take you through the basics step by step. The world was very empty to begin with and was in dire need of some more entities, so the second lesson takes you through the process of creating one from scratch. This is a two-step process - the first step is to write an entity template, and the second is to use the template to spawn the entity within the game world.

Building the pirate ship entity template

The tutorial project uses a factory method pattern to generate the template for each entity type, so to create our AI pirate ships all we needed to do was write our own factory method for them. The entity object is generated using the builder pattern, and there are some components that are required in every entity - a position component and a metadata component. The builder also requires that you set the persistence of the entity, and the permissions on the access control list (ACL), before any additional components are added.
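As a concrete illustration, such a factory method looks something like the sketch below. It is based on my recollection of the 2017-era SpatialOS Unity SDK, so the exact builder calls and requirement-set names are assumptions rather than the definitive API:

// Sketch of an entity template factory for the AI pirate ship
// Builder method and requirement-set names are approximate - treat them as assumptions
public static class PirateShipTemplateFactory
{
    public static Entity CreatePirateShipTemplate(Coordinates spawnPosition)
    {
        return EntityBuilder.Begin()
            // every entity needs a position and a metadata component...
            .AddPositionComponent(spawnPosition, CommonRequirementSets.PhysicsOnly)
            .AddMetadataComponent("PirateShip")
            // ...and persistence and the read ACL must be set before anything else is added
            .SetPersistence(true)
            .SetReadAcl(CommonRequirementSets.PhysicsOrVisual)
            // game-specific components come last; the server-side worker gets write access
            .AddComponent(new ShipControls.Data(0f, 0f), CommonRequirementSets.PhysicsOnly)
            .Build();
    }
}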

Spawning of the entities in the tutorial occurs at two distinct stages - at runtime, when a player connects, and at the beginning, when the world is created from what is known as a snapshot. A snapshot is a representation of the state of the game world at a specific point in time, and when you launch the project to SpatialOS you can define a snapshot to load from.

Every game world requires an initial load state and this is what a snapshot provides. In the case of the tutorial, the player ship template is used to spawn a ship when a user connects, and the pirate ship template is used to spawn ships in the snapshot we defined as default. To define a snapshot we created a custom Unity menu item to populate a dictionary with a list of all of the entities we want to spawn, including a whole bunch of our new pirate ships. Once the worker is rebuilt the client will now be able to see a whole host of static pirate ships within the ocean environment.
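Sketched out, the menu item looked roughly like this - the MenuItem attribute is standard Unity editor scripting, while the snapshot-saving helper at the end is hypothetical shorthand for the snapshot-writing code the tutorial supplies:

// Editor-only sketch: build a dictionary of entity templates and save it as the default snapshot
using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

public static class SnapshotMenu
{
    [MenuItem("Improbable/Generate Default Snapshot")]
    public static void GenerateDefaultSnapshot()
    {
        var snapshotEntities = new Dictionary<EntityId, Entity>();
        long nextId = 1;

        // a whole bunch of static pirate ships scattered around the ocean
        for (var i = 0; i < 50; i++)
        {
            var position = new Coordinates(Random.Range(-500f, 500f), 0, Random.Range(-500f, 500f));
            snapshotEntities.Add(new EntityId(nextId++),
                PirateShipTemplateFactory.CreatePirateShipTemplate(position));
        }

        SaveSnapshot(snapshotEntities);
    }

    private static void SaveSnapshot(Dictionary<EntityId, Entity> entities)
    {
        // hypothetical helper - the tutorial provides the code that writes the .snapshot file
    }
}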

Generating a snapshot that includes pirate ships

Getting the pirate ships to move in the environment was next. The tutorial focused on the manipulation of a component's properties by creating a script that will write values to the ShipControls component of the pirate ship entity.

Access restrictions defined when attaching a component to an entity template determine what kind of worker can read from or write to the component. We can use a custom attribute to say which worker type we want a script to be available on - the pirate ship is an NPC and we only want it to be controlled on the server side, so we lock the script, using the attribute, to only appear on UnityWorker instances.

Only one worker, or client, can have write access to a component at any given time, though more than one worker can read from the component. We add a writer for the component to the script we have created and give it the [Require] attribute - this means that the script will only be enabled if the current worker has write access to that component.

To write to a component you use a send method that takes an update structure, which should contain any updates to the component values that need to happen - in the case of the pirate ship we want to update the speed and the steering values of the ShipControls component to get it to move. The worker was rebuilt again, the local client relaunched, and we had moving pirate ships! There was no decision making so they were rather aimless, but at least they were moving now.
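Put together, the server-side steering script looked something like this sketch - the [WorkerType]/[Require] attributes and the Send(Update) pattern are recalled from the tutorial-era Unity SDK, so treat the names as assumptions:

// Server-side movement sketch for the NPC pirate ship
using UnityEngine;

[WorkerType(WorkerPlatform.UnityWorker)]   // only instantiated on the server-side worker
public class PirateShipMovement : MonoBehaviour
{
    // only enabled when this worker has write access to ShipControls
    [Require] private ShipControls.Writer shipControlsWriter;

    private void Update()
    {
        // no decision making - just push a constant speed and a gentle turn
        // into the component so every worker and client sees the ship move
        shipControlsWriter.Send(new ShipControls.Update()
            .SetTargetSpeed(0.5f)
            .SetTargetSteering(10f));
    }
}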

Event data flow

Another important aspect of components is the ability to fire off events. These are transient and are usually used for one-off or infrequent changes, as there is less bandwidth overhead than with modifying properties, which are persistent. To learn about events we were tasked with making locally spawned cannonballs visible on other clients.



Adding events to a component first requires knowledge of how a component is defined in the first place. SpatialOS uses a schema to generate code that workers can then use to read from and write to components. Schemas are written in what is called schemalang, SpatialOS' own proprietary language. An event is defined in this language using the structure "event <type> <name>". For example, we defined an event that will be fired when a cannon is fired on the left of the ship like so: event FireLeft fire_left.

Using our new FireLeft and FireRight events instead of locally firing cannons

Events are defined within the component, and FireLeft is defined as an empty type outwith the component definition in the following fashion: type FireLeft {}. The custom types are capable of storing data, but that wasn't required for the purposes of the tutorial.

Once the schema for the component has been written, the code needs to be generated so that we can access the component from within our Unity project. The CLI can generate code in multiple languages (currently C#, C++ and Java). To be able to fire events we need access to the component writer, so that when we detect that the user has pressed the "fire cannonballs" key we can fire an event using the component update structure, just as we did when moving the pirate ships.
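The firing side then reuses the same update mechanism - again a sketch, with the generated names (ShipControls.Writer, AddFireLeft) assumed from the tutorial text rather than verified:

// Client-side sketch: turn a key press into a FireLeft event
using UnityEngine;

[WorkerType(WorkerPlatform.UnityClient)]   // player input only exists on the client
public class CannonFirer : MonoBehaviour
{
    [Require] private ShipControls.Writer shipControlsWriter;

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Q))   // assumed "fire left cannons" key
        {
            // events ride along in the same update structure as property changes
            shipControlsWriter.Send(new ShipControls.Update().AddFireLeft(new FireLeft()));
        }
    }
}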

The script that contains callbacks that fire the cannons when an event is received

Firing an event is only half of the story, as nothing will happen if nothing is reacting to the event being fired. In the case of Unity it's as easy as creating a new MonoBehaviour script and giving it a component reader, as well as a couple of methods that contain the code we want to run when we receive an event. These methods must be registered as callbacks to the event through the component reader in the MonoBehaviour script's OnEnable method, and must be removed as callbacks in the OnDisable method. This is mostly to prevent unexpected behaviour and to stop the script from receiving event information when it is disabled.
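And the receiving side is the mirror image - a reader plus callbacks registered in OnEnable and removed in OnDisable. The Triggered.Add/Remove registration API here is my recollection of the SDK and should be read as an assumption:

// Sketch of the script that spawns visible cannonballs when the events arrive
using UnityEngine;

public class CannonballVisualizer : MonoBehaviour
{
    [Require] private ShipControls.Reader shipControlsReader;

    public GameObject cannonballPrefab;   // assigned in the Unity inspector

    private void OnEnable()
    {
        // register callbacks so we react whenever any worker fires the events
        shipControlsReader.FireLeftTriggered.Add(OnFireLeft);
        shipControlsReader.FireRightTriggered.Add(OnFireRight);
    }

    private void OnDisable()
    {
        // deregister so we stop receiving event information while disabled
        shipControlsReader.FireLeftTriggered.Remove(OnFireLeft);
        shipControlsReader.FireRightTriggered.Remove(OnFireRight);
    }

    private void OnFireLeft(FireLeft e) { SpawnCannonball(-transform.right); }

    private void OnFireRight(FireRight e) { SpawnCannonball(transform.right); }

    private void SpawnCannonball(Vector3 direction)
    {
        Instantiate(cannonballPrefab, transform.position + direction, Quaternion.LookRotation(direction));
    }
}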

Next was a short tutorial that discussed how components are accessed by workers and clients. One of the key terms to understand is "checked out". Workers don't know about the entire simulated environment in SpatialOS; instead they only know about an allocated subset of the environment, called a checkout area. They have read access to, and can receive updates from, any entity within this designated area. I mentioned earlier that more than one worker can have read access to a component, and this is because the checkout area of one worker can overlap with that of another, meaning that an entity may be within the area of multiple workers. This is also why only one worker is allowed write access to a component at any given time.

The ShipControls component's full schema

The final tutorial that I managed to complete before the day ended walked me through the basics of creating a new component from scratch, in this case a "health" component that could be applied to ships so that cannonball hits would affect them on contact.

As mentioned before, the component is defined in schemalang. In the schema file you define the namespace of the component as well as the component itself. Each component must have a unique ID within the project, and this too is defined in the schema file. The properties and events of the component are all defined here (e.g. the Health component has a "current_health" integer property). You can also define commands here, but I believe those are covered in the final tutorial.

After defining the component the code has to be generated once again so that the new component can be accessed within the project. Adding the component to an entity is as easy as modifying the template for whichever entity you wish to add it to. Reducing the health of a ship in the tutorial was as simple as updating the current health of the health component whenever a collision was detected between the ship and a cannonball - using a mixture of Unity's OnTriggerEnter method and a writer to the health component I defined.
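The damage logic then looks roughly like the sketch below - I'm assuming the generated component exposes its current value via Data and a SetCurrentHealth call on its update, which is my reading of the tutorial rather than a verified API:

// Server-side sketch: knock health off a ship when a cannonball hits it
using UnityEngine;

[WorkerType(WorkerPlatform.UnityWorker)]
public class TakeDamageOnCollision : MonoBehaviour
{
    [Require] private Health.Writer healthWriter;

    private void OnTriggerEnter(Collider other)
    {
        // only react to cannonballs, identified here by a Unity tag (an assumption)
        if (!other.CompareTag("Cannonball")) return;

        var newHealth = healthWriter.Data.currentHealth - 250;
        healthWriter.Send(new Health.Update().SetCurrentHealth(newHealth));
    }
}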

Writing to the new Health component

In conclusion, I think that SpatialOS was actually fairly simple to use once it was all set up. I did attempt to launch the project locally but never managed to get it working consistently in the short time I had left. The biggest drawback to the Pirates tutorial is that it didn't give me much of an idea of the main attraction of SpatialOS, which is the ability to have multiple workers running a simulation in tandem; for the entirety of the tutorials only one worker was needed. I'm very curious to see how SpatialOS as a platform develops in the future, as I feel it could have some interesting applications.