14 August 2017

James - Work Experience

James is a year 10 student who worked at Daden for one week as part of his course. The following is an account of his time at Daden, as written by James.

After a team meeting on Monday, I set to work getting to grips with Fieldscapes, using the tutorials to create a quiz that takes the user through the world answering various questions. This turned out to be useful later on (my geography knowledge was tested in a quiz mid-week, so knowing that Ulaanbaatar is the capital of Mongolia from my own project was very helpful!).

I was then set the task of importing files from Paint 3D into Fieldscapes, which prompted research into the numerous 3D file types available, their uses, and how to model a 3D object.

Some default models in Paint3D in 3D mode


Finally, I was able to export Paint 3D files as FBX into Unity, then create an asset bundle to be imported into Fieldscapes. We encountered problems with offsets and colours along the way, but these proved to be great learning experiences too. The asset bundle I made featured artistic marvels such as a coffee cup with 3D text and a rainbow.

Paint3D models imported into Fieldscapes


In addition, I was present at a meeting that showed me the many uses of virtual reality and 3D, as well as how business between two companies is carried out.

Then on Wednesday, I made an inventory of all the computers in the office, which prompted discussions about aspect ratios, computer specs and anti-virus software. I also had to use the computers' BIOSes, and learnt about the financial side of things through discussions about the cost of the computers.

Next, on Thursday, I was involved in testing, which gave me insight into how it is carried out, along with the gratifying feeling of discovering a funny bug - in this case, props being placed in the sky and avatars floating into the air, seemingly ascending to heaven.

I then participated in the testing of a virtual mentor, which again showed the need for and the process of testing, and both the positives and negatives of using VR and 3D in the classroom. Next I tried programming a chatbot, adding an input box to it, which greatly improved my JavaScript and let me practise HTML and CSS in a practical environment, not just a classroom. Throughout the week I also had a go at C# programming, which I learned from scratch.

Finally, on Friday, I continued programming the chatbot, improving and optimising the existing code. I used JavaScript to present contacts, and CSS to improve the appearance of the bot in general, adding an input area, an enter button and a scroll bar for when the chat overflows.


Delving into SpatialOS

SpatialOS is a cloud computing platform, developed by the UK-based company Improbable, that can be used for running large-scale simulated worlds, such as a massively multiplayer online game (MMO), a virtual city, or a model of the brain. It is a technology that I first heard of in early 2016 and it has been on my radar ever since, so I decided to look into it on the most recent DadenU day by working through some of the tutorials to see what it was all about.

There are a few core concepts to SpatialOS that are essential to understanding how it works. The two main concepts are Entities and Workers.

Each object that is simulated in a SpatialOS world is represented by what is called an Entity. This could be a tree, a rock, a nerve cell, or a pirate ship. Each of these entities can be made up of components, which define certain persistent properties, events, and commands. An example would be a player character entity that defines a "health" component - this would have a value property, an event for what happens when it reaches 0, and perhaps some commands that can modify the property in specific ways.

My ship in the watery world
All of the processing performed in the simulated world, such as visualising the world or modifying component properties, is performed by Workers. These are services that can be scaled by SpatialOS depending on resource demands. There are both server-side workers, managed by SpatialOS, and client-side workers - the application that a user interacts with.

You are able to develop, debug, and test applications developed on SpatialOS on your local machine, allowing for small-scale messing around to be done fairly painlessly. My plan was to work through the tutorials in the documentation so that I could get a feel for how to use the technology. The first lesson in the Pirates Tutorial series focuses on setting up the machine to run a local instance of SpatialOS, along with the tutorial project itself.

A command-line package manager called Chocolatey is used to install the SpatialOS command line interface (CLI), and the install location is stored in an environment variable. The source code for the tutorial includes a Unity Worker and a Unity Client. Included in the project is a scene with an empty ocean environment. All other objects, such as the islands and the fish, are generated by a worker when the project is launched, and the player ship is generated by a client when it connects. The CLI was used to build the worker and launch SpatialOS locally. With that, the 'server-side' of the game was running and all that was left was for a client to connect to it.

There are several ways that a client can be run, but the most useful for local development using Unity is to run through the editor interface. Pressing play will launch a local client that allows you to sail around an ocean as a ship. 


Observing pirate ships and fish using the Inspector tool
SpatialOS has an interesting web-based tool called the Inspector that lets you see all of the entities and workers in the running simulation. It displays the areas of the game world that each individual worker and client is currently processing. You even have the ability to remove a worker from the simulation; however, SpatialOS will start a new worker instance if it decides it needs one - and as only one is required in the tutorial, a new instance was launched whenever I deleted the existing worker.

All of the entity types listed can be colour coded so that they are easier to follow in the 2D top-down view. There is a 3D option, but I couldn't seem to get it to work in my browser. All of the components that make up an entity can be viewed as well, which leads me to believe that the Inspector could be a fairly useful monitoring tool during development. The Inspector is available on cloud deployments as well as local ones.

Other lessons in the tutorial take you through the basics step by step. The world was very empty to begin with and in dire need of some more entities, so the second lesson takes you through the process of creating one from scratch. This is a two-step process: the first step is to write an entity template, and the second is to use that template to spawn the entity within the game world.

Building the pirate ship entity template
The tutorial project uses a factory method pattern to generate the templates for each entity, so to create our AI pirate ships all we needed to do was write a factory method for them. The entity object is generated using the builder pattern, and some components are required in every entity generated: a position and a metadata component. The pattern also requires that you set the persistence of the entity, and the permissions on its access control list (ACL), before any additional components are added.
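As a sketch of what that factory method looks like - the builder method and type names here follow the tutorial's pattern but are illustrative rather than the exact SpatialOS SDK API:

```csharp
// Illustrative sketch of an entity template factory method - names are
// indicative of the pattern, not guaranteed to match the SDK exactly.
public static class EntityTemplateFactory
{
    public static Entity CreatePirateShipTemplate()
    {
        return EntityBuilder.Begin()
            .AddPositionComponent(Coordinates.ZERO, CommonRequirementSets.PhysicsOnly)
            .AddMetadataComponent(entityType: "PirateShip")
            .SetPersistence(true)                               // survives in snapshots
            .SetReadAcl(CommonRequirementSets.PhysicsOrVisual)  // who may read it
            // Additional components may only be added after persistence and ACLs:
            .AddComponent(new ShipControls.Data(targetSpeed: 0, targetSteering: 0),
                          CommonRequirementSets.PhysicsOnly)
            .Build();
    }
}
```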

Spawning of entities in the tutorial occurs at two distinct stages: at runtime when a player connects, and at the beginning, when the world is created from what is known as a snapshot. A snapshot is a representation of the state of the game world at a specific point in time, and when you launch the project to SpatialOS you can define a snapshot to load from.

Every game world requires an initial load state, and this is what a snapshot provides. In the case of the tutorial, the player ship template is used to spawn a ship when a user connects, and the pirate ship template is used to spawn ships in the snapshot we defined as the default. To define a snapshot we created a custom Unity menu item to populate a dictionary with a list of all of the entities we want to spawn, including a whole bunch of our new pirate ships. Once the worker is rebuilt, the client will now be able to see a whole host of static pirate ships within the ocean environment.
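A sketch of that menu item follows; again, names such as SnapshotWriter are illustrative stand-ins for the tutorial's helpers rather than the exact API:

```csharp
// Illustrative sketch of a custom editor menu item that builds the default
// snapshot - a dictionary of entity templates keyed by entity ID.
public static class SnapshotMenu
{
    [UnityEditor.MenuItem("Improbable/Generate Default Snapshot")]
    private static void GenerateDefaultSnapshot()
    {
        var entities = new System.Collections.Generic.Dictionary<EntityId, Entity>();

        // A scattering of static pirate ships for the initial world state.
        for (var i = 0; i < 50; i++)
        {
            entities.Add(new EntityId(i + 1),
                         EntityTemplateFactory.CreatePirateShipTemplate());
        }

        SnapshotWriter.Save("default.snapshot", entities); // hypothetical helper
    }
}
```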

Generating a snapshot that includes pirate ships
Getting the pirate ships to move in the environment was next. The tutorial focused on the manipulation of a component's properties by creating a script that will write values to the ShipControls component of the pirate ship entity.

Access restrictions defined when attaching a component to an entity template determine what kind of worker can read from or write to the component. We can use a custom attribute to determine which worker type we want the script to be available on - for example, the pirate ship is an NPC, so we only want it to be controlled on the server side, and we therefore lock the script using the attribute so that it only appears on UnityWorker instances.

Only one worker, or client, can have write access to a component at any given time, though more than one worker can read from it. We add a component writer to the script we have created and give it the [Require] attribute - this means that the script will only be enabled if the current worker has write access to the component.

To write to a component you use a send method that takes an update structure, which should contain any updates to the component values that need to happen - in the case of the pirate ship we want to update the speed and steering values of the ShipControls component to get it to move. The worker was rebuilt again, the local client relaunched, and we had moving pirate ships! There was no decision making, so they were rather aimless, but at least they were moving.
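Put together, the server-side movement script looks something like this sketch (the attribute and writer names follow the pattern described above; treat them as indicative rather than the exact SDK API):

```csharp
// Illustrative sketch: an NPC steering script that only runs on server-side
// workers, and only when this worker has write access to ShipControls.
[WorkerType(WorkerPlatform.UnityWorker)]
public class PirateShipMovement : MonoBehaviour
{
    [Require] private ShipControls.Writer shipControlsWriter;

    private void Update()
    {
        // Send an update structure containing the new property values.
        shipControlsWriter.Send(new ShipControls.Update()
            .SetTargetSpeed(1.0f)       // full speed ahead...
            .SetTargetSteering(0.0f));  // ...with no steering (hence the aimlessness)
    }
}
```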

Event data flow
Another important aspect of components is the ability to fire off events. These are transient and are usually used for one-off or infrequent changes, as there is less bandwidth overhead than modifying properties, which are persistent. To learn about events we were tasked with making locally spawned cannonballs visible on other clients.



Adding events to a component first requires knowledge of how a component is defined in the first place. SpatialOS uses a schema to generate code that workers can then use to read from and write to components. Schemas are written in what is called schemalang, SpatialOS' own proprietary language. An event is defined in this language using the structure: event type name. For example, we defined an event that is fired when a cannon is fired on the left of the ship like so: event FireLeft fire_left.

Using our new FireLeft and FireRight events instead
of locally firing cannons
Events are defined within the component, and FireLeft is defined as an empty type outside the component definition in the following fashion: type FireLeft {}. These custom types are capable of storing data, but that wasn't required for the purposes of the tutorial.
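Putting those two pieces together, the relevant parts of the schema look something like the sketch below (the component ID and property fields are illustrative, not taken from the tutorial):

```
// Sketch of the ShipControls schema with the new cannon events added.
package improbable.ship;

type FireLeft {}    // empty custom types - they could carry data if needed
type FireRight {}

component ShipControls {
  id = 1001;                       // illustrative - IDs must be project-unique
  float target_speed = 1;          // persistent properties
  float target_steering = 2;
  event FireLeft fire_left;        // transient event: port cannons fired
  event FireRight fire_right;      // transient event: starboard cannons fired
}
```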

The code needs to be regenerated once the schema for the component has been written, so that we can access the component from within our Unity project. The CLI can generate code in multiple languages (currently C#, C++ and Java). To be able to fire events we need access to the component writer, so that when we detect that the user has pressed the "fire cannonballs" key we can fire an event using the component update structure, just as we did when moving the pirate ships.

The script that contains callbacks that fire the cannons
when an event is received
Firing an event is only half of the story, as nothing will happen if nothing reacts to the event being fired. In the case of Unity it's as easy as creating a new MonoBehaviour script and giving it a component reader, plus a couple of methods containing the code we want to run when we receive an event. These methods must be registered as callbacks to the event through the component reader in the script's OnEnable method, and must be removed in the OnDisable method. This is mostly to prevent unexpected behaviour and to stop the script from receiving event information while it is disabled.
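A sketch of such a script is below; the Triggered/Add/Remove naming follows the reader-and-callback pattern described above and is indicative rather than the exact SDK API:

```csharp
// Illustrative sketch: a client-side script that spawns cannonball effects
// whenever a fire event arrives via the ShipControls reader.
public class CannonFireVisualiser : MonoBehaviour
{
    [Require] private ShipControls.Reader shipControlsReader;

    private void OnEnable()
    {
        // Register the callbacks while the script is active...
        shipControlsReader.FireLeftTriggered.Add(OnFireLeft);
        shipControlsReader.FireRightTriggered.Add(OnFireRight);
    }

    private void OnDisable()
    {
        // ...and remove them so a disabled script receives no event information.
        shipControlsReader.FireLeftTriggered.Remove(OnFireLeft);
        shipControlsReader.FireRightTriggered.Remove(OnFireRight);
    }

    private void OnFireLeft(FireLeft e)   { /* spawn cannonballs to port */ }
    private void OnFireRight(FireRight e) { /* spawn cannonballs to starboard */ }
}
```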

Next was a short tutorial that discussed how components are accessed by workers and clients. One of the key terms to understand is "checked out". Workers don't know about the entire simulated environment in SpatialOS; instead they only know about an allocated subset of the environment, called a checkout area. They have read access to, and can receive updates from, any entity within this designated area. I mentioned earlier that more than one worker can have read access to a component, and this is because the checkout areas of workers can overlap, meaning that an entity may be within the area of multiple workers. This is also why only one worker can have write access to a component at any given time.

The ShipControls component's full schema
The final tutorial that I managed to complete before the day ended walked me through the basics of creating a new component from scratch, in this case a "health" component that could be applied to ships so that cannonball hits would affect them on contact.

As mentioned before, the component is defined in schemalang. In the schema file you define the namespace of the component as well as the component itself. Each component must have a unique ID within the project, and this is defined in the schema file. The properties and events of the component are all defined here (e.g. the Health component has a "current_health" integer property). You can also define commands here, but I believe those are covered in the final tutorial.
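As a sketch, a Health schema along those lines would look like this (the package name and ID are illustrative; only the "current_health" property comes from the tutorial):

```
// Sketch of the Health component schema.
package improbable.ship;

component Health {
  id = 1006;                  // must be unique within the project
  int32 current_health = 1;   // persistent integer property (field number 1)
}
```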

After defining the component, the code has to be generated once again so that the new component can be accessed within the project. Adding the component to an entity is as easy as modifying the template of whichever entity you wish to add it to. Reducing the health of a ship in the tutorial was as simple as updating the current health of the Health component whenever a collision was detected between the ship and a cannonball - using a mixture of Unity's OnTriggerEnter method and a writer for the Health component I defined.
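A sketch of that collision handler follows, assuming the cannonball prefab carries a hypothetical "Cannonball" tag; the writer API shape is indicative of the pattern rather than exact:

```csharp
// Illustrative sketch: knock points off a ship's Health component when a
// cannonball hits it. Only enabled where this worker can write to Health.
public class TakeDamage : MonoBehaviour
{
    [Require] private Health.Writer healthWriter;

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Cannonball"))
        {
            return;
        }

        // Reduce the current health via a component update.
        healthWriter.Send(new Health.Update()
            .SetCurrentHealth(healthWriter.Data.currentHealth - 100));
    }
}
```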

Writing to the new Health component
In conclusion, I think that SpatialOS was actually fairly simple to use once it was all set up. I did attempt to launch the project locally, but I never managed to get it consistently working in the short time I had left. The biggest drawback of the Pirates tutorial is that it didn't give me much of a feel for the main attraction of SpatialOS - the ability to have multiple workers running a simulation in tandem; for the entirety of the tutorials only one worker was needed. I'm very curious to see how SpatialOS develops as a platform, as I feel it could have some interesting applications.

24 July 2017

Mind maps in 3D

I've often wondered, whilst working with Datascape (Daden's 3D visualisation tool), whether it would be possible to produce 3D mind maps. Mind maps are widely used across Daden for planning, brainstorming and keeping information in an accessible form. Fortunately, Noda arrived just in time for a recent DadenU day, which gave me the opportunity to see how well (or not) 3D mind mapping would work.


Noda is available through the Steam store and makes extensive use of the Oculus Rift and the Oculus Touch controllers. First impressions weren't good: the lack of any tutorial means that the user must learn the UI through trial and error (Google Blocks uses a great intro tutorial to overcome this problem). I'm also not sure that the controls are particularly intuitive - for example, whilst the teleport facility is good for large movements, I couldn't find a way to take a step back when I was too close to a node to work with it comfortably.



Control issues aside, the biggest problem seems to be the cognitive load: the effort spent adding nodes, linking nodes, and particularly editing nodes (i.e. changing the text) would be better spent on thinking about the problem that you are trying to map out. This is illustrated by the fact that I prefer to use Mindmup (simple mind mapping in a browser) over XMind (a powerful desktop app) when brainstorming. I prefer Mindmup because it's always ready in the browser, and its limited options (e.g. no icons) mean time is spent concentrating on the problem, not on making the map look fancy.

However, I can see that once brainstorming is over, Noda could provide a great way to communicate ideas based on a mind map - but until the UI is improved I wouldn't want to be the one building it! Finally, does 3D add anything to mind maps? I'd like to think that it does, but unfortunately I didn't see any mind maps in Noda that took advantage of it, and I didn't have the patience with the UI to build a mind map complex enough to need 3D.

Whilst this all seems very critical of Noda, I must say well done to its creators for having the ambition to build it - and it should be remembered that Noda is still in Early Access and therefore likely to improve rapidly.

17 July 2017

Benjamin - Work Experience

Benjamin is a year 11 student from King Edward's School who worked at Daden for one week as part of his course. The following is an account of his time at Daden, as written by Benjamin.

Work Experience Blog Post

Monday:
After experimenting with Fieldscapes in the morning, I began to write a C# script for a new widget that could be used to measure height in the environments. I'd only recently begun learning C#, so this was a very useful experience for me. I learnt a lot about how to properly structure code by looking at the other widgets, and with help from Sean I learnt how to do more complicated things, like accessing other scripts and returning different variables from a method.

Tuesday:
I finished the widget in the morning and then began to document it. I cleaned up the code, adding comments, before writing a wiki article for it. With that completed, I learnt about source control and how the project was managed and updated, eventually uploading my changes to the server.

Wednesday & Thursday:
I had created some game assets in the past, so I got to work creating some Norman props for a Motte and Bailey castle environment in Fieldscapes. I rarely have the opportunity to practise all day like this, so being able to create multiple assets with performance in mind was a very beneficial experience.

Friday: I finished modelling a battle axe and then, finally, showed the assets in a meeting. Overall, I have learnt a lot about programming, modelling, and the way a project is managed in general. I thoroughly enjoyed my time at Daden.


3 July 2017

iLRN Immersive Learning Conference - Coimbra June 2017 - and Fieldscapes wins best demo!


I was lucky enough to spend last week attending the Immersive Learning Research Network's annual conference in Coimbra, Portugal. I delivered both a paper on Fieldscapes and a couple of hands-on demos of the system - and I'm proud to say that Fieldscapes won best demo!

If you search Twitter for #iLRN2017 you'll get a pretty good feel for the conference - I tweeted out most of my highlights - but here are a few key takeaways.

iLRN Itself


iLRN looks to be establishing itself as the "go to" place for research into immersive learning environments, which includes 3D, VR, AR, MR etc. (in fact, one of the calls at the end of the conference was for a decent definition of immersive learning!). They've also identified a trivium (we were in an old university!) of base subjects (computer science, game science, pedagogic science), and a quadrivium for higher study (multiple perspectives, NPIRL, situation/context, transfer).

We're working up our own "guide to Immersive Learning", but it certainly looks like iLRN will be a key reference point, and we're keen to get involved in the annual horizon scan and gap analysis they plan to do.

OpenSim



I've not seen this much OpenSim in years! It seemed like every other presentation was talking about research using it. What this highlights is that there is still a lot of valid 3D immersive research going on, and people haven't all jumped on the VR bandwagon. OpenSim is, I think, primarily used because it is a) open, b) cheap/free and c) easy to use (c.f. Unity3D). But there is a recognition that the visual quality (at least to the standard shown here) is now falling below what people find acceptable. Talking to people at the Fieldscapes demos, there may be a real opportunity for us with Fieldscapes here, as we are a) cheap, b) easy (easier?) to use, and c) of higher graphic quality. We're not open, but we have had elements of PIVOTE as open source in the past and are certainly keen to talk about opening the PIVOTE standard, and maybe even the PIVOTE engine.

3D and VR



In relation to the OpenSim/VR issue, there was one interesting paper which showed that the differences in learning between 3D and VR were not actually that great - and in some cases the move to VR reduced learning! We'd love to do some more solid research in this area.

Other Snippets

Some other papers and demos that caught my eye:

  • Using Kinect as a presentation trainer - capturing your body movements and audio levels and commenting as you go!
  • Using Unity3D to create a visual memory palace in 3D/VR
  • A good longitudinal study in the use of MiRTLE (blast from the past) for delivering immersive classrooms
  • Leonel and his team developing an ontology for immersive learning authoring - will be interesting to see what links there are with Fieldscapes
  • Using AR posters around the classroom walls for the kids to trigger content - especially for languages. The speaker's whole house is AR'd!
  • Great use of www.menti.com by Hanan (see below) for audience participation
  • A nice Scottish Empire exhibition build, and use of VSim with primary kids
  • Nice Communicate! authoring tool for dialog based trainers
  • On the down side, far too much use of 3rd party promo videos in some keynotes


Old Friends



Elements of the conference were certainly like an SL/OpenSim meet-up from the late noughties. In particular, it was great to meet VRider/Hanan (centre) face to face, having known and worked with him in SL for over eight years!

Next Time?

Overall the event is certainly worth going to again. Montana in 2018 may be a bit of a long haul, but London in 2019 won't be! My recommendations for an even better event:


  • More discussion and fewer presentations (perhaps with far more posters instead).
  • Look at using techniques such as Delphic Oracle and Fish-bowl which worked so well at OU's ReLive conference on Immersive Learning
  • Use immersive technologies to let people attend and participate remotely
  • Ditto between conferences to broaden out SIG and local meetings - and more of those?
  • More use of menti.com and similar
Overall, a great week, topped and tailed by trips to Bussaco and Porto, and a lot to reflect on, some of which will make its way into later posts and writings.





19 June 2017

AR and VR in data visualisation – can it ever be useful to our puny human minds?


The Register has just published a post on VR and Dataviz featuring quotes from our MD highlighting the fact that VR may be more suitable for the "communicate" phase of a dataviz exercise than the "explore" phase.

The bit about having "spent six months trying to get his firm's software to work in VR, but eventually decided to stick with monitors" is not 100% true - we just switched focus from native Oculus to WebGL, which then gave us the added benefit of browser support, as well as Cardboard (which people can actually afford!). We'll soon support Oculus through WebVR and, if there's the interest, a Unity-based player that reads Datascape output.

The bit about "it just needs a good engine to prepare the data first" is spot on, though - and applies to every data viz system, 3D, VR or plain old 2D! We still spend more time massaging data than we do actually generating the visualisation from the cleaned and enriched data.

Otherwise the general thrust is right - we think VR is better suited to the end-of-pipeline task of sharing and communicating your data. If you do want to use 3D (which for a lot of use cases we think you should), then you're better off doing it on an ordinary monitor but in a 3D (flight-sim style) environment - one that you can work in all day without getting nauseous, whilst still being able to communicate with colleagues. And then, of course, just click the Publish button to generate a web and VR version to share!

Why not download Datascape now to give both modes a try!





Birmingham Open Data - Traffic Levels



For some time we've been meaning to plot some of the data coming out of Birmingham City Council's excellent open data initiative. So today we finally got around to downloading some datasets from their Open Data Factory - and there certainly seems to be a lot of good and usable data there.

The first dataset we've tried is the annual vehicle traffic counts for about 160 sites across the city. The only real issue was that locations were given as National Grid references, so we did a simple linear conversion to lat/long based on some known co-ordinates in the city. Since some of the data points represent 4km of road, we don't think any error is significant!

We used a simple geotemporal plot, with a disc for each year's data stacked on top of each other - so each site produces a column of varying-width discs, the width/radius being proportional to traffic levels. To aid immediate visual understanding we also mapped traffic levels to colour in a simple heat map.

The resultant visualisation is at: http://live.datascapevr.com/viewer/?wid=4e5d4cd4-c987-43a2-bdda-24ef747bc57b

Just click the link to fly through in 3D in your browser, or 3D/VR on your smartphone + Google Cardboard.

The most immediate comment from the visualisation is how little the data has changed over 15 years. There is no major sense of traffic levels around the city blooming - some minor increases at some of the sites, but by no means all. It's also obvious that the M6 and A34(M) are, hardly surprisingly, carrying the biggest traffic loads, and then down through the Bristol Road. The main arterial routes are next.

Using 3D to stack the data also helps to highlight artefacts from data collection - something that Datascape always seems to make easy to find. In this case it's sites like the one below, where a single sensor is replaced by two sensors in order to get better resolution.



There are also some quite complex changes, such as when the M6 Toll opened in 2003, with one sensor being replaced by several, and then some further consolidation.



We can also see significant changes in inner city monitoring with several sites being phased out.



And finally, this M6 sensor appears to show a massive drop (111,497 to 19,753) - but this could be due to a change in the A47/J5 layout.


Other M6 sensors don't show a big drop post-2003/M6 Toll, so it's unlikely to be that - in fact, none of the M6 sensors show any big post-toll change except possibly a minor drop, soon recovered, for the ones straight after the junction. Here are just the M6 sites for reference.