14 August 2017

James - Work Experience

James is a year 10 student who worked at Daden for one week as part of his course. The following is an account of his time at Daden, written by James.

After a team meeting on Monday, I set to work getting to grips with Fieldscapes, using the tutorials to create a quiz that takes the user through the world answering various questions. This turned out to be useful later on - my geography knowledge was tested in a quiz mid-week, so knowing from my own project that Ulaanbaatar is the capital of Mongolia was very helpful!

I was then set the task of importing files from Paint 3D into Fieldscapes, which prompted research into the numerous 3D file types available and their uses, as well as how to model a 3D object.

[Image: Some default models in Paint 3D in 3D mode]


Finally, I was able to export Paint 3D files as FBX into Unity, then create an asset bundle to be imported into Fieldscapes. We encountered problems with offsets and colours along the way, which proved to be a great learning experience too. The asset bundle I made featured artistic marvels such as a coffee cup with 3D text and a rainbow.
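For anyone wanting to try the same pipeline, the Unity end of the asset bundle step boils down to a small editor script. A minimal sketch (the output folder, menu name and build target here are assumptions, not the exact script used):

    using UnityEditor;

    // Sketch: builds every asset tagged with an AssetBundle name in the
    // Inspector. Output folder and build target are illustrative.
    public class CreateAssetBundles
    {
        [MenuItem("Assets/Build AssetBundles")]
        static void BuildAllAssetBundles()
        {
            // The output folder must already exist.
            BuildPipeline.BuildAssetBundles("Assets/AssetBundles",
                BuildAssetBundleOptions.None,
                BuildTarget.StandaloneWindows);
        }
    }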

[Image: Paint 3D models imported into Fieldscapes]


In addition, I was present at a meeting that showed me the many uses of virtual reality and 3D, as well as how business between two companies is carried out.

Then on Wednesday, I made an inventory of all the computers in the office, which prompted discussions about aspect ratios, computer specs and anti-virus software, required me to use each computer's BIOS, and taught me about the financial side of things through discussions about the cost of the computers.

Next, on Thursday, I was involved in testing, which gave me insight into how it is carried out, along with the gratifying feeling of discovering a funny bug - in this case props being placed in the sky and avatars floating into the air, seemingly ascending to heaven.

I then participated in the testing of a virtual mentor, which again showed the need for and the process of testing, and both the positives and negatives of using VR and 3D in the classroom. Next I tried programming a chatbot, adding an input box to it, which greatly improved my JavaScript and allowed me to practise HTML and CSS in a practical environment rather than just a classroom. Throughout the week I also had a go at C# programming, which I learned from scratch.

Finally, on Friday, I continued with programming the chatbot, improving and optimising the existing code. I used JavaScript to present contacts, and CSS to improve the appearance of the bot in general, adding an input area, an enter button, and a scroll bar for when the chat overflows.


Delving into SpatialOS

SpatialOS is a cloud computing platform developed by the UK-based company Improbable that can be used to run large-scale simulated worlds, such as a massively multiplayer game (MMO), a virtual city, or a model of the brain. It is a technology I first heard of in early 2016 and it has been on my radar since, so I decided to look into it on the most recent DadenU day by working through some of the tutorials to see what it was all about.

There are a few core concepts to SpatialOS that are essential to understanding how it works. The two main concepts are Entities and Workers.

Each object simulated in a SpatialOS world is represented by what is called an Entity. This could be a tree, a rock, a nerve cell, or a pirate ship. Each of these entities can be made up of components, which define certain persistent properties, events, and commands. An example would be a player character entity that defines a "health" component - this would have a value property, an event for what happens when it reaches 0, and perhaps some commands that can modify the property in specific ways.

[Image: My ship in the watery world]
All of the processing performed in the simulated world, such as visualising the world or modifying component properties, is performed by Workers. These are services that can be scaled by SpatialOS depending on resource demands. There are both server-side workers, managed by SpatialOS, and client-side workers - the applications that users interact with.

You are able to develop, debug, and test applications developed on SpatialOS on your local machine, allowing small-scale messing around to be done fairly painlessly. My plan was to work through the tutorials in the documentation so that I could get a feel for how to use the technology. The first lesson in the Pirates tutorial series focuses on setting up the machine to run a local instance of SpatialOS and the tutorial project itself.

A command-line package manager called Chocolatey is used to install the SpatialOS command line interface (CLI), whose location is stored in an environment variable. The source code for the tutorial includes a Unity Worker and a Unity Client. Included in the project is a scene with an empty ocean environment. All other objects, such as the islands and the fish, are generated by a worker when the project is launched, and the player ship is generated by a client when it connects. The CLI was used to build the worker and launch SpatialOS locally. With that, the 'server side' of the game was running, and all that was left was for a client to connect to it.
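For reference, the two CLI steps looked roughly like this - command names are from memory of the 2017-era tooling and may have differed between SDK versions:

    spatial worker build UnityWorker --target=development
    spatial local launch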

There are several ways that a client can be run, but the most useful for local development with Unity is to run it through the editor interface. Pressing play launches a local client that lets you sail around the ocean as a ship.


[Image: Observing pirate ships and fish using the Inspector tool]
SpatialOS has an interesting web-based tool called the Inspector that lets you see all of the entities and workers in the running simulation. It displays the areas of the game world that each individual worker and client is currently processing. You even have the ability to remove a worker from the simulation, although SpatialOS will start a new worker instance if it decides it needs one - and as only one is required in the tutorial, a new one was launched whenever I deleted the existing worker.

All of the entity types listed can be colour-coded so that they are easier to follow in the 2D top-down view. There is a 3D option, but I couldn't seem to get it to work in my browser. All of the components that make up an entity can be viewed as well, which leads me to believe that the Inspector could be a fairly useful monitoring tool during development. The Inspector is available for cloud deployments as well as local ones.

Other lessons in the tutorial take you through the basics step by step. The world was very empty to begin with and in dire need of more entities, so the second lesson takes you through the process of creating one from scratch. This is a two-step process: the first step is to write an entity template, and the second is to use that template to spawn the entity within the game world.

[Image: Building the pirate ship entity template]
The tutorial project uses a factory method pattern to generate the templates for each entity, so to create our AI pirate ships all we needed to do was write our own factory method for them. The entity object is generated using the builder pattern, and some components are required in every entity generated - a position component and a metadata component. The pattern also requires that you set the persistence of the entity and the permissions on its access control list (ACL) before any additional components are added.
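A sketch of what such a factory method looked like in the Unity SDK of the time - EntityBuilder and CommonRequirementSets follow my recollection of the tutorial code, and the namespaces and prefab name are approximate:

    using Improbable.Worker;                // namespaces approximate
    using Improbable.Unity.Entity;
    using Improbable.Unity.Core.Acls;
    using Improbable.Ship;                  // generated from the schema
    using UnityEngine;

    public static class PirateShipTemplate
    {
        // Factory method producing the template for an AI pirate ship.
        public static Entity Create(Vector3 initialPosition)
        {
            return EntityBuilder.Begin()
                // Position and Metadata are required on every entity.
                .AddPositionComponent(initialPosition, CommonRequirementSets.PhysicsOnly)
                .AddMetadataComponent(entityType: "PirateShip")
                // Persistent entities get saved into snapshots.
                .SetPersistence(true)
                // Read permissions for the whole entity...
                .SetReadAcl(CommonRequirementSets.PhysicsOrVisual)
                // ...then any additional components, each with a write ACL
                // (here: server-side workers only). Data ctor args approximate.
                .AddComponent(new ShipControls.Data(0, 0), CommonRequirementSets.PhysicsOnly)
                .Build();
        }
    }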

Spawning of entities in the tutorial occurs at two distinct stages - at runtime when a player connects, and at the beginning when the world is created from what is known as a snapshot. A snapshot is a representation of the state of the game world at a specific point in time, and when you launch the project to SpatialOS you can define a snapshot to load from.

Every game world requires an initial load state, and this is what a snapshot provides. In the case of the tutorial, the player ship template is used to spawn a ship when a user connects, and the pirate ship template is used to spawn ships in the snapshot we defined as the default. To define a snapshot we created a custom Unity menu item that populates a dictionary with all of the entities we want to spawn, including a whole bunch of our new pirate ships. Once the worker is rebuilt, the client can now see a whole host of static pirate ships within the ocean environment.
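The menu item itself was along these lines. SaveSnapshot stands in for whichever serialisation helper the tutorial project provided, and the ship count and positions are placeholders, so treat this purely as a sketch:

    using System.Collections.Generic;
    using UnityEditor;
    using UnityEngine;
    using Improbable.Worker;                // namespaces approximate

    public static class SnapshotMenu
    {
        [MenuItem("Improbable/Generate Default Snapshot")]
        private static void GenerateDefaultSnapshot()
        {
            var entities = new Dictionary<EntityId, Entity>();
            var nextId = 1;

            // A whole host of static pirate ships scattered about the ocean.
            for (var i = 0; i < 50; i++)
            {
                entities.Add(new EntityId(nextId++),
                    PirateShipTemplate.Create(RandomOceanPosition()));
            }

            SaveSnapshot(entities);   // hypothetical helper from the project
        }

        private static Vector3 RandomOceanPosition()
        {
            // Placeholder: a random point in a patch of open water.
            return new Vector3(Random.Range(-50f, 50f), 0f,
                               Random.Range(-50f, 50f));
        }
    }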

[Image: Generating a snapshot that includes pirate ships]
Getting the pirate ships to move around the environment was next. The tutorial focused on manipulating a component's properties by creating a script that writes values to the ShipControls component of the pirate ship entity.

Access restrictions defined when attaching a component to an entity template determine which kinds of worker can read from or write to the component. We can use a custom attribute to determine which worker type we want the script to be available on - for example, the pirate ship is an NPC, so we only want it controlled on the server side, and we lock the script using the attribute so that it only appears on UnityWorker instances.

Only one worker, or client, can have write access to a component at any given time, though more than one worker can read from it. We add a component writer to the script we have created and give it the [Require] attribute - this means that the script will only be enabled if the current worker has write access to the component.

To write to a component you use a send method that takes an update structure containing whatever changes to the component values need to happen - in the case of the pirate ship we want to update the speed and steering values of the ShipControls component to get it to move. The worker was rebuilt again, the local client relaunched, and we had moving pirate ships! There was no decision-making so they were rather aimless, but at least they were moving now.
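Putting the attribute, the writer and the update structure together, the movement script looked something like this - class and method names are mine, and the Writer/Update API follows the tutorial as I remember it:

    using Improbable.Ship;                  // generated from the schema
    using Improbable.Unity;
    using Improbable.Unity.Visualizer;
    using UnityEngine;

    // NPC steering, so restrict the script to server-side workers.
    [WorkerType(WorkerPlatform.UnityWorker)]
    public class PirateShipMovement : MonoBehaviour
    {
        // Enabled only on the one worker with write access to ShipControls.
        [Require] private ShipControls.Writer ShipControlsWriter;

        private void Update()
        {
            // Send an update structure containing the new property values.
            ShipControlsWriter.Send(new ShipControls.Update()
                .SetTargetSpeed(1.0f)        // full ahead...
                .SetTargetSteering(0.0f));   // ...with no decision-making
        }
    }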

[Image: Event data flow]
Another important aspect of components is the ability to fire off events. These are transient, and are usually used for one-off or infrequent changes, as they carry less bandwidth overhead than modifying properties, which are persistent. To learn about events we were tasked with making locally spawned cannonballs visible on other clients.



Adding events to a component first requires knowledge of how a component is defined in the first place. SpatialOS uses a schema to generate code that workers can then use to read from and write to components. Schemas are written in schemalang, SpatialOS' own proprietary language. An event is defined using the structure 'event <type> <name>' - for example, we defined an event to be fired when a cannon is fired on the left of the ship like so: event FireLeft fire_left.

[Image: Using our new FireLeft and FireRight events instead of locally firing cannons]
Events are declared within the component, while FireLeft is defined as an empty type outside the component definition, in the following fashion: type FireLeft {}. These custom types are capable of carrying data, but that wasn't required for the purposes of the tutorial.
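Put together, the additions to the ship's schema looked something like this - reconstructed from memory, so the package name and field numbering are illustrative:

    package improbable.ship;

    // Empty custom types - they could carry data, but don't need to here.
    type FireLeft {}
    type FireRight {}

    component ShipControls {
        id = 1003;
        float target_speed = 1;
        float target_steering = 2;
        // Transient events, fired when a broadside is triggered.
        event FireLeft fire_left;
        event FireRight fire_right;
    }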

Once the schema for the component has been written, the code needs to be generated so that we can access the component from within our Unity project. The CLI can generate code in multiple languages (currently C#, C++ and Java). To be able to fire events we need access to the component writer so that, when we detect that the user has pressed the "fire cannonballs" key, we can fire an event using the component update structure, just as we did when moving the pirate ships.
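The firing side then looks much like the movement script, just adding an event to the update instead of setting properties - key binding and class name here are my own:

    using Improbable.Ship;
    using Improbable.Unity;
    using Improbable.Unity.Visualizer;
    using UnityEngine;

    // Player input, so this lives on the client.
    [WorkerType(WorkerPlatform.UnityClient)]
    public class CannonFirer : MonoBehaviour
    {
        [Require] private ShipControls.Writer ShipControlsWriter;

        private void Update()
        {
            if (Input.GetKeyDown(KeyCode.E))
            {
                // Events ride in the same update structure as properties.
                ShipControlsWriter.Send(new ShipControls.Update()
                    .AddFireLeft(new FireLeft()));
            }
        }
    }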

[Image: The script that contains callbacks that fire the cannons when an event is received]
Firing an event is only half of the story, as nothing will happen unless something reacts to the event being fired. In the case of Unity it's as easy as creating a new MonoBehaviour script and giving it a component reader, as well as a couple of methods containing the code we want to run when we receive an event. These methods must be registered as callbacks to the event through the component reader in the MonoBehaviour script's OnEnable method, and must be removed as callbacks in the OnDisable method. This prevents unexpected behaviour and stops the script from receiving event information while it is disabled.
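A sketch of the receiving side, with the Add/Remove callback registration described above - the FireLeftTriggered name follows the SDK's generated-code pattern as I recall it:

    using Improbable.Ship;
    using Improbable.Unity.Visualizer;
    using UnityEngine;

    // Runs on any worker or client that can read ShipControls.
    public class CannonVisualizer : MonoBehaviour
    {
        [Require] private ShipControls.Reader ShipControlsReader;

        private void OnEnable()
        {
            // Register the callback so we react to incoming events.
            ShipControlsReader.FireLeftTriggered.Add(OnFireLeft);
        }

        private void OnDisable()
        {
            // Deregister so a disabled script receives no event information.
            ShipControlsReader.FireLeftTriggered.Remove(OnFireLeft);
        }

        private void OnFireLeft(FireLeft fireLeft)
        {
            // Spawn and animate the cannonballs locally here.
        }
    }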

Next was a short tutorial that discussed how components are accessed by workers and clients. One of the key terms to understand is 'checked out'. Workers don't know about the entire simulated environment in SpatialOS; instead they only know about an allocated subset of the environment, called a checkout area. They have read access to, and can receive updates from, any entity within this designated area. I mentioned earlier that more than one worker can have read access to a component, and this is because the checkout areas of workers can overlap, meaning that an entity may be within the area of multiple workers. This is also the reason that only one worker can have write access to a component at any given time.

[Image: The ShipControls component's full schema]
The final tutorial that I managed to complete before the day ended walked me through the basics of creating a new component from scratch, in this case a "health" component that could be applied to ships so that cannonball hits would affect them on contact.

As mentioned before, the component is defined in schemalang. In the schema file you define the namespace of the component as well as the component itself. Each component must have an ID that is unique within the project, and this too is defined in the schema file. The properties and events of the component are all defined here (e.g. the Health component has a "current_health" integer property). You can also define commands here, but I believe those are covered in the final tutorial.
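So the Health component's schema file would look something like this - the ID and package name are illustrative, but the current_health property is as described above:

    package improbable.ship;

    component Health {
        // Every component needs an ID that is unique within the project.
        id = 1006;
        int32 current_health = 1;
    }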

After defining the component, the code has to be generated once again so that the new component can be accessed within the project. Adding the component to an entity is as easy as modifying the template of whichever entity you wish to add it to. Reducing the health of a ship in the tutorial was as simple as updating the current health of the Health component whenever a collision was detected between the ship and a cannonball - using a mixture of Unity's OnTriggerEnter method and a writer to the Health component I defined.
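The collision handler, sketched out - the tag name, damage value and the generated currentHealth accessor are all assumptions:

    using Improbable.Ship;
    using Improbable.Unity;
    using Improbable.Unity.Visualizer;
    using UnityEngine;

    // Damage is authoritative on the server side.
    [WorkerType(WorkerPlatform.UnityWorker)]
    public class TakeDamage : MonoBehaviour
    {
        [Require] private Health.Writer HealthWriter;

        private void OnTriggerEnter(Collider other)
        {
            // "Cannonball" is whatever tag the cannonball prefab carries.
            if (other != null && other.gameObject.CompareTag("Cannonball"))
            {
                // Arbitrary damage value for the sketch.
                HealthWriter.Send(new Health.Update()
                    .SetCurrentHealth(HealthWriter.Data.currentHealth - 250));
            }
        }
    }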

[Image: Writing to the new Health component]
In conclusion, I think SpatialOS was actually fairly simple to use once it was all set up. I did attempt to launch the project locally, but I never managed to get it working consistently in the short time I had left. The biggest drawback of the Pirates tutorial is that it didn't give me much of a feel for the main attraction of SpatialOS - having multiple workers running a simulation in tandem - as only one worker was needed for the entirety of the tutorials. I'm very curious to see how SpatialOS develops as a platform, as I feel it could have some interesting applications.

24 July 2017

Mind maps in 3D

I've often wondered whilst working with Datascape (Daden's 3D visualisation tool) whether it would be possible to produce 3D mind maps. Mind maps are widely used across Daden for planning, brainstorming and keeping information in an accessible way. Fortunately, Noda arrived just in time for a recent DadenU day, which gave me the opportunity to see how well (or not) 3D mind mapping would work.


Noda is available through the Steam store and makes extensive use of the Oculus Rift and the Oculus Touch controllers. First impressions weren't good: the lack of any tutorial means that the user must learn the UI through trial and error (Google Blocks uses a great intro tutorial to overcome this problem). I'm also not sure that the controls are particularly intuitive - for example, whilst the teleport facility is good for large movements, I couldn't find a way to take a step back when I was too close to a node to work with it comfortably.



Control issues aside, the biggest problem seems to be cognitive load: the effort spent adding nodes, linking nodes, and particularly editing nodes (i.e. changing the text) would be better spent thinking about the problem you are trying to map out. This is illustrated by the fact that I prefer Mindmup (simple mind mapping in a browser) over XMind (a powerful desktop app) when brainstorming: Mindmup is always ready in the browser, and its limited options (e.g. no icons) mean time is spent concentrating on the problem, not on making the map look fancy.

However, I can see that once brainstorming is over, Noda could provide a great way to communicate ideas based on a mind map - though until the UI is improved I wouldn't want to be the one building it! Finally, does 3D add anything to mind maps? I'd like to think that it does, but unfortunately I didn't see any mind maps in Noda that took advantage of it, and I didn't have the patience with the UI to build a mind map complex enough to need 3D.

Whilst this all seems very critical of Noda, I must say well done to them for having the ambition to build it - and it should be remembered that Noda is still in Early Access and therefore likely to improve rapidly.

17 July 2017

Benjamin - Work Experience

Benjamin is a year 11 student from King Edward's School who worked at Daden for one week as part of his course. The following is an account of his time at Daden, written by Benjamin.

Work Experience Blog Post

Monday:
After experimenting with Fieldscapes in the morning, I began to write a C# script for a new widget that could be used to measure height in the environments. I'd only recently begun learning C#, so this was a very useful experience for me. I learnt a lot about how to properly construct code by looking at the other widgets, and with help from Sean I learnt how to do more complicated things, like accessing other scripts and returning different variables from a method.

Tuesday:
I finished the widget in the morning and then began to document it. I cleaned up the code, adding comments, before writing a wiki article for it. With that completed, I learnt about source control and how the project was managed and updated, eventually uploading my changes to the server.

Wednesday & Thursday:
I had created some game assets in the past, so I got to work creating some Norman props for a Motte and Bailey castle environment in Fieldscapes. I rarely get the opportunity to practise all day like I did here, so being able to create multiple assets with performance in mind was a very beneficial experience.

Friday:
I finished modelling a battle axe and then, finally, showed the assets in a meeting. Overall, I have learnt a lot about programming, modelling, and the way a project is managed in general. I thoroughly enjoyed my time at Daden.


3 July 2017

iLRN Immersive Learning Conference - Coimbra June 2017 - and Fieldscapes wins best demo!


I was lucky enough to spend last week in Portugal attending the Immersive Learning Network's annual conference in Coimbra. I delivered a paper on Fieldscapes and a couple of hands-on demos of the system - and I'm proud to say that Fieldscapes won best demo!

If you search Twitter for #iLRN2017 you'll get a pretty good feel for the conference - I tweeted out most of my highlights - but here are a few key takeaways.

iLRN Itself


iLRN looks to be establishing itself as the "go to" place for research into immersive learning environments, which includes 3D, VR, AR, MR etc. (in fact one of the calls at the end of the conference was for a decent definition of immersive learning!). They've also identified a trivium (we were in an old university!) of base subjects (computer science, game science, pedagogic science), and a quadrivium for higher study (multiple perspectives, NPIRL, situation/context, transfer).

We're working up our own "Guide to Immersive Learning", but it certainly looks like iLRN will be a key reference point, and we're keen to get involved in the annual horizon scan and gap analysis they're keen to do.

OpenSim



I've not seen this much OpenSim in years! It seems like every other presentation was talking about research using it. What it does highlight is that there is still a lot of valid 3D immersive research going on and people haven't all jumped on the VR bandwagon. OpenSim is, I think, primarily used because it is a) open, b) cheap/free and c) easy to use (c.f. Unity3D). But there is a recognition that the visual quality (at least to the standard shown here) is now falling below what people find acceptable. Talking to people at the Fieldscapes demos, there may be a real opportunity for us with Fieldscapes here, as we are a) cheap, b) easy (easier?) to use, and c) of higher graphic quality. We're not open, but we have had elements of PIVOTE as open source in the past and are certainly keen to talk about opening up the PIVOTE standard, and maybe even the PIVOTE engine.

3D and VR



In relation to the OpenSim/VR issue there was one interesting paper showing that the differences in learning between 3D and VR were not actually that great - and in some cases the move to VR reduced learning! We'd love to do some more solid research in this area.

Other Snippets

Some other papers and demos that caught my eye:

  • Using Kinect as a presentation trainer - capturing your body movements and audio levels and commenting as you go!
  • Using Unity3D to create a visual memory palace in 3D/VR
  • A good longitudinal study in the use of MiRTLE (blast from the past) for delivering immersive classrooms
  • Leonel and his team developing an ontology for immersive learning authoring - will be interesting to see what links there are with Fieldscapes
  • Using AR posters around the classroom walls for the kids to trigger content - especially for languages. The speaker's whole house is AR'd!
  • Great use of www.menti.com by Hanan (see below) for audience participation
  • A nice Scottish Empire exhibition build, and use of VSim with primary kids
  • Nice Communicate! authoring tool for dialog based trainers
  • On the down side, far too much use of 3rd party promo videos in some keynotes


Old Friends



Elements of the conference were certainly like an SL/OpenSim meet-up from the late noughties. In particular it was great to meet VRider/Hanan (centre) face to face, having known and worked with him in SL for eight years or so!

Next Time?

Overall the event is certainly worth going to again. Montana in 2018 may be a bit of a long haul, but London in 2019 won't be! My recommendations for an even better event:


  • More discussion, fewer presentations (perhaps have far more posters instead).
  • Look at using techniques such as Delphic Oracle and fish-bowl, which worked so well at the OU's ReLive conference on immersive learning
  • Use immersive technologies to let people attend and participate remotely
  • Ditto between conferences to broaden out SIG and local meetings - and more of those?
  • More use of menti.com and similar
Overall, a great week, topped and tailed by trips to Bussaco and Porto, and a lot to reflect on - some of which will make its way into later posts and writings.





19 June 2017

AR and VR in data visualisation – can it ever be useful to our puny human minds?


The Register has just published a post on VR and Dataviz featuring quotes from our MD highlighting the fact that VR may be more suitable for the "communicate" phase of a dataviz exercise than the "explore" phase.

The bit about "spent six months trying to get his firm's software to work in VR, but eventually decided to stick with monitors" is not 100% true - we just switched focus from native Oculus to WebGL, which then gave us the added benefit of browser support, as well as Cardboard (which people can actually afford!). We'll soon support Oculus through WebVR and, if there's the interest, a Unity-based player that reads Datascape output.

The bit about "it just needs a good engine to prepare the data first" is spot on though - and applies to every dataviz system, 3D, VR or plain old 2D! We still spend more time massaging data than we do actually generating visualisations from the cleaned and enriched data.

Otherwise the general thrust is right - we think VR is better suited to the end-of-pipeline task of sharing and communicating your data. If you do want to use 3D (which for a lot of use cases we think you should), then you're better off doing it on an ordinary monitor but in a 3D (flight-sim style) environment - one that you can work in all day without getting nauseous, and whilst still being able to communicate with colleagues. And then of course just click the Publish button to generate a web and VR version to share!

Why not download Datascape now to give both modes a try!





Birmingham Open Data - Traffic Levels



For some time we've been meaning to plot some of the data coming out of Birmingham City Council's excellent open data initiative. So today we finally got around to downloading some datasets from their Open Data Factory - and there certainly seems to be a lot of good and usable data there.

The first dataset we've tried is the annual vehicle traffic counts for about 160 sites across the city. The only real issue was that locations were given as National Grid references, so we did a simple linear conversion to lat/long based on some known coordinates in the city. Since some of the data points represent 4km of road, we don't think any error is significant!
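A linear conversion like this just interpolates between two known reference points. A sketch in C# - the calibration values below are placeholders for illustration, not the real coordinates we used:

    // Crude linear mapping from OS National Grid eastings/northings to
    // lat/long, calibrated on two known points in the city. Fine for
    // city-scale plots where a few hundred metres of error don't matter.
    static (double lat, double lon) GridToLatLon(double easting, double northing)
    {
        // Placeholder calibration points - substitute two real locations.
        const double e0 = 400000, n0 = 280000, lat0 = 52.42, lon0 = -1.90;
        const double e1 = 410000, n1 = 290000, lat1 = 52.51, lon1 = -1.75;

        var lat = lat0 + (northing - n0) * (lat1 - lat0) / (n1 - n0);
        var lon = lon0 + (easting - e0) * (lon1 - lon0) / (e1 - e0);
        return (lat, lon);
    }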

We used a simple geotemporal plot, with a disc for each year's data stacked on top of each other - so each site produces a column of varying-width discs, the width/radius being proportional to traffic levels. To aid immediate visual understanding we also mapped traffic levels to colour in a simple heat map.

The resultant visualisation is at: http://live.datascapevr.com/viewer/?wid=4e5d4cd4-c987-43a2-bdda-24ef747bc57b

Just click the link to fly through in 3D in your browser, or 3D/VR on your smartphone + Google Cardboard.

The most immediate observation from the visualisation is how little the data has changed over 15 years. There is no major sense of traffic levels around the city booming. There is some minor increase at some of the sites - but by no means at all of them. It's also obvious that the M6 and A34(M) are, hardly surprisingly, carrying the biggest traffic loads, followed by the Bristol Road, and then the main arterial routes.

Using 3D to stack the data also helps to highlight artefacts from data collection - something that Datascape always seems to make easy to spot. In this case it's sites like the one below, where a single sensor is replaced by two sensors in order to get better resolution.



There are also some quite complex changes, such as when the M6 toll opened in 2003, with one sensor being replaced by several, and then some further consolidation.



We can also see significant changes in inner city monitoring with several sites being phased out.



And finally, this M6 sensor appears to show a massive drop (111,497 to 19,753) - but could that be due to a change in the A47/J5 layout?


Other M6 sensors don't show a big drop post-2003/M6 Toll, so it's unlikely to be that - in fact none of the M6 sensors show any big post-toll change, except possibly a minor drop, soon recovered, for the ones straight after the junction. Here are just the M6 sites for reference.




Daniel, Humza and Julio - Work Experience

During 5-9 June we played host to three Year 10 students from Aston University Engineering Academy on work experience. Two of them (Daniel and Humza) had already spent a week with us earlier in the year. Here is their collective report (verbatim).

HUMZA 
Chatbot & Chatscript

Monday - We were assigned our tasks and our mentors to help us with our tasks. Adam showed me what a chatbot is and an example of one. Then he showed me the coding behind the bot.

Tuesday - I started to write some of my own code for the Henry Bot - just some of the basics. Towards the end of the day Adam asked me to think of a topic to program my own bot on.

Wednesday - I couldn't think of a topic, so Adam chose one for me. I then had the rest of the day to code my bot, occasionally getting help from Adam. The image below shows me chatting to the bot.
Thursday - I was close to finishing my bot when Adam asked me to start learning HTML and CSS, so I could create a webpage to talk to the bot online.
Friday - Adam was not here, so I was not able to finish my web page for the bot. Overall, I have had a good week and learnt a lot of skills that I can use later in life.

DANIEL 
Fieldscapes 

Monday - Out of the three options we were given, I chose the Fieldscapes task. For this I had to download the Fieldscapes app onto my phone, and was given a few hours to test most of the maps and report any bugs I found. Alongside this, I gave feedback with suggestions for any improvements they could make. The image shows some of my suggestions.


Tuesday - I had the task of searching for 3D models of props that could replace existing props. While looking for models I had to consider the different file types that Unity accepts and whether the models fitted the criteria.

Wednesday - On Wednesday I created a Word document that explains to a new user how each widget works and what it does - for example, the Tape Measure. To complete this task I had to use Fieldscapes and try out each widget. Towards the end of the day, Humza and I created videos showing how to play Fieldscapes on mobile.

Thursday - After David purchased the props, I imported them into Unity and began resizing and texturing them. I mainly focused on an idea that David came up with for an interactable chess board. This required me to texture, resize and add a collider to each chess piece. I resized the new props using various old props as a reference for roughly what size they should be.

Friday - On the last day, I finished up previous scripts and wrote my blog.

JULIO 
Amazon Alexa Skill & Javascript 

Monday - On the first day at Daden I had to learn how to code with JavaScript. This meant using Codecademy and completing its course on JavaScript.

Tuesday - The next day, with my new knowledge of JavaScript, I was given the task of producing a new skill for Alexa. This was supposed to take in a question and output a response, with the coding in JavaScript. The skill was based on having Alexa say "Hello world" in response to a trigger, which would be my question. The image is a screenshot of the Alexa skill-building app.


Wednesday - Halfway through the week I was tasked with exploring a new objective: using ZapWorks to create some augmented reality content. The website allows the user to create a store of content which a marker triggers as a "pop-up" - images, videos, soundtracks, or information linked to websites. The stages varied in how difficult they were to make, but the harder ones gave more control over the content, which meant richer augmented reality could be created.

Thursday - On Thursday I carried on with building three new intents for Alexa, this time alone. When I finished, I had to learn how to use switch statements to reduce the amount of code and make it more efficient. Once this was done, I had to change the utterances of the skills to be grammatically correct, so that they make sense to the person using the skill. Finally, I had to find out why the response Alexa gave started with "undefined"; I didn't manage to track down the cause, so I used a replace method to replace "undefined" with "The answer is". The image shows a switch statement I wrote in my Alexa skill.


Friday - On the last day of my work experience I had to create this blog as my final task.

15 June 2017

Fieldscapes Android App now on Play Store



Fieldscapes is now available for Android. The Android app will let you play any* Fieldscapes exercise. It supports both 1st and 3rd person views, single and multi-user modes, and even flying!

You must have a (free) Fieldscapes account in order to use it.

You can download the app from the Play Store by searching for Fieldscapes, or view the page at: https://play.google.com/store/apps/details?id=com.DadenLimited.Fieldscapes

Video at: https://www.youtube.com/watch?v=OXeBDZW1yYA



Thanks to our work experience students Humza, Daniel and Julio for putting the raw video footage together and acting as hand models!

*Location and inventory creators must upload separate Android versions of their assets in order for an exercise to work on Android - but this is a 5-minute process, and does not need to be done for each exercise.

8 June 2017

Fieldscapes and Trainingscapes Live



Today we're pleased to announce that Fieldscapes is formally live.

This represents the culmination of over two years' work that started with the InnovateUK Design for Impact feasibility study, was followed by development funding from InnovateUK through until October last year, and then by our closed and open betas since.



During the Design for Impact project we worked closely with our partners The Open University, the Field Studies Council and service design consultancy Design Thinkers, as well as numerous schools and universities (and went on some lovely physical field-trips), to create a service (not just the software!) that lets educators create and share 3D and VR immersive field-trips - and almost any other lesson - without needing specialist 3D skills. We've done this by clearly separating the two key tasks involved: the creation of the 3D assets (likely to remain a technical task for some time to come, though not beyond the ability of a keen amateur), and the creation of the lesson plan and pedagogy.



For this second element we think we've created an easy-to-use, intuitive, forms-based, what-you-see-is-what-you-get editor, where educators can just walk out onto their chosen location, place the props that the student will interact with, and then define the interactions that the student will have.



The resulting lesson can then be accessed, in single or multi-user mode, from a PC, Mac or Android device (iOS to come), and also with an Oculus Rift VR headset if available (other VR headsets to follow).



To us, though, the key is that once you have created a lesson you can share it with other educators anywhere in the world. And you can give them permission to copy your exercise so that they can customise it to the needs of their students, and translate it into their language.



So please check out our videos of how to use Fieldscapes, see our gallery of example locations and lessons (and we stress these are examples only - we want you to be the creators of content, not us!), then register, download the app, try out the existing lessons and start to create your own. We only charge once you start to open exercises up to your students.



For those who want a dedicated rather than shared system - be they educators, commercial trainers or L&D staff - we are also now offering Trainingscapes: the same technology as Fieldscapes, but offered as a dedicated instance for each client, branded to that client, and with the client in complete control over access. We'll have a dedicated Trainingscapes demo shortly, but in the meantime sign up for Fieldscapes to get a feel for the system.



If you need any help in getting started we're running regular webinars, but also please don't hesitate to email support@fieldscapesvr.com and we can set up a Skype or similar session to get you going.

And if you'd like more information on Fieldscapes or Trainingscapes, or a face-to-face or web demo, then again please just call us on +44 121 250 5678, or email info@daden.co.uk or use the contact form.

26 May 2017

General Election: 2015 Data - Turnout Data


Link for fly-through: http://live.datascapevr.com/viewer/?wid=c26b2318-ec14-457a-9ae5-dc64bc0fc6e1


I was just testing out a minor update for Datascape when I decided to plot the 2015 General Election results data on a scatter plot, not geographically. I chose:


  • X: electorate
  • Y: Margin/majority
  • Z: Turnout
  • Size: Size of UKIP vote
  • Colour: Winning party

A quick fly around of the data revealed a number of interesting points:


In the high margin/majority space it is the Conservatives that dominate - more big-majority, safe heartland seats.


It is at the small majority end where we tend to see the smaller parties - few of them have massive majorities in their seats.



The SNP wins are generally associated with very high turnouts. The dots (yellow) are small as there was little or no UKIP vote, and that is what we've mapped to radius.


And the most striking: looking along the turnout axis, the Conservatives dominate in the high-turnout seats and Labour in the low-turnout seats. Cause or effect?

Update: 7 June 17


Here's another visualisation we really like - just showing the vote for each of the main parties across all the constituencies. The fly-through link is:

http://live.datascapevr.com/viewer/?wid=7959ce9e-5cce-4b7d-8dd4-44adeddea7e7

18 May 2017

London Underground Data in Datascape



Inspired by the TfL talk at VRWorld, I thought it was about time we got some TfL data into Datascape, so I managed to put this together before I got home from the conference.

Doogal's site has a downloadable list of tube station locations (lat/long) (https://www.doogal.co.uk/london_stations.php), which was a good start. The TfL Open Data site, after registration, then let me download the station entrance counts for weekdays in 2016 (daily averages, I assume) in 15-minute slots through the day. A bit of Perl scripting combined the two files and split the TfL table into a single column, with some tidying to get the station names to match! Then into Datascape. In all these visualisations height is the 24-hour day, each disc is a data sample for a 15-minute period, and width and colour are proportional to the number of people entering the station at that time.
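The munging itself is nothing exotic - ours was a bit of Perl, but the same wide-to-long reshape looks like this as a C# sketch (file names and column layout here are assumptions):

    using System.IO;
    using System.Linq;

    class ReshapeCounts
    {
        static void Main()
        {
            // Station name -> (lat, lon), from the locations list.
            var locations = File.ReadLines("stations.csv").Skip(1)
                .Select(l => l.Split(','))
                .ToDictionary(f => f[0], f => (lat: f[1], lon: f[2]));

            // TfL file assumed to have one row per station and one
            // column per 15-minute slot; emit one output row per slot.
            using (var output = new StreamWriter("counts_long.csv"))
            {
                output.WriteLine("station,lat,lon,slot,entries");
                foreach (var f in File.ReadLines("entrances.csv").Skip(1)
                             .Select(l => l.Split(',')))
                {
                    if (!locations.TryGetValue(f[0], out var loc)) continue;
                    for (var slot = 1; slot < f.Length; slot++)
                    {
                        output.WriteLine($"{f[0]},{loc.lat},{loc.lon},{slot},{f[slot]}");
                    }
                }
            }
        }
    }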

You can fly through a subset of the data yourself in 3D or VR (Cardboard) by following this link in any WebGL-capable browser:


[Image: Full dataset - the column with a fat base at the centre of the map is Waterloo]

[Image: Flying in to Oxford Circus - bulging at home time]

[Image: Flying out along the Northern Line - note some stations have an evening bump as well as a morning one]

[Image: Showing a sample every 3 hours in the Web/VR version to get the general trend]

[Image: Close-up of the Web/VR version - we limit to 5-10k points at the moment for performance]


We'll explore some of the other TfL datasets over the next few months, and we can support multiple datasets in a visualisation so you could switch between entrance and exit data, or weekdays and weekends.

Replacing the OpenStreetMap base map with a 3D model is certainly possible - we can do model imports ourselves but haven't yet added it as a public function. And moving the whole experience onto HoloLens is certainly something we'd like to do in the future.

But for now you can just download a free trial of Datascape and start publishing your data visualisations in 3D on the PC, web and VR.

A short unedited video fly-through is below - but try the link at the top for the full 3D/VR experience.





All the data is (c) 2017 Transport for London and used under the Open Government Licence v2.0.