18 December 2017

Geospatial AR - Building Your Own!



For ages now I've wanted an app that would display points of interest on my smartphone screen, overlaid on the real-world view. The new Ordnance Survey (OS) AR layer (above) and Yelp Monocle (below) do the sort of thing I want, but I want to be able to define my own locations, and maybe some custom functionality.



After a couple of fruitless web and App/Play Store searches I couldn't find what I wanted. Wikitude came closest, as did several GIS-related offerings, but it was going to cost several hundred pounds to "publish" my dataset. I then looked at mobile development frameworks (e.g. Corona), several of which appeared to offer an AR option, but really only marker-based AR, not geospatial AR. So by about 10:30 on DadenU day I realised I was going to have to roll my own. I'd found a nice tutorial (Part 1 and Part 2) and so, without ever having developed a Java app or a mobile app, decided to give it a go.

The next three hours or so went on installing Android Studio and all its updates. Moving the tutorial across didn't take long, but the app kept crashing on start-up. I then realised that I needed to manually grant the GPS and camera permissions! Once I did that, the app worked.

All the app did at that point, though, was put a marker at the centre of the screen when the camera pointed in a particular direction. I wanted the OS style: a whole host of markers that stayed on screen and slid around as you moved the phone.

A bit of maths and some searching on CodeProject later, I had the basic operating mode changed: one marker, sliding around and just about aligning with the target. I then hardcoded about a dozen markers and got them working. Here's a screenshot of that stage, with the icon that came with the tutorial. The other thing I added was making markers smaller the more distant they are.
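The underlying maths is straightforward: take the great-circle bearing from the phone's GPS fix to each point of interest, compare it with the compass heading, and map the difference onto the screen. The app itself is Java, but here is the gist as a C# sketch (the names are mine, not the tutorial's):

    using System;

    static class MarkerMaths
    {
        // Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees.
        static double BearingTo(double lat1, double lon1, double lat2, double lon2)
        {
            double p1 = lat1 * Math.PI / 180, p2 = lat2 * Math.PI / 180;
            double dLon = (lon2 - lon1) * Math.PI / 180;
            double y = Math.Sin(dLon) * Math.Cos(p2);
            double x = Math.Cos(p1) * Math.Sin(p2) - Math.Sin(p1) * Math.Cos(p2) * Math.Cos(dLon);
            return (Math.Atan2(y, x) * 180 / Math.PI + 360) % 360;
        }

        // Map the angle between marker bearing and compass heading onto a
        // horizontal screen position, given the camera's horizontal field of view.
        static float ScreenX(double bearing, double heading, double hFovDeg, float screenWidth)
        {
            double delta = ((bearing - heading + 540) % 360) - 180; // wrap to -180..+180
            return (float)(screenWidth / 2 + (delta / hFovDeg) * screenWidth);
        }
    }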




That was about the end of DadenU day, I had the core app working but I wanted more. So over the next few evenings I:


  • Moved from hard coded markers to an internally selectable dataset
  • Added some on-marker text
  • Made new markers of different shapes
  • Added ability to set marker shape and colour based on fields in the data
  • Added an option to move markers in the vertical plane based on distance, as OS/Yelp do (sketched after this list)
  • Added the ability to filter on a type parameter (again as OS)
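The vertical-plane trick is a simple depth cue: the further away a point is, the higher up the screen its marker sits, and the smaller it is drawn. A minimal sketch, with the 2 km clamp and the proportions purely illustrative:

    using System;

    static class DepthCue
    {
        // Screen y grows downwards, so a smaller y means higher on screen.
        static (float Y, float Scale) ForDistance(double distMetres, float screenHeight)
        {
            float t = (float)Math.Min(distMetres / 2000.0, 1.0); // 0 near .. 1 at 2 km+
            float y = screenHeight * (0.7f - 0.4f * t);          // slides up the screen
            float scale = 1.0f - 0.7f * t;                       // shrinks with distance
            return (y, scale);
        }
    }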
That got to about here:






I also added the ability to spoof the phone's GPS location, so the app would pretend you were in the middle of the dataset - here the centre of the Battle of Waterloo - while physically standing in the office car park.
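The spoofing itself needn't be anything clever - conceptually it is just a switch that substitutes a fixed point for the real fix. A hypothetical sketch (the coordinates are only roughly the Waterloo battlefield):

    // When enabled, ignore the real GPS fix and report a fixed point instead.
    bool spoofGps = true;
    (double Lat, double Lon) GetLocation((double Lat, double Lon) realFix)
        => spoofGps ? (50.680, 4.412) : realFix;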

I then wanted to add some very specific functionality. As you might guess from the mention of Waterloo, one of my use cases for this is battlefield tours, where as well as fixed locations there are also moving troops. So I wanted a time slider that you could use to set a time in the battle, and then have the unit markers point to the right place. The time slider sits at the bottom of the screen for relevant datasets, with the "current time" displayed - see the screenshot below.
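Under the hood this amounts to giving each unit a list of timestamped waypoints and interpolating between them as the slider moves. A minimal sketch, with hypothetical names:

    using System.Collections.Generic;

    record Waypoint(double Time, double Lat, double Lon);

    static class UnitPath
    {
        // Linearly interpolate a unit's position along its waypoints at time t.
        public static (double Lat, double Lon) PositionAt(List<Waypoint> path, double t)
        {
            if (t <= path[0].Time) return (path[0].Lat, path[0].Lon);
            for (int i = 1; i < path.Count; i++)
            {
                if (t <= path[i].Time)
                {
                    var a = path[i - 1]; var b = path[i];
                    double f = (t - a.Time) / (b.Time - a.Time); // fraction along this leg
                    return (a.Lat + f * (b.Lat - a.Lat), a.Lon + f * (b.Lon - a.Lon));
                }
            }
            var end = path[path.Count - 1];
            return (end.Lat, end.Lon);
        }
    }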




A final tidy up of the menu icons and this is the current version:



I've even added voice descriptions for the icons, and arrows that point you in the direction of a featured icon, with the descriptions linked to time - so using the slider and some VCR-style controls it can walk you through a spatial narrative.

Next up on the to-do list are:
  • Click on an icon to see (or hear) full data
  • Load data from a web server
And that should pretty much give me my minimum viable product.

We're already talking to one potential client about a project based around this, and we can see how it might fit into two other projects (giving an alternative to viewing the data in VR when actually on-site). We'll keep you posted as it develops. At the very least I'll start using it for some of my own battlefield walks.


8 December 2017

Automating Camera Directing in Fieldscapes

Fieldscapes has the potential to be a very flexible platform when it comes to content creation, and while David was busy recording a video of it a potential use case crossed my mind: would it be possible to direct a video within a Fieldscapes exercise by automating camera positioning? This would allow for a variety of uses, from exercise flythroughs to video recordings, which is why I decided to look into the idea for the last Daden U day.

Firstly I had to remind myself of the current capabilities of the cameras before adding any new functionality. I created a new exercise and added a new camera to it. On the camera panel you can see some of the options we have for setting up the camera - we can rotate it horizontally and vertically, as well as adjust its field of view. There is also the option to change to an orthographic projection, but I wasn't going to need that.

Camera menu.
The first idea that came to mind was that being able to alter the field of view via PIVOTE actions would be very powerful. That feature isn't currently implemented, but I put it on my list of potential improvements. The other idea that popped into my head was the ability to alter individual rotation axes via PIVOTE actions, allowing more subtle control of the camera than is currently available.

Now that I had looked at the camera set-up options it was time to remind myself of what PIVOTE can do with the cameras, so I went to edit one of the default system nodes to see the available actions. As you can see from the image below, it is very limited - you can only alter whether or not the camera is active. This would have to change drastically if I was to do what I wanted.

Old camera actions.
Automating camera position and movement would require cameras to be able to use some of the actions available to props, such as the ability to teleport to, or move to, the position of a prop, or to look at a prop within the environment. Some new actions would also be nice, such as one to change the field of view as previously mentioned.

To help determine which actions I needed, I decided to choose an existing exercise in Fieldscapes and design a camera 'flythrough' of it, in the way some video games give an overview of the level before the player begins. After much deliberation I chose the Apollo Explore exercise, developed to let users walk about the Moon and learn about the Apollo 11 landing. This exercise has props spread around the environment, which makes it easy to define a path we want the camera, or cameras, to follow.

Intended camera positions are circled.
Mapping out the positions I wanted the cameras to be at during the flythrough was the first step. I decided on placing two extra cameras in the environment - one to look at the user's avatar and one to move around the props. This would give a nice fade to black and back when switching between them. I wanted to slowly pan around the user's avatar at the start, then have the camera show each of the main pieces of equipment before ending on the lunar lander itself. After this, the exercise would start and the camera would return to a third-person view.

After plotting out all the positions and directions I wanted the cameras to go to, I decided how I wanted to transition between those positions, so that I could determine which actions I would require. As I wanted to smoothly pan the camera around the avatar at the start, the most obvious action needed was one that moves the camera from one position to another over time. I added to the cameras the three actions present in props that I felt were most useful - MoveTo, TeleportTo, and LookAt. To linearly interpolate from the current camera position to that of the arrow I had placed pointing at the avatar's head, we would use the MoveTo command.
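Internally a MoveTo like this is little more than a per-frame lerp. As Fieldscapes is built on Unity, it would look something like this sketch (class and method names are mine, not the actual Fieldscapes code):

    using System.Collections;
    using UnityEngine;

    public class CameraMoveTo : MonoBehaviour
    {
        // Linearly interpolate this camera from its current position to the
        // target's position over the given duration, one step per frame.
        public IEnumerator MoveTo(Transform target, float duration)
        {
            Vector3 start = transform.position;
            for (float t = 0f; t < duration; t += Time.deltaTime)
            {
                transform.position = Vector3.Lerp(start, target.position, t / duration);
                yield return null; // wait a frame
            }
            transform.position = target.position; // snap to the exact end point
        }
    }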

New camera actions.
I set up a timer that would trigger the camera facing the avatar to move to one of the arrows after one second had passed, and I would use the same timer to move the camera to other positions as more time passed. Unfortunately this is where I hit a snag - there was a bug in the way the camera raycasts to the ground when moving between positions, causing it to slowly drift upwards into space forever. I ran out of time and had to head home before I managed to find the cause of the issue, so it was at this point that my experiment had to stop for the time being.

In conclusion, I believe that if the breaking bug I discovered towards the end of the day can be fixed, there is a great chance that automatic camera movement will work within Fieldscapes. I'd also like to develop a PIVOTE action that transitions the camera's field of view over time - perhaps then we will see a dolly zoom replicated in Fieldscapes!
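A dolly zoom is just such a move combined with a compensating field-of-view change, so the subject stays the same size on screen while the background appears to stretch. A Unity-flavoured sketch (again hypothetical names, not Fieldscapes code):

    using System.Collections;
    using UnityEngine;

    public class DollyZoom : MonoBehaviour
    {
        // Move towards or away from the subject while adjusting the field of
        // view so the frustum height at the subject stays constant.
        public IEnumerator Run(Transform subject, float targetDistance, float duration)
        {
            var cam = GetComponent<Camera>();
            float startDist = Vector3.Distance(transform.position, subject.position);
            float frustumHeight = 2f * startDist * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
            Vector3 dir = (transform.position - subject.position).normalized;
            for (float t = 0f; t < duration; t += Time.deltaTime)
            {
                float d = Mathf.Lerp(startDist, targetDistance, t / duration);
                transform.position = subject.position + dir * d;
                cam.fieldOfView = 2f * Mathf.Atan(frustumHeight / (2f * d)) * Mathf.Rad2Deg;
                yield return null;
            }
        }
    }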

Oops!


1 December 2017

Fieldscapes Exercise Visualiser - Daden U Day




For the Daden U day I decided to create what I call the Fieldscapes Exercise Visualiser. The aim of the visualiser is to create a graphical representation of a Fieldscapes exercise. The idea came about while creating a complex Fieldscapes exercise: I was struggling to quickly recall its structure - the Props, Prop Commands and their relationships with System Nodes. Another reason for creating the visualiser was to have a way of explaining the flow of an exercise from beginning to end. If it is a non-linear exercise, the different paths can also be illustrated.


Before beginning work on the visualiser I had a few ideas of how to illustrate an exercise using symbols and shapes. Whilst trying to create a flow diagram manually using pre-existing tools, I discovered a web application called draw.io. Initially I had attempted to use UMLet, a desktop application for drawing UML diagrams, but decided against it, the reason being that a web application would be more accessible. As a web application I could integrate it into the Fieldscapes Content Manager, reducing the number of tools content creators have to use to make full use of the Fieldscapes ecosystem.


Unfortunately draw.io does not have an API (Application Programming Interface). In my attempt to find one I discovered that draw.io uses a library called mxGraph. mxGraph is a JavaScript diagramming library that enables interactive graph and charting applications to be quickly created, running natively in most major browsers. mxGraph also has backends that support the JavaScript application from the server; the backend can be written in Java, C-Sharp (C#) or PHP. For the purposes of the U Day I used the C# backend, as Fieldscapes is written in C#.


I downloaded the source code for mxGraph from the GitHub repository, which contained an example C# .NET website; the solution worked right out of the box without any issues. Fortunately, because of work done for the Fieldscapes Editor, most of the code needed to read Fieldscapes exercises stored in XML was already written, so all I needed to do was write a couple of functions that extracted the data needed to represent the various elements of an exercise as geometry with connecting lines. Extracting the data was a breeze; however, progress ground to a halt when I tried to draw different shapes to represent the various elements of an exercise, such as props and system nodes. After some trial and error and lots of googling I managed to understand how to style a vertex, which is the word used in mxGraph for the shapes drawn on the canvas.
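Once the exercise XML is parsed, the gist is to insert one styled vertex per element and one edge per relationship, inside a model update. Something like this sketch - I'm assuming the .NET port mirrors the Java API's method names, and the labels and style strings here are purely illustrative:

    using com.mxgraph;

    var graph = new mxGraph();
    var parent = graph.GetDefaultParent();
    graph.Model.BeginUpdate();
    try
    {
        // Style strings control each vertex's appearance (shape, colour, etc.).
        var prop = graph.InsertVertex(parent, null, "Prop: Lander",
            20, 20, 120, 40, "shape=ellipse;fillColor=#FFCC00");
        var node = graph.InsertVertex(parent, null, "System Node: Timer",
            220, 20, 140, 40, "fillColor=#99CCFF");
        graph.InsertEdge(parent, null, "triggers", prop, node);
    }
    finally
    {
        graph.Model.EndUpdate();
    }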


From what I saw of the documentation and my brief time using it, mxGraph is a powerful library with many affordances for those who want to create diagrams of any nature. It allowed me to create a diagram (see image below) that shows all the different elements of a Fieldscapes exercise, with lines indicating their relationships to each other. The next step is to create some form of structure for the diagram. Development of the Fieldscapes Exercise Visualiser is not a priority at the moment, but it is something I intend to continue working on until it becomes more useful, at which point it will be integrated into the Fieldscapes Content Manager.



2 November 2017

C# as a Scripting Language - and in Fieldscapes?

By: Iain Brazendale

Daden U days give me the opportunity to play with ideas that have been floating around at the back of my mind for some time and this Daden U day was no exception.

I’ve been reading good things about the changes Microsoft has been making to its .NET compiler platform “Roslyn”, particularly with regard to adding scripting support. It was this, combined with thoughts of how we could easily add additional functionality to Fieldscapes, that made me decide it was time to take a deeper look at scripting.

The advantage of scripting provided through the .NET compiler is that there’s no need to learn yet another scripting language - scripts are simply written in C#, but with looser syntax requirements.


Following the article quickly gave me a rough working console application to do some testing with:
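A minimal sketch of such a harness (not the original code) is a small read-eval-print loop built on the Microsoft.CodeAnalysis.CSharp.Scripting package:

    using System;
    using System.Threading.Tasks;
    using Microsoft.CodeAnalysis.CSharp.Scripting;

    class Program
    {
        static async Task Main()
        {
            while (true)
            {
                Console.Write("script> ");
                var line = Console.ReadLine();
                if (string.IsNullOrEmpty(line)) break;
                try
                {
                    // Compile and evaluate the line as a C# script.
                    var result = await CSharpScript.EvaluateAsync(line);
                    Console.WriteLine(result);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message); // compilation or runtime errors
                }
            }
        }
    }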






This example shows accessing an array containing the names of images for different days of the week. The third query shows the real power of the script to me - here the day of the week becomes the index into the images array and returns the correct image for the day of the week (you guessed it - I ran this example on a Tuesday). The ability to show a different image for each day of the week is not something that is ever likely to be baked into Fieldscapes; however, adding scripting support allows this new functionality to be added with a single line of script.
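That single line works by handing the script a "globals" object whose public members are in scope inside the script. A minimal sketch of the idea (class and file names illustrative):

    using Microsoft.CodeAnalysis.CSharp.Scripting;
    using Microsoft.CodeAnalysis.Scripting;

    var image = await CSharpScript.EvaluateAsync<string>(
        "Images[(int)DateTime.Now.DayOfWeek]",
        ScriptOptions.Default.WithImports("System"),
        globals: new Globals());
    System.Console.WriteLine(image); // on a Tuesday: "tue.png"

    // Public members of the globals object are visible to the script.
    public class Globals
    {
        // DayOfWeek runs Sunday = 0 .. Saturday = 6, so order matters here.
        public string[] Images = { "sun.png", "mon.png", "tue.png",
                                   "wed.png", "thu.png", "fri.png", "sat.png" };
    }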

The example below shows how picking images at random can be achieved.
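With the same globals object (and usings) in place, random selection again needs only one line of script, along these lines:

    // Sketch: one line of script picking one of the images at random.
    var pick = await CSharpScript.EvaluateAsync<string>(
        "Images[new Random().Next(Images.Length)]",
        ScriptOptions.Default.WithImports("System"),
        globals: new Globals());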





However, with great power comes great responsibility… if we let users add their own scripts then they could take advantage:








Here we can see that users might access functionality that wasn't intended - in this case displaying inappropriate messages. Also of note, as you can see from my several attempts to get the syntax right (C# is case-sensitive), it could be fiddly for people to write scripts without a good IDE correcting those minor typos. These issues suggest that scripting is something that should initially be left to "advanced" users, and more thought and design will be needed before it becomes a "typical" user feature.

So, when will scripting be added to Fieldscapes? Unfortunately, scripting requires .NET 4.6 and Roslyn, while Unity still uses .NET 3.5 with a non-Roslyn compiler, so as wonderful as this technology looks it won't be a scripting solution for Fieldscapes until the tech catches up. However, it could be a useful technology for Daden's other products, such as Datascape.







23 October 2017

Trees in Birmingham




The West Midlands Data Discovery Centre is publishing a wide and interesting set of open data on the West Midlands. Over the next few weeks we'll take a look at some of the data sets and use them to create visualisations in Datascape.

First up is a database of all the trees managed by the city's highways department. For each tree we have:


  • Location (lat/long)
  • Height (shown as height on map)
  • Age (shown as colour, yellow = new, green = mature, brown = old)
  • Form (e.g. symmetric/non-symmetric - the latter being more at risk from storms; shown as shape)
  • Species (shown in text)




This gives a low oblique view over the city, showing relative heights - as expected, the younger trees tend to be smaller!



Close in on a set of pollarded and unbalanced trees. Bright green are mature, dull green are semi-mature. The small blue one has unspecified data.

We can use the standard Datascape search, filter and scrub features to help analyse the data.

We've published a subset of this visualisation to the web so that you can fly around and investigate it yourself. Just click on the image or link below.




More datasets from WMDDS to follow!

9 October 2017

Fieldscapes 1.4 released




Fieldscapes v1.4 is now out and available for free download, and it recently had some good, heavy user testing at Malvern with Year 7s from across the county. The main new features in v1.3 and v1.4 are:


  • The start of support for NPC characters through an NPC widget. You can now add an NPC avatar as a prop, have it teleport from location to location, and activate specific animations - e.g. "sit". Later releases will allow the NPC to glide or walk between locations. We are also close to releasing a "chatbot" widget to hook an NPC up to an external chatbot system, so that you can really start to create virtual characters.
  • General improvements to the UI when in VR mode - users found that the "virtual iPad" just got in the way, so we're now putting the UI directly into the scene. We'll make steady improvements to the usability of the VR experience in future releases
  • Added a new Avatar command to change clothes. This only changes between preset outfits, but is good if a character needs to change into special kit for a task - for instance our nursing avatars putting on gloves and aprons (see below)
  • Multiple choice questions are now randomised - makes the student think if they repeat the lesson!
  • Increased the inventory limit from 3 to 5 in the editor - so you can bring in props from more inventories for your scene
  • Increased the word count for multiple-choice panels and the default popup panel

Various bug fixes were also made and you can see a full list at http://live.fieldscapesvr.com/Home/Download




v1.4 is already available for PC and Mac. The Android version of v1.4 is just undergoing final testing, and we're also still progressing the iOS version of Fieldscapes through the App Store acceptance process.

Remember: We have an ongoing survey for new Fieldscapes features, please take 2-3 minutes to fill it out at: https://www.surveymonkey.co.uk/r/88YL39B

4 October 2017

Fieldscapes at the Malvern Festival of Innovation

Oculus Rift on the Moon

For the second year in a row we ran a set of workshops at the Malvern Festival of Innovation, playing host to a succession of groups of 20-30 students (mostly Year 7s) from around Worcestershire and giving them a one-hour introduction to immersive learning. We set up four stands in order to show the range of experiences and lessons that can be created, and the different ways in which they can be delivered. We had:


  • One laptop running the Solar System lesson
  • One laptop running the Apollo Educate lesson
  • One laptop running the Introduction to Carding Mill lesson (which several groups knew from field trips)
  • A couple of Google Cardboards, one with the Photosphere tour of Carding Mill and one the Apollo Explore lesson
  • An Oculus Rift running Apollo Explore
Playing tag on the Moon!


Students were split into groups of about 5 and had 10 minutes on each "stand" - so everyone got to try all the kit.

Looking at the Waterspout Waterfall in a (plastic) Google Cardboard


Student feedback from comments and feedback forms included:
  • "I wish I could spend all afternoon here"
  • "Can I come back later?"
  • "It was really cool"
  • "It was fun to do"
  • "It was memorable"
  • "The realness of it"
  • "I liked the fact that we were not there but we could see everything"
  • "It was like it was real"
  • "It was educating and fun"
Exploring the Moon and Apollo on Google Cardboard


When talking to the teachers we were keen to highlight that:
  • They didn't have to buy any new hardware, like expensive VR headsets, as they can run lessons in 2D/3D mode on existing computers, or in VR on Google Cardboard (one teacher loaded the app onto his phone as we spoke)
  • They could create their own lessons, share them, and use (and customise) those produced by other users
  • With our licensing model they only started paying once they started using it in class, so they could explore and test for free until they were confident in the system and lessons
Even the teachers got in on the act!


Fieldscapes itself was rock solid on all the devices all day - despite getting a hammering from the kids. What was particularly impressive was running the Apollo experiences in multi-user mode so the kids could play tag on the Moon - even on the venue's public wifi we had no issues with lag, and avatar movement was very smooth.



All in all a great day, and it helped remind us all why we built Fieldscapes!