18 December 2017

Geospatial AR - Building Your Own!



For ages now I've wanted an app that would display points of interest on my smartphone screen, overlaid on the real-world camera view. The new Ordnance Survey (OS) AR layer (above) and Yelp Monocle (below) do the sort of thing I want, but I want to be able to define my own locations, and maybe add some custom functionality.



After a couple of fruitless web and App/Play Store searches I couldn't find what I wanted. Wikitude was closest, as were several GIS-related offerings, but it was going to cost several hundred pounds to "publish" my dataset. I then looked at mobile development frameworks (e.g. Corona), several of which appeared to offer an AR option, but really only marker-based AR, not geospatial AR. So by about 10:30 on DadenU day I realised I was going to have to roll my own. I'd found a nice tutorial (Part 1 and Part 2) and so, without ever having developed a Java app or indeed any mobile app, I decided to give it a go.

It took about the next 3 hours to install Android Studio and all its updates. Porting the tutorial across didn't take long, but the app kept crashing on start-up. I then realised that I needed to manually grant the GPS and Camera permissions! Did that and the app worked.

All the app did at that stage, though, was put a marker in the centre of the screen when the camera pointed in a particular direction. I wanted the OS style: a whole host of markers that stayed on screen and slid around as you moved the phone.

A bit of maths and some searching on CodeProject later, I soon had the basic operating mode changed: one marker, sliding around and just about aligning with the target. I then hardcoded about a dozen markers and got them working. Here's a screenshot of that stage, with the icon that came with the tutorial. The other thing I added was that markers were drawn smaller the more distant they were.
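For the curious, the maths boils down to a compass bearing between two GPS fixes, a linear mapping of that bearing onto the screen, and a distance-based scale. Here is a minimal Java sketch of the idea - the names and scaling constants are illustrative, not the app's actual code:

    // Illustrative sketch of the marker maths (not the actual app code).
    public final class MarkerMaths {

        static final double EARTH_RADIUS_M = 6_371_000;

        /** Great-circle bearing from the user to the marker, in degrees. */
        static double bearingTo(double lat1, double lon1, double lat2, double lon2) {
            double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
            double dLon = Math.toRadians(lon2 - lon1);
            double y = Math.sin(dLon) * Math.cos(p2);
            double x = Math.cos(p1) * Math.sin(p2)
                     - Math.sin(p1) * Math.cos(p2) * Math.cos(dLon);
            return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
        }

        /** Haversine distance in metres. */
        static double distanceTo(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        }

        /** Horizontal screen position: 0 = left edge, screenWidth = right edge. */
        static float screenX(double bearingDeg, double headingDeg,
                             double cameraHFovDeg, int screenWidth) {
            // Signed angle between where the camera points and where the marker
            // is, normalised to -180..+180 so markers slide smoothly off-screen.
            double delta = ((bearingDeg - headingDeg + 540.0) % 360.0) - 180.0;
            return (float) (screenWidth / 2.0
                    + (delta / (cameraHFovDeg / 2.0)) * (screenWidth / 2.0));
        }

        /** Shrink markers with distance, clamped so they stay tappable. */
        static float markerScale(double distanceM) {
            return (float) Math.max(0.3, Math.min(1.0, 200.0 / distanceM));
        }
    }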




That was about the end of DadenU day: I had the core app working, but I wanted more. So over the next few evenings I:


  • Moved from hard-coded markers to an internally selectable dataset
  • Added some on-marker text
  • Made new markers of different shapes
  • Added the ability to set marker shape and colour based on fields in the data (sketched below)
  • Added an option to move markers in the vertical plane based on distance (as OS/Yelp do)
  • Added the ability to filter on a type parameter (again, as OS does)
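The shape/colour mapping and the type filter are both straightforward once the markers come from a dataset. A hypothetical sketch - the field and type names here are invented, not my actual schema:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    class Marker {
        double lat, lon;
        String label;
        String type;          // e.g. "infantry", "cavalry", "artillery"
    }

    enum Shape { CIRCLE, SQUARE, TRIANGLE }

    class MarkerStyling {
        /** Choose a shape from a field in the data. */
        static Shape shapeFor(Marker m) {
            switch (m.type) {
                case "infantry": return Shape.SQUARE;
                case "cavalry":  return Shape.TRIANGLE;
                default:         return Shape.CIRCLE;
            }
        }

        /** The type filter: keep only markers whose type is currently enabled. */
        static List<Marker> visible(List<Marker> all, Set<String> enabledTypes) {
            List<Marker> out = new ArrayList<>();
            for (Marker m : all) {
                if (enabledTypes.contains(m.type)) out.add(m);
            }
            return out;
        }
    }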
That got me to about here:






I also added the ability to spoof the phone's GPS location, so the app pretends you are in the middle of the dataset - here, the centre of the Battle of Waterloo - while you are physically standing in the office car park.
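The spoof itself is just a switch in front of the location fix, something like this (the names and coordinates are illustrative):

    // If spoofing is on, replace the real GPS fix with the dataset's centre
    // point before any of the marker maths runs.
    class LocationSource {
        boolean spoof = true;
        double spoofLat = 50.68, spoofLon = 4.41;   // roughly the Waterloo battlefield

        double[] currentPosition(android.location.Location realFix) {
            if (spoof || realFix == null) return new double[] { spoofLat, spoofLon };
            return new double[] { realFix.getLatitude(), realFix.getLongitude() };
        }
    }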

I then wanted to add some very specific functionality. As you might guess from the last sentence, one of my use cases for this is battlefield tours, so there are not only fixed locations but also moving troops. I wanted a time slider that you could use to set a time in the battle, and have the unit markers then point to the right place. The time slider is at the bottom of the screen for relevant datasets, with the "current time" displayed - see below.
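Under the hood this is simple waypoint interpolation. A sketch of the idea - again the classes and fields are illustrative, not the app's actual code:

    import java.util.List;

    // Each unit has timestamped waypoints; its displayed position is
    // interpolated between the two waypoints that bracket the slider time.
    class Waypoint {
        final long timeMins;          // minutes since the start of the battle
        final double lat, lon;
        Waypoint(long t, double la, double lo) { timeMins = t; lat = la; lon = lo; }
    }

    class Unit {
        List<Waypoint> path;          // assumed sorted by strictly increasing timeMins

        /** Position at the slider time, linearly interpolated along the path. */
        double[] positionAt(long t) {
            Waypoint first = path.get(0), last = path.get(path.size() - 1);
            if (t <= first.timeMins) return new double[] { first.lat, first.lon };
            for (int i = 1; i < path.size(); i++) {
                Waypoint a = path.get(i - 1), b = path.get(i);
                if (t <= b.timeMins) {
                    double f = (double) (t - a.timeMins) / (b.timeMins - a.timeMins);
                    return new double[] { a.lat + f * (b.lat - a.lat),
                                          a.lon + f * (b.lon - a.lon) };
                }
            }
            return new double[] { last.lat, last.lon };
        }
    }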




A final tidy-up of the menu icons and this is the current version:



I've even added a voice description for each icon, and arrows that point you in the direction of a featured icon. The descriptions are linked to time, so using the slider and some VCR-style controls the app can walk you through a spatial narrative.

Next up on the to-do list are:
  • Click on an icon to see (or hear) full data
  • Load data from a web server
And that should pretty much give me my minimum viable product.

We're already talking to one potential client about a project based around this, and can see how it might fit into two other projects (giving an alternative to viewing the data in VR if actually on-site). We'll keep you posted as it develops. At the very least I'll start using it for some of my own battlefield walks.


8 December 2017

Automating Camera Directing in Fieldscapes

Fieldscapes has the potential to be a very flexible platform when it comes to content creation, and while David was busy recording a video of it a potential use case for the application crossed my mind: would it be possible to direct a video within a Fieldscapes exercise by automating camera positioning? This would allow for a variety of uses, from exercise flythroughs to video recordings, so I decided to look into the idea on the last Daden U day.

First I had to remind myself of the cameras' current capabilities before adding any new functionality. I created a new exercise and added a new camera to it. On the camera panel you can see some of the options we have for setting up the camera: we can rotate it horizontally and vertically, as well as adjust its field of view. There is also an option to switch to an orthographic projection, but I wasn't going to need that.

Camera menu.
The first idea that came to mind was that being able to alter the field of view via PIVOTE actions would be very powerful. That feature isn't currently implemented, but I put it on my list of potential improvements. The other idea was the ability to alter individual rotation axes via PIVOTE actions, to allow more subtle control of the camera than is currently available.

Now that I had looked at the camera set-up options it was time to remind myself of what PIVOTE can do with the cameras, so I went to edit one of the default system nodes to see the available actions. As you can see from the image below, it is very limited - you can only alter whether or not the camera is active. This would have to change drastically if I was to do what I wanted.

Old camera actions.
Automating camera position and movement would require cameras to be able to use some of the actions available to props, such as teleporting to, or moving to, the position of a prop, or looking at a prop within the environment. Some new actions would also be nice, such as one to change the field of view, as previously mentioned.

To help determine which actions I needed, I decided to take an existing Fieldscapes exercise and design a camera 'flythrough' of it, in the same way some video games give an overview of the level before the user begins. After much deliberation the exercise I chose was the Apollo Explore exercise, developed to let users walk about the Moon and learn about the Apollo 11 landing. This exercise has props spread around the environment, which makes it easy to define a path for the camera, or cameras, to follow.

Intended camera positions are circled.
Mapping out the positions I wanted the cameras to be at during the flythrough was the first step. I decided on placing two extra cameras in the environment - one to look at the user avatar and one to move around the props. Switching between them would give a nice fade to black and back. I wanted to slowly pan around the user avatar at the start, then have the camera show each of the main pieces of equipment before ending on the lunar lander itself. After this, the exercise would start and the camera would return to a third-person view.

After plotting out all of the positions and directions I wanted the cameras to go to, I decided how I wanted to transition between the positions, so that I could determine which actions I would require. As I wanted to smoothly pan the camera around the avatar at the start, the most obvious requirement was an action that would move the camera from one position to another over time. I added to the cameras the three prop actions I felt were most useful - MoveTo, TeleportTo, and LookAt. To linearly interpolate from the current camera position to the arrow I had placed pointing at the avatar's head, we would use the MoveTo command.

New camera actions.
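Fieldscapes itself is written in C#, but the idea behind a MoveTo action is plain linear interpolation over time, applied each frame. A sketch in Java purely to illustrate - the names are mine, not the real PIVOTE implementation:

    // Each frame the camera is placed a fraction of the way along the line
    // from its start position to the target, based on elapsed time.
    class MoveToAction {
        final double[] start, target;   // x, y, z
        final double duration;          // seconds
        double elapsed = 0;

        MoveToAction(double[] start, double[] target, double duration) {
            this.start = start; this.target = target; this.duration = duration;
        }

        /** Called once per frame; returns the camera position for this frame. */
        double[] update(double dt) {
            elapsed = Math.min(elapsed + dt, duration);
            double f = elapsed / duration;          // 0..1
            double[] p = new double[3];
            for (int i = 0; i < 3; i++)
                p[i] = start[i] + f * (target[i] - start[i]);
            // Note: the height should come from this interpolation alone -
            // re-raycasting to the ground each frame is the kind of thing
            // behind the runaway climb described below.
            return p;
        }

        boolean finished() { return elapsed >= duration; }
    }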
I set up a timer that would trigger the camera facing the avatar to move to one of the arrows after 1 second had passed, and used the same timer to move the camera to the other positions as more time passed. Unfortunately this is where I hit a snag: there was a bug in the way the camera raycasts to the ground when moving between positions, causing it to drift slowly upwards into space forever. I ran out of time and had to head home before I found the cause of the issue, so that is where the experiment had to stop for the time being.

In conclusion, I believe that if the breaking bug I discovered towards the end of the day can be fixed, then automatic camera movement should work well within Fieldscapes. I'd also like to develop a PIVOTE action that could transition the camera's field of view over time - perhaps then we will see a dolly zoom replicated in Fieldscapes!
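For reference, the dolly zoom is just a constraint tying field of view to subject distance: to keep a plane of width w at distance d exactly filling the frame you need w = 2 · d · tan(fov/2). A toy sketch of that relationship:

    final class DollyZoom {
        /** FOV (degrees) that keeps a plane of width w at distance d filling the frame. */
        static double fovDeg(double w, double d) {
            return Math.toDegrees(2 * Math.atan(w / (2 * d)));
        }
    }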

Oops!


1 December 2017

Fieldscapes Exercise Visualiser - Daden U Day




For the Daden U day I decided to create what I call the Fieldscapes Exercise Visualiser. The aim of the visualiser is to create a graphical representation of a Fieldscapes exercise. The idea came about while I was creating a complex Fieldscapes exercise and struggling to quickly recall its structure - the Props, Prop Commands and their relationships with System Nodes. Another reason for creating the visualiser was to have a way of explaining the flow of an exercise from beginning to end. If it is a non-linear exercise, the different paths can also be illustrated.


Before beginning work on the visualiser I had a few ideas of how to illustrate an exercise using symbols and shapes. Whilst trying to create a flow diagram manually using pre-existing tools, I discovered a web application called draw.io. Initially I had attempted to use UMLet, a Windows application for drawing UML diagrams, but decided against it, the reason being that a web application would be more accessible. As a web application I could integrate it into the Fieldscapes Content Manager, reducing the number of tools content creators have to access to make full use of the Fieldscapes ecosystem.


Unfortunately draw.io does not have an API (Application Programming Interface). In my attempt to find one I discovered that draw.io uses a library called mxGraph. mxGraph is a JavaScript diagramming library that enables interactive graph and charting applications to be quickly created that run natively in most major browsers. mxGraph also has backends that support the JavaScript application running on a server; the backend software can use Java, C# or PHP. For the purposes of the U Day I used the C# backend, as Fieldscapes is written in C#.


I downloaded the source code for mxGraph from the GitHub repository, which contained an example C# .NET website. The solution worked right out of the box without any issues. Fortunately, because of work done for the Fieldscapes Editor, most of the code needed to read Fieldscapes exercises stored in XML was already written, so all I needed to do was write a couple of functions that extracted the data required to represent the various elements of an exercise as geometry with connecting lines. Extracting the data was a breeze; however, progress ground to a halt when I tried to draw different shapes to represent the various elements of an exercise, such as props, system nodes etc. After some trial and error and lots of googling I managed to understand how to style a vertex, which is the mxGraph term for the geometry drawn on the canvas.
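For anyone curious, the basic pattern looks like this - shown here in the Java flavour of mxGraph, which, as far as the API goes, the C# backend I used closely mirrors; the element names and colours are made up for illustration:

    import com.mxgraph.view.mxGraph;

    // Insert styled vertices for exercise elements and an edge for their
    // relationship. The style strings are what took the trial and error.
    public class ExerciseDiagram {
        public static void main(String[] args) {
            mxGraph graph = new mxGraph();
            Object parent = graph.getDefaultParent();
            graph.getModel().beginUpdate();
            try {
                // insertVertex(parent, id, value, x, y, width, height, style)
                Object prop = graph.insertVertex(parent, null, "Prop: Rock Sample",
                        40, 40, 140, 40, "shape=ellipse;fillColor=#FFCC66");
                Object node = graph.insertVertex(parent, null, "System Node: OnTouch",
                        260, 140, 160, 40, "shape=rectangle;fillColor=#99CCFF");
                graph.insertEdge(parent, null, "triggers", prop, node);
            } finally {
                graph.getModel().endUpdate();
            }
        }
    }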


From what I saw of the documentation and my brief time using it, mxGraph is a powerful library with many affordances for those who want to create diagrams of any nature. It allowed me to create a diagram (see image below) that showed all the different elements of a Fieldscapes exercise, with lines indicating their relationships to each other. The next step is to create some form of structure for the diagram. Development of the Fieldscapes Exercise Visualiser is not a priority at the moment, but it is something I intend to continue working on until it becomes more useful, at which point it will be integrated into the Fieldscapes Content Manager.