21 February 2018

How to make your training more engaging


Ever had the feeling your employees just aren’t motivated by your learning and development programme?
We’ve all been to boring training sessions where the speaker’s droned on for hours on end and it’s taken every ounce of willpower you have not to fall asleep (but hey, at least there’s usually a free lunch). Training like this fails to engage trainees, meaning that information may not be retained or understood.
This is especially problematic when the subject matter is something they really need to know. What if failure to recall the information when they need it results in a serious mistake being made? The risk of such a situation arising can be significantly reduced by making sure your training is as engaging as possible to maximise the chances of it sinking in.
As well as improving information retention and recall, more engaging training helps employees to fully get to grips with the subject the first time around. This means they're less likely to have to go through the same course repeatedly, thus reducing training costs as well as the loss of productivity incurred by their downtime.
More engaging training is also likely to be more enjoyable, resulting in happier employees - and what’s not to like about that?
Here are some ideas to help make your training more engaging.

Make it interactive




There are few people who can stand to be talked at for several hours without zoning out. One sure way to increase engagement is to make the session interactive. Whether that’s by coming up with activities to help trainees participate or by using tools or props to incorporate an element of ‘learning by doing’, interaction should be more than just a five-minute Q&A session at the end of a day-long course.

Go immersive



Technology has advanced enough that training sessions don’t have to stay in the classroom - even if the participants don’t physically leave the room. Immersive training applications like Trainingscapes create simulated environments and scenarios that allow trainees to gain hands-on experience in a relevant and practical way. These tools provide more spatial, visual, and audio cues as well as environmental and emotional context, so trainees are more prepared for when they have to carry out tasks in the real world.

Group activities



Learning as a group can be more effective than solo training because everyone brings their own skills to the exercise, allowing people to contribute in different ways depending on where their strengths lie. Group activities during training are often more representative of the real world, especially if teamwork is core to a trainee’s job role; incorporating such tasks into training therefore also gives people the chance to work on their interpersonal, team, and cross-cultural skills.

Gamification



The practice of applying game-playing elements to learning and training has been shown to improve motivation, indicating that trainees may learn more by completing reward-based assessments than by simply being lectured. There’s a tremendous positive boost that comes from ticking off a task or achieving an objective, and this psychology can be harnessed to create more engaging courses that motivate trainees to progress. 

Practice



The vast majority of employee training sessions are delivered for a few hours, then the trainees are sent on their way. This seems contrary to the way we're taught at school - practice makes perfect, remember? There's plenty of scientific evidence to support the notion that we are better able to recall skills we've spent time practising, so why should training in the workplace be any different? Utilising a training method that can also be used for repeated practice will help trainees retain the information better, and they'll feel more confident in their own ability when they need to call upon the same skills in the real world.

Step out of the classroom



In order to increase trainee engagement, it’s important to think outside the box - the most effective training is unlikely to take the form of a dull classroom-based lecture. By considering what your trainees need to achieve and the skills they need to learn, you can choose a solution that maximises motivation and information uptake while minimising risk.


To see how we’ve helped organisations provide more engaging training with Trainingscapes, take a look at some of our case studies.

20 February 2018

Working with Bournemouth University to build Virtual Avebury Henge and Stone Circle

Checking out the import of the stone meshes


Hot on the heels of our virtual midwifery project at Bournemouth University we're now using Fieldscapes with them on a new heritage research and education project.

Thanks to the new experience we're creating with Bournemouth University, visitors will be able to walk virtually through the ancient Avebury Henge and stone circle, part of the Avebury and Stonehenge World Heritage Site, and experience the sights and sounds of the location as it would have been in the Neolithic period – well before much of the site was destroyed by the building of Avebury village. The project has been made possible through the Arts and Humanities Research Council’s Next Generation of Immersive Experiences programme, and is a collaboration between educators and archaeologists at Bournemouth University, sound specialists Satsymph, the National Trust and ourselves.

The main aim of this project is to bring together researchers in archaeology and virtual environment evaluation with creative partners in immersive technologies, virtual soundscapes and heritage management to develop methods of effective, innovative and fruitful working. In addition, the project aims to develop and explore the potential of virtual historical places to increase engagement with, and understanding of, the development of human cultures through a sense of virtual place.

The project builds on Professor Liz Falconer's earlier work creating a prototype 3D simulation of the Avebury complex. The new experience is being built using Fieldscapes, Daden’s platform for immersive learning and training. A key feature of Fieldscapes is that subject matter experts are able to create lessons and experiences from existing 3D assets without the need for any programming skills.



The Avebury complex in North Wiltshire is one of the greatest treasures of prehistoric Britain. Built during the Neolithic period around 4500 years ago, the central monument comprises a circular bank and ditch approximately 1 kilometre in circumference, encircling an area that includes 3 ancient stone circles, and part of the more recent Avebury village. The central monument sits in a large ritual landscape that includes avenues, burial mounds and the world-famous Silbury Hill. Avebury is part of the Stonehenge and Avebury World Heritage Site.

By creating the model in Fieldscapes the team will be able to more accurately create the real-world terrain, generate better textured stones, make more use of audio, allow researchers to customise and extend experiences without specialist help, and make the experience available on a wide range of devices including smartphones, tablets and virtual reality headsets.

Bournemouth University Professor and Project Lead Liz Falconer said “We are delighted to be working with Daden, Satsymph and the National Trust on this exciting project. We will shortly be launching a blog and website where we will post regular updates on the work, and give people the opportunity to immerse themselves in Late Neolithic Wiltshire!”

Daden MD, David Burden said – “We’re really pleased and honoured to be a part of this project. We’ve always known that immersive environments can have a significant impact on how we view and understand the past, and this is an ideal opportunity to put our thoughts into practice.”



The virtual experience will be available at Avebury Visitors Centre for the public to evaluate during the summer of 2018, and there will also be an evaluation of remote use for those unable to visit the site. It is hoped that the project will lead to the development of a fuller experience made permanently available to both the public and to schools, and then to the use of the technology for other heritage sites across the globe.

We'll keep you posted as the project progresses.





15 February 2018

Fieldscapes 1.5 released: Web browsing, chatbots and more...





Today we released v1.5 of Fieldscapes. This has a load of new and important features designed to make immersive training and learning experiences even more flexible, rewarding and engaging, as well as a number of more minor fixes and changes.

The highlights are listed below. v1.5 is available on all our supported platforms (Windows, Mac, Android, iOS). Full release notes are available on the wiki.

Web Browsing



There are two new Flat Screen props in the default inventory set. Add the new Web Browser widget to these screens and they become fully capable in-world web browsers - able to view and navigate almost any web page, and even play videos! If you really want to blow your mind you can bring up a WebGL 3D environment in the browser and view it in 2D from a 3D world! You can also set the screen into multi-user mode so that all avatars in the same assignment will see the same thing - ideal for a spot of centralised learning or sharing during an exercise. The URL can be changed by a PIVOTE command from any other prop in an exercise. Future releases will allow almost any surface to become a web browser. Go to the Web Browser wiki page for more information.

Chatbot Integration



Chatbots are computer programs that simulate natural language conversation. Fieldscapes now has a Chatbot widget which enables you to use the text-chat window to chat with a chatbot provided by an external service. We will be detailing the interface in due course so you can build your own chatbots to talk to the system, and maybe even interfaces to common platforms such as Pandorabots. We have also added a default Daden NPCs inventory with some sample avatars. Future releases will add animation and walking, but you can already use existing PIVOTE commands to teleport and move (glide) the Non-Player Characters you create. Go to the Chatbot wiki page for more information.

Multi-User Widget




Fieldscapes initially operated in either solo mode (you only see yourself in an exercise) or a hybrid multi-user mode (you see other people, but when they change something in the environment - e.g. pick up a rock - you don't see it; the rock is still there for you to pick up). With v1.5 you can now add a "multi-user widget" to any prop, which makes that prop multi-user. That means there will only be one instance of that prop within an exercise, no matter how many users, and if you move or otherwise change the prop then everyone else will see it move or change too. Implementing multi-user in this way means that you only make multi-user what needs to be multi-user, which massively saves on processing and communications. As well as enabling you to implement more realistic field trips (if you want to!), it also allows you to create more collaborative exercises and multi-user games - check out our VR chess game! Go to the Multi-User Widget wiki page for more information.
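To illustrate the idea (this is a conceptual sketch in C#, not Fieldscapes code), opt-in replication of this kind boils down to only broadcasting state changes for props that carry the multi-user widget, while everything else stays purely local:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only - not Fieldscapes source. It shows the idea of
// opt-in replication: only props carrying a multi-user widget broadcast
// their state changes, so everything else stays purely local.
public class Prop
{
    public string Id;
    public bool IsMultiUser;          // true if the prop has the multi-user widget
    public event Action<Prop> Changed;

    public void MoveTo(float x, float y, float z)
    {
        // ... update the local position here ...
        Changed?.Invoke(this);
    }
}

public class ExerciseSync
{
    private readonly List<Prop> props = new List<Prop>();

    public void Register(Prop prop)
    {
        props.Add(prop);
        prop.Changed += OnPropChanged;
    }

    private void OnPropChanged(Prop prop)
    {
        // Solo/hybrid behaviour: nothing is sent, every user keeps their own copy.
        if (!prop.IsMultiUser) return;

        // Multi-user behaviour: broadcast the change so all clients share
        // a single logical instance of this prop.
        BroadcastStateChange(prop);
    }

    private void BroadcastStateChange(Prop prop)
    {
        // Placeholder for whatever networking layer is in use.
        Console.WriteLine($"Replicating state of prop {prop.Id} to all users");
    }
}
```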


New Avatars



We have added six new avatars and removed the worst of the older ones. Included in the new set is our first hijab-wearing avatar (thanks, Sharn!).


Editor Improvements

Based on six months or so of using Fieldscapes in anger we have added two new features to the editor to help make exercise creation a bit easier:


  • Lock a prop in place so you can't accidentally move it
  • Change a prop for another one, which means you can lay out cubes initially, and then replace them once you have the proper 3D models you need

VR Improvements



Work started in 1.4.4 and was completed in 1.5: we have now completely overhauled the VR UI to make VR easier to use.




Enjoy the new features and do let us know how you get on!

25 January 2018

Newspeak Bot for Wolverhampton Literature Festival


We've been speaking to Seb Groes (now Professor of English Literature, University of Wolverhampton) about chatbot-related projects for a while now, and just before Christmas we hit on a great idea for a bot to support the Wolverhampton Literature Festival, which runs 26-28 Jan 2018 in Wolverhampton.

With the unveiling of a new statue to him at the BBC at the end of 2017, George Orwell seemed to be everywhere in the media. And with the rise of Trump and fake news, what better time to revisit Newspeak!


The Newspeak Bot turns Twitter feeds, such as those by Donald Trump, BBC News and Number 10 Downing Street, into Newspeak, the language of control Orwell invented for the totalitarian state in his dystopian classic. We currently have a library of 596 words and phrases that are being translated into Newspeak, either using words from 1984 (e.g. Minitrue), or using the guidance in the Appendix to 1984 to create our own (e.g. UnEurope for Brexit).
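The bot's internals aren't published here, but the underlying technique is straightforward phrase substitution: scan the tweet for known words and phrases and swap in their Newspeak equivalents, longest phrases first. A minimal sketch in C# (the handful of dictionary entries are purely illustrative - "Minitrue" comes from 1984 itself, "UnEurope" is the coinage mentioned above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Minimal sketch of the phrase-substitution idea - not the actual Newspeak Bot code.
// The real library has ~596 entries; these few are illustrative only.
public static class NewspeakTranslator
{
    private static readonly Dictionary<string, string> Lexicon =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { "Ministry of Truth", "Minitrue" },   // word taken directly from 1984
        { "Brexit", "UnEurope" },              // coined using the Appendix guidance
        { "very good", "plusgood" },
    };

    public static string Translate(string tweet)
    {
        // Replace longer phrases first so e.g. "Ministry of Truth" wins over any shorter entry.
        foreach (var entry in Lexicon.OrderByDescending(e => e.Key.Length))
        {
            tweet = Regex.Replace(tweet, Regex.Escape(entry.Key),
                                  entry.Value, RegexOptions.IgnoreCase);
        }
        return tweet;
    }
}
```

Fed "Brexit is very good", for example, that sketch would produce "UnEurope is plusgood".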

Here are some of our favourite retweets so far:












Professor Sebastian Groes said: “Many feel we are currently living in a dystopia not far removed from Nineteen Eighty-Four. We seem to be ruled by megalomaniac world leaders of superstates at perpetual war with one another, who are producing communications radically divorced from reality. Like Big Brother, some of these leaders enjoy a cult of personality; they seek, in Orwell’s words, "power entirely for its own sake. [They are] not interested in the good of others." Or we seem governed by elitist political parties whose privileged rule feels tyrannical because other voices are excluded. Just as in Orwell’s nightmare, memory is weakened by information overload and other strategies of distraction; technology thrashes coherent thought; personal and sexual relationships are randomly assembled by computers; we are merely chunks of information collected in databases.”

You can read the full Festival press release for the project at http://wolvesliteraturefestival.co.uk/newsspeak/4594185019

Seb and I are discussing the project at an event at the Festival on Friday afternoon - see above link for details.

18 December 2017

Geospatial AR - Building Your Own!



For ages now I've wanted an app that would display points of interest on my smartphone screen overlaid on the real-world view. The new OS AR layer (above) and Yelp Monocle (below) do the sort of thing I want, but I want to be able to define my own locations, and maybe some custom functionality.



After a couple of fruitless web and App Store/Play Store searches I couldn't find what I wanted. Wikitude was closest, as were several GIS-related offerings, but it was going to cost several hundred pounds to "publish" my dataset. I then looked at mobile development frameworks (e.g. Corona), several of which appeared to offer an AR option, but really only marker-based AR, not geospatial AR. So by about 10:30 on DadenU day I realised I was going to have to roll my own. I'd found a nice tutorial (Part 1 and Part 2) and so, without ever having developed a Java app or a mobile app, I decided to give it a go.

Installing Android Studio and all its updates took up the next three hours or so. Moving the tutorial across didn't take long, but the app kept crashing on start-up. I then realised that I needed to manually set the GPS and Camera positions! Did that and the app worked.

All the app did though was put a marker on the centre of the screen when the camera pointed in a particular direction. I wanted the OS style though - markers that stayed on screen and slid around as you moved the phone, and a whole host of them too.

A bit of maths and some searching on CodeProject, and I soon had the basic operating mode changed: one marker, sliding around and just about aligning with the target. I then hardcoded about a dozen markers and got them working. Here's a screenshot of that stage, with the icon that came with the tutorial. The other thing I added was that markers appear smaller the more distant they are.




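For anyone wanting to build something similar, the "bit of maths" above boils down to working out the bearing from the phone to each point of interest, comparing it with the compass azimuth the camera is facing, and mapping the difference onto a screen x-coordinate, with marker size scaled by distance. A rough sketch (in C# rather than the Java the app was actually written in, using an equirectangular approximation that's fine over a few kilometres; the 500 m scaling constant is just an illustrative choice):

```csharp
using System;

// Sketch of the marker-placement maths (C# here; the actual app was Java/Android).
// Equirectangular approximation - accurate enough over a few kilometres.
public static class ArMaths
{
    // Bearing in degrees from the observer to the point of interest.
    public static double BearingTo(double lat, double lon, double poiLat, double poiLon)
    {
        double dLon = (poiLon - lon) * Math.Cos(ToRad((lat + poiLat) / 2));
        double dLat = poiLat - lat;
        double bearing = ToDeg(Math.Atan2(dLon, dLat));
        return (bearing + 360) % 360;
    }

    // Horizontal screen position of the marker, or null if outside the camera's view.
    public static float? ScreenX(double bearingToPoi, double cameraAzimuth,
                                 float screenWidth, float horizontalFovDeg)
    {
        // Signed angle between where the camera points and where the POI is.
        double delta = ((bearingToPoi - cameraAzimuth + 540) % 360) - 180;
        if (Math.Abs(delta) > horizontalFovDeg / 2) return null;   // off screen

        // Map [-fov/2, +fov/2] onto [0, screenWidth].
        return (float)((delta / horizontalFovDeg + 0.5) * screenWidth);
    }

    // Markers shrink with distance, clamped so far-away ones stay visible.
    public static float MarkerScale(double distanceMetres) =>
        (float)Math.Max(0.3, Math.Min(1.0, 500.0 / distanceMetres));

    private static double ToRad(double deg) => deg * Math.PI / 180;
    private static double ToDeg(double rad) => rad * 180 / Math.PI;
}
```

When ScreenX returns null the point of interest is outside the horizontal field of view, so the marker simply isn't drawn.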
That was about the end of DadenU day, I had the core app working but I wanted more. So over the next few evenings I:


  • Moved from hard coded markers to an internally selectable dataset
  • Added some on-marker text
  • Made new markers of different shapes
  • Added ability to set marker shape and colour based on fields in the data
  • Added an option to move markers in the vertical plane based on distance (as OS/Yelp do)
  • Added the ability to filter on a type parameter (again as OS)
That got to about here:






I also added the ability to spoof the phone's GPS location, so it would pretend you were in the middle of the dataset - here, the centre of the Battle of Waterloo - while you were physically standing in the office car park.

I then wanted to add some very specific functionality. As you might guess from the last sentence, one of my use cases for this is battlefield tours, so there are not only fixed locations but also moving troops. I wanted a time slider that you could use to set a time in the battle, and then have the unit pointers point to the right place. The time slider is at the bottom of the screen for relevant datasets, with the "current time" displayed - see below.




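Behind the slider the troop movement is essentially keyframe interpolation: each unit has a list of timed positions, and the marker is drawn at a position interpolated between the two waypoints either side of the slider time. A minimal sketch (the data layout here is hypothetical, not the app's actual format):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the time-slider idea: each unit has timed waypoints
// and the marker position is interpolated for whatever time the slider is set to.
public struct Waypoint
{
    public double Minutes;   // minutes since the start of the battle
    public double Lat, Lon;
}

public static class UnitTrack
{
    public static (double lat, double lon) PositionAt(IList<Waypoint> track, double minutes)
    {
        if (minutes <= track[0].Minutes) return (track[0].Lat, track[0].Lon);
        var last = track[track.Count - 1];
        if (minutes >= last.Minutes) return (last.Lat, last.Lon);

        for (int i = 1; i < track.Count; i++)
        {
            if (minutes > track[i].Minutes) continue;
            // Linear interpolation between the two waypoints either side of 'minutes'.
            var a = track[i - 1];
            var b = track[i];
            double t = (minutes - a.Minutes) / (b.Minutes - a.Minutes);
            return (a.Lat + t * (b.Lat - a.Lat), a.Lon + t * (b.Lon - a.Lon));
        }
        return (last.Lat, last.Lon);   // unreachable, but keeps the compiler happy
    }
}
```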
A final tidy up of the menu icons and this is the current version:



I've even added voice descriptions for the icons, and arrows that point you in the direction of a featured icon, with descriptions linked to time, so using the slider and some VCR controls it can walk you through a spatial narrative.

Next up on the to do list are:
  • Click on an icon to see (or hear) full data
  • Load data from a web server
And that should pretty much give me my minimum viable product.

We're already talking to one potential client about a project based around this, and can see how it might fit into two other projects (giving an alternative to viewing the data in VR if actually on-site). We'll keep you posted as it develops. At the very least I'll start using it for some of my own battlefield walks.


8 December 2017

Automating Camera Directing in Fieldscapes

Fieldscapes has the potential to be a very flexible platform when it comes to content creation, and when David was busy recording a video of it, a potential use case for the application crossed my mind - would it be possible to direct a video within a Fieldscapes exercise by automating camera positioning? This would allow for a variety of uses, from exercise flythroughs to video recordings, and that's why I decided to look into the idea for the last Daden U day.

Firstly I had to remind myself about the current capabilities of the cameras before I added any new functionality. I created a new exercise and added a new camera to it. On the camera panel, you can see some of the options we have for setting up the camera - we can rotate horizontally and vertically, as well as adjusting the field of view of the camera. There is the option to change to orthographic projection, but I wasn't going to be needing that.

Camera menu.
The first idea that came to mind was that being able to alter the field of view via PIVOTE actions would be very powerful. That feature isn't currently implemented but I put it on my list of potential improvements. The other idea that popped into my head was the ability to alter individual rotation axes via PIVOTE actions, to allow more subtle control of the camera than is currently available.

Now that I had looked at the camera set-up options it was time to remind myself of what PIVOTE can do with the cameras. So I went to edit one of the default system nodes to see the available actions. As you can see from the image below, it is very limited - you can only alter whether or not the camera is active. This would have to change drastically if I was to be able to do what I wanted to.

Old camera actions.
Automating the camera position and movement would require cameras to be able to use some of the actions available to props, such as the ability to teleport to, or move to, the position of a prop, or the ability to look at a prop within the environment. Some new actions would also be nice, such as one to change the field of view as previously mentioned.

To help determine what actions I needed, I decided to choose an existing exercise in Fieldscapes and design a camera 'flythrough' of it, in the same way some video games show an overview of the level before the player begins. After much deliberation the exercise I chose was the Apollo Explore exercise, developed to allow users to walk about the Moon and learn about the Apollo 11 Moon landing. This exercise has props spread around the environment, which makes it easy to define a path we want the camera, or cameras, to follow.

Intended camera positions are circled.
Mapping out the positions I wanted the cameras to be at during the flythrough was the first step I took. I decided on placing two extra cameras in the environment - one to be used to look at the user avatar and one to move around the props. This would give a nice fade to black and back when switching between them. I wanted to slowly pan around the user avatar at the start, followed by the camera showing each of the main pieces of equipment before ending on the lunar lander itself. After this, the exercise would start and the camera would go back to a third person view.

After plotting out all of the positions and directions I wanted the cameras to go to, I decided how I wanted to transition between the positions so that I could determine what actions I would require. As I wanted to smoothly pan the camera around the avatar at the start, the most obvious action I would need is one that moves the camera from one position to another over time. I added to the cameras the three prop actions I felt were most useful - MoveTo, TeleportTo, and LookAt. To linearly interpolate from the current camera position to that of the arrow I placed pointing at the avatar's head, we would use the MoveTo command.

New camera actions.
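Under the hood a MoveTo of this kind is just a linear interpolation run frame by frame. Something along these lines - a Unity-style coroutine sketching the general technique rather than the actual Fieldscapes implementation - is all that's needed:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the general technique, not the Fieldscapes implementation:
// move a camera from its current position to a target transform over 'duration' seconds.
public class CameraMover : MonoBehaviour
{
    public IEnumerator MoveTo(Transform target, float duration)
    {
        Vector3 startPos = transform.position;
        Quaternion startRot = transform.rotation;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            float t = Mathf.Clamp01(elapsed / duration);
            // Linearly interpolate position (and smoothly rotate) towards the target prop.
            transform.position = Vector3.Lerp(startPos, target.position, t);
            transform.rotation = Quaternion.Slerp(startRot, target.rotation, t);
            yield return null;   // wait for the next frame
        }
    }
}
```

Kicking it off is then just a case of something like StartCoroutine(cameraMover.MoveTo(arrowTransform, 3f)).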
I set up a timer that would trigger the camera facing the avatar to move to one of the arrows after one second had passed, and I would use the same timer to move the camera to other positions after more time had passed. Unfortunately this is where I hit a snag - there was a bug with the way the camera was trying to raycast to the ground when moving between positions, causing it to slowly move upwards into space forever. I ran out of time and had to head home before I managed to find the cause of the issue, so at this point my experiment had to stop for the time being.

In conclusion, I do believe that if the breaking bug I discovered towards the end of the day can be fixed then there is a great chance that the ability to automatically move cameras will be functional within Fieldscapes. I'd also like to develop a PIVOTE action that could transition the field of view of the camera over time - perhaps then we will see a dolly zoom replicated in Fieldscapes!
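For what it's worth, a dolly zoom is just the camera moving towards or away from the subject while the field of view is adjusted to keep the subject the same apparent size - the relationship is simple enough to sketch (Unity-style again, purely illustrative rather than Fieldscapes code):

```csharp
using UnityEngine;

// Illustrative dolly-zoom helper (not Fieldscapes code): as the camera moves
// towards or away from the subject, adjust the vertical FOV so the subject
// keeps the same apparent height on screen.
public static class DollyZoom
{
    // frustumHeight = the world-space height we want to keep filling the frame.
    public static float FovForDistance(float frustumHeight, float distance)
    {
        return 2f * Mathf.Atan(frustumHeight * 0.5f / distance) * Mathf.Rad2Deg;
    }
}

// Usage inside an Update() or coroutine, assuming 'cam' and 'subject' exist:
//   float height = 2f * initialDistance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
//   cam.fieldOfView = DollyZoom.FovForDistance(
//       height, Vector3.Distance(cam.transform.position, subject.position));
```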

Oops!


1 December 2017

Fieldscapes Exercise Visualiser - Daden U Day




For the Daden U day I decided to create what I call the Fieldscapes Exercise Visualiser. The aim of the visualiser is to create a graphical representation of a Fieldscapes exercise. The idea came about while I was creating a complex Fieldscapes exercise and struggling to quickly recall its structure - the Props, Prop Commands and their relationships with System Nodes. Another reason for creating the visualiser was to have a way of explaining the flow of an exercise from beginning to end. If it is a non-linear exercise, the different paths can also be illustrated.


Before beginning work on the visualiser I had a few ideas of how to illustrate the exercise using symbols and shapes. Whilst trying to create a flow diagram manually using pre-existing tools I discovered a web application called draw.io. Initially I had attempted to use Umlet, a Windows application for drawing UML diagrams, but decided against it because a web application would be more accessible. As a web application I could integrate it into the Fieldscapes Content Manager, reducing the number of tools content creators have to access to make full use of the Fieldscapes ecosystem.


Unfortunately draw.io does not have an API (Application Programming Interface). In my attempt to find the API I discovered that it uses a library called mxGraph. mxGraph is a JavaScript diagramming library that enables interactive graph and charting applications to be created quickly and run natively in most major browsers. mxGraph has a backend which supports the JavaScript application running on a server; this backend can use Java, C# or PHP. For the purpose of the U Day I used the C# backend, as Fieldscapes is written in C#.


I downloaded the source code for mxGraph from the GitHub repository, which contained an example C# .NET website. The solution worked right out of the box without any issues. Fortunately, because of work done for the Fieldscapes Editor, most of the code needed to read Fieldscapes exercises stored in XML was already written, so all I needed to do was write a couple of functions that extracted the data I needed to represent the various elements of an exercise as geometry with connecting lines. Extracting the data was a breeze; however, progress ground to a halt when I tried to draw different shapes to represent the various elements of an exercise, such as props, system nodes etc. After some trial and error and lots of googling I managed to work out how to style a vertex, which is the word used in mxGraph for the shapes drawn on the canvas.
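For reference, the drawing step itself only needs a handful of mxGraph calls - roughly along these lines (a simplified sketch based on the mxGraph .NET example; the prop and node names are made up, and the real code reads them from the exercise XML via the existing editor classes):

```csharp
using com.mxgraph;

// Simplified sketch of the drawing step (the prop and node names are invented;
// the real code extracts them from the exercise XML).
public static class ExerciseDiagram
{
    public static mxGraph Build()
    {
        var graph = new mxGraph();
        object parent = graph.GetDefaultParent();

        graph.Model.BeginUpdate();
        try
        {
            // One styled vertex per exercise element...
            object rockProp  = graph.InsertVertex(parent, null, "Rock (prop)",
                                                  20, 20, 140, 40, "fillColor=#dae8fc");
            object startNode = graph.InsertVertex(parent, null, "Start (system node)",
                                                  20, 120, 140, 40, "shape=ellipse");

            // ...and an edge for each prop command / relationship between elements.
            graph.InsertEdge(parent, null, "on touch", rockProp, startNode);
        }
        finally
        {
            graph.Model.EndUpdate();
        }
        return graph;
    }
}
```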


From what I saw of the documentation and my brief time using it, mxGraph is a powerful library with many affordances for those who want to create diagrams of any nature. It allowed me to create a diagram (see image below) that showed all the different elements of a Fieldscapes exercise, with lines indicating their relationships to each other. The next step is to create some form of structure for the diagram. Development of the Fieldscapes Exercise Visualiser is not a priority at the moment, but it is something I intend to continue working on until it becomes more useful, at which point it will be integrated into the Fieldscapes Content Manager.