9 October 2017

Fieldscapes 1.4 released




Fieldscapes v1.4 is now out and available for free download, and it recently had some good, heavy user testing at Malvern with Year 7s from across the county. The main new features in v1.3 and v1.4 are:


  • The start of support for NPC characters through an NPC widget. You can now add an NPC avatar as a prop and have it TP from location to location and activate specific animations - eg "sit". Later releases will allow the NPC to glide or walk between locations. We are also close to releasing a "chatbot" widget to hook an NPC up to an external chatbot system so that you can really start to create virtual characters.
  • General improvements to the UI when in VR mode - users found that the "virtual iPad" just got in the way, so we're now putting the UI directly into the scene. We'll make steady improvements to the usability of the VR experience in future releases.
  • Added a new Avatar command to change clothes. This only changes between preset outfits but is good if a character needs to change into special kit for a task - for instance our nursing avatars putting on gloves and aprons (see below)
  • Multiple choice questions are now randomised - makes the student think if they repeat the lesson!
  • Increased the inventory limit from 3 to 5 in the editor - so you can bring in props from more inventories for your scene
  • Increased the word count for multiple-choice panels and default popup panel

Various bug fixes were also made and you can see a full list at http://live.fieldscapesvr.com/Home/Download




v1.4 is already available for PC and Mac. The Android version of v1.4 is just undergoing final testing, and we're also still progressing the iOS version of Fieldscapes through the App Store acceptance process.

Remember: We have an ongoing survey for new Fieldscapes features, please take 2-3 minutes to fill it out at: https://www.surveymonkey.co.uk/r/88YL39B

4 October 2017

Fieldscapes at the Malvern Festival of Innovation

Oculus Rift on the Moon

For the second year in a row we ran a set of workshops at the Malvern Festival of Innovation playing host to a succession of groups of 20-30 students (mostly year 7s) from around Worcestershire and giving them a 1 hour introduction to immersive learning. We set up four stands in order to show the range of experiences and lessons that can be created, and the different ways in which they can be delivered. We had:


  • One laptop running the Solar System lesson
  • One laptop running the Apollo Educate lesson
  • One laptop running the Introduction to Carding Mill (which several groups knew from Fieldtrips)
  • A couple of Google Cardboards, one with the Photosphere tour of Carding Mill and one the Apollo Explore lesson
  • Oculus Rift running Apollo Explore
Playing tag on the Moon!


Students were split into groups of about 5 and had 10 minutes on each "stand" - so everyone got to try all the kit.

Looking at the Waterspout Waterfall in a (plastic) Google Cardboard


Student feedback from comments and feedback forms included:
  • "I wish I could spend all afternoon here"
  • "Can I come back later?"
  • "It was really cool"
  • "It was fun to do"
  • "It was memorable"
  • "The realness of it"
  • "I liked the fact that we were not there but we could see everything"
  • "It was like it was real"
  • "It was educating and fun"
Exploring the Moon and Apollo on Google Cardboard


When talking to the teachers we were keen to highlight that:
  • They didn't have to buy any new hardware, like expensive VR headsets, as they can run lessons in 2D/3D mode on existing computers, or in VR on Google Cardboard (one teacher loaded the app onto his phone as we spoke)
  • They could create their own lessons, share them, and use (and customise) those produced by other users
  • With our licensing model they only started paying once they started using it in class, so they could explore and test for free until they were confident in the system and lessons and were ready to use it in class.
Even the teachers got in on the act!


Fieldscapes itself was rock solid on all the devices all day - despite getting a hammering from the kids. What was particularly impressive was when we had the Apollo experiences in multi-user mode so the kids could play tag on the moon - and even using the public wifi at the venue we had no issues with lag and avatar movement was very smooth.



All in all a great day, and it helped remind us all why we've built Fieldscapes!





2 October 2017

Abi's Back!



Having been off on sabbatical for a few years whilst we rebuilt the web sites, Abi, our virtual assistant, is back up. We think she forgot a fair amount whilst she was backpacking around the world, but we'll slowly bring her up to speed, and she's got lots to learn, what with Datascape, Fieldscapes and Trainingscapes. One thing we have given her is the ability to load up web pages - we used to do this for clients but never got around to doing it on our own!

So if you've got a question about Daden or any of our products just talk to Abi.

This new Abi is built on the Chatscript open source platform that we're using for all our chatbot projects now. She's not the smartest bot we've done, but does give you an idea of the basic sort of functionality and interaction style. When she loads she also provides a page which gives you some idea of her scope and some of the "tricks" we're teaching her.

You can read more about our chatbot work on our dedicated website at www.chatbots.co.uk.

Just get in touch if you'd like to know more about chatbot technology and have a project in mind.

For a quick trip down memory lane here are some of Abi's previous incarnations.

The very first ABI - text and static image in Second Life


Abi as an avatar walking around our old Sim in Second Life



Abi moved onto our website with an art-work avatar


Abi before she went on hols as a photoface avatar


28 September 2017

Fieldscapes and Midwifery Training at Bournemouth University


As you may have seen from our Fieldscapes Twitter stream, we've just reached the delivery stage of our first Midwifery Fieldscapes lesson (on Urinalysis) for Bournemouth University's Midwifery training team. We had a number of meetings with the team over the summer and then went away and customised our existing Simple Clinic space into "Daden Community Hospital", adding more generic medical props and bringing in some BU-specific posters. Nash then created the first exercise based on an agreed flowchart/storyboard, and now we're just getting to the end of the iterations with the team, stakeholders and students. The final step for us will be to train the BU team on how to use Fieldscapes to continue to maintain and develop the exercise (and create other exercises), before they then start their evaluation with the current student cohort.


Response so far from all involved has been excellent with comments such as:

  • "I had such a cool day at work recently – I got to play with the first of my VR healthcare education environments using Oculus Rift"
  • " I absolutely love this! A brilliant way to learn" - student feedback
  • "So amazing to see my project becoming a reality – I hope the students love this way of bridging the gap between classroom theory and clinical practice"
  • "That was brilliant loved it! can’t wait to do more. Very informative" - student feedback
  • "Delighted that the Oculus Rift dramatically altered the look and feel of the clinical room, and that the handheld Haptic feedback controls added to the experience"


Being Fieldscapes, the exercise can be experienced on a PC/Mac or Android device, and in VR on Oculus Rift or on Google Cardboard on Android. One of our final tasks is integrating one of the £2-£3 hand controllers for the Cardboard to go along with the c.£15-£20 VR-BOX headsets that BU have (VR doesn't have to be expensive!).



We'll keep you posted as developments and evaluation progress, and we're already talking to BU about other exciting ways to take the training.


You can read more about the BU view on the project on their blog posts at:






26 September 2017

The Three Big Challenges in AI Development: #2 Generalism and #3 Sentience


Following on from the previous post I now want to look at what happens when we try and move out of the "marketing AI" box and towards that big area of "science fiction" AI to the right of the diagram. Moving in this direction we face two major challenges, #2 and #3 of our overall AI challenges:

Challenge #2: Generalism

Probably the biggest "issue" with current "AI" is that it is very narrow. It's a programme to interpret data, or to drive a car, or to play chess, or to act as a carer, or to draw a picture. But almost any human can make a stab at doing all of those, and with a bit of training or learning can get better at them all. This just isn't the case with modern "AI". If we want to get closer to the SF ideal of AI, and also to make it a lot easier to use AI in the world around us, then what we really need is a "general purpose AI" - or what is commonly called Artificial General Intelligence (AGI). There is a lot of research going into AGI at the moment in academic institutions and elsewhere, but it is really early days. A lot of the ground work is just giving the bot what we would call common-sense - just knowing about categories of things, what they do, how to use them - the sort of stuff a kid picks up before they leave kindergarten. In fact one of the strategies being adopted is to try and build a virtual toddler and get it to learn in the same way that a human toddler does.

Whilst the effort involved in creating an AGI will be immense, the rewards are likely to be even greater - as we'd be able to just ask or tell the AI to do something and it would be able to do it, or ask us how to do it, or go away and ask another bot or research it for itself. In some ways we would cease to need to programme the bot.

Just as a trivial example, but one that is close to our heart. If we're building a training simulation and want to have a bunch of non-player characters filling roles then we have to script each one, or create behaviour models and implement agents to then operate within those behaviours. It takes a lot of effort. With an AGI we'd be able to treat those bots as though they were actors (well extras) - we'd just give them the situation and their motivation, give some general direction, shout "action" and then leave them to get on with it.

Note also that moving to an AGI does not imply ANY linkage to the level of humanness. It is probably perfectly possible to have a fully fledged AGI that only has the bare minimum of humanness in order to communicate with us - think R2D2.

Challenge #3: Sentience

If creating an AGI is probably an order of magnitude greater problem than creating "humanness", then creating "sentience" is probably an order of magnitude greater again. Although there are possibly two extremes of view here:


  • At one end many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie - there'd be "nobody home" - no matter how much it appears to show intelligence, emotion or empathy.
  • At the other, some believe that if we create a very human AGI then sentience might almost come with it. In fact, just thinking back to the "extras" example above, our anthropomorphising instinct almost immediately starts to ask "well what if the extras don't want to do that..."
We also need to be clear about what we (well I) mean when I talk about sentience. This is more than intelligence, and is certainly beyond what almost all (all?) animals show. So it's more than emotion and empathy and intelligence. It's about self-awareness, self-actualisation and having a consistent internal narrative, internal dialogue and self-reflection. It's about being able to think about "me" and who I am, and what I'm doing and why, and then taking actions on that basis - self-determination.

Whilst I'm sure we could code a bot that "appears" to do much of that, would that mean we have created sentience - or does sentience have to be an emergent behaviour? We have a tough time pinning down what all this means in humans, so trying to understand what it might mean (and code it, or create the conditions for the AGI to evolve it) is never going to be easy.




So this completes our chart. To move from the "marketing" AI space of automated intelligence to the science-fiction promise of "true" AI, we face three big challenges, each probably an order of magnitude greater than the last:


  • Creating something that presents as 100% human across all the domains of "humanness"
  • Creating an artificial general intelligence that can apply itself to almost any task
  • Creating, or evolving, something that can truly think for itself, have a sense of self, and which shows self-determination and self-actualisation
It'll be an interesting journey!



25 September 2017

The Three Big Challenges in AI Development: #1 Humanness





In a previous blog post we introduced our AI Landscape diagram. In this post I want to look at how it helps us to identify the main challenges in the future development of AI.

On the diagram we’ve already identified how that stuff which is currently called “AI” by marketeers, media and others is generally better thought of as being automated intelligence or “narrow” AI. It is using AI techniques, such as natural language or machine learning, and applying them to a specific problem, but without actually building the sort of full, integrated, AI that we have come to expect from Science Fiction.

To grow the space currently occupied by today’s “AI” we can grow in two directions – moving up the chart to make the entities seem more human, or moving across the chart to make the entities more intelligent.

MORE HUMAN

The “more human”  route represents Challenge 1. It is probably the easiest of the challenges and the chart we showed previously (and repeated below) shows an estimate of the relative maturity of some of the more important technologies involved.



There are two interesting effects related to work in this direction:


  • Uncanny Valley - we're quite happy to deal with cartoons, and we're quite happy to deal with something that seems completely real, but there's a middle ground that we find very spooky. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and then finally improves again once you cannot tell them from real. So whilst in some ways we've made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we're now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we've barely started on, so they may take a long time to even reach the valley.
  • Anthropomorphism - People rapidly attribute feelings and intent to even the most inanimate object (toaster, printer). So in some ways a computer needs to do very little in the human direction for us to think of it as far more human than it really is. In some ways this can almost help us cross the valley by letting human interpretation assume the system has crossed the valley even though it's still a lot more basic than is thought.
The upshot is that the next few years will certainly see systems that seem far more human than any around today, even though their fundamental tech is nowhere near being a proper "AI". The question is whether a system could pass the so-called "Gold" Turing Test (a Skype-like conversation with an avatar) without also showing significant progress along the intelligence dimension. Achieving that is probably more about the capability of the chat interface, as it seems that CGI and games will crack the visual and audio elements (although doing them in real-time is still a challenge) - so it really remains the standard Turing challenge. An emotional/empathic version of the Turing Test will probably prove a far harder nut to crack.

We'll discuss the Intelligence dimension in Part 2.





18 September 2017

Automated Intelligence vs Automated Muscle

As previously posted I've long had an issue with the "misuse" of the term AI. I usually replace "AI" with "algorithms inside" and the marketing statement I'm reading still makes complete sense!

Jerry Kaplan speaking on the Today programme last week was using the term "automation" to refer to what a lot of current AI is doing - and actually that fits just as well, and also highlights that this is something more than just simple algorithms, even if it's a long way short of science-fiction AIs and Artificial General Intelligence.

So now I'm happy to go with "automated intelligence" as what modern AI does - it does automate some aspects of a very narrow "intelligence" - and the use of the word automated does suggest that there are some limits to the abilities (which "artificial" doesn't).

And seeing as I was at an AI and Robotics conference last week that also got me to thinking that robotics is in many ways just "automated muscle", giving us a nice dyad with advanced software manifesting itself as automated intelligence (AI), and advanced hardware manifesting as automated muscle (robots).


15 September 2017

AI & Robotics: The Main Event 2017


David spoke at the AI & Robotics: The Main Event 2017 conference yesterday. The main emphasis was far more on AI (well machine learning) rather than robotics. David talked delegates through the AI Landscape model before talking about the use of chatbots/virtual characters/AI within the organisation in roles such as teaching, training, simulation, mentoring and knowledge capture and access.

Other highlights from the day included:


  • Prof. Noel Sharkey talking about responsible robotics and his survey on robots and sex
  • Stephen Metcalfe MP and co-chair of the All Party Parliamentary Group on AI talking about the APPG and Government role
  • Prof. Philip Bond talking about the Government's Council for Science and Technology and its role in promoting investment in AI (apparently there's a lot of it coming!)
  • Pete Trainor from BIMA talking about using chatbots to help avoid male suicides by providing SU, a reflective companion - https://www.bima.co.uk/en/Article/05-May-2017/Meet-SU
  • Chris Ezekial from Creative Virtual talking about their success with virtual customer service agents (Chris and I were around for the first chatbot boom!)
  • Intelligent Assistants showing the 2nd highest growth in interest from major brands in terms of engagement technologies
  • Enterprise chat market worth $1.9bn
  • 85% of enterprise customer engagement to be without human contact by 2020
  • 30% increase in virtual agent use (forecast or historic, timescale - not clear!)
  • 69% of consumers reported that they would choose to interact with a chatbot before a human because they wanted instant answers!
There was also a nice 2x2 matrix (below) looking at new/existing jobs and human/machine workers. 



This chimed nicely with a slide by another presenter which showed how, as automation comes in, workers initially resist, then accept, and then, as it takes their job over, say the job wasn't worth doing and that they've now found a better one - til that starts to be automated too. In a coffee chat we were wondering where all the people from the typing pools went when PCs came in. Our guess is that they went (notionally) to call centres - and guess where automation is now striking! Where will they go next?

14 September 2017

Daden at Number 10


Daden MD David Burden was part of a delegation of Midlands-based business owners and entrepreneurs to 10 Downing Street yesterday to meet with one of the PM's advisors on business policy. The group represented a wide range of businesses from watchmakers to construction industry organisations, and social enterprises and charity interests were also well represented. Whilst the meeting itself was quite short, it is hopefully the start of a longer engagement with Government for both this group and Daden (we also submitted evidence to the House of Lords' Select Committee on AI last week and are exploring some other avenues of engagement).




6 September 2017

An AI Landscape


In the old days there used to be a saying that "what we call ‘artificial intelligence’ is basically what computers can’t do yet" - so as things that were thought to take intelligence - like playing chess - were mastered by a computer, they ceased to be things that needed "real" intelligence. Today it's almost as though the situation has reversed, and to read most press releases and media stories it now appears that "what we call 'artificial intelligence'" is basically anything that a computer can do today.

So in order to get a better handle on what we (should) mean by "artificial intelligence" we've come up with the landscape chart above. Almost any computer programme can be plotted on it - and so can the "space" that we might reasonably call "AI" - so we should be able to get a better sense of whether something has a right to be called AI or not.



The bottom axis shows complexity (which we'll also take as being synonymous with sophistication). We've identified 4 main points on this axis - although it is undoubtedly a continuum, boundaries will be blurred and even overlapping, and we are probably also mixing categories too:


  • Simple Algorithms - 99% of most computer programmes, even complex ERP and CRM systems; they are highly linear and predictable
  • Complex Algorithms - things like (but not limited to) machine learning, deep learning, neural networks, Bayesian networks, fuzzy logic etc, where the complexity of the inner code starts to go beyond simple linear relationships. Lots of what is currently called AI is here - but it really falls short of a more traditional definition of an AI.
  • Artificial General Intelligence - the holy grail of AI developers, a system which can apply itself, using common sense and general knowledge, to a wide range of problems and solve them to a similar level as a human
  • Artificial Sentience - beloved of science-fiction, code which "thinks" and is "self-aware"



The vertical axis is about "presentation" - does the programme present itself as human (or indeed another animal or being) or as a computer? Our ERP or CRM system typically presents as a computer GUI - but if we add a chatbot in front of it, it instantly presents as more human. The position on the axis is influenced by the programme's capability in a number of dimensions of "humanness":

  • Text-to-speech: Does it sound human? TTS has plateaued in recent years, good but certainly recognisably synthetic
  • Speech Recognition: Can it recognise human speech without training? Systems like Siri have really driven this on recently.
  • Natural Language Generation: This tends to be template driven or parroting back existing sentences. Lots more work needed, especially on argumentation and story-telling
  • Avatar Body Realism: CGI work in movies has made this pretty much 100% except for skin tones
  • Avatar Face Realism: All skin and hair so a lot harder and very much stuck in uncanny valley for any real-time rendering
  • Avatar Body Animation: For gestures, movement etc. Again movies and decent motion-capture have pretty much solved this.
  • Avatar Expression (& lip sync): Static faces can look pretty good, but try to get them to smile or grimace or just sync to speech and all realism is lost
  • Emotion: It's debatable whether this should be on the complexity/sophistication axis (and/or is an inherent part of an AGI or artificial sentient), but it's a very human characteristic and a programme needs to crack it to be taken as really human. Games are probably where we're seeing the most work here.
  • Empathy: Having cracked emotion the programme then needs to be able to "read" the person it is interacting with and respond accordingly - lots of work here but face-cams, EEG and other technology is beginning to give a handle on it.
The chart gives a very rough assessment of the maturity of each.

There are probably some alternative vertical dimensions we could use other than "presentation" to give us another interesting view of the landscape - Sheridan's autonomy model could be a useful one, which we'll cover in a later post.

So back on the chart we can now plot where current "AI" technologies and systems might sit:


The yellow area shows the space that we typically see marketeers and others use the term AI to refer to!

But compare this to the more popular, science-fiction derived, view of what is an "AI".


Big difference - and zero overlap!

Putting them both on the same chart makes this clear.


So hopefully a chart like this will give you, as it has us, a better understanding of what the potential AI landscape is, and where the current systems, and the systems of our SF culture, sit. Interestingly it also raises a question about the blank spaces and the gaps, and in particular how we move from today's very "disappointing" marketing versions of AI to the ones we're promised in SF, from "Humans" to Battlestar Galactica!

4 September 2017

Hurricane Harvey SOS Data


Seeing as we're also doing a project at the moment about evacuation from major disasters, we were interested in seeing what data we could find around Hurricane Harvey. It so happens that volunteers have been co-ordinating efforts at @HarveyRescue and have been collating the SOS reports from various sources, from which the media has been building maps such as those on the New York Times.

We were able to download the raw data from the @HarveyRescue site and bring it pretty quickly into Datascape. Unfortunately the first ~5,000 of the ~11,000 records all showed the same date and time, so we couldn't use them for a space-time plot, but the remaining records were OK.

Our overview visualisation is shown above. You can launch it in WebGL in 3D in your own browser (and in mobile VR with Google Cardboard on your smartphone) by going to:

http://live.datascapevr.com/viewer/?wid=b051b24b-763e-421e-9c84-cbb26a976ff5

On the visualisation:

  • Height is time, newest at the top
  • Colour is:
    • Cyan: Normal SOS
    • Black: involves visually impaired people
    • Magenta: involves children
    • Green: involves elderly
  • Shape is priority:
    • Sphere = normal
    • Tetrahedron = semi-urgent
    • Cube = urgent/emergency
  • Size is # of people affected, roughly logarithmic (a rough sketch of this encoding follows below)
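
For anyone who likes to see this sort of encoding written down, here's a purely illustrative C# sketch of the mapping above. It is not Datascape's actual API or configuration - the names are made up for the example - it just makes the height/colour/shape/size channels concrete:

    // Purely illustrative sketch of the Harvey SOS visual encoding - not Datascape code.
    using System;

    enum Priority { Normal, SemiUrgent, Urgent }

    class SosRecord
    {
        public DateTime Timestamp;      // mapped to height (newest at the top)
        public bool VisuallyImpaired;   // black
        public bool Children;           // magenta
        public bool Elderly;            // green
        public Priority Priority;       // sphere / tetrahedron / cube
        public int PeopleAffected;      // size, roughly logarithmic
    }

    static class SosEncoding
    {
        public static (float height, string colour, string shape, float size) Encode(SosRecord r, DateTime start)
        {
            var height = (float)(r.Timestamp - start).TotalHours;                 // time -> height
            var colour = r.VisuallyImpaired ? "black"
                       : r.Children         ? "magenta"
                       : r.Elderly          ? "green"
                       : "cyan";                                                  // normal SOS
            var shape = r.Priority == Priority.Urgent     ? "cube"
                      : r.Priority == Priority.SemiUrgent ? "tetrahedron"
                      : "sphere";
            var size = (float)Math.Log10(r.PeopleAffected + 1);                   // roughly logarithmic
            return (height, colour, shape, size);
        }
    }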


You can fly all around and through the data, and hover on a point to see the summary info. We've removed the more detailed information for privacy reasons.

It's a pity that we haven't got the early events data, but you can still see the time effects in a variety of places:

  • The whole Port Arthur area kicks off way later than downtown Houston
  • There is another time limited cluster around Kingwood, peaking around 9/10am on 29th
  • And another lesser one around Baytown at 9/12am on 29th
  • There is some evidence of an over-night lull in reporting, about 2am-6am
The Port Arthur cluster
We're now looking at the Relief stage data and will hopefully get something up on that later in the week.

Don't forget to try the visualisation.


30 August 2017

Gartner Hype Cycle 2017 - A Critique

Every year the Gartner Group (well known tech analysts) publish their "hype cycle" - showing whereabouts emergent technologies are on the journey from first conception to productive tool. We've watched Virtual Worlds (and then Virtual Reality) work their way along the curve over the last decade, but this year's chart has a number of interesting features which we thought might be worth discussing. We focus here only on the areas of keenest interest to us at Daden, namely AI/chatbots and 3D immersive environments.

First off, it's interesting to see that they have VR now pulling well out of the Trough of Disillusionment, and only 2-5 years to mainstream adoption. This seems reasonable, although a more detailed analysis (which we may do later) would probably put VR in different sectors at different points on the cycle - so whilst this position seems OK for gaming and training I'd be tempted to put it still up on the Peak of Inflated Expectations when it comes to mass media entertainment or personal communications.

As a side-line it's interesting to look at these two Gartner charts from 2012 and 2013. Spot the difference?

2012 Hype Cycle

Clue - look at the Trough of Disillusionment....

2013 Hype Cycle

In 2012 Virtual Worlds (Second Life and its ilk) were at the bottom of the Trough; in 2013 (as the Oculus Rift hype started) they were replaced by Virtual Reality! Virtual Worlds (and SL) are still around - although often rechristened Social Virtual Realities - and we'd guess they are still lingering in the Trough as their potential is still a long way from being realised.

One tech that was in 2016 but is missing from 2017 is Virtual Personal Assistants. Now if we take this to mean Siri, Alexa and co that seems reasonable - I have Siri in my pocket and Alexa on my desk as I write. But they are a far cry from the virtual PAs that we were promised in the mobile phone videos of the 90s and 00s. In fact if we compare the 2012/2013 and 2017 charts we can see that "Virtual Assistants" 4-5 years ago were just over the Peak, but now in 2017 "Virtual Assistants" is actually just approaching the Peak! So Gartner appear to have split the old Virtual Assistant into a simpler, now mainstream, Virtual Personal Assistant, and a new Virtual Assistant representing those still hard to do elements of the 1990s vision.

Back to 2017 - the new entrants on the Hype Cycle of interest since 2016 are Artificial General Intelligence and Deep Learning. Deep Learning is really just a development of Machine Learning, and it's interesting that they have them both clustered together at the peak. In fact I'd have thought that Machine Learning is probably approaching the plateau, as it appears to crop up everywhere and with good results, and Deep Learning is not far behind. Interestingly neither appeared on the 2012/13 charts!

Artificial General Intelligence is far more interesting. It's been mooted for years, decades even, and progress is certainly slow. We'll be writing far more about it in coming months, but it is a lot closer to what most people call "AI" than the things currently being touted as "AI" (which are typically just machine learning algorithms). As its name suggests, it's an AI which can apply general intelligence (aka common sense) to a wide variety of problems and situations. Gartner have it about right on the chart as it's still a back-room focus and hasn't yet hit the mainstream media in order to be hyped - and still seems decades away from achievement.

There are some other technologies of interest on that initial slope too.

It's interesting that Speech Recognition has now gone off the chart as a mainstream technology - whilst it may not be 100% yet, it's certainly come on in leaps and bounds over the last 4-5 years. But what is on the initial slope is Conversational User Interfaces (aka chatbots) - divorcing what was seen as the technical challenge of speech recognition from the softer but harder challenge of creating a Turing-capable chatbot interface. I'd have thought that the Peak for CUI was probably some years ago (indeed Gartner had Natural Language Query Answering near the Peak in 2013) and that we've spent the last few years in the trough, but intention-based CUIs as we're seeing with Alexa and Messenger are now coming of age, and even free-text CUIs driven by technology such as Chatscript and even AIML are now beginning to reach Turing-capable levels (see our recent research on a covert Turing Test where we achieved a 100% pass rate). So I'd put CUI as beginning to climb the slope up out of the Trough.

By the way, we got excited when we saw "Digital Twin" on the chart, as it's a subject that we have a keen interest and some involvement in. But reading their definition they are talking about Internet of Things "digital twins" - where a piece of physical equipment has a virtual simulation of itself which can be used to predict faults and ease maintenance and fault finding. Our interest is more in digital twins of real people - cyber-twins as they have been called - perhaps we'll see those on later charts!

The final technology of interest is Brain Computer Interfaces. Putting them only just behind Conversational Interfaces reinforces the point that CUI should be a lot farther through the cycle! Useful brain interfaces (I'm not talking Neurosky-type "brain-wave" headsets here - Gartner may differ!) still seem to be decades away, so sit about right on the chart. In fact they've moved a bit forward since 2013, but are still at 10+ years to mainstream - can't argue with that.

So all this is pretty subjective and personal, and despite its flaws the hype cycle is a useful model. As mentioned though the same technology (eg VR) may have different cycles in different industries, and we also feel that each point on the curve is a bit of a fractal - so composed of smaller versions of the cycle as each step forward gets heralded as a great leap, but then falls back as people actually get their hands on it!

We look forward to reviewing the 2018 chart!






25 August 2017

Project Sansar - First Impressions


I've been signed up to the Sansar Closed Beta for months, but other projects meant I never had the time to go play. Now it's in Open Beta (and so we can talk about it) I thought it was about time I checked it out.

What Sansar doesn't offer (in comparison to SL) is a single shared world - this is far more a "build your space and let people visit" model. It also doesn't offer in-world building (just placement of imported or bought objects), or in-world scripting (and scripting is in C# and needs to be re-imported every time you make a change, so it looks like a very long development cycle!). What it does offer (as did SL) is multi-user (well, at least multi-avatar) and VR support out of the box. Avatar choices are limited but look OK, with some nice facial customisations, but only about 8 outfits (and no colour options!).

VR and using the teleport movement - see light beam

The navigation model is horrendous (in my view) - the camera usually giving a sideways view til you'd been walking for ages. You couldn't use cursor keys to rotate your view on the spot - it's very much built for gamers with keyboard in one hand, first-person view, and mouse in the other. I couldn't find a run or fly control, so walking around took ages. There is a nearby-TP option where you can point to a place and jump there - but with a very short range.



The actual spaces looked pretty good - but they are just imports of 3D models so no reason not to. But interactivity was non-existent in the ones I saw - probably due to the complexity of the scripting. Almost all of them also seemed very dark - they give you lights for your scenes but it seems like many people aren't using them well.

The one location that was stunning (especially in VR) was the Apollo Museum - with a really nicely done earth-moon trajectory and little CM/LEM models all along it and audio to show you what was going on - a superb VR demo.


Having done a quick tour I decided to try building, so I chose one of the ~8 base locations. Rather than buy from the store I decided to upload some FBX models - which was pretty smooth except for the fact that it appears they only get textured if the textures are PNG - and even then the ones I tried ended up all candy-striped!


One of the biggest issues for me though was that you have no avatar when building - so you lose all sense of scale. No issue if you're a 3D artist, but as an SL renegade I can never get on with building without myself as a reference. Once you've done the build you save and then "build" - which can take a minute or so - before you can play (another minute or so), so again a slow iterative build process (and the "professional" builds were taking ~5 mins each to download).

Finally I wanted to try scripting. Before I started this morning (as I tweeted) I thought I might be able to get a Sansar script talking to the bot I was working on, or even one of our PIVOTE APIs. No chance! Sansar scripts are pure C#. It seems at the moment you must edit outside (Notepad or Visual Studio), then import, attach, then build, then run - it would take ages to do anything. The C# calls to interact with the environment also look non-trivial (subscribing to changes etc), and only a small subset of Mono/C# functions are supported - not the range that Unity has - so no web calls! There's no way that people will have an easy transition from LSL to Sansar C# - it's a whole extra level up.



So overall - massively underwhelmed. High Fidelity certainly looks far more interesting from a technical standpoint and is closer to an SL#2 - but even that doesn't have the single-world thing. AltSpaceVR (if you added object import/placement) is far closer to what I thought Sansar would be - and the WebGL enclosure idea was/is a superb way to create interactive 3D/VR content with minimal effort. The whole Sansar experience felt like working through treacle, whether exploring or building - although in first person in VR at least the exploring was quick.

What it did make me appreciate is what we've done with Fieldscapes. Using that has never felt slow. It's very quick to lay out, add interactivity, test and explore - things just flow. And if people want to spend the time then there is no reason why you shouldn't have the same level of eye-candy as the Sansar spaces. But there is just no way that I can see Sansar being a training/education tool - you'd use native Unity or Fieldscapes or something similar, with more power or greater ease of use, rather than Sansar, which appears to cripple both. And the spaces don't have the immediacy of the AltSpaceVR ones, or the ease of build of the SL ones, so I don't see more casual users taking to it in great numbers. Perhaps if I had loads of time, was a coder/3D artist and wanted to build some sort of fan-space it might be a place to do it, but somehow I doubt even that.


14 August 2017

James - Work Experience

James is a Year 10 student who worked at Daden for one week as part of his course. The following is an account of his time at Daden, as written by James.

After a team meeting on Monday, I set to work getting to grips with Fieldscapes, using the tutorials to create a quiz that takes the user through the world answering various questions, which turned out to be useful later on (my geography knowledge was tested in a quiz mid-week, so knowing that Ulaanbaatar was the capital of Mongolia from my own project was very helpful!)

 I was then set the task of importing files from Paint 3D into Fieldscapes, which provoked research into the numerous 3D file types available, their uses, as well as how to model a 3D object.

Some default models in Paint3D in 3D mode


Finally, I was then able to export Paint 3D files as an FBX into Unity, then create an asset bundle to be imported into Fieldscapes; although we encountered problems with offsets and colours along the way, this also proved to be a great learning experience. The asset bundle I made featured artistic marvels such as a coffee cup with 3D text and a rainbow.

Paint3D models imported into Fieldscapes


In addition, I was present at a meeting that showed me the many uses of virtual reality and 3D, as well as how business between two companies is carried out.

Then on Wednesday, I made an inventory of all the computers in the office, prompting discussion about aspect ratios, computer specs and anti-virus software, as well as having to use the computers' BIOSes and learning about the financial side of things with discussions about the cost of the computers.

Next on Thursday I was involved in testing, giving me insight into how it is carried out, along with the gratifying feeling of discovering a funny bug, in this case props being placed in the sky and avatars floating into the air, seemingly ascending to heaven.

I then participated in the testing of a virtual mentor, which again showed the need for and the process of testing, and both the positives and negatives of using VR and 3D in the classroom. Next I tried programming a chatbot, adding an input box to it, which greatly improved my JavaScript, as well as allowing me to practise HTML and CSS in a practical environment, not just a classroom. Throughout the week I also had a go at C# programming, which I learned from scratch.

Finally on Friday, I continued with programming a chatbot, improving and optimising the existing code. I used JavaScript to present contacts, as well as CSS to improve the appearance of the bot in general, adding an input area, an enter button and a scroll bar for when the chat overflows.


Delving into SpatialOS

SpatialOS is a cloud computing platform developed by the UK-based Improbable that can be used for running large-scale simulated worlds, such as a massively multiplayer game (MMO), a virtual city, or a model of the brain. It is a technology that I first heard of in early 2016 and it has been on my radar since, so I decided to look into it on the most recent DadenU day by working through some of the tutorials to see what it was all about.

There are a few core concepts to SpatialOS that are essential to understanding how it works. The two main concepts are Entities and Workers.

Each object that is simulated in a SpatialOS world is represented by what is called an Entity. This could be a tree, a rock, a nerve cell, or a pirate ship. Each of these entities can be made up of components, which define certain persistent properties, events, and commands. An example would be a player character entity that defined a "health component" - this would have a value property, an event for what happened when it reached 0, and perhaps some commands that can modify the property in specific ways.

My ship in the watery world
All of the processing performed in the simulated world, such as visualising the world or modifying component properties, is performed by Workers. These are services that can be scaled by SpatialOS depending on resource demands. There are both server-side workers, handled by SpatialOS, and client-side workers - the application that a user will interact with.

You are able to develop, debug, and test applications developed on SpatialOS on your local machine, allowing for small scale messing around to be done fairly painlessly. My plan was to work through the tutorials in the documentation so that I could get a feel of how to use the technology. The first lesson in the Pirates Tutorial series focuses on setting up the machine to run a local instance of SpatialOS and the tutorial project itself.

A command line package manager called chocolatey is used to install the SpatialOS command line interface (CLI), and the install location is stored in an environment variable. The source code for the tutorial includes a Unity Worker and a Unity Client. Included in the project is a scene with an empty ocean environment. All other objects, such as the islands and the fish, are generated by a worker when the project is launched, and the player ship is generated by a client when it connects. The CLI was used to build the worker and launch SpatialOS locally. With that, the 'server-side' of the game was running and all that was left was for a client to connect to it.

There are several ways that a client can be run, but the most useful for local development using Unity is to run through the editor interface. Pressing play will launch a local client that allows you to sail around an ocean as a ship. 


Observing pirate ships and fish using the Inspector tool
SpatialOS has an interesting web-based tool called the Inspector that lets you see all of the entities and workers in the running simulation. It displays the areas of the game world that each individual worker and client are currently processing - you even have the ability to remove a worker from the simulation; however, SpatialOS will start a new worker instance if it feels that it needs one - and as only one is required in the tutorial, a new one was launched whenever I deleted the existing worker.

All of the entity types listed can be colour coded so that they are easier to follow when observed in the 2D top down view. There is a 3D option but I couldn't seem to get it to work on my browser. All of the components that make up the entity can be viewed as well, which leads me to believe that the inspector could be a fairly useful monitoring tool during development. The inspector is available on deployments on the cloud as well as locally. 

Other lessons in the tutorial take you through the basics step by step. The world was very empty to begin with and was in dire need of some more entities, so the second lesson takes you through the process of creating one from scratch. This is a two-step process - the first step is to write an entity template, and the second is to use the template to spawn the entity within the game world.

Building the pirate ship entity template
The tutorial project uses a factory method pattern to generate the templates for each entity, so to create our AI pirate ships all we needed to do was create our own factory method for it. The entity object is generated using the builder pattern, and there are some components that are required in every entity generated - a position and a metadata component. The pattern also requires that you set the persistence of the entity, and that you set the permissions on the access control list (ACL) before any additional components are added.
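
As a rough illustration, a pirate-ship factory method built this way looks something like the sketch below. The builder-style names (EntityBuilder, CommonRequirementSets, the generated ShipControls type) are my approximation of the 2017-era SpatialOS Unity SDK and will differ between SDK versions, so treat it as pseudocode rather than something to paste in:

    // Sketch of an entity template factory method for the AI pirate ship.
    // EntityBuilder, CommonRequirementSets and ShipControls.Data are assumed names
    // based on the Pirates tutorial SDK of the time; exact signatures may differ.
    // (Improbable SDK using directives omitted.)
    public static class EntityTemplateFactory
    {
        public static Entity CreatePirateShipTemplate(Coordinates spawnPosition)
        {
            return EntityBuilder.Begin()
                // Position and Metadata are the two components required on every entity
                .AddPositionComponent(spawnPosition, CommonRequirementSets.PhysicsOnly)
                .AddMetadataComponent("PirateShip")
                // Persist the ship so it can live in snapshots
                .SetPersistence(true)
                // Read access for both server-side workers and clients
                .SetReadAcl(CommonRequirementSets.PhysicsOrVisual)
                // Custom component, writable only by the server-side UnityWorker
                .AddComponent(new ShipControls.Data(targetSpeed: 0, targetSteering: 0),
                              CommonRequirementSets.PhysicsOnly)
                .Build();
        }
    }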

Spawning of the entities in the tutorial occurs at two distinct stages - at runtime when a player connects, and at the beginning when the world is created in what is known as a snapshot. A snapshot is a representation of the state of the game world at a specific point in time, and when you launch the project to SpatialOS you can define a snapshot to load from.

Every game world requires an initial load state and this is what a snapshot provides. In the case of the tutorial, the player ship template is used to spawn a ship when a user connects, and the pirate ship template is used to spawn ships in the snapshot we defined as the default. To define a snapshot we created a custom Unity menu item to populate a dictionary with a list of all of the entities we want to spawn, including a whole bunch of our new pirate ships. Once the worker is rebuilt the client will now be able to see a whole host of static pirate ships within the ocean environment.
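
The snapshot-generation step then looks roughly like this - a Unity editor menu item populates the dictionary and hands it to a save helper. MenuItem is standard UnityEditor; SaveSnapshot and the Improbable types (Entity, EntityId, Coordinates) stand in for what the SDK and tutorial project actually provide:

    // Illustrative sketch of generating a default snapshot from a Unity editor menu item.
    using System.Collections.Generic;
    using UnityEditor;
    using UnityEngine;

    public static class SnapshotMenu
    {
        [MenuItem("Improbable/Generate Default Snapshot")]
        private static void GenerateDefaultSnapshot()
        {
            var entities = new Dictionary<EntityId, Entity>();
            var nextId = 1;

            // A whole bunch of static AI pirate ships scattered around the ocean
            for (var i = 0; i < 50; i++)
            {
                var position = new Coordinates(Random.Range(-100f, 100f), 0, Random.Range(-100f, 100f));
                entities.Add(new EntityId(nextId++), EntityTemplateFactory.CreatePirateShipTemplate(position));
            }

            SaveSnapshot(entities);   // stand-in for the tutorial helper that writes the default snapshot file
        }

        private static void SaveSnapshot(IDictionary<EntityId, Entity> entities) { /* provided by the tutorial project */ }
    }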

Generating a snapshot that includes pirate ships
Getting the pirate ships to move in the environment was next. The tutorial focused on the manipulation of a component's properties by creating a script that will write values to the ShipControls component of the pirate ship entity.

Access restrictions defined when attaching a component to an entity template determine what kind of worker can read from or write to the component. We can use a custom attribute to determine what worker type we want the script to be available for - i.e. the pirate ship is an NPC, so we only want it to be controlled on the server side, and so we lock the script using the attribute so it only appears on UnityWorker instances.

Only one worker, or client, can have write access to a component at any given time, though more than one worker can read from the component. We add a writer component to the script we have created and ensure that it has the [Require] attribute - this means that the script will only be enabled if the current worker has write access to the component.  

To write to a component you use a send method that takes an update structure, which should contain any updates to the component values that need to happen - in the case of the pirate ship we want to update the speed and the steering values of the ShipControls component to get it to move. The worker was rebuilt again, the local client relaunched, and we had moving pirate ships! There was no decision making so they were rather aimless, but at least they were moving now.
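
Put together, my understanding of the server-side movement script is roughly the sketch below: it is locked to the UnityWorker, requires write access to ShipControls, and sends a component update with new speed and steering values each physics tick. The attribute and writer names follow the SDK as I remember it and should be read as approximate:

    // Sketch of a UnityWorker-only script that writes to the ShipControls component.
    // [WorkerType], [Require], ShipControls.Writer and the Update/Send API reflect the
    // 2017-era Unity SDK as I recall it and may not match current releases exactly.
    using UnityEngine;

    [WorkerType(WorkerPlatform.UnityWorker)]            // only enabled on the server-side worker
    public class PirateShipMovement : MonoBehaviour
    {
        [Require] private ShipControls.Writer shipControlsWriter;   // grants (and requires) write access

        private void FixedUpdate()
        {
            // No decision making yet - just drift forward with a gentle random wander
            shipControlsWriter.Send(new ShipControls.Update()
                .SetTargetSpeed(0.5f)
                .SetTargetSteering(Random.Range(-0.2f, 0.2f)));
        }
    }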

Event data flow
Another important aspect of components is the ability to fire off events. These are transient and are usually used for one-off or infrequent changes, as there is less bandwidth overhead than modifying properties, which are persistent. To learn about events we were tasked with converting locally spawned cannonballs to be visible on other clients.



Adding events to a component first requires knowledge of how a component is defined in the first place. SpatialOS uses a schema to generate code that workers can then use to read and write to components. These are written in what is called schemalang, which is SpatialOS' own proprietary language. An event is defined in this language using the structure: event type name. For example we defined an event that will be fired when a cannon is fired on the left of the ship like so: event FireLeft fire_left. 

Using our new FireLeft and FireRight events instead
of locally firing cannons
Events are defined within the component, and FireLeft is defined as an empty type outwith the component definition in the following fashion: type FireLeft {}. The custom types are capable of storing data, but that wasn't required for the purposes of the tutorial.

The code needs to be generated once the schema for the component has been written so that we can access the component from within our Unity project. The CLI can generate code in multiple languages (currently C#, C++ and Java). To be able to fire events we need access to the component writer so that when we detect that the user has pressed the "fire cannonballs" key we can fire an event by using the component update structure, like we have done when moving the pirate ships.

The script that contains callbacks that fire the cannons
when an event is received
Firing an event is only half of the story as nothing will happen if nothing is reacting to the event being fired. In the case of Unity it's as easy as creating a new MonoBehaviour script and giving it a component reader as well as a couple of methods that will contain the code we want to run when we receive an event. These methods must be registered as callbacks to the event through the component reader in the MonoBehaviour script's OnEnable method, and must be removed as a callback in the OnDisable method. This is mostly to prevent unexpected behaviour and stop the script from receiving event information when it is disabled.
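
A minimal receiving script, with the same caveat that the generated member names are my best recollection rather than gospel, looks something like this - a reader on the cannons component registers the callbacks in OnEnable and removes them in OnDisable:

    // Sketch of reacting to FireLeft/FireRight events on other workers and clients.
    // Cannons.Reader and the *Triggered callback lists are assumed generated names.
    using UnityEngine;

    public class CannonballVisualizer : MonoBehaviour
    {
        [Require] private Cannons.Reader cannonsReader;   // read access is enough to receive events

        private void OnEnable()
        {
            cannonsReader.FireLeftTriggered.Add(OnFireLeft);
            cannonsReader.FireRightTriggered.Add(OnFireRight);
        }

        private void OnDisable()
        {
            // Deregister so the script stops receiving events while disabled
            cannonsReader.FireLeftTriggered.Remove(OnFireLeft);
            cannonsReader.FireRightTriggered.Remove(OnFireRight);
        }

        private void OnFireLeft(FireLeft fireLeft)    { /* spawn the cannonball effect on the left */ }
        private void OnFireRight(FireRight fireRight) { /* spawn the cannonball effect on the right */ }
    }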

Next was a short tutorial that discussed how components are accessed by workers and clients. One of the key terms to understand is checked out. Workers don't know about the entire simulated environment in SpatialOS and instead only know about an allocated sub-set of the environment, called a checkout area. They have read access to, and can receive updates from, any entity within this designated area. I mentioned earlier that more than one worker can have read access to a component, and this is because the checkout areas of a worker can overlap with that of another worker; meaning that an entity may be within the area of multiple workers. This is also the reason that only one worker can have write access to a component at any given time.

The ShipControls component's full schema
The final tutorial that I managed to complete before the day ended walked me through the basics of creating a new component from scratch, in this case a "health" component that could be applied to ships so that cannonball hits would affect them on contact.

As mentioned before, the component is defined in schemalang. In the schema file you define the namespace of the component as well as the component itself. Each component must have a unique ID within the project and this is defined in the schema file. The properties and events of the component are all defined here (e.g. the Health component has a "current_health" integer property). You can also define commands here, but I believe those are covered in the final tutorial.

After defining the component the code has to be generated once again so that the new component can be accessed within the project. Adding the component to an entity is as easy as modifying the template for whichever entity you wish to add it to. Reducing the health of a ship in the tutorial was as simple as updating the current health of the health component whenever a collision was detected between the ship and a cannonball - using a mixture of Unity's OnTriggerEnter method and a writer to the health component I defined.
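
The damage logic then reduces to something like the sketch below (again with assumed generated names): detect the cannonball in OnTriggerEnter and send an update that decrements current_health through the Health writer:

    // Sketch of applying cannonball damage through the generated Health component.
    // Health.Writer, Data.currentHealth and SetCurrentHealth are assumed names from
    // the schema code generation; the real generated members may differ.
    using UnityEngine;

    [WorkerType(WorkerPlatform.UnityWorker)]
    public class TakeDamageOnCollision : MonoBehaviour
    {
        [Require] private Health.Writer healthWriter;

        private void OnTriggerEnter(Collider other)
        {
            if (!other.CompareTag("Cannonball")) return;   // the tag is illustrative

            var newHealth = healthWriter.Data.currentHealth - 1;
            healthWriter.Send(new Health.Update().SetCurrentHealth(newHealth));
        }
    }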

Writing to the new Health component
In conclusion, I think that SpatialOS was actually fairly simple to use once it was all set up. I did attempt to launch the project locally but I never managed to get it consistently working in the short time I had left. The biggest drawback to the Pirates tutorial is that it didn't give me much of an idea of the main attraction of SpatialOS, which is the ability to have multiple workers running a simulation in tandem; for the entirety of the tutorials there was only need for one worker. I'm very curious to see how SpatialOS as a platform develops in the future, as I feel it could have some interesting applications.