18 December 2017

Geospatial AR - Building Your Own!



For ages now I've wanted to have an app that would display points of interest on my smartphone screen, overlaid on the real-world view. The new OS AR layer (above) and Yelp Monocle (below) do the sort of thing I want, but I want to be able to define my own locations, and maybe some custom functionality.



After a couple of fruitless web and App/Play Store searches I couldn't find what I wanted. Wikitude was closest, as were several GIS-related offerings, but it was going to cost several hundred pounds to "publish" my dataset. I then looked at mobile development frameworks (e.g. Corona), several of which appeared to offer an AR option, but really only marker-based AR, not geospatial AR. So by about 1030 on DadenU day I realised I was going to have to roll my own. I'd found a nice tutorial (Part 1 and Part 2) and so, having never developed a Java app or a mobile app before, I decided to give it a go.

It took most of the next three hours to install Android Studio and all its updates. Moving the tutorial across didn't take long, but the app kept crashing on start-up. I then realised that I needed to manually set the GPS and Camera permissions! Once I'd done that the app worked.

All the app did, though, was put a marker in the centre of the screen when the camera pointed in a particular direction. I wanted the OS style: markers that stayed on screen and slid around as you moved the phone, and a whole host of them too.

A bit of maths and some searching on CodeProject and I soon had the basic operating mode changed: one marker, sliding around and just about aligning with the target. I then hardcoded about a dozen markers and got them working. Here's a screenshot of that stage, with the icon that came with the tutorial. The other thing I added was making markers smaller the more distant they were.
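The maths itself is fairly simple: work out the bearing and distance from the phone to each point of interest, subtract the compass heading, and map the resulting angle across the camera's field of view to get a screen position, scaling the marker by distance. Here's a rough sketch of the idea (shown in C# for readability - the app itself is Java, and all the names and the scaling constants are mine):

```csharp
using System;

// Rough sketch of the geospatial-AR maths: project a point of interest onto the
// screen given the phone's position and compass heading. Illustrative only - not
// the actual app code; names and constants are assumptions.
public static class ArMaths
{
    const double EarthRadiusM = 6371000.0;

    // Great-circle distance (haversine) in metres.
    public static double DistanceM(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = ToRad(lat2 - lat1), dLon = ToRad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusM * Math.Asin(Math.Sqrt(a));
    }

    // Initial bearing from the phone to the POI, degrees clockwise from north.
    public static double BearingDeg(double lat1, double lon1, double lat2, double lon2)
    {
        double dLon = ToRad(lon2 - lon1);
        double y = Math.Sin(dLon) * Math.Cos(ToRad(lat2));
        double x = Math.Cos(ToRad(lat1)) * Math.Sin(ToRad(lat2)) -
                   Math.Sin(ToRad(lat1)) * Math.Cos(ToRad(lat2)) * Math.Cos(dLon);
        return (ToDeg(Math.Atan2(y, x)) + 360) % 360;
    }

    // Horizontal screen position: offset of the POI bearing from the compass
    // heading, mapped across the camera's horizontal field of view.
    public static double MarkerScreenX(double bearingDeg, double headingDeg,
                                       double cameraFovDeg, double screenWidthPx)
    {
        double offset = ((bearingDeg - headingDeg + 540) % 360) - 180; // -180..180
        return screenWidthPx / 2 + (offset / cameraFovDeg) * screenWidthPx;
    }

    // Markers shrink with distance, clamped so far-away ones stay visible.
    public static double MarkerScale(double distanceM) =>
        Math.Max(0.25, Math.Min(1.0, 500.0 / distanceM));

    static double ToRad(double deg) => deg * Math.PI / 180.0;
    static double ToDeg(double rad) => rad * 180.0 / Math.PI;
}
```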




That was about the end of DadenU day: I had the core app working, but I wanted more. So over the next few evenings I:


  • Moved from hardcoded markers to an internally selectable dataset
  • Added some on-marker text
  • Made new markers of different shapes
  • Added the ability to set marker shape and colour based on fields in the data
  • Added an option to move markers in the vertical plane based on distance (as OS/Yelp do)
  • Added the ability to filter on a type parameter (again as the OS app does)
That got to about here:






I also added the ability to spoof the phone's GPS location, so it would pretend you were in the middle of the dataset - here the centre of the Battle of Waterloo - while physically standing in the office car park.
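The spoofing itself is trivial - just a switch that substitutes a fixed lat/long for whatever the location service reports. A hypothetical sketch (names and coordinates are purely illustrative):

```csharp
// Hypothetical sketch of the GPS "spoof" switch: when enabled, every location
// request returns a fixed point (e.g. roughly the centre of the Waterloo dataset)
// instead of the device's real position. Field names and values are illustrative.
public class LocationProvider
{
    public bool SpoofEnabled { get; set; }
    public double SpoofLat { get; set; } = 50.68;   // illustrative: near Waterloo
    public double SpoofLon { get; set; } = 4.41;

    public (double Lat, double Lon) GetLocation(double deviceLat, double deviceLon)
        => SpoofEnabled ? (SpoofLat, SpoofLon) : (deviceLat, deviceLon);
}
```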

I then wanted to add some very specific functionality. As you might guess from the last sentence, one of my use cases for this is battlefield tours, so as well as fixed locations there are also moving troops. I wanted a time slider that you could use to set a time in the battle, and then have the unit markers move to the right places. The time slider is at the bottom of the screen for relevant datasets, with the "current time" displayed - see below.
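Under the hood each moving unit is just a list of timed waypoints, and the marker position for the slider's "current time" is interpolated between the waypoints either side of it. A rough sketch (again in C#, with illustrative types and names):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Rough sketch of the time-slider idea: each unit has a list of timed waypoints,
// and the marker position for the slider's "current time" is linearly interpolated
// between the waypoints either side of it. Illustrative only.
public record Waypoint(DateTime Time, double Lat, double Lon);

public static class UnitTrack
{
    public static (double Lat, double Lon) PositionAt(IList<Waypoint> track, DateTime t)
    {
        var ordered = track.OrderBy(w => w.Time).ToList();
        if (t <= ordered.First().Time) return (ordered.First().Lat, ordered.First().Lon);
        if (t >= ordered.Last().Time) return (ordered.Last().Lat, ordered.Last().Lon);

        // Find the waypoints either side of t and interpolate between them.
        var prev = ordered.Last(w => w.Time <= t);
        var next = ordered.First(w => w.Time >= t);
        if (next.Time == prev.Time) return (prev.Lat, prev.Lon);

        double f = (t - prev.Time).TotalSeconds / (next.Time - prev.Time).TotalSeconds;
        return (prev.Lat + f * (next.Lat - prev.Lat),
                prev.Lon + f * (next.Lon - prev.Lon));
    }
}
```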




A final tidy up of the menu icons and this is the current version:



I've even added voice descriptions for the icons, and arrows that point you in the direction of a featured icon, with the descriptions linked to time, so that using the slider and some VCR-style controls the app can walk you through a spatial narrative.

Next up on the to-do list are:
  • Click on an icon to see (or hear) the full data
  • Load data from a web server
And that should pretty much give me my minimum viable product.

We're already talking to one potential client about a project based around this, and can see how it might fit into two other projects (giving an alternative to viewing the data in VR if actually on-site). We'll keep you posted as it develops. At the very least I'll start using it for some of my own battlefield walks.


8 December 2017

Automating Camera Directing in Fieldscapes

Fieldscapes has the potential to be a very flexible platform when it comes to content creation, and while David was busy recording a video of it a potential use case for the application crossed my mind - would it be possible to direct a video within a Fieldscapes exercise by automating camera positioning? This would allow for a variety of uses, from exercise flythroughs to video recordings, and that's why I decided to look into the idea for the last Daden U day.

Firstly I had to remind myself of the current capabilities of the cameras before I added any new functionality. I created a new exercise and added a new camera to it. On the camera panel you can see some of the options we have for setting up the camera - we can rotate it horizontally and vertically, as well as adjust its field of view. There is the option to change to orthographic projection, but I wasn't going to be needing that.

Camera menu.
The first idea that came to mind was that being able to alter the field of view via PIVOTE actions would be very powerful. That feature isn't currently implemented but I put it on my list of potential improvements. The other idea that popped into my head was the ability to alter individual rotation axes via PIVOTE actions, to allow more subtle control of the camera than is currently available.

Now that I had looked at the camera set up options it was time to remind myself of what PIVOTE can do with the cameras. So I went to edit one of the default system nodes to see the available actions. As you can see from the image below, it is very limited - you can only alter whether or not the camera is active. This would have to change drastically if I was to be able to do what I wanted to.

Old camera actions.
Automating the camera position and movement would require cameras to be able to use some of the actions available to props, such as the ability to teleport to, or move to, the position of a prop, or the ability to look at a prop within the environment. Some new actions would also be nice, such as one to change the field of view as previously mentioned.

To help determine what actions I needed I decided to choose an existing exercise in Fieldscapes and design a camera 'flythrough' of it, in the same way some video games perform an overview of a level before the user begins. After much deliberation the exercise I chose was the Apollo Explore exercise, developed to allow users to walk about the Moon and learn about the Apollo 11 Moon landing. This exercise has props spread around the environment, which makes it easy to define a path we want the camera, or cameras, to follow.

Intended camera positions are circled.
Mapping out the positions I wanted the cameras to be at during the flythrough was the first step I took. I decided on placing two extra cameras in the environment - one to be used to look at the user avatar and one to move around the props. This would give a nice fade to black and back when switching between them. I wanted to slowly pan around the user avatar at the start, followed by the camera showing each of the main pieces of equipment before ending on the lunar lander itself. After this, the exercise would start and the camera would go back to a third person view.

After plotting out all of the positions and directions I wanted the cameras to go to, I decided how I wanted to transition between the positions so that I could determine what actions I would require. As I wanted to smoothly pan the camera around the avatar at the start, the most obvious action I would need is one that moves the camera from one position to another over time. I added to the cameras the three prop actions I felt were most useful - MoveTo, TeleportTo and LookAt. To linearly interpolate from the current camera position to the arrow I had placed pointing at the avatar's head, we would use the MoveTo command.

New camera actions.
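Under the hood, a MoveTo action like that is essentially a linear interpolation of the camera transform over a set duration. A minimal Unity-style sketch of the idea (illustrative only - this is not the actual Fieldscapes PIVOTE implementation, and the class name is mine):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of a camera MoveTo action: linearly interpolate the camera's
// position (and rotation) towards a target transform over a set duration.
// Illustrative only - not the actual Fieldscapes code.
public class CameraMoveTo : MonoBehaviour
{
    public IEnumerator MoveTo(Transform target, float duration)
    {
        Vector3 startPos = transform.position;
        Quaternion startRot = transform.rotation;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            float t = Mathf.Clamp01(elapsed / duration);
            transform.position = Vector3.Lerp(startPos, target.position, t);
            transform.rotation = Quaternion.Slerp(startRot, target.rotation, t);
            yield return null; // wait one frame
        }
    }
}
```

In use this would be kicked off by the PIVOTE action, e.g. StartCoroutine(camera.MoveTo(arrowTransform, 3f)).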
I set up a timer that would trigger the camera facing the avatar to move to one of the arrows after 1 second had passed, and I would use the same timer to move the camera to other positions after more time had passed. Unfortunately this is where I hit a snag - there was a bug in the way the camera was trying to raycast to the ground when moving between positions, causing it to slowly move upwards into space forever. I ran out of time and had to head home before I managed to find the cause, so this is where my experiment had to stop for the time being.

In conclusion, I do believe that if the breaking bug I discovered towards the end of the day can be fixed then there is a great chance that the ability to automatically move cameras will be functional within Fieldscapes. I'd also like to develop a PIVOTE action that could transition the field of view of the camera over time - perhaps then we will see a dolly zoom replicated in Fieldscapes!
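For flavour, a field-of-view transition might look something like the hedged Unity-style sketch below; combine it with a simultaneous MoveTo away from the subject and you get the dolly zoom (again, illustrative only - this action doesn't exist in Fieldscapes yet):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of a field-of-view transition over time. Run alongside a MoveTo away
// from the subject to approximate the classic dolly zoom. Illustrative only.
public class CameraFovTransition : MonoBehaviour
{
    public IEnumerator ChangeFov(Camera cam, float targetFov, float duration)
    {
        float startFov = cam.fieldOfView;
        for (float elapsed = 0f; elapsed < duration; elapsed += Time.deltaTime)
        {
            cam.fieldOfView = Mathf.Lerp(startFov, targetFov, elapsed / duration);
            yield return null;
        }
        cam.fieldOfView = targetFov;
    }
}
```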

Oops!


1 December 2017

Fieldscapes Exercise Visualiser - Daden U Day




For the Daden U day I decided to create what I call the Fieldscapes Exercise Visualiser. The aim of the visualiser is to create a graphical representation of a Fieldscapes exercise. The idea came about while creating a complex Fieldscapes exercise: I was struggling to quickly recall the structure of the exercise - the Props, Prop Commands and their relationships with System Nodes. Another reason for creating the visualiser was to have a way of explaining the flow of an exercise from beginning to end. If it is a non-linear exercise the different paths can also be illustrated.


Before beginning work on the visualiser I had a few ideas of how to illustrate the exercise using symbols and shapes. Fortunately I discovered a web application called draw.io whilst trying to create a flow diagram manually using pre-existing tools. Initially I had attempted to use UMLet, a Windows application for drawing UML diagrams, but decided against it, the reason being that a web application would be more accessible. As a web application I could integrate it into the Fieldscapes Content Manager, reducing the number of tools content creators have to access to make full use of the Fieldscapes ecosystem.


Unfortunately draw.io does not have an API (Application Programming Interface). In my attempt to find the API I discovered that it uses a library called mxGraph. mxGraph is a JavaScript diagramming library that enables interactive graph and charting applications to be quickly created that run natively in most major browsers. mxGraph also has a backend which supports the JavaScript application running on a server; this backend software can be written in Java, C# or PHP. For the purposes of the U Day I used the C# backend, as Fieldscapes is written in C#.


I downloaded the source code for mxGraph from the GitHub repository, which contained an example C# .NET website. The solution worked right out of the box without any issues. Fortunately, because of work done for the Fieldscapes Editor, most of the code needed to read Fieldscapes exercises stored in XML was already written, so all I needed to do was write a couple of functions that extracted the data I needed to represent the various elements of an exercise as geometry with connecting lines. Extracting the data was a breeze; however, progress ground to a halt when I tried to draw different shapes to represent the various elements of an exercise, such as props, system nodes etc. After some trial and error and lots of googling I managed to understand how to style a vertex, which is the term mxGraph uses for the geometry drawn on the canvas.
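For anyone curious what that looks like in code, the basic pattern is quite compact. The rough sketch below is based on the bundled .NET examples - the exact namespace, method names, style strings and the exercise element names I use are illustrative and may differ from the final visualiser:

```csharp
using com.mxgraph;

// Rough sketch of drawing exercise elements with the mxGraph .NET library:
// one styled vertex per prop or system node, and an edge per relationship.
// Based on the bundled examples - exact API details may differ.
public class ExerciseDiagram
{
    public mxGraph Build()
    {
        var graph = new mxGraph();
        object parent = graph.GetDefaultParent();

        graph.Model.BeginUpdate();
        try
        {
            // Different fill colours/shapes distinguish props from system nodes.
            object prop = graph.InsertVertex(parent, null, "Prop: Lunar Lander",
                20, 20, 160, 40, "rounded=1;fillColor=#c3d9ff");
            object node = graph.InsertVertex(parent, null, "System Node: Start",
                240, 20, 160, 40, "shape=ellipse;fillColor=#ffd28c");

            // Edges represent the relationships between elements.
            graph.InsertEdge(parent, null, "activates", node, prop);
        }
        finally
        {
            graph.Model.EndUpdate();
        }
        return graph;
    }
}
```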


From what I saw of the documentation and my brief time using it, mxGraph is a powerful library with many affordances for those who want to create diagrams of any nature. It allowed me to create a diagram (see image below) that showed all the different elements of a Fieldscapes exercise, with lines indicating their relationships to each other. The next step is to create some form of structure for the diagram. Development of the Fieldscapes Exercise Visualiser is not a priority at the moment, but it is something I intend to continue working on until it becomes more useful, at which point it will be integrated into the Fieldscapes Content Manager.



2 November 2017

C# as a Scripting Language - and in Fieldscapes?

By: Iain Brazendale

Daden U days give me the opportunity to play with ideas that have been floating around at the back of my mind for some time and this Daden U day was no exception.

I’ve been reading good things about the changes Microsoft have been making to their .NET compiler platform “Roslyn”, particularly with regard to adding scripting support. It was this, combined with thoughts of how we could easily add additional functionality to Fieldscapes, that made me decide it was time to take a deeper look at scripting.

The advantage of scripting provided through the .NET compiler is that there's no need to learn yet another scripting language: scripts are simply written in C#, but with looser syntax requirements.


Following the article quickly gave me a rough working console application to do some testing with:






This example shows accessing an array containing the names of images for different days of the week. The third query to me shows the real power of the script - here the day of the week becomes the index into the images array and returns the correct image for the day of the week (you guessed it - I ran this example on a Tuesday). The ability to show a different image for each day of the week is not something that is ever likely to be baked into Fieldscapes; however, adding scripting support allows this new functionality to be added with a single line of script.
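A minimal sketch of that sort of test harness using the Roslyn scripting API is shown below - the ScriptGlobals class and image names are my own, for illustration:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

// Minimal sketch of Roslyn C# scripting with a globals object: the script sees
// the Images array, so one line of script can pick the image for the current
// day of the week. Names are illustrative, not Fieldscapes code.
public class ScriptGlobals
{
    public string[] Images = { "sun.png", "mon.png", "tue.png", "wed.png",
                               "thu.png", "fri.png", "sat.png" };
}

public static class ScriptDemo
{
    public static async Task Main()
    {
        var globals = new ScriptGlobals();

        // The "script" supplied by the content creator - just a line of C#.
        string script = "Images[(int)System.DateTime.Now.DayOfWeek]";

        string image = await CSharpScript.EvaluateAsync<string>(script, globals: globals);
        Console.WriteLine(image); // e.g. "tue.png" on a Tuesday
    }
}
```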

The example below shows how images can be picked at random.





However, with great power comes great responsibility… if we let users add their own scripts then they could take advantage:








Here we can see that users may access functionality that wasn't intended - in this case displaying inappropriate messages. Also of note, as you can see from my several attempts to get the syntax right (C# is case sensitive), it could be fiddly for people to write scripts without a good IDE correcting those minor typos. These issues suggest that scripting is something that should initially be left to "advanced" users, and more thought and design will be needed before it becomes a "typical" user feature.
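One obvious, though only partial, mitigation would be to restrict which assemblies and namespaces the script host exposes - something along the lines of the sketch below. Proper sandboxing of user scripts would need considerably more thought than this:

```csharp
using Microsoft.CodeAnalysis.Scripting;

// One possible (and only partial) mitigation sketch: limit which assemblies and
// namespaces scripts can see, so casual access to things like MessageBox or file
// IO isn't offered by default. Pass the result as the options argument to
// CSharpScript.EvaluateAsync. Proper sandboxing needs far more than this.
public static class ScriptHost
{
    public static ScriptOptions RestrictedOptions() =>
        ScriptOptions.Default
            .WithReferences(typeof(object).Assembly)   // core library only
            .WithImports("System", "System.Linq");     // whitelist of namespaces
}
```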

So, when will scripting be added to Fieldscapes? Unfortunately, scripting requires .NET 4.6 and Roslyn. Unity still uses .NET 3.5 with a non-Roslyn compiler, so as wonderful as this technology looks it won't be a scripting solution for Fieldscapes until the tech catches up. However, it could be a useful technology for Daden's other products such as Datascape.







26 October 2017

Daden October Newsletter - Fieldscapes in Bournemouth, new iOS release, smart city data


In this latest issue of the Daden Newsletter we cover:

* Fieldscapes in action! - a report on the project we are doing with Bournemouth University to evaluate training midwifery students in 3D and VR.
* The new iOS release of Fieldscapes, and updates to the Android and VR user interfaces to increase usability
* Visualising Smart City data with Datascape
* The return of Abi - our virtual assistant chatbot

Remember you can try our products instantly and free of charge by going to:

* https://www.fieldscapesvr.com/download for Fieldscapes and Trainingscapes
* https://www.datascapevr.com/trial for Datascape

We hope you enjoy the newsletter, and do get in touch (http://www.daden.co.uk/#contact) if you would like to discuss any of the topics raised in the newsletter, or our products and services, in more detail!


Download your PDF copy of the newsletter here.

23 October 2017

Trees in Birmingham




The West Midlands Data Discovery Centre is publishing a wide and interesting set of open data on the West Midlands. Over the next few weeks we'll take a look at some of the data sets and use them to create visualisations in Datascape.

First up is a database of all the trees managed by the city's highways department. For each tree we have:


  • Location (lat/long)
  • Height (shown as height on the map)
  • Age (shown as colour: yellow = new, green = mature, brown = old)
  • Form (e.g. symmetric/non-symmetric - the latter more at risk from storms; shown as shape)
  • Species (shown as text)




This gives a low oblique view over the city, showing relative heights - as expected the younger trees tend to be smaller!



Close in on a set of pollarded and unbalanced trees. Bright green are mature, dull green are semi-mature. The small blue one has unspecified data.

We can use the standard Datascape search, filter and scrub features to help analyse the data.

We've published a subset of this visualisation to the web so that you can fly around and investigate it yourself. Just click on the image or link below.




More datasets from WMDDS to follow!

9 October 2017

Fieldscapes 1.4 released




Fieldscapes v1.4 is now out and available for free download, and recently had some good, heavy user testing at Malvern with Year 7s from across the county. The main new features in v1.3 and v1.4 are:


  • The start of support for NPC characters through an NPC widget. You can now add an NPC avatar as a prop and have it TP from location to location and activate specific animations - eg "sit". Later releases will allow the NPC to glide or walk between locations. We are also close to releasing a "chatbot" widget to hook an NPC up to an external chatbot system so that you can really start to create virtual characters.
  • General improvements to the UI when in VR mode - users found that the "virtual iPAD" just got in the way so we're now putting the UI directly into the scene. We'll make steady improvements to the usability of the VR experience in future releases
  • Added a new Avatar command to change clothes. This only changes between preset outfits but is good if a character needs to change into special kit for a task - for instance our nursing avatars putting on gloves and aprons (see below)
  • Multiple choice questions are now randomised - makes the student think if they repeat the lesson!
  • Increased the inventory limit from 3 to 5 in the editor - so you can bring in props from more inventories for your scene
  • Increased the word count for multiple-choice panels and default popup panel

Various bug fixes were also made and you can see a full list at http://live.fieldscapesvr.com/Home/Download




v1.4 is already available for PC and Mac. The Android version of v1.4 is just undergoing final testing, and we're also still progressing the iOS version of Fieldscapes through the App Store acceptance process.

Remember: We have an ongoing survey for new Fieldscapes features, please take 2-3 minutes to fill it out at: https://www.surveymonkey.co.uk/r/88YL39B

4 October 2017

Fieldscapes at the Malvern Festival of Innovation

Oculus Rift on the Moon

For the second year in a row we ran a set of workshops at the Malvern Festival of Innovation, playing host to a succession of groups of 20-30 students (mostly Year 7s) from around Worcestershire and giving them a 1-hour introduction to immersive learning. We set up four stands in order to show the range of experiences and lessons that can be created, and the different ways in which they can be delivered. We had:


  • One laptop running the Solar System lesson
  • One laptop running the Apollo Educate lesson
  • One laptop running the Introduction to Carding Mill (which several groups knew from Fieldtrips)
  • A couple of Google Cardboards, one with the Photosphere tour of Carding Mill and one with the Apollo Explore lesson
  • Oculus Rift running Apollo Explore
Playing tag on the Moon!


Students were split into groups of about 5 and had 10 minutes on each "stand" - so everyone got to try all the kit.

Looking at the Waterspout Waterfall in a (plastic) Google Cardboard


Student feedback from comments and feedback forms included:
  • "I wish I could spend all afternoon here"
  • "Can I come back later?"
  • "It was really cool"
  • "It was fun to do"
  • "It was memorable"
  • "The realness of it"
  • "I liked the fact that we were not there but we could see everything"
  • "It was like it was real"
  • "It was educating and fun"
Exploring the Moon and Apollo on Google Cardboard


When talking to the teacher we were keen to highlight that:
  • They didn't have to buy any new hardware, like expensive VR headsets, as they can run lessons in 2D/3D mode on existing computers, or in VR on Google Cardboard (one teacher loaded the app onto his phone as we spoke)
  • They could create their own lessons, share them, and use (and customise) those produced by other users
  • With our licencing model they only start paying once they start using it in class - so they could explore and test for free until they were confident in the system and lessons.
Even the teachers got in on the act!


Fieldscapes itself was rock solid on all the devices all day - despite getting a hammering from the kids. What was particularly impressive was when we had the Apollo experiences in multi-user mode so the kids could play tag on the moon - and even using the public wifi at the venue we had no issues with lag and avatar movement was very smooth.



All in all a great day and helped remind us all why we've built Fieldscapes!





2 October 2017

Abi's Back!



Having been off on sabbatical for a few years whilst we rebuilt the web sites, Abi, our virtual assistant, is back up. We think she forgot a fair amount whilst she was backpacking around the world, but we'll slowly bring her up to speed, and she's got lots to learn about, what with Datascape, Fieldscapes and Trainingscapes. One thing we have given her is the ability to load up web pages - we used to do this for clients but never got around to doing it on our own site!

So if you've got a question about Daden or any of our products just talk to Abi.

This new Abi is built on the Chatscript open source platform that we're using for all our chatbot projects now. She's not the smartest bot we've done, but does give you an idea of the basic sort of functionality and interaction style. When she loads she also provides a page which gives you some idea of her scope and some of the "tricks" we're teaching her.

You can read more about our chatbot work on our dedicated website at www.chatbots.co.uk.

Just get in touch if you'd like to know more about chatbot technology and have a project in mind.

For a quick trip down memory lane here are some of Abi's previous incarnations.

The very first ABI - text and static image in Second Life


Abi as an avatar walking around our old Sim in Second Life



Abi moved onto our website with an art-work avatar


Abi before she went on hols as a photoface avatar


28 September 2017

Fieldscapes and Midwifery Training at Bournemouth University


As you may have seen from our Fieldscapes Twitter stream we've just reached the delivery stage of our first Midwifery Fieldscapes lesson (on Urinalysis) for Bournemouth University's Midwifery training team. We had a number of meetings with the team over the summer and then went away and customised our existing Simple Clinic space into "Daden Community Hospital", added more generic medical props, and brought in some BU-specific posters. Nash then created the first exercise based on an agreed flowchart/storyboard, and now we're just getting to the end of the iterations with the team, stakeholders and students. The final step for us will be to train the BU team on how to use Fieldscapes to continue to maintain and develop the exercise (and create other exercises), before they then start their evaluation with the current student cohort.


Response so far from all involved has been excellent with comments such as:

  • "I had such a cool day at work recently – I got to play with the first of my VR healthcare education environments using Oculus Rift"
  • " I absolutely love this! A brilliant way to learn" - student feedback
  • "So amazing to see my project becoming a reality – I hope the students love this way of bridging the gap between classroom theory and clinical practice"
  • "That was brilliant loved it! can’t wait to do more. Very informative" - student feedback
  • "Delighted that the Oculus Rift dramatically altered the look and feel of the clinical room, and that the handheld Haptic feedback controls added to the experience"


Being Fieldscapes, the exercise can be experienced on a PC/Mac or Android device, and in VR on Oculus Rift or Google Cardboard on Android. One of our final tasks is integrating one of the £2-£3 hand controllers for the Cardboard to go along with the c.£15-£20 VR-BOX headsets that BU have (VR doesn't have to be expensive!).



We'll keep you posted as development and evaluation progress, and we're already talking to BU about other exciting ways to take the training.


You can read more about the BU view on the project on their blog posts at:






26 September 2017

The Three Big Challenges in AI Development: #2 Generalism and #3 Sentience


Following on from the previous post I now want to look at what happens when we try and move out of the "marketing AI" box and towards that big area of "science fiction" AI to the right of the diagram. Moving in this direction we face two major challenges, #2 and #3 of our overall AI challenges:

Challenge #2: Generalism

Probably the biggest "issue" with current "AI" is that it is very narrow. It's a programme to interpret data, or to drive a car, or to play chess, or to act as a carer, or to draw a picture. But almost any human can make a stab at doing all of those, and with a bit of training or learning can get better at them all. This just isn't the case with modern "AI". If we want to get closer to the SF ideal of AI, and also to make it a lot easier to use AI in the world around us, then what we really need is a "general purpose AI" - or what is commonly called Artificial General Intelligence (AGI). There is a lot of research going into AGI at the moment in academic institutions and elsewhere, but it is really early days. A lot of the ground work is just giving the bot what we would call common-sense - just knowing about categories of things, what they do, how to use them - the sort of stuff a kid picks up before they leave kindergarten. In fact one of the strategies being adopted is to try and build a virtual toddler and get it to learn in the same way that a human toddler does.

Whilst the effort involved in creating an AGI will be immense, the rewards are likely to be even greater - as we'd be able to just ask or tell the AI to do something and it would be able to do it, or ask us how to do it, or go away and ask another bot or research it for itself. In some ways we would cease to need to programme the bot.

Just as a trivial example, but one that is close to our heart. If we're building a training simulation and want to have a bunch of non-player characters filling roles then we have to script each one, or create behaviour models and implement agents to then operate within those behaviours. It takes a lot of effort. With an AGI we'd be able to treat those bots as though they were actors (well extras) - we'd just give them the situation and their motivation, give some general direction, shout "action" and then leave them to get on with it.

Note also that moving to an AGI does not imply ANY linkage to the level of humanness. It is probably perfectly possible to have a fully fledged AGI that has only the bare minimum of humanness needed to communicate with us - think R2D2.

Challenge #3: Sentience

If creating an AGI is probably an order of magnitude greater problem than creating "humanness", then creating "sentience" is probably an order of magnitude greater again. Although there are possibly two extremes of view here:


  • At one end, many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie - there'd be "nobody home" - no matter how much it appears to show intelligence, emotion or empathy.
  • At the other, some believe that if we create a very human AGI then sentience might almost come with it. In fact, just thinking back to the "extras" example above, our anthropomorphising instinct almost immediately starts to ask "well, what if the extras don't want to do that..."
We also need to be clear about what we (well I) mean when I talk about sentience. This is more than intelligence, and is certainly beyond what almost all (all?) animals show. So it's more than emotion and empathy and intelligence. It's about self-awareness, self-actualisation and having a consistent internal narrative, internal dialogue and self-reflection. It's about being able to think about "me" and who I am, and what I'm doing and why, and then taking actions on that basis - self-determination.

Whilst I'm sure we could code a bot that "appears" to do much of that, would that mean we have created sentience - or does sentience have to be an emergent behaviour? We have a tough time pinning down what all this means in humans, so trying to understand what it might mean for an AI (and code it, or create the conditions for the AGI to evolve it) is never going to be easy.




So this completes our chart. To move from the "marketing" AI space of automated intelligence to the science-fiction promise of "true" AI, we face three big challenges, each probably an order of magnitude greater than the last:


  • Creating something that presents as 100% human across all the domains of "humanness"
  • Creating an artificial general intelligence that can apply itself to almost any task
  • Creating, or evolving, something that can truly think for itself, have a sense of self, and which shows self-determination and self-actualisation
It'll be an interesting journey!



25 September 2017

The Three Big Challenges in AI Development: #1 Humanness





In a previous blog post we introduced our AI Landscape diagram. In this post I want to look at how it helps us to identify the main challenges in the future development of AI.

On the diagram we’ve already identified how that stuff which is currently called “AI” by marketeers, media and others is generally better thought of as being automated intelligence or “narrow” AI. It is using AI techniques, such as natural language or machine learning, and applying them to a specific problem, but without actually building the sort of full, integrated, AI that we have come to expect from Science Fiction.

To grow the space currently occupied by today’s “AI” we can grow in two directions – moving up the chart to make the entities seem more human, or moving across the chart to make the entities more intelligent.

MORE HUMAN

The “more human” route represents Challenge 1. It is probably the easiest of the challenges, and the chart we showed previously (and repeated below) shows an estimate of the relative maturity of some of the more important technologies involved.



There are two interesting effects related to work in this direction:


  • Uncanny Valley - we're quite happy to deal with cartoons, and we're quite happy to deal with something that seems completely real, but there's a middle ground that we find very spooky. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and then finally improves again once you cannot tell them from real. So whilst we've made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we're now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we've barely started on, so they may take a long time to even reach the valley.
  • Anthropomorphism - People rapidly attribute feelings and intent to even the most inanimate object (toaster, printer). So in some ways a computer needs to do very little in the human direction for us to think of it as far more human than it really is. In some ways this can almost help us cross the valley by letting human interpretation assume the system has crossed the valley even though it's still a lot more basic than is thought.
The upshot is that the next few years will certainly see systems that seem far more human than any around today, even though their fundamental tech is nowhere near being a proper "AI". The question is whether a system could pass the so-called "Gold" Turing Test (a Skype-like conversation with an avatar) without also showing significant progress along the intelligence dimension. Achieving that is probably more about the capability of the chat interface, as it seems that CGI and games will crack the visual and audio elements (although doing them in real-time is still a challenge) - so it really remains the standard Turing challenge. An emotional/empathic version of the Turing Test will probably prove a far harder nut to crack.

We'll discuss the Intelligence dimension in Part 2.





18 September 2017

Automated Intelligence vs Automated Muscle

As previously posted I've long had an issue with the "misuse" of the term AI. I usually replace "AI" with "algorithms inside" and the marketing statement I'm reading still makes complete sense!

Jerry Kaplan speaking on the Today programme last week was using the term "automation" to refer to what a lot of current AI is doing - and actually that fits just as well, and also highlights that this is something more than just simple algorithms, even if it's a long way short of science-fiction AIs and Artificial General Intelligence.

So now I'm happy to go with "automated intelligence" as what modern AI does - it does automate some aspects of a very narrow "intelligence" - and the use of the word automated does suggest that there are some limits to the abilities (which "artificial" doesn't).

And seeing as I was at an AI and Robotics conference last week that also got me to thinking that robotics is in many ways just "automated muscle", giving us a nice dyad with advanced software manifesting itself as automated intelligence (AI), and advanced hardware manifesting as automated muscle (robots).


15 September 2017

AI & Robotics: The Main Event 2017


David spoke at the AI & Robotics: The Main Event 2017 conference yesterday. The main emphasis was far more on AI (well machine learning) rather than robotics. David talked delegates through the AI Landscape model before talking about the use of chatbots/virtual characters/AI within the organisation in roles such as teaching, training, simulation, mentoring and knowledge capture and access.

Other highlights from the day included:


  • Prof. Noel Sharkey talking about responsible robotics and his survey on robots and sex
  • Stephen Metcalfe MP and co-chair of the All Party Parliamentary Group on AI talking about the APPG and Government role
  • Prof. Philip Bond talking about the Government's Council for Science and Technology and its role in promoting investment in AI (apparently there's a lot of it coming!)
  • Pete Trainor from BIMA talking about using chatbots to help avoid male suicides by providing SU, a reflective companion - https://www.bima.co.uk/en/Article/05-May-2017/Meet-SU
  • Chris Ezekial from Creative Virtual talking about their success with virtual customer service agents (Chris and I were around for the first chatbot boom!)
  • Intelligent Assistants showing the 2nd highest growth in interest from major brands in terms of engagement technologies
  • Enterprise chat market worth $1.9bn
  • 85% of enterprise customer engagement to be without human contact by 2020
  • 30% increase in virtual agent use (forecast or historic, timescale - not clear!)
  • 69% of consumers reported that they would choose to interact with a chatbot before a human because they wanted instant answers!
There was also a nice 2x2 matrix (below) looking at new/existing jobs and human/machine workers. 



This chimed nicely with a slide by another presenter which showed how as automation comes in workers initially resist, then accept, then as it takes their job over say the job wasn't worth doing and that they've now found a better one - til that starts to be automated. In a coffee chat we were wondering where all the people from the typing pools went when PCs came in. Our guess is that they went (notionally) to call centres - and guess where automation is now striking! Where will they go next?

14 September 2017

Daden at Number 10


Daden MD David Burden was part of a delegation of Midlands-based business owners and entrepreneurs to 10 Downing Street yesterday to meet with one of the PM's advisors on business policy. The group represented a wide range of businesses from watchmakers to construction industry organisations, and social enterprises and charity interests were also well represented. Whilst the meeting itself was quite short, it is hopefully the start of a longer engagement with Government for both this group and Daden (we also submitted evidence to the House of Lords Select Committee on AI last week and are exploring some other avenues of engagement).




6 September 2017

An AI Landscape


In the old days there used to be a saying that "what we call ‘artificial intelligence’ is basically what computers can’t do yet" - so as things that were thought to take intelligence - like playing chess - were mastered by a computer they ceased to be things that needed "real" intelligence. Today, it's almost as though the situation has reversed, and to read most press-releases and media stories it now appears to be that "what we call 'artificial intelligence'" is basically anything that a computer can do today".

So in order to get a better handle on what we (should) mean by "artificial intelligence" we've come up with the landscape chart above. Almost any computer programme can be plotted on it - and so can the "space" that we might reasonably call "AI" - so we should be able to get a better sense of whether something has a right to be called AI or not.



The bottom axis shows complexity (which we'll also take as being synonymous with sophistication). We've identified 4 main points on this axis - although it is undoubtedly a continuum, boundaries will be blurred and even overlapping, and we are probably also mixing categories too:


  • Simple Algorithms - 99% of computer programmes, even complex ERP and CRM systems; they are highly linear and predictable
  • Complex Algorithms - things like (but not limited to) machine learning, deep learning, neural networks, Bayesian networks, fuzzy logic etc, where the complexity of the inner code starts to go beyond simple linear relationships. Lots of what is currently called AI is here - but it really falls short of a more traditional definition of an AI.
  • Artificial General Intelligence - the holy grail of AI developers, a system which can apply itself, using common sense and general knowledge, to a wide range of problems and solve them to a similar level as a human
  • Artificial Sentience - beloved of science-fiction, code which "thinks" and is "self-aware"



The vertical axis is about "presentation" - does the programme present itself as human (or indeed another animal or being) or as a computer? Our ERP or CRM system typically presents as a computer GUI - but if we add a chatbot in front of it, it instantly presents as more human. The position on the axis is influenced by the programme's capability in a number of dimensions of "humanness":

  • Text-to-speech: Does it sound human? TTS has plateaued in recent years, good but certainly recognisably synthetic
  • Speech Recognition: Can it recognise human speech without training? Systems like Siri have really driven this on recently.
  • Natural Language Generation: This tends to be template driven or parroting back existing sentences. Lots more work needed, especially on argumentation and story-telling
  • Avatar Body Realism: CGI work in movies has made this pretty much 100% except for skin tones
  • Avatar Face Realism: All skin and hair so a lot harder and very much stuck in uncanny valley for any real-time rendering
  • Avatar Body Animation: For gestures, movement etc. Again movies and decent motion-capture have pretty much solved this.
  • Avatar Expression (& lip sync): Static faces can look pretty good, but try to get them to smile or grimace or just sync to speech and all realism is lost
  • Emotion: Debatable about whether this should be on the complexity/sophistication axis (and/or is an inherent part of an AGI or artificial sentient), but it's a very human characteristic and a programme needs to crack it to be taken as really human. Games are probably where we're seeing the most work here.
  • Empathy: Having cracked emotion the programme then needs to be able to "read" the person it is interacting with and respond accordingly - lots of work here but face-cams, EEG and other technology is beginning to give a handle on it.
The chart gives a very rough assessment of the maturity of each.

There are probably some alternative vertical dimensions we could use other than "presentation" to give us a view on this landscape - Sheridan's autonomy model could be a useful one, which we'll cover in a later post.

So back on the chart we can now plot where current "AI" technologies and systems might sit:


The yellow area shows the space that we typically see marketeers and others use the term AI to refer to!

But compare this to the more popular, science-fiction derived, view of what is an "AI".


Big difference - and zero overlap!

Putting them both on the same chart makes this clear.


So hopefully a chart like this will give you, as it has us, a better understanding of what the potential AI landscape is, and where the current systems, and the systems of our SF culture, sit. Interestingly it also raises a question about the blank spaces and the gaps, and in particular how we move from today's very "disappointing" marketing versions of AI to the ones we're promised in SF, from "Humans" to Battlestar Galactica!