22 September 2016

Brighttalk Webinars now available



David's two Brighttalk webinars are now available for you to view for free and at your leisure.
Enjoy, and we'd welcome any feedback. David will probably be doing another Brighttalk event looking more broadly at the use of 3D in DataViz in November.



21 September 2016

A 3D Dataviz Taxonomy




Whilst prepping the slides for last week's Brighttalk 3D Dataviz Webinar (watch it now), I started to put together a taxonomy of 3D data visualisation.

The starting point is a 3D plot - we are plotting data against 3 axes, not 2.

There is then a big divide between allocentric and egocentric ways of viewing the data. Allocentric means that your reference point is not you, it's something else; egocentric means that you are the reference point. In practice, in an allocentric plot, if you move the viewpoint it feels like it's the data moving, not you; in an egocentric plot, if the viewpoint moves it feels like you're moving and the data is staying still. Since the latter is how the physical world works, it's what our eyes and brains are used to, so we feel more at home and can maintain context and orientation as we move through the data. Tests we did a few years ago with Aston University compared allocentric and egocentric ways of exploring 3D data, and showed that performance was generally better with the egocentric view.

Within the allocentric branch the next divide is whether the plot is static (in which case I suppose you could argue it's neither allocentric nor egocentric), as you might get in, say, Excel, or whether you can rotate and zoom the plot (as in something like Matlab). Are there any further sub-divisions?
On the egocentric branch we think the divide between viewing the data on a 2D screen (as in "3D" computer games) and viewing it through a VR headset in "real" 3D is far more a case of how you view the data than any fundamental change in how it is being plotted. To us the big benefit is going egocentric rather than allocentric, not going from a 2D screen to a 3D headset. In fact our experiences with the Oculus DK1 and DK2 suggest that a 3D headset is actually a worse way of viewing data in many (most?) cases. Luckily Datascape will be agnostic between 2D and 3D displays once we release V2.1 - you'll be able to do both. 3D wall displays using head-tracking glasses are probably another example of a different view rather than a different method of plotting. But again, are there other more useful or detailed distinctions to be made?

Let us have your thoughts in the comments, on our Facebook or LinkedIn group, or to @datascapevr.



12 September 2016

The 3D Dataviz Escalator




    Back when we did some of the original research work and testing on immersive 3D data visualisation that led to Datascape, we developed this "benefits escalator" to show the increasing possible benefits of moving visualisation away from 2D graphs and into immersive, multi-user 3D spaces. With the launch of Datascape 2.0 we felt it needed a bit of a facelift. What's interesting is how little we actually needed to change - the fundamental message of the chart still stands:
    • 2D plots can rapidly become crowded and unreadable, and provide no spatial cues to help remember them - they are all just lines on paper or screen.

    • Adding a 3rd dimension to a 2D plot lets you add an extra dimension of data, but the image gets even busier, and if you can rotate and zoom the chart then you rapidly get disorientated.

    • Moving into an immersive 3D space, where it's the data that stands still and you who has the sense of movement, gets around much of that disorientation. This in turn lets you take up lots of different viewpoints inside and outside the data to better understand it, and the sense of moving in amongst the data gives you a spatial relationship to it which can help with recall and with story-telling within the data.

    • Given that the 3D space is near infinite, you can spread your data out far beyond the limits of the page or screen, yet still zoom in on the smallest part while keeping the context of "the whole" in the background. The space also lets you use 3D models to represent the data, which can communicate far more than a colour key and, unlike 2D pictograms, with minimal visual interference. The 3D space can also be a home for multiple charts, all on different axes, but all in the same space or frame of analysis.

    • The final step is when we make the space multi-user, so that you can see where everyone else is in the space, and what they are looking at - opening up whole new possibilities in collaborative data visualisation and visual analytics.

    You can read more about immersive visual analytics in our white paper, or download a trial copy of Datascape to try it out yourself.

    7 September 2016

    Immersive Visual Analytics

    We've updated our Immersive Visual Analytics White Paper - first published in 2012. The updates reflect our experiences with visual analytics through Datascape 1 since then, market developments such as the rise of VR, and the new opportunities opened up by tools such as Datascape 2.

    Download the Immersive Visual Analytics White Paper today!


    6 September 2016

    Brighttalk AR/VR Summit - Daden's Two Talks

    Daden have two talks at the forthcoming Brighttalk "Brave New World: Augmented and Virtual Reality" webinar summit taking place on 13th & 14th September.
    Sign up to either (or both!) free live webinar presentations today.




    5 September 2016

    DadenU Day: Vulkan Graphics API

    By: Sean Vieira

    As June was coming to a close there were big decisions to be made. The most important day in a long time was approaching. No, not the EU referendum - the obviously more important DadenU day, on the 1st of July. Once again the decision on what I would do came late, but I had a few things that interested me.

    One of those things was the Vulkan API from the Khronos Group - a modern graphics and compute API that is cross-platform and has a focus on parallelisation and efficiency. An evolution of the group's OpenGL API, it aims to be an open standard for devices ranging from PCs and smartphones to games consoles and embedded systems. It was first released in February 2016 and currently has very little software support, though notable game engines such as Unreal Engine 4 (Epic) and Source 2 (Valve) have jumped on the bandwagon. More information on Vulkan can be found here. So I took this opportunity to jump on the bandwagon myself and see what it was all about.

    Vulkan's cross-platform support offered up the first choice of the day, but it wasn't a particularly difficult decision. The graphics card in my work PC is not supported by the API - it is in fact the most recently released AMD GPU architecture that Vulkan does not support, since at the time of writing only AMD cards based on GCN (Graphics Core Next) 1.0 or later are supported. So Android was my platform of choice.

    Despite having very limited Android development experience (outside of Unity) I took it upon myself to try Vulkan out. Having already prepared my phone for use with Android Studio, I went in search of some examples to test - and running these tests would take up the majority of my day. I found two prominent resources for Vulkan on Android: the first was an official Android NDK tutorial, and the other was the official SDK from Qualcomm for their Adreno GPUs.

    My attempts to get the former to work were futile. A combination of my development inexperience and what I later discovered was an incompatible version of Android made this a frustrating exercise. Most of the issues came from trying to ensure that the Android target versions were correct, and I spent a lot of time trying to sort them out. In the end I had gotten nowhere and decided it was best to move on to the latter option.

    This proved to be much easier and more productive. I managed to import the basic 'triangle' project from the SDK into Android Studio with little to no hassle, and compiled it without error. Now that it was compiled I ran it on my device, only to find that the app popped up on screen and then disappeared in a flash. After a little digging around I found that it was most likely an issue with Vulkan's Android version requirement - it needs API level 24 (the N Preview) whereas my device was still only on API level 22 (i.e. Lollipop).

    After failing to update the company phone to the required version, my final throw of the dice was to forgo Android and use the new laptop, which was bought specifically to demo our products. Being able to maintain a high frame rate when running our applications was a priority, and for this reason the laptop contains a dedicated NVIDIA mobile GPU - which is fortunately supported by the Vulkan API. To test this, I downloaded the 'Chopper' demo from NVIDIA's website and ran it. Thankfully the helicopters flew forwards, albeit stuttering fairly frequently, and I could begin.


    [Screenshot: NVIDIA's 'Chopper' Vulkan demo running on the laptop]

    Unfortunately the laptop wasn't set up for development, so I had to install the necessary applications before getting started. Once Visual Studio, CMake, and the LunarG Vulkan SDK (from LunarXchange) were installed, I got to work and attempted to run an example project. I used this handy tutorial video by Niko Kauppi to help me along the way.

    The SDK first needed to be prepared for development by building the necessary libraries. These had to be built in a specific order, to ensure that each library project had the correct dependencies, and each required both a debug and a release build. The glslang library needed to be built first, followed by spirv-tools, and finally the samples that I would run.

    Each library required a Visual Studio solution to be generated so that it could be compiled, so I used CMake from the command line to create the solution for glslang first. Once the solution had been created I opened it and compiled the 'ALL_BUILD' project within it twice - once as a debug build and once as a release build. I then did the same for the spirv-tools and samples libraries, in that order, roughly as sketched below.
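
    For anyone retracing these steps, the commands looked roughly like this. The generator name, directory layout, and project names here are illustrative - they will depend on your Visual Studio version and where the SDK was unpacked:

    REM generate a Visual Studio solution for glslang (run from the glslang source directory)
    mkdir build
    cd build
    cmake -G "Visual Studio 14 2015 Win64" ..
    REM build the ALL_BUILD project in both configurations from the command line
    msbuild ALL_BUILD.vcxproj /p:Configuration=Debug
    msbuild ALL_BUILD.vcxproj /p:Configuration=Release
    REM then repeat the same process for spirv-tools and the samples, in that order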

    Now I could run some samples. The very first sample I decided to run was the 'instance' sample, which is the most basic example project, so I set it as the startup project of the solution. It creates an instance of Vulkan… and destroys it. To ensure that it was actually running, I added a line of code at the end of sample_main() to wait for a keypress. Having done this, I re-built the project, and the console window that had previously just flashed up now sat waiting for me to press a key, with no errors.
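
    The line itself is nothing special - something like the following (my sketch, not necessarily the exact line I used on the day) is enough to hold the console open:

    std::cout << "press enter to exit" << std::endl;  // prompt so the pause is obvious
    std::cin.get();  // block until Enter is pressed, keeping the console window open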

    Looking at the source file for the project, we see that the sample_main() function contains four major steps: filling out the application info, filling out the instance creation info, creating the instance, and destroying it. The API needs some information about the application when creating the instance, and this is stored in a VkInstanceCreateInfo struct. That structure in turn requires the general application info, so we first fill out a VkApplicationInfo struct with information such as the application name, engine name, and API version.



    // initialize the VkApplicationInfo structure
    VkApplicationInfo app_info = {};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pNext = NULL;
    app_info.pApplicationName = APP_SHORT_NAME;
    app_info.applicationVersion = 1;
    app_info.pEngineName = APP_SHORT_NAME;
    app_info.engineVersion = 1;
    app_info.apiVersion = VK_API_VERSION_1_0;

    // initialize the VkInstanceCreateInfo structure
    VkInstanceCreateInfo inst_info = {};
    inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    inst_info.pNext = NULL;
    inst_info.flags = 0;
    inst_info.pApplicationInfo = &app_info;
    inst_info.enabledExtensionCount = 0;
    inst_info.ppEnabledExtensionNames = NULL;
    inst_info.enabledLayerCount = 0;
    inst_info.ppEnabledLayerNames = NULL;

    // create the Vulkan instance, checking that a compatible driver (ICD) is present
    VkInstance inst;
    VkResult res;
    res = vkCreateInstance(&inst_info, NULL, &inst);
    if (res == VK_ERROR_INCOMPATIBLE_DRIVER) {
        std::cout << "cannot find a compatible Vulkan ICD\n";
        exit(-1);
    } else if (res) {
        std::cout << "unknown error\n";
        exit(-1);
    }

    // destroy the instance so the application can close cleanly
    vkDestroyInstance(inst, NULL);




    Once we have the VkInstanceCreateInfo, we create the instance by calling the vkCreateInstance function, passing it the info struct and a pointer to an empty VkInstance handle for the API to fill in. Unless there are any issues with drivers or compatibility, this should create the instance. The final step is to destroy the instance so that the application can safely close, which is achieved by calling vkDestroyInstance with the instance handle.

    This was as far as I managed to get on the day. It was further than I expected to get, but not as far as I had hoped. I learned a few things from the exercise, though unfortunately not many of them were to do with coding against the Vulkan API. It was fairly hard to find a starting point for the Android version, though most of that can be put down to my inexperience, and I had to cross-reference a few tutorials to try and figure out exactly what I needed to do. Getting started with the Windows version was much easier, though that could be down to my having more experience there.

    Learning Vulkan is something that I would certainly like to pursue in the future, and having managed to set it up on the laptop gives me a foundation for that future exploration. Perhaps next time I can get round to rendering a triangle!