Category Archives: iOS

iOS 10 Released

Today iOS 10 has been released. We have tested VentusAR 5.0 with iOS 10 on the iPad Air, Air 2 and Pro, and everything seems to be working perfectly.

The main difference I think VentusAR customers will see is the new way of unlocking the tablet: instead of swiping right on the lock screen, you now press the home button to unlock. There are loads of other new features in iOS 10; you can find a full list on Apple's website.

We recommend everyone updates to iOS 10 to take advantage of the new features and security updates included in this release.

VentusAR 5.0 Released

We are delighted to announce that this morning we released VentusAR v5.0 into the app stores.

The release includes many new features across the Wind, Grid, Solar and Building domains. The new features make the assessment and visualisation process on iPad or Android quicker and more efficient. To highlight a few: we have added new modes for displaying models, the ability to add multiple traces to a capture, and an improved, streamlined render output process.

Display Modes for Models

We have added new modes for displaying models – as classic models, in block colour (with no lighting / shadows) and as outlines.

  • The colours help to distinguish building types
  • The outline / transparency helps to show what is behind the development
  • These modes help the assessment process by showing how a development fits into its environment
Block colour mode

Outline mode

Model mode

Adding Multiple Traces to Gallery Photographs

You can now add multiple traces to a capture – for example, to remove multiple areas of foreground (hedges, etc.). This produces more realistic visualisations, as the model appears to be within the image rather than superimposed on top.

3D model in Wireline Mode

Models shown on top of photograph

Multiple trace areas defined

Final Render – the buildings are shown in the photograph

Streamlined Render Process

We have improved the render process so that users can select all the options they want up front, making the production of output renders quicker.

Further information and full details of the other features in VentusAR 5.0 can be found in the release notes for iPad, Android and the Portal.

If you have any queries regarding these new features, or you would like to chat further about how VentusAR can help you with your visualisations, please give us a call on 0141 559 6170 or email hello@ventusar.com.

iOS 9.3.2 Released

This week, Apple released an update to iOS 9. The iOS 9.3.2 update brings a few minor fixes for iPad users, particularly those with the newest iPad Pro.

This has been tested with VentusAR and all appears to be working fine. We recommend upgrading to the latest version of iOS to take advantage of these fixes.

Staff Spotlight – Rufus Mall

Our latest team spotlight is our iOS guru, senior software engineer, Rufus Mall.

Who are you and what is your role at Linknode?

My name is Rufus Mall. I've been working as a Software Engineer at Linknode for around three and a half years now. The main skills I use at Linknode are Graphics Programming and iOS Development. I spent the first three years of my time at Linknode working on VentusAR but have now moved on to working on our upcoming product UrbanPlanAR.

My favourite person in the world (alongside Steve Wozniak)

Drone Visualisation

An obvious extension to the first-person visualisation experiences that Linknode deliver is to use the same technology for remote visualisation. By that, we mean taking the visualisation solution (a mobile tablet) that our existing users hold in the field, and changing the camera location. This could be to get a different perspective on a project, or to place a point of view in a location which may otherwise be inaccessible or dangerous to access.
In theory, all the technical platform requirements (location, real-time sensors, 3D modelling, camera metrics and AR integration) that Linknode specialise in are the same as for VentusAR and UrbanPlanAR. However, instead of the mobile platform being packaged into a consumer tablet containing all the hardware we need, just like the best chefs we need to do some deconstruction of the product to create a new experience.

Team Spotlight – Ryan Welsh

Team spotlight time – let me introduce you to our newest and youngest member of staff – software engineer Ryan Welsh.

Who are you?

I am Ryan

What is your role at Linknode and how long have you been working here?

I have been working here as a software engineer for just over a year. My role in the company is iOS developer for our flagship product, VentusAR. My work involves adding new features to our game engine.

What has been your favourite project at Linknode?

My favourite project was undertaking a bespoke task for Natural Power. It involved combining VentusAR's virtual reality mode with an external sensor system to create a useful tool for monitoring wind farms. I learned a lot from this project as I completed it on my own, and several areas were new to me.

How has Linknode helped you in your career development?

I have been flung in at the deep end several times and sent off to carry out research in a particular area to gain an understanding of the best approach for implementing a new feature. This has enhanced my skills, not only in programming, but in design documentation and research. I am also given large feature projects to work on, each different from the last, so I am never really repeating tasks.

If you could describe working at Linknode in three words, what would they be?

Friendly, Challenging, Rewarding

If you had to eat one meal every day, for the rest of your life, what would it be?

Chicken and chips

What books are at your bedside?

The Walking Dead

What did you want to be when growing up?

A fireman 🙂

Xamarin acquired by Microsoft

Hi, it's me again – Rufus. It's that time of year again when I write a technical blog post!

You may have read my previous post about the Apple technologies announced a few months ago. With the recent announcement that Xamarin has been acquired by Microsoft, I thought it was an appropriate time to share some of my thoughts with the world! This blog post shares a little of our history with Xamarin technologies, and some of our thoughts on the recent news.

How we got into Xamarin

When Linknode first dipped into the world of mobile applications, it was only natural for us to begin investigating the Windows Phone platform, due to our developers having a rich history with Windows desktop and server based technologies. After creating a number of Windows Phone applications to gain some experience with the intricacies of mobile development, we looked into expanding to the other platforms. Driven by our development history, our desire to build Augmented Reality applications and the small size of our development team, the following requirements were important to us:

  • Share as much code as possible
  • Provide the user with a “native” experience – each application should follow the idioms of the device it is running on
  • The ability to write high-performance, real-time applications
  • Ideally, the ability to reuse our .net/C# skills and code

The above list of requirements is quite steep, but the first three are achievable with available tools. Back then, however, most people thought C# was a Microsoft technology that was not supported on other platforms… or was it? This is where Xamarin comes in!

Xamarin

We started building some simple test applications to try out the Xamarin technologies and were highly impressed with the quality of the output, and also the lack of a steep learning curve.

Xamarin is based on the “Mono” runtime and allows you to write applications in C# for Android, iOS, Windows and Mac. Xamarin allowed us to share a large amount of code and make use of our experience with C# and the general richness of the large “.net” based API’s.
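To make the code sharing concrete, here is a rough sketch of the pattern we rely on (the names below are illustrative, not from our actual codebase): shared C# logic programs against an interface, and each platform project supplies a thin native implementation.

using System;

// Shared project: platform-neutral logic depends only on this interface.
public interface IDeviceSensors
{
  event Action<double> HeadingChanged; // heading in degrees
  void Start();
}

// Shared project: this class compiles unchanged for iOS, Android and Windows.
public class CameraController
{
  public CameraController(IDeviceSensors sensors)
  {
    sensors.HeadingChanged += heading =>
    {
      // rotate the 3D camera to match the device heading
    };
    sensors.Start();
  }
}

// Each platform project then supplies its own IDeviceSensors implementation,
// wrapping CMMotionManager on iOS, SensorManager on Android, and so on.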

Some of the other options available at the time did not sound so promising. This is not to say switching to Xamarin was without issues: as an “early bird” user of Xamarin we had some problems, such as the primitive nature and stability of the development tools. However, we have seen the Xamarin toolchain go through various phases of re-branding and improvement, and it has now emerged as a stable and somewhat mature development platform. The Xamarin developer ecosystem is full of libraries and components that developers can use to accelerate development without sacrificing the experience for the end user. Another point of note is that if you are a native iOS/Android developer with little C# experience, as I was, switching to Xamarin is extremely easy. All the APIs and built-in frameworks you are familiar with are still there and easily accessed from C#.

Acquisition + conclusion

We are happy with our decision to build cross-platform applications using Xamarin, and are pleased with the somewhat expected acquisition. We hope the recent news will enrich the Xamarin development community further, not only by generating more interest in the Xamarin toolchain and growing the size of the community, but also by breathing some new life into the Windows mobile space.

Either way, I am sure that having the backing of a company with the experience of Microsoft cannot be a bad thing. If any of you are deliberating whether to investigate Xamarin for your own projects, I strongly recommend you try it out!

iOS 9 and New Apple Stuff

My name is Rufus and I am a software engineer at Linknode. After watching the live stream of Apple’s “Special Event” and being interested in some of the new products I thought I would write a small blog post to share my thoughts with the world!

As someone who primarily uses iOS devices I am mostly interested in changes that help make the device easier to use and improve productivity. iOS 9 brings a lot of improvements in these regards. Some of my favourites are listed below.

iOS 9

iOS 9 was announced and will be available on 16th September 2015.

Split Screen Multitasking
This feature allows users to place multiple apps on the screen at once. This will be useful in a whole host of situations, such as reading a website while writing an email, taking notes, or reading email while chatting to someone via the Messages app.

iCloud Drive App
This feature seems to provide iOS with Dropbox-like functionality. You can now browse the contents of your iCloud account and view, move, delete or share files using the “iCloud Drive” app. For me, I don't think this will replace the way I use Dropbox for storing and sharing files.

Notes App Improvements
I already use this app a lot to take notes; they sync automatically across all of my devices and are uploaded to my email account. The improvements to Notes include better text formatting tools, the ability to insert rich content such as images, and a new drawing tool.

VentusAR 4.1 will support iOS 9 when it is released – we are still on target for a mid-September release.

Devices

Two of the new devices announced at the Apple event are the iPhone 6S and the iPad Pro.

iPhone 6S
I am mostly excited about the new “3D Touch” feature that will be available on the new iPhones. During the event, one of the clips shown demonstrated the uses of 3D Touch – a feature that allows the user to quickly preview content without having to fully transition to another screen. This seemed great for quickly previewing images or taking a quick peek at your calendar. If I had an iPhone 6S, this is definitely a feature I would be making the most of! The new devices were also said to be much faster than the previous generation of iPhones, with a 70 percent faster processor and 90 percent faster graphics.

iPad Pro
This is the largest iPad to date, with a 12.9-inch screen that has 5.6 million pixels.

One of the accessories announced for the iPad Pro was the “Apple Pencil”. This is a battery-powered stylus which is recharged by plugging it directly into the iPad Pro. The sensors in both the Pencil and the iPad's screen work together to respond to the angle and pressure of the pencil, simulating a true writing or drawing instrument. They have clearly invested a lot in the Apple Pencil, and I am interested to see how it compares with the current high-end graphics tablets available from companies like Wacom.

Apple also boasted about the performance of the new iPad, claiming that it is “faster than 80% of the portable PCs that shipped in the last 12 months”, and that in graphics tasks it is “faster than 90% of them”. There was no mention of the amount of memory in the device, but rumours suggest it contains 4GB – twice as much as the iPad Air 2.

What does this mean for Linknode and VentusAR? The larger screen size and huge performance increases will enable richer and more complex visualisations that were not possible on previous devices. The larger screen will also help when communicating visualisations to larger groups of people. We plan to support the iPad Pro soon after it launches in November 2015.

It is an exciting time to be both a user and developer of Apple products and I am excited to see what is possible with the new range of devices and operating system.

Me and Steve Wozniak, AppsWorld 2013

Mirroring an iPad to a Laptop

Over the last week, I've done a few demos of our VentusAR product to a variety of different people. Quite often I end up doing these demos in places where there is no Wi-Fi, so I need some other way of connecting the iPad to a big screen. Sure, I could use a Lightning-to-HDMI connector, but then the iPad is tied to the screen by a cable – VentusAR is a personal, engaging visualisation tool that requires users to move around to get the full effect. I don't want the first experience potential clients see to be limited by a bit of wire!

As I've just upgraded to Windows 10, I took the opportunity to document what I do to project the iPad screen to a room of people. (Mostly so I have notes to look back on next time I re-install Windows.)

TLDR – Overview

  • I create a hosted network between my laptop and iPad
  • I use AirServer to mirror the iPad screen to the laptop screen
  • I have some shortcuts in my Windows 10 Start menu to make it easier

Setup

I set up my laptop as a wireless access point using a few custom scripts to get everything started. Then I use the rather excellent AirServer to act as an AirPlay device and set the iPad to mirror its screen to the laptop. The laptop can then be placed in front of the audience (if there are only a few people) or connected to a projector / shown on a TV.

Network Setup

1. Create a Hosted Network

To create a hosted network on your computer you need to execute the following command (as an administrator).

netsh wlan set hostednetwork mode=allow ssid="<NetworkName>" key="<Password>" keyUsage=persistent

2. Start the Hosted Network

This doesn’t need to be run as an administrator

netsh wlan start hostednetwork

3. Setup Sharing to Allow Access

The above commands will have enabled an access point on your computer; I call my access point “GBLaptop”. By default it will not have access to the internet (it will be in its own isolated network). I find it much more useful if the access point network can reach the internet through my laptop. To do this you need to enable internet sharing on your Wi-Fi connection.

  1. Open the Network and Sharing Center by right-clicking on the network icon in the taskbar (near the clock).
    Network and sharing center
    The Network and Sharing Center shows two networks: your Wi-Fi network (highlighted in red) and your ad-hoc network (highlighted in green).
    Network connections
  2. Choose Wi-Fi (the connection to your Wi-Fi network and the internet). This brings up the Wi-Fi Status page.
  3. Choose Properties to bring up the properties of this connection and change to the Sharing tab.
  4. Tick the box next to Allow other network users to connect through this computer's Internet connection.
  5. Then choose the item from the Home networking connection: drop-down that matches your ad-hoc network (highlighted in green above).
  6. Click OK.
  7. Then Close.

4. Try It Out

On your iPad, connect to your new hosted network. It should be visible in the Wi-Fi section of the Settings app. The Wi-Fi name and password will be whatever you set <NetworkName> and <Password> to in step 1.

Then you should be able to use Safari to go to a webpage to ensure you have everything set up correctly.

AirServer Setup

AirServer is really good (use the 7-day trial if you need to – or buy it, it's not that expensive). Once you have installed it on Windows 10 it runs in the system tray, and there is very little other setup required. (Note: I've not tried the Miracast options yet, as my laptop doesn't have the required network drivers – I expect that is just as easy.)

On the iPad, drag the bottom menu up to reveal the AirPlay controls and choose your laptop's name.

Troubleshooting

  • Laggy? Try the Slow Network option in the AirServer settings
  • Can't connect? Make sure you're on the hosted network (not connected directly to your Wi-Fi)
  • Try rebroadcasting from AirServer

Shortcuts

I add some shortcuts to my Windows 10 Start menu (a programmatic alternative is sketched after the list):

  • A shortcut to AirServer to start it up.
  • start hosted network and stop hosted network are shortcuts in <user>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs
    • start is a shortcut to C:\Windows\System32\netsh.exe wlan start hostednetwork
    • stop is a shortcut to C:\Windows\System32\netsh.exe wlan stop hostednetwork
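If you would rather have a single helper than separate shortcuts, something like the following C# sketch can drive the same netsh commands (purely illustrative – the shortcuts above are all you actually need):

using System.Diagnostics;

class HostedNetworkHelper
{
  // Usage: HostedNetworkHelper.exe start   (or: stop)
  static void Main(string[] args)
  {
    string action = args.Length > 0 ? args[0] : "start";
    var startInfo = new ProcessStartInfo("netsh.exe", "wlan " + action + " hostednetwork")
    {
      UseShellExecute = true,
      Verb = "runas" // prompt for elevation if Windows requires it
    };
    Process.Start(startInfo).WaitForExit();
  }
}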

End Result

The end result: VentusAR Fly Through running on the iPad and mirrored to the laptop screen, with no requirement for Wi-Fi in someone else's office.

VentusAR App Running on an iPad, Mirrored to the Laptop Screen

Your Phone has Attitude!

The axis on a mobile device

Sorry, this post isn't about your phone or tablet's bad attitude and the way it doesn't let you do what you want – that's just working with Android. Instead, this post is about how we at Linknode use the sensors built into your device to understand the direction in which it is oriented, and how that can be used to do interesting things.

This is a core piece of technology we use within VentusAR. We have spent a lot of time and effort interfacing with the sensors within your devices. This experience and skill goes into several of our mobile apps to provide a more intuitive and useful mobile experience.

In this post, we'll talk about attitude (or geospatial orientation), sensors and sensor fusion, then show some example code for getting this attitude information on each of the major platforms. I'll write a follow-up post that digs more deeply into what sensor fusion is and how we have customised it in VentusAR to provide a better user experience in our augmented reality applications.

Attitude

To allow the device to present useful information about its surroundings, we need to know the direction the device is looking. This is key information for doing any proper augmented reality. The direction your device is looking is called 'the attitude' (or geographic orientation) of the device. In essence, this is a value that represents the rotation of the device in real-world coordinates. In mathematics, this rotation can be represented in a number of ways: as a quaternion, a rotation matrix, or three separate values for yaw, pitch and roll. We use a quaternion to represent this rotation because it is smaller, involves simpler maths and avoids known problems with rotation matrices – I'll cover that in a separate blog post some time.
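To illustrate the relationship between the representations, here is a small sketch using the standard textbook conversion (illustrative only, not VentusAR code) that recovers yaw, pitch and roll from a quaternion (x, y, z, w):

using System;
using System.Numerics;

public static class AttitudeMaths
{
  // Convert a unit quaternion to yaw / pitch / roll in radians,
  // using the common Z-Y-X (aerospace) rotation convention.
  public static (double Yaw, double Pitch, double Roll) ToYawPitchRoll(Quaternion q)
  {
    double yaw = Math.Atan2(2 * (q.W * q.Z + q.X * q.Y),
                            1 - 2 * (q.Y * q.Y + q.Z * q.Z));
    double sinPitch = 2 * (q.W * q.Y - q.Z * q.X);
    sinPitch = Math.Max(-1.0, Math.Min(1.0, sinPitch)); // guard against rounding drift
    double pitch = Math.Asin(sinPitch);
    double roll = Math.Atan2(2 * (q.W * q.X + q.Y * q.Z),
                             1 - 2 * (q.X * q.X + q.Y * q.Y));
    return (yaw, pitch, roll);
  }
}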

Sensors

Modern phones and tablets have lots of sensors in them that allow app developers to get an insight into the world around them. In terms of attitude, the ones we are interested in for this post are:

  • Compass – gives the direction of magnetic north in 3D space
  • Gyroscope – measures angular rotation, i.e. how far you have rotated the device
  • Accelerometer – measures the direction of gravity in 3D space

There are a couple of limitations of these sensors that are worth knowing about:

  • Digital compasses are very noisy and susceptible to interference, so the reading often jumps around during real-world use. This is down to the characteristics of the sensor – as an app developer, there is not much you can do about it.
  • Gyroscopes tend to drift. There is no real-world reference for the gyroscope; it is just measuring rotation. If you turned the device through a complete 360°, you would expect the gyroscope to report exactly that. Unfortunately it doesn't: after running for a while, it tends to drift.

For these reasons, some very clever people came up with the concept of sensor fusion.

Sensor Fusion

These sensors can be merged through software into a single “virtual” sensor using a process called Sensor Fusion. Many people have written in-depth articles about what Sensor Fusion is and how it works – but you may need a PhD to understand them. I think it is easiest to see it as a mathematical process that takes input from the three physical sensors (compass, gyroscope and accelerometer) and produces one unified quaternion representing the attitude of the device – the sketch after the diagram below illustrates the idea.

Sensor Fusion block diagram
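As a toy illustration of that idea, a basic complementary filter integrates the fast-but-drifting gyroscope and continuously nudges the result towards the slow-but-absolute estimate derived from the compass and accelerometer. This is only a sketch of the concept – it is not the algorithm your device (or VentusAR) actually uses; production implementations such as Kalman filters are far more sophisticated:

using System;
using System.Numerics;

public static class ComplementaryFilterSketch
{
  const float GyroWeight = 0.98f; // how strongly we trust the gyro on each update

  public static Quaternion Fuse(
    Quaternion previousAttitude,  // the fused attitude from the last update
    Vector3 gyroRadPerSec,        // angular velocity reported by the gyroscope
    Quaternion absoluteAttitude,  // attitude derived from compass + accelerometer
    float dtSeconds)              // time elapsed since the last update
  {
    // 1. Advance the previous attitude by integrating the gyro reading.
    float angle = gyroRadPerSec.Length() * dtSeconds;
    Quaternion delta = angle > 1e-6f
      ? Quaternion.CreateFromAxisAngle(Vector3.Normalize(gyroRadPerSec), angle)
      : Quaternion.Identity;
    Quaternion gyroAttitude = Quaternion.Normalize(previousAttitude * delta);

    // 2. Blend a little of the absolute estimate back in to cancel gyro drift.
    return Quaternion.Normalize(Quaternion.Slerp(absoluteAttitude, gyroAttitude, GyroWeight));
  }
}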

To provide a more detailed example: if you were standing in the northern hemisphere with the device perpendicular to the ground and facing the north pole (i.e. level on a tripod, facing a heading of 0 degrees), then rotated it to each of the headings below, the device's attitude would be (the sketch after the table shows where these numbers come from):

Heading   0°          45°          90°          180°
x          0           0            0            0
y         -1          -0.9238795   -0.7071068    0
z          0           0            0            0
w          0           0.3826834    0.7071068   -1
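Those values fall out of the half-angle form of a quaternion: for this particular setup the attitude follows the pattern (x, y, z, w) = (0, -cos(θ/2), 0, sin(θ/2)). Here is a quick sketch to reproduce the table (note that a quaternion q and its negation -q represent the same rotation, which is why the 180° column can legitimately appear as w = -1 rather than +1):

using System;

class AttitudeTableCheck
{
  static void Main()
  {
    foreach (double heading in new[] { 0.0, 45.0, 90.0, 180.0 })
    {
      double halfAngle = heading * Math.PI / 360.0; // θ/2 in radians
      Console.WriteLine("{0,4}°: x=0, y={1:F7}, z=0, w={2:F7}",
        heading, -Math.Cos(halfAngle), Math.Sin(halfAngle));
    }
  }
}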

How does it help?

As I said at the start, integration with the sensors is at the core of what we do at Linknode. We have several apps that read data from the sensors and provide a real-time view across a 3D world. We can pull in a real-world terrain model and show what the terrain looks like in a particular direction.

Implementations

Each device manufacturer / OS vendor provides their own implementation of sensor fusion within their devices. These are usually good enough for general or gaming purposes – they tend to emphasise speed of response over absolute accuracy. Below is some code that gets a quaternion out of the API provided by each OS.

All the code below is C#, as all the code we write is C#. For more information on running C# on iOS or Android, have a look at what Xamarin are up to.

Apple (iOS)

Apple provide the CMMotionManager class that can be used on iOS:

public class IOSSensorFusionExample
{
  public void Start()
  {
    CMMotionManager _motionManager = new CMMotionManager();
    NSOperationQueue _backgroundQueue = new NSOperationQueue(); // queue the updates are delivered on
    _motionManager.DeviceMotionUpdateInterval = 1.0 / 60; // request 60 updates a second (1/60 is integer division and would be zero)
    _motionManager.StartDeviceMotionUpdates(
      CMAttitudeReferenceFrame.XMagneticNorthZVertical,
      _backgroundQueue,
      delegate (CMDeviceMotion motionData, NSError error)
      {
        CMQuaternion cMQuatAttitude = motionData.Attitude.Quaternion;
        //do something useful with the quaternion here
      });
  }
}

(See the Xamarin API for more details.)

Android

Android provides the RotationVector sensor type, accessible through the SensorManager class:

public class AndroidSensorFusionExample : Activity, ISensorEventListener
{
  public void Start()
  {
    // GetSystemService is available because this example lives in an Activity
    var sensorManager = (SensorManager)GetSystemService(Context.SensorService);
    var defaultRotationVectorSensor = sensorManager.GetDefaultSensor(SensorType.RotationVector);
    sensorManager.RegisterListener(this, defaultRotationVectorSensor, SensorDelay.Game);
  }

  public void OnSensorChanged(SensorEvent e)
  {
    float[] q = new float[4];
    // GetQuaternionFromVector fills q as [w, x, y, z]; e.Values is an IList<float>
    SensorManager.GetQuaternionFromVector(q, e.Values.ToArray());
    Quaternion quaternion = new Quaternion(q[1], q[2], q[3], q[0]);
    //do something useful with the quaternion here
  }

  public void OnAccuracyChanged(Sensor sensor, SensorStatus accuracy)
  {
    //not needed for this example
  }
}

(See the Xamarin Android API and the Android docs for more information.)

Windows Phone

Windows Phone provides the Motion class:

Motion sensor = new Microsoft.Devices.Sensors.Motion();
sensor.CurrentValueChanged += (sender, args) =>
{
    var quaternion = args.SensorReading.Attitude.Quaternion;
    //do something useful with the quaternion here
};
sensor.Start();

(see MSDN for more details)

Windows 8

Windows 8 uses the OrientationSensor class:

var sensor = Windows.Devices.Sensors.OrientationSensor.GetDefault();
sensor.ReadingChanged += (sender, args) =>
{
    var quaternion = args.Reading.Quaternion;
    //do something useful with the quaternion here
};
sensor.ReportInterval = 16; // milliseconds – roughly 60 readings a second

(see MSDN for more details)