Category Archives: Xamarin

Staff Spotlight – George Banfill

Our latest team spotlight is our Technical Director George Banfill.

Who are you and what is your role at Linknode?
I’m George Banfill – husband, father, sailor, cyclist, geek and tired. (Comes of having a 3-year-old and a 4-month-old at home.)

What is your role at Linknode?
I oversee all the product software development and software releases we do as a company. I was employee number two back in August 2011 and have been building cross-platform mobile apps and supporting systems ever since. I’ve been involved in building and overseeing Windows Phone, Windows 8, Xamarin.Android and Xamarin.iOS development, as well as creating ASP.NET web sites to support them.

Staff Spotlight – Rufus Mall

Our latest team spotlight is our iOS guru, senior software engineer, Rufus Mall.

Who are you and what is your role at Linknode?

My name is Rufus Mall. I’ve been working as a Software Engineer at Linknode for around three and a half years now. The main skills I use at Linknode are graphics programming and iOS development. I spent the first three years of my time at Linknode working on VentusAR but have now moved on to our upcoming product UrbanPlanAR.

My favourite person in the world (alongside Steve Wozniak)

Team Spotlight – Stewart Fullerton

Next up for our Team Spotlight is another of our software engineers – Stewart Fullerton.

Who are you?

I am Stewart Fullerton

What is your role at Linknode and how long have you been working here?

I’ve been working as a Mobile Development Engineer at Linknode for nearly four years. I specialise in building Android and Windows Phone apps as well as developing commercial websites. I work in a team of five developers on Linknode’s products such as VentusAR, Album Flow, HistoryLens, MyStory and 3DTry.it. I am also responsible for building and maintaining the company’s websites; a few examples are ventusar.com, historylens.co.uk and urbanplanar.com.

What are the technologies that you use?

At Linknode we make the most of cutting-edge software development technology. The tool we use most frequently is Xamarin, which allows you to create iOS and Android apps using C#, part of Microsoft’s .NET Framework. Other technologies we use include SignalR, WCF, MonoGame, Google Cardboard, MVC, Azure, jQuery and many others.

What is your proudest moment in Linknode?

I think my proudest moment came in 2013, when our main product VentusAR was at the prototype stage and we were focused more on consumer apps. I was responsible for the development of Album Flow, an app that lets you browse your music using a flow of album art. The Windows 8 version was submitted to a contest, where it placed in the top 10 apps and we won prizes as a result.

How has Linknode helped you in your career development?

Before I joined Linknode I came from a web background – already fluent in web development, database design and so on – so I was looking for a job in that industry. I was, however, very intrigued by mobile app development, as I felt smartphones and tablets were the way of the future. Four years later I have new commercial experience in developing apps, and I also get the opportunity to use my old web development skills.

What do you do when you are not working?

I have been a keen music photographer for almost a decade now; I work for many music publications, covering concerts all over Scotland. If I’m not doing that, I’m either chilling in the flat or out with friends.

Any random facts you could share with us?

Cows drink milk……… oh wait….

What’s the last joke you recall?

Two fish are in a tank, and one says to the other “How on earth do you drive this thing?”

Xamarin acquired by Microsoft

Hi, it’s me again – Rufus. It is that time of the year again where I write a technical blog!

You may have read my previous post about the Apple technologies that were announced a few months ago. With the recent announcement that Xamarin has been acquired by Microsoft, I thought it was an appropriate time to share some of my thoughts with the world! This blog post will share a little bit of our history with Xamarin technologies – and some of our thoughts on the recent news.

How we got into Xamarin

When Linknode first dipped into the world of mobile applications, it was only natural for us to begin investigating the Windows Phone platform, due to our developers having a rich history with Windows desktop and server based technologies. After creating a number of Windows Phone applications to gain some experience with the intricacies of mobile development, we looked into expanding to the other platforms. Driven by our development history, our desire to build Augmented Reality applications and the small size of our development team, the following requirements were important to us:

  • Share as much code as possible
  • Provide the user with a “native” experience – each application should follow the idioms of the device it is running on
  • The ability to write high-performance, real-time applications
  • Ideally, the ability to reuse our .NET/C# skills and code

The above list of requirements is quite steep – but the first three are achievable with the tools available. However, back then most people thought C# was a Microsoft technology, unsupported on other platforms… or was it? This is where Xamarin comes in!

We started building some simple test applications to try out the Xamarin technologies and were highly impressed with the quality of the output, and also the lack of a steep learning curve.

Xamarin is based on the “Mono” runtime and allows you to write applications in C# for Android, iOS, Windows and Mac. Xamarin allowed us to share a large amount of code and make use of our experience with C# and the general richness of the large .NET-based APIs.

Some other options at the time were not sounding so promising. This is not to say switching to Xamarin was without issues. As an “early bird” user of Xamarin we had some problems, such as the primitive nature and instability of the development tools. However, we have seen the Xamarin toolchain go through various phases of re-branding and improvement, and it has now emerged as a stable and somewhat mature development platform. The Xamarin developer ecosystem is full of libraries and components that developers can use to accelerate development without sacrificing the experience for the end user. Another point of note is that if you are a native iOS/Android developer with little C# experience, as I was, switching to Xamarin is extremely easy. All the APIs and built-in frameworks you are familiar with are still there and easily accessed from C#.


Acquisition + conclusion

We are happy with our decision to go down the route of building cross-platform applications using Xamarin, and are pleased with the somewhat expected acquisition. We hope the recent news will enrich the Xamarin development community further – not only by generating more interest in the Xamarin toolchain and growing the size of the community, but also by breathing some new life into the Windows mobile space.

Either way, I am sure having the great experience of a company such as Microsoft behind the toolchain cannot be a bad thing. If any of you are deliberating whether to investigate Xamarin for your own projects, I strongly recommend you try it out!


Hackdays!

Last week, as a company, we had a hack day. Everyone (well, all the developers) stopped their normal work and had two days to see what they could produce.

The Brief

Produce an immersive Augmented or Virtual Reality experience.

The Prizes

  • A bluetooth wireless speaker
  • A mini drone
  • Pie Face (the kids game with skooshy cream)
  • Eternal respect and bragging rights in Linknode Towers

The Technology

Our standard developer kit is Xamarin, which we use to write cross-platform C# for Android and iOS. For this hack day, though, I wanted to give everyone the option to use whatever tools (software or hardware) they wanted. On offer were:

  • An Oculus Rift DK2
  • Google Cardboard
  • A DSLR camera and a Go Pro
  • iPhones (various different types)
  • Android phones (various different types)
  • Unity / MonoGame

The Starting Gun

The Entries

VentusAR Fly Through using WebGL on Google Cardboard / Oculus Rift


Experimenting with WebGL to create a Google Cardboard experience. This provides a view for each eye that, when viewed together, produces a stereoscopic effect. In theory the same technique could be used to show the VentusAR Fly Through on the Oculus Rift.

Google Street View on Google Cardboard

Can we provide an outdoor Virtual Reality experience using imagery from Google Street View, viewed while on site? This takes the GPS location from the device and contacts Google Street View to show photography from that location.

 

Surviving the blob onslaught (with Unity for Google Cardboard)

A simple survival game in which users, wearing the Google Cardboard headset, have to explore the virtual world. They are under sustained attack from blobs, which they must destroy using the magnetic switch on the side of the device. Built in Unity, so it is cross-platform; the game runs well on Android and iPhone devices.

Exploring panoramic photographs with the Oculus Rift

Could we take a 360° panoramic photograph and explore our way around it using the Oculus Rift? This would anchor the photograph in a real-world position and, as you move your head, show only the part within the current field of view. However, getting the Oculus Rift running in two days proved hard.

The Results

Ryan won the best experience prize with his ‘Surviving the blob onslaught’ game for Google Cardboard. He won the drone and eternal bragging rights within Linknode Towers (or at least until we do it all again sometime in 2016). Congratulations Ryan.

Minh and Rufus shared the most commercial possibility prize for their efforts in bringing existing VentusAR functionality to Google Cardboard and the Oculus Rift.

Conclusions

There are three overriding thoughts that we took away from this hack day:

  • The pixel resolution of a phone’s screen is visible when viewed through the Google Cardboard lenses.
  • Google Cardboard is crying out for other methods of input.
  • Getting anything to work with the Oculus Rift is hard.

In all, there are some interesting additions we could make to VentusAR to provide more immersive virtual reality experiences. Watch this space…

Sensor Fusion: Rolling your own

Last time, I wrote about sensors and sensor fusion. Over the last couple of years at Linknode, we’ve gained considerable practical experience with the sensors that are built into modern tablets. This blog post is a fairly technical explanation of some of that knowledge and understanding, and why we ended up doing what we did – if you don’t like maths, I suggest you skip this blog post.

I mentioned last time that most sensor fusion algorithms have been written with gaming in mind. Whatever anyone says, games are the things that push the boundaries of the hardware on mobile devices. There are two needs for speed on mobile devices these days – animating the transitions within apps and providing detailed 3D experiences within games. Mobile devices are not (yet) used for number crunching or other processor-intensive operations.

Problems

We have seen some specific problems with manufacturers’ implementations of sensor fusion. A lot of the algorithms prioritise rate of change and responsiveness over absolute accuracy. With VentusAR, we require accuracy foremost, as the visualisations we produce could end up under expert scrutiny. One of our early clients said that they would accept a tolerance of +/- 2° from the compass (and much less in the other sensors). We provide calibration tools to allow sensors to get within a tolerance of 0.1°. We found two main problems with the default sensor fusion algorithms:

  • Jitter – the compass / fusion on some Android tablets would jitter unacceptably. The terrain model would jump by +/- 10° while the device was sitting on the table.
  • Inaccuracy – the wireline could be a few degrees out when trying to align it to real-world terrain. This could sometimes be corrected by rotating the device in 3D and then pointing it at the view again.

Rolling your own

Rolling your own sensor fusion isn’t too hard; we started with some basic requirements:

  • Smooth – the sensor fusion algorithm should produce a smooth output. If the device is sitting stationary on a table, the inputs should not “jitter”. For example, when using My View, the terrain should be accurate and not jitter.
  • Accurate – prioritise the input from the compass over all other inputs. We should be using the other sensors to smooth and enhance the compass, not using the compass to provide stability to the gyroscope.
  • Fast – the sensor data is read from the device either 50 or 60 times a second (50 on Android, 60 on iOS). This means the CPU has 16 to 20ms to process each packet of sensor data to ensure that we keep up.

With these requirements in mind, we set about writing our own sensor fusion algorithm. The pseudo code for the algorithm we ended up with looks a little like this:

  • Calculate a ‘smoothed heading’
    • Calculate the change between the current heading value and the last heading value
    • If change > THRESHOLD
      • smoothedHeading = lastHeading + LARGE_OFFSET
    • Else
      • smoothedHeading = lastHeading + SMALL_OFFSET
  • Maximise the accuracy of the compass component
    • Remove the current heading component from the output of the manufacturer’s sensor fusion
    • Multiply by the smoothed heading from above
  • Normalise

When smoothing, if we have a big change in magnetic heading (i.e. if change > THRESHOLD), the smoothed heading will respond to it quickly; small changes, however, are smoothed over by the small offset. The values for THRESHOLD, LARGE_OFFSET and SMALL_OFFSET are device specific and were found by running appropriate testing on each device.
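To make that concrete, here is a minimal C# sketch of the smoothing step. It is an illustration only: it treats LARGE_OFFSET and SMALL_OFFSET as blending weights applied to the measured change, and the constant values shown are hypothetical – the real values are device specific, as noted above.

public class HeadingSmoother
{
  // hypothetical values – the real ones are device specific and found by testing
  private const float Threshold = 5.0f;    // degrees
  private const float LargeOffset = 0.5f;  // blending weight for big changes
  private const float SmallOffset = 0.05f; // blending weight for small changes

  private float _lastHeading;

  public float Smooth(float currentHeading)
  {
    // wrap the difference into [-180, 180) so 359° -> 1° counts as a small change
    float change = ((currentHeading - _lastHeading + 540f) % 360f) - 180f;

    // big changes respond quickly; small ones are smoothed away
    float weight = Math.Abs(change) > Threshold ? LargeOffset : SmallOffset;

    // apply the weighted change and normalise back into [0, 360)
    _lastHeading = ((_lastHeading + change * weight) % 360f + 360f) % 360f;
    return _lastHeading;
  }
}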

Results

To give an example of how the smoothing function of our sensor fusion works, I exported the raw data to Excel and did some analysis on it. The graph below shows the input (raw heading) plotted next to the output of the sensor fusion: the input to the sensor fusion classes is in blue and the output is in orange. The x axis shows the number of data points (we’re receiving approximately 50 per second, so 2000 data points covers about 40 seconds). The y axis shows the heading of the device (values between 0° and 360°).

40 seconds of smoothing data from Linknode Sensor fusion implementation

This shows our smoothing functions working correctly:

  • The peaks and troughs on the graph are less extreme
  • Curves are smoother, so there is less jitter
  • The peak is delayed by approximately 15 samples. At roughly 50 samples a second this equates to approximately 0.3s, which we have decided is acceptable performance.

Comparison
Below are two crops of the My View function of VentusAR. The left-hand video (red wireline) shows the jitter seen in v2.1, while the right-hand video (black wireline) shows much less jitter in v2.2.

Our custom sensor fusion has been a considerable piece of work at Linknode, which we hope is useful for other people who want to understand the way the sensors work on these types of devices.

Your Phone has Attitude!

The axes on a mobile device

Sorry, this post isn’t about your phone’s or tablet’s bad attitude and the way it doesn’t let you do what you want – that’s just what working with Android does. Instead, this post is about how we at Linknode use the sensors built into your device to understand the direction it is oriented in, and how that can be used to do interesting things.

This is a core piece of technology we use within VentusAR. We have spent a lot of time and effort interfacing with the sensors within your devices. This experience and skill goes into several of our mobile apps to provide a more intuitive and useful mobile experience.

In this post, we’ll talk about attitude (or geospatial orientation), sensors and sensor fusion, then show some example code of how to get this attitude information on each of the major platforms. I’ll write a follow up post that will dig more deeply into what sensor fusion is and how we have customised it in VentusAR, to provide a better user experience in our augmented reality applications.

Attitude

To allow the device to present useful information about its surroundings, we need to know the direction the device is looking. This is key information that you must have to do any proper augmented reality. The direction your device is looking is called ‘the attitude’ (or geographic orientation) of the device. In essence, this is a value that represents the rotation of the device in real-world coordinates. In mathematics, this rotation can be represented in a number of ways: a quaternion, a rotation matrix or three separate values for yaw, pitch and roll. We use a quaternion to represent this rotation because it is smaller, involves simpler maths to work with and avoids known problems with rotation matrices – I’ll cover that in a separate blog post some time.
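As a quick illustration of the representation (using the standard System.Numerics types, not our VentusAR code), a yaw/pitch/roll rotation packs down into just four floats:

using System;
using System.Numerics;

class AttitudeExample
{
  static void Main()
  {
    // a device level on a tripod, yawed 90° from its reference heading
    Quaternion attitude = Quaternion.CreateFromYawPitchRoll(
      yaw: (float)(Math.PI / 2), pitch: 0f, roll: 0f);

    // four values (x, y, z, w) instead of a nine-value rotation matrix
    Console.WriteLine(attitude);
  }
}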

Sensors

Modern phones and tablets have lots of sensors in them – they allow app developers to get an insight into the world around the device. In terms of attitude, the ones we are interested in for this post are:

  • Compass – gives the direction of magnetic north in 3D space
  • Gyroscope – measures angular rotation, i.e. how far you have rotated the device
  • Accelerometer – measures the direction of gravity in 3D space

There are a couple of limitations of these sensors that are worth knowing about:

  • Digital compasses are very noisy and susceptible to interference, so they often jump around during real-world use. This is down to the characteristics of the sensor – as an app developer, there is not much you can do about it.
  • Gyroscopes tend to drift. There is no real-world reference for the gyroscope; it is just measuring rotation. If you did a complete 360° turn, you would expect the gyroscope to report the same attitude as when you started. Unfortunately it doesn’t: after running for a while, it tends to drift.

For these reasons, some very clever people came up with the concept of sensor fusion.

Sensor Fusion

These sensors can be merged in software into a single “virtual” sensor using a process called sensor fusion. Many people have written in-depth articles about what sensor fusion is and how it works – but you may need a PhD to understand them. I think it is easiest to see it as a mathematical process that takes input from the three physical sensors (compass, gyroscope and accelerometer) and produces one unified quaternion representing the attitude of the device.

Sensor Fusion block diagram
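To sketch the idea in code (this is a textbook one-dimensional complementary filter, not what any OS vendor actually ships – real implementations are far more sophisticated and work on full 3D quaternions), fusing a gyroscope with a compass can be as simple as:

public class ToyHeadingFusion
{
  // trust the gyro in the short term, let the compass correct drift long term
  // (0.98 is a hypothetical weight, and the 0°/360° wrap-around is ignored here)
  private const float GyroWeight = 0.98f;

  private float _fusedHeading;

  public float OnSample(float gyroDeltaDegrees, float compassHeadingDegrees)
  {
    _fusedHeading = GyroWeight * (_fusedHeading + gyroDeltaDegrees)
                  + (1f - GyroWeight) * compassHeadingDegrees;
    return _fusedHeading;
  }
}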

To provide a more detailed example: if you were standing in the northern hemisphere with the device perpendicular to the ground (i.e. level on a tripod), starting at a heading of 0 degrees (facing the north pole) and rotating through various headings, the device’s attitude would be:

Heading    0°     45°           90°           180°
x          0      0             0             0
y          -1     -0.9238795    -0.7071068    0
z          0      0             0             0
w          0      0.3826834     0.7071068     -1
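For the curious, the columns follow a simple pattern: in this reference frame each attitude is a rotation about the device’s y axis. The sketch below is my own derivation from the table, not library code, and reproduces the values up to sign (remember that a quaternion q and its negation -q describe the same rotation, which is why the 180° column can show -1 rather than 1 for w):

using System;
using System.Numerics;

class HeadingToAttitude
{
  // a heading of h degrees corresponds to a rotation of -(180 - h) degrees
  // about the y axis
  static Quaternion AttitudeForHeading(double headingDegrees)
  {
    double angle = -(Math.PI - headingDegrees * Math.PI / 180.0);
    return new Quaternion(
      0f, (float)Math.Sin(angle / 2), 0f, (float)Math.Cos(angle / 2));
  }

  static void Main()
  {
    // matches the 45° column: x = 0, y = -0.9238795, z = 0, w = 0.3826834
    Console.WriteLine(AttitudeForHeading(45));
  }
}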

How does it help?

As I said at the start, integration with the sensors is at the core of what we do at Linknode. We have several apps that read data from the sensors and provide a real-time view across a 3D world. We can pull in a real-world terrain model and show what the terrain looks like in a particular direction.

Implementations

Each device manufacturer / OS vendor provides their own implementation of sensor fusion within their devices. These are usually good enough for general or gaming purposes – they tend to emphasise speed of response over absolute accuracy. Below I have shown some code that allows you to get a quaternion out of the API provided by each OS.

All the code below is C#, as all the code we write is C#. For more information on running C# on iOS or Android, have a look at what Xamarin are up to.

Apple (iOS)

Apple provides the CMMotionManager class, which can be used on iOS.

public class IOSSensorFusionExample
{
  private readonly CMMotionManager _motionManager = new CMMotionManager();
  private readonly NSOperationQueue _backgroundQueue = new NSOperationQueue();

  public void Start()
  {
    // note: 1/60 would be integer division (zero); 1.0/60 requests 60 updates a second
    _motionManager.DeviceMotionUpdateInterval = 1.0 / 60;
    _motionManager.StartDeviceMotionUpdates(
      CMAttitudeReferenceFrame.XMagneticNorthZVertical,
      _backgroundQueue,
      delegate (CMDeviceMotion motionData, NSError error)
      {
        CMQuaternion cMQuatAttitude = motionData.Attitude.Quaternion;
        //do something useful with the quaternion here
      });
  }
}

(See the Xamarin API for more details)

Android

Android provides the RotationVector sensor type accessible from their SensorManager class:

public class AndroidSensorFusionExample : Java.Lang.Object, ISensorEventListener
{
  private readonly SensorManager _sensorManager;

  public AndroidSensorFusionExample(Context context)
  {
    // GetSystemService lives on Context, so we take a Context rather than
    // calling this.GetSystemService on a plain Java.Lang.Object
    _sensorManager = (SensorManager)context.GetSystemService(Context.SensorService);
  }

  public void Start()
  {
    var defaultRotationVectorSensor = _sensorManager.GetDefaultSensor(SensorType.RotationVector);
    _sensorManager.RegisterListener(this, defaultRotationVectorSensor, SensorDelay.Game);
  }

  public void OnSensorChanged(SensorEvent e)
  {
    float[] q = new float[4];
    SensorManager.GetQuaternionFromVector(q, e.Values.ToArray());
    // GetQuaternionFromVector fills q as (w, x, y, z); reorder to (x, y, z, w)
    Quaternion quaternion = new Quaternion(q[1], q[2], q[3], q[0]);
    //do something useful with the quaternion here
  }

  public void OnAccuracyChanged(Sensor sensor, SensorStatus accuracy)
  {
    // required by ISensorEventListener; nothing to do here
  }
}

(see the Xamarin Android API and Android docs for more information)

Windows Phone

Windows Phone provides the Motion class:

Motion sensor = new Microsoft.Devices.Sensors.Motion();
sensor.CurrentValueChanged += (sender, args) =>
{
    var quaternion = args.SensorReading.Attitude.Quaternion;
    //do something useful with the quaternion here
};
sensor.Start();

(see MSDN for more details)

Windows 8

Windows 8 uses the OrientationSensor class:

var sensor = Windows.Devices.Sensors.OrientationSensor.GetDefault();
sensor.ReportInterval = 16; // milliseconds – roughly 60 updates a second
sensor.ReadingChanged += (sender, args) =>
{
    var quaternion = args.Reading.Quaternion;
    //do something useful with the quaternion here
};

(see MSDN for more details)