Meet the Team - Intern Edition Vol. 3

Silvia Pichler

Not the typical internship

Brewing coffee for the other team members, spending most of your time copying files, and killing the rest stalking your friends on Facebook - if that's what your internship looks like, you should definitely spend your time more wisely. For instance by doing actually useful things at! Susanne Fercher, our research intern, tells you about her working experience at and all the cool stuff she learned.

Quick introduction

Hi, my name is Susanne, though most people call me Susi. I am a student of „Verkehr und Umwelt" (Traffic and Environment) at the UAS Technikum Wien, specialising in Intelligent Traffic Systems, i.e. Telematics.

In my final term this spring/summer I had to complete a 14-week full-time internship. Since Indoor Navigation is an expanding field of telematics with exciting possibilities, I was very happy to join the research team for three months and learn more about actual practices and the little everyday challenges that can come up.

My project

My internship project involved using a device's built-in sensors to essentially figure out whether the person carrying the device is currently walking or not, a task known as context detection. This was done with a substantial number of self-administered context recordings, amazing machine learning techniques, handy Python tools, and several rounds of trial and error.

The goal

The goal was to eventually distinguish between three basic contexts: stationary, active, and walking. Doing so provides valuable information for another part of the navigation framework, namely PDR, which stands for Pedestrian Dead Reckoning. PDR delivers a position estimate based on the number of steps taken and changes of direction. Its accuracy can be substantially increased by reliable context detection.
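The step-and-heading update at the heart of dead reckoning can be sketched in a few lines of Python. This is a minimal illustration of the idea, not code from the actual framework; pdr_update, the constant step length, and the coordinate convention are all assumptions:

```python
import math

def pdr_update(position, heading_rad, step_length=0.7):
    """Advance the position estimate by one detected step.

    position    -- (x, y) tuple in metres
    heading_rad -- walking direction in radians (0 = east, pi/2 = north)
    step_length -- assumed constant stride length in metres (hypothetical)
    """
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# Walk four steps due north: each step adds step_length to y.
pos = (0.0, 0.0)
for _ in range(4):
    pos = pdr_update(pos, math.pi / 2)
# pos is now roughly (0.0, 2.8)
```

Each step only moves the estimate if a step was actually detected - which is exactly why knowing whether the user is walking at all makes such a difference.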

My typical day at

It is hard to generalise my activities because the task evolved, and I dabbled in various areas over the course of the internship.
Generally speaking, a typical day would definitely involve some quantity of coffee, Python code, machine learning tutorials, recorded sensor data, wacky plots, and maths. There was always somebody from the team there to help if needed, and communication within the company was open and often entertaining.

The steps I took

After setting up my accounts and workspace, I embarked on a series of recording sessions and learned how to plot the collected data in a meaningful way. The following picture shows the acceleration data of an example recording. The vertical blue lines mark the beginning of a new context (in this instance, the order is stationary-active-walking-active-stationary), the horizontal black lines mark the quartiles of each segment, and the red lines the mean and standard deviation.
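The per-segment statistics described above (mean, standard deviation, quartiles) can be computed with Python's standard library alone. A small sketch with made-up acceleration magnitudes; segment_stats is a hypothetical helper, not part of the internship codebase:

```python
import statistics

def segment_stats(samples):
    """Summarise one context segment of acceleration magnitudes (m/s^2)."""
    q1, q2, q3 = statistics.quantiles(samples, n=4)  # the three quartiles
    return {
        "mean": statistics.mean(samples),
        "std": statistics.pstdev(samples),  # population standard deviation
        "quartiles": (q1, q2, q3),
    }

# Invented values for a near-stationary segment (gravity plus a little noise).
stats = segment_stats([9.6, 9.8, 9.7, 9.9, 9.8])
# stats["mean"] is 9.76
```

A stationary segment shows a mean close to gravity and a tiny spread; a walking segment shows a much larger standard deviation - which is what makes these summaries useful features.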

[Image: sensor_output]

Then, the concept of a sliding window was introduced: a time window of fixed size is slid along the data, with statistical values constantly being calculated for each window position. The larger the window (see colour legend), the "smoother" the output, at the cost of significant lag and loss of information.
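The sliding window itself can be sketched in plain Python. The function below is an illustrative stand-in for the actual tooling; the window size and sample values are arbitrary:

```python
import statistics

def sliding_stats(data, window):
    """Mean and standard deviation over a window slid one sample at a time.

    A larger window gives smoother output, but the statistics lag further
    behind the signal and short events get averaged away.
    """
    out = []
    for i in range(len(data) - window + 1):
        segment = data[i:i + window]
        out.append((statistics.mean(segment), statistics.pstdev(segment)))
    return out

# Toy signal: a quiet stretch followed by a more active one.
smooth = sliding_stats([0, 1, 0, 1, 4, 5, 4, 5], window=4)
# 5 windows; the first has mean 0.5, the last has mean 4.5
```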

[Image: sliding_window]

After trying a wide range of approaches, I finally settled on Support Vector Machines (SVMs), a machine learning method that trains on labelled data (in our case, sensor output along with the associated context) and can then automatically classify new test data with good accuracy.
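A toy version of this training-and-classification step, assuming scikit-learn and two hypothetical features per window (mean and standard deviation of the acceleration magnitude); the numbers are invented for illustration and are not the actual training data:

```python
from sklearn.svm import SVC

# Hypothetical training features: [mean acceleration, std-dev] per window.
# Stationary windows cluster tightly around gravity; walking windows
# show a much larger spread.
X_train = [[9.8, 0.1], [9.7, 0.2], [9.9, 0.1],   # stationary
           [9.5, 2.5], [10.2, 3.0], [9.9, 2.8]]  # walking
y_train = ["stationary"] * 3 + ["walking"] * 3

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify an unseen window with a large spread.
pred = clf.predict([[9.8, 2.7]])[0]
# pred is "walking"
```

The RBF kernel lets the SVM separate the two clusters even when no straight line in feature space would do.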


The above picture shows the training data in the top left, with data recorded in a walking context coloured red. The top right shows a set of test recordings, the colours signifying the known states (for comparison) and the shapes around the dots signifying the SVM classification. After this step, the data classified as walking is removed from the dataset, and the process is repeated for the stationary context on the remaining data (shown in the bottom half of the picture).
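This crop-and-repeat procedure amounts to a small classification cascade. A sketch in plain Python, with simple threshold rules standing in for the trained SVMs (the rules, thresholds, and feature values are all hypothetical):

```python
def cascade_classify(windows, is_walking, is_stationary):
    """Two-stage cascade: first separate out walking windows,
    then split the remainder into stationary and active."""
    labels = []
    for w in windows:
        if is_walking(w):
            labels.append("walking")
        elif is_stationary(w):
            labels.append("stationary")
        else:
            labels.append("active")
    return labels

# Each window is (mean, std) of acceleration magnitude; the lambda rules
# below mimic what the two trained classifiers would decide.
labels = cascade_classify(
    [(9.8, 0.1), (9.9, 2.8), (9.7, 1.0)],
    is_walking=lambda w: w[1] > 2.0,
    is_stationary=lambda w: w[1] < 0.5,
)
# labels == ["stationary", "walking", "active"]
```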

To illustrate how well the SVM actually works, the final picture below shows a plot of several statistical values of the sensor output over time (using a predefined time window of a few seconds), along with the detected states (red = stationary, green = walking). For comparison, the semi-transparent horizontal lines signify the recorded states over time (0 = stationary, 1 = active, 2 = walking).


What I learned from my internship at

I was a true Python newbie before my internship, and while there is still so much to learn, I have definitely begun to find my feet with it. I also know much more about machine learning than I did before, and I've even learned a thing or two about Beacons, although they were not directly involved in my line of work (except for the time I helped install several dozen of them in Lugner City, but that's another story...).
But apart from the obvious stuff related to my task, I can safely say that I learned something new almost every day. The research team is bustling with all sorts of knowledge, and from AeroPress (look it up!) to tar-flavoured liquorice to the Zen of Python, questions were answered that I wouldn't even have known to ask.

Text written by Susanne
Photo by Max

Read about other interns and their experiences
Agnese Sacchi
Paula Mayerhofer