
QGIS TimeManager, and how I got help from the QGIS community to take a big step forward

By Søren Zebitz Nielsen, University of Copenhagen, 11 March 2015

This blog post is the story of how a very quick response on GitHub to my request for fixing an issue with millisecond support in the TimeManager QGIS plugin (1, 2, 3) led to a lunch meeting, which led to brainstorming on ideas for new features for the plugin, enabled me to produce some much-needed visualizations of my tracking data for my ongoing PhD project, and will eventually result in the first face-to-face meeting between the two developers of TimeManager, Underdark, a.k.a. Anita Graser (1, 2), and Carolinux, a.k.a. Ariadni-Karolina Alexiou, at the QGIS User and Developer Conference in Copenhagen (May 18th – 22nd).

I have decided to write this blog post in English even though the other blog posts on qgis.dk are in Danish. The reason is that I would like our international guests at the upcoming conference to be able to read the story as well, and to thank them for the great work they do on QGIS and all the plugins. Apart from telling the story and praising QGIS and its developers, I hope this post can inspire and give some useful tricks to aid people's work in making great visualizations of time-enabled geospatial data. QGIS TimeManager is certainly the weapon of choice for that. Before telling the story I will give a short introduction to my PhD project and the data that I need to visualize. You can skip this if you just want to read about what was done in QGIS and TimeManager.

The PhD project

In my ongoing PhD project on “Human Movement Patterns in Public Spaces” at the Department of Geosciences and Natural Resource Management (IGN) at the University of Copenhagen, I am working with tracking of pedestrians and cyclists using thermal cameras. Understanding human movement patterns is an important factor in planning for smarter and greener cities for people, and to do so we need data. The project is inspired by the seminal work of William H. Whyte on ‘The Social Life of Small Urban Spaces’ (1980). The ambition is to be able to track the movement of all individual pedestrians and cyclists in a public space over a sustained period of time, analyze movement patterns and characteristic behaviors, and extract movement parameters. The generated trajectories are analyzed using GIS tools and methods.

Why do we use thermal cameras, you might ask? Simply because we cannot identify individuals in the thermal images. We can only see the heat signature emitted from objects and people in the scene. In essence, we are merely taking the temperature of the public life in terms of the movement taking place in the plaza. That means that there are no legal issues in relation to privacy. After all, we are not advocating a surveillance society. We just want to find a way to study movement patterns in a place over time, and the information that can be gained from such a large collection of trajectories, in order to understand how a place functions and how people use it.

Thermal cameras also have the advantage over normal cameras that they are independent of the lighting conditions in the scene, and that people cast no shadows in thermal images. Shadows are important to consider when using Computer Vision tracking, as tracking algorithms can confuse shadows with the contours of persons. Furthermore, bad lighting conditions can degrade the performance of the algorithms, but that problem is not an issue with thermal imaging.

Computer Vision tracking

For the Computer Vision part of my project I have collaborated with Rikke Gade, PhD, and her supervisor, Professor Thomas B. Moeslund, from the Visual Analysis of People Lab (VAP) at Aalborg University. They provided access to the thermal camera equipment and developed the tracking algorithms used. The collaboration was done as a side project during Rikke's PhD project on tracking people in indoor sports arenas to understand how these are used over time in order to optimize booking of such facilities. After reading some of her papers I decided to contact her and ask if we should try to test the camera and tracking technology in outdoor public spaces to capture tracks for my project. She liked the idea and we decided to do a study at the Kultorvet plaza in central Copenhagen. We did this with a single thermal camera and captured two different views with a frame rate of 30 frames per second. The first view was from a near-nadir camera position and the other from a wide-angle view overlooking the plaza. The recordings were captured consecutively with the same camera. We relate the pixels of the video images to real-world coordinates by a homography matrix, which we calibrate using control points in the scene measured with high-precision GPS surveying equipment.
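
For readers who are curious about the homography step, here is a minimal Python sketch of mapping an image pixel to world coordinates. The matrix values and the sample pixel are purely hypothetical placeholders; the actual calibration and transformation in the project was done in the tracking software, not with this code.

```python
import numpy as np

# Hypothetical 3x3 homography matrix mapping image pixels to world
# coordinates (e.g. meters in a local planar system). In practice it is
# estimated from surveyed control points in the scene.
H = np.array([
    [0.012, -0.003, -4.1],
    [0.001,  0.015, -7.8],
    [0.0,    0.0001, 1.0],
])

def pixel_to_world(u, v, H):
    """Map an image pixel (u, v) to world coordinates via homography H."""
    p = np.dot(H, np.array([u, v, 1.0]))   # homogeneous image point
    return p[0] / p[2], p[1] / p[2]        # normalize by the third coordinate

x, y = pixel_to_world(320, 240, H)  # hypothetical pixel position
print(x, y)
```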

The need for ground truth annotation

To be able to assess the accuracy and completeness of the tracks generated by the Computer Vision technology I needed to obtain the ground truth for a sample of the recorded tracks. I therefore decided to manually annotate five minutes of each of the two thermal videos to get the ground truth tracks for all people in the plaza in these periods. To do this I needed software with an interface to digitize tracks in videos and convert the tracks into real-world coordinates by applying the same concept of homography as used in the Computer Vision algorithm. Some research into this subject led me to the work of Aliaksei Laureshyn and his colleagues in the research group on video analysis in traffic at Lunds Tekniska Högskola (LTH) at the University of Lund in Sweden. They have developed a program called T-analyst to aid their research in the field of traffic environments and road user behavior. The T-analyst program was a perfect fit for my purpose. Aliaksei kindly gave me an introduction to the software and helped me set it up for my project. The research group at LTH has decided to make T-analyst freely available online, so you can try it out if you want. Before you go about digitizing ground truth tracks from video, be aware that it is a very labor-intensive process. Annotating 5 minutes of video for my study took about a week of tedious work. It gave between 350 and 400 tracks per video, with a spatial accuracy between 10 and 30 centimeters depending on the distance from the camera. When consistent tracking data for a sustained period of time is needed, there is therefore no way around an automated approach like Computer Vision.

Computer Vision technology still cannot track people perfectly in all situations, as it has difficulties when people occlude each other in the camera view. This can cause the tracking algorithms to lose track and switch IDs of the tracked individuals, or to assign a new ID to the same individual when a new track of that person is picked up again. Having said that, Computer Vision is a research field where very rapid developments are taking place these years, and the algorithms keep getting better and better. For inspiration in relation to Computer Vision applied to pedestrian tracking, there are some start-ups, such as the New York based Placemeter and the Swiss based VisioSafe, that are worth following. Despite the rapid development in the field, I have not yet seen any research that mentions analysis of Computer Vision tracking data using GIS and spatio-temporal databases. This is despite the fact that tracking data is basically an ID, X, Y and a timestamp, T, for each track point, which is in fact spatio-temporal point data well suited for analysis in GIS. I hope my work can help fill this gap.

Importing data to GIS

Before my data could be analyzed in GIS I needed to format it accordingly. The output formats I got from T-analyst and the Computer Vision tracking program were not directly readable by GIS. Furthermore, the T-analyst format had only sequential frame numbers instead of timestamps, so I needed to calculate timestamps based on the video start time and the frame rate. The parsing of the data was done in a Python script I wrote with my supervisor, Hans Skov-Petersen. The script also calculates statistics for each track as well as some movement parameters such as speed and sinuosity. It writes simple CSV files that can be loaded directly into GIS programs as well as imported into a PostGIS database, which is what I use to store and query my data. I owe much to Bo Victor Thomsen, Aestas-GIS, who has written the blog posts on spatial databases on qgis.dk, for helping me learn how to work with PostGIS.
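
The script itself is not part of this post, but the core idea of deriving timestamps from frame numbers and computing a simple movement parameter can be sketched in a few lines of Python. The start time, file name and column names below are hypothetical examples, not the actual ones used in the project.

```python
from datetime import datetime, timedelta
import csv
import math

FPS = 30
VIDEO_START = datetime(2013, 9, 12, 12, 0, 0)  # hypothetical recording start time

def frame_to_timestamp(frame_no, start=VIDEO_START, fps=FPS):
    """Convert a sequential frame number to an absolute timestamp."""
    return start + timedelta(seconds=frame_no / float(fps))

def speed(p1, p2, dt_seconds):
    """Speed in m/s between two track points (x, y) in world coordinates."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / dt_seconds

# Write a GIS-friendly CSV with a track ID, coordinates and a timestamp column.
with open("trackpoints.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["track_id", "x", "y", "dt"])
    writer.writerow([1, 12.34, 56.78,
                     frame_to_timestamp(30).strftime("%Y-%m-%d %H:%M:%S.%f")])
```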

Working with TimeManager

Having my data stored as time-enabled point layers in PostGIS, I can now tell the story about QGIS TimeManager that I started out with to motivate this blog post. I wanted to be able to dynamically visualize my datasets as moving dots with short trailing tracks a few seconds back in time. To get inspiration on how to do that I read the QGIS planet spatio-temporal data blog, and in particular Anita Graser's blog post on how to make ‘Nice Animations with Time Manager’s Offset Feature’. I adopted her idea of using a ‘Forever’ field with a date-time far out in the future and setting it as the ‘end’ field in TimeManager to display trailing tracks permanently in the scene. I tweaked this idea a little in the sense that I also added a ‘breadcrumbs’ field to my data, which I set to be 5 seconds after my ‘dt’ (timestamp without time zone) field. When I used this field as the ‘end’ field I got the effect of a short trailing track after my data points.
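
As a small illustration of the ‘breadcrumbs’ idea, the 5-second offset looks like this in Python; in my setup the field lives in the PostGIS table, and the timestamp below is just a hypothetical example.

```python
from datetime import datetime, timedelta

# Hypothetical track point timestamp. The 'breadcrumbs' value is simply the
# timestamp shifted 5 seconds into the future; used as TimeManager's 'end'
# field it keeps the point visible for 5 seconds after it occurs.
dt = datetime(2013, 9, 12, 12, 0, 1, 33000)
breadcrumbs = dt + timedelta(seconds=5)
print(dt, "->", breadcrumbs)
```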

[Figure: TimeManager setup]

To distinguish between the trailing tracks and the data points themselves, I simply add two versions of the same data layer to TimeManager, one having no ‘end’ field set and the other having the ‘breadcrumbs’ field as the ‘end’ field. I set a different style for the two layers in the layer properties, with the data points defined to be larger than the trailing track points and placed above them in the layer stack. When I want to highlight specific tracks, as in a situation I have in my data with a ‘facer’ (colored orange) working in the street, trying to stop people to sell them something or persuade them to donate to a cause, I export that track to its own layer, assign a different color to it, and add it again on top of the other layers. I have used the same technique for the tracks of the people that the facer approaches, by coloring these blue. See the figure below. The light green polygons in the frame represent buildings, and the grey polygons are features and obstacles in the street that people need to avoid.

[Figure: Screenshot of the ‘facer’ sequence]

The millisecond issue

Since my tracking data is extracted from video at 30 fps, I have data points at the sub-second fractions .033, .067 and .100. To begin with I played with the data and a Time frame size set to 1 second. This gave me 30 points displayed along each trajectory for each frame, which was not good enough for my needs. I wanted to view one point per trajectory per animation frame, corresponding to one point for each trajectory for each video frame. To be able to do so I needed support for milliseconds in TimeManager.

A quick look at TimeManager's GitHub page revealed an issue about a known bug with support for sub-second time periods. I wrote a comment on the issue explaining what I needed to be able to do, and asked if the milliseconds bug could perhaps be fixed in a future version of TimeManager. Less than half an hour later Carolinux responded to my request and asked for a sample of my data so she could work on resolving the issue. Some mailing back and forth revealed the funny coincidence that we are both based in Zürich at the moment. Carolinux, who is a computer scientist from ETH, is working for a software company in the city. I am in Zürich for three months as a visiting PhD student in the GIScience research group led by Prof. Dr. Robert Weibel at the Department of Geography at the University of Zürich (UZH). Being in the same city, we decided to meet for lunch later the same week to discuss solutions to the problem and ideas for new features in TimeManager. Carolinux also wanted to learn more about my PhD project, and for me a meeting would be a great opportunity to learn more about QGIS and plugin development, and to get to know how to use GitHub properly, since I am a newcomer to that.

Already the next day I received an email from Carolinux saying that she had solved the millisecond issue and provided a fix for it. The problem had to do with TimeManager detecting the minimum time value in the ‘dt’ (timestamp without time zone) field in my datasets as a string without trailing zeros for the sub-second part of the time format. It did so even though the trailing zeros were specifically written out in the CSV files produced by the Python script. TimeManager therefore took the time format to be %Y-%m-%d %H:%M:%S instead of %Y-%m-%d %H:%M:%S.%f. One cannot simply force %Y-%m-%d %H:%M:%S.%f for all time values, since a date string like “2014-01-01 00:01:05” cannot be parsed with microseconds – it is simply invalid in Python. The workaround was to make a text representation of my ‘dt’ field, called ‘dtstr’, and force it to have all trailing zeros for the sub-second part. In TimeManager, ‘dtstr’ should then be used as the ‘start’ time field instead, as TimeManager just reads it in as a string anyway and detects the time format based on that string. The implementation was done in the SQL code I use to create my tables in PostGIS, as explained below.
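
The principle behind the workaround can be illustrated with a couple of lines of Python, even though the actual implementation was done in the SQL shown in the figure below: the %f directive always writes out the fractional digits, so even whole-second values get an explicit sub-second part that TimeManager can detect.

```python
from datetime import datetime

dt = datetime(2014, 1, 1, 0, 1, 5)  # a timestamp whose sub-second part is zero

# strftime with %f always emits six fractional digits, e.g.
# "2014-01-01 00:01:05.000000"; truncating to three digits gives a
# millisecond text representation with guaranteed trailing zeros.
dtstr = dt.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
print(dtstr)  # -> 2014-01-01 00:01:05.000
```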

[Figure: SQL used to create the table in PostGIS]

This gave me the following attribute table for the track points when opening the PostGIS table as a layer in QGIS.

[Figure: Attribute table of the track points]

With the millisecond issue solved, another delicate issue surfaced. TimeManager uses the Time frame size field to define the time period for which to display data in each frame. This time period is a fixed interval, but my timestamps are not equally spaced, since they are derived from fractions of 1/30 of a second, resulting in timestamps at .033, .067 and .100 due to rounding. It was therefore not possible to export a sequence with only one track point per frame unless a shorter time frame size was selected, which would then result in frames with no data for the track points layer. A way around this problem was to build a feature that tests whether there are empty frames in the time-managed layers and, if so, skips those frames when exporting the sequence of PNG images to render a video from.
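
The unequal spacing is easy to see by rounding the frame offsets of a 30 fps video to milliseconds, as the tracking data does; this small snippet just reproduces that arithmetic.

```python
# Sub-second offsets of the first few video frames at 30 fps, rounded to
# milliseconds as they appear in the tracking data.
for i in range(4):
    print(round(i / 30.0, 3))
# 0.0, 0.033, 0.067, 0.1 -> the gaps alternate between 33 and 34 ms,
# so no fixed Time frame size lines up exactly with the point spacing.
```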

For those unfamiliar with the video export feature in TimeManager, you need to know that it exports a PNG image for each frame it renders to a folder of your choice. Afterwards, another program should be used to stitch the PNG frames into a movie. On Windows the Movie Maker program can do so, but there are also several command-line tools, such as FFmpeg, that can do the job. There have been ideas to build the video stitching procedure into TimeManager itself, but since it is very easy for users to stitch the PNG files into a video in another program, there are other features with higher priority waiting to be built.
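
As an example of the FFmpeg route, a minimal sketch could look like the following; the frame naming pattern, output name and encoding settings are assumptions that need to be adjusted to the files TimeManager actually writes to your export folder.

```python
import subprocess

# Stitch a numbered PNG sequence into a video with FFmpeg (must be installed
# and on the PATH). The input pattern "frame%05d.png" is a hypothetical
# naming scheme for the exported frames.
subprocess.call([
    "ffmpeg",
    "-framerate", "30",      # playback rate of the image sequence
    "-i", "frame%05d.png",   # assumed naming pattern of the exported frames
    "-c:v", "libx264",       # widely supported H.264 encoding
    "-pix_fmt", "yuv420p",   # pixel format most players can handle
    "animation.mp4",
])
```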

Coffee, code and new features

When Carolinux and I met over lunch, we discussed, among other things, the idea for the feature to skip empty frames when exporting. We decided to meet again the following Saturday over coffee and work some more on developing and testing new features and ideas. At our meeting I also told Carolinux about the upcoming QGIS Conference in Copenhagen and that I knew that Anita Graser would attend the conference. She told me that she had actually never met Anita face-to-face, even though they both develop TimeManager. The Copenhagen event would be their first opportunity to meet, so she really wanted to attend the conference as well and signed up for it more or less right away.

When Carolinux and I met again the following Saturday, she had implemented the ‘do not export empty frames in time managed layers’ feature and had also been playing with implementing linear interpolation of data points in frames with no data. There had been a request for the latter feature on GitHub. For my high-resolution tracking dataset there will not be any extra value added by interpolation between frames, as they only last 1/30 of a second, but there are many other cases with a longer span between frames where such a feature will be very useful. I have thus offered to help with beta testing of this and other features we might come up with, and I certainly learn a lot from it.

In relation to the feature of not exporting empty frames in any of the time-managed layers, we discovered that this may not be the best solution to the problem described above. In cases where only one of the time-managed data layers has no data in some of the frames, other layers may still hold data points, and thus the frames will not be skipped. This is the case when using the ‘end’ field, for example when I apply the concept of trailing tracks to my data points. For the trailing tracks there will always be data in all frames, since I specifically asked for the tracks to remain visible for five seconds. Thus the way the feature is built at the moment needs to be refined: instead of checking for empty frames across all time-managed layers, it should be an option for each time-managed layer whether to skip export of frames when data is missing for that particular layer, similar to the way the interpolation option is made selectable. At the moment we are thinking about how to implement that. Another idea is to let one particular layer decide the time period for the export, or to let the user select a specific time period to export. In that way a long animation of one layer could have several shorter layers appearing during the entire animation. I have made a feature request for that on GitHub – feel free to comment and contribute.

The visualizations

Since the pedestrians I track do not move that far in a 100 millisecond period, I have chosen for now to make my animations with a Time frame size of 100 milliseconds, which gives me three track points displayed more or less on top of each other for each track per frame. This is certainly good enough so far. Below I have embedded a video of the manually annotated ground truth tracks in TimeManager for the ‘facer’ sequence introduced above.

[Embedded video: ground truth tracks of the ‘facer’ sequence in TimeManager]

The size, font, placement and format of the timestamp for each video frame can be set in the Time display options menu under Settings, as shown here.

[Figure: Time display options]

The actual thermal video we used to track people to produce the sequence in TimeManager is shown below.

[Embedded video: the thermal video used for tracking]

I originally used the situation with the ‘facer’ in the poster on ‘Measuring Human Movement Patterns and Behaviors in Public Spaces’, which I presented at the Measuring Behavior 2014 conference.

Furthermore, I have made another video of a situation I used as an example in a conference proceedings paper on ‘Taking the Temperature of Pedestrian Movement in Public Spaces’ for the Conference on Pedestrian and Evacuation Dynamics 2014 (PED 2014). It can be seen below.

[Embedded video: the PED 2014 example situation]

The tracks from the Computer Vision algorithm will not be made available before we have published a paper addressing their accuracy and completeness in relation to the ground truth, but when that is done the plan is to make the entire dataset and corresponding thermal video files publicly available.

For a broader perspective on the work I have done, there is also a conference abstract on ‘Movement Pattern Analysis in Smart Cities’ which I made for the Analysis of Movement Data workshop at the GIScience 2014 conference. A video of the slides from the presentation is available here.

For more about my PhD project, please see my profile on the IGN department's homepage. More videos will be added along the way to my YouTube channel related to the project. You are welcome to contact me if you have any comments, questions or suggestions for interesting research that could be done with the type of data I have collected.

Lastly, I will once again thank Underdark and Carolinux for their work on developing TimeManager for QGIS, and all the QGIS developers in general for the awesome work they do. I especially want to thank Carolinux for good and fun company the times we met in Zürich, and I am looking forward to working more with you. I am also very much looking forward to meeting all of you QGIS users, educators and developers at the conference in Nødebo, Copenhagen, in May. Until then, try to pull the newest version of TimeManager from GitHub, and please test it and comment on the new features. We are eager to get feedback.