
PhD student in robotics
Photo by Shubham Sharan on Unsplash

In this blog post, I would like to discuss academic teaching with you: why teaching is frowned upon by many researchers, and how teaching great courses can actually make them better at doing research.

Part 1: The dilemma of teaching

Researchers at institutions with teaching assignments often face a dilemma: how can they justify spending precious time on teaching when it brings no immediate benefit to their career (writing academic papers and securing funding)? Don’t get me wrong here; I assume many professors and other teaching staff know perfectly well that teaching is…


Today, we will discuss our most recent paper HeatNet: Bridging the Day-Night Domain Gap in Semantic Segmentation with Thermal Images (by Johan Vertens, Jannik Zürn, and Wolfram Burgard). This post serves as a quick-and-dirty introduction to the topic and to the work itself. For more details, please refer to the original publication, which is available here; the project website will be available soon at http://thermal.cs.uni-freiburg.de/.

Introduction

Robust and accurate semantic segmentation of urban scenes is one of the enabling technologies for autonomous driving in complex and cluttered driving scenarios. …


In this post, I present a summary of my findings from my most recent paper Self-Supervised Visual Terrain Classification from Unsupervised Acoustic Feature Learning. A preprint is available here and the project website is available at http://deepterrain.cs.uni-freiburg.de/.

This post is more technical than my usual posts. If you have any questions about the research, please post them in the comments. Thanks!

Introduction

Recent advances in robotics and machine learning have enabled the deployment of autonomous robots in challenging outdoor environments for complex tasks such as autonomous driving, last-mile delivery, and patrolling. Robots operating in these environments encounter a wide range…


The machine learning library TensorFlow has had a long history of releases, starting with the initial open-source release by the Google Brain team in November 2015. Initially developed internally under the name DistBelief, TensorFlow quickly rose to become the most widely used machine learning library today. And not without reason.

Number of repository stars over time for the most widely used machine learning libraries

TensorFlow 1.XX — where are we today?

Before we discuss the most important changes in TensorFlow 2.0, let us quickly recap some of the essential aspects of TensorFlow 1.XX:

Language support

Python was the first client language supported by TensorFlow and currently supports the most features within the TensorFlow ecosystem. Nowadays, TensorFlow is available in a multitude…


In today’s post, I would like to give you a quick-and-dirty introduction to a neural network architecture type called Autoencoders. The post is aimed at Machine Learning beginners who would like to get some basic insights into Autoencoders and why they are so useful.

Context

An Autoencoder is structured to take an input and transform it into a different representation, an embedding of the input. From this embedding, it aims to reconstruct the original input as precisely as possible; it basically tries to copy its input. The layers of the Autoencoder that create this embedding are called the encoder, and the layers that…
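The encode-embed-decode structure described above can be sketched in a few lines. The following is a toy linear autoencoder of my own; the dimensions and names (`W_enc`, `W_dec`, an 8-dimensional input with a 2-dimensional embedding) are illustrative assumptions, not taken from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder (illustrative; dimensions and weight names are
# made up, not from the post): 8-dim inputs, 2-dim embedding.
input_dim, embedding_dim = 8, 2

W_enc = rng.normal(scale=0.1, size=(input_dim, embedding_dim))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(embedding_dim, input_dim))  # decoder weights

def encode(x):
    # The encoder compresses the input into the embedding.
    return x @ W_enc

def decode(z):
    # The decoder reconstructs the input from the embedding.
    return z @ W_dec

x = rng.normal(size=(1, input_dim))   # one example input
z = encode(x)                         # compressed representation, shape (1, 2)
x_hat = decode(z)                     # reconstruction, shape (1, 8)

# Training would minimize the reconstruction error between x and x_hat.
reconstruction_error = float(np.mean((x - x_hat) ** 2))
```

Because the embedding is smaller than the input, the network cannot copy the input trivially; it is forced to learn a compact representation.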


Reinforcement Learning (RL) has seen tremendous success in many areas of Machine Learning. While the results of RL can look almost magical, it is surprisingly easy to grasp the basic idea behind it. In this post, we will discuss the basic principles of RL and implement a simple environment in which an agent tries to stay alive as long as possible by means of RL.
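A setup like this can be sketched with tabular Q-learning. The environment below is my own illustrative stand-in (an energy level the agent must keep above zero), not the exact environment from the post:

```python
import random

random.seed(0)

# A minimal survival environment (my own illustrative setup, not the one
# from the post): the state is the agent's energy level, and the agent
# dies when its energy reaches 0.
MAX_ENERGY = 4
ACTIONS = [0, 1]  # 0 = rest (+1 energy, capped), 1 = wander (-1 energy)

def step(energy, action):
    energy = min(energy + 1, MAX_ENERGY) if action == 0 else energy - 1
    reward = 1                      # +1 for every step the agent survives
    done = energy <= 0
    return energy, reward, done

# Tabular Q-learning: learn Q(state, action), the expected future reward.
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(MAX_ENERGY + 1) for a in ACTIONS}

for episode in range(200):
    energy = random.randint(1, MAX_ENERGY)   # random starting energy
    done, steps = False, 0
    while not done and steps < 50:           # cap episode length
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(energy, a)])
        next_energy, reward, done = step(energy, action)
        target = reward if done else reward + gamma * max(
            Q[(next_energy, a)] for a in ACTIONS)
        Q[(energy, action)] += alpha * (target - Q[(energy, action)])
        energy, steps = next_energy, steps + 1

# After training, the agent should prefer resting when its energy is low.
best_action_at_low_energy = max(ACTIONS, key=lambda a: Q[(1, a)])
```

The agent is never told how to survive; it discovers from reward alone that resting at low energy keeps the episode going.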

We will use the previously discussed Concept-Math-Code (C-M-C) approach to drive our process of understanding RL.

Concept

Introduction

Broadly speaking, Reinforcement Learning allows an autonomous agent to learn to make intelligent choices…


TPU unit with 4 cores

Last week, we talked about training an image classifier on the CIFAR-10 dataset using Google Colab on a Tesla K80 GPU in the cloud. This time, we will instead carry out the classifier training on a Tensor Processing Unit (TPU).

As Google puts it: “Because training and running deep learning models can be computationally demanding, we built the Tensor Processing Unit (TPU), an ASIC designed from the ground up for machine learning that powers several of our major products, including Translate, Photos, Search, Assistant, and Gmail.”

TPUs have been recently added to the Google Colab portfolio, making it even more attractive for quick-and-dirty machine…


Google Colab is a free cloud-based Jupyter notebook environment running completely on Google cloud infrastructure. You can create and execute any Jupyter notebook you want in the browser. Google generously assigns each user a free Tesla K80 with 12GB memory for 12 hours at a time for their small-scale private machine learning needs.

In this article, we will write a Jupyter notebook in order to create a simple object classifier for classifying images from the CIFAR-10 dataset. …


Lately, I have come to the realization that the process of thoroughly understanding a scientific theory or fact may be divided into three tiers:

  • Concept (C)
  • Math (M)
  • Code (C)

One cannot master a tier until the previous tier has been mastered thoroughly. Let me walk you through these three tiers:

Concept

A concept or idea of the approach is needed in order to wrap your head around the thing you are trying to understand. This concept may be a very vague yet profound statement, boiling down some essential scientific findings to a single sentence or a couple of sentences at most. Some…


Meet ‘Robby’. Robby is a robot. Well, technically he is a simplistic virtual model of a robot, but that should be fine for our purpose. Robby is lost in his virtual world, which consists of a two-dimensional plane and a couple of landmarks. He does have a map of his environment (so no mapping is needed), but he does not know where exactly he is located in this environment.

Robby (big red circle) and two landmarks (smaller red circles)

The purpose of this post is to walk you through the steps of robot localization using landmark detection and Extended Kalman Filtering.

Part 1 — Linear Kalman Filters

Kalman Filtering can be understood as a way of making sense…
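The predict/update cycle at the heart of linear Kalman filtering can be sketched as follows. This is a generic 1D constant-velocity example of my own, with made-up noise parameters, not the post's exact implementation:

```python
import numpy as np

# Generic linear Kalman filter for a 1D constant-velocity model
# (an illustrative sketch; matrices and noise values are assumptions).
# State x = [position, velocity]; we only measure the position.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition model
H = np.array([[1.0, 0.0]])              # measurement model
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = 10.0 * np.eye(2)                    # large initial uncertainty

def kalman_step(x, P, z):
    # Predict: propagate the state and its uncertainty forward in time.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    y = z - H @ x_pred                          # innovation
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed in noisy position measurements of an object moving at 1 unit/step.
rng = np.random.default_rng(0)
for t in range(1, 21):
    z = np.array([[t + rng.normal(scale=0.5)]])
    x, P = kalman_step(x, P, z)

estimated_position = float(x[0, 0])     # should end up close to 20
estimated_velocity = float(x[1, 0])     # should end up close to 1
```

Note that the filter recovers the velocity even though only positions are measured; the Extended Kalman Filter used for Robby generalizes this cycle to nonlinear motion and landmark-measurement models.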

Jannik Zürn
