Smart city

Note: The primary platform for communication in this course will be StudIP. All materials will be uploaded there.


Details

Workload/ECTS Credits: 180h, 5-6 ECTS
Module: M.Inf.1222 (Specialisation Computer Networks, 5 ECTS) or M.Inf.1129 (Social Networks and Big Data Methods, 5 ECTS) or M.Inf.1800 (Practical Course Advanced Networking, 6 ECTS)
Lecturer: Prof. Xiaoming Fu
Teaching assistants: MSc. Fabian Wölk (fabian.woelk@cs.uni-goettingen.de), MSc. Weijun Wang (weijun.wang@informatik.uni-goettingen.de), Dr. Tingting Yuan (tingt.yuan@hotmail.com)
Time: Wed. 14:00-16:00
Place: Room 0.103, Institute for Computer Science (sessions will mostly be held online)
UniVZ: link [1]


Announcement

Due to the ongoing Covid-19 situation, new information will be posted here as it becomes available; please check this webpage periodically for the latest updates.

The first lecture will take place online on November 4 via StudIP's BBB (BigBlueButton) online meeting service. Please check the course schedule at the end of this webpage.

General Description

The Computer Networks Group at the Institute of Computer Science, Universität Göttingen, is collaborating with Göttinger Verkehrsbetriebe GmbH (represented by Dipl. Anne-Katrin Engelmann) to set up this course.

This course covers two aspects of Smart Cities in the context of public transport: event monitoring and passenger counting.

The goals of this course are to:

  • Help students deepen their understanding of computer networks and data science.
  • Help students apply computer science knowledge to build a practical AI system.
  • Guide students in using this knowledge to improve the performance of the system.

In this course, each student (maximum of 30 participants) needs to:

  • Read state-of-the-art papers.
  • Build systems comprising computer vision algorithms, embedded programs, and socket-based network programs (a minimal socket example follows this list).
  • Learn how to analyze sensor data from city public transport.
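
As a toy illustration of the "socket network program" component mentioned above, the sketch below sends a small JSON-encoded result (here a made-up passenger count) from an on-board computer to a collection server. The host, port, and message fields are placeholders chosen for this example, not values provided by the course.

  # Minimal TCP sender sketch: an on-board computer pushes one JSON-encoded
  # result to a collection server. HOST, PORT, and the message layout are
  # illustrative placeholders, not course-provided values.
  import json
  import socket

  HOST = "192.168.0.10"   # hypothetical address of the collection server
  PORT = 5005             # hypothetical port

  def send_result(result: dict) -> None:
      payload = json.dumps(result).encode("utf-8")
      with socket.create_connection((HOST, PORT), timeout=5) as sock:
          sock.sendall(payload)

  if __name__ == "__main__":
      send_result({"bus_id": "line-21",
                   "timestamp": "2020-11-04T14:00:00",
                   "passenger_count": 7})

A matching server would simply bind to the same port, accept connections, and decode the JSON payloads.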

For the project, we will design, implement, and deploy the system on several buses at specific positions, with sub-systems consisting of:

  • Depth camera (e.g. Intel RealSense D435)
  • On-board computers (e.g. Raspberry Pi Zero, NVIDIA Jetson AGX Xavier)
  • Power supply (e.g. EC Technology Powerbank)

All these sub-systems in each bus will be combined into one system, which shall ideally be deployed for an initial period of 2 months in order to obtain sufficient data for further analysis.
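
To get a feeling for why efficient storage (one of the System Group tasks in the schedule) matters over a two-month deployment, here is a rough back-of-envelope estimate for raw depth frames. The resolution and pixel size correspond to one common D435 depth mode; the frame rate and daily operating hours are assumptions, not project specifications.

  # Back-of-envelope storage estimate for raw 16-bit depth frames.
  # All parameters below are illustrative assumptions.
  width, height = 640, 480      # one common D435 depth resolution
  bytes_per_pixel = 2           # z16 depth format
  fps = 30                      # assumed frame rate
  hours_per_day = 12            # assumed bus operating hours

  bytes_per_second = width * height * bytes_per_pixel * fps
  gb_per_day = bytes_per_second * 3600 * hours_per_day / 1e9
  print(f"{bytes_per_second / 1e6:.1f} MB/s raw, ~{gb_per_day:.0f} GB per day uncompressed")
  # About 18.4 MB/s, i.e. roughly 800 GB per day, which is why compression or
  # on-device filtering is needed before months of recording.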

Tasks of students and implementation plan

The students will be divided into 2 groups consisting of six 2-person teams. Each group will be responsible for reimplementing (and possibly adapting) a different existing software architecture for all the bus lines used in our project. Within each group, two of the 2-person teams will work independently on the same specific sub-task (in case one team cannot complete it). The teams within one group will therefore have to cooperate. Note that we will provide a default version of each module to guarantee the basic operation of the whole system.

The main tasks are as follows:

1. Periodically collect video data from the depth cameras via a predefined interface or a preinstalled SD card.
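
A minimal sketch of how depth frames could be grabbed from the D435 using Intel's pyrealsense2 Python wrapper is shown below. The stream mode, number of frames, and output path are assumptions for illustration; the predefined interface mentioned above may look different.

  # Sketch: capture a short burst of depth frames from an Intel RealSense D435
  # and store them as 16-bit arrays. Stream settings and paths are illustrative.
  import numpy as np
  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  config = rs.config()
  config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
  pipeline.start(config)

  try:
      for i in range(300):                                  # ~10 seconds at 30 fps
          frames = pipeline.wait_for_frames()
          depth = frames.get_depth_frame()
          if not depth:
              continue
          depth_image = np.asanyarray(depth.get_data())     # uint16 depth values
          np.save(f"/data/depth_{i:06d}.npy", depth_image)  # placeholder output path
  finally:
      pipeline.stop()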

2. Label the corresponding objects/events in the videos to build the dataset.
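
Since the labels will later be used to train YOLO (see the schedule), one common convention is the YOLO/darknet text format: one line per object containing the class index and a bounding box normalized by the image size. The helper below is only a sketch of that conversion; the class index and box values are made up.

  # Sketch: convert a pixel-space bounding box (x_min, y_min, x_max, y_max)
  # into a YOLO-format label line "class x_center y_center width height",
  # with all coordinates normalized to [0, 1].
  def to_yolo_line(class_id, box, img_w, img_h):
      x_min, y_min, x_max, y_max = box
      x_c = (x_min + x_max) / 2.0 / img_w
      y_c = (y_min + y_max) / 2.0 / img_h
      w = (x_max - x_min) / img_w
      h = (y_max - y_min) / img_h
      return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

  # Example: a person spanning pixels (100, 50) to (220, 400) in a 640x480 frame.
  print(to_yolo_line(0, (100, 50, 220, 400), 640, 480))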

3. Reimplement an existing video analytics architecture (using open-source code from papers) on the collected depth-image videos. (We split the architecture into modules; each 2-person team takes care of one module, and the group then combines the modules.)
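
One possible reading of "split the architecture into modules" is a per-frame pipeline in which each 2-person team owns one stage and the group wires the stages together at the end. The stage names and interfaces below are purely illustrative and are not the default version provided in the course.

  # Sketch of a modular per-frame pipeline; each stage could be owned by a
  # separate team. Names and signatures are illustrative only.
  class Decoder:
      """Reads depth frames from storage or from the camera interface."""
      def frames(self):
          raise NotImplementedError

  class Detector:
      """Runs the object detector (e.g., YOLO) on a single frame."""
      def detect(self, frame):
          raise NotImplementedError

  class Counter:
      """Turns per-frame detections into passenger/event counts."""
      def update(self, detections):
          raise NotImplementedError

  def run_pipeline(decoder: Decoder, detector: Detector, counter: Counter):
      for frame in decoder.frames():
          detections = detector.detect(frame)
          counter.update(detections)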

4. Based on the implemented architecture, each team should develop an idea to improve it. Then implement a demo, deploy it in the bus system, show the collected results, and present them in the final Smart City report.

a) The idea can be a new application (a toy sketch of such an application follows below).

b) The idea can also be an algorithm or module that improves the performance of the architecture.
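
As a purely hypothetical illustration of option a), a passenger-counting demo could count detected people whose box centers cross a virtual door line between consecutive frames. The door-line position, the fake boxes, and the naive frame-to-frame matching below are deliberate simplifications, not the expected solution.

  # Toy illustration of a possible "new application": count people crossing a
  # virtual door line, given per-frame person detections. Detections are matched
  # to the previous frame by nearest center; no real tracker is used.
  DOOR_Y = 240  # hypothetical y-coordinate of the door line in pixels

  def centers(detections):
      # detections: list of (x_min, y_min, x_max, y_max) person boxes
      return [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in detections]

  def count_crossings(prev_dets, curr_dets, count=0):
      prev_c, curr_c = centers(prev_dets), centers(curr_dets)
      for cx, cy in curr_c:
          if not prev_c:
              break
          # naive nearest-neighbour match against the previous frame
          px, py = min(prev_c, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
          if py < DOOR_Y <= cy:   # moved downward across the line -> boarding
              count += 1
      return count

  # Example with fake boxes: one person moves from above the line to below it.
  prev = [(300, 180, 360, 230)]   # center y = 205, above DOOR_Y
  curr = [(305, 230, 365, 280)]   # center y = 255, below DOOR_Y
  print(count_crossings(prev, curr))  # -> 1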

Learning about such a fast-moving field is an exciting opportunity, but covering it in a traditional course setting comes with some caveats you should be aware of.

  • No canonical curriculum: Many topics in mathematics and computer science, such as linear algebra, real analysis, discrete mathematics, data structures and algorithms, etc., come with well-established curricula; courses on such subjects can be found at most universities, and they tend to cover similar topics in a similar order. This is not the case for emerging research areas like deep learning: the set of topics to be covered, as well as the order and way of thinking about each topic, has not yet settled.
  • Few learning materials: There are very few high-quality textbooks or other learning materials that synthesize or explain much of the content we will cover. In many cases, the research paper that introduced an idea is the best or only resource for learning about it.
  • Theory lags experiments: At present, video analytics is primarily an empirically driven research field. We may use mathematical notation to describe or communicate our algorithms and ideas, and many techniques are motivated by some mathematical or computational intuition, but in most cases, we rely on experiments rather than formal proofs to determine the scenarios where one technique might outperform another. This can sometimes be unsettling for students, as the question “why does that work?” may not always have a precise, theoretically-grounded answer.
  • Things will change: If you were to study deep learning ten years from now, it is very likely that it will look quite different from today. There may be new fundamental discoveries or new ways of thinking about things we already know; there may be some ideas we think are important today, that will turn out in retrospect not to have been. There may be similarly impactful results lurking right around the corner.

Prerequisites

  • It is highly recommended that you have completed a course on Data Science (e.g., "Data Science and Big Data Analytics" taught by Dr. Steffen Herbold, or the Stanford University course "Machine Learning") before taking this course. You also need to be familiar with computer networking and mobile communications.
  • Knowledge of any of the following languages: Python (course language), R, Java, MATLAB, or any other language with proper machine learning libraries.

Grading

  • Participation: 50%
    • Task 1: 10%
    • Task 2: 20%
    • Task 3: 20%
  • Presentation: 20%
  • Final report: 30%

Schedule

Time | Topic | Output
04.11.2020 | Lecture I: Course Setup & Smart City (online) | No
11.11.2020 | Lecture II: Object Detection [2] & System Architecture: Video Analytics (online) | Papers released (choose 2 of 10)
18.11.2020 | Warm-up: (Vision Group) run YOLO for object detection; (System Group) initialize and run the first demo on the Jetson Nano | No
25.11.2020 | Task 1: (Vision Group) train YOLO with a new dataset; (System Group) efficiently store images from the Intel RealSense camera on the Jetson Nano | Task 1 report (deadline: 30.11.2020)
02.12.2020 | Discussion & Task 2: (Vision Group) YOLO for depth images; (System Group) dynamic adjustment of the object detection pipeline configuration | Task 2 report (deadline: 21.12.2020)
09.12.2020 | Task 2: YOLO for depth images | -
16.12.2020 | Task 2: YOLO for depth images | -
23.12.2020 | Discussion on Task 2 | -
30.12.2020 | Holiday | -
06.01.2021 | Holiday | -
13.01.2021 | Task 3: (Vision Group) YOLO for different topics; (System Group) object detection pipeline configuration for different topics | Task 3 report (deadline: 08.02.2021)
20.01.2021 | Task 3: (Vision Group) YOLO for different topics; (System Group) object detection pipeline configuration for different topics | -
27.01.2021 | Task 3: (Vision Group) YOLO for different topics; (System Group) object detection pipeline configuration for different topics | -
03.02.2021 | Task 3: (Vision Group) YOLO for different topics; (System Group) object detection pipeline configuration for different topics | -
10.02.2021 | Discussion & Brainstorming | -
15.03.2021 | Final presentations | -
31.03.2021 | Final report | -
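
For the Vision Group warm-up on 18.11 ("run YOLO for object detection"), one of several quick ways to try a pretrained YOLO model is the ultralytics YOLOv5 torch.hub interface sketched below. This is only one possible starting point, not the reference implementation provided in the course, and the image path is a placeholder.

  # Sketch: run a pretrained YOLOv5 model on a sample image via torch.hub.
  import torch

  model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
  results = model("bus_frame.jpg")   # placeholder path to a sample image
  results.print()                    # summary of detected classes and confidences
  detections = results.xyxy[0]       # tensor: [x_min, y_min, x_max, y_max, conf, class]
  print(detections)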