Smart city

Note: The primary platform for communication in this course will be StudIP. All materials will be uploaded there.


Details

Workload/ECTS Credits: 180h, 5-6 ECTS
Module: M.Inf.1222 (Specialisation Computer Networks, 5 ECTS) or M.Inf.1129 (Social Networks and Big Data Methods, 5 ECTS) or M.Inf.1800 (Practical Course Advanced Networking, 6 ECTS)
Lecturer: Prof. Xiaoming Fu
Teaching assistant: MSc. Fabian Wölk (fabian.woelk@cs.uni-goettingen.de), MSc. Weijun Wang (weijun.wang@informatik.uni-goettingen.de), Dr. Tingting Yuan (tingt.yuan@hotmail.com)
Time: Mon./Wed./Thur. 14:00-16:00 (students may be divided into 3 groups due to Corona)
Place: Room 0.103, Institute for Computer Science
UniVZ: link [1]


Announcement

Due to the evolving Covid-19 situation, new information will be posted here as it becomes available. Please check this page periodically for the latest updates.


General Description

The Computer Networks Group at the Institute of Computer Science, Universität Göttingen, is collaborating with Göttinger Verkehrsbetriebe GmbH (represented by Dipl. Anne-Katrin Engelmann) to set up this exciting course.

This course covers two aspects of Smart Cities in the context of public transport: event monitoring and passenger counting.

The goals of this course are to:

  • Help students deepen their understanding of computer networks and data science.
  • Help students apply computer science knowledge to build a practical AI system.
  • Guide students in using that knowledge to improve the performance of the system.

In this course, each student (maximum of 30 participants) needs to:

  • Read state-of-the-art papers.
  • Build systems in code, including computer vision algorithms, embedded programs, and SOCKET network programs (a minimal socket sketch follows this list).
  • Learn how to analyze city public transport sensor data.
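As a rough illustration of the SOCKET network programs mentioned in the list above, the sketch below transfers one recorded video file from a client (e.g. an on-board computer) to a server over TCP with a simple 4-byte length prefix. The host, port, and framing are placeholder assumptions; the actual interface will be defined in the exercises.

# Minimal length-prefixed TCP file transfer (sketch; host/port are placeholders).
import socket
import struct

HOST, PORT = "127.0.0.1", 50007  # placeholder address of the receiving machine

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed early")
        buf += chunk
    return buf

def send_file(path: str) -> None:
    """Client side: send one file (e.g. a recorded video chunk)."""
    with open(path, "rb") as f:
        payload = f.read()
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

def receive_one(out_path: str) -> None:
    """Server side: accept one connection and store the received payload."""
    with socket.create_server((HOST, PORT)) as server:
        conn, _addr = server.accept()
        with conn:
            size = struct.unpack("!I", _recv_exact(conn, 4))[0]
            data = _recv_exact(conn, size)
    with open(out_path, "wb") as f:
        f.write(data)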

For the project, we will design, implement, and deploy the system on several buses at specific positions, with sub-systems consisting of:

  • Depth camera (e.g. Intel RealSense D435)
  • On-board computers (e.g. Raspberry Pi Zero, NVIDIA Jetson AGX Xavier)
  • Power supply (e.g. EC Technology Powerbank)

All sub-systems in each bus will be combined into one system, which will ideally be deployed for an initial period of 2 months, so that sufficient data is obtained for further analysis.
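For the depth camera, a minimal capture loop might look like the sketch below. It assumes Intel's pyrealsense2 Python wrapper for the RealSense D435; the resolution and frame rate are example values, not the project's final configuration.

# Minimal RealSense capture loop (sketch; assumes the pyrealsense2 package).
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth stream
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # color stream

pipeline.start(config)
try:
    for _ in range(300):  # roughly 10 seconds at 30 fps
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        depth_image = np.asanyarray(depth.get_data())  # 16-bit depth values
        color_image = np.asanyarray(color.get_data())  # 8-bit BGR image
        # ... hand the frames to storage or to the analytics modules ...
finally:
    pipeline.stop()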

Tasks of students and implementation plan

The students will be divided into 2 groups, each consisting of six 2-person teams. Each group will be responsible for reimplementing (and possibly adapting) a different existing software architecture for all the bus lines used in our project. Within each group, two 2-person teams will work independently on the same specific sub-task, in case one team cannot complete it. The teams within a group will therefore have to cooperate. Note that we will provide a default version of each module to guarantee the basic operation of the whole system.

The main tasks are as follows:

1. Periodically collect the video data from the depth cameras via a predefined interface or a preinstalled SD card.

2. Label the corresponding objects/events in the videos to build the dataset.
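No label format is prescribed on this page; the snippet below is one hypothetical per-frame annotation layout (file names, labels, and fields are made up) that a group could adapt for the labelling task.

# Hypothetical per-frame annotation layout (all values are illustrative).
import json

annotation = {
    "video": "bus_line_A_2020-11-20.bag",  # placeholder recording name
    "frame_index": 1042,
    "objects": [
        # bounding boxes in pixel coordinates: [x_min, y_min, x_max, y_max]
        {"label": "person", "bbox": [312, 88, 401, 352]},
        {"label": "person", "bbox": [95, 61, 180, 340]},
    ],
    "events": ["boarding"],  # e.g. boarding / alighting
}

with open("frame_0001042.json", "w") as f:
    json.dump(annotation, f, indent=2)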

3. Reimplement an existing video analytics architecture (using open-source code from papers) on the collected depth videos. (We split the architecture into modules; each 2-person team takes care of one module, and the group then combines the modules.)
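To make the module split concrete, the sketch below shows one possible shared interface that lets each 2-person team own a module while the group chains them into a pipeline. The module names and the context dictionary are illustrative assumptions, not the actual architecture split.

# Sketch of a module interface for the video analytics pipeline.
from abc import ABC, abstractmethod

class Module(ABC):
    """One stage of the video analytics pipeline."""

    @abstractmethod
    def process(self, frame, context: dict) -> dict:
        """Consume a frame, update and return the shared context."""

class ObjectDetector(Module):
    def process(self, frame, context):
        # Default version provided by the course; teams replace this with a real detector.
        context["detections"] = []  # e.g. list of (label, bbox, score)
        return context

class PassengerCounter(Module):
    def process(self, frame, context):
        context["passenger_count"] = len(context.get("detections", []))
        return context

class Pipeline:
    def __init__(self, modules):
        self.modules = modules

    def run(self, frame) -> dict:
        context = {}
        for module in self.modules:
            context = module.process(frame, context)
        return context

# Usage: result = Pipeline([ObjectDetector(), PassengerCounter()]).run(frame)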

4. Based on the implemented architecture, each team should develop an idea to improve it, then implement a demo, deploy it in the bus system, and present the collected results in the final Smart City report.

a) The idea can be a new application.

b) The idea can also be an algorithm or module that improves the performance of the architecture.
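As a toy example of the kind of performance-improving idea meant in b), the sketch below adapts the capture configuration based on what the vision module currently sees; the thresholds and settings are made-up values, not a recommended policy.

# Toy configuration-adaptation policy (all thresholds and settings are illustrative).
def choose_config(passenger_count: int, door_open: bool) -> dict:
    """Pick the capture configuration for the next interval."""
    if door_open or passenger_count > 0:
        # People present: keep full quality for counting accuracy.
        return {"resolution": (640, 480), "framerate": 30}
    # Empty bus, doors closed: save storage, bandwidth, and CPU.
    return {"resolution": (424, 240), "framerate": 6}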

Learning about such a fast-moving field is an exciting opportunity, but covering it in a traditional course setting comes with some caveats you should be aware of.

  • No canonical curriculum: Many topics in mathematics and computer science, such as linear algebra, real analysis, discrete mathematics, and data structures and algorithms, come with well-established curricula; courses on such subjects can be found at most universities, and they tend to cover similar topics in a similar order. This is not the case for emerging research areas like deep learning: the set of topics to be covered, as well as the order and way of thinking about each topic, has not yet been perfected.
  • Few learning materials: There are very few high-quality textbooks or other learning materials that synthesize or explain much of the content we will cover. In many cases, the research paper that introduced an idea is the best or only resource for learning about it.
  • Theory lags experiments: At present, video analytics is primarily an empirically driven research field. We may use mathematical notation to describe or communicate our algorithms and ideas, and many techniques are motivated by some mathematical or computational intuition, but in most cases, we rely on experiments rather than formal proofs to determine the scenarios where one technique might outperform another. This can sometimes be unsettling for students, as the question “why does that work?” may not always have a precise, theoretically-grounded answer.
  • Things will change: If you were to study deep learning ten years from now, it would very likely look quite different from today. There may be new fundamental discoveries or new ways of thinking about things we already know; there may be some ideas we think are important today that will turn out, in retrospect, not to have been. There may be similarly impactful results lurking right around the corner.

Prerequisites

  • It is highly recommended that you have completed a course on Data Science (e.g., "Data Science and Big Data Analytics" taught by Dr. Steffen Herbold, or the "Machine Learning" course by Stanford University) before entering this course. You also need to be familiar with computer networking and mobile communications.
  • Knowledge of any of the following languages: Python (course language), R, Java, MATLAB, or any other language with proper machine learning libraries.

Schedule

Time | Topic | Slides | Exercise

12.10.2020 - 01.11.2020 | Register for the course | - | -
02.11.2020 - 08.11.2020 | Lecture I: Course Setup & Smart City (online) | - | Exercise 1: Read papers
09.11.2020 - 15.11.2020 | Lecture II: Object Detection & System Architecture - Video Analytics (online) | - | Exercise 2: Coding work
16.11.2020 - 22.11.2020 | Install the OS, run the object detection demo, change parameters, and observe the results based on last week's exercise | - | -
23.11.2020 - 29.11.2020 | Implement the file-store program, implement the dynamic parameter-changing program, and test how storage size, processing time, and CPU utilization vary with different configurations (frame rate, resolution, pipelines, content) | - | Exercise 3: Plot the test figures; submit figures, test data, and code (deadline 06.12.2020). A measurement sketch follows this schedule.
30.11.2020 - 06.12.2020 | Implement the SOCKET program (client-server architecture), implement video delivery and storage, test throughput, and compare processing time on the device, on a laptop, and on a GPU (optional) | - | Exercise 4: Plot the test and comparison figures; submit figures, test data, and code (deadline 13.12.2020)
07.12.2020 - 13.12.2020 | Combine the "application group" with the "vision group", implement configuration adaptation and video delivery triggered by the vision program, and test how storage size, processing time, and CPU utilization vary with the adaptive configuration | - | Exercise 5: Plot the test figures; submit figures, test data, and code (deadline 20.12.2020)
14.12.2020 - 20.12.2020 | - | - | -
21.12.2020 - 27.12.2020 | - | - | -
28.12.2020 - 03.01.2021 | (canceled) | - | -
04.01.2021 - 10.01.2021 | (canceled) | - | -
11.01.2021 - 17.01.2021 | - | - | -
18.01.2021 - 24.01.2021 | - | - | -
25.01.2021 - 31.01.2021 | - | - | -
01.02.2021 - 07.02.2021 | - | - | -
08.02.2021 - 14.02.2021 | - | - | -
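For the measurement exercises above (storage size, processing time, and CPU utilization under different configurations), a minimal logging harness might look like the following sketch. The third-party psutil package for CPU readings is an assumption, and process_frame stands in for whichever module chain is being tested.

# Sketch of a per-frame measurement harness for the exercises (assumes psutil).
import csv
import time

import psutil

def benchmark(frames, config: dict, process_frame, out_csv: str) -> None:
    """Run process_frame over all frames and log per-frame measurements to a CSV file."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["framerate", "resolution", "frame", "seconds", "cpu_percent"])
        psutil.cpu_percent(interval=None)  # prime the CPU counter
        for i, frame in enumerate(frames):
            start = time.perf_counter()
            process_frame(frame, config)
            elapsed = time.perf_counter() - start
            cpu = psutil.cpu_percent(interval=None)  # CPU usage since the previous call
            writer.writerow([config["framerate"], config["resolution"], i, elapsed, cpu])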

The milestones may be as follows:

1. Understand the design of the overall system and its modules (04.11.2020-18.11.2020, 2 weeks).

2. Reimplementation and integration in the laboratory (19.11.2020-09.12.2020, 4 weeks).

3. Deployment and data collection (10.12.2020-11.02.2021, 9 weeks including Christmas).

4. Result analysis and implementation of new ideas based on the system (06.01.2021-11.03.2021, 13 weeks). (Note that there is a 5-week overlap with deployment and data collection, in case students need to modify their programs.)

5. Final presentations (the week of 15.03.2021).

6. Final reports (31.03.2021).

After this course, students will have full-stack knowledge of video analytics systems, including network programming, basic knowledge of video streaming, general knowledge of object detection, and state-of-the-art video analytics architectures.