Smart city

Note: The primary platform for communication in this course will be StudIP. All materials will be uploaded there.


Details

Workload/ECTS Credits: 180h, 5-6 ECTS
Module: M.Inf.1222 (Specialisation Computer Networks, 5 ECTS) or M.Inf.1129 (Social Networks and Big Data Methods, 5 ECTS) or M.Inf.1800 (Practical Course Advanced Networking, 6 ECTS)
Lecturer: Prof. Xiaoming Fu
Teaching assistants: MSc. Fabian Wölk (fabian.woelk@cs.uni-goettingen.de), MSc. Weijun Wang (weijun.wang@informatik.uni-goettingen.de), Dr. Tingting Yuan (tingt.yuan@hotmail.com)
Time: Wed. 14:00-16:00
Place: Room 0.103, Institute for Computer Science (mostly online)
UniVZ: link [1]


Announcement

Due to the ongoing Covid-19 situation, new information will be posted here as it becomes available; please check this page periodically for the latest updates.


System Group

Warm-up task: The warm-up tutorial [2] is available on both StudIP and this wiki page.

Tomorrow (Wednesday), System Group students need to come to the Institute of Informatics (3.0G) to pick up their devices. If needed, I will show you the complete hardware and basic software system for your tasks.

Please ring the doorbell.

Task 1: The Task 1 description [3] has been published. The deadline is 08.12.2020, and the next online class will be on 09.12.2020 (note that this differs from the Vision Group).


Vision Group

The tutorial on Task 2 is available on StudIP.

The next online meeting is on 23.12.2020, covering the discussion of Task 2 and the release of Task 3.

The deadline for the Task 2 report is 21.12.2020.

If you have any problems, please send an email to MSc. Fabian Wölk and Dr. Tingting Yuan.

General Description

The Computer Networks Group at the Institute of Computer Science, Universität Göttingen, is collaborating with Göttinger Verkehrsbetriebe GmbH (represented by Dipl. Anne-Katrin Engelmann) to set up this exciting course.

This course covers two aspects of Smart Cities in the context of public transport: event monitoring and passenger counting.

The goals of this course are to:

  • Help students further their understanding of computer networks and data science.
  • Help students apply computer science knowledge to build a practical AI system.
  • Guide students in using this knowledge to improve the performance of the system.

In this course, each student (maximum of 30 participants) needs to:

  • Read state-of-the-art papers.
  • Build systems through programming, including computer vision algorithms, embedded programs, and socket network programs (see the sketch after this list).
  • Learn how to analyze sensor data from city public transport.
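
As a rough illustration of the socket programming involved, here is a minimal sketch of how an on-board computer might push a passenger-count record to a collection server over TCP, written in Python (the course language). The server address and the JSON record layout are illustrative assumptions, not part of the course material.

    # Minimal TCP client sketch; SERVER address and record format are hypothetical.
    import json
    import socket
    import time

    SERVER = ("192.168.0.10", 5000)  # hypothetical collection server


    def send_count(timestamp: float, passenger_count: int) -> None:
        """Send one passenger-count record as a single JSON line over TCP."""
        record = json.dumps({"ts": timestamp, "count": passenger_count}) + "\n"
        with socket.create_connection(SERVER, timeout=5) as conn:
            conn.sendall(record.encode("utf-8"))


    if __name__ == "__main__":
        send_count(time.time(), 7)  # e.g. 7 passengers detected in the current frame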

For the project, we will design, implement, and deploy the system on several buses at specific positions, with sub-systems consisting of:

  • Depth camera (e.g. Intel RealSense D435)
  • On-board computers (e.g. Raspberry Pi Zero, NVIDIA Jetson AGX Xavier)
  • Power supply (e.g. EC Technology Powerbank)

All these sub-systems in each bus will be combined into one system, which will ideally be deployed for an initial period of 2 months, thus collecting sufficient data for further analysis.

Tasks of students and implementation plan

The students will be divided into 2 groups consisting of six 2-person teams. Each group will take responsibility for reimplementing (and possibly adapting) a different existing software architecture for all the bus lines used in our project. Within each group, two of the 2-person teams will work on the same specific sub-task independently (in case one team cannot complete it). The teams within one group will therefore have to cooperate. Note that we will provide a default version of each module to guarantee the basic operation of the whole system.

The main tasks are as follows:

1. Periodically collect the video data from the depth cameras via a predefined interface or a preinstalled SD card (see the capture sketch after this list).

2. Label the corresponding objects/events in the videos to build the dataset.

3. Reimplement an existing video analytics architecture (using open-source code from papers) on the collected depth-image video (see the detection sketch after this list). We split the architecture into modules; each 2-person team takes care of one module, and the group then combines the modules.

4. Based on the implemented architecture, each team should develop an idea to improve it, then implement a demo, deploy it in the bus system, show the collected results, and present them in the final Smart City report.

a) The idea can be a new application.

b) The idea can also be an algorithm or module that improves the performance of the architecture.
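
For task 1, a minimal capture sketch is shown below. It assumes the Intel pyrealsense2 SDK with a D435 attached over USB; the output directory, resolution, and number of frames are illustrative assumptions, not values prescribed by the course.

    # Minimal depth + color capture sketch for an Intel RealSense D435 (assumes pyrealsense2).
    import os

    import cv2
    import numpy as np
    import pyrealsense2 as rs

    OUT_DIR = "captures"  # hypothetical output directory (e.g. on the preinstalled SD card)
    os.makedirs(OUT_DIR, exist_ok=True)

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # 16-bit depth
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # 8-bit BGR color
    pipeline.start(config)

    try:
        for i in range(300):  # roughly 10 seconds at 30 fps
            frames = pipeline.wait_for_frames()
            depth, color = frames.get_depth_frame(), frames.get_color_frame()
            if not depth or not color:
                continue
            depth_img = np.asanyarray(depth.get_data())  # raw 16-bit depth units
            color_img = np.asanyarray(color.get_data())
            cv2.imwrite(os.path.join(OUT_DIR, f"depth_{i:06d}.png"), depth_img)  # 16-bit PNG
            cv2.imwrite(os.path.join(OUT_DIR, f"color_{i:06d}.png"), color_img)
    finally:
        pipeline.stop()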
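
For tasks 3 and 4, one possible way to get a baseline detector running is sketched below, using the publicly available ultralytics/yolov5 release via torch.hub; the model variant, frame path, and person-counting step are illustrative assumptions and may differ from the architecture handed out in the course. For labeling (task 2), the plain-text YOLO format is common: one "<class> <x_center> <y_center> <width> <height>" line per object, with coordinates normalized to [0, 1].

    # Minimal object-detection sketch (assumes the ultralytics/yolov5 repository is
    # reachable via torch.hub; model variant and frame path are hypothetical).
    import torch

    # Downloads the repository and pretrained COCO weights on first use.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    results = model("captures/color_000000.png")  # a frame captured as in task 1
    results.print()               # summary of detected classes and inference time
    detections = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, confidence, class]

    # Example: count detected persons (COCO class 0) as a crude passenger estimate.
    num_people = int((detections[:, 5] == 0).sum())
    print(f"people in frame: {num_people}")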

Learning about such a fast-moving field is an exciting opportunity, but covering it in a traditional course setting comes with some caveats you should be aware of.

  • No canonical curriculum: Many topics in mathematics and computer science, such as linear algebra, real analysis, discrete mathematics, and data structures and algorithms, come with well-established curricula; courses on such subjects can be found at most universities, and they tend to cover similar topics in a similar order. This is not the case for emerging research areas like deep learning: the set of topics to be covered, as well as the order and way of thinking about each topic, has not yet been settled.
  • Few learning materials: There are very few high-quality textbooks or other learning materials that synthesize or explain much of the content we will cover. In many cases, the research paper that introduced an idea is the best or only resource for learning about it.
  • Theory lags experiments: At present, video analytics is primarily an empirically driven research field. We may use mathematical notation to describe or communicate our algorithms and ideas, and many techniques are motivated by some mathematical or computational intuition, but in most cases, we rely on experiments rather than formal proofs to determine the scenarios where one technique might outperform another. This can sometimes be unsettling for students, as the question “why does that work?” may not always have a precise, theoretically-grounded answer.
  • Things will change: If you were to study deep learning ten years from now, it is very likely that it will look quite different from today. There may be new fundamental discoveries or new ways of thinking about things we already know; there may be some ideas we think are important today, that will turn out in retrospect not to have been. There may be similarly impactful results lurking right around the corner.

Prerequisites

  • It is highly recommended that you have completed a course on Data Science (e.g., "Data Science and Big Data Analytics" taught by Dr. Steffen Herbold, or the "Machine Learning" course by Stanford University) before entering this course. You also need to be familiar with computer networking and mobile communications.
  • Knowledge of any of the following languages: Python (course language), R, Java, MATLAB, or any other language that features proper machine learning libraries.

Grading

  • Participation: 50%
    • Task 1: 10%
    • Task 2: 20%
    • Task 3: 20%
  • Presentation: 20%
    • Present your work with slides to the audience (in English).
    • 20 minutes of presentation followed by 10 minutes of Q&A for an individual student.
    • 30 minutes of presentation followed by 15 minutes of Q&A for a team of two students.

Suggestions for preparing the slides: Help your audience quickly grasp the general idea. Figures, tables, and animations are better than sentences. Don't forget a summary of your ideas and contributions. All quoted images, tables, and text must indicate their source. Note: A team needs to clearly explain the division of their work, and both team members need to present their respective parts and answer questions.


  • Final report: 30%

The report must be written in English according to common guidelines for scientific papers: 10-15 pages of content for an individual student and 20-25 pages for a team (excluding the table of contents, bibliography, etc.). Please note that you cannot copy content directly from papers or webpages; this is considered plagiarism and will be treated seriously. All quoted images and tables must indicate their source.

Schedule

Time | Topic | Output
04.11.2020 | Lecture I: Course Setup [4] & Smart City (online) | none
11.11.2020 | Lecture II: Object Detection [5] & System Architecture - Video Analytics [6] (online) | Papers (10 released, choose 2)
18.11.2020 | Warm-up (Vision Group): run YOLO for object detection. Warm-up (System Group): initialise and run the first demo on the Jetson Nano | none
25.11.2020 | Task 1 (Vision Group): train YOLO on a new dataset. Task 1 (System Group): various object detection pipeline configuration adjustments | Task 1 report, Vision Group (deadline: 30.11.2020); Task 1 report, System Group (deadline: 08.12.2020)
02.12.2020 | Discussion & Task 2 (Vision Group): YOLO for depth images. Task 1 (System Group): various object detection pipeline configuration adjustments | Task 2 report (deadline: 21.12.2020)
09.12.2020 | Task 2 (Vision Group): YOLO for depth images. Discussion & Task 2 (System Group): efficiently store images from the Intel RealSense camera on the Jetson Nano | –
16.12.2020 | Task 2 (Vision Group): YOLO for depth images. Task 2 (System Group): efficiently store images from the Intel RealSense camera on the Jetson Nano | –
23.12.2020 | Discussion on Task 2 | –
30.12.2020 | Holiday | –
06.01.2021 | Holiday | –
13.01.2021 | Task 3 (Vision Group): YOLO for different topics. Task 3 (System Group): object detection pipeline configuration for different topics | –
20.01.2021 | Task 3 (Vision Group): YOLO for different topics. Task 3 (System Group): object detection pipeline configuration for different topics | –
27.01.2021 | Task 3 (Vision Group): YOLO for different topics. Task 3 (System Group): object detection pipeline configuration for different topics | –
03.02.2021 | Task 3 (Vision Group): YOLO for different topics. Task 3 (System Group): object detection pipeline configuration for different topics | –
01.03.2021 | Discussion & Brainstorming | –
15.03.2021 | Final presentations | –
31.03.2021 | Final report | –