Pervasive Artificial Intelligence Research (PAIR) Labs, Ministry of Science and Technology (MOST), Taiwan
The Pervasive AI Research (PAIR) Labs, a group of national research labs funded by the Ministry of Science and Technology (MOST), Taiwan, is commissioned to achieve academic excellence, nurture local AI talent, build international linkage, and develop pragmatic approaches to applied AI technologies for the innovation and optimization of services, products, workflows, and supply chains. PAIR comprises 13 distinguished research institutes in Taiwan investigating a variety of applied AI topics.
Website: https://pairlabs.ai/
Intelligent Vision System (IVS) Lab, National Chiao Tung University (NCTU), Taiwan
The Intelligent Vision System (IVS) Lab at National Chiao Tung University is directed by Professor Jiun-In Guo and tackles practical open problems in autonomous driving research, especially intelligent vision processing systems, applications, and SoCs exploiting deep learning technology.
Website: http://ivs.ee.nctu.edu.tw/ivs/
Object detection in computer vision has been extensively studied and has made tremendous progress in recent years thanks to deep learning methods. However, because most deep learning-based algorithms require heavy computation, it is hard to run these models on embedded systems with limited computing capability. In addition, the existing open datasets for object detection in ADAS applications mostly cover pedestrians, vehicles, cyclists, and motorcycle riders in Western countries. Traffic there differs markedly from that of crowded Asian countries, where many motorcycle riders speed along city roads, so object detection models trained on these open datasets cannot be directly applied to detecting moving objects in Asian countries.
In this competition, we encourage participants to design object detection models for the traffic scenes this competition targets, where many fast-moving scooters share city roads with vehicles and pedestrians. The developed models should not only fit embedded systems but also achieve high accuracy.
This competition is divided into two stages: qualification and final competition.
Qualification competition: all participants submit their answers online, and a score is calculated. The top 15 teams qualify for the final round of the competition.
Final competition: the submissions are validated and evaluated on an NVIDIA Jetson TX2 by the organizing team to determine the final score.
The goal is to design a lightweight deep learning model, suitable for constrained embedded systems, that can handle traffic in Asian countries. We evaluate detection accuracy, model size, computational complexity, and execution performance on the NVIDIA Jetson TX2 according to a predefined metric.
Given the test image dataset, participants are asked to detect objects of the following four classes {pedestrian, vehicle, scooter, bicycle} in each image, reporting the class and bounding box of every detected object.
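As a concrete illustration of what one detection record carries (the actual submission schema is defined by the organizers; the field names below are hypothetical):

```python
# Hypothetical detection record; the official submission schema is defined
# by the organizers, and these field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str   # e.g. "img_000123.jpg"
    label: str      # one of {"pedestrian", "vehicle", "scooter", "bicycle"}
    x_min: float    # bounding box in pixel coordinates
    y_min: float
    x_max: float
    y_max: float
    score: float    # detector confidence in [0, 1]

# Example: a scooter detected in one test image.
det = Detection("img_000123.jpg", "scooter", 412.0, 220.5, 468.0, 301.0, 0.87)
```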
Based on each team's points in the final evaluation, the top three teams receive cash awards.
Champion: USD 1,500
1st Runner-up: USD 1,000
2nd Runner-up: USD 750
Special Awards
Best accuracy award – for the highest mAP in the final competition: USD 200
Best bicycle detection award – for the highest bicycle AP in the final competition: USD 200
Best scooter detection award – for the highest scooter AP in the final competition: USD 200
All award winners must agree to submit a contest paper, open-source their final code, and attend the ICME 2020 Grand Challenge PAIR Competition Special Session to present their work.
| Date | Activity |
|---|---|
| 12/1/2019 | Qualification Competition Start Date |
| 12/1/2019 | Date to Release Public Testing Data |
| 1/24/2020 | Date to Release Private Testing Data for Qualification |
| 1/30/2020 12:00 PM UTC | Qualification Competition End Date |
| 2/1/2020 12:00 AM UTC | Finalist Announcement |
| 2/1/2020 | Final Competition Start Date |
| 2/7/2020 | Date to Release Private Testing Data for Final |
| 2/14/2020 12:00 PM UTC | Final Competition End Date |
| 3/1/2020 12:00 PM UTC | Award Announcement |
| 3/13/2020 | Paper Submission Date |
| 4/15/2020 | Author Notification |
| 4/29/2020 | Camera-Ready Submission |
Qualification Competition
Grading follows the MS COCO object detection evaluation protocol.
The mean Average Precision (mAP) is used to evaluate the results.
Intersection over union (IoU) threshold is set at 0.5.
The average precision (AP) of each class is calculated, and the mAP over all classes serves as the key metric, as sketched below.
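For reference, a sketch of these standard definitions (the exact AP integration follows the MS COCO protocol referenced above), with $B_p$ a predicted box and $B_g$ a ground-truth box:

```latex
% A detection may match a ground-truth box only if their overlap
% reaches the threshold:
\mathrm{IoU}(B_p, B_g) \;=\; \frac{\lvert B_p \cap B_g \rvert}{\lvert B_p \cup B_g \rvert} \;\ge\; 0.5

% mAP averages the per-class average precision over the four classes:
\mathrm{mAP} \;=\; \frac{1}{\lvert C \rvert} \sum_{c \in C} \mathrm{AP}_c,
\qquad C = \{\text{pedestrian},\ \text{vehicle},\ \text{scooter},\ \text{bicycle}\}
```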
In addition, during the qualification competition period, each team must submit a team composition document including the team name, leader, members, affiliations, and contact information.
Final Competition
The team with the highest mAP will receive the full accuracy score (25%) and the team with the lowest will receive zero; the remaining teams receive scores scaled linearly with their mAP between those two extremes.
The team with the smallest model will receive the full model-size score (25%) and the team with the largest will receive zero; the remaining teams receive scores scaled linearly with model size.
The team with the smallest GOP count per frame will receive the full complexity score (25%) and the team with the largest will receive zero; the remaining teams receive scores scaled linearly with GOP count.
The team whose model completes the detection task in the shortest time will receive the full speed score (25%) and the team taking the longest will receive zero; the remaining teams receive scores scaled linearly with execution time. A normalization sketch follows this list.
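A minimal sketch of one plausible reading of this linear rule, assuming min–max normalization across the finalist teams (the organizers' exact formula may differ). Writing $x_i$ for team $i$'s raw value on a criterion:

```latex
% Higher-is-better criterion (mAP): best team gets 25, worst gets 0.
s_i^{\text{acc}} \;=\; 25 \cdot \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}

% Lower-is-better criteria (model size, GOPs/frame, execution time):
s_i^{\text{crit}} \;=\; 25 \cdot \frac{x_{\max} - x_i}{x_{\max} - x_{\min}}

% Total score: the sum over the four criteria, out of 100.
S_i \;=\; s_i^{\text{acc}} + s_i^{\text{size}} + s_i^{\text{gops}} + s_i^{\text{time}}
```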
The speed evaluation measures the time of the overall process, from reading the private testing dataset for the final to completing the submission.csv file, including parsing the image list, loading images, and any other overhead incurred while running detection over the testing dataset. A sketch of the timed span appears below.
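A minimal sketch of what the timed span covers, assuming a hypothetical detect() function and a simple CSV layout (both are illustrative stand-ins, not the official evaluation harness):

```python
# Illustrative sketch of the timed span: everything from parsing the image
# list to finishing submission.csv falls inside the measured interval.
# detect() and the CSV columns are hypothetical, not the official harness.
import csv
import time
from pathlib import Path

import cv2  # OpenCV, assumed available on the Jetson TX2


def detect(image):
    """Stand-in for model inference; returns [(label, x1, y1, x2, y2, score), ...]."""
    return []  # replace with the actual model's forward pass


start = time.perf_counter()  # the clock starts before any dataset I/O

image_paths = sorted(Path("private_test").glob("*.jpg"))  # parse image list
with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for path in image_paths:
        image = cv2.imread(str(path))  # image loading counts toward the time
        for label, x1, y1, x2, y2, score in detect(image):
            writer.writerow([path.name, label, x1, y1, x2, y2, score])

elapsed = time.perf_counter() - start  # total measured execution time
print(f"End-to-end detection time: {elapsed:.2f} s")
```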
Prof. Ted Kuo, tkuo@nctu.edu.tw
Prof. Jenq-Neng Hwang, hwang@uw.edu
Prof. Jiun-In Guo, jiguo@nctu.edu.tw
Chia-Chi Tsai, apple.35932003@gmail.com