Pervasive Artificial Intelligence Research (PAIR) Labs, National Chiao Tung University (NCTU), Taiwan
The Pervasive AI Research (PAIR) Labs, a group of national research labs funded by the Ministry of Science and Technology, Taiwan, is commissioned to achieve academic excellence, nurture local AI talent, build international linkages, and develop pragmatic approaches in applied AI technologies for the innovation and optimization of services, products, workflows, and supply chains. PAIR comprises 18 distinguished research institutes in Taiwan conducting research across a variety of applied AI areas.
Website: https://pairlabs.ai/
Intelligent Vision System (IVS) Lab, National Yang Ming Chiao Tung University (NYCU), Taiwan
The Intelligent Vision System (IVS) Lab at National Yang Ming Chiao Tung University is directed by Professor Jiun-In Guo. We tackle practical open problems in autonomous driving research, focusing on intelligent vision processing systems, applications, and SoCs that exploit deep learning technology.
Website: http://ivs.ee.nctu.edu.tw/ivs/
AI System (AIS) Lab, National Cheng Kung University (NCKU), Taiwan
The AI System (AIS) Lab at National Cheng Kung University is directed by Professor Chia-Chi Tsai. We are dedicated to building systems with AI technology. Our research includes AI accelerator development, AI architecture improvement, and AI-based solutions to multimedia problems.
MediaTek
MediaTek Inc. is a Taiwanese fabless semiconductor company that provides chips for wireless communications, high-definition television, handheld mobile devices such as smartphones and tablet computers, navigation systems, consumer multimedia products, digital subscriber line services, and optical disc drives. MediaTek is known for advances in multimedia and AI, and for expertise in delivering the most performance possible when and where it is needed. MediaTek’s chipsets are optimized to run cool and power-efficiently to extend battery life, striking a balance of high performance, power efficiency, and connectivity.
Website: https://www.mediatek.com/
Wistron-NCTU Embedded Artificial Intelligence Research Center
Sponsored by Wistron and founded in September 2020, the Wistron-NCTU Embedded Artificial Intelligence Research Center (E-AI RDC) is a young and enthusiastic research center led by Prof. Jiun-In Guo (Institute of Electronics, National Chiao Tung University). It aims at developing key technologies for embedded AI applications, ranging from AI data acquisition and labeling, through AI model development and optimization, to AI computing platform development, supported by an easy-to-use AI toolchain (called ezAIT). The target applications cover AIoT, ADAS/ADS, smart transportation, smart manufacturing, smart medical imaging, and emerging communication systems. In addition to developing the above-mentioned technologies, E-AI RDC also collaborates with international and industrial partners to cultivate talent in the embedded AI field and further enhance the competitiveness of Taiwanese industry.
The A19 Lab
The A19 Lab is a joint research lab founded in November 2021 and sponsored by AU Optronics (AUO) Corp. and the College of Artificial Intelligence, National Yang Ming Chiao Tung University (NYCU). Directed by Professor Ted Kuo of NYCU, the lab is commissioned to explore leading-edge research in optronics and AI technologies, with missions to develop innovative human-machine interfaces (HMI) and systems toward a totally immersive metaverse.
Dear Participants,
The ICME 2022 competition awardees are:
- Champion: okt2077
- 1st Runner-up: asdggg
- 3rd Place: ACVLab
Special Award
- Best INT8 Model Development Award: no winner
Congratulations!
Finalists have been announced as follows.
1. okt2077
2. feishen
3. asdggg
4. TonyTTTTT
5. APTX4869
6. LeeC
7. OzHsu
8. UTS_GBDTC_MMLab
9. ACVLab
10. AiPoG
11. project_test
12. TonyStark
13. jerry88277
14. chingweihsu0809
15. Polybahn
Dear competitors: The Qualification Competition is about to end. Please remember to upload your results. We will send an e-mail with Final Competition information.
We have released “Private Testing Data for Qualification.zip”. Please submit your results (700 images) for qualification. Since the new private testing dataset differs from the previous dataset, the leaderboard has been reset. Thank you!
Object detection has been extensively studied in computer vision and has made tremendous progress in recent years. Image segmentation takes this a step further by trying to find the exact boundaries of the objects in an image, and semantic segmentation pursues more than just the location of an object, going down to pixel-level information. However, due to the heavy computation required by most deep-learning-based algorithms, it is hard to run these models on embedded systems, which have limited computing capability. In addition, the existing open datasets of traffic scenes for ADAS applications usually cover the main lane, adjacent lanes, and different lane marks (i.e., double lines, single lines, and dashed lines) in Western countries. These scenes differ from those in Asian countries like Taiwan, where many motorcycle riders speed along city roads, so semantic segmentation models trained only on the existing open datasets require extra techniques to segment such complex scenes.
In this competition, we encourage participants to design a semantic segmentation model that can be applied to Taiwan’s traffic scenes, where many fast-moving motorcycles share city roads with vehicles and pedestrians. The developed models should not only fit embedded systems but also achieve high accuracy at the same time.
This competition includes two stages: qualification and final competition.
The goal is to design a lightweight deep learning semantic segmentation model suitable for constrained embedded system design, able to deal with traffic scenes in Asian countries like Taiwan. We focus on segmentation accuracy, power consumption, real-time performance optimization, and deployment on MediaTek’s Dimensity Series platform.
With MediaTek’s Dimensity Series platform and its heterogeneous computing capabilities, such as the CPUs, GPUs, and APUs (AI processing units) embedded in its system-on-chip products, developers are provided with the high performance and power efficiency needed for building AI features and applications. Developers can target these specific processing units within the system-on-chip, or they can let the MediaTek NeuroPilot SDK intelligently handle processing allocation for them.
Given the test image dataset, participants are asked to assign each pixel in each image to one of the following six classes: {background, main_lane, alter_lane, double_line, dashed_line, single_line}.
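As an illustration of the labeling task (a sketch only; the official dataset defines its own class-id convention, so the mapping below is an assumption), each prediction can be represented as an H×W integer mask, optionally expanded to one-hot form for training losses:

```python
import numpy as np

# Hypothetical class-id mapping; the competition's annotation files
# define the authoritative ids for these six classes.
CLASS_IDS = {name: i for i, name in enumerate(
    ["background", "main_lane", "alter_lane",
     "double_line", "dashed_line", "single_line"])}

def to_one_hot(mask, num_classes=6):
    """Convert an HxW integer label mask to an HxWxC one-hot array."""
    return np.eye(num_classes, dtype=np.uint8)[mask]

# Example: a 1x2 mask labeling one background pixel and one single_line pixel.
mask = np.array([[CLASS_IDS["background"], CLASS_IDS["single_line"]]])
one_hot = to_one_hot(mask)  # shape (1, 2, 6)
```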
Reference
[1] F. Yu et al., “BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[2] Google, “Measuring device power,” Android Open Source Project. [Online]. Available: https://source.android.com/devices/tech/power/device?hl=en#power-consumption. [Accessed: 11-Nov-2021].
[3] M. Cordts et al., “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Based on each team’s points in the final evaluation, the top three teams receive the regular awards.
Special Award
All award winners must agree to submit a contest paper and attend the IEEE ICME 2022 Grand Challenge PAIR Competition Special Session to present their work.
Deadline for Submission (UTC+8):
Date | Event
---|---
1/10/2022 | Qualification Competition Start Date
1/10/2022 | Date to Release Public Testing Data
2/14/2022 | Date to Release Private Testing Data for Qualification
2/21/2022 11:59:59 PM UTC+8 | Qualification Competition End Date
2/22/2022 12:00 PM UTC+8 | Finalist Announcement
2/22/2022 | Final Competition Start Date
2/28/2022 | Date to Release Private Testing Data for Final
3/7/2022 11:59:59 PM UTC+8 | Final Competition End Date
3/19/2022 12:00 PM UTC+8 | Award Announcement
3/31/2022 | Invited Paper Submission Deadline
Qualification Competition
The grading criteria are the same as those used for the Cityscapes [3] Pixel-Level Semantic Labeling Task.
The IoU compares the predicted region with the ground-truth region for a class and quantifies this based on the area of overlap between the two regions. The IoU is calculated for each semantic class in an image, and the mean of all class IoU scores makes up the mean Intersection over Union (mIoU) score.
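The metric above can be sketched in a few lines of NumPy (an illustrative sketch, not the official evaluation script; skipping classes absent from both prediction and ground truth is an assumption about tie handling):

```python
import numpy as np

NUM_CLASSES = 6  # background, main_lane, alter_lane, double_line, dashed_line, single_line

def mean_iou(pred, gt, num_classes=NUM_CLASSES):
    """mIoU over integer label masks: IoU_c = |pred_c ∩ gt_c| / |pred_c ∪ gt_c|,
    averaged over classes that appear in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union == 0:  # class absent from both masks; skip it
            continue
        ious.append(inter / union)
    return float(np.mean(ious))

# Example: class 0 has IoU 1/2, class 1 has IoU 2/3, so mIoU = 7/12.
pred = np.array([[0, 1],
                 [1, 1]])
gt = np.array([[0, 1],
               [0, 1]])
score = mean_iou(pred, gt)  # ≈ 0.5833
```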
Final Competition
The finalists have to hand in a package that includes a SavedModel (which should be compatible with freeze_graph.py from tensorflow_v1.13.2) and an inference script. We will deploy the TensorFlow model to MediaTek’s Dimensity Series platform and grade the final score by running the model.
A technical report is required, describing the model structure, complexity, execution efficiency, etc.
Submission File
Upload a zip file named submission.zip containing the following files:
Ted Kuo, tkuo@cs.nctu.edu.tw
Jenq-Neng Hwang, hwang@uw.edu
Jiun-In Guo, jiguo@nycu.edu.tw
Marvin Chen, marvin.chen@mediatek.com
Hsien-Kai Kuo, hsienkai.kuo@mediatek.com
Chia-Chi Tsai, cctsai@gs.ncku.edu.tw