BAAI · Prize pool ~$13,986 · 438 teams · 490 participants
Ultra-high Resolution EM Images Segmentation Challenge
2019-10-31 - Launch
2020-01-10 - Team Merger Deadline
2020-01-22 - Close


Update 2020-1-16


U-RISC Challenge has entered the Final Stage!

The top 30 teams in each track need to add "biendata Assist" (WeChat ID: shujujingsai) as a friend to join the WeChat group, with the message "U-RISC Final + Team Name".

Submission instructions and the subsequent schedule will be announced in the WeChat group on 2020/01/16.

The final competition officially begins on 2020/01/17.


All final-stage teams must read the model submission instructions released on 2020/01/16.

Instruction download:


Update 2020-1-10


NEW teaming-up deadline: 2020-01-10, 15:59 UTC

NEW 1st-stage competition deadline: 2020-01-15, 15:59 UTC


Update 2019-12-28


The daily submission limit has been raised to 3 in both tracks.


Update 2019-12-24


Based on feedback from participants and other experts, the organizers have decided to modify the evaluation metric. Please visit the "Evaluation" section for details (link).


Update 2019-12-16


The number of teams admitted to the Final Stage has been raised. Please visit the "Timeline & Prize" section for details (link).


Update 2019-12-13


Important: the size of submitted models is now limited to 1 gigabyte. Please visit the "Timeline & Prize" section for details (link).


Update 2019-12-3


The evaluation instructions for the Simple track have been updated. Please check the details (link).


Update 2019-11-11


To improve the accuracy of the evaluation process, we have updated the submission format requirements. Please find the details on the "Evaluation" page.




Visual information from the external environment reaches the retina at the back of the eye, where optical signals are converted into electrical ones. The retina is therefore the starting point of vision: it not only transduces the signal but also processes the information, which is then transmitted to the visual cortex of the brain, where vision ultimately forms. This processing is carried out by a variety of cell types in the retina, arranged in a "3+2" network architecture: three layers of cell bodies and two layers of connections between them. Traditional anatomical research has revealed the general structure of the retina, and with the development of technology we can now explore its unknown areas at unprecedented resolution.


Beyond its biological importance, the retina has also attracted interest from the machine learning community. After Harvard's David Hubel and Torsten Wiesel uncovered the working principles of the retina and visual cortex (work that earned them the 1981 Nobel Prize in Physiology or Medicine), David Marr of the Massachusetts Institute of Technology developed a mathematical model of visual information processing that influenced subsequent artificial neural network research. His colleague Tomaso Poggio continues cutting-edge artificial intelligence research at MIT.


Since then, many models, including Geoffrey Hinton's capsule networks, have borrowed from the way the retina and visual cortex process information. Computational biologists have also found that deep convolutional neural networks capture the retina's response to the outside world more accurately than many classical computational neurobiological models, suggesting that artificial and biological neural networks share at least some similarities ("Deep Learning Models of the Retinal Response to Natural Scenes," Adv Neural Inf Process Syst, 2016).


Studying the distribution and connections of neurons in the nervous system not only helps us understand how the nervous system works, but also advances research in artificial intelligence. More importantly, it can provide a theoretical basis for treating nervous-system diseases that are currently hard to cure. Neuroscience calls the study of cell distribution and connectivity "connectomics." Researchers have collected enormous amounts of data at different scales; a single scanned mouse brain, for example, produces data on the terabyte scale. In most cases, researchers can only extract information from these massive datasets manually, which is like looking for a needle in a haystack. Efficiently and automatically extracting valuable information from this data is therefore an important and urgent task.




The competition asks participants to identify, localize, and outline the boundaries of neurons in ultra-high-resolution Scanning Electron Microscope (SEM) images.


A sample SEM image with its label is shown below:



Left: EM image of neurons.  Right: the corresponding label with cell membranes outlined.
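For intuition, a predicted membrane map can be compared against such a label pixel by pixel. The sketch below computes a pixel-wise F1 score on tiny synthetic masks; this is only an illustration, and the official scoring procedure is the one defined on the "Evaluation" page.

```python
import numpy as np

def pixel_f1(pred, label):
    """Pixel-wise F1 between binary membrane masks (1 = membrane)."""
    pred = pred.astype(bool)
    label = label.astype(bool)
    tp = np.logical_and(pred, label).sum()    # membrane predicted and labeled
    fp = np.logical_and(pred, ~label).sum()   # membrane predicted, not labeled
    fn = np.logical_and(~pred, label).sum()   # membrane labeled, not predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Tiny synthetic 4x4 masks (not real competition data)
label = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
score = pixel_f1(pred, label)
```

Here 3 membrane pixels are matched, with one false positive and one false negative, giving precision = recall = 0.75.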




The competition has two tracks: Complex and Simple.


The Simple-track dataset has fewer cells, smaller images, lower resolution, and fewer pixels on the cell membranes.


The Complex-track dataset has more cells, larger images, higher resolution (the original EM images are 10,000×10,000 pixels), and more pixels on the cell membranes.

Please note: data from the Complex track must not be used in the Simple track.
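Because the Complex-track images are 10,000×10,000 pixels, they are generally too large to feed into a network whole, so a common approach is to run inference on overlapping tiles and stitch the predictions back together. Below is a minimal sketch, assuming a 1024-pixel tile size and 128-pixel overlap (both hypothetical choices, not competition requirements):

```python
import numpy as np

def tile_image(img, tile=1024, overlap=128):
    """Split a large 2-D image into overlapping square tiles.

    Returns a list of (y, x, tile_array) triples so predictions can be
    stitched back into the full image at the same offsets.
    """
    stride = tile - overlap
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp so the last tile stays inside the image bounds
            y0 = min(y, h - tile) if h >= tile else 0
            x0 = min(x, w - tile) if w >= tile else 0
            tiles.append((y0, x0, img[y0:y0 + tile, x0:x0 + tile]))
    return tiles

# A full-size Complex-track image yields a 12x12 grid of 1024x1024 tiles
img = np.zeros((10_000, 10_000), dtype=np.uint8)
tiles = tile_image(img)
```

Clamping the last row and column of tiles against the image edge keeps every tile the same size, which is usually simpler than padding when batching tiles through a model.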


Paper and Code Reference 


The following two papers are provided for reference.




Paper: Yu, Zhiding, et al. "CASENet: Deep Category-Aware Semantic Edge Detection." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 1761–1770. url






Paper: Hu, Yuan, Yunpeng Chen, Xiang Li, and Jiashi Feng. "Dynamic Feature Fusion for Semantic Edge Detection." IJCAI (2019). url





