Yiming Cui et al.

The structure of a root system plays an essential role in plant growth, development, and stress responses. Minirhizotron imaging is one of the most widely used approaches to capture and analyze root systems. After a minirhizotron image is segmented, each individual root is separated from the others and from the background. Root traits, such as root length and diameter distribution, provide information about the health of the plant. Current methods for analyzing minirhizotron images usually rely on manually annotated labels and commercial software tools, which are time-consuming and labor-intensive. Moreover, these methods usually produce a statistical analysis of the whole input image rather than the traits of each individual root. In this work, we propose a pipeline that uses deep neural networks to automatically segment roots from the background and then extracts traits such as length and diameter distribution from each segmented root. In detail, we first use a pre-trained U-Net to segment the roots in the minirhizotron images. Then, we separate the individual roots using connected component analysis. Finally, we extract the traits of each root, such as its length and diameter distribution, with morphological operations such as skeletonization. For evaluation, we conduct experiments on synthetic roots made of strings and threads as well as on the PRMI benchmark dataset of real switchgrass roots, and compare the estimated traits with those produced by existing commercial software.
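
The sketch below illustrates the trait-extraction stage described above (connected component analysis followed by skeletonization and a distance transform). It is a minimal example under assumed tooling (NumPy, SciPy, scikit-image), not the implementation from the paper; the function name, threshold, and histogram binning are illustrative.

```python
# Minimal sketch: estimate per-root length and diameter distribution from a
# binary root mask (e.g. produced by a U-Net). Library choices are assumptions.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def extract_root_traits(mask, min_pixels=50):
    """Return per-root traits estimated from a 2D binary segmentation mask."""
    labels, num = ndimage.label(mask > 0)          # connected component analysis
    traits = []
    for i in range(1, num + 1):
        root = labels == i
        if root.sum() < min_pixels:                # skip small noise blobs
            continue
        skeleton = skeletonize(root)               # 1-pixel-wide centerline
        length_px = float(skeleton.sum())          # length ~ count of skeleton pixels
        # The Euclidean distance transform gives the radius to the nearest
        # background pixel; sampling it on the skeleton approximates the
        # local radius (and hence diameter) along the root.
        radius = ndimage.distance_transform_edt(root)
        diameters = 2.0 * radius[skeleton]
        traits.append({
            "length_px": length_px,
            "mean_diameter_px": float(diameters.mean()),
            "diameter_hist": np.histogram(diameters, bins=10)[0].tolist(),
        })
    return traits
```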

Yiming Cui et al.

Most current plant phenotyping research focuses primarily on above-ground traits, such as leaves and flowers. Roots receive comparatively less attention because they are challenging to examine and image. Minirhizotron (MR) systems are one imaging approach for studying plant roots underground: a transparent tube is installed in the ground so that a camera can be inserted to capture images of the root system. Unlike minirhizotron imaging, X-ray computed tomography (CT) captures three-dimensional (3D) information from soil cores extracted from the ground. In either case, the first step in root analysis is to segment the roots from the background in the images or image sequences, and the segmentation results underpin further analyses such as root diameter and length estimation. Current fully supervised segmentation methods mainly rely on pixel- or point-level annotations, which require considerable manual effort and time. In this work, we propose a weakly supervised root segmentation approach based on graph convolutional networks that requires only image-level annotations to segment roots from images or image sequences. In detail, our model first constructs graphs over neighboring pixels/points and then, by training a classifier on the image-level annotations, learns discriminative features that serve as hints for segmentation. Finally, post-processing procedures such as principal component analysis (PCA) are applied to refine the segmentation results. For evaluation, we conduct experiments on the challenging 2D PRMI minirhizotron benchmark and on 3D switchgrass root X-ray CT datasets.
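
The sketch below illustrates the graph-construction idea described above for the 2D case: each pixel becomes a node, 4-neighboring pixels are connected by edges, and one normalized-adjacency propagation step (as in a basic GCN layer) mixes each pixel's features with those of its neighbors. It is a simplified example under assumed tooling (NumPy), not the authors' model; the function name and the dense adjacency are illustrative, and a real implementation would use a sparse graph and learned weights.

```python
# Minimal sketch: one GCN-style propagation step over a 4-neighborhood pixel
# graph. All names are illustrative assumptions, not the paper's code.
import numpy as np

def pixel_graph_propagate(features):
    """Average each pixel's features with its 4-connected graph neighbors.

    features: (H, W, C) per-pixel feature map.
    returns:  (H, W, C) propagated features.
    """
    h, w, c = features.shape
    x = features.reshape(h * w, c)
    idx = np.arange(h * w).reshape(h, w)

    # Undirected edges: connect each pixel to its right and bottom neighbor.
    edges = np.concatenate([
        np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1),
        np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1),
    ], axis=0)

    # Symmetric adjacency with self-loops; dense form is only practical for
    # small patches and stands in for the sparse graph a real model would use.
    adj = np.zeros((h * w, h * w), dtype=np.float32)
    adj[edges[:, 0], edges[:, 1]] = 1.0
    adj[edges[:, 1], edges[:, 0]] = 1.0
    adj += np.eye(h * w, dtype=np.float32)

    deg = adj.sum(axis=1, keepdims=True)           # node degrees
    out = (adj / deg) @ x                          # row-normalized neighborhood averaging
    return out.reshape(h, w, c)
```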