Work program

1. Creation of annotated datasets of growing plants

4DPlants studies two plant species spanning a range of architectural complexity. The model plant Arabidopsis thaliana serves as a reference, since partner ENS Lyon has mastered both its culture and the corresponding virtual models. In particular, different sets of culture conditions and natural or mutant genetic backgrounds are used to maximize architectural variability and test the robustness of our tools. To further broaden the generality of our data, we also study the tomato plant (Solanum lycopersicum), a crop whose architecture contrasts sharply at every level (leaves, stems, inflorescences, branching patterns).

Our goal is threefold:

  1. To generate annotated 4D datasets of virtually growing plants with L-systems;
  2. To generate annotated 4D datasets of real growing plants using our Plant Imager robot;
  3. To track time correspondence at botanically relevant scales (organs) through a semantic space-time registration of plant growth.
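Goal 1 relies on L-systems, which generate a plant's architecture by repeatedly rewriting a string of symbols in parallel; each derivation step yields one stage of virtual growth, and since every symbol's origin is known, annotations come for free. A minimal sketch of that rewriting mechanism (the symbols and rule below are a classic textbook example, not the project's actual models):

```python
# Minimal deterministic L-system: each derivation step rewrites every symbol
# of the current word in parallel according to the production rules.

def derive(axiom, rules, steps):
    """Apply the production rules `steps` times, returning every stage."""
    stages = [axiom]
    word = axiom
    for _ in range(steps):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
        stages.append(word)
    return stages

# A classic bracketed branching rule: "F" is a stem segment,
# "[" / "]" open/close a lateral branch, "+" / "-" turn the turtle.
rules = {"F": "F[+F]F[-F]F"}
stages = derive("F", rules, 2)
print(len(stages[1]))  # 11 symbols after one derivation step
```

Interpreting each stage with a 3D turtle then yields the geometry of the growing virtual plant at successive time points.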

2. Learning of growth models

Our goal is to learn models of plant growth by first studying compact representations of spatio-temporal 3D plants, and subsequently learning 4D growth models. The models are learned and validated numerically on virtual plants, which provides theoretically unlimited simulated training data with suitable annotations. Annotated datasets of real plants are also used to fine-tune the models. We aim for high-precision models by leveraging this data with supervised learning techniques. To start with, we favour precision over computing time or memory consumption, although frugality may become an additional criterion later on. We explore how the balance between virtual and real plant data affects these criteria, and whether specific properties of the virtual plants are key to efficient training.

Our work can be decomposed into three successive steps:

  1. Learning 4D plant representations using deep neural networks that allow for alignment information to be encoded;
  2. Learning models that predict plant growth given a short input sequence of a 3D plant model in motion;
  3. Learning temporally-coherent 4D segmentations into organs by combining static plant segmentation and the previously learned growth prediction model.
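To make the interface of step 2 concrete, the sketch below shows a trivial constant-velocity baseline for the growth-prediction task: given a short sequence of tracked 3D point clouds, predict the next frame. The shapes, names, and the extrapolation rule are illustrative assumptions only; the project's learned models would replace this baseline.

```python
import numpy as np

def predict_next_frame(sequence):
    """Predict the next frame of a growing plant from a short sequence.

    sequence: (T, N, 3) array of N tracked 3D points over T time steps.
    This baseline simply extrapolates the last observed per-point motion;
    a learned 4D growth model would take its place.
    """
    sequence = np.asarray(sequence, dtype=float)
    velocity = sequence[-1] - sequence[-2]   # last per-point displacement
    return sequence[-1] + velocity          # constant-velocity extrapolation

# Two frames of a toy 4-point "plant" growing by 0.1 per step:
seq = np.stack([np.zeros((4, 3)), np.full((4, 3), 0.1)])
nxt = predict_next_frame(seq)
```

Even such a baseline fixes the input/output contract against which learned predictors (step 2) and temporally-coherent segmenters (step 3) can be evaluated.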

3. Robotized phenotyping platform, validation and integration for real phenotyping assays

We integrate outputs from the other tasks to first validate the results and test their utility for plant phenotyping, and then bridge the gap from research to applications by adapting the tools to the needs of the phenotyping platform's everyday users. In return, this work feeds back into the other two tasks by facilitating more acquisitions and further analysis. Overall, it will provide useful guidelines for designing and scaling up a phenotyping experiment, both upstream, to generate a training dataset, and downstream, to perform plant trait measurements using our methods.

We work on three topics:

  1. The low-cost automation of time-lapse acquisition with the Plant Imager;
  2. The development of a modular architecture of the Plant Imager software so that new algorithms can easily be tested and included in the pipelines;
  3. The validation and assessment of our plant growth learning models in phenotyping experiments on two important applications: the prediction of plant growth from a short sequence of raw captures (including interpolation of growth curves to increase time resolution) and the segmentation of a growing plant into its organs in a consistent way over time.
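Topic 2 calls for a software architecture in which a new algorithm can be dropped into an acquisition or analysis pipeline without touching the rest of the chain. A minimal sketch of such a plugin-style pipeline is shown below; the class and step names are hypothetical and do not describe the actual Plant Imager software.

```python
# Plugin-style pipeline sketch: each step is a named callable, so a new
# algorithm (e.g. an alternative segmentation) can be registered or swapped
# without modifying the surrounding code.

class Pipeline:
    def __init__(self):
        self.steps = []

    def register(self, name, func):
        """Append a processing step; returns self to allow chaining."""
        self.steps.append((name, func))
        return self

    def run(self, data):
        """Feed the data through every registered step in order."""
        for name, func in self.steps:
            data = func(data)
        return data

# Toy usage with stand-in numeric steps:
pipeline = (Pipeline()
            .register("scale", lambda pts: [x * 2 for x in pts])
            .register("offset", lambda pts: [x + 1 for x in pts]))
result = pipeline.run([1, 2, 3])  # [3, 5, 7]
```

Keeping each step behind a uniform interface is what allows the pipelines to absorb new algorithms as they come out of the other tasks.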

Publications