Publications

Process-based models are widely used to predict agroecosystem dynamics, but their results often contain considerable uncertainty due to imperfect model structure, biased model parameters, and inaccurate or inaccessible model inputs. Data assimilation (DA) techniques are widely adopted to reduce prediction uncertainty by calibrating model parameters or dynamically updating model state variables using observations. However, high computational cost, difficulty in mitigating model structural error, and low flexibility in framework development hinder their application to large-scale agroecosystem predictions. In this study, we addressed these challenges by proposing a novel DA framework that integrates a Knowledge-Guided Machine Learning (KGML)-based surrogate with a tensorized ensemble Kalman filter (EnKF) and parallelized particle swarm optimization (PSO) to effectively assimilate historical and in-season multi-source remote sensing data. Specifically, we incorporate knowledge from a process-based model, ecosys, into a Gated Recurrent Unit (GRU)-based hierarchical neural network. The hierarchical architecture of KGML-DA mimics key processes of ecosys and builds causal relationships between target variables. Using carbon budget quantification in the US Corn Belt as a context, we evaluated KGML-DA's performance in predicting key processes of the carbon cycle at three agricultural sites (US-Ne1, US-Ne2, US-Ne3), along with county-level (627 counties) and 30-m pixel-level (Champaign County, IL) grain yield. The site experiments show that updating the upstream variable, e.g., gross primary production (GPP), improved the prediction of downstream variables such as ecosystem respiration, net ecosystem exchange, biomass, and leaf area index (LAI), with RMSE reductions ranging from 9.2% to 30.5% for corn and 4.8% to 24.6% for soybean. Uncertainty in downstream variables was automatically constrained after correcting the upstream variables, demonstrating the effectiveness of the causal linkages in the hierarchical surrogate. We found that the joint use of in-season GPP and evapotranspiration (ET) products, along with historical GPP and surveyed yields, achieved the best prediction of county-level yields, while assimilating in-season LAI observations benefited the prediction in extreme years. Uncertainty and error analysis of regional yield estimation demonstrated that KGML-DA could reduce prediction error by 26.5% for corn and 36.2% for soybean. Remarkably, the GPU-based tensor operation design makes this DA framework more than 7000 times faster than the process-based model running on a high-performance computing system, indicating the high potential of the proposed framework for in-season, high-resolution agroecosystem predictions.
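As a concrete illustration of what a tensorized, GPU-batched EnKF analysis step looks like, here is a minimal PyTorch sketch. It assumes a linear observation operator H and generic state vectors; all names and shapes are illustrative, not the KGML-DA implementation, which couples the update to the learned surrogate.

```python
import torch

def enkf_update(X, y, H, R):
    """Batched stochastic EnKF analysis step.

    X : (B, N, S) forecast ensemble (B grid cells, N members, S state vars)
    y : (B, D)    observations per grid cell
    H : (D, S)    linear observation operator (a simplifying assumption)
    R : (D, D)    observation-error covariance
    """
    B, N, S = X.shape
    A = X - X.mean(dim=1, keepdim=True)               # ensemble anomalies
    P = A.transpose(1, 2) @ A / (N - 1)               # (B, S, S) covariance
    PHt = P @ H.T                                     # (B, S, D)
    K = PHt @ torch.linalg.inv(H @ PHt + R)           # (B, S, D) Kalman gain
    noise = torch.distributions.MultivariateNormal(
        torch.zeros(R.shape[0], device=R.device), R).sample((B, N))
    innov = (y.unsqueeze(1) + noise) - X @ H.T        # (B, N, D) innovations
    return X + innov @ K.transpose(1, 2)              # analysis ensemble
```

Because every grid cell is a batch entry, one call updates all cells in parallel on the GPU, which is the kind of design that yields the reported speed-up over looping a process-based model cell by cell.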

Leaf area index (LAI) is an important variable for characterizing vegetation structure. Contemporary satellite-based LAI products with moderate spatial resolution, such as those derived from MODIS observations, offer unique opportunities for large-scale monitoring but are insufficient for resolving heterogeneous landscapes. Although detailed LAI maps can be derived from high-resolution satellite observations, the low revisit frequency and the presence of cloud cover disrupt the temporal continuity of these data; as a result, most current LAI inversion models for high-spatial-resolution data operate on a single pixel and date without fully utilizing temporal information. Moreover, LAI estimation models trained solely on satellite products or simulations are impeded by inconsistencies between field and satellite LAI that arise from local atmospheric, soil, and canopy conditions, while in-situ LAI measurements are too sparse for large-scale missions. To address these challenges, this study proposes a new framework based on deep transfer learning, with three key features that contribute to its high performance. Firstly, a Bi-directional Long Short-Term Memory (Bi-LSTM) model is pre-trained on MODIS reflectance and MODIS LAI products to capture the general non-linear relationship between reflectance and LAI and to incorporate temporal dependencies as prior information, reducing the uncertainties associated with ill-posed inversion problems and noise. Secondly, this pre-trained Bi-LSTM is transferred from satellite to field scale by fine-tuning with sparse in-situ LAI measurements, overcoming the issues arising from local inconsistencies. Thirdly, Landsat time-series images reconstructed by fusing MODIS and Landsat reflectance are used as inputs to the Bi-LSTM to generate high-quality Landsat LAI products at both high spatial and high temporal resolution. To validate the proposed approach, field LAI measurements were collected at nine locations across the contiguous U.S. from 2000 to 2018, covering three land cover types: croplands, grasslands, and forests. Quantitative assessments demonstrate that the Bi-LSTM outperforms three benchmarks, a PROSAIL-based Look-Up Table (LUT) method, a random forest-based LAI retrieval, and MODIS LAI (MCD15A3H), exhibiting lower RMSE and higher R2 in most cases. Additionally, the Bi-LSTM predictions show lower random fluctuations than the LUT, random forest, and MODIS LAI estimates, indicating the higher robustness of the proposed framework. The findings highlight the value of transfer learning for estimating vegetation biophysical parameters: pre-training on abundant existing satellite products produces a generalized model, and knowledge transferred from in-situ measurements bridges the gap between satellite and field. By leveraging advanced transfer learning techniques and multi-source, multi-scale data, the proposed framework enables the production of long-term LAI maps at fine resolution, facilitating downstream applications in regions characterized by high spatial heterogeneity.
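To make the pre-train/fine-tune recipe concrete, the sketch below shows the two stages in PyTorch. The band count, hidden size, learning rates, and masked-loss scheme are assumptions for illustration; the paper's exact architecture and schedule may differ.

```python
import torch
import torch.nn as nn

class BiLSTMLAI(nn.Module):
    """Maps a reflectance time series to an LAI time series."""
    def __init__(self, n_bands=7, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                  # x: (batch, time, bands)
        h, _ = self.lstm(x)                # (batch, time, 2 * hidden)
        return self.head(h).squeeze(-1)    # (batch, time) LAI

model = BiLSTMLAI()

# Stage 1: pre-train on (MODIS reflectance, MODIS LAI) pairs with MSE loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 2: transfer from satellite to field scale by fine-tuning at a
# smaller learning rate on sparse in-situ LAI; the mask selects dates
# with field measurements so unobserved time steps carry no loss.
def masked_mse(pred, target, mask):
    return ((pred - target)[mask] ** 2).mean()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```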

Context or problem

In crop growth data assimilation systems, the mismatch between simulated and observed phenology significantly deteriorates the performance of crop growth modeling. This situation can be more severe for fields managed by smallholder farmers, where phenological heterogeneity is high even when climate conditions are relatively uniform. Previous studies investigated non-sequential methods to retrospectively assimilate historical phenology observations. However, approaches to dynamically assimilating phenological measurements through sequential data assimilation methods remain unexplored.

Objective or research question

One of the most intractable challenges of dynamic phenology assimilation is that a considerable proportion of model parameters and variables are entangled with phenology; simply assimilating phenological measurements could therefore disturb the model clock. This study aims to establish a robust crop data assimilation framework capable of assimilating phenological measurements in real time without disturbing the model clock.

Methods

The framework used an open-source version of the AquaCrop model to simulate crop growth and the ensemble Kalman filter (EnKF) to assimilate observations sequentially. A parameter refresh method was proposed to restore the phenological consistency of model parameters after updating the phenology state. Assimilation strategies with different observation types and compositions of the state vector were designed after a global sensitivity analysis of model parameters. These strategies were evaluated through Observing System Simulation Experiments (OSSEs), and the selected strategies were tested in a real-world case.
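A minimal sketch of one sequential step is shown below, assuming a scalar phenology state (e.g., accumulated growing degree days) and a caller-supplied refresh rule; the actual refresh logic is AquaCrop-specific and not reproduced here.

```python
import numpy as np

def assimilate_phenology(states, params, obs, obs_var, refresh_fn, rng=None):
    """One sequential step: scalar EnKF update of the phenology state,
    then a parameter refresh restoring phenological consistency.

    states     : (N,) phenology state per ensemble member (e.g., GDD)
    params     : list of N parameter dicts (phenology-linked parameters)
    refresh_fn : (updated_state, member_params) -> consistent params;
                 the actual rule is model-specific and assumed here
    """
    rng = rng if rng is not None else np.random.default_rng()
    P = states.var(ddof=1)                       # forecast variance
    K = P / (P + obs_var)                        # scalar Kalman gain
    y = obs + rng.normal(0.0, np.sqrt(obs_var), states.size)
    analysis = states + K * (y - states)         # perturbed-obs update
    new_params = [refresh_fn(x, p) for x, p in zip(analysis, params)]
    return analysis, new_params
```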

Results

The OSSE results show that the phenological mismatch problem greatly affects crop growth simulation and that this mismatch could not be narrowed effectively by assimilating non-phenological observations. Assimilating phenological measurements with the proposed parameter refresh method and assimilation strategies closed this mismatch and outperformed the Restart-EnKF method. In the real-world paddy rice case, assimilating phenology with the proposed strategies significantly improved yield estimation in low-yield plots (less than 4 t/ha) compared with assimilating canopy cover (CC) alone, with an R2 increase from 0.07 to 0.48. Assimilating CC, biomass, and phenology simultaneously produced the best yield estimation for all plots, with R2 = 0.57 and RMSE = 1.00 t/ha.

Conclusions

Assimilating phenology under a consistent model clock significantly improved yield estimation when the phenological heterogeneity of plots was high.

Implications or significance

The results highlight the effectiveness and robustness of the established data assimilation framework for dynamic crop growth simulation, indicating its potential for regional in-season crop modeling and yield forecasting.

Accurate monitoring of crop biophysical parameters such as the plant area index (PAI) is essential for regional crop growth simulation and crop management. Many efforts have been made to estimate PAI with vegetation index (VI)-based methods, such as the widely used single empirical regression method and the piecewise relationship method. However, model structure error is inherent to the single-relationship method, since it neglects the influence of phenology, and the discontinuity of the piecewise method can cause abrupt changes in estimates during stage-transition periods. Here we introduce a generalized and more accurate approach, termed the VI-based phenology adaptation (VPA) method, to estimate the PAI of rice (Oryza sativa L.) from unmanned aerial vehicle (UAV)-based multispectral data. The VPA method automatically adapts the VI-PAI relationship at each observation time by bridging VI, phenology, and PAI in a continuous model. The capability of the VPA method to monitor the aboveground biomass of rice was also evaluated. Intensive aerial and ground experiments were conducted in controlled experimental plots and randomly selected farmer-managed plots over two consecutive years (one year for calibration and one for testing). The anchor-point trajectory that links the discrete VI-PAI/biomass relationships of the phenological stages was developed and parameterized. The estimates of the single-relationship method were scattered when PAI was high, and this method failed to estimate PAI over the entire growing season. Significant underestimation was observed at the flowering stage (BBCH 64) for the piecewise methods. In contrast, the VPA approach outperformed the other methods in both the controlled plots (R2 of 0.799 and RMSE of 0.536) and the farmer-managed plots (R2 of 0.543 and RMSE of 0.671). Moreover, the VPA method exhibited superior generality for estimating other crop biophysical parameters such as biomass: compared with the piecewise methods, it gave the best estimates of aboveground biomass, capturing 93% of the biomass variation. The results demonstrate the steady performance of the VPA method for estimating various biophysical parameters throughout the entire growing season, indicating the promising potential of this method for cross-year and cross-site crop status monitoring.
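One way to realize such a continuous model is sketched below: the coefficients of a local linear VI-PAI relation are interpolated along an anchor-point trajectory indexed by phenological stage, so estimates change smoothly rather than jumping at stage transitions. The anchor values and the linear form are hypothetical; the paper's parameterization may differ.

```python
import numpy as np

# Hypothetical anchor points: for each phenological stage (BBCH-like
# code), coefficients (a, b) of a local linear relation PAI = a*VI + b.
ANCHORS = {20: (4.0, -0.5), 40: (7.5, -1.2), 64: (9.0, -0.8)}

def vpa_pai(vi, stage):
    """Phenology-adapted PAI estimate: interpolate the coefficients
    along the anchor-point trajectory so the VI-PAI relation changes
    continuously with phenology instead of jumping between pieces."""
    stages = np.array(sorted(ANCHORS))
    a = np.interp(stage, stages, [ANCHORS[s][0] for s in stages])
    b = np.interp(stage, stages, [ANCHORS[s][1] for s in stages])
    return a * vi + b
```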

The accurate prediction and estimation of grain yield for smallholder farms is crucial for agricultural management. In this study, we propose an image-driven data assimilation framework that enables smallholder farmers to estimate crop yield. In the framework, a convolutional neural network is trained with label distribution learning to estimate the probability distribution of rice crop states from images. The mean and variance of the estimated distribution are treated as the observed value and observation variance, which are then assimilated into the ORYZA2000 crop growth model to estimate the final yield. Compared with the accuracy of yield estimation by assimilating field sampling observations (R2 = 0.410, RMSE = 1.059 t ha−1), the proposed method significantly improved accuracy, with R2 = 0.646 and RMSE = 0.679 t ha−1. The results demonstrate the feasibility of driving a crop growth model with easily available camera images. Additionally, the probability distribution estimation makes it possible to quantify the error of each observation, providing a new perspective on quantifying observation error and thereby indicating a potential research direction for data assimilation. We then used numerous simulation cases to determine the most valuable observation combination for final grain yield estimation, the yield prediction ability at different development stages, and how model performance is affected by the number of images. Finally, we discussed a series of factors that may affect the applicability of the framework. This study establishes an intelligent data assimilation framework for smallholder farmers that can absorb crop phenotyping data from digital cameras at low cost and with simple implementation.
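The step that turns a label-distribution output into an assimilable observation can be sketched as below, assuming the crop state is discretized into bins and the network outputs logits over those bins (the bin layout is an assumption):

```python
import torch
import torch.nn.functional as F

# Hypothetical discretization of the crop state (e.g., 51 canopy bins).
BINS = torch.linspace(0.0, 1.0, 51)

def observation_from_logits(logits):
    """Convert a label-distribution output into an assimilable
    observation: the distribution mean is the observed value and the
    distribution variance is the per-observation error estimate."""
    p = F.softmax(logits, dim=-1)                        # (..., 51)
    mean = (p * BINS).sum(-1)                            # (...,)
    var = (p * (BINS - mean.unsqueeze(-1)) ** 2).sum(-1)
    return mean, var
```

A distribution that is sharply peaked yields a small variance (a trusted observation), while an ambiguous image yields a broad distribution and a large variance, which is what lets the filter down-weight it.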

Smallholder farmers play an important role in the global food supply. As smartphones become increasingly pervasive, they enable smallholder farmers to collect images at very low cost. In this study, an efficient deep convolutional neural network (DCNN) architecture is proposed to detect the development stage (DVS) of paddy rice from photographs taken with a handheld camera. The DCNN model was trained with different strategies and compared against the traditional time-series green chromatic coordinate (Gcc) method and a support vector machine with manually extracted features (MF-SVM). Furthermore, images taken at different view angles, model training strategies, and interpretations of the DCNN predictions were investigated. The best results were obtained by the DCNN model trained with the proposed two-step fine-tuning strategy, with a high overall accuracy of 0.913 and a low mean absolute error of 0.090. The results indicate that images taken at large view angles contain more valuable information and that model performance can be further improved by using images taken at multiple angles. The two-step fine-tuning strategy greatly improved the model's robustness to the randomness of view angle. The interpretation results demonstrate that phenology-related features can be extracted from the images. This study provides a phenology detection approach that utilizes handheld camera images in real time, along with important insights into the use of deep learning in real-world scenarios.
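A generic two-step fine-tuning recipe of the kind described can be sketched in PyTorch with an ImageNet-pretrained ResNet-18 as a stand-in backbone; the paper's actual architecture, class count, and schedule are not reproduced here.

```python
import torch
import torchvision

# Stand-in backbone: pretrained ResNet-18 with a new classification head
# for a hypothetical nine principal development stages.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 9)

# Step 1: freeze the pretrained backbone and train only the new head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ... train a few epochs on the rice photographs ...

# Step 2: unfreeze everything and fine-tune at a small learning rate.
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... continue training ...
```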

Near real-time crop phenology detection is essential for crop management, harvest time estimation, and yield estimation. Previous approaches to crop phenology detection have relied on time-series (multi-temporal) vegetation index (VI) data and have included threshold-based, phenometrics-based, and shape-model-fitting (SMF) methods. However, the performance of these methods depends on the duration and temporal resolution of the time-series data. In this study, we propose a new approach that identifies the principal growth stages of rice (Oryza sativa L.) directly from RGB images. Only mono-temporal unmanned aerial vehicle (UAV) imagery is required for large-area phenology detection with the trained network. An efficient convolutional neural network (CNN) architecture was designed to estimate rice phenology, incorporating spatial pyramid pooling (SPP), transfer learning, and an auxiliary branch with external data. A total of 82 plots across a 160-hectare rice cultivation area of Southern China were selected to evaluate the proposed network. CNN predictions were ground-truthed with rice phenology measurements taken from each plot throughout the growing season. Aerial data were collected by a fixed-wing UAV equipped with multispectral and RGB cameras. The performance of traditional SMF methods deteriorated when the time-series VI data were of short duration. In contrast, the phenological stages estimated by the proposed network showed good agreement with ground observations, with a top-1 accuracy of 83.9% and a mean absolute error (MAE) of 0.18. The spatial distribution of harvest dates for 627 plots in the study area was computed from the phenological stage estimates and matched the observed harvest dates well. The results demonstrate the excellent performance of the proposed deep learning approach in near real-time phenology detection and harvest time estimation.
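For reference, a spatial pyramid pooling layer of the kind the network incorporates can be written in a few lines of PyTorch; the pooling levels (1, 2, 4) are an assumed configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Spatial pyramid pooling: pool the feature map at several grid
    sizes and concatenate, yielding a fixed-length vector regardless
    of input image size."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                           # x: (B, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1)
                  for l in self.levels]
        return torch.cat(pooled, dim=1)             # (B, C*(1+4+16))
```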

Forecasting rice grain yield prior to harvest is essential for crop management, food security evaluation, food trade, and policy-making. Many successful applications have been made in crop yield estimation using remotely sensed products, such as vegetation indices (VIs) from multispectral imagery. However, VI-based approaches are only suitable for estimating rice grain yield at the middle growth stages and have limited capability at the ripening stage. In this study, an efficient convolutional neural network (CNN) architecture is proposed to learn yield-related features from low-altitude remotely sensed imagery. In a major rice-growing region of Southern China, a 160-hectare site with over 800 management units was chosen to investigate the ability of CNNs to estimate rice grain yield. RGB and multispectral image datasets were obtained by a fixed-wing unmanned aerial vehicle (UAV) mounted with a digital camera and multispectral sensors. The network was trained on different datasets and compared against the traditional vegetation index-based method, and the temporal and spatial generality of the trained network was investigated. The results show that CNNs trained on the RGB and multispectral datasets perform much better than the VI-based regression model for rice grain yield estimation at the ripening stage. The very-high-resolution RGB imagery contains important spatial features with respect to the grain yield distribution that can be learned by a deep CNN. The results highlight the promising potential of deep convolutional neural networks for rice grain yield estimation, with excellent spatial and temporal generality and a wider time window for yield forecasting.

Crop growth models and vegetation index (VI)-based methods have been commonly used to estimate rice grain yield. However, the complicated model calibration procedure and the narrow time window limit the application of these two methods, respectively. Convolutional neural networks (CNNs) perform better than VI-based approaches for yield estimation at the ripening stage, but their generalization still needs to be improved. The objective of this study is to improve the generality of CNNs for estimating plot-scale rice grain yield from high-resolution UAV-based RGB images. A new deep learning architecture with deep feature decomposition is proposed. The results show that the proposed network is more robust than the network without deep feature decomposition when the phenological stage of the test set differs from that of the training set. This indicates that time-invariant features that relate only to rice yield can be decomposed by the proposed network, and demonstrates the stable performance of the proposed CNN over a wider time window for rice grain yield forecasting.
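The decomposition idea can be sketched as splitting the encoder's feature vector into a yield branch and a phenology branch, so the yield head sees only features intended to be stage-invariant. Everything below (the split point, head sizes, the auxiliary stage head) is an assumption for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DecomposedYieldNet(nn.Module):
    """Split the encoder's features into a yield branch and a
    phenology branch; the yield head sees only the (intended)
    time-invariant half of the feature vector."""
    def __init__(self, encoder, feat_dim=512, split=256, n_stages=9):
        super().__init__()
        self.encoder = encoder            # any image -> (B, feat_dim) map
        self.split = split
        self.yield_head = nn.Linear(split, 1)
        self.stage_head = nn.Linear(feat_dim - split, n_stages)

    def forward(self, x):
        f = self.encoder(x)
        f_inv, f_var = f[:, :self.split], f[:, self.split:]
        return self.yield_head(f_inv), self.stage_head(f_var)
```

Training the stage head on the time-varying half is what pushes the yield half toward time invariance; the paper's actual losses may differ.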

The red-green-blue (RGB) digital camera on an unmanned aerial vehicle (UAV), with its relatively low cost and near real-time image acquisition, provides a remote sensing platform that is an ideal tool for crop monitoring in precision agriculture. Successful applications have been made in biomass and yield estimation, but retrieval of leaf area index (LAI) using plant height information extracted from crop surface models (CSMs) has received very limited attention. The objective of this study was therefore to demonstrate the feasibility of estimating LAI with CSM-based plant height. The study was conducted in warm and wet Southern China, where sugarcane is widely planted. We acquired RGB imagery of sugarcane over the whole growing season (8 flights) with this platform. Because of the rugged terrain of the experimental area, 42 ground control points (GCPs) were evenly distributed across the field. The CSMs were built from the GCP data and the very-high-resolution UAV RGB images using the structure-from-motion (SfM) algorithm, and plant height derived from the CSMs was then used to estimate sugarcane LAI. The estimated LAI values were validated with ground measurements collected simultaneously with image acquisition. To assess the accuracy of plant height extracted from CSMs without GCP geo-referencing, we also constructed a ground elevation model by inverse distance weighted (IDW) interpolation to obtain plant height. In addition, we applied six visible-band vegetation indices computed from the RGB images, the green-red vegetation index (GRVI), normalized redness intensity (NRI), normalized greenness intensity (NGI), green leaf index (GLI), atmospherically resistant vegetation index (ARVI), and modified green-red vegetation index (MGRVI), to predict LAI, and compared their prediction models with the plant height model. Plant heights predicted from the GCP geo-referenced CSMs matched the observations in the validation set well, with R2 = 0.9612 and root mean square error (RMSE) = 0.2152 at the 0.01 significance level, demonstrating that UAV-based CSMs geo-referenced by GCPs are effective for monitoring sugarcane canopy characteristics over rugged terrain. Among the selected visible-band vegetation indices, GRVI agreed best with LAI prior to the late elongation stage, with R2 = 0.7790, RMSE = 0.5561, and mean relative error (MRE) = 0.1680 in the validation set. The plant height models performed better than the visible-band VIs over the same period, and the best LAI estimate was obtained from CSM-based plant height (R2 = 0.9044, RMSE = 0.3662, MRE = 0.1243). Because leaves began to wither after the late elongation stage, all models in this study performed relatively poorly when estimating LAI over the whole growing season. NRI performed best for whole-season LAI estimation (R2 = 0.6684, RMSE = 0.6360, MRE = 0.1875), though worse than before the late elongation stage; visible-band VIs and plant height were therefore unsuitable for LAI estimation after the late elongation stage. Furthermore, all the visible-band VIs in this study were affected to varying degrees by saturation at high LAI levels.
In contrast, the CSM-based plant height model showed a linear trend without saturation at high LAI and proved to be the best predictor before the late elongation stage. Because the key growing period spans from the seedling stage to the late elongation stage, and the plant height models overcame the saturation limits of the visible-band VIs, plant height was the better choice for estimating LAI. The results indicate that using CSM-based plant height to retrieve sugarcane LAI during the important growth period is feasible. Moreover, given the excellent fit of CSM-based plant height to the ground observations, this technique is a powerful tool for obtaining crop canopy features accurately and rapidly and provides a new approach to crop condition monitoring over large areas.
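The core computations behind this pipeline are simple band and raster arithmetic: per-pixel plant height is the difference between the SfM-derived surface model and the bare-ground elevation model, and visible-band indices such as GRVI come directly from the RGB bands. A hedged NumPy sketch (the final regression form is illustrative):

```python
import numpy as np

def plant_height(dsm, dtm):
    """Per-pixel plant height: crop surface model (from SfM) minus
    the bare-ground elevation model; negatives are clipped to zero."""
    return np.clip(dsm - dtm, 0.0, None)

def grvi(green, red):
    """Green-red vegetation index from the RGB bands."""
    return (green - red) / (green + red + 1e-9)

# LAI is then regressed on plot-mean plant height, e.g. a linear fit
# lai = a * h + b calibrated against ground LAI measurements
# (an illustrative form; the paper's regression model may differ).
```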