Prediction of maize yield in Uganda using CNN-LSTM architecture on a multimodal climate and remote sensing dataset

dc.contributor.authorTaremwa, Danison
dc.contributor.authorAhishakiye, Emmanuel
dc.contributor.authorObbo, Aggrey
dc.contributor.authorKisozi, Paul Kategaya
dc.contributor.authorKaggwa, Fred
dc.date.accessioned2026-04-27T10:51:25Z
dc.date.issued2026-01-29
dc.description.abstractMaize is a staple crop in Uganda, underpinning both food security and rural livelihoods. Accurate forecasting of maize yields is therefore crucial for guiding agricultural planning, resource allocation, and policy design. Yet traditional statistical methods are often limited by low accuracy, poor scalability, and weak integration of diverse inputs, leaving them unable to capture the complex, nonlinear, spatiotemporal dynamics of crop growth. To overcome these constraints, we developed a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model. This model integrates remotely sensed climatic variables and vegetation indices with biannual maize yield records from Uganda's Zonal Agricultural Research and Development Institute (ZARDI) zones for the period 2018–2020. Due to the scarcity of high-quality yield data, we applied the Synthetic Minority Oversampling Technique for Regression (SMOGN) alongside feature selection to balance the dataset and improve predictive robustness. With feature selection and extensive hyperparameter tuning, the CNN-LSTM model outperformed the baseline models, achieving a Mean Squared Error (MSE) of 0.107 tonnes², a Mean Absolute Error (MAE) of 0.267 tonnes, a Root Mean Squared Error (RMSE) of 0.327 tonnes, and an R² score of 0.783. A comparative analysis revealed that the CNN + Random Forest (RF) model achieved an MSE of 0.137 tonnes², an MAE of 0.281 tonnes, an RMSE of 0.370 tonnes, and an R² score of 0.722. Both hybrids outperformed the standalone CNN (MSE = 0.216, R² = 0.562) and RF (MSE = 0.211, R² = 0.573) models, underscoring the advantage of combining spatial and temporal learning for improved predictive accuracy. Residual analysis further confirmed the model's stability, showing minimal bias and close agreement between observed and predicted yields.
These findings highlight the potential for integrating spatial–temporal deep learning and ensemble methods to deliver accurate crop yield forecasts in data-limited smallholder systems. By offering a scalable framework for evidence-based farm planning and food security policy, our study demonstrated that advanced machine learning can directly support sustainable development in sub-Saharan Africa. Future research will extend the framework to incorporate Transformer architectures, high-resolution satellite imagery, and explainable AI, further enhancing accuracy, interpretability, and decision-support capacity.
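The abstract reports MSE, MAE, RMSE, and R² for each model. A minimal numpy sketch of how these four metrics are computed from observed and predicted yields is given below; the function name and the toy yield values are illustrative assumptions, not the paper's code or data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MSE, MAE, RMSE, and R^2 for yield predictions (tonnes)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    mse = np.mean(residuals ** 2)      # squared error, units: tonnes^2
    mae = np.mean(np.abs(residuals))   # absolute error, units: tonnes
    rmse = np.sqrt(mse)                # root of MSE, units: tonnes
    ss_res = np.sum(residuals ** 2)                 # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot         # fraction of variance explained
    return mse, mae, rmse, r2

# Toy yields (tonnes), invented for illustration only.
observed  = [2.1, 3.4, 1.8, 2.9, 3.1]
predicted = [2.0, 3.6, 1.7, 3.0, 3.0]
mse, mae, rmse, r2 = regression_metrics(observed, predicted)
```

Note the units: because MSE squares the residuals it is reported in tonnes², while MAE and RMSE remain in tonnes, matching the units quoted in the abstract.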
dc.description.sponsorshipNo organization, institution, or research centre funded this study.
dc.identifier.citationTaremwa, D., Ahishakiye, E., Obbo, A., Kisozi, P. K., & Kaggwa, F. (2026). Prediction of maize yield in Uganda using CNN-LSTM architecture on a multimodal climate and remote sensing dataset. Discover Artificial Intelligence.
dc.identifier.urihttps://ir.must.ac.ug/handle/123456789/4351
dc.language.isoen_US
dc.publisherSpringer
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subjectMaize yield prediction
dc.subjectEnsemble learning
dc.subjectPrecision agriculture
dc.subjectCNN-LSTM
dc.subjectVegetation indices
dc.titlePrediction of maize yield in Uganda using CNN-LSTM architecture on a multimodal climate and remote sensing dataset
dc.typeArticle

Files

Original bundle

Name:
Prediction of maize yield in Uganda using CNNLSTM architecture on a multimodal climate.pdf
Size:
2.19 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
1.71 KB
Description:
Item-specific license agreed to upon submission