Isaac Mugume (Mr.), Graduate Student or Postgraduate
Timeline
Isaac Mugume published an article in August 2018.
Top co-authors
Joachim Reuder

82 shared publications

Geophysical Institute, University of Bergen, and Bjerknes Centre for Climate Research, Bergen, Norway

Michel D. S. Mesquita

34 shared publications

Future Solutions, Håvikbrekka 92, 5440 Mosterhamn, Norway

Bob Alex Ogwang

10 shared publications

Uganda National Meteorological Authority, P. O. Box 7025, Kampala, Uganda

Didier Ntwali

7 shared publications

Institute of Atmospheric Physics, Laboratory for Middle Atmosphere and Global Environmental Observation, University of Chinese Academy of Sciences, Beijing 100029, China

Yazidhi Bamutaze

5 shared publications

Department of Geography, Geoinformatics and Climatic Sciences, Makerere University, P. O. Box 7062, Kampala, Uganda

Publications: 5
Reads: 3
Downloads: 0
Citations: 8
Publication Record
[Chart: distribution of articles published per year, 2016–2018]
Total number of journals published in: 4

Publications
Article · 0 Reads · 0 Citations
Improving Quantitative Rainfall Prediction Using Ensemble Analogues in the Tropics: Case Study of Uganda
Isaac Mugume, Michel D. S. Mesquita, Yazidhi Bamutaze, Didie...
Published: 22 August 2018
Atmosphere, doi: 10.3390/atmos9090328
Abstract:
Accurate and timely rainfall prediction enhances productivity and can aid proper planning in sectors such as agriculture, health, transport and water resources. However, quantitative rainfall prediction is normally a challenge, and for this reason this study was conducted with the aim of improving rainfall prediction using ensemble methods. It first assessed the performance of six convective schemes (Kain–Fritsch (KF); Betts–Miller–Janjić (BMJ); Grell–Freitas (GF); Grell 3D ensemble (G3); New Tiedtke (NT); and Grell–Devenyi (GD)) using the root mean square error (RMSE) and mean error (ME), focusing on the March–May 2013 rainfall period over Uganda. Eighteen ensemble members were then generated from the three best performing convective schemes (i.e., KF, GF and G3). The daily rainfall predicted by the three ensemble methods (i.e., ensemble mean (ENS), ensemble mean analogue (EMA) and multi-member analogue ensemble (MAEM)) was then compared with the observed daily rainfall, and the RMSE and ME were computed. The results show that the ENS presented a smaller RMSE than the individual schemes (ENS: 10.02; KF: 23.96; BMJ: 26.04; GF: 25.85; G3: 24.07; NT: 29.13; GD: 26.27) and a better bias (ENS: −1.28; KF: −1.62; BMJ: −4.04; GF: −3.90; G3: −3.62; NT: −5.41; GD: −4.07). The EMA and MAEM presented a smaller RMSE than the ENS at 13 of 21 and 17 of 21 stations, respectively, demonstrating additional improvement in predictive performance. This study proposed and described the MAEM and found that it produced better quantitative rainfall prediction than the other ensemble methods used. The MAEM method should be valid regardless of the nature of the rainfall season.
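The evaluation above hinges on comparing forecasts against observations with the RMSE and ME. Below is a minimal sketch of that comparison for a plain ensemble mean, with synthetic rainfall data standing in for the study's 18 members and station observations; the EMA and MAEM analogue steps are not reproduced here, since the abstract does not give their details.

```python
# Sketch: RMSE and ME for an ensemble-mean rainfall forecast.
# All data below is illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 5.0, size=90)               # observed daily rainfall (mm)
members = obs + rng.normal(0, 8, size=(18, 90))  # 18 noisy member forecasts

ens_mean = members.mean(axis=0)                  # ENS: plain ensemble mean

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def mean_error(pred, obs):                       # ME: sign gives direction of bias
    return np.mean(pred - obs)

print(f"ENS RMSE={rmse(ens_mean, obs):.2f}  ME={mean_error(ens_mean, obs):.2f}")
for i in range(3):                               # a few individual members for contrast
    print(f"member {i} RMSE={rmse(members[i], obs):.2f}  ME={mean_error(members[i], obs):.2f}")
```

Averaging the members damps the uncorrelated part of each member's error, which is why the ensemble mean typically shows a smaller RMSE than any single scheme, as the abstract reports.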
Preprint · 1 Read · 0 Citations
Improving Quantitative Rainfall Prediction Using Ensemble Analogues in the Tropics: Case study of Uganda
Isaac Mugume, Michel D. S. Mesquita, Yazidhi Bamutaze, Didie...
Published: 31 October 2017
EARTH SCIENCES, doi: 10.20944/preprints201710.0199.v1
Abstract:
Accurate and timely rainfall prediction enhances productivity and can aid proper planning in sectors such as agriculture, health, transport and water resources. This study is aimed at improving rainfall prediction using ensemble methods. It first assesses the performance of six convective schemes (Kain–Fritsch (KF); Betts–Miller–Janjić (BMJ); Grell–Freitas (GF); Grell 3D ensemble (G3); New Tiedtke (NT); and Grell–Devenyi (GD)) using the root mean square error (RMSE) and mean error (ME), focusing on the March–May 2013 rainfall period over Uganda. Eighteen ensemble members are generated from the three best performing convective schemes (i.e., KF, GF and G3). The performance of three ensemble methods (i.e., ensemble mean (EM), ensemble mean analogue (EMA) and multi-member analogue ensemble (MAEM)) is also analyzed using the RMSE and ME. The EM presented a smaller RMSE than the individual schemes (EM: 10.02; KF: 23.96; BMJ: 26.04; GF: 25.85; G3: 24.07; NT: 29.13; GD: 26.27) and a better bias (EM: -1.28; KF: -1.62; BMJ: -4.04; GF: -3.90; G3: -3.62; NT: -5.41; GD: -4.07). The EMA and MAEM presented a smaller RMSE than the EM at 13 of 21 and 17 of 21 stations, respectively, demonstrating additional improvement in predictive performance. The MAEM is a new approach proposed and described in this study.
Article · 1 Read · 2 Citations
Patterns of Dekadal Rainfall Variation Over a Selected Region in Lake Victoria Basin, Uganda
Isaac Mugume, Michel D. S. Mesquita, Charles Basalirwa, Yazi...
Published: 22 November 2016
Atmosphere, doi: 10.3390/atmos7110150
Abstract:
Understanding variations in rainfall in tropical regions is important due to its impacts on water resources, health and agriculture. This study assessed dekadal rainfall patterns and rain days to determine intra-seasonal rainfall variability during the March–May season using the Mann–Kendall (MK) trend test and simple linear regression (SLR) over the period 2000–2015. Results showed an increasing trend in both dekadal rainfall amount and rain days (third and seventh dekads). The light rain days (SLR = 0.181; MK = 0.350) and wet days (SLR = 0.092; MK = 0.118) also depict an increasing trend. The rate of increase of light rain days and wet days during the third dekad (light rain days: SLR = 0.020; MK = 0.279; wet days: SLR = 0.146; MK = 0.376) was slightly greater than during the seventh dekad (light rain days: SLR = 0.014; MK = 0.018; wet days: SLR = 0.061; MK = 0.315). Spells of 2–4 consecutive dry days accounted for seventy-four percent of dry spells, but no significant trend was detected. Extreme rainfall was increasing over the third (MK = 0.363) and seventh (MK = 0.429) dekads. The rainfall amount and rain days were highly correlated (r: 0.43–0.72).
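Both trend measures quoted above are standard. The sketch below shows how such a dekadal rain-day series might be tested, using Kendall's tau as the Mann–Kendall-type rank statistic and an ordinary least-squares slope for the SLR; the series is illustrative, and the abstract does not state which normalization of the MK statistic the study reports.

```python
# Sketch: trend testing a yearly rain-day series with Kendall's tau and SLR.
import numpy as np
from scipy import stats

years = np.arange(2000, 2016)                    # study period 2000-2015
rng = np.random.default_rng(0)
rain_days = 5 + 0.15 * (years - 2000) + rng.normal(0, 1.0, years.size)

tau, p_mk = stats.kendalltau(years, rain_days)   # rank-based (Mann-Kendall-type) trend
lr = stats.linregress(years, rain_days)          # SLR slope in rain days per year

print(f"Kendall tau = {tau:.3f} (p = {p_mk:.3f})")
print(f"SLR slope  = {lr.slope:.3f} days/yr (p = {lr.pvalue:.3f})")
```

The rank-based test is robust to outliers and non-normal series, while the SLR slope gives the magnitude of change, which is why studies like this one report both.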
Article · 1 Read · 7 Citations
Projected Crop Production under Regional Climate Change Using Scenario Data and Modeling: Sensitivity to Chosen Sowing D...
Sulin Tao, Shuanghe Shen, Yuhong Li, Qi Wang, Ping Gao, Isaa...
Published: 27 February 2016
Sustainability, doi: 10.3390/su8030214
Abstract:
A sensitivity analysis of the responses of crops to the chosen production adaptation options under regional climate change was conducted in this study. Projections of winter wheat production for different sowing dates and cultivars were estimated for a major economic and agricultural province of China from 2021 to 2080 using the World Food Study model (WOFOST) under representative concentration pathway (RCP) scenarios. A modeling chain was established, and a correction method was proposed to reduce the bias of the model-simulated climate data. The results indicated that adjusting the sowing dates and cultivars could mitigate the influences of climate change on winter wheat production in Jiangsu. Yield gains were projected for the chosen sowing date and cultivar. The following actions are recommended to ensure high and stable yields under future climate change: (i) advance the latest sowing date in some areas of northern Jiangsu; and (ii) use heat-tolerant, or heat-tolerant and drought-resistant, varieties in most areas of Jiangsu rather than the currently used cultivar. Fewer of the common negative effects of using a single climate model occurred in the sensitivity analysis because our bias correction method was effective for the scenario data and because WOFOST performed well for Jiangsu after calibration.
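The abstract does not spell out the proposed bias correction, so the sketch below shows only a generic mean-bias (delta) correction of the kind often applied to scenario temperature data before it drives a crop model; it is not the paper's method, and all arrays are illustrative.

```python
# Sketch: generic mean-bias (delta) correction of scenario temperature data.
# Illustrative stand-in for the paper's (unspecified) correction method.
import numpy as np

rng = np.random.default_rng(1)
obs_hist = 15 + rng.normal(0, 3, 365)              # observed historical daily temperature (deg C)
mod_hist = obs_hist + 1.8 + rng.normal(0, 1, 365)  # hindcast running ~1.8 deg C warm
mod_fut = mod_hist + 2.0                           # raw scenario output with a warming signal

bias = mod_hist.mean() - obs_hist.mean()           # systematic offset in the hindcast
mod_fut_corrected = mod_fut - bias                 # shift the future series by that offset

print(f"estimated bias = {bias:.2f} deg C")
print(f"corrected future mean = {mod_fut_corrected.mean():.2f} deg C")
```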
Article · 0 Reads · 1 Citation
Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model
Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Joachim Reud...
Published: 01 January 2016
Modelling and Simulation in Engineering, doi: 10.1155/2016/7530759
Abstract:
Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. This study compared January 2015 temperature data for Uganda, simulated using the Weather Research and Forecasting model, with observed station temperature data, analyzing the bias using parametric methods (the root mean square error (RMSE), the mean absolute error (MAE), the mean error (ME), skewness, and the best easy systematic estimate (BES)) and a nonparametric method (the sign test, STM). The RMSE normally overestimates the error compared to the MAE. The RMSE and MAE are not sensitive to the direction of bias. The ME gives both the direction and magnitude of bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values, but it does not give the magnitude of bias. Graphical tools (such as time series and cumulative curves) show the performance of the model over time. It is recommended to integrate parametric and nonparametric methods, along with graphical methods, for a comprehensive analysis of the bias of a numerical model.

1. Introduction

Models are used in many fields, such as engineering, agriculture, health, business, and weather and climate, for simulation and prediction. They help in understanding the different subprocesses underlying a given process and have undergone tremendous improvements due to developments in computing technology. These models range from simple (e.g., linear regression models) to complex (e.g., weather and climate prediction models); Glahn and Lowry [1] categorized models as dynamical and statistical. A combination of dynamical and statistical models is also used in operational forecasting, especially when statistical techniques are used to correct output from a dynamical model.

National meteorological services usually operate high-resolution numerical weather prediction models so as to give accurate guidance to users of weather information [2]. The accuracy of a given model is the measure of how close the model-predicted fields are to independently observed atmospheric fields [3, 4], but it can be affected by errors in initial conditions, imperfections in the model, and inappropriate parameterizations. When a model agrees with observations, confidence in using the model is higher [5], but present agreement does not necessarily guarantee skill for future model predictions.

The main advantage of models is their objectivity [1]. However, model output contains systematic errors due to bias [6], which occurs due to differences in model response to external forcing [7], such as errors in initial conditions. This bias can manifest as overprediction or underprediction and is defined by the World Meteorological Organization as the mean difference between forecast values and mean actual observations [8], while Haerter et al. [9] define bias as the time-independent component of error in model output.

A couple of methods have been proposed to correct for the bias. Maraun [10] used the quantile-quantile method and found that uncorrected regional climate models underestimated precipitation and produced many drizzle cases. Durai and Bhradwaj [11] investigated four statistical bias correction methods (namely, the best easy systematic method, lagged linear regression, nearest neighbor, and running mean removal) and noted that the running mean and nearest neighbor methods improved the forecast skill. These methods attempt to reduce the bias in the next forecast using information from the bias of the previous forecast [12]; however, they influence the model output if prediction is based on bias-corrected data [8], and they cannot correct improper representation of the processes producing the model output [9].

Many studies have employed parametric methods such as the RMSE [13-15], MAE [14, 15], ME [16], and relative error [13, 16] to analyze the bias of numerical models, but have put less emphasis on graphical tools as well as nonparametric methods. In the present study, we investigate the performance of bias analysis methods on actual January 2015 temperature data and temperature data simulated using the Weather Research and Forecasting (WRF) model (Tables 1 and 2). The rest of the paper is organized as follows: Section 2 describes the data sources, Section 3 presents an overview of the methods of bias analysis, Section 4 presents results and discussion, and Section 5 gives the summary and conclusion.

Table 1: Statistical bias measures of actual and model simulation for maximum temperatures.
Table 2: Statistical bias measures of actual and model simulation for minimum temperatures.

2. Data

We simulate January 2015 temperature using WRF model version 3.7 [17], with the following parameterization schemes: the WRF single-moment 6-class microphysics scheme, the Kain-Fritsch cumulus parameterization, the Asymmetric Convective Model option for the planetary boundary layer, the Rapid Radiative Transfer Model for longwave radiation, and the Dudhia scheme for shortwave radiation. These data are compared with observed January 2015 temperature (maximum and minimum) data obtained from the Uganda National Meteorological Authority (UNMA). We use six stations, namely Arua (arua), Entebbe (ebb), Kasese (ksse), Jinja (jinja), Mbarara (mbra), and Gulu (gulu). For a given day and station, the maximum simulated temperature is compared with the maximum observed temperature, and the minimum simulated temperature is compared with the minimum observed temperature.

3. Methods of Bias Analysis

In order to comprehensively investigate the performance of numerical models, it is important to evaluate them on many metrics rather than a single method [5]. In this section, we present popular methods for analyzing the bias of numerical models. The parametric methods are presented in Sections 3.1-3.6, while the nonparametric method considered is described in Section 3.7.

3.1. The Difference Measures

Willmott et al. [3] suggested a difference variable $d$, given by the difference between the model-predicted value $P$ and the observed value $O$, that is, $d = P - O$. This is appropriate for point measurements. It is this measure that gives rise to other measures like the root mean square error (RMSE), the bias or mean error (ME), and the mean absolute error (MAE). For a model $M$ with time-ordered data set $\{P_1, P_2, \ldots, P_n\}$, we define the difference as

$d_i = P_i - O_i,$

where $P_i$ is the $i$th data point and $O_i$ is the corresponding $i$th observed value from the time-ordered actual observed data set $\{O_1, O_2, \ldots, O_n\}$. A positive (negative) value indicates that the model output is higher (lower) than the actual value.

3.2. The RMSE

The RMSE is the square root of the average squared difference ($d_i^2$) and is a popular statistical measure of the performance of a numerical model in atmospheric research [15]. For a model $M$, the RMSE is defined as

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} d_i^2}.$

The RMSE is a good criterion for classifying the accuracy of a model, and a low index indicates higher accuracy.

3.3. The MAE

The MAE is the average of the magnitudes of the differences ($d_i$ taken as positive) and is also a popular index for estimating bias in atmospheric studies. For a model $M$, the MAE is defined as

$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert d_i \rvert,$

and, just like the RMSE, a low index indicates higher accuracy.

3.4. The Bias

The bias, also known as the mean error (ME), is obtained by averaging the differences ($d_i$) over the number of cases. For a given model output, the ME is calculated from

$\mathrm{ME} = \frac{1}{n} \sum_{i=1}^{n} d_i.$

The magnitude of the ME is equal to the MAE if all the predicted values of the model are higher (or lower) than the actual values. A value of bias close to zero indicates that model values are in fair agreement with actual values, with zero implying no bias. The relative bias is another bias measure, suggested by Christakis et al. [16], in which the ME is divided by the average of the observations:

$\mathrm{ME}_{\mathrm{rel}} = \frac{\mathrm{ME}}{\bar{O}}, \qquad \bar{O} = \frac{1}{n} \sum_{i=1}^{n} O_i.$

Both forms of the bias give the direction and probable magnitude of the error.

3.5. The Skewness Coefficient

The skewness coefficient is a moment measure based on symmetry [18]. Having obtained the differences ($d_i$) between the model and actual values, positive (or negative) skewness indicates that model outputs are largely lower (or higher) than actual observations. The skewness coefficient is defined as

$\gamma = \frac{\tfrac{1}{n} \sum_{i=1}^{n} (d_i - \bar{d})^3}{s^3},$

with $s$ the standard deviation of the sample biases forming the distribution, calculated as

$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (d_i - \bar{d})^2}.$

3.6. The Best Easy Systematic Method

The best easy systematic (BES) method considers location measures (especially quartiles) and is given by Durai and Bhradwaj [11] as

$\mathrm{BES} = \frac{Q_1 + 2Q_2 + Q_3}{4},$

where $Q_1$, $Q_2$, and $Q_3$ are the sample lower quartile, median, and upper quartile, respectively, of the differences $d_i$; it is commended by Woodcock and Engel [12] for its robustness in handling extreme values.

3.7. The Sign Test Method

The sign test method (STM) is a nonparametric method based on assigning a score $s_i$ that compares the prediction $P_i$ and the observation $O_i$ at a given point. If the model predicts a higher value than observed ($P_i > O_i$), we assign positive one ($s_i = +1$); if the model prediction equals the observed value ($P_i = O_i$), we assign zero ($s_i = 0$); and if the model predicts a lower value than observed ($P_i < O_i$), we assign negative one ($s_i = -1$); thus

$s_i = \begin{cases} +1 & \text{if } P_i > O_i, \\ 0 & \text{if } P_i = O_i, \\ -1 & \text{if } P_i < O_i. \end{cases}$

For a model forming a distribution of scores $\{s_1, s_2, \ldots, s_n\}$ of size $n$, the mean score is computed as

$\bar{s} = \frac{1}{n} \sum_{i=1}^{n} s_i.$

If the mean score $\bar{s}$ for a given model is positive, the model is generally considered to overpredict; if it is negative, the model underpredicts; otherwise there is no significant bias. We suggest the hypotheses

$H_0\colon \mu_s = 0 \quad \text{against} \quad H_1\colon \mu_s \neq 0,$

and consider $\mu_s = 0$ for an unbiased model (i.e., zero bias). For a distribution of sample size less than 30 ($n < 30$), we propose the use of Student's t-distribution, and we approximate with the normal distribution for large samples ($n \geq 30$). The standard error is computed using

$\mathrm{SE} = \frac{\sigma_s}{\sqrt{n}},$

where $\sigma_s$ is the standard deviation of the scores. The nonparametric statistic for measuring bias is then calculated as

$t = \frac{\bar{s}}{\mathrm{SE}}.$

We can then test this at a given significance level and make statistical inferences.

4. Results and Discussion

In comparing model results with...
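The bias measures defined in Section 3 are straightforward to compute. Below is a minimal sketch, assuming the formulas reconstructed above ($d_i = P_i - O_i$ throughout) and using illustrative temperature data rather than the paper's station records.

```python
# Sketch: the parametric bias measures (Sections 3.2-3.6) and the
# nonparametric sign test (Section 3.7). Data is illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
obs = 28 + rng.normal(0, 2.0, 31)             # e.g., observed daily max temperature
pred = obs + 1.2 + rng.normal(0, 1.5, 31)     # model output with a warm bias

d = pred - obs                                # difference variable d_i (Section 3.1)

rmse = np.sqrt(np.mean(d ** 2))               # Section 3.2
mae = np.mean(np.abs(d))                      # Section 3.3
me = np.mean(d)                               # Section 3.4 (bias / mean error)
rel_bias = me / np.mean(obs)                  # relative bias
skew = stats.skew(d)                          # Section 3.5 skewness coefficient
q1, q2, q3 = np.percentile(d, [25, 50, 75])
bes = (q1 + 2 * q2 + q3) / 4                  # Section 3.6 best easy systematic estimate

s = np.sign(d)                                # Section 3.7 sign-test scores in {-1, 0, +1}
s_mean = s.mean()
se = s.std(ddof=1) / np.sqrt(s.size)          # standard error of the mean score
t_stat = s_mean / se                          # Student's t for n < 30
p = 2 * stats.t.sf(abs(t_stat), df=s.size - 1)

print(f"RMSE={rmse:.2f} MAE={mae:.2f} ME={me:.2f} rel={rel_bias:.3f} "
      f"skew={skew:.2f} BES={bes:.2f}")
print(f"sign test: mean score={s_mean:.2f}, t={t_stat:.2f}, p={p:.3f}")
```

Note how the sign test and the BES report the direction of bias without being pulled around by extreme differences, while the ME carries magnitude but is outlier-sensitive, which is the trade-off the abstract summarizes.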