4 Results
4.1 Evaluation of multiple precipitation products
4.1.1 Evaluation at basin-scale
According to Fig. 5, except for TRMM, the precipitation products show a decreasing trend from south-east to north-west, which is consistent with the results of Hu et al. (2011). Compared with GO, the IMERG and IMERG_T precipitation data are the closest, while the TRMM and CFSR precipitation data are significantly overestimated and the CMADS precipitation data are significantly underestimated. Several studies (Ghodichore et al., 2018; Graham et al., 2019; Saha et al., 2014) have found that reanalysis precipitation products markedly overestimate or underestimate observed precipitation.
To further reflect the difference between the precipitation products and GO, the PBIAS, CC, and RMSE of the precipitation products against GO were calculated on a monthly time-scale. Based on Fig. 4, the PBIAS values of TRMM, IMERG_T, IMERG, CMADS, and CFSR were lower in the warm season and higher in the cold season. TRMM precipitation data were underestimated in January and February, and overestimated at other times, especially from October to December. IMERG_T precipitation data were underestimated in the rainy season (May–November) and overestimated in the dry season (December–April). IMERG precipitation data were underestimated in the dry season (December–April), but IMERG performed best in observing precipitation in the rainy season (average PBIAS = -2.26%). CMADS precipitation data were underestimated in all months except December. CFSR precipitation data were overestimated in all months. Except for IMERG, the CC values of the other precipitation products are also lower in the warm season and higher in the cold season; among them, CFSR has the best correlation with GO (average CC = 0.73), while CMADS, TRMM, IMERG_T, and IMERG perform poorly, with mean CC values of 0.23, 0.01, -0.01, and -0.28, respectively. However, the RMSE values of the five precipitation products show seasonal characteristics related to the greater precipitation in the warm season and the lower precipitation in the cold season in the YRSR (Hu et al., 2011). IMERG precipitation products have the smallest deviation, with an average RMSE of 13.71 mm, followed by CMADS (17.35 mm), CFSR (21.32 mm), IMERG_T (32.42 mm), and TRMM (47.89 mm).
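For reference, a minimal sketch of how these monthly statistics can be computed from paired basin-average series is given below. The series values are placeholders, and the sign convention (positive PBIAS indicating overestimation relative to GO) is an assumption consistent with the usage in this section, not the exact processing applied to the real data.

```python
import numpy as np

def pbias(obs, sim):
    """Percent bias; positive values indicate overestimation relative to GO (assumed convention)."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def cc(obs, sim):
    """Pearson correlation coefficient between a product and GO."""
    return np.corrcoef(obs, sim)[0, 1]

def rmse(obs, sim):
    """Root-mean-square error, in the units of the input series (e.g. mm/month)."""
    return np.sqrt(np.mean((sim - obs) ** 2))

# Illustrative monthly basin-average precipitation (mm); replace with the real GO and product series.
go    = np.array([ 5.,  8., 20., 45., 90., 120., 130., 115., 95., 50., 15.,  6.])
imerg = np.array([ 6.,  9., 18., 43., 88., 118., 127., 112., 93., 48., 17.,  7.])

print(f"PBIAS = {pbias(go, imerg):6.2f} %")
print(f"CC    = {cc(go, imerg):6.2f}")
print(f"RMSE  = {rmse(go, imerg):6.2f} mm")
```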
To reveal whether different precipitation products can capture precipitation events within various precipitation intensity groups, we used the probability density function approach to evaluate the daily precipitation intensity (PI), dividing PI into nine bins (0 ≤ PI < 0.1, 0.1 ≤ PI < 1, 1 ≤ PI < 5, 5 ≤ PI < 10, 10 ≤ PI < 15, 15 ≤ PI < 20, 20 ≤ PI < 30, 30 ≤ PI < 40, and PI ≥ 40 mm/d). Based on Fig. 5, IMERG, IMERG_T, CMADS, and CFSR can correctly capture the precipitation classifications, but TRMM overestimates heavy rainfall of > 10 mm/d. IMERG and CFSR overestimate the intensity of all precipitation events, especially CFSR, which significantly overestimates moderate precipitation events of 1–10 mm/d. The precipitation underestimation by CMADS is mainly concentrated within the range of 1–20 mm/d, while events within the range of 0.1–1 mm/d are overestimated.
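As an illustration of this binning, the sketch below computes the occurrence frequency of daily PI in each of the nine bins; the bin edges follow the list above, while the synthetic gamma-distributed series are placeholders standing in for the real gauge and product data.

```python
import numpy as np

# Bin edges for daily precipitation intensity (PI, mm/d) as used in this section.
EDGES = [0, 0.1, 1, 5, 10, 15, 20, 30, 40, np.inf]
LABELS = ["0-0.1", "0.1-1", "1-5", "5-10", "10-15", "15-20", "20-30", "30-40", ">=40"]

def pi_frequency(daily_precip):
    """Occurrence frequency (%) of daily precipitation in each intensity bin."""
    counts, _ = np.histogram(daily_precip, bins=EDGES)
    return 100.0 * counts / counts.sum()

# Placeholder daily series (mm/d); replace with GO and product data.
rng = np.random.default_rng(0)
go_daily = rng.gamma(shape=0.4, scale=6.0, size=3650)
trmm_daily = rng.gamma(shape=0.4, scale=8.0, size=3650)

for label, f_go, f_prod in zip(LABELS, pi_frequency(go_daily), pi_frequency(trmm_daily)):
    print(f"{label:>6} mm/d  GO: {f_go:5.1f} %   product: {f_prod:5.1f} %")
```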
4.1.2 Evaluation at grid-scale
According to Fig. 6, the quality of the TRMM, IMERG_T, IMERG, CMADS, and CFSR data was generally better in the south-east than in the north-west. The north-western areas are covered with snow all year round, owing to their high altitude and higher latitude, which leads to poor-quality precipitation observations in this area (Mark et al., 2016; Noh et al., 2009). TRMM shows the largest overestimation, with PBIAS of 33.11%–59.74%, which gradually increases from downstream to upstream. The CFSR precipitation data were overestimated except at the Dari station, while the CMADS precipitation data were underestimated except at the Maqu station. The IMERG precipitation data were overestimated in the downstream area and underestimated upstream. Compared with the satellite precipitation products (CC of 0.09–0.40), the reanalysis precipitation products (CC of 0.34–0.58) have a better correlation with GO. The RMSE values of the five precipitation products were large in the south-east and small in the north-west. According to the statistical indicators of the various precipitation products, the overall performance of the CMADS precipitation products is the best, with PBIAS of −27.22% to 2.48%, CC of 0.43–0.58, and RMSE of 2.68–4.96 mm/d, followed by IMERG, CFSR, IMERG_T, and TRMM.
IMERG_T and TRMM have the same detection index values [Figs. 7(a) and (b)]; the specific reason for this is given in Section 2.2.1, so here we only analyze TRMM. According to Fig. 7, the four precipitation products have high detection rates (POD ≥ 0.60), of which CFSR performs best (POD ≥ 0.90), followed by IMERG (0.67 ≤ POD ≤ 0.82), CMADS (0.63 ≤ POD ≤ 0.84), and TRMM (0.60 ≤ POD ≤ 0.70). The FAR values of the four precipitation products increase with latitude. Among the four precipitation products, TRMM shows the highest false alarm ratio (0.40 ≤ FAR ≤ 0.57), followed by IMERG (0.40 ≤ FAR ≤ 0.57), CFSR (0.29 ≤ FAR ≤ 0.57), and CMADS (0.30 ≤ FAR ≤ 0.48). CFSR has the highest comprehensive forecasting ability, with a CSI of 0.48–0.69, followed by CMADS and IMERG, while TRMM exhibits the worst comprehensive forecasting ability. According to the detection indicators of the various precipitation products, the overall performance of the CFSR precipitation products is the best, with a POD of 0.90–0.98, FAR of 0.29–0.51, and CSI of 0.48–0.69, followed by CMADS, IMERG, and TRMM.
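These detection indices can be derived from a simple rain/no-rain contingency table in the usual way. The sketch below is a minimal illustration, assuming a 0.1 mm/d event threshold (not stated in this section) and synthetic daily series in place of the real gauge and product data.

```python
import numpy as np

def detection_indices(obs, sim, threshold=0.1):
    """POD, FAR, and CSI for daily rain/no-rain detection against gauge observations.

    A day is treated as a precipitation event when the amount >= threshold (mm/d);
    the 0.1 mm/d threshold is an assumption for illustration.
    """
    obs_event = np.asarray(obs) >= threshold
    sim_event = np.asarray(sim) >= threshold
    hits = np.sum(obs_event & sim_event)           # events detected by the product
    misses = np.sum(obs_event & ~sim_event)        # events the product missed
    false_alarms = np.sum(~obs_event & sim_event)  # product rain while gauge is dry
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Placeholder daily series (mm/d); replace with GO and product data.
rng = np.random.default_rng(1)
go_daily = rng.gamma(0.4, 6.0, 3650)
product_daily = np.clip(go_daily * rng.uniform(0.5, 1.8, 3650) + rng.normal(0, 0.5, 3650), 0, None)

pod, far, csi = detection_indices(go_daily, product_daily)
print(f"POD = {pod:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")
```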
4.2 Evaluation of hydrological simulations
4.2.1 Results of streamflow simulation using different precipitation datasets
According to Fig. 8, the runoff simulation results of Scenario S1 are the best overall, with R² and NSE values of 0.85/0.75, 0.84/0.51 in the calibration/validation periods at TNH and 0.81/0.57, 0.80/0.39 in the calibration/validation periods at JM. Scenario S6 performed second best; in the validation periods (R² = 0.78, NSE = 0.53 at TNH; R² = 0.64, NSE = 0.53 at JM) it yielded satisfactory performance and outperformed Scenario S1, but it performed poorly in the calibration periods. Scenario S6 underestimates the runoff during the dry season, owing to the underestimation of the CMADS precipitation data (Fig. 4). The runoff simulation results of Scenarios S3 and S7 were significantly overestimated, and neither TNH nor JM reached satisfactory performance, especially for Scenario S3 at JM. The reason for this is that the TRMM and CFSR precipitation data were overestimated (Fig. 4), and the TRMM precipitation data overestimate the upstream precipitation [Figs 3(c) and 6(b)].
Based on Figs 8 and 9, the runoff simulation results of Scenario S5 were significantly better than those of Scenario S3, but slightly worse than those of Scenario S2. In the calibration periods, the runoff simulations of Scenarios S2 (R² = 0.76, NSE = 0.75 at TNH; R² = 0.77, NSE = 0.70 at JM) and S5 (R² = 0.70, NSE = 0.65 at TNH; R² = 0.66, NSE = 0.66 at JM) yielded satisfactory performance, but the performance of both in the validation periods was extremely poor (NSE ≤ 0.26). This may be due to the short precipitation time-series in Scenarios S2 and S5 and the limited number of parameter calibration iterations, which lead to significant differences between the calibration- and validation-period performance. In summary, the runoff simulation results based on GO performed best overall, followed by IMERG, CMADS, CFSR, IMERG_T, and TRMM. The IMERG and CMADS precipitation products can be used in this data-scarce alpine region.
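For clarity, the R² and NSE indicators used throughout this comparison can be computed as in the minimal sketch below; the streamflow values are placeholders, and the functions are generic definitions rather than the exact post-processing applied to the model output here.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed streamflow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Illustrative streamflow series (m^3/s) at a gauging station; replace with the
# observed records and the simulated series for each scenario.
q_obs = np.array([120., 150., 300., 800., 1500., 1200., 900., 400., 200., 150.])
q_sim = np.array([100., 160., 280., 750., 1400., 1300., 950., 380., 220., 140.])
print(f"NSE = {nse(q_obs, q_sim):.2f}, R^2 = {r2(q_obs, q_sim):.2f}")
```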
4.2.2 Results of streamflow simulation using corrected precipitation datasets
As mentioned in Section 4.1, GO and the reanalysis precipitation products are highly correlated at the basin and grid scales, whereas the correlation with the satellite precipitation products is poor (Figs 4 and 6). Therefore, we only corrected the CMADS and CFSR precipitation data. Owing to the scarcity of data in the YRSR, we used GO to perform daily-scale regression analysis on the CMADS and CFSR precipitation data at the basin scale. Comparing the fitting performance of different functions, the cubic polynomial yields the highest R². Under the cubic polynomial fit, R² is 0.827 for CMADS and 0.934 for CFSR (Fig. 10).
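A minimal sketch of this regression-based correction is given below, assuming daily basin-average series. The synthetic data and the use of numpy.polyfit to fit and apply the cubic mapping are illustrative and are not the exact procedure used to produce Fig. 10.

```python
import numpy as np

# Daily basin-average precipitation (mm/d); placeholders standing in for the
# GO reference series and the CFSR series to be corrected.
rng = np.random.default_rng(2)
go_daily = rng.gamma(0.4, 6.0, 3650)
cfsr_daily = np.clip(1.3 * go_daily + rng.normal(0.0, 1.0, 3650), 0.0, None)

# Fit a cubic polynomial mapping the product values onto the GO values,
# then apply it to obtain the corrected series (negative values clipped to zero).
coeffs = np.polyfit(cfsr_daily, go_daily, deg=3)
cfsr_corrected = np.clip(np.polyval(coeffs, cfsr_daily), 0.0, None)

# R^2 of the fit, i.e. the share of GO variance explained by the cubic mapping.
ss_res = np.sum((go_daily - cfsr_corrected) ** 2)
ss_tot = np.sum((go_daily - go_daily.mean()) ** 2)
print("cubic fit R^2 =", round(1.0 - ss_res / ss_tot, 3))
```

The corrected series produced this way would then replace the raw product series as model forcing, which is how the CFSR_C scenario below is obtained.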
Fig. 11 shows that the corrected CFSR precipitation data improve the simulation results at TNH: the simulation results change from unsatisfactory to satisfactory, and the R² (NSE) values during the calibration and validation periods increase by 0.28 (0.34) and 0.22 (0.27), respectively. However, the overall performance of CMADS after correction remains unsatisfactory, because the correlation between GO and the CFSR precipitation data is better than that with CMADS (Fig. 4). In contrast to TNH, the corrected CMADS and CFSR precipitation data yield no improvement in the runoff simulation results at JM, which remain unsatisfactory.
4.2.3 Results of streamflow simulation using combined precipitation datasets
Based on the R² and NSE indicators, the simulation results obtained with the IMERG and CMADS precipitation data are close to, or even better than, those obtained with GO in the calibration or validation periods (Figs 8 and 9). The performance of the corrected CFSR precipitation data is also better (Fig. 11). Therefore, we combined the CMADS, CFSR_C, and IMERG precipitation data with GO, corresponding to Scenarios S10, S11, and S12. The spatial distribution of the precipitation stations is shown in Fig. 1(b).
According to Table 2, the overall performance of Scenario S10, which combines GO and CMADS, is the best, and the simulation results at TNH show good performance (R² = 0.77, NSE = 0.72), superior to those of Scenario S1 (R² = 0.80, NSE = 0.68) and Scenario S8 (R² = 0.59, NSE = 0.50). Although the simulation results at JM yielded unsatisfactory performance, they were close to being deemed satisfactory (calibration periods: R² = 0.50, NSE = 0.48; validation periods: R² = 0.55, NSE = 0.47). The runoff simulation results of Scenarios S11 and S12 are not as good as those of Scenarios S1 and S2, but slightly better than those of Scenarios S5 and S9.