We propose reconstructive reservoir computing (RRC) for anomaly detection in time-series signals. This paper investigates its fundamental properties through experiments employing echo state networks (ESNs). The RRC model is a reconstructor trained to replicate a normal input time-series signal with zero or positive delay (delay ≥ 0). In its anomaly detection process, we evaluate the instantaneous reconstruction error, defined as the difference between the input and output signals at each time step. Experiments with a sound dataset from industrial machines demonstrate that the error is low for normal signals and becomes higher for abnormal ones, showing successful anomaly detection. Notably, the behavior of the RRC model differs markedly from that of conventional anomaly detection models, namely those based on forecasting (delay < 0). The error of the proposed reconstructor is distinctly lower than that of a forecaster, resulting in superior distinction between normal and abnormal states. We show that the RRC model is effective over a wide range of reservoir parameters. We also illustrate the distribution of the output weights optimized through training and discuss their roles in the reconstruction. We then investigate the influence of the neuronal leaking rate and the amount of the delay time shift on the transient response and the reconstruction error, showing the high effectiveness of the reconstructor in anomaly detection. The proposed RRC will play a significant role in anomaly detection in the present and future sensor network society.
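To make the pipeline described above concrete, the following is a minimal sketch (not the authors' reference implementation) of an ESN-based reconstructor: a fixed random leaky-integrator reservoir, a ridge-regression readout trained to reproduce the input delayed by d ≥ 0, and the instantaneous reconstruction error |u(t − d) − y(t)| used as the anomaly score. All hyperparameter values (reservoir size, spectral radius, leaking rate, regularization, delay) and the toy sinusoidal training signal are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of an ESN-based reconstructor with instantaneous reconstruction
# error. Hyperparameters and the training signal are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed hyperparameters ---
N = 200            # reservoir size
rho = 0.9          # spectral radius
alpha = 0.3        # neuronal leaking rate
ridge = 1e-6       # readout regularization
d = 0              # delay (d >= 0: reconstruction; d < 0 would be forecasting)

# --- fixed random reservoir weights ---
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))   # rescale to the desired spectral radius

def run_reservoir(u):
    """Collect leaky-integrator reservoir states for a 1-D input signal u."""
    x = np.zeros(N)
    X = np.zeros((len(u), N))
    for t, u_t in enumerate(u):
        x = (1 - alpha) * x + alpha * np.tanh(W_in[:, 0] * u_t + W @ x)
        X[t] = x
    return X

# --- training on a normal signal: readout maps the state at time t to u(t - d) ---
u_train = np.sin(0.1 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
X = run_reservoir(u_train)
target = np.roll(u_train, d)            # delayed copy of the input
X, target = X[d:], target[d:]           # drop the first d wrapped-around samples
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)

# --- anomaly detection: instantaneous reconstruction error at each time step ---
def reconstruction_error(u):
    y = run_reservoir(u) @ W_out        # reconstructed (delayed) signal
    return np.abs(np.roll(u, d) - y)    # |u(t - d) - y(t)|

print("mean error on the normal signal:", reconstruction_error(u_train)[200:].mean())
```

In this sketch, an anomalous test signal would simply be passed to reconstruction_error, and a threshold on the resulting per-sample error (or its short-time average) would flag abnormal segments, in line with the detection procedure summarized above.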