Reviewing software changes is crucial: it reduces the number of defects introduced and thereby saves time and costs. Just-in-time (JIT) defect prediction has emerged as an approach to support the review process by predicting the likelihood of defects in new commits. Effort-aware evaluations were proposed to better manage developers' limited time for reviewing changes and to analyze the applicability of JIT defect prediction approaches. However, current effort-aware approaches neglect the time-dependent nature of software engineering and thus overstate the performance and applicability of such approaches. Furthermore, they do not reflect state-of-the-art software development practices. In this work, we discuss these limitations and propose a paradigm shift: evaluating JIT defect prediction more realistically and redirecting the focus to saving effort under the condition that defective commits are still reviewed. To this end, we propose performance metrics that better represent applicable JIT defect prediction. We further analyze reliability techniques that adapt the prediction results and allow for a risk-based application of JIT defect prediction models. Taking this new perspective, we find that while still reviewing 95% of defective commits, on average, 46% of the non-defective commits are correctly identified by JIT defect prediction models and can be skipped; therefore, 20% of the total avoidable effort can be saved by employing JIT defect prediction models.
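The risk-based application described above can be sketched as recall-constrained threshold selection: pick the most permissive score threshold that still routes a target fraction (e.g. 95%) of defective commits to review, then measure how many non-defective commits fall below that threshold and can be skipped. This is a minimal illustration of the idea, not the paper's method; the function names and toy data are assumptions for demonstration.

```python
import math

def pick_threshold(scores, labels, target_recall=0.95):
    """Largest score threshold t such that reviewing every commit with
    score >= t still flags at least target_recall of defective commits."""
    defect_scores = sorted(
        (s for s, y in zip(scores, labels) if y == 1), reverse=True
    )
    # Number of defective commits that must remain above the threshold.
    needed = math.ceil(target_recall * len(defect_scores))
    return defect_scores[needed - 1]

def skip_rate(scores, labels, threshold):
    """Fraction of non-defective commits scoring below the threshold,
    i.e. commits that could be skipped during review."""
    clean = [s for s, y in zip(scores, labels) if y == 0]
    return sum(s < threshold for s in clean) / len(clean)

# Toy example: four defective (label 1) and four clean (label 0) commits.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.65, 0.3]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
t = pick_threshold(scores, labels, target_recall=0.95)
print(t)                             # 0.6
print(skip_rate(scores, labels, t))  # 0.75
```

With real, well-calibrated model scores, the same two-step procedure yields the trade-off the abstract reports: a guaranteed review rate for defective commits alongside a measurable share of skippable non-defective commits.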