Relation classification (RC) is a fundamental task in knowledge discovery. Prototypical networks are commonly used for few-shot RC to tackle labeled-data scarcity and long-tail relations, but they often overlook classification reliability and outlier features, potentially leading to sub-optimal similarity measurement. In a dynamic world with emerging novel semantic relations, incremental few-shot learning of relations has gained attention; it involves learning both base and novel relations in new tasks while retaining base relation knowledge. However, this becomes challenging as the number of novel relations grows, requiring models to reduce their reliance on base relations when learning new tasks. Therefore, this paper explores class-incremental few-shot relation classification (CIFRC) with a Gaussian-augmented prototypical network (GA-Proto). GA-Proto refines similarity measurement by incorporating the discrepancy between ideal and actual classifications via a Gaussian mixture model and by analyzing Gaussian outlier features of query instances. It also employs reliability learning and knowledge distillation to mitigate encoding-space distortion and base relation forgetting, by enhancing classification reliability and transferring base relation knowledge, respectively. Additionally, GA-Proto uses label smoothing to alleviate overfitting to novel relations. Experimental results on three public datasets demonstrate that GA-Proto outperforms existing CIFRC methods, achieving up to a 19.64% improvement in accuracy.