In this paper, we propose a novel method for compressing and optimizing deep neural networks through channel pruning based on the spectral norm. As deep learning models grow in complexity, the demand for efficient deployment, particularly in resource-constrained environments, has increased. Traditional pruning methods often rely on magnitude-based criteria, which can overlook a model's stability and generalization capabilities. In contrast, our approach uses the spectral norm, the largest singular value of a layer's weight matrix, as a more principled pruning criterion. Because the spectral norm measures a layer's sensitivity to input perturbations, it offers a theoretically grounded way to identify and remove less critical channels without significantly degrading model performance. We evaluate our method on CIFAR-10 with VGG-16 and on ImageNet with ResNet-50, reducing parameters by over 80% and floating-point operations (FLOPs) by 50% while maintaining or even improving accuracy relative to traditional pruning techniques. These results demonstrate that spectral norm-based pruning is a robust and efficient method for deep network compression, opening avenues for further research into stability-aware optimization techniques.
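To make the criterion concrete, the sketch below shows one plausible way to score and select channels by spectral norm in PyTorch. It is a minimal illustration, not the paper's exact procedure: the per-channel scoring rule (largest singular value of each filter reshaped to an in_channels × kH·kW matrix), the keep ratio, and the function names are assumptions introduced here for clarity.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact algorithm):
# score each output channel of a Conv2d layer by the spectral norm of its
# filter, then keep the highest-scoring channels.
import torch
import torch.nn as nn


def channel_spectral_norms(conv: nn.Conv2d) -> torch.Tensor:
    """Return one spectral-norm score per output channel of `conv`."""
    w = conv.weight.detach()            # shape: (out_ch, in_ch, kH, kW)
    mats = w.flatten(start_dim=2)       # one (in_ch, kH*kW) matrix per output channel
    # torch.linalg.svdvals returns singular values in descending order,
    # so index 0 is the spectral norm of each channel's matrix.
    return torch.linalg.svdvals(mats)[:, 0]


def select_channels_to_keep(conv: nn.Conv2d, keep_ratio: float = 0.5) -> torch.Tensor:
    """Indices of the channels with the largest spectral-norm scores."""
    scores = channel_spectral_norms(conv)
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    return torch.topk(scores, n_keep).indices.sort().values


if __name__ == "__main__":
    conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
    keep = select_channels_to_keep(conv, keep_ratio=0.5)

    # Build a slimmer layer containing only the kept channels.
    # (Adjusting the input channels of downstream layers is omitted here.)
    pruned = nn.Conv2d(conv.in_channels, len(keep), kernel_size=3, padding=1)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        pruned.bias.copy_(conv.bias[keep])
    print(f"kept {len(keep)}/{conv.out_channels} channels")
```

In a full pipeline, this scoring step would typically be applied layer by layer and followed by fine-tuning to recover any lost accuracy.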