Explainability is becoming increasingly important in machine learning and, as model complexity grows, so does the complexity of the corresponding explanation. At the same time, a more complex problem carries more information, and this information can be exploited to generate a more precise explanation of how the model works. One of the most valuable ways to recover this relation between input and output is to extract counterfactual explanations. In binary classification, counterfactuals identify the minimal changes that move an observation to the opposite class. But how do counterfactuals work in multi-class problems? In this article, we propose MUCH (MUlti Counterfactual via Halton sampling), a novel methodology for extracting multiple counterfactual explanations from an original Multi-Class Support Vector Data Description algorithm (MC-SVDD). To evaluate the performance of the proposed method, we extracted counterfactual explanations from three benchmark datasets, achieving satisfactory results that pave the way for a range of real-world applications, for example disease prevention.
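To make the core idea concrete, here is a minimal sketch of a Halton-based counterfactual search. It is not the paper's MUCH/MC-SVDD method: the classifier is a generic scikit-learn multi-class SVC standing in for MC-SVDD, and the function name `halton_counterfactual`, the sample count, the box bounds, and the Iris dataset are all illustrative assumptions. The sketch only shows the general mechanism of proposing quasi-random candidates with a Halton sequence and keeping the nearest one assigned to the desired target class.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_iris
from sklearn.svm import SVC


def halton_counterfactual(clf, x, target_class, l_bounds, u_bounds,
                          n_samples=4096, seed=0):
    """Return the Halton-sampled point closest to x (Euclidean distance)
    that the classifier assigns to target_class, or None if none is found.

    Illustrative only: stands in for a counterfactual search around a
    trained multi-class model; it is not the MUCH algorithm itself.
    """
    # Quasi-random candidates in the unit hypercube, scaled to the data box.
    sampler = qmc.Halton(d=len(x), scramble=True, seed=seed)
    candidates = qmc.scale(sampler.random(n_samples), l_bounds, u_bounds)
    # Keep only candidates the model places in the desired class.
    hits = candidates[clf.predict(candidates) == target_class]
    if len(hits) == 0:
        return None
    # The counterfactual is the valid candidate with the smallest change.
    return hits[np.argmin(np.linalg.norm(hits - x, axis=1))]


# Toy usage: a generic SVC on Iris in place of MC-SVDD (an assumption).
X, y = load_iris(return_X_y=True)
clf = SVC().fit(X, y)
x = X[0]  # an observation from class 0
cf = halton_counterfactual(clf, x, target_class=1,
                           l_bounds=X.min(axis=0), u_bounds=X.max(axis=0))
print("original:", x)
print("counterfactual toward class 1:", cf)
```

In a multi-class setting, running such a search once per alternative class yields one counterfactual per target class, which is the "multiple counterfactual explanations" framing of the abstract.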