Automated translation systems are essential tools for organisations operating in multilingual environments, offering significant advantages in efficiency and scalability. A key challenge, however, lies in optimising translation models for specific in-house tasks while maintaining high accuracy across diverse languages, particularly those with limited linguistic resources. Fine-tuning GPT-4 for multilingual translation presents a novel approach that addresses both high-resource and low-resource languages, providing a more adaptable solution for domain-specific translation needs. A comprehensive evaluation using the automatic metrics BLEU, ROUGE, METEOR, and chrF showed that the fine-tuned model performed strongly on high-resource languages while revealing notable gaps in low-resource languages, indicating the need for further refinement. The study also emphasises the limitations of relying solely on automatic evaluation methods, which may fail to capture subtle linguistic nuances. Despite these challenges, the findings suggest that fine-tuning offers substantial improvements in efficiency, accuracy, and scalability for in-house translation tasks, paving the way for more targeted applications in diverse organisational contexts.
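
To illustrate how such an automatic evaluation might be run, the sketch below computes corpus-level BLEU and chrF with the sacrebleu package and sentence-level ROUGE-L with the rouge_score package. This is a minimal example under assumed tooling: the abstract does not specify which libraries were used, and the sample sentences are placeholders rather than the study's data.

```python
# Minimal sketch of automatic translation evaluation, assuming the
# sacrebleu and rouge_score packages (pip install sacrebleu rouge_score).
# The sentences below are illustrative placeholders, not the study's data.
import sacrebleu
from rouge_score import rouge_scorer

hypotheses = ["The cat sits on the mat."]        # model outputs
references = ["The cat is sitting on the mat."]  # human reference translations

# BLEU and chrF are corpus-level metrics; sacrebleu expects a list of
# reference streams (one per reference set), hence the extra nesting.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# ROUGE-L is computed per sentence pair and averaged here.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = sum(
    scorer.score(ref, hyp)["rougeL"].fmeasure
    for ref, hyp in zip(references, hypotheses)
) / len(hypotheses)

# METEOR would require an additional dependency (e.g. nltk's
# meteor_score with WordNet data) and is omitted here for brevity.
print(f"BLEU:    {bleu.score:.2f}")
print(f"chrF:    {chrf.score:.2f}")
print(f"ROUGE-L: {rouge_l:.3f}")
```

Character-based chrF is often reported alongside BLEU for morphologically rich or low-resource languages, since it is less sensitive to tokenisation choices, which is one reason evaluations of this kind typically combine several metrics rather than relying on one.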