Mechanistic interpretability (MI) [9] is the study of taking a trained neural network and analysing its weights to reverse engineer the algorithms learned by the model. We propose multiple forms of MI [9] to support financial engineering utilising large language models. As these models scale, their open-endedness and high capacity create increasing scope for unexpected and sometimes harmful behaviors [9][1]. Anthropic discuss how, even years after a large model is trained, both creators and users routinely discover model capabilities, including problematic behaviors, that they were previously unaware of. For our use case it is imperative that traders, or decision makers, have insight into the model that produces the signals supporting their decisions. We seek to provide mechanistic interpretability by exploring BERTology [13] for open models and by proposing alternative methods for closed-source generative models. We find that we can identify hallucinations in the sentiment justifications generated by a fine-tuned GPT-3.5 model by using an adjusted BLEU score. When considering the use of these applications in production systems, human review remains critical, but building a suite of hallucination-prevention metrics should also be considered.
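
As an illustration, the sketch below shows one way such a screen might be wired up, assuming the adjusted BLEU score reduces to an n-gram overlap between a generated justification and its source text; the `bleu_overlap` and `flag_hallucinations` helpers and the 0.15 threshold are hypothetical choices for this example, not the exact adjustment used in our pipeline.

```python
# Minimal sketch of BLEU-based hallucination screening. Assumption: the
# "adjusted" BLEU is approximated here as a smoothed 1-2-gram overlap between
# a generated sentiment justification and the source text it should be
# grounded in; the threshold below is illustrative only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def bleu_overlap(source_text: str, justification: str, max_n: int = 2) -> float:
    """Score how much of the generated justification overlaps the source text."""
    reference = [source_text.lower().split()]   # BLEU expects a list of token lists
    hypothesis = justification.lower().split()
    weights = tuple(1.0 / max_n for _ in range(max_n))  # uniform weights over 1..max_n-grams
    return sentence_bleu(reference, hypothesis, weights=weights,
                         smoothing_function=SmoothingFunction().method1)


def flag_hallucinations(pairs, threshold: float = 0.15):
    """Return indices of (source, justification) pairs whose overlap falls below threshold."""
    return [i for i, (src, just) in enumerate(pairs)
            if bleu_overlap(src, just) < threshold]


if __name__ == "__main__":
    headline = "Company X beats quarterly earnings expectations and raises guidance"
    grounded = "Positive: the company beat earnings expectations and raised guidance"
    ungrounded = "Positive: the CEO announced a merger with a major competitor"
    # Only the second justification introduces content absent from the headline.
    print(flag_hallucinations([(headline, grounded), (headline, ungrounded)]))  # -> [1]
```

Low-overlap justifications are not guaranteed hallucinations (a faithful paraphrase can score poorly), which is why such a metric is framed as a screen for human review rather than a replacement for it.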