Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their datasets. This process is orchestrated by a server that only accesses the model parameters and aggregates them during each round of FL training. However, the server's access to these parameters can compromise data privacy through attacks like model inversion. To mitigate these risks, Homomorphic Encryption (HE) encrypts the model parameters, enabling secure server-side aggregation. Despite this, the high value of a trained deep neural network poses the threat of malicious clients attempting to steal the model during training. FL watermarking combats this by embedding a secret watermark to protect the model's intellectual property. Yet, embedding watermarks into encrypted parameters from the server side has remained unaddressed until now. In this paper, we present FedCrypt, the first dynamic white-box watermarking technique compatible with HE in FL. FedCrypt trains a projection function on the activations of the encrypted model using a trigger set, preserving client privacy and enabling new verification protocols for joint ownership proof between the server and clients without disclosing private information. Our experimental results demonstrate FedCrypt's effectiveness, matching the accuracy of unencrypted models while remaining robust against several attacks.
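
To illustrate the core idea of dynamic white-box watermarking via a trained projection function, the following is a minimal plaintext sketch (all names, shapes, and the binary cross-entropy loss are illustrative assumptions, not the paper's exact formulation, and HE is omitted): a projection matrix is trained so that the model's activations on a secret trigger set decode to the owner's watermark bits, and verification later checks the extracted bits against the secret.

```python
# Illustrative sketch of projection-based dynamic watermarking
# (assumed formulation; the paper's method operates on encrypted
# parameters, which this plaintext example does not model).
import numpy as np

rng = np.random.default_rng(0)
n_triggers, act_dim, n_bits = 32, 64, 16

# Stand-ins for the watermarked layer's activations on the trigger set
# and for the owner's secret watermark bits.
activations = rng.normal(size=(n_triggers, act_dim))
watermark = rng.integers(0, 2, size=(n_triggers, n_bits)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train the projection matrix P by gradient descent on a binary
# cross-entropy loss so that sigmoid(activations @ P) matches the
# secret watermark bits.
P = rng.normal(scale=0.1, size=(act_dim, n_bits))
lr = 0.5
for _ in range(500):
    pred = sigmoid(activations @ P)
    grad = activations.T @ (pred - watermark) / n_triggers
    P -= lr * grad

# Verification: threshold the projected activations and compare the
# extracted bits to the secret watermark.
extracted = (sigmoid(activations @ P) > 0.5).astype(float)
bit_accuracy = (extracted == watermark).mean()
```

In a full FL setting, the projection matrix would play the role of the secret verification key: only a party holding both the trigger set and `P` can demonstrate a high bit-extraction accuracy, which is what makes joint server/client ownership proofs possible.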