The principle of maximum entropy is a well-established technique for choosing a distribution that matches available information while minimizing bias. It finds broad use across scientific disciplines and in machine learning. However, the principle as defined by Jaynes (1957) is susceptible to noise and error in observations. This forces real-world practitioners to use relaxed versions of the principle in an ad hoc way, negatively impacting interpretation. To address this, we present a new principle we call uncertain maximum entropy, which generalizes the classic principle and provides interpretable solutions irrespective of the observational methods in use. We introduce a convex approximation and an expectation-maximization-based algorithm for finding solutions to our new principle. Finally, we contrast this new technique with two simpler, generally applicable solutions and show, both theoretically and experimentally, that our technique provides superior accuracy.