Faced with concerns over the correctness of safety-critical systems, we should be able to take comfort that some parts of them can be proved correct: that is, we should be able to trust proofs. However, the methods by which formal proofs are built today make trusting them difficult. While proof assistants are used to build formal proofs, those proofs are often locked to the technology behind that assistant: formal proofs produced by one proof assistant cannot usually be checked and trusted by another proof assistant, nor by a future version of the same assistant. Thus reproducibility, one of the components of the scientific method that ensures trust, is missing from proof assistants today. Building on recent developments in the theory of proof, I will present the Foundational Proof Certificate framework for defining the semantics of proof evidence. Since such definitions are executable, it is possible to build proof checkers that can check a wide range of formal proofs in both classical and intuitionistic versions of logic and arithmetic. In this scheme, the logic engine that executes such definitions is the only component that needs to be trusted. Since such a logic engine is based on well-known computational logic techniques, anyone can write a logic engine in the programming language and on the hardware of their choice in order to build a checker they are willing to trust. Possible consequences of employing this framework include much richer networking of existing provers as well as globe-spanning libraries and marketplaces of proofs.
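The central idea that only a small, independently reimplementable kernel needs to be trusted can be illustrated with a sketch. The following is not the FPC framework itself but a toy analogue in Python, with hypothetical names (`check`, `Assume`, `ImpIntro`, `ImpElim`): proof evidence is represented as an explicit certificate, and a tiny engine recursively checks that each step is justified by an inference rule of minimal (implicational) logic.

```python
# A toy trusted kernel for implicational propositional logic.
# All names here are illustrative, not part of the actual FPC framework.
from dataclasses import dataclass

# --- Formulas: atoms and implications ---
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Imp:
    left: object
    right: object

# --- Proof certificates: one constructor per inference rule ---
@dataclass(frozen=True)
class Assume:          # use a hypothesis currently in scope
    formula: object

@dataclass(frozen=True)
class ImpIntro:        # discharge a hypothesis, proving hyp -> body
    hyp: object
    body: object

@dataclass(frozen=True)
class ImpElim:         # modus ponens: from A -> B and A, conclude B
    fun: object
    arg: object

def check(cert, hyps=frozenset()):
    """Return the formula this certificate proves, or raise ValueError.

    This function is the entire trusted base: anyone doubting a
    certificate can rewrite these ~20 lines in a language they trust."""
    if isinstance(cert, Assume):
        if cert.formula in hyps:
            return cert.formula
        raise ValueError("unjustified assumption")
    if isinstance(cert, ImpIntro):
        concl = check(cert.body, hyps | {cert.hyp})
        return Imp(cert.hyp, concl)
    if isinstance(cert, ImpElim):
        f = check(cert.fun, hyps)
        a = check(cert.arg, hyps)
        if isinstance(f, Imp) and f.left == a:
            return f.right
        raise ValueError("modus ponens mismatch")
    raise ValueError("unknown certificate form")

# Example: a certificate proving A -> A.
A = Atom("A")
identity = ImpIntro(A, Assume(A))
print(check(identity) == Imp(A, A))  # → True
```

Because the certificate format is explicit data rather than an artifact internal to one prover, any independently written `check` that agrees on the rules can re-verify the same evidence, which is the reproducibility property the abstract argues for.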