Protecting secret-manipulating programs against adversaries that may observe side-channels is challenging, and often involves complex countermeasures that are difficult to reason about. The evaluation of such countermeasures and of their deployment is therefore often split between proofs of security in relatively simple models, which serve as a first filter before implementation, and practical, empirical evaluations against known attacks, which validate that the model does not depart too far from the reality of such attacks. However, proofs of side-channel security, even in the simplest models considered (such as the probing model), are large, tedious, and error-prone. In this talk, I will discuss recent advances in the automated and formal verification of probing security. Starting from some observations relating security in the probing model to standard language-based security notions, I will present a strong notion of security that supports both automated verification of small programs at small security orders, and compositional proofs of security for large programs at arbitrary orders.
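To make the setting concrete, here is a minimal illustrative sketch (not taken from the talk) of first-order Boolean masking, a standard countermeasure analysed in the probing model: the secret is split into random shares so that any single intermediate value an adversary probes is distributed independently of the secret, and only recombining all shares recovers it. The function names here are hypothetical, chosen for the example.

```python
import secrets

def share(x: int, bits: int = 8) -> tuple[int, int]:
    """First-order Boolean masking: split x into two shares with x = s0 XOR s1.

    s0 is uniformly random, so each share on its own carries no
    information about x -- probing any single share reveals nothing.
    """
    s0 = secrets.randbits(bits)
    return s0, x ^ s0

def unshare(s0: int, s1: int) -> int:
    """Recombine the shares to recover the secret."""
    return s0 ^ s1

secret = 0x2A
s0, s1 = share(secret)
assert unshare(s0, s1) == secret
```

Proving that a whole masked program stays secure, i.e. that no set of probes up to the security order leaks the secret, is exactly the kind of large, tedious proof the talk proposes to automate and compose.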