Tuesday, December 17th - 10h00. INRIA Saclay - Bâtiment Alan Turing, Salle 2024

**Programme**

**Title**: Abstract channels and their robust information-leakage ordering

**Speaker**: Annabelle McIver (Macquarie University)

**Abstract**: The observable output of a probabilistic system that processes a secret input might reveal some information about that input. The system can be modelled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures can then quantify the amount of information leakage caused by the channel. But it turns out that the conventional channel representation, as a matrix, contains structure that is redundant with respect to that leakage, such as the labeling of columns, and columns that are scalar multiples of each other. In this talk we introduce abstract channels by quotienting over those redundancies; the mathematical properties of the resulting structure offer some insight into generalising the lattice of information from deterministic to probabilistic channels, and into how channels compose. Furthermore, the more abstract viewpoint suggests a generalisation of leakage measures, and a study of how they might give an optimal bound for any (reasonable) leakage measure and any prior.

**Title**: Compositionality of a new information order wrt programs and channels

**Speaker**: Carroll Morgan (UNSW)

**Abstract**: Recently an information-leakage order has been proposed that has a number of promising connections to earlier work in security and information flow. In the order's structural formulation, it generalises Landauer and Redmond's "Lattice of Information" from the deterministic to the probabilistic case. In its (Coriaceously equivalent) testing formulation, it generalises a number of other existing "entropies" such as Bayes Risk/Vulnerability, Guessing Entropy and even Shannon Entropy (at a stretch). (Notably, it does not include alpha-marginal guesswork, however.)
Furthermore, it is the "compositional closure" of Bayes Risk wrt a simple probabilistic programming language (pGCL) augmented with hidden state. Via even more recent work, however, it seems that the new order is not compositional wrt channel cascading on the right. That is, given channels C and C' with C -o C', where we write (-o) for "is no more secure than", it is -not- the case in general that C*D -o C'*D, where (*) denotes cascading (i.e. multiplication of the channel matrices). This surprising situation holds in spite of the fact that pGCL can express channels, and that the pGCL operators are monotonic wrt (-o). How can this be? One possibility of course is that we have made a technical mistake somewhere. But, if that is not so, it can only be that although pGCL can "encode" channels, it cannot encode channel -cascading-; and indeed that seems to be the case: it cannot. The reason is that pGCL's semantics for hidden state mandates "perfect recall", meaning that if a hidden value "leaks" into a visible variable, the leak cannot subsequently be "erased" by overwriting that visible variable: the toothpaste cannot go back into the tube. I will explain how pGCL came to have perfect recall "designed in", and why. Rather than being motivated by any of the above (which was not known at the time), it is instead a consequence of including compositionality as a goal from the very start, even in advance of a semantic model, via "gedanken experiments" over the program algebras that would result from the various alternative semantic models we were considering. The only reasonable algebra seemed to require perfect recall. Finally (if there is time) I will have a quick look at the fact that some familiar and even "obvious" security improvements are in fact -not- allowed by the (-o) order.
For example, "choose uniformly a 32-bit password" is not improved by "choose uniformly a 64-bit password" in the (-o) order --- and this is in spite of the fact that the latter is less Bayes Vulnerable and has more Guessing and Shannon entropy than the former. Digging into the reason that (-o) rejects this relationship seems to offer some interesting insights.

**Title**: Six scenarios for g-capacity

**Speaker**: Geoffrey Smith (FIU)

**Abstract**: In thinking about the g-capacity of a channel C, we can consider either multiplicative or additive g-leakage, and with respect to the gain function g and the prior pi, we can universally quantify over g, over pi, or over both. This leads to a total of six scenarios, each of which seems interesting to think about. In this informal and preliminary talk, I will discuss what is known and unknown (by me, at least!) about each of the six scenarios. I will also demonstrate the use of Kostas's linear-programming-based tools for calculating some of these capacities.

**Title**: TBA

**Speaker**: Mário Alvim (UPenn)

**Abstract**: TBA
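By way of illustration (this sketch is not part of any of the talks), the basic objects recurring in the abstracts above can be written down in a few lines of numpy: a channel as a row-stochastic matrix, cascading as matrix multiplication, posterior Bayes vulnerability as an expected max over the joint matrix, and multiplicative Bayes capacity as the sum of column maxima (a standard result, realised on the uniform prior). The example matrices are invented for the demonstration.

```python
import numpy as np

def posterior_bayes_vuln(pi, C):
    """V(pi, C): the adversary's expected probability of guessing the
    secret in one try after seeing the output.  J = diag(pi) @ C is the
    joint matrix; for each output column the adversary guesses the most
    likely input, so V is the sum of the column maxima of J."""
    J = pi[:, None] * C
    return J.max(axis=0).sum()

def cascade(C, D):
    """Channel cascading: feed C's output into post-processor D
    (matrix multiplication of the channel matrices)."""
    return C @ D

def mult_bayes_capacity(C):
    """Multiplicative Bayes capacity: the sum of C's column maxima,
    attained at the uniform prior."""
    return C.max(axis=0).sum()

# An invented channel on two secrets and three observations.
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.1, 0.8]])
# An invented post-processing channel that merges the two most
# informative observations, destroying what distinguished the secrets.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])

pi   = np.array([0.5, 0.5])                     # uniform prior
v_C  = posterior_bayes_vuln(pi, C)              # 0.85
v_CD = posterior_bayes_vuln(pi, cascade(C, D))  # 0.5: back to the prior

# Post-processing by a fixed D can only lose information:
assert v_CD <= v_C
```

Note that this monotonicity in C's *output* is a data-processing fact about a fixed cascade; it does not contradict Morgan's observation that the refinement order (-o) itself fails to be compositional under cascading on the right.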