Maks Ovsjanikov will defend his habilitation thesis next week, on May 19th. It will take place at Télécom ParisTech, 46 rue Barrault, Paris, amphi Estaunié.
Abstract: The work presented in my habilitation dissertation describes a set of approaches for analyzing and processing 3D shapes and their relations. The main unifying theme of this work is the observation that many concepts in geometric data analysis can be considered, both in theory and in practice, as linear operators acting on real-valued functions defined on the shapes. Although this point of view has been common in some areas of mathematics, such as dynamical systems, representation theory or parts of differential geometry, it has only recently been adopted in digital geometry processing, where it has led to novel insights and efficient algorithms for a wide variety of problems, including shape matching, tangent vector field analysis and shape comparison, to name a few. I will give an overview of these and related techniques and demonstrate, in particular, how the functional operator point of view can be helpful in a variety of practical settings, both by providing a common language in which many operations can be expressed and by enabling the use of classical linear-algebraic tools in novel, and sometimes unexpected, scenarios.
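The operator view can be made concrete with a small sketch: a point-to-point map between sampled shapes induces a linear pull-back operator on functions, which can in turn be expressed as a small matrix in a truncated functional basis. Everything below (the points, the map, the basis) is a synthetic illustration, not material from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # points sampled on each shape
T = rng.permutation(n)                 # a point-to-point map between shapes

# Pulling functions back along T is a linear operator P: (P f)[i] = f[T[i]]
P = np.eye(n)[T]

f, g = rng.standard_normal(n), rng.standard_normal(n)
# Linearity: composing with T commutes with linear combinations of functions
assert np.allclose(P @ (2 * f + 3 * g), 2 * (P @ f) + 3 * (P @ g))

# In a truncated functional basis Phi (columns = basis functions), the same
# map becomes a small "functional map" matrix C, usable for shape matching
Phi = np.linalg.qr(rng.standard_normal((n, 4)))[0]   # orthonormal basis
C = Phi.T @ P @ Phi                    # 4x4 matrix encoding the map
print(C.shape)
```

The point of the reduced matrix C is that map-related operations become ordinary linear algebra on small matrices.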
Schnyder woods are combinatorial structures on planar triangulations (maximal planar graphs embedded on the sphere) that can be formulated as a certain partition of the edges into 3 spanning trees. These structures have found many algorithmic applications, for instance in graph drawing, succinct encoding of meshes, efficient routing in planar networks, etc. I will present some of these applications, with an emphasis on the interplay with combinatorics.
The next Comète seminar will take place next Wednesday, 17th of May 2017, at 14h00 in Salle Grace Hopper (LIX, Alan Turing building, École Polytechnique). Giovanni Cherubin, PhD candidate in Information Security with the CDT in Cyber Security at Royal Holloway, University of London (RHUL), will talk about "'Bayes, not Naïve': Security Bounds on Website Fingerprinting Defenses" (abstract below).
Abstract: Website Fingerprinting attacks allow an adversary to predict which web pages a victim visits, even when she browses through Tor or a VPN, by applying Machine Learning classification techniques to the encrypted traffic she produces. To date, the common method for evaluating Website Fingerprinting defences has been to test them against state-of-the-art attacks. This has generated a 15-year-long arms race.
This talk presents a practical method for deriving security bounds for Website Fingerprinting defences, based on results from Machine Learning theory (specifically, the optimality of the Bayes classifier). With respect to the set of features used by an adversary, the method gives a lower-bound estimate of what the adversary can achieve with any classifier he may use. This result: i) allows practitioners to evaluate and compare defences in terms of their security, and ii) favours a shift of WF research towards a classifier-agnostic identification of optimal features. The talk will then consider open questions and future applications of the method.
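To illustrate how Bayes-classifier optimality can yield a lower bound from data, the classical Cover-Hart inequality relates the error of a 1-nearest-neighbour classifier to the Bayes error. The sketch below applies it to synthetic one-dimensional "traffic features"; the data, constants, and this particular estimator are illustrative assumptions, not the talk's actual method or datasets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "traffic feature" data: two overlapping classes (illustrative only)
n = 2000
X = np.concatenate([rng.normal(0, 1, n), rng.normal(1.5, 1, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Leave-one-out 1-nearest-neighbour error estimate (1-D: sort, look both ways)
order = np.argsort(X)
Xs, ys = X[order], y[order]
err = 0
for i in range(len(Xs)):
    cands = []
    if i > 0:
        cands.append((Xs[i] - Xs[i - 1], ys[i - 1]))
    if i < len(Xs) - 1:
        cands.append((Xs[i + 1] - Xs[i], ys[i + 1]))
    pred = min(cands)[1]          # label of the nearest neighbour
    err += pred != ys[i]
r_nn = err / len(Xs)

# Cover-Hart: R_NN <= 2 R*(1 - R*), hence a lower bound on the Bayes error R*
r_star_lb = (1 - np.sqrt(max(0.0, 1 - 2 * r_nn))) / 2
print(f"NN error {r_nn:.3f} -> Bayes error lower bound {r_star_lb:.3f}")
```

No classifier an adversary may choose, on these features, can beat the bound, which is the classifier-agnostic flavour of guarantee the abstract describes.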
The next GT Combi of the Plateau de Saclay will take place this Wednesday, May 3rd, at 11am in the Philippe Flajolet room at LIX. It will be a participative session, where everyone can present their questions, conjectures, or work in progress related to combinatorics. The programme of the GT Combi du Plateau de Saclay is available here
Abstract: We present integer linear models with a polynomial number of variables and constraints for combinatorial optimization problems in graphs: optimum elementary cycles (of which the traveling salesman problem is a special case), optimum elementary paths, even in graphs with negative cycles, and optimum trees (of which the Steiner tree problem is a special case). Keywords: modeling combinatorial optimization problems, optimization in graphs, characterization of connected subgraphs.
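A classic example of such a compact model is the Miller-Tucker-Zemlin (MTZ) formulation of the traveling salesman problem, which needs only O(n^2) variables and constraints instead of the exponentially many subtour-elimination cuts of the standard formulation. The sketch below only counts the model's size (MTZ is a textbook formulation used here for illustration, not necessarily one of the talk's models):

```python
# Miller-Tucker-Zemlin (MTZ) TSP formulation: a compact model with O(n^2)
# variables and O(n^2) constraints. We only count the model's size here;
# actually solving it would require an ILP solver.
def mtz_model_size(n):
    arc_vars = n * (n - 1)          # binary x[i][j] for each directed arc
    order_vars = n - 1              # continuous u[i] ordering city i (city 0 fixed)
    degree_cons = 2 * n             # one out-degree and one in-degree row per city
    mtz_cons = (n - 1) * (n - 2)    # u_i - u_j + n*x_ij <= n - 1 for i, j != 0
    return arc_vars + order_vars, degree_cons + mtz_cons

for n in (5, 10, 100):
    v, c = mtz_model_size(n)
    print(f"n={n}: {v} variables, {c} constraints")
```

The quadratic growth is what makes such models practical to feed to an off-the-shelf ILP solver even for non-trivial n.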
When convergent, string rewriting systems provide a pleasant and efficient way to decide equality in monoids (or in groups). One may ask whether this method is universal, that is, whether every monoid admits a finite convergent presentation. A negative answer was given in the 1980s by Squier, who devised a proof using "geometric" invariants of monoids (their homology groups), thereby showing the value of taking the "geometry of computations" into account. I will present this result and, time permitting, recent extensions to problems in universal algebra. Of course, no prior knowledge of the aforementioned fields is required.
Abstract: We propose a symbolic control synthesis method for nonlinear sampled switched systems whose vector fields are one-sided Lipschitz. The main idea is to use an approximate model obtained from the forward Euler method to build a guaranteed control. The benefit of this method is that the error introduced by the symbolic modeling is bounded by choosing suitable time and space discretizations. The method is implemented in the interpreted language Octave. Several examples from the literature are run, and the results are compared with those obtained with a previous method based on Runge-Kutta integration.
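The core idea can be sketched on a made-up two-mode system: at each sampling instant, pick a switching mode by comparing Euler successors, while propagating a conservative error radius through an (assumed) one-sided Lipschitz constant. Everything here, including the dynamics, constants, and the greedy rule, is an illustrative toy, not the authors' Octave implementation:

```python
import math

# Toy sampled switched system: two modes u in {0, 1}, dx/dt = f_u(x)
# (the system, constants and horizon below are illustrative assumptions)
def f(u, x):
    if u == 0:
        return (-0.5 * x[0] + x[1], -x[1])
    return (-x[0], x[0] - 0.5 * x[1])

TAU = 0.1          # sampling period
L = 1.5            # assumed one-sided Lipschitz constant for both modes

def euler_step(u, x, tau=TAU):
    dx = f(u, x)
    return (x[0] + tau * dx[0], x[1] + tau * dx[1])

def control(x, steps=50):
    """Greedy synthesis: at each sample, pick the mode whose Euler successor
    is closest to the origin; track an error radius over-approximating the
    gap between the Euler model and the true trajectory."""
    err = 0.0
    for _ in range(steps):
        succs = [(euler_step(u, x), u) for u in (0, 1)]
        x, u = min(succs, key=lambda s: s[0][0] ** 2 + s[0][1] ** 2)
        # crude guaranteed error update: local Euler error of order tau^2,
        # propagated through one-sided Lipschitz growth exp(L * tau)
        err = err * math.exp(L * TAU) + 0.5 * TAU ** 2
    return x, err

x, err = control((1.0, 1.0))
print(x, err)
```

The selling point of the real method is exactly that the error radius stays controllable by shrinking the time and space discretizations, so the Euler model certifies the behaviour of the true system.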
Abstract: Many problems in geometry processing, graph theory, and machine learning involve optimizations whose variables are defined over a geometric domain. The geometry of the domain gives rise to geometric structure in the optimization problem itself. In this talk, I will show how leveraging geometric structure in the optimization problem gives rise to efficient and stable algorithms applicable to a variety of application domains. In particular, I will describe new methods for problems arising in shape analysis/correspondence, flows on graphs, and surface parameterization.
Abstract: The intrinsic Laplacian operator plays a central role in geometry processing, editing, and analysis. However, intrinsic approaches cannot capture the spatial embedding (extrinsic geometry) of a shape. Instead, I will discuss the Dirichlet-to-Neumann operator, whose spectrum (known as the Steklov eigenvalues) encodes extrinsic geometry information. This operator provides a principled approach to analyzing and processing extrinsic and volumetric geometries.
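A discrete analogue is easy to experiment with: taking the Schur complement of a graph Laplacian onto its boundary vertices yields a discrete Dirichlet-to-Neumann operator, whose eigenvalues play the role of Steklov eigenvalues. A NumPy sketch on a small grid graph (the grid domain and this particular discretization are illustrative choices, not the talk's construction):

```python
import numpy as np

# Discrete Dirichlet-to-Neumann operator on a 5x5 grid graph: eliminate the
# interior vertices of the graph Laplacian by a Schur complement onto the
# boundary; its eigenvalues are a discrete analogue of Steklov eigenvalues.
m = 5
idx = {(i, j): i * m + j for i in range(m) for j in range(m)}
n = m * m
L = np.zeros((n, n))
for (i, j), a in idx.items():
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (i + di, j + dj) in idx:
            b = idx[(i + di, j + dj)]
            L[a, a] += 1
            L[a, b] -= 1

boundary = [a for (i, j), a in idx.items() if i in (0, m - 1) or j in (0, m - 1)]
interior = [a for a in range(n) if a not in boundary]

LBB = L[np.ix_(boundary, boundary)]
LBI = L[np.ix_(boundary, interior)]
LII = L[np.ix_(interior, interior)]
# Schur complement: maps boundary values to boundary fluxes of the
# harmonic extension, i.e. the discrete Dirichlet-to-Neumann operator
DtN = LBB - LBI @ np.linalg.solve(LII, LBI.T)

steklov = np.sort(np.linalg.eigvalsh(DtN))
print(steklov[:4])   # first eigenvalue ~0: constants are harmonic
```

Note how the operator lives only on the boundary while still "seeing" the interior through the harmonic extension, which is the sense in which the Steklov spectrum encodes volumetric/extrinsic information.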
Today data is being generated at an unprecedented rate, so much so that 90% of the data in the world has been created in the past two years. However, the human ability to comprehend data remains as limited as before. The Big Data era is thus presenting us with a widening gap between the growth of data and the human ability to comprehend it. Consequently, there has been a growing demand for data management tools that can bridge this gap and help users retrieve high-value content from data more effectively. To respond to such needs, our team is developing a new database service for interactive exploration in a framework called "explore-by-example." In this talk, I introduce the explore-by-example framework, which iteratively seeks user relevance feedback on database samples and uses this feedback to finally predict a query that retrieves all objects of interest to the user. The goal is to make such exploration converge quickly to the true user interest model, while minimizing the user's labeling effort and providing interactive performance. I discuss a range of technical issues involved in doing so for complex user interest patterns. I conclude the talk by pointing out a host of new challenges, from the application of learning theory, to database optimization, to heterogeneous data sets, to visualization.
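A toy version of the explore-by-example loop can be sketched in a few lines: simulated user feedback on sampled records iteratively narrows a predicted query. The one-dimensional database, the interval interest model, and the random sampling policy below are all simplifying assumptions; real systems handle far richer interest patterns and sample much more cleverly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "database": 1-D records; the hidden user interest is an interval
db = rng.uniform(0, 10, 500)
user_interested = lambda x: 3.0 <= x <= 6.0     # simulated user feedback

# Explore-by-example loop (sketch): show samples, collect labels, and
# keep shrinking the region consistent with all feedback so far
lo, hi = 0.0, 10.0            # current outer bound on the interval
pos = []                      # positively labelled examples
for _ in range(30):
    x = rng.choice(db)        # sample shown to the user (random here; a
                              # real system would sample near the boundary)
    if user_interested(x):
        pos.append(x)
    elif pos and x < min(pos):
        lo = max(lo, x)       # a negative below all positives tightens lo
    elif pos and x > max(pos):
        hi = min(hi, x)       # a negative above all positives tightens hi

# Predicted query: the learned interval, used to retrieve all matching rows
pred_lo = (lo + min(pos)) / 2 if pos else lo
pred_hi = (hi + max(pos)) / 2 if pos else hi
result = [x for x in db if pred_lo <= x <= pred_hi]
print(f"learned interval [{pred_lo:.2f}, {pred_hi:.2f}], {len(result)} rows")
```

The framework's research questions live exactly in the gaps this sketch glosses over: which samples to show so the loop converges fast, and how to learn interest patterns far more complex than an interval.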
Abstract: A well-identified issue in algorithms and, in particular, in online computation is the weakness of the worst-case paradigm. Several more accurate models have been proposed with the aim of eliminating this weakness. Resource augmentation is one of these models; it has been used successfully to provide theoretical evidence for several scheduling heuristics with good performance in practice. In this model, the algorithm is applied in a more powerful environment than that of the adversary. Several types of resource augmentation for scheduling problems have been proposed so far, including speed augmentation, machine augmentation and, more recently, rejection. In this context, we propose algorithms for online scheduling problems minimizing variations of the total weighted flow time objective, based on mathematical programming and a primal-dual scheme. Inspired by these algorithms, we present an online heuristic for the non-preemptive scheduling of sequential jobs, which outperforms standard scheduling policies, as shown by an extensive campaign of simulations on instances obtained from real traces.
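For a flavour of why smarter non-preemptive policies help with flow time, the toy simulation below compares first-come-first-served with a simple density rule (weight divided by processing time) on a made-up instance. Both the rule and the instance are illustrative only, not the primal-dual heuristic of the talk:

```python
# Toy single-machine, non-preemptive simulation comparing FIFO with a
# density rule, in the spirit of heuristics derived from primal-dual
# flow-time algorithms. Each job is (release time, size, weight).
jobs = [(0, 5, 1), (1, 1, 4), (2, 2, 3), (3, 4, 1), (4, 1, 5)]

def simulate(priority):
    """Run jobs non-preemptively, always picking the released, unfinished
    job with the highest priority; return total weighted flow time."""
    t, total, done = 0, 0, set()
    while len(done) < len(jobs):
        avail = [j for j in range(len(jobs))
                 if j not in done and jobs[j][0] <= t]
        if not avail:                       # idle until the next release
            t = min(jobs[j][0] for j in range(len(jobs)) if j not in done)
            continue
        j = max(avail, key=priority)
        r, p, w = jobs[j]
        t += p
        total += w * (t - r)                # weighted flow time of job j
        done.add(j)
    return total

fifo = simulate(lambda j: -jobs[j][0])             # first-come first-served
hdf = simulate(lambda j: jobs[j][2] / jobs[j][1])  # highest density first
print(f"FIFO: {fifo}, density rule: {hdf}")
```

On this instance the density rule already beats FIFO on total weighted flow time, illustrating the gap that more principled, analysis-backed policies exploit.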
Abstract: In the setting of the modal logic that characterizes modal refinement over modal transition systems, Boudol and Larsen showed that the formulae for which model checking can be reduced to preorder checking, that is, the characteristic formulae, are exactly the consistent and prime ones. This paper presents general, sufficient conditions guaranteeing that characteristic formulae are exactly the consistent and prime ones. It is shown that the given conditions apply to the logics characterizing all the semantics in van Glabbeek's branching-time spectrum.
Abstract: We discuss theory, formulations and optimization methods for embedded problems. First, similarities and differences between bilevel, semi-infinite and generalized semi-infinite programs are discussed, along with the challenges due to nonconvexity. Discretization methods are presented that guarantee an approximately global solution of the embedded programs despite the presence of nonconvex lower-level programs. Numerical results are shown for test problems. Existing and new formulations from operations research and chemical engineering are shown along with results.
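The discretization idea can be illustrated on a toy semi-infinite program: solve the problem over a finite subset of index points, find the most violated constraint, add it, and repeat (a Blankenship-Falk style exchange scheme; the specific problem and tolerances below are illustrative choices, not the talk's methods):

```python
import math

# Blankenship-Falk style discretization for a toy semi-infinite program:
#   min x   s.t.   sin(y) - x <= 0   for all y in [0, 2*pi]
# whose exact solution is x* = 1.
Y = [0.0]                                            # current finite index set
grid = [k * 2 * math.pi / 10000 for k in range(10001)]

for it in range(50):
    x = max(math.sin(y) for y in Y)                  # finite subproblem: min x
                                                     # s.t. x >= sin(y), y in Y
    y_star = max(grid, key=lambda y: math.sin(y) - x)  # separation on a fine grid
    if math.sin(y_star) - x <= 1e-6:                 # no significant violation
        break
    Y.append(y_star)                                 # exchange step: add the cut

print(f"x = {x:.6f} after {it + 1} iterations with {len(Y)} index points")
```

The finite subproblems are ordinary optimization problems, which is what makes the scheme compatible with global solvers even when the lower-level program is nonconvex.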
Pierre Karpman, a former LIX PhD student supervised by Daniel Augot and Pierre-Alain Fouque, is part of an international team that has just demonstrated a major flaw in the SHA1 cryptographic algorithm. The attack required the equivalent of 6500 years of computation and makes it possible to produce forged documents. It has significant industrial consequences, since SHA1 is still used to secure websites (SSL/TLS) and email (PGP/GPG), and in common tools such as Git.
Full details of the attack are available on the collaboration's website:
Faced with concerns over the correctness of safety-critical systems, we should be able to take comfort that some parts of them can be proved correct: that is, we should be able to trust proofs. However, the methods by which formal proofs are built today make trusting them difficult. While proof assistants are used to build formal proofs, those proofs are often locked to the technology behind that assistant. Formal proofs produced by one proof assistant cannot usually be checked and trusted by another proof assistant, nor by a future version of itself. Thus, one of the components of the scientific method that ensures trust, namely reproducibility, is missing from proof assistants today. Building on recent developments in the theory of proof, I will present the Foundational Proof Certificate framework for defining the semantics of proof evidence. Since such definitions are executable, it is possible to build proof checkers that can check a wide range of formal proofs in both classical and intuitionistic versions of logic and arithmetic. In this scheme, the logic engine that executes such definitions is the only thing that needs to be trusted. Since such a logic engine is based on well-known computational logic topics, anyone can write a logic engine in their choice of programming language and hardware in order to build a checker they might trust. Possible consequences of employing this framework include much richer networking of existing provers, as well as enabling both globe-spanning libraries and marketplaces of proofs.
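The principle that only a small logic engine need be trusted can be illustrated with a minimal LCF-style kernel: theorem objects can only be produced by the kernel's inference rules, so checking a proof certificate amounts to replaying it through that small engine. This toy propositional kernel illustrates the principle only; it is not the Foundational Proof Certificate framework itself:

```python
# A minimal LCF-style "trusted kernel" sketch for a fragment of
# propositional logic: a Thm value can only be created by the inference
# rules below, so any surviving Thm really is a theorem.

class Thm:
    """A theorem |- formula. Only the kernel rules may construct one."""
    def __init__(self, formula, _token=None):
        assert _token is _KERNEL, "theorems come only from inference rules"
        self.formula = formula

_KERNEL = object()   # private capability held by the kernel rules

# Formulas are tuples: ('var', name) | ('imp', a, b)
def axiom_k(a, b):
    # axiom schema K:  |- a -> (b -> a)
    return Thm(('imp', a, ('imp', b, a)), _KERNEL)

def modus_ponens(th_ab, th_a):
    # from |- a -> b and |- a, conclude |- b; mismatches are rejected
    kind, a, b = th_ab.formula
    assert kind == 'imp' and a == th_a.formula, "modus ponens mismatch"
    return Thm(b, _KERNEL)

# Replaying a proof certificate = a sequence of rule applications;
# an invalid step raises, so only genuine theorems survive
p, q = ('var', 'p'), ('var', 'q')
step1 = axiom_k(p, q)             # |- p -> (q -> p)
print(step1.formula)
```

Because the kernel is a few dozen lines, it can be rewritten and audited in any language, which is precisely the reproducibility argument: trust attaches to a small, replaceable engine rather than to a monolithic assistant.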