Word rewriting systems provide, when they are convergent, a pleasant and efficient way to decide equality in monoids (or in groups). One may ask whether this method is universal, that is, whether every monoid admits a finite convergent presentation. A negative answer was given in the 1980s by Squier, who devised a proof using "geometric" invariants of monoids (their homology groups), thereby demonstrating the value of taking the "geometry of computations" into account. I will present this result and, time permitting, recent extensions to problems in universal algebra. No prior knowledge of the above-mentioned fields is required, of course.
Abstract: Many problems in geometry processing, graph theory, and machine learning involve optimizations whose variables are defined over a geometric domain. The geometry of the domain gives rise to geometric structure in the optimization problem itself. In this talk, I will show how leveraging geometric structure in the optimization problem gives rise to efficient and stable algorithms applicable to a variety of application domains. In particular, I will describe new methods for problems arising in shape analysis/correspondence, flows on graphs, and surface parameterization.
Abstract: The intrinsic Laplacian operator plays a central role in geometry processing, editing, and analysis. However, intrinsic approaches cannot capture the spatial embedding (extrinsic geometry) of a shape. Instead, I will discuss the Dirichlet-to-Neumann operator, whose spectrum (the Steklov eigenvalues) encodes extrinsic geometry information. This operator provides a principled approach to analyzing and processing extrinsic and volumetric geometries.
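As a toy illustration (my own, not from the talk), the discrete analogue of the Dirichlet-to-Neumann operator on a graph can be computed as a Schur complement of the graph Laplacian with respect to the interior vertices. Here it is on a three-vertex path with boundary {1, 3} and interior {2}, using exact rational arithmetic:

```python
from fractions import Fraction as F

# Discrete Dirichlet-to-Neumann map on the path graph 1 - 2 - 3,
# boundary = {1, 3}, interior = {2} (a toy stand-in for a volumetric mesh).
# Blocks of the graph Laplacian, ordered (boundary, interior):
L_BB = [[F(1), F(0)], [F(0), F(1)]]
L_BI = [[F(-1)], [F(-1)]]
L_IB = [[F(-1), F(-1)]]
L_II = [[F(2)]]

# DtN = Schur complement  L_BB - L_BI * L_II^{-1} * L_IB
inv_L_II = 1 / L_II[0][0]
correction = [[L_BI[i][0] * inv_L_II * L_IB[0][j] for j in range(2)]
              for i in range(2)]
DtN = [[L_BB[i][j] - correction[i][j] for j in range(2)] for i in range(2)]
print([[str(x) for x in row] for row in DtN])  # [['1/2', '-1/2'], ['-1/2', '1/2']]
```

The resulting operator [[1/2, -1/2], [-1/2, 1/2]] has Steklov eigenvalues 0 and 1; unlike the intrinsic Laplacian of the boundary alone, it depends on what lies inside.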
Today data is being generated at an unprecedented rate, so much so that 90% of the data in the world has been created in the past two years. However, the human ability to comprehend data remains as limited as before. The Big Data era is thus presenting us with a growing gap between the growth of data and the human ability to comprehend it. Consequently, there is increasing demand for data management tools that can bridge this gap and help the user retrieve high-value content from data more effectively. To respond to this need, our team is developing a new database service for interactive exploration in a framework called "explore-by-example." In this talk, I will introduce the explore-by-example framework, which iteratively seeks user relevance feedback on database samples and uses this feedback to ultimately predict a query that retrieves all objects of interest to the user. The goal is to make such exploration converge quickly to the true user interest model, while minimizing the user's labeling effort and providing interactive performance. I will discuss a range of technical issues in achieving this for complex user interest patterns. I will conclude by pointing out a host of new challenges, from the application of learning theory to database optimization, heterogeneous data sets, and visualization.
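A minimal sketch of the explore-by-example loop, under two simplifying assumptions of mine (a single numeric attribute and an interval-shaped user interest; the actual system handles far more complex patterns and samples actively rather than at random):

```python
import random

random.seed(0)

# Hypothetical one-attribute database; the user's (unknown) interest is
# the interval [20, 40] -- both are assumptions made for this sketch.
DB = [random.uniform(0, 100) for _ in range(1000)]
user_is_interested = lambda x: 20 <= x <= 40

def explore_by_example(db, oracle, budget=100):
    """Iteratively request labels on samples and maintain an interval
    predicate covering the labelled positives; the final interval is
    the predicted query returned to the user."""
    positives = []
    for _ in range(budget):
        x = random.choice(db)          # a real system samples actively
        if oracle(x):
            positives.append(x)
    return (min(positives), max(positives)) if positives else None

lo, hi = explore_by_example(DB, user_is_interested)
print(f"predicted query: {lo:.1f} <= x <= {hi:.1f}")
```

The tension the talk addresses is visible even here: a small labeling budget gives an interval that undershoots the true interest region, while convergence to it requires more user effort.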
Abstract: A well-identified issue in algorithms and, in particular, in online computation is the weakness of the worst-case paradigm. Several more accurate models have been proposed with the aim of eliminating this weakness. Resource augmentation is one such model, and it has been used successfully to provide theoretical evidence for several scheduling heuristics with good performance in practice. In this model, the algorithm is applied to a more powerful environment than that of the adversary. Several types of resource augmentation for scheduling problems have been proposed to date, including speed augmentation, machine augmentation, and, more recently, rejection. In this context, we propose algorithms for online scheduling problems minimizing variants of the total weighted flow time objective, based on mathematical programming and a primal-dual scheme. Inspired by these algorithms, we present an online heuristic for non-preemptive scheduling of sequential jobs, which outperforms standard scheduling policies, as shown by an extensive campaign of simulations on instances derived from real traces.
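As a toy illustration of why density-based policies can beat naive ones on weighted flow time (my own example, much simpler than the primal-dual algorithms of the talk: preemptive, single machine, unit time steps), compare FIFO against a highest-density-first rule:

```python
def weighted_flow_time(jobs, policy):
    """Single-machine, preemptive, discrete-time simulation.
    jobs: list of (release, processing, weight); policy picks which
    pending job receives the next unit of processing."""
    remaining = [p for _, p, _ in jobs]
    done = [None] * len(jobs)
    t = 0
    while any(r > 0 for r in remaining):
        pending = [i for i in range(len(jobs))
                   if jobs[i][0] <= t and remaining[i] > 0]
        if pending:
            i = policy(pending, jobs)
            remaining[i] -= 1
            if remaining[i] == 0:
                done[i] = t + 1
        t += 1
    # weighted flow time = sum of weight * (completion - release)
    return sum(w * (done[i] - rel) for i, (rel, _, w) in enumerate(jobs))

fifo = lambda pend, jobs: min(pend, key=lambda i: jobs[i][0])
hdf = lambda pend, jobs: max(pend, key=lambda i: jobs[i][2] / jobs[i][1])

jobs = [(0, 10, 1), (1, 1, 5), (2, 2, 4)]
print(weighted_flow_time(jobs, fifo), weighted_flow_time(jobs, hdf))  # -> 104 26
```

A long, low-weight job released first makes FIFO pay dearly, while prioritizing by weight-to-length density lets the short, heavy jobs flow through.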
Abstract: In the setting of the modal logic that characterizes modal refinement over modal transition systems, Boudol and Larsen showed that the formulae for which model checking can be reduced to preorder checking, that is, the characteristic formulae, are exactly the consistent and prime ones. This paper presents general, sufficient conditions guaranteeing that characteristic formulae are exactly the consistent and prime ones. It is shown that the given conditions apply to the logics characterizing all the semantics in van Glabbeek's branching-time spectrum.
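To make the preorder side concrete, here is a small sketch (mine, not from the paper) of computing a simulation preorder on a finite labelled transition system as a greatest fixpoint; checking whether a process satisfies the characteristic formula of a state then reduces to a membership test in this relation:

```python
def simulation(trans, states):
    """trans: dict state -> set of (action, successor) pairs.
    Returns the set of pairs (s, t) such that t simulates s, computed
    as the greatest fixpoint of the simulation refinement step."""
    R = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            # every move of s must be matched by a move of t staying in R
            ok = all(any(a2 == a and (s2, t2) in R
                         for (a2, t2) in trans.get(t, set()))
                     for (a, s2) in trans.get(s, set()))
            if not ok:
                R.discard((s, t))
                changed = True
    return R

trans = {'u': {('a', 'v')}, 'w': {('a', 'x'), ('b', 'x')}}
R = simulation(trans, {'u', 'v', 'w', 'x'})
print(('u', 'w') in R, ('w', 'u') in R)   # True False
```

Here w simulates u (it can match the a-move), but u does not simulate w (it has no b-move), so u satisfies w's characteristic formula while w fails u's.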
Abstract: We discuss theory, formulations, and optimization methods for embedded problems. First, similarities and differences between bilevel, semi-infinite, and generalized semi-infinite programs are discussed, along with challenges due to nonconvexity. Discretization methods are presented that guarantee approximately global solutions of the embedded programs despite the presence of nonconvex lower-level programs. Numerical results are shown for test problems. Existing and new formulations from operations research and chemical engineering are presented along with results.
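A minimal sketch of the discretization idea for a semi-infinite program, on a toy example of mine (a Blankenship-Falk-style exchange loop; the inner maximization is done here by grid search, whereas the methods in the talk handle nonconvex lower-level programs with guarantees):

```python
import math

# Toy semi-infinite program:  min x  s.t.  x >= sin(y) for all y in [0, 2*pi].
# The infinitely many constraints are replaced by a finite discretization Y,
# and the most violated constraint is added until none remains.
def solve_sip(tol=1e-6):
    Y = [0.0]                                    # current discretization
    grid = [2 * math.pi * k / 10000 for k in range(10001)]
    while True:
        x = max(math.sin(y) for y in Y)          # min x s.t. x >= sin(y), y in Y
        y_star = max(grid, key=math.sin)         # most violated y (grid search)
        if math.sin(y_star) - x <= tol:
            return x
        Y.append(y_star)

print(round(solve_sip(), 4))
```

The loop converges to the true optimum x = 1 after one exchange; for nonconvex lower-level programs the inner step is the hard part, which is where the guarantees discussed in the talk matter.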
Pierre Karpman, a former LIX doctoral student supervised by Daniel Augot and Pierre-Alain Fouque, is part of an international team that has just demonstrated a major flaw in the SHA-1 cryptographic algorithm. The attack required the equivalent of 6,500 years of computation and makes it possible to produce forged documents. It has significant industrial consequences, since SHA-1 is still used to secure websites (SSL/TLS) and email (PGP/GPG), and in everyday tools such as Git.
Full details of the attack are available on the collaboration's website:
Faced with concerns over the correctness of safety-critical systems, we should be able to take comfort that some parts of them may be proved correct: that is, we should be able to trust proofs. However, the methods by which formal proofs are built today make trusting them difficult. While proof assistants are used to build formal proofs, those proofs are often locked to the technology behind that assistant. Formal proofs produced by one proof assistant usually cannot be checked and trusted by another proof assistant, nor by a future version of the same assistant. Thus, one of the components of the scientific method that ensures trust—reproducibility—is missing from proof assistants today. Building on recent developments in the theory of proof, I will present the Foundational Proof Certificate framework for defining the semantics of proof evidence. Since such definitions are executable, it is possible to build proof checkers that can check a wide range of formal proofs in both classical and intuitionistic versions of logic and arithmetic. In this scheme, the logic engine that executes such definitions is the only thing that needs to be trusted. Since such a logic engine is based on well-known computational logic topics, anyone can write a logic engine in their choice of programming language and hardware in order to build a checker they trust. Possible consequences of employing this framework include much richer networking of existing provers as well as enabling both globe-spanning libraries and marketplaces of proofs.
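This is not the FPC framework itself, but a toy example of the underlying principle that only the checker need be trusted: a few lines of code suffice to verify propositional resolution refutations, regardless of which prover produced them:

```python
# Toy proof checker: verifies a propositional resolution refutation.
# Literals are nonzero ints (negation = sign); clauses are frozensets.
def resolve(c1, c2, pivot):
    """Resolve two clauses on `pivot`, or return None if illegal."""
    if pivot not in c1 or -pivot not in c2:
        return None
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_refutation(axioms, steps):
    """Each step is (i, j, pivot): resolve derived clauses i and j.
    Returns True iff every step is legal and the last clause is empty."""
    derived = [frozenset(c) for c in axioms]
    for i, j, pivot in steps:
        r = resolve(derived[i], derived[j], pivot)
        if r is None:
            return False
        derived.append(frozenset(r))
    return len(derived[-1]) == 0

# (p or q), (not p or q), (not q) is unsatisfiable:
axioms = [{1, 2}, {-1, 2}, {-2}]
steps = [(0, 1, 1),      # resolve on p -> {q}
         (3, 2, 2)]      # resolve on q -> {} (empty clause)
print(check_refutation(axioms, steps))  # True
```

The proof evidence (`steps`) is a certificate independent of any particular prover; only `check_refutation` must be trusted, and it is small enough to reimplement in any language.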
Gustavo Dias will defend his doctoral thesis on Tuesday, 24 January at 2:00 pm in room Gilles Kahn. His work, entitled "Symmetries and Distances: Two Fascinating Challenges in Mathematical Programming", will be presented before a jury composed of:
- Frédéric Messine (Reviewer)
- Sourour Elloumi (Examiner)
- Amélie Lambert (Examiner)
- Fabio Furini (Examiner)
- Leo Liberti (Thesis advisor)
- Nelson Maculan (Thesis co-advisor)
Abstract: Originally studied by Conway and Coxeter, friezes appeared in various recreational mathematics publications in the 1970s. More recently, in 2015, Baur, Parsons, and Tschabold constructed periodic infinite friezes and related them to matching numbers in the once-punctured disk and annulus. In this paper, we study such infinite friezes with an eye towards cluster algebras of type D and affine A, respectively. By examining infinite friezes with Laurent polynomial entries, we discover new symmetries and formulas relating the entries of these friezes to one another. Lastly, we present a correspondence between the classical matching tuples of Broline, Crowe, and Isaacs and combinatorial interpretations of elements of cluster algebras from surfaces.
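As a concrete illustration (my own, covering only the classical Conway-Coxeter case, not the infinite friezes of the paper), a frieze pattern can be generated from a quiddity sequence by the unimodular diamond rule bc - ad = 1; for the fan triangulation of a pentagon the pattern closes up after two nontrivial rows:

```python
from fractions import Fraction

def frieze_rows(quiddity):
    """Generate the rows of a Conway-Coxeter frieze from a quiddity
    sequence (period = its length), starting from a row of 1s and
    applying the diamond rule b*c - a*d = 1 until 1s reappear."""
    n = len(quiddity)
    rows = [[Fraction(1)] * n, [Fraction(q) for q in quiddity]]
    while not all(x == 1 for x in rows[-1]):
        prev, cur = rows[-2], rows[-1]
        nxt = [(cur[i] * cur[(i + 1) % n] - 1) / prev[(i + 1) % n]
               for i in range(n)]
        rows.append(nxt)
    return rows

# Quiddity of a fan triangulation of the pentagon: (3, 1, 2, 2, 1)
# (number of triangles incident to each vertex).
for row in frieze_rows((3, 1, 2, 2, 1)):
    print([int(x) for x in row])
```

All intermediate entries come out as positive integers even though the recurrence divides, which is the integrality phenomenon that connects friezes to the Laurent phenomenon for cluster algebras.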
Abstract: This talk will explain the Higher Dimensional Automata (HDA) model of concurrency, emphasising the fact that events/activities have duration. My view of HDAs is more computer-science-oriented than geometrical, and thus I explain how transitions become first-class citizens, on a par with nodes in standard finite state machines. Finite state machines are the models on which standard modal logics are interpreted: propositional formulas hold in states, while the modalities move along transitions from state to state. In comparison, I will present Higher Dimensional Modal Logic (HDML) as a natural extension of modal logic that treats transitions as first-class citizens and also accounts for concurrency. I will describe an attempt at proving the completeness of HDML over HDAs. The difficulty of obtaining completeness may stem from the fact that HDML is close to ST structures, which are the natural extension of the configuration structures of van Glabbeek and Plotkin obtained by treating transitions as first-class citizens, i.e., by making the notion of "during" explicit. My results suggest that ST structures do not capture all HDAs, but only sculptures. Sculptures are strange and elusive: they are meant to capture Pratt's intuition about how to build HDAs, yet they do not cover all HDAs. This raises the question of whether this is why the completeness of HDML fails. Thus, if we pinpoint where sculptures sit with respect to HDAs, we may be able to understand where completeness fails for HDML. Note that this talk will be given in a manner understandable also for non-experts in HDA or HDML, though some familiarity with modal logics or with event-based models of concurrency would help.
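For non-experts, here is a minimal sketch (my own) of the "standard" setting the talk contrasts HDML with: a model checker for basic modal logic over a finite state machine viewed as a Kripke structure, where propositional formulas hold in states and the modalities move along transitions:

```python
# Formulas are tuples: ('atom', p), ('not', f), ('and', f, g),
# ('dia', f) = "some successor satisfies f", ('box', f) = "all do".
def holds(model, state, phi):
    R, V = model                       # transition relation, valuation
    op = phi[0]
    if op == 'atom':
        return phi[1] in V[state]
    if op == 'not':
        return not holds(model, state, phi[1])
    if op == 'and':
        return holds(model, state, phi[1]) and holds(model, state, phi[2])
    if op == 'dia':
        return any(holds(model, t, phi[1]) for t in R.get(state, []))
    if op == 'box':
        return all(holds(model, t, phi[1]) for t in R.get(state, []))
    raise ValueError(op)

R = {'s0': ['s1', 's2'], 's1': [], 's2': ['s1']}
V = {'s0': set(), 's1': {'p'}, 's2': set()}
print(holds((R, V), 's0', ('dia', ('atom', 'p'))))   # True: s1 satisfies p
print(holds((R, V), 's0', ('box', ('atom', 'p'))))   # False: s2 does not
```

Note how the transitions here are mere steps the modalities pass over; the point of HDML is precisely to make transitions themselves carry formulas, so that "during an event" becomes expressible.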