Deep inference is a paradigm for designing deductive proof systems. Inference rules in such systems can rewrite arbitrarily deep inside formulas. This is very different from more traditional formalisms, like the sequent calculus or natural deduction, where formulas are always decomposed along their main connective.
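As a small illustration (a sketch, assuming the usual switch rule s of the calculus of structures, a standard deep inference formalism), a rule may be applied inside an arbitrary formula context S{ }, not only at the root of a formula:

```latex
% The switch rule s: the premise is above the line, the conclusion below,
% and S{ } stands for an arbitrary formula context.
\[
  \frac{S\{(A \vee B) \wedge C\}}{S\{(A \wedge C) \vee B\}}\;\mathsf{s}
\]
% An instance acting deep inside a larger formula:
\[
  \frac{D \vee \big((A \vee B) \wedge C\big)}{D \vee \big((A \wedge C) \vee B\big)}\;\mathsf{s}
\]
```

In a sequent calculus, by contrast, the rule for the outermost disjunction would have to be applied before anything inside the formula could be touched.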

On the one hand, this provides much more freedom in the design of proof systems; on the other hand, it causes a breakdown of many important proof-theoretical results. Consequently, early research in deep inference went into re-establishing traditional results, such as cut elimination, that one would expect from a deductive proof system.

Since then, many new results for deep inference have been obtained, in particular:

- proof systems for logics that cannot be expressed in traditional sequent systems,
- proof systems whose inference rules are all local,
- new notions of normalization, in addition to cut elimination and focusing,
- type systems for optimised versions of the lambda-calculus that are not typable in traditional formalisms,
- analytic (cut-free) proofs that are exponentially shorter than those in Gentzen systems.

Besides that, deep inference has inspired new notions of proof nets and semantics for proofs, as well as new research in proof complexity.

The purpose of this workshop is to present this fast-growing field in a coherent, easily accessible way to other communities in all areas of logic in computer science, and to bring together researchers in the area of deep inference to exchange ideas and discuss their current work.

An up-to-date bibliography on deep inference can be found here.

TBA

TBA

The workshop is part of FLoC 2018.