generated from nerf/texTemplate
Compare commits
2 commits
9124739cca ... 88b294c767
Author | SHA1 | Date
---|---|---
| 88b294c767 |
| 666b13385c |
13 changed files with 1085 additions and 349 deletions
2 .gitignore vendored
@ -308,6 +308,6 @@ TSWLatexianTemp*
# Nix buid output
/result
/src/Unterschrift_dennis.png
# language correction
.aspell.*
16 src/Main.tex
@ -5,7 +5,7 @@
\Subject{see title}
\Keywords{keyword}
\end{filecontents*}
\documentclass[DIV=calc,fontsize=12pt,draft=true,parskip]{scrartcl}
\documentclass[DIV=calc,fontsize=12pt]{scrartcl}
\usepackage[a-2b,mathxmp]{pdfx}[2018/12/22]
%\usepackage{pdfmanagement-testphase}
%\DeclareDocumentMetadata{
@ -71,6 +71,7 @@
\newtheorem{notation}[theorem]{Notation}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\newtheorem*{remark*}{Remark}

\newcommand*{\fancyrefthmlabelprefix}{thm}
\newcommand*{\fancyreflemlabelprefix}{lem}
@ -175,16 +176,21 @@

% \AtBeginDocument{\DeclareMathSymbol{→}{\mathbin}{symbols}{"21}} % I don't know why this does not work
\title{A uniform equivariant model for cubical Type Theory}
\title{A Uniform Equivariant Model for Cubical Type Theory}
\author{Dennis Frieberg}
\date{\today}
\date{2024-08-09}
\begin{document}
\maketitle
\maketitle
\newpage
% \subfile{Preliminaries/erklaerung}
\newpage
\tableofcontents
\newpage
\subfile{Preliminaries/Introduction}
\newpage
\subfile{Preliminaries/Preliminaries}
\newpage
\subfile{main/Mainpart}
\listoftodos
% \printindex
\printbibliography
@ -5,7 +5,7 @@
\newcommand*{\LF}{\ensuremath{\mathrm{L}}\xspace}
\newcommand*{\RF}{\ensuremath{\mathrm{R}}\xspace}
\renewcommand*{\d}{\ensuremath{\mathbf{d}}\xspace}
\section{Algebraic Weak Factorization Systems}
\subsection{Algebraic Weak Factorization Systems}
In this section, we will revisit the basic ideas of algebraic weak factorization systems (awfs).
We won't use them much explicitly through this paper, but we need one major result about them.
Also, while we don't talk about them explicitly, their ideas permeate through most of the arguments.
@ -8,15 +8,15 @@
%TODO connections

The first question we need to answer is what site to consider. In the simplicial world there is more
or less a consesus, what exactly the site should be. In the cubical world there are multiple models
with slightly different sites. Our main protagonist will be the cartesian model, but we will also
need to introduce the Dedekind model, as it will appear in an argument later. For a comperative
or less a consensus, what exactly the site should be. In the cubical world there are multiple models
with slightly different sites. Our main protagonist will be the Cartesian model, but we will also
need to introduce the Dedekind model, as it will appear in an argument later. For a comparative
analysis we direct the reader to \cite{cavalloUnifyingCubicalModels2016} and \cite{ABCHFL}.

\subsubsection{Cartesian cubes}

The most uniform description is probably in the form of Lawvere theories,
but our site has a more direct description \cite{mathematicalsciencesresearchinstitutemsriEquivariantUniformKan2020}.
but our site has a more direct description \cite{riehlVid}.

\begin{definition}
Let \(\mathrm{FinSet}\) be the category whose objects are the subsets of \(ℕ\), of the form \(\{0,..,n\}\), for some
@ -25,13 +25,13 @@ but our site has a more direct description \cite{mathematicalsciencesresearchins

\begin{remark}
Up to equivalence this is the category of finite sets. Choosing a skeleton will circumvent later size issues, but
for convenience we will pretend that these are finite sets and not always reindex our sets to have an initial segment
for convenience we will pretend that these are finite sets and not always re-index our sets to have an initial segment
of \(ℕ\).
\end{remark}

\begin{definition}\label{def:cartCubeSet}
Let \( □ ≔ {\mathrm{FinSet}\rlap{\raisebox{0.75ex}{\rotatebox{90}{*}}}\raisebox{-0.3ex}{\rotatebox{90}{*}}}^\mathsf{op}\)
the opposite category of double-pointed finite sets.
be the opposite category of double-pointed finite sets.
\end{definition}

An alternative definition would be
@ -41,13 +41,13 @@ An alternative definition would be
\end{definition}

\begin{remark}
As in light of \fref{def:cartCubeSet} and \fref{def:cartCubeTheory} we sometimes talk about morphisms
In light of \cref{def:cartCubeSet} and \cref{def:cartCubeTheory} we sometimes talk about morphisms
as functions \(X + 2 → Y + 2\) preserving both points in \(2\) or equivalently ordinary functions \(X → Y+2\).
\end{remark}

We will now spend some time exploring why this might be viewed as the cube category.
We write \(\{⊥|a,b,c|⊤\}\) for double pointed finite sets, where \(⊥\) and \(⊤\) are the
two distunguished points. A cube of dimension \(n\) is now interpreted as such a set
two distinguished points. A cube of dimension \(n\) is now interpreted as such a set
with \(n+2\) (\(n\) without the distinguished points) many elements. We think of these
as dimension variables. We could for example think about the zero dimensional cube
(a point) \(\{⊥|⊤\}\) and the 2 dimensional cube (a square) \(\{⊥|a,b|⊤\}\).
@ -57,13 +57,13 @@ also exactly what we get, a fitting map in \( \FinSetOp \) corresponds to a func
In the other direction we have four maps. These encode the corners. For example the corner “\((⊥,⊥)\)”
gets encoded by sending \(a\) and \(b\) to \(⊥\).

Lets increase the dimensions a bit and look at the maps from a 1 dimensional cube (a line segment)
Let us increase the dimensions a bit and look at the maps from a 1 dimensional cube (a line segment)
\(\{⊥|x|⊤\}\) into the square. These correspond to functions from \(\{a,b\} → \{⊥,x,⊤\}\). We
can think of this as a dimensional description of a line in a square. We have 2 dimensions in which
the line can move \(a\) and \(b\). And in this two dimennsions the line can either be constant (sending
the line can move \(a\) and \(b\). And in these two dimensions the line can either be constant (sending
this dimension to \(⊥\) or \(⊤\)), or can advance along its dimension (sending the dimension to \(x\)).
So we get the degenerate corner points (as lines), by sending both of \(a\) and \(b\) to \(⊥\) or \(⊤\).
We get the for sides of the square by sending one of \(a\) or \(b\) to \(⊥\) or \(⊤\). The two sides
So we get the degenerate corner points (as lines), by sending both \(a\) and \(b\) to \(⊥\) or \(⊤\).
We get the four sides of the square by sending one of \(a\) or \(b\) to \(⊥\) or \(⊤\). The two sides
that run along the dimension \(a\) send \(a\) to \(x\).
One function is left, sending both \(a\) and \(b\) to \(x\). This is the line that advances in both
dimensions. Or in other words the diagonal.
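To make the count explicit, here is a small illustrative tally (a reading aid only, in the notation above; the grouping labels are not taken from the source):
\begin{itemize}
  \item the four constant lines (the corners): \(a, b ↦ ⊥\); \(a, b ↦ ⊤\); \(a ↦ ⊥, b ↦ ⊤\); \(a ↦ ⊤, b ↦ ⊥\);
  \item the four sides: exactly one of \(a\), \(b\) is sent to \(x\), the other to \(⊥\) or \(⊤\);
  \item the diagonal: \(a, b ↦ x\).
\end{itemize}
Altogether these are the \(3^2 = 9\) functions \(\{a,b\} → \{⊥,x,⊤\}\).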
@ -73,7 +73,7 @@ dimensions. Or in other words the diagonal.
important later.
\end{remark}

As we will need some of these element througout the document again we will give them names.
As we will need some of these element throughout the document again we will give them names.
\begin{definition}[Notation for cubes]\label{def:cubeNotation}
We call:
\begin{itemize}
@ -87,14 +87,14 @@ As we will need some of these element througout the document again we will give
\begin{figure}
\centering
\includesvg[inkscapearea=nocrop,inkscapedpi=500]{figures/square_example_picture.svg}
\caption{A square with some lines maped into it}
\caption{A square with some lines mapped into it}
\todo[inline]{fix 0 to \(⊥\) and same for 1}
\end{figure}


Also while \(□\) is not a strict Reedy-category (as \(Δ\) is) it is almost, namely an Eilenberg-Zilber category.
So most of Reedy theory from the simplicial case can be salvaged. This will play an integral part in the
prove that our model structure on \□ is equivalent to spaces.
Also while \(□\) is not a strict Reedy-category (as \(Δ\) is) it almost is, more precisely it is an Eilenberg-Zilber category.
Most of Reedy theory from the simplicial case can be salvaged. This will play an integral part in the
proof that our model structure on \□ is equivalent to spaces.

\begin{definition}[generalized Reedy-category] %TODO add cite nlab
A \emph{generalized Reedy category} is a category \C together with two
@ -105,13 +105,13 @@ prove that our model structure on \□ is equivalent to spaces.
\item every non-isomorphism in \(\C_-\) lowers degree,
\item every isomorphism in \C preserves degree,
\item \(\C_+ ∩ \C_-\) is the core of \C
\item every morphism factors as a mapn in \(\C_-\) followed by a map in \(\C_+\), uniquely up
\item every morphism factors as a map in \(\C_-\) followed by a map in \(\C_+\), uniquely up
to isomorphism
\item if \(f ∈ \C_-\) and \(θ\) is an isomorphism such that \(θf = f\), then \(θ = \id\)
\end{enumerate}
\end{definition}

Which will be an important concept later, but in our special case we can do even better
This will be an important concept later, but in our special case we can do even better

\begin{definition}[Eilenberg-Zilber category] %TODO add cite nlab
An \emph{Eilenberg-Zilber category} or short \emph{EZ-category} is a small category \C
@ -129,7 +129,15 @@ Which will be an important concept later, but in our special case we can do even
\end{definition}

\begin{remark}
Cleary every EZ-category is a generalized Reedy category
Clearly every EZ-category is a generalized Reedy category
\end{remark}

\begin{remark}
A lot of homotopical construction one knows from Reedy categories can be carried out with EZ-categories.
A word of caution: if we try to do the usual Reedy style decomposition of a presheaf into
a cell-complex, such that the attached cells are boundary inclusions in the representables, we will fail.
Instead we need to consider all quotients of boundary inclusions in quotients of representables.
For more details see \cites{riehlTheoryPracticeReedy2014}{shulmanREEDYCATEGORIESTHEIR}{campionCubicalSitesEilenbergZilber2023}.
\end{remark}
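As a hedged gloss on the preceding remark (the subgroup notation \(H ≤ \mathrm{Aut}(c)\) is an assumption about the cited sources, not taken from this text), the cells one has to allow are of the form
\begin{equation*}
  ∂(\Yo c)/H \;→\; (\Yo c)/H, \qquad c ∈ \C,\; H ≤ \mathrm{Aut}(c)\text{,}
\end{equation*}
rather than only the plain boundary inclusions \(∂(\Yo c) → \Yo c\).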
@ -149,7 +157,7 @@ Before we give a proof we will give a technical lemma about absolute pushouts of
|
|||
be a diagram in a category \C of split epis such that, \(bc' = d_1'd_2\). Then it is already a pushout square.
|
||||
\end{lemma}
|
||||
\begin{proof}
|
||||
We look at an arbitrary cocone
|
||||
We look at an arbitrary co-cone
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
A \arrow[r, two heads, "b"] \arrow[d, two heads, "c"] & B \arrow[d, "e_1"] \\
|
||||
|
@ -164,10 +172,10 @@ Before we give a proof we will give a technical lemma about absolute pushouts of
|
|||
C \arrow[r, "e_2"] & E
|
||||
\end{tikzcd}
|
||||
\end{equation*}
|
||||
We get our candidate map from \(e_1d_1' : D → E\). We need to show that it commuts with \(e_1\) and \(d_1\) as well as
|
||||
\(e_2\) and \(d_2\). The second case \(e_1d_1'd_2 = e_2\) is directly obvious from the diagram.
|
||||
We get our candidate map from \(e_1d_1' : D → E\). We need to show that it commutes with \(e_1\) and \(d_1\) as well as
|
||||
\(e_2\) and \(d_2\). The second case \(e_1d_1'd_2 = e_2\) is immediately obvious from the diagram.
|
||||
To check that \(e_1d_1'd_1 = e_1\) it is enough to check that \(be_1d_1'd_1 = be_1\), as \(b\) is epic. Which is a
|
||||
straight forward diagramm chase, if one adds all the undrawn arrwos to the diagram above. Since \(d_1\), are epic \(e_1d'_1\) is unique in fulfilling already one of the two conditions.
|
||||
straight forward diagram chase, if one adds all the not drawn arrows to the diagram above. Since \(d_1\) are epic, \(e_1d'_1\) is unique in fulfilling already one of the two conditions.
|
||||
\end{proof}
|
||||
|
||||
\begin{remark}
|
||||
|
@ -186,23 +194,23 @@ As we will use the following statement multiple times in the proof we put it int
|
|||
\end{lemma}
|
||||
\begin{proof}
|
||||
Let \(f\) be an epi in \(□\). It is thus a monomorphism in \(\FinSet\). We construct a retraction
|
||||
\(g\) (that will be a split in \(□\)). Let \(g(x)\) by the unique pre image of \(f\) if it is not empty. If it is empty we choose
|
||||
any element (for example \(⊥\)). It is trivial to verify that this is a retraction. And as \(f\) needs to
|
||||
preserve the destinguished points, \(g\) does to.
|
||||
\(g\) that will be a split in \(□\). Let \(g(x)\) by the unique pre-image of \(f\) if it is not empty. If it is empty we choose
|
||||
any element (for example \(⊥\)). It is trivial to verify that this is a retraction. As \(f\) needs to
|
||||
preserve the distinguished points, \(g\) does too.
|
||||
\end{proof}
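For concreteness, a minimal illustrative instance of this recipe (the particular sets are chosen ad hoc): take the monomorphism
\begin{equation*}
  f : \{⊥|a|⊤\} \hookrightarrow \{⊥|a,b|⊤\}, \qquad g(⊥) = ⊥,\; g(⊤) = ⊤,\; g(a) = a,\; g(b) = ⊥\text{,}
\end{equation*}
where \(g\) picks the unique preimage where one exists and \(⊥\) for \(b\); then \(gf = \id\), and read in \(□\) the corresponding epi is split by \(g\).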
|
||||
|
||||
\begin{proof}[Proof of theorem \ref{thm:CisEZ}]
|
||||
\begin{proof}[Proof of \cref{thm:CisEZ}]
|
||||
Property \ref{EZ:1} follows from basic facts about surjectivity and injectivity between finite sets.
|
||||
Property \ref{EZ:2} needs us to find a (epi,mono) factorization in \(\FinSet\). This factorization
|
||||
can be obtained in the same way one would obtain it \(\Set\), by factoring an morphism through its image.
|
||||
It is straight forward to verify, that the distinguished points are preserved by those maps. And the
|
||||
that the epi (in \(□\)) splits follows from \ref{EZ:split}.
|
||||
fact that the epi (in \(□\)) splits follows from \cref{EZ:split}.
|
||||
|
||||
For property \ref{EZ:3}, we only need find a pushout and splits of the form described in \ref{absolutePushout}.
|
||||
So let \(c : A → C\) and \(b : A → B\) spilt epis. We will do this by
|
||||
constructing the dual digram in \(\FinSet\). We will keep the names for the morphisms. As pullbacks over non-empty diagrams
|
||||
For property \ref{EZ:3}, we only need find a pushout and splits of the form described in \cref{absolutePushout}.
|
||||
So let \(c : A → C\) and \(b : A → B\) split epis. We will do this by
|
||||
constructing the dual diagram in \(\FinSet\). We will keep the names for the morphisms. As pullbacks over nonempty diagrams
|
||||
work in \(\FinSet\) like in \(\Set\) we can construct a pullback. Now we must construct the
|
||||
dotted arrows that satisfy the dual properties of lemma \ref{absolutePushout}.
|
||||
dashed arrows that satisfy the dual properties of \cref{absolutePushout}.
|
||||
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
|
@ -214,7 +222,7 @@ As we will use the following statement multiple times in the proof we put it int
|
|||
their image to \(0\).
|
||||
We will show the required commutativity by case distinction
|
||||
Let \(x ∈ C\). Note that \(x ∈ \im c^*b\) if and only if \(c(x) ∈ \im b\) as this is a pullback diagram. Thus it is
|
||||
immediatly clear that if \(x\) is not contained in \(\im c^*b\) that \((b^*c)(c^*b)'(x) = 0 = b'c(x)\). Let us turn to
|
||||
immediately clear that if \(x\) is not contained in \(\im c^*b\) that \((b^*c)(c^*b)'(x) = 0 = b'c(x)\). Let us turn to
|
||||
the case that \(x ∈ \im c^*b \). As restricted to those \(x\) the map \(c^*b\) is surjective it suffices to check
|
||||
\( b'c(c^*b) = (b^*c)(c^*b)'(c^*b) \).
|
||||
\begin{equation*}
|
||||
|
@ -222,27 +230,31 @@ As we will use the following statement multiple times in the proof we put it int
|
|||
\end{equation*}
|
||||
|
||||
|
||||
Thus the corresponding diagram in \(□\) fulfills the requirements of lemma \ref{absolutePushout}, and is thus
|
||||
Therefore the corresponding diagram in \(□\) fulfills the requirements of \cref{absolutePushout}, and is thus
|
||||
an absolute pushout square.
|
||||
\end{proof}
|
||||
|
||||
Next we start looking at the presheaf topos on our cube category \(\hat{□}\).
|
||||
We will recall some important concepts, and give some construtcion, that we will
|
||||
use later. Also we introduce some notation.
|
||||
Next we start looking at the presheaf topos on our cube category \(\□\).
|
||||
We will recall some important concepts and give some constructions, that we will
|
||||
use later. We also introduce some notation.
|
||||
|
||||
\begin{definition}[Notation for Cubical Sets]\label{def:notationCubeSets}
|
||||
We fix the following notations for Cartesian cubical sets:
|
||||
\begin{itemize}
|
||||
\item \(* ≔ \Yo{[0]}\)
|
||||
\item \(\I ≔ \Yo{[1]}\)
|
||||
\item \(δ_k ≔ \Yo \mathrm{d}_k\)
|
||||
\item \(δ_k ≔ \Yo \mathrm{d}_k\) with \(k ∈ \{0,1\}\)
|
||||
\item \(δ ≔ ⟨\id_{\I},\id_{\I}⟩ : \I → \I × \I\)
|
||||
\end{itemize}
|
||||
\end{definition}
|
||||
|
||||
|
||||
\subsubsection{Dedekind cubes}
|
||||
We will also encounter the so called Dedekind cubes. There are also multiple ways to look at them. We will present
|
||||
multiple definitions, but skip the equivalence proofs as they don't bear much inside into the subject.
|
||||
We will also encounter the so-called Dedekind cubes. As they are not in the focus
|
||||
of this discussion, we give the needed definitions, but omit most of the proofs of their
|
||||
properties.
|
||||
There are also multiple ways to look at them. We will present
|
||||
multiple definitions, but skip the equivalence proofs as they don't bear much insight into the subject.
|
||||
|
||||
\begin{definition}\leavevmode
|
||||
\begin{itemize}
|
||||
|
@ -260,22 +272,21 @@ multiple definitions, but skip the equivalence proofs as they don't bear much in
\end{definition}

Unraveling this definition gives us a quite similar description to our description of Cartesian cubes.
Let \(F\) be the functor that assigns to a set its free lattice. We have then as objects
Let \(F\) be the functor that assigns to a set its free lattice. We then have as objects
natural numbers (or up to equivalence finite sets). And again we will write elements of these as lower
roman letters instead of natural numbers to keep the confusion between sets and their elements minimal.
A morphism \(m → n\) is a function
from \(n → F(m)\). This adds a few more degeneracy maps, but the general geometric intuition stays the
same. The elments of of an object are dimensions, and maps do give us for every dimension in the target
a term, that describes the behaiviour of our domain in that particular direction. If we look at an
same. The elements of an object are dimensions, and maps do give us for every dimension in the target
a term, that describes the behavior of our domain in that particular direction. If we look at an
example again one might ask what the difference is, between the new degeneracies map from a square
to the interval in comparison to the old one. We can view these as degenrate squares where we map the top
to the interval in comparison to the old one. We can view these as degenerate squares where we map the top
and right (or bottom and left) side along the interval and the other two sides to \(⊥\) (or \(⊤\)).
This wasn't possible before and will make a drastic difference once we start considiring filling and
composition problems.
This wasn't possible before and will make a drastic difference once we start considering lifting problems.
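Concretely, as an illustrative sketch in the free-lattice description above (the names \(γ_∧\), \(γ_∨\) are ad hoc and not taken from the source): the two connection degeneracies from the square to the interval are given by the lattice terms
\begin{equation*}
  γ_∧(x) ≔ a ∧ b \qquad\text{and}\qquad γ_∨(x) ≔ a ∨ b\text{,}
\end{equation*}
where \(x\) is the dimension variable of the interval and \(a\), \(b\) are those of the square, while the Cartesian-style degeneracies correspond to the terms \(a\) and \(b\) alone.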
\begin{figure}
|
||||
\missingfigure{draw pictures for this}
|
||||
\caption{A squre degenrated with a connection. The upper and right side are equal while the left and bottom side is degenrated to point.}
|
||||
\caption{A square degenerated with a connection. The upper and right side are equal while the left and bottom side is degenerated to point.}
|
||||
\end{figure}
|
||||
|
||||
\todo{add figure for \(2→2\) degeneracy map}
|
||||
|
@ -284,11 +295,11 @@ composition problems.
|
|||
The Dedekind cubes are an EZ-category
|
||||
\end{proposition}
|
||||
\begin{proof}[Proof sketch:]
|
||||
This can be proven by an analog argument as \fref{thm:CisEZ}
|
||||
This can be proven by an analogous argument as \fref{thm:CisEZ}
|
||||
\end{proof}
|
||||
|
||||
If it is clear from the context we will use the notation from \fref{def:cubeNotation} and \fref{def:notationCubeSets} for
|
||||
the anologous objects of the Dedekind cubes.
|
||||
the analogous objects of the Dedekind cubes.
|
||||
|
||||
\todo{think about if I want to write down the proof}
|
||||
|
||||
|
|
66 src/Preliminaries/Introduction.tex Normal file
@ -0,0 +1,66 @@
|
|||
\documentclass[../Main.tex]{subfiles}
|
||||
\begin{document}
|
||||
\section{Introduction}
|
||||
In recent years there has been a rise in interest in homotopy type theory, a form of dependent type theory
|
||||
that treats identities as homotopies between the relevant objects. Notably there is an axiom
|
||||
that establishes an equivalence between homotopy equivalences of two types and identities between
|
||||
these types. The model structure that was built by Voevodsky to prove consistency of these axioms, is
|
||||
built on simplicial sets, and the well known Kan-Quillen model structure\cite{kapulkinSimplicialModelUnivalent2021}.
|
||||
One might even say that this is not only a homotopical version of type theory, but the language of
|
||||
homotopy types. This aligns very well with the interpretation: types are interpreted as fibrations.
|
||||
But from a constructive view point this was not satisfying as parts of the interpretation were based
|
||||
on classical logic. A bit later it was shown that this model inherently needs some classical principles
|
||||
\cite{bezemNonConstructivityKanSimplicial2015}.
|
||||
|
||||
One of the approaches to get a constructive model was to go over to cubical models, the first of these
|
||||
is due to Bezem, Coquand, and Huber \cite{bezemModelTypeTheory2014}. From there different variants
|
||||
of cubical models were considered \cites{CCHM,angiuliCartesianCubicalComputational2018a,coquandHigherInductiveTypes2018}
|
||||
one of them the Cartesian cubes \cite{awodeyCubicalModelHomotopy2018}. These are all different model
|
||||
constructions with slightly different attached type theories. There has been a lot of work to unify
|
||||
some of these constructions \cite{ABCHFL}.
|
||||
|
||||
It is not at all clear that these give rise to model structures, but also not very surprising considering
|
||||
the languages from which these are built try to capture homotopical notions. Now one might ask if
|
||||
these induced homotopy theories are again equivalent to the original Kan-Quillen model structure.
|
||||
For this we have multiple negative results. These results are not well documented and the
|
||||
reference I could find is on this mailing list \cite{QuillenModelStructure}. For the model structure
|
||||
induced on the Dedekind cubes this question remains open. The equivariant model structure on Cartesian
|
||||
cubes discussed in this thesis is the
|
||||
first construction for which this equivalence is known. A little later Cavallo and Sattler found
|
||||
that in the case with only one connection the equivalence to spaces can also be shown
|
||||
\cite{cavalloRelativeEleganceCartesian2022}.
|
||||
|
||||
This thesis gives a proof that the uniform equivariant model of cubical type theory is equivalent to spaces.
|
||||
This model came out of the joint work of Awodey, Cavallo, Coquand, Riehl, and Sattler. The thesis
|
||||
is split into three parts. The first part consists of short introductions to topics of category theory
|
||||
that one might not be familiar with. The reader acquainted with these might safely skip them.
|
||||
The second part introduces the main object, the topos \(\□\), and the equivariant premodel structure on it.
|
||||
It then continues by constructing universes for those fibrations, and uses them to prove that the premodel
|
||||
structure is indeed a model structure. The third part shows the equivalence of this model structure to spaces.
|
||||
|
||||
In section two we first describe the site and afterwards the equivariant premodel structure.
|
||||
We then spend most of the section on the construction of fibrant universes. This mostly follows
|
||||
\cite{awodeyCartesianCubicalModel2023}. The main idea of this argument is to internalize
|
||||
the structure of the algebraic factorization system, attached to the fibrations. These
|
||||
are called classifying types. These can then be used to extract
|
||||
universes from Hofmann-Streicher universes.
|
||||
|
||||
In section three we show equivalence to spaces and follow the argument sketched in
|
||||
\cite{riehlVid}. To compare the simplex category \(Δ\) to the Cartesian cubes \(□\), we
|
||||
pass through the Dedekind cubes. We establish a pair of left Quillen functors that
|
||||
induce an equivalence of the desired homotopy categories. An important fragment of the
|
||||
proof was laid out in \cite{sattlerIdempotentCompletionCubes2019}.
|
||||
|
||||
When this project started there was no written account on this model, and I tried to remedy
|
||||
this fact.
|
||||
So the content is mainly based on a talk given by Riehl \cite{riehlVid}, the account about
|
||||
premodel structures in \cite[Section 3]{cavalloRelativeEleganceCartesian2022}, and universe
|
||||
construction techniques from \cite{awodeyCartesianCubicalModel2023}. Shortly before this project
|
||||
was finished, an extensive written report on this subject was released by the original creators
|
||||
of this model structure \cite{solution}. I changed some formulations in the universe construction
|
||||
to highlight the similarities in the argument. For example there are now explicit mentions
|
||||
of cubical species in the argument. We also verify an older argument for the equivalence to
|
||||
spaces that is no longer present in \cite{solution} in this form.
|
||||
|
||||
|
||||
\end{document}
|
|
@ -1,37 +1,36 @@
|
|||
\documentclass[../Main.tex]{subfiles}
|
||||
\begin{document}
|
||||
\section{Leibniz Construction}
|
||||
We will see use a well know construction in homotopy theory, to
|
||||
We will use a well-known construction in homotopy theory to
|
||||
elegantly construct a lot of interesting objects, the Leibniz construction.
|
||||
This section will mostly give a definition and some examples to get familiar with this construction.
|
||||
If the reader is already familiar with it, they might skip this section without any problems.
|
||||
|
||||
We start be giving the definition
|
||||
We start by giving the definition.
|
||||
\begin{definition}[Leibniz Construction]
|
||||
Let \(\A\), \(\B\) and \(\C\) be categories and \(\C\) has finite pushouts. Let \(⊗ : \A × \B → \C \) be a bifunctor.
|
||||
Then we define the \emph{Leibniz Construction} \(\hat{⊗} : \arr{\A} × \arr{\B} → \arr{\C}\) to be the functor
|
||||
Let \(\A\), \(\B\) and \(\C\) be categories and \(\C\) have finite pushouts. Let \(⊗ : \A × \B → \C \) be a bifunctor.
|
||||
Then we define the \emph{Leibniz Construction} \(\⊗ : \arr{\A} × \arr{\B} → \arr{\C}\) to be the functor
|
||||
that sends \(f : A → A' \) in \(\A\) and
|
||||
\( g : B → B' \) in \(\B\), to \( f \hat{⊗} g \) as defined as in the following diagram.
|
||||
\( g : B → B' \) in \(\B\), to \( f \⊗ g \) as defined via the following diagram.
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
A ⊗ B \arrow[r, "f ⊗ \id"] \arrow[d, "\id ⊗ g"] \arrow[dr, phantom, "\ulcorner", very near end] & A' ⊗ \mathrlap{B} \phantom{A'} \arrow[ddr, bend left, "\id ⊗ g"] \arrow[d] & \\
|
||||
\phantom{B'} \mathllap{A} ⊗ B' \arrow[drr, bend right, "f ⊗ \id"] \arrow[r] & A ⊗ B' ∐\limits_{A ⊗ B} A' ⊗ B \arrow[dr, dashed ,"f \hat{⊗} g"] & \\
|
||||
\phantom{B'} \mathllap{A} ⊗ B' \arrow[drr, bend right, "f ⊗ \id"] \arrow[r] & A ⊗ B' ∐\limits_{A ⊗ B} A' ⊗ B \arrow[dr, dashed ,"f \⊗ g"] & \\
|
||||
& & A' ⊗ B'
|
||||
\end{tikzcd}
|
||||
\end{equation*}
|
||||
If \(⊗\) is the tensor functor of a monoidal category we also call it the \emph{Leibniz product} or \emph{pushout-product}. If \(⊗\) is
|
||||
the functor application functor we also call it the \emph{Leibniz application}.
|
||||
If \(⊗\) is the tensor functor of a monoidal category, we also call it the \emph{Leibniz product} or \emph{pushout-product}. If \(⊗\) is
|
||||
the functor application functor, we also call it the \emph{Leibniz application}.
|
||||
\end{definition}
|
||||
|
||||
The following examples are true for any category of nice spaces, we will state them for simplicial sets as the reader is probably
|
||||
The following examples are true for any category of nice spaces. We will state them for simplicial sets as the reader is probably
|
||||
familiar with them.
|
||||
|
||||
\begin{example}\label{ex:leib:prism}
|
||||
Let \(δ_k : * → Δ^1\) for \(k ∈ {0,1}\) be one of the endpoint inclusions into the interval and \(i : \∂ Δ^n → Δ^n\) the boundary
|
||||
inclusion of the \(n\)-simplex, then \(δ_k \hat{×} i\) is the inclusion of a prism without its interior and one cap, into the prism.
|
||||
\begin{example}\label{a}
|
||||
Let \(δ_k : * → Δ^1\) for \(k ∈ \{0,1\}\) be one of the endpoint inclusions into the interval and \(i : \∂ Δ^n → Δ^n\) the boundary
|
||||
inclusion of the \(n\)-simplex. Then \(δ_k \× i\) is the inclusion of a prism without its interior and one cap into the prism.
|
||||
\end{example}
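As a hedged low-dimensional illustration of this example (the identification of the pushout with a union of edges is spelled out here only for orientation): for \(n = 1\) the prism is the square \(Δ^1 × Δ^1\), and
\begin{equation*}
  δ_0 \mathbin{\hat{×}} (\∂ Δ^1 → Δ^1) \;:\; (\{0\} × Δ^1) ∪ (Δ^1 × \∂ Δ^1) \hookrightarrow Δ^1 × Δ^1
\end{equation*}
is the inclusion of three of the four edges, the cap at \(0\) together with the two sides, so a filler has to provide the missing cap at \(1\) and the interior.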
|
||||
This gives us a description of a possible set of generating trivial cofibrations. This will be an ongoing theme. For cubical sets we
|
||||
will define trivial cofibrations in this fashion. We can even observe something stronger. If we would replace the boundary inclusion
|
||||
This gives us a description of a possible set of generating trivial cofibrations. This will be an ongoing theme.
|
||||
We can even observe something stronger. If we would replace the boundary inclusion
|
||||
of the \(n\)-simplex with the boundary inclusion of an arbitrary space \(X\). We get the inclusion of the cylinder object of \(X\)
|
||||
without a cap and its interior into the cylinder.
|
||||
\begin{figure}
|
||||
|
@ -40,14 +39,14 @@ without a cap and its interior into the cylinder.
|
|||
\end{figure}
|
||||
|
||||
\begin{observation}
|
||||
If \(f\) and \(g\) are cofibrations then \(f \hat{×} g\) is too. If one of them is a trivial cofibration then so is \(f \hat{×} g\).
|
||||
If \(f\) and \(g\) are cofibrations then \(f \× g\) is too. If \(f\) or \(g\) is a trivial cofibration, then so is \(f \× g\).
|
||||
\end{observation}
|
||||
This has far reaching consequences, and one can define a monoidal model category where this are one of the axioms
|
||||
for the tensor functor. For more
|
||||
detail see \cite{hoveyModelCategories2007}. We are not going to need much of this theory explicitly. But it is worthy to note
|
||||
that all examples of model categories that we are going to encouter are of this form.
|
||||
This has far reaching consequences and one can define the notion of a monoidal model category where this is one of the axioms
|
||||
for the tensor functor. This axiom basically states that the tensor product behaves homotopically. For more
|
||||
detail see \cite{hoveyModelCategories2007}. We are not going to need much of this theory explicitly. But it is worth noting
|
||||
that all examples of model categories that we are going to encounter are of this form.
|
||||
|
||||
We will also encouter the basically same construction as \cref{ex:leib:prism} in another way. We can also get the Cylinder object
|
||||
We will also encounter basically the same construction as \ref{a} in another way. We can also get the Cylinder object
|
||||
functorialy in \Δ, such that the cap inclusions are natural transformations.
|
||||
We can also get the inclusion the Leibniz product produces in a natural manner via the Leibniz application.
|
||||
|
||||
|
@ -56,26 +55,88 @@ We can also get the inclusion the Leibniz product produces in a natural manner
|
|||
\( F @ x ≔ F(x)\) the functor application functor. Let \(\I × (−)\) be the functor that sends \(X\) to its cylinder object (also known as the
|
||||
product with the interval).
|
||||
Let \( δ_k : \Id → \I ⊗ (−) \) be one of the inclusions, the natural transformation that embeds the space in one of the cylinder caps.
|
||||
Let \( \∂ X → X \) be the boundery inclusion of \(X\).
|
||||
Then \( δ_k \hat{@} (∂X → X) \) is the filling of a cylinder with one base surface of shape \(X\).
|
||||
Let \( \∂ X → X \) be the boundary inclusion of \(X\).
|
||||
Then \( δ_k \widehat{@} (∂X → X) \) is the filling of a cylinder with one base surface of shape \(X\).
|
||||
\end{example}
|
||||
This kind of construction will later play an important role in the construction of our desired model categories.
|
||||
And as we will use constructions like this more often we add a bit of notation for it.
|
||||
|
||||
We will also encouter another form of Leibniz application, which has at first glance not to do alot with homotopy. On
|
||||
\begin{notation}
|
||||
Let \(⊗ : \A × \B → \C\) be a bifunctor, \(A\) be an object of \A, \(F : \B → \C\) be a functor , and let \(η : F ⇒ A ⊗ (−)\)
|
||||
be a natural transformation. We write \(η \⊗ (−) ≔ (η ⊗ (−)) \widehat{@} (−)\).
|
||||
\end{notation}
|
||||
|
||||
Here is a property that we will exploit fairly often.
|
||||
\begin{proposition}\label{rem:leibniz:adhesive}
|
||||
Since every topos is adhesive \cite{lackAdhesiveQuasiadhesiveCategories2005}, the pushout-product and the
|
||||
Leibniz application of \(A × (−)\), are stable under monomorphisms in every topos.
|
||||
\end{proposition}
|
||||
|
||||
The Leibniz construction has a dual construction, the Leibniz pullback construction
|
||||
\begin{definition}[Leibniz pullback Construction]
|
||||
Let \(\A\), \(\B\) and \(\C\) be categories and \(\C\) has finite pullbacks. Let \(⊗ : \A × \B → \C \) be a bifunctor.
|
||||
Then we define the \emph{Leibniz pullback construction} \(\widecheck{⊗ }: \arr{\A} × \arr{\B} → \arr{\C}\) to be the functor
|
||||
that sends \(f : A → A' \) in \(\A\) and
|
||||
\( g : B → B' \) in \(\B\), to \( f \widecheck{⊗} g \) as defined as in the following diagram.
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
A ⊗ B
|
||||
\arrow[rrd, "f ⊗ \id",bend left]
|
||||
\arrow[ddr, "\id ⊗ g",bend right]
|
||||
\arrow[rd, "", dashed]
|
||||
& &
|
||||
\\
|
||||
{} & A ⊗ B' \displaystyle\prod_{A'⊗B'} A'⊗ B
|
||||
\arrow[d]
|
||||
\arrow[r]
|
||||
\pb
|
||||
& A' ⊗ \mathrlap{B} \phantom{A'}
|
||||
\arrow[d, "\id ⊗ g"]
|
||||
\\
|
||||
{}
|
||||
& \phantom{B'} \mathllap{A} ⊗ B'
|
||||
\arrow[r, "f ⊗ \id"]
|
||||
& A' ⊗ B'
|
||||
\end{tikzcd}
|
||||
\end{equation*}
|
||||
If \(⊗\) is the exponential functor, we also call it the \emph{Leibniz pullback Hom}, \emph{pullback-power}, or \emph{pullback-hom}.
|
||||
If \(⊗\) is the functor application functor we also call it the \emph{Leibniz pullback application}.
|
||||
\end{definition}
|
||||
Analogous to the notation above, we introduce the following notation.
|
||||
\begin{notation}
|
||||
Let \(⊗ : \A × \B → \C\) be a bifunctor, \(A\) be an object of \A, \(F : \B → \C\) be a functor , and let \(η : F ⇒ A ⊗ (−)\)
|
||||
be a natural transformation. We write \(η \widehat⊗ (−) ≔ (η ⊗ (−)) \widehat{@} (−)\). As it is common in the literature we
|
||||
also write for the special case \(A⊗ (−) = (−)^A\), the following \(η ⇒ (−) ≔ η \widecheck⊗ (−)\).
|
||||
\end{notation}
|
||||
|
||||
Dual to our observation above it will be a theme that the pullback-power sends fibrations to trivial fibrations.
|
||||
Or more formally:
|
||||
\begin{lemma}[{\cite[Lemma 4.10]{riehlTheoryPracticeReedy2014}}]
|
||||
There is an adjunction \({η \× (−) ⊣ η ⇒ (−)}\), between the pushout-product and the pullback-power.
|
||||
\end{lemma}
|
||||
\begin{remark}
|
||||
This holds in much more generality, for details see \cite[Lemma 2.1.15]{solution}
|
||||
\end{remark}
|
||||
\begin{remark}
|
||||
Our main application of this lemma will be, that it gives us the ability to transpose lifting problems
|
||||
between cofibrations and trivial fibrations, to lifting problems between trivial cofibrations and fibrations.
|
||||
\end{remark}
|
||||
|
||||
We will also encounter another form of Leibniz application, which has at first glance not to do a lot with homotopy. On
|
||||
the other hand one might read this as, getting the component of an natural transformation is some sort of a homotopic construction.
|
||||
\begin{lemma}
|
||||
Let \(F,G : A → B \) and \(η : F ⇒ G\) be a natural transformation, and \A has in initial object \(∅\). Let
|
||||
Let \(F,G : A → B \) and \(η : F ⇒ G\) be a natural transformation, and \A has an initial object \(∅\). Let
|
||||
\(F\) and \(G\) preserve the initial object.
|
||||
We write \(∅ → X\) for the unique map with the same type. Then we get \(η \at (∅ → X) = η_X\).
|
||||
\end{lemma}
|
||||
\begin{proof}\todo{this is contravariant and I'm too tired}
|
||||
We will draw the diagram from the definiton
|
||||
\begin{proof}
|
||||
We will draw the diagram from the definition
|
||||
\begin{eqcd*}
|
||||
F (∅) \arrow[r, "F(∅ → X)"] \arrow[d, "η_∅"] \arrow[dr, phantom, "\ulcorner", very near end] & F(X) \arrow[ddr, bend left, "η_X"] \arrow[d] & \\
|
||||
G(∅) \arrow[drr, bend right, "G(∅ → X)"] \arrow[r] & G(∅) ∐\limits_{F(∅)} \mathrlap{F(X)} \phantom{G(∅)} \arrow[dr, dashed ,"f \hat{⊗} g"] & \\
|
||||
G(∅) \arrow[drr, bend right, "G(∅ → X)"'] \arrow[r] & G(∅) ∐\limits_{F(∅)} \mathrlap{F(X)} \phantom{G(∅)} \arrow[dr, dashed ,"η \widehat{@} (∅ → X)" description] & \\
|
||||
& & G(X)
|
||||
\end{eqcd*}
|
||||
If we substitude \(F(\emptyset)\) and \(G(∅)\) with \(∅\) the claim directly follows.
|
||||
If we substitute \(F(\emptyset)\) and \(G(∅)\) with \(∅\) the claim directly follows.
|
||||
\end{proof}
|
||||
% TODO Left adjoint
|
||||
|
||||
|
|
143 src/Preliminaries/MWE.tex Normal file
@ -0,0 +1,143 @@
|
|||
\documentclass[../Main.tex]{subfiles}
|
||||
\begin{document}
|
||||
\subsection{Leibniz Construction}
|
||||
We will use a well-known construction in homotopy theory to
|
||||
elegantly construct a lot of interesting objects, the Leibniz construction.
|
||||
This section will mostly give a definition and some examples to get familiar with this construction.
|
||||
If the reader is already familiar with it, they might skip this section without any problems.
|
||||
We start by giving the definition.
|
||||
\begin{definition}[Leibniz Construction]
|
||||
Let \(\A\), \(\B\) and \(\C\) be categories and \(\C\) have finite pushouts. Let \(⊗ : \A × \B → \C \) be a bifunctor.
|
||||
Then we define the \emph{Leibniz Construction} \(\⊗ : \arr{\A} × \arr{\B} → \arr{\C}\) to be the functor
|
||||
that sends \(f : A → A' \) in \(\A\) and
|
||||
\( g : B → B' \) in \(\B\), to \( f \⊗ g \) as defined via the following diagram.
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
A ⊗ B \arrow[r, "f ⊗ \id"] \arrow[d, "\id ⊗ g"] \arrow[dr, phantom, "\ulcorner", very near end] & A' ⊗ \mathrlap{B} \phantom{A'} \arrow[ddr, bend left, "\id ⊗ g"] \arrow[d] & \\
|
||||
\phantom{B'} \mathllap{A} ⊗ B' \arrow[drr, bend right, "f ⊗ \id"] \arrow[r] & A ⊗ B' ∐\limits_{A ⊗ B} A' ⊗ B \arrow[dr, dashed ,"f \⊗ g"] & \\
|
||||
& & A' ⊗ B'
|
||||
\end{tikzcd}
|
||||
\end{equation*}
|
||||
If \(⊗\) is the tensor functor of a monoidal category, we also call it the \emph{Leibniz product} or \emph{pushout-product}. If \(⊗\) is
|
||||
the functor application functor, we also call it the \emph{Leibniz application}.
|
||||
\end{definition}
|
||||
|
||||
The following examples are true for any category of nice spaces. We will state them for simplicial sets as the reader is probably
|
||||
familiar with them.
|
||||
|
||||
\begin{example}\label{a}
|
||||
Let \(δ_k : * → Δ^1\) for \(k ∈ \{0,1\}\) be one of the endpoint inclusions into the interval and \(i : \∂ Δ^n → Δ^n\) the boundary
|
||||
inclusion of the \(n\)-simplex. Then \(δ_k \× i\) is the inclusion of a prism without its interior and one cap into the prism.
|
||||
\end{example}
|
||||
This gives us a description of a possible set of generating trivial cofibrations. This will be an ongoing theme.
|
||||
We can even observe something stronger. If we would replace the boundary inclusion
|
||||
of the \(n\)-simplex with the boundary inclusion of an arbitrary space \(X\). We get the inclusion of the cylinder object of \(X\)
|
||||
without a cap and its interior into the cylinder.
|
||||
%\begin{figure}
|
||||
% \missingfigure{cylinder inclusion as pushout product}
|
||||
% \caption{cylinder inclusion as Leibniz product}
|
||||
%\end{figure}
|
||||
|
||||
\begin{observation}
|
||||
If \(f\) and \(g\) are cofibrations then \(f \× g\) is too. If \(f\) or \(g\) is a trivial cofibration, then so is \(f \× g\).
|
||||
\end{observation}
|
||||
This has far reaching consequences and one can define the notion of a monoidal model category where this is one of the axioms
|
||||
for the tensor functor. This axiom basically states that the tensor product behaves homotopically. For more
|
||||
detail see \cite{hoveyModelCategories2007}. We are not going to need much of this theory explicitly. But it is worth noting
|
||||
that all examples of model categories that we are going to encounter are of this form.
|
||||
|
||||
We will also encounter basically the same construction as \cref{a} in another way. We can get the Cylinder object
|
||||
functorialy in \Δ, such that the cap inclusions are natural transformations.
|
||||
We can also get the inclusion the Leibniz product produces in a natural manner via the Leibniz application.
|
||||
|
||||
\begin{example}\label{ex:leib:functorialCylinder}
|
||||
Let the functor application functor \( @ : \C^{\C} × \C → \C \) defined as
|
||||
\( F @ x ≔ F(x)\). Let \(\I × (−)\) be the functor that sends \(X\) to its cylinder object (also known as the
|
||||
product with the interval).
|
||||
Let \( δ_k : \Id → \I ⊗ (−) \) be one of the cap inclusions, the natural transformation that embeds the space in one of the cylinder caps.
|
||||
Let \( \∂ X → X \) be the boundary inclusion of \(X\).
|
||||
Then \( δ_k \widehat{@} (∂X → X) \) is the filling of a cylinder with one base surface of shape \(X\).
|
||||
\end{example}
|
||||
This kind of construction will later play an important role in the construction of our desired model categories.
|
||||
And as we will use constructions like this more often we add a bit of notation for it.
|
||||
|
||||
\begin{notation}
|
||||
Let \(⊗ : \A × \B → \C\) be a bifunctor, \(A\) an object of \A, \(F : \B → \C\) a functor, and \(η : F ⇒ A ⊗ (−)\)
|
||||
a natural transformation. We write \(η \⊗ (−) ≔ (η ⊗ (−)) \widehat{@} (−)\).
|
||||
\end{notation}
|
||||
|
||||
Here is a property that we will exploit fairly often.
|
||||
\begin{proposition}\label{rem:leibniz:adhesive}
|
||||
Since every topos is adhesive \cite{lackAdhesiveQuasiadhesiveCategories2005}, monomorphisms are stable under the pushout-product and the
|
||||
Leibniz application of \(A × (−)\) in every topos.
|
||||
\end{proposition}
|
||||
|
||||
The Leibniz construction has a dual construction, the Leibniz pullback construction.
|
||||
\begin{definition}[Leibniz pullback Construction]
|
||||
Let \(\A\), \(\B\) and \(\C\) be categories and \(\C\) have finite pullbacks. Let \(⊗ : \A × \B → \C \) be a bifunctor.
|
||||
Then we define the \emph{Leibniz pullback construction} \(\widecheck{⊗ }: \arr{\A} × \arr{\B} → \arr{\C}\) to be the functor
|
||||
that sends \(f : A → A' \) in \(\A\) and
|
||||
\( g : B → B' \) in \(\B\) to \( f \widecheck{⊗} g \) as defined by the following diagram.
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}
|
||||
A ⊗ B
|
||||
\arrow[rrd, "f ⊗ \id",bend left]
|
||||
\arrow[ddr, "\id ⊗ g",bend right]
|
||||
\arrow[rd, "", dashed]
|
||||
& &
|
||||
\\
|
||||
{} & A ⊗ B' \displaystyle\prod_{A'⊗B'} A'⊗ B
|
||||
\arrow[d]
|
||||
\arrow[r]
|
||||
\pb
|
||||
& A' ⊗ \mathrlap{B} \phantom{A'}
|
||||
\arrow[d, "\id ⊗ g"]
|
||||
\\
|
||||
{}
|
||||
& \phantom{B'} \mathllap{A} ⊗ B'
|
||||
\arrow[r, "f ⊗ \id"]
|
||||
& A' ⊗ B'
|
||||
\end{tikzcd}
|
||||
\end{equation*}
|
||||
If \(⊗\) is the exponential functor, we also call it the \emph{Leibniz pullback Hom}, \emph{pullback-power} or \emph{pullback-hom}.
|
||||
If \(⊗\) is the functor application functor we also call it the \emph{Leibniz pullback application}.
|
||||
\end{definition}
|
||||
Analogous to the notation above, we introduce the following notation.
|
||||
\begin{notation}
|
||||
Let \(⊗ : \A × \B → \C\) be a bifunctor, \(A\) be an object of \A, \(F : \B → \C\) be a functor , and let \(η : F ⇒ A ⊗ (−)\)
|
||||
be a natural transformation. We write \(η \widehat⊗ (−) ≔ (η ⊗ (−)) \widehat{@} (−)\). As it is common in the literature, we
|
||||
also write the following for the special case \(A⊗ (−) = (−)^A\), \(η ⇒ (−) ≔ η \widecheck⊗ (−)\).
|
||||
\end{notation}
|
||||
|
||||
Dual to our observation above it will be a theme that the pullback-power sends fibrations to trivial fibrations.
|
||||
Or more formally:
|
||||
\begin{lemma}[{\cite[Lemma 4.10]{riehlTheoryPracticeReedy2014}}]
|
||||
There is an adjunction \({η \× (−) ⊣ η ⇒ (−)}\) between the pushout-product and the pullback-power.
|
||||
\end{lemma}
|
||||
\begin{remark}
|
||||
This holds in much more generality, for details see \cite[Lemma 2.1.15]{solution}.
|
||||
\end{remark}
|
||||
\begin{remark}
|
||||
Our main application of this lemma will be that it gives us the ability to transpose lifting problems
|
||||
between cofibrations and trivial fibrations, to lifting problems between trivial cofibrations and fibrations.
|
||||
\end{remark}
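Spelled out, and only as a hedged gloss on this adjunction (using lifting-property language rather than any macro from the source):
\begin{equation*}
  η \mathbin{\hat{×}} g \text{ has the left lifting property against } f
  \quad\Longleftrightarrow\quad
  g \text{ has the left lifting property against } η ⇒ f\text{.}
\end{equation*}
In particular, if \(η ⇒ (−)\) sends fibrations to trivial fibrations, then \(η \mathbin{\hat{×}} (−)\) sends cofibrations to trivial cofibrations, assuming the usual lifting characterizations of these classes.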
|
||||
|
||||
We will encounter another form of Leibniz application, which at first glance does not have a lot to do with homotopy. Though,
|
||||
one might read this as saying that getting the component of a natural transformation is some sort of homotopic construction.
|
||||
\begin{lemma}\label{leibnizNatural}
|
||||
Let \(F,G : A → B \) be functors, \(η : F ⇒ G\) a natural transformation, and \A have an initial object \(∅\). Let
|
||||
\(F\) and \(G\) preserve the initial object.
|
||||
We write \(∅ → X\) for the unique map. Then we get \(η \at (∅ → X) = η_X\).
|
||||
\end{lemma}
|
||||
\begin{proof}
|
||||
We will draw the diagram from the definition
|
||||
\begin{eqcd*}
|
||||
F (∅) \arrow[r, "F(∅ → X)"] \arrow[d, "η_∅"] \arrow[dr, phantom, "\ulcorner", very near end] & F(X) \arrow[ddr, bend left, "η_X"] \arrow[d] & \\
|
||||
G(∅) \arrow[drr, bend right, "G(∅ → X)"'] \arrow[r] & G(∅) ∐\limits_{F(∅)} \mathrlap{F(X)} \phantom{G(∅)} \arrow[dr, dashed ,"η \widehat{@} (∅ → X)" description] & \\
|
||||
& & G(X)
|
||||
\end{eqcd*}
|
||||
If we substitute \(F(\emptyset)\) and \(G(∅)\) with \(∅\), the pushout in the middle collapses to \(F(X)\) and the dashed map becomes \(η_X\), so the claim directly follows.
|
||||
\end{proof}
|
||||
% TODO Left adjoint
|
||||
|
||||
\end{document}
|
File diff suppressed because it is too large
|
@ -9,7 +9,9 @@
|
|||
|
||||
|
||||
% \subfile{Model_Categories}
|
||||
\subfile{Leibniz_Construction}
|
||||
\section{Preliminaries}
|
||||
\subfile{Algebraic_Weak_Factorization_System}
|
||||
\subfile{MWE}
|
||||
\subfile{Cubical_Set}
|
||||
\subfile{Model_Structure}
|
||||
% \subfile{Reedy_Categories}
|
||||
|
|
26 src/Preliminaries/erklaerung.tex Normal file
@ -0,0 +1,26 @@
|
|||
% Keine Kopf-/Fußzeile auf dieser Seite
|
||||
\documentclass[../Main.tex]{subfiles}
|
||||
\begin{document}
|
||||
\thispagestyle{empty}
|
||||
|
||||
%\vspace*{4cm}
|
||||
|
||||
|
||||
\section*{Thesis Statement pursuant to \S22 paragraph 7 of APB TU Darmstadt}
|
||||
|
||||
I herewith formally declare that I, Dennis Frieberg, have written the submitted thesis independently pursuant to \S 22 paragraph 7 of APB TU Darmstadt without any outside support and using only the quoted literature and other sources. I did not use any outside support except for the quoted literature and other sources mentioned in the paper. I have clearly marked and separately listed in the text the literature used literally or in terms of content and all other sources I used for the preparation of this academic work. This also applies to sources or aids from the Internet.
|
||||
This thesis has not been handed in or published before in the same or similar form.
|
||||
|
||||
I am aware, that in case of an attempt at deception based on plagiarism (\S38 Abs. 2 APB), the thesis would be graded with 5,0 and counted as one failed examination attempt. The thesis may only be repeated once.
|
||||
|
||||
\vspace{20pt}
|
||||
|
||||
\noindent
|
||||
Darmstadt, 9.8.2024\vspace{30pt}
|
||||
|
||||
% Unterschrift (handgeschrieben)
|
||||
|
||||
\noindent
|
||||
Signature of author \\
|
||||
\includegraphics[scale=0.15]{../Unterschrift_dennis.png}
|
||||
\end{document}
|
|
@ -7,18 +7,18 @@ For this section we fix the following notation. We write \(n\) for a set with ca
|
|||
and identify \(2\) with the set \(\{⊥,⊤\}\). We write \([n]\) for the well ordered sets
|
||||
\(\{0,…,n\}\).
|
||||
|
||||
We would love to compare the model categories on \(\□\) and \(\Δ\). For this we want at least
|
||||
We would love to compare the model categories on \(\□\) and \(\Δ\). For this, we want at least
|
||||
an adjoint pair between those. If we somehow could get a functor on the site level, we could
|
||||
get an adjoint triple. But there is no immediate obvious functor to do the job.
|
||||
The usual trick would be to go look at the idempotent completions of these categories. These
|
||||
usually have :w
|
||||
usually have
|
||||
more objects and are sometimes more suitable as the codomain of a functor, while
|
||||
the presheaf category stays the same (up to equivalence).
|
||||
For example, the idempotent completion of \dCube is the category \FL of finite lattices and
|
||||
monotone maps \cites{streicherSimplicialSetsCubical2021}{sattlerIdempotentCompletionCubes2019}.
|
||||
So we have automatically \(\widehat{\FL} = \dcSet\),
|
||||
and there is an obvious inclusion \(i: Δ → \FL\). Sadly for us \(□\) is already idempotent complete.
|
||||
To get around this issue we can embed \(j: □ → \dCube → \FL\), and get into the following situation.
|
||||
and there is an obvious inclusion \(i: Δ → \FL\). Sadly for us, \(□\) is already idempotent complete.
|
||||
To get around this issue, we can embed \(j: □ → \dCube → \FL\), and get into the following situation.
|
||||
|
||||
\begin{equation*}
|
||||
\begin{tikzcd}[column sep=large]
|
||||
|
@ -39,14 +39,14 @@ We can also define \(j\) explicitly like this:
|
|||
\end{definition} \todo{\printline move this somewhere sane}
|
||||
|
||||
\begin{definition}\label{def:j}
|
||||
Let \(j : □ → \FL\) the functor given by the following data:
|
||||
Let \(j : □ → \FL\) be the functor given by the following data:
|
||||
\begin{itemize}
|
||||
\item On objects: \(j(\{⊥|x_1…x_n|⊤\}) ≔ 𝟚^n\) \todo{\printline use \(⊥⊤\) notation in introduction chapter}
|
||||
\item On morphisms: \(j(f)(g)(x) ≔ \begin{cases}g(f(x)) & f(x) \notin 𝟚 \\ f(x) & f(x) ∈ 𝟚\end{cases}\)
|
||||
\end{itemize}
|
||||
\end{definition}
|
||||
|
||||
In \cite{riehlVid} it is claimed that \(i^*j_!\) is the triangulation functor and \(j^*i_!\) a
|
||||
In \cite{riehlVid}, it is claimed that \(i^*j_!\) is the triangulation functor and \(j^*i_!\) a
|
||||
left Quillen homotopy inverse. In the meantime, Reid Barton observed in a yet unpublished paper,
|
||||
\todo{\printline how to cite this? claimed in \cite{solution}}
|
||||
that this triangulation can be described by a single functor on the level of sites \(Δ → □\). One
|
||||
|
@ -60,16 +60,16 @@ We are going to verify the original argument. To give an overview of the necessa
|
|||
\end{enumerate}
|
||||
|
||||
Before we jump into the formal proofs we like to get some intuition for the four functors of interest.
|
||||
The category \FL has the luxury as containing
|
||||
The category \FL has the luxury of containing
|
||||
both \(Δ\) as the well orderings and \dCube as \(𝟚^n\). As \FL is the idempotent completion of \dCube,
|
||||
we also know that a presheaf on \FL is completely determined by its behavior on \dCube. The two functors
|
||||
\(i^*\) and \(j^*\) are just the restrictions, and we can also understand \(i_!\) and \(j_!\) geometrically.
|
||||
Let \(F ∈ \Δ\), then \(F(x) = i_!(F)(i(x))\). The interesting question is what does \(i_!\) define
|
||||
Let \(F ∈ \Δ\), then \(F(x) = i_!(F)(i(x))\). The interesting question is: what does \(i_!\) define
|
||||
on cubes (aka \(\Yo 𝟚^n\)) and \(j_!\) on simplicials (aka \(\Yo [n]\)). \(j_!\) will basically triangulate the cube (see \cref{prop:triangulation}),
|
||||
while \(i_!\) will add just all possible cubes that are degenerated to simplicials.
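For a hedged low-dimensional illustration (using \(j_!(\I^n) = \Yo 𝟚^n\) as recorded below, and the standard decomposition of a square into two triangles):
\begin{equation*}
  i^* j_! (\I^2) \;≅\; Δ^1 × Δ^1 \;≅\; Δ^2 ∪_{Δ^1} Δ^2\text{,}
\end{equation*}
the square triangulated into two \(2\)-simplices glued along its diagonal.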
|
||||
|
||||
To understand \(i^*\) a bit better we might take a look how things that are defined on \(𝟚^n\) extend to simplicials.
|
||||
For this we need to exhibit \([n]\) as a retract of \(𝟚^n\). For this we define two maps.
|
||||
To understand \(i^*\) a bit better we might take a look at how things that are defined on \(𝟚^n\) extend to simplicials.
|
||||
For this, we need to exhibit \([n]\) as a retract of \(𝟚^n\). To do this we define two maps.
|
||||
\begin{definition}
|
||||
Let
|
||||
\begin{itemize}
|
||||
|
@ -97,7 +97,7 @@ in \Set.
|
|||
\(j_!(\I^n) = \Yo 𝟚^n\)
|
||||
\end{proposition} \todo{\printline move to a sane place}
|
||||
\begin{proof}
|
||||
We have that \(\Hom(j_!(\I^n),B) = \Hom(\I^n,j^*(B))\). And we get
|
||||
We have that \(\Hom(j_!(\I^n),B) = \Hom(\I^n,j^*(B))\) and we get
|
||||
\begin{equation*}
|
||||
\Hom(\Yo 𝟚^n, B) = B(𝟚^n) = B(j(n)) = \Hom(\I^n,j^*B)\text{.}
|
||||
\end{equation*}
|
||||
|
@ -107,7 +107,7 @@ in \Set.
|
|||
\subsection{\(i_!\) and \(i^*\) are left Quillen functors}
|
||||
We need to show that \(i_!\) and \(i^*\) preserve cofibrations (or
|
||||
in other words monomorphisms) and trivial cofibrations. The hard
|
||||
part will be to show that \(i_!\) preserves monomorphisms and we will do that last.
|
||||
part will be to show that \(i_!\) preserves monomorphisms which we will do at the end.
|
||||
|
||||
\begin{lemma}\label{lem:restrictionPreserveMonos}
|
||||
Let \(F: \A → \B\) be a functor, then the restriction functor \(F^* : \hat{\B} → \hat{\A}\) preserves monomorphisms.
|
||||
|
@ -121,7 +121,7 @@ Let \(F: \A → \B\) be a functor, then the restriction functor \(F^* : \hat{\B}
|
|||
\(i^*\) preserves monomorphisms.
|
||||
\end{proposition}
|
||||
\begin{proof}
|
||||
Directly by \cref{lem:restrictionPreserveMonos}.
|
||||
Follows directly by \cref{lem:restrictionPreserveMonos}.
|
||||
\end{proof}
|
||||
|
||||
\begin{proposition}\label{prop:i*preservesTrivialCofib}
|
||||
|
@ -157,7 +157,7 @@ Note that a repetition of this argument is also good to check if we actually nee
|
|||
or if we can do with just an Eilenberg-Zilber category.
|
||||
|
||||
\begin{lemma}[{\cite[lem 3.4]{sattlerIdempotentCompletionCubes2019}}]\label{lem:idemLemma3.4}
|
||||
Let \C be an Eilenberg-Zilber category. Assume \C as pullbacks along face maps.
|
||||
Let \C be an Eilenberg-Zilber category. Assume \C has pullbacks along face maps.
|
||||
The functor \(\colim : \hat{\C} → \Set\) preserves monomorphisms.
|
||||
\end{lemma}
|
||||
\begin{remark}
|
||||
|
@ -195,7 +195,7 @@ in \cite{pareAbsoluteColimits1971}.
|
|||
or the symmetric situation where \(B\) and \(C\) are interchanged.
|
||||
\end{lemma}
|
||||
\begin{proof}
|
||||
Let us assume first we have given all the data above (and WLOG \(B\) and \(C\) are not interchanged), and let
|
||||
First let us assume we have given all the data above (and WLOG \(B\) and \(C\) are not interchanged), and let
|
||||
\begin{eqcd*}
|
||||
A \arrow[r, "p" ] \arrow[d, "q" ] & B \arrow[d, "b"] \\
|
||||
C \arrow[r, "c"] & X
|
||||
|
@ -203,7 +203,7 @@ in \cite{pareAbsoluteColimits1971}.
be a commutative square. As all this data gets preserved by any functor, we only need to prove that such a diagram
is already a pushout. We now need to find an \(x : P → X\) that witnesses \(P\) as a pushout.
Since \(m\) splits, \(x\) will be unique automatically. We define \(x ≔ bu\). We now need to check
if it makes the both evident triangles commute. We do this by chasing though all the data we have been given.
if it makes both evident triangles commute. We do this by chasing through all the data we have been given.
\begin{equation*}
xm = bum = bqr_k = cqr_k = cqs_k = bps_k = bqr_{k-1} = … = bps_1 = b
\end{equation*}
@ -220,7 +220,7 @@ in \cite{pareAbsoluteColimits1971}.
\end{eqcd*}
That means we have a surjection \(\Hom(P,B) + \Hom(P,C) → \Hom(P,P)\).
A preimage of \(\id_P\) under this surjection is a morphism \(u\) for which either \(mu = \id_P\)
or \(nu = \id_P\) holds. We assume WLOG the first case.
or \(nu = \id_P\) holds. WLOG we assume the first case.

To get \(r_1,…,r_k\) and \(s_1,…,s_k\) that relate \(\id_B\) and \(um\), we look at the image under the
\(\Hom(B,−)\) functor.
@ -247,7 +247,7 @@ in \cite{pareAbsoluteColimits1971}.

To be able to lift this structure, we make sure we can lift commuting triangles first.
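Recall, in the phrasing we use here, the defining property of a discrete Grothendieck fibration \(F : Q → \C\):
\begin{equation*}
\text{for every } q ∈ Q \text{ and every } g : c → F(q) \text{ in } \C \text{ there is a unique } \tilde{g} : \tilde{c} → q \text{ in } Q \text{ with } F(\tilde{g}) = g\text{.}
\end{equation*}
The next lemma is essentially two applications of this existence and uniqueness.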
\begin{lemma}\label{lem:gfibliftcommute}
Let \(F : Q → \C\) be a discrete Grothendieck fibration, \(f: A → B\) be a morphism in \(Q\), and
Let \(F : Q → \C\) be a discrete Grothendieck fibration, \(f: A → B\) be a morphism in \(Q\),
\(g : C → F(B)\) and \(h : F(A) → C\) such that \(gh = F(f)\). We can lift \(g\) and \(h\) to form
a commutative diagram in \(Q\).
\begin{eqcd*}[column sep=small]
@ -275,7 +275,7 @@ To be able to lift this structure, we make sure we can lift commuting triangles

Given some monomorphism \(f : A ↣ B\) in \(Q\) and two morphisms \(g,h : C → \p(A)\) in \C, such that \(\p(f)h = \p(f)g \),
we can lift \(g\) and \(h\) to \(g' : C' → A\) and \(h' : C' → A\). By \cref{lem:gfibliftcommute}, we have \(fg' = fh'\) and
thus \(g' = h'\). We get directly \( g = \p(g') = \p(h') = h\) and thus \(\p(f)\) is monic.
thus \(g' = h'\). We directly get \( g = \p(g') = \p(h') = h\) and thus \(\p(f)\) is monic.

Assume we have a map \(f\) in \(Q\) such that \(\p(f)\) has a (one-sided) inverse. By \cref{lem:gfibliftcommute}, the lift
of that (one-sided) inverse is also a (one-sided) inverse of \(f\).
@ -345,13 +345,13 @@ To be able to lift this structure, we make sure we can lift commuting triangles
& \C &
\end{tikzcd}
\end{equation*}
That \(\colim\) sends \(f\) to a monomorphism,
The fact that \(\colim\) sends \(f\) to a monomorphism
is equivalent to the statement that \(F\) is monic on connected components. That follows directly from the explicit construction
of colimits in \(\Set\). Moreover, \(F\) is itself a discrete Grothendieck fibration. We can lift the Eilenberg-Zilber
structure to \(Q\) by \cref{lem:liftEZFib}.
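For reference, the explicit construction alluded to here (standard, in our notation): the colimit of a diagram \(X : \mathcal{D} → \Set\) is the quotient
\begin{equation*}
\colim X ≅ \Bigl(\coprod_{d ∈ \mathcal{D}} X(d)\Bigr)\Big/\!\sim\text{,}
\end{equation*}
where \(\sim\) is generated by \(x \sim X(u)(x)\) for \(u : d → d'\) in \(\mathcal{D}\) and \(x ∈ X(d)\); its equivalence classes are exactly the connected components in question.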

So now let \(S\) and \(T\) be objects of \(P\) such that \(F(S)\) and \(F(T)\) lie in the same connected component of \(Q\).
This means there is a zigzag connecting \(F(S)\) and \(F(T)\). We can normalize this to a span. To see this, we are going
This means that there is a zigzag connecting \(F(S)\) and \(F(T)\). We can normalize this to a span. To see this, we are going
to show how to turn a cospan into a span of morphisms. So let \( f : D → B\) and \( g : E → B \) be a cospan. Because of the Eilenberg-Zilber
structure, we can factor those into a degeneracy followed by a face map. Let's call them \(f_d\), \(f_f\), \(g_d\) and \(g_f\).
\begin{eqcd*}[column sep=small]
@ -373,8 +373,8 @@ structures. Sadly this does not work well for Eilenberg-Zilber categories. So we
category.
\end{lemma}
\begin{proof}
First we need to define a degree function \(\d: \Obj (\B) → ℕ\), let \(\d'\) be the degree function from the elegant Reedy structure
we can then define \(\d ≔ \d'p\). We define the degeneracies and faces, as being degeneracies and faces under \(p\).
First we need to define a degree function \(\d: \Obj (\B) → ℕ\). Let \(\d'\) be the degree function from the elegant Reedy structure.
We can then define \(\d ≔ \d'p\). We define the degeneracies and faces as being degeneracies and faces under \(p\).
Here we would already get a problem with EZ-categories, as we might have maps that are monic in \(B ↓ i\), but not in \B.

As a next step we need to construct the desired factorization into a degeneracy and a face map. So we take
@ -401,7 +401,7 @@ structures. Sadly this does not work well for Eilenberg-Zilber categories. So we
Usually this situation is given if \A has pullbacks along a class of maps and \(i\) preserves them.
\end{remark}
\begin{proof}
So consider a span of maps in \(B ↓ i\). That fulfills the conditions of this lemma.
So consider a span of maps in \(B ↓ i\) that fulfills the conditions of this lemma.
If we unravel this, we get the following diagram:
\begin{eqcd*}
{} & B \arrow[rdd, "ϕ'", bend left] \arrow[ldd, "ϕ"', bend right] & \\
@ -430,7 +430,7 @@ structures. Sadly this does not work well for Eilenberg-Zilber categories. So we
that degeneracies are split in \cref{lem:liftEZFib}.
\end{remark}
\begin{proof}
We only need to show that \(i_!\) preserves monomorphisms component wise, giving us a functor from \(\hat{\A} → \Set\) for every
We only need to show that \(i_!\) preserves monomorphisms componentwise, giving us a functor from \(\hat{\A} → \Set\) for every
object \(B ∈ \B\). This functor can be written as
\begin{eqcd*}
\hat{\A} \arrow[r,"p^*"] & \widehat{B ↓ i} \arrow[r,"\colim"] & \Set
@ -490,7 +490,7 @@ So let us start with \(j^*\) again.
\begin{proof}
Preservation of uniform fibrations is clear, as every lifting problem against a generating trivial cofibration factors
through the image of a trivial cofibration under \(j^*\). As \(j^*\) preserves coequalizers, the equivariance follows
from the equivariance of the model category on \dcSet, which follows from the presence of (at least one connection).
from the equivariance of the model category on \dcSet, which follows from the presence of (at least one) connection.
\end{proof}\todo{define uniform fibrations without equivariance}\todo{make a lemma that \dcSet is equivariant even if we just cite this from the relative elegance paper}

We want to recap the general proof idea that \(i_!\) preserves monos. We can test being a monomorphism componentwise and can write the
@ -499,8 +499,8 @@ That the colimit preserves monomorphisms is equivalent to a condition of connec
of elements. We can only lift morphisms backwards along discrete Grothendieck fibrations, but we are faced
with zigzags. To remedy this we want to lift a strong enough pullback property to turn these zigzags into spans.

This strategy will be the same for \(j_!\). Sadly we couldn't get arbitrary Eilenberg-Zilber structures to lift, so
we will need a slightly different attack point.
The strategy will be the same for \(j_!\). Sadly we couldn't get arbitrary Eilenberg-Zilber structures to lift, so
we will need a slightly different point of attack.

We first observe that \(□\) has a lot more pullbacks than \(Δ\). While \(Δ\) has only “non-empty” pullbacks of face maps,
\(□\) has all “non-empty” pullbacks. This relieves us of the necessity to lift the Eilenberg-Zilber structure to \(B ↓ i\).
@ -533,11 +533,7 @@ Let \C be a category. Assume \C has pullbacks, then the functor \(\colim : \hat{
consequence of \cref{commaPullback}.
\end{proof}

And we arrive at the conclusion we wanted to get to:
\begin{lemma}
Let \(F: \A → \B\) be a functor, then the restriction functor \(F^* : \hat{\B} → \hat{\A}\) preserves monomorphisms.
\end{lemma}

Finally we arrive at the conclusion we wanted to get to:
\begin{proposition}\label{prop:j!preservesMonos}
\(j_! : \widehat{\FL} → \□\) preserves monomorphisms.
\end{proposition}
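Before the proof, a sketch of how this parallels the case of \(i_!\) (our summary, with \(c\) standing for an object of the cube site): componentwise, \(j_!\) factors just like the factorization displayed for \(i_!\) above,
\begin{eqcd*}
\widehat{\FL} \arrow[r,"p^*"] & \widehat{c ↓ j} \arrow[r,"\colim"] & \Set
\end{eqcd*}
so it again suffices to show that the colimit functor preserves monomorphisms.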
@ -553,7 +549,7 @@ And we arrive at the conclusion we wanted to get to:
where we identify \(f\) and \(g\) with functions \(m → n + 2\).
We note that equalizers, if they exist in \FL, are constructed the same way as in \Set.
If we apply this to our diagram, we can see that this construction can't create something empty,
as both maps needs to preserve elements from \(B\).
as both maps need to preserve elements from \(B\).
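For concreteness (standard, our notation): in \(\Set\) the equalizer of \(f,g : m → n+2\) is simply
\begin{equation*}
\mathrm{Eq}(f,g) = \{\, x ∈ m \mid f(x) = g(x) \,\}\text{,}
\end{equation*}
and the note above says that an equalizer in \FL, when it exists, has this set as its underlying set.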
We would like to complete the diagram in the following way
%\begin{eqcd*}[column sep=large]
\begin{equation*}
@ -568,7 +564,7 @@ And we arrive at the conclusion we wanted to get to:
\(f\) and \(g\). We will now argue that our equalizer would be empty if this quotient identifies
\(⊤\) and \(⊥\).

We assume that indeed the relation identifies \(⊥\) and \(⊤\), let \(h ∈ 𝟚^n\) such that \(j(f)(h) = j(g)(h)\).
We assume that indeed the relation identifies \(⊥\) and \(⊤\). Let \(h ∈ 𝟚^n\) such that \(j(f)(h) = j(g)(h)\).
Because \(⊥\) and \(⊤\) are being identified, there exist two finite sequences of elements \(x_i\) and \(y_i\) in \(m\), such that
there exists a \(k\) and
(WLOG) \(f(x_1) = ⊥\), \(g(x_i) = f(y_i)\), \(g(y_i) = f(x_{i+1})\) and \(g(y_k) = ⊤\).
@ -587,8 +583,8 @@ And we arrive at the conclusion we wanted to get to:
h': l → 𝟚 && \text{by} && h'(x) ≔ h(z) & & \text{where} && z ∈ e^{-1}(x)\text{.}
\end{align*}
For this definition to make sense we need to argue that the choice of \(z\) does not matter.
So let \(z\) and \(z'\) be both element of \(e^{-1}(x)\). Like above that means there are
two sequences \(x_i\) and \(y_i\) like above, just that \(f(x_1) = z\) and \(g(y_k) = z'\), instead
So let \(z\) and \(z'\) both be elements of \(e^{-1}(x)\). Like above, that means there are
two sequences \(x_i\) and \(y_i\), just that \(f(x_1) = z\) and \(g(y_k) = z'\), instead
of \(⊥\) and \(⊤\).
And like above we get
\begin{align*}
@ -622,7 +618,7 @@ and that is indeed the case.
\begin{proof}
As \T is the unique cocontinuous functor that extends the product preserving functor from \□ to \Δ that sends the interval
to the interval, we only need to verify these conditions for \(i^*j_!\). \(i^*j_!\) is cocontinuous as it is the composition of two left
adjoints. We need to show that this functor is product preserving on representables. But by Yoneda and \cref{prop:j_!Nothing}
adjoints. We need to show that this functor is product preserving on representables. But by Yoneda and \cref{prop:j_!Nothing} we have
\begin{equation*}
j_!(\I^n × \I^m) = j_!(\I^{n+m}) = \Yo 𝟚^{n+m} = \Yo 𝟚^n × \Yo 𝟚^m = j_!(\I^n) × j_!(\I^m)
\end{equation*}
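Here the first equality uses (our spelling-out) that \(\I^n × \I^m = \I^{n+m}\) holds already in the domain of \(j_!\), because \(\I^n\) is the \(n\)-fold power of the interval, and the third holds because \(𝟚^{n+m} = 𝟚^n × 𝟚^m\) in the cube site and the Yoneda embedding preserves products:
\begin{equation*}
\Yo 𝟚^{n+m} = \Yo(𝟚^n × 𝟚^m) ≅ \Yo 𝟚^n × \Yo 𝟚^m\text{.}
\end{equation*}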
@ -640,9 +636,9 @@ and that is indeed the case.
Intuitively the reason for this is that we can't map a square with one side degenerated to a triangle into the representable
square in \□. But we have a lot of ways to map a triangle into a triangulated square in \Δ.
\begin{proof}
By the Yoneda lemma and that \(j^*i_!\) is the triangulation functor, a map on the right-hand side corresponds to a
By the Yoneda lemma, and that \(j^*i_!\) is the triangulation functor, a map on the right-hand side corresponds to a
pair of monotone maps \([2] → [1]\). There are 16 such pairs. On the left we can use \cref{prop:j_!Nothing},
and expanding the definition to get \(\Hom_{\□}(\Hom_{\FL}(j(-),[2]),\Hom_{□}(-,2))\). Lets take such a transformation
and expanding the definition we get \(\Hom_{\□}(\Hom_{\FL}(j(-),[2]),\Hom_{□}(-,2))\). Let's take such a transformation
and call it \(η\). \(η_0\) gives rise to a map \(f : [2] → 𝟚^2\). \(η_1\) witnesses that \(f\) is indeed monotone, by
composing with face maps. And \(η_2\) rules out all injective maps, as for them always exactly 2 faces agree and
\(\I^2(2)\) does not contain such faces. There are only 9 such maps, which is an upper bound for the possible number of
@ -661,7 +657,7 @@ first we don't run into that problem.
As \(j^*i_!\) is the triangulation functor, the left-hand side becomes \(\Hom_{\Δ}(Δ^1 × Δ^1,Δ^1 × Δ^1)\).
We can write \( Δ^1 × Δ^1 \) as a pushout of two copies of \(Δ^2\) joined along an edge.
As \(i_!\) is cocontinuous, we can construct this pushout in \FL. By this and \cref{prop:i_!Nothing}, we
get a colimit of two representables there. Unraveling these definition it is a straightforward exercise
get a colimit of two representables there. Unraveling these definitions, it is a straightforward exercise
to verify that the set on the left-hand side has more elements.
\end{proof}

@ -703,7 +699,7 @@ contractible.
are weak equivalences, then all components of \(α\) are weak equivalences.
\end{corollary}
\begin{proof}
By \cref{test}\todo{In Leibniz construction chapter make this an example} we can write the the part of \(α\) at \(X\) as a
By \cref{leibnizNatural} we can write the part of \(α\) at \(X\) as a
Leibniz application with the monomorphism \(∅ → X\). We can use both lemmas from above to show that \(α\) gets sent by all
of those maps to a weak equivalence. By \cref{lem:Eilenberg-ZilberMonosCreated} the monomorphisms in \(\Set^{\A^\op}\) are generated by
morphisms \(∅ → \faktor{\Yo a}{H}\) under some closure properties. By assumption, morphisms of that form are sent to weak equivalences
@ -745,8 +741,8 @@ contractible.

To show that these induce an equivalence between the homotopy categories we must show that we have a zigzag of
natural transformations between \(i^*j_!j^*i_!\) and the identity functor such that every natural transformation
is valued in weak equivalences, and likewise for \(j^*i_!i^*j_!\). We will call \(η^i\) and \(ε^i\) the unit and counit
of the adjunction \(i_! ⊣ i^*\). Likewise \(η^j\) and \(ε^j\) for the adjunction \(j_! ⊣ j^*\).
is valued in weak equivalences, and likewise for \(j^*i_!i^*j_!\). We will denote by \(η^i\) and \(ε^i\) the unit and counit
of the adjunction \(i_! ⊣ i^*\). Likewise we denote by \(η^j\) and \(ε^j\) those of the adjunction \(j_! ⊣ j^*\).
We can construct these as follows
\begin{eqcd*}[column sep=large, row sep=large,execute at end picture={\draw[decorate,decoration={brace,mirror,amplitude=4.5}] (sBrcs.south) to (eBrcs.south);}]
\Δ \arrow[r,"i_!", ""'{name=i!s}] & |[alias=sBrcs]| \dcSet \arrow[r,"j^*"] \arrow[d,phantom,"⊛"] \arrow[rr,bend right=20,phantom , ""'{name=cc}] & \□ \arrow[r,"j_!"] & |[alias=eBrcs]| \dcSet \arrow[r,"i^*", ""'{name=i*s}] \arrow[d,phantom,"⊛"] & \Δ \\
@ -1 +1 @@
/nix/store/vi5k8nsgp9knk76bri7p6nr1la5fa4mn-output.pdf
/nix/store/7g9d5mh042bhznbcfh25ki95c39pnz83-output.pdf