The construction of Čech homology used coverings of the space by families of open sets. The way the open sets overlap gives ‘combinatorial’ information about the space.
This has been abstracted and extended, both to homotopy theory and in the direction of algebraic geometry. This article aims to introduce some of the main ways this idea has interacted with some of the many themes in the nLab.
The term ‘nerve of a covering’ has been used from the late 1920s for a construction that starts with a space and an open cover of it and builds a simplicial complex. Historically these data were organised as a simplicial complex, rather than a simplicial set. The idea was worked out in parallel by Čech and P. S. Aleksandrov, so one often talks about Aleksandrov–Čech (co)homology. There was an alternative approach described by Vietoris (1927), which led to what is known as Vietoris homology?. There are separate entries in the nLab for Čech homology and Čech cohomology. The first is not exact, and so is not really a homology theory in the sense of the Eilenberg–Steenrod axioms. It is also not the (Alexander–Spanier) dual of Čech cohomology. Both problems can be avoided at the same time using coherent homotopy theory. The resulting homology is exact and satisfies the wedge axiom, and with these properties it is the unique such theory, nowadays sometimes called strong homology; in special cases it reduces to what was called Steenrod-Sitnikov homology? (but which was originally constructed in a different way).
If $\mathcal{V} = \{V_j : j \in J\}$ is another cover such that for each $j$, there is an $i$ with $V_j \subseteq U_i$, then the assignment $j \mapsto i$ in this case defines a map

$$N(\mathcal{V}) \to N(\mathcal{U}),$$

dependent on the choice of $i$ for each $j$, but independent ‘up to homotopy’ (see below).
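The nerve construction just described can be sketched concretely for a finite cover of a finite set. This is an illustrative sketch only; the names `nerve`, `refinement_map`, and the toy cover `U` are hypothetical, not taken from any cited source.

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a cover, given as a dict {index: set_of_points}.
    A simplex is a frozenset of indices whose cover sets have a
    common point (nonempty intersection)."""
    indices = list(cover)
    simplices = set()
    for k in range(1, len(indices) + 1):
        for sigma in combinations(indices, k):
            if set.intersection(*(cover[i] for i in sigma)):
                simplices.add(frozenset(sigma))
    return simplices

def refinement_map(fine, coarse):
    """For each fine index j pick some coarse index i with
    fine[j] contained in coarse[i] (assumes the refinement condition
    holds); this choice induces the map N(fine) -> N(coarse)."""
    return {j: next(i for i in coarse if fine[j] <= coarse[i]) for j in fine}

# A toy cover of the "circle" {0,...,5} by three overlapping arcs.
U = {"a": {0, 1, 2}, "b": {2, 3, 4}, "c": {4, 5, 0}}
# The nerve has three vertices and three edges but no 2-simplex
# (the triple intersection is empty): the boundary of a triangle.
print(sorted(sorted(s) for s in nerve(U)))
```

A different choice of $i$ for some $j$ in `refinement_map` gives a different simplicial map, but (as the text says) the same map up to homotopy.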
Applying one’s favourite homotopy functor, $F$, to each $N(\mathcal{U})$ and to these homotopy classes of induced transition maps, yields an inverse system of objects in the codomain of $F$, i.e., a pro-object in that category, but the $N(\mathcal{U})$ by themselves do not define a pro-object in the category of simplicial sets. They do form a pro-object in $Ho(SSet)$, or alternatively, a ‘coherent pro-object’ in $SSet$ (see Čech homotopy for more discussion on this point).
Aleksandrov and Čech, in the 1930s, applied homology and cohomology in this situation to extend simplicially-based homology to a much wider class of spaces. Lefschetz, without the language of category theory, studied, again in the 1930s, the formal properties of inverse systems of polyhedra and the maps between them, and his student Christie looked at the homotopy groups in this setting.
Borsuk (1968) developed shape theory, which, although initially very geometric in flavour, turned out also to be describable in terms of Christie’s theory of the “Čech extensions” of homotopy theory.
The first papers on this approach were by Tim Porter. They used the Vietoris complex of a space relative to an open cover as well as the Čech complex? itself. (By Dowker's Theorem the two complexes give the same information up to homotopy, but the Vietoris complex is a functor on the category of covers with values in the category of simplicial sets, whereas the Čech complex does not give so nice a structure, as most naturally it takes values in the homotopy category of simplicial sets. In fact, that functor can be rigidified, but this requires a certain amount of work.)
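The Vietoris complex mentioned here can also be sketched for a toy finite cover: a simplex is a finite set of *points* all lying together inside a single member of the cover (dually to the nerve, whose simplices are sets of cover *indices*). The function name `vietoris` and the example cover are hypothetical illustrations, not from the cited papers.

```python
from itertools import combinations

def vietoris(points, cover):
    """Vietoris complex of a cover: a simplex is a nonempty finite set
    of points that all lie in some single member of the cover."""
    simplices = set()
    for k in range(1, len(points) + 1):
        for sigma in combinations(sorted(points), k):
            if any(set(sigma) <= members for members in cover.values()):
                simplices.add(frozenset(sigma))
    return simplices

# Same toy "circle" cover as one might use for the nerve.
X = {0, 1, 2, 3, 4, 5}
U = {"a": {0, 1, 2}, "b": {2, 3, 4}, "c": {4, 5, 0}}
V = vietoris(X, U)
# Three filled triangles glued in a cycle at the vertices 0, 2, 4:
# again homotopy equivalent to a circle, as Dowker's Theorem predicts
# for the nerve and Vietoris complexes of the same cover.
print(len(V))
```

The Euler characteristics of the two complexes agree here ($6 - 9 + 3 = 0$ for the Vietoris complex, $3 - 3 = 0$ for the nerve), consistent with their being homotopy equivalent.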
Porter also gave a partial solution to the problem of the stability of a space, that is, whether it has the same Čech homotopy type as a polyhedron. The methods developed for that problem then suggested the existence of a homotopy theory for pro-simplicial sets related to homotopy coherence. That observation was also made by Edwards and Hastings, who proposed a strong shape theory based on what they termed Steenrod homotopy?. This relates to Steenrod-Sitnikov homology.
The use of modified Čech methods in algebraic geometry was well established by the time Grothendieck and his collaborators in Paris started adapting them to work with a Grothendieck topology. Verdier, in SGA4, introduced hypercoverings, and in Artin and Mazur (SLN 100) you can find their use in homotopy theory. (The work of Lubkin (1967) should also be mentioned here, as it contains much that is parallel to the development by Verdier, Artin and Mazur and is sometimes much easier to decipher for the non-specialist algebraic geometer. A summary of his construction is given under Lubkin's construction?.)
Given that a Grothendieck topology is essentially about abstracting a notion of ‘covering’, it is not surprising that modified Čech methods can be applied. Artin and Mazur used Verdier’s idea of a hypercovering to get, for each Grothendieck topos $\mathcal{E}$, a pro-object in $Ho(SSet)$ (i.e. an inverse system of simplicial sets), which they call the étale homotopy type of the topos $\mathcal{E}$ (which for them is ‘sheaves for the étale topology on a variety’).
Applying homotopy group functors gives pro-groups $\pi_n(\mathcal{E})$ such that $\pi_1(\mathcal{E})$ is essentially the same as Grothendieck’s algebraic fundamental group $\pi_1^{alg}$. (Here you need to know that the category of profinite groups, in the topological sense, is equivalent to the category of pro-objects in the category of finite groups, as explained in profinite group. If you remove ‘finite’ the result does not hold, but you can recover it in part by working with “pro-discrete localic groups” instead of topological groups, i.e. take limits of discrete groups within the category of ‘localic’ rather than ‘topological’ groups, remembering that ‘locales’ are almost ‘spaces without points’.)
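The equivalence between profinite groups and pro-objects in finite groups can be made tangible with the standard example $\hat{\mathbb{Z}} = \lim_n \mathbb{Z}/n$ (limit over $n$ ordered by divisibility): an element of the limit is exactly a compatible thread of residues. A minimal sketch, with hypothetical helper names:

```python
def zhat_truncation(x, levels):
    """The thread of residues of an integer x in the finite stages
    Z/1, Z/2, ..., Z/levels of the pro-group lim_n Z/n."""
    return {n: x % n for n in range(1, levels + 1)}

def is_compatible(res):
    """Check the pro-object compatibility condition: whenever m divides n,
    the residue mod n must reduce to the residue mod m."""
    return all(res[n] % m == res[m]
               for n in res for m in res if n % m == 0)

# Any actual integer gives a compatible thread ...
print(is_compatible(zhat_truncation(10**9 + 7, 12)))
# ... but an arbitrary choice of residues need not be one.
print(is_compatible({1: 0, 2: 1, 4: 0}))
```

Elements of $\hat{\mathbb{Z}}$ that do not come from any integer exist too (take any compatible thread at all levels); this is why the limit is strictly bigger than $\mathbb{Z}$.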
Grothendieck’s nice $\pi_1$ has thus an interpretation as a formal limit of a Čech type, or shape theoretic, system of $\pi_1$s of ‘hypercoverings’.
Can shape theory (or its more powerfully structured ‘strong’ or ‘homotopy coherent’ version, cf. Lisica and Mardešić, Edwards and Hastings, Porter (papers 1976–78)) be useful for studying the étale homotopy type? Not without extra work, since the Artin–Mazur–Verdier approach leads one to look at inverse systems in $Ho(SSet)$, i.e. inverse systems (diagrams) in a homotopy category, not a homotopy category of inverse systems as in strong shape theory. Attempts to ‘rigidify’ the hypercovering approach, so as to get into $pro\text{-}SSet$, have been made (e.g. by Lubkin (1967), or using simplicial schemes in Friedlander (1982)), but it is not completely clear if any one of them is the definitive method.
What is the consensus on this here? I have sometimes seen talks which claim to have THE method, but none has been clearly THE ONE. Perhaps I did not go to the right conferences! (Some discussion is needed here, e.g. of more ways of getting around the difficulty.)
One of the difficulties with this hypercovering approach is that ‘hypercovering’ is a difficult concept and, to the ‘non-expert’, seems non-geometric and lacking in intuition. Thankfully for us, there is an alternative approach put forward by Ken Brown (1973) (see BrownAHT).
As the Grothendieck topos $\mathcal{E}$ ‘pretends to be’ the category of sets, but with a strange logic, we can ‘do’ simplicial set theory in $Simp(\mathcal{E})$, the category of simplicial objects in $\mathcal{E}$, as long as we take care of the arguments we use. To see a bit of this in action we can note that the object $\Delta[0]$ in $Simp(\mathcal{E})$ will be the constant simplicial sheaf with value the ordinary $\Delta[0]$, “constant” here taking on two meanings at the same time,

(a) constant sheaf, i.e. not varying ‘over $X$’ if $\mathcal{E}$ is thought of as $Sh(X)$, and

(b) constant simplicial object, i.e. each level is the same and all face and degeneracy maps are identities. Thus $\Delta[0]$ interpreted as an étale space is the identity map on $X$ as a space over $X$. Of course not all simplicial objects in $Simp(\mathcal{E})$ are constant, and so $Simp(\mathcal{E})$ can store a lot of information about the space (or site) $X$.
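The second meaning of “constant” can be written out explicitly. As a sketch in standard notation (the symbol $c(A)$ for the constant simplicial object is a common convention, assumed here rather than taken from the text):

```latex
% Constant simplicial object on an object A of \mathcal{E}:
c(A)_n = A \quad \text{for all } n \ge 0,
\qquad d_i = s_j = \mathrm{id}_A \quad (0 \le i, j \le n).
% A general simplicial object K has arbitrary components K_n with face and
% degeneracy maps subject only to the simplicial identities, e.g.
d_i \, d_j = d_{j-1} \, d_i \quad (i < j).
```

In particular $\Delta[0] = c(*)$, the constant simplicial object on the terminal object, is constant in both senses (a) and (b) at once.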
One can look at the homotopy structure of $Simp(\mathcal{E})$. Ken Brown in BrownAHT showed it has a fibration category structure, and if we look at those fibrant objects $K$ in which the natural map

$$K \to \Delta[0]$$

is a weak equivalence, we find that these are exactly the hypercoverings. Global sections of such a $K$ give a simplicial set, and varying $K$ amongst the hypercoverings gives a pro-simplicial set (still in $pro\text{-}Ho(SSet)$, not in $Ho(pro\text{-}SSet)$, unfortunately) which determines the Artin–Mazur pro-homotopy type of $\mathcal{E}$.
This makes it clear that there is a link between Čech methods and derived category theory. In the first, the ‘space’ is resolved using ‘coverings’ and these, in a sheaf-theoretic setting, lead to simplicial objects in $Simp(\mathcal{E})$ that are weakly equivalent to $\Delta[0]$; in the second, to evaluate the derived functor of some functor $F : \mathcal{C} \to \mathcal{A}$, say, on an object $A$, one takes the ‘average’ of the values of $F$ on objects weakly equivalent to $A$, i.e. one works with the functor

$$F \circ \delta : w.e.(A) \to \mathcal{A}$$

(where $w.e.(A)$ has objects the weak equivalences $\alpha : A' \to A$, and maps the commuting ‘triangles’; this category has a ‘domain’ functor $\delta : w.e.(A) \to \mathcal{C}$ with $\delta(\alpha) = A'$, and $F \circ \delta$ is the composite).
This is in many cases a pro-object in $\mathcal{A}$; unfortunately standard derived functor theory interprets ‘commuting triangles’ in too weak a sense, and thus corresponds to shape rather than strong shape theory. One thus, in some sense, arrives in $pro\text{-}Ho(\mathcal{C})$ instead of in $Ho(pro\text{-}\mathcal{C})$. This problem has been in part resolved with Grothendieck’s theory of derivators (see the volume by Maltsiniotis).