# crawl

`crawl` is a simple web crawler restricted to a single domain. It uses goroutines to fetch pages in parallel. The generated sitemap is deliberately simple: rather than tracking a real link graph, it keeps a shallow list of links for every page.
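A minimal sketch of the idea, not the actual implementation: one goroutine per page, a mutex-guarded map as the shallow sitemap, and a plain prefix check to stay on one domain. The `Fetcher` interface and `fakeFetcher` are hypothetical stand-ins so the example runs without network access.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
	"sync"
)

// Fetcher abstracts page retrieval so the sketch runs without a network.
type Fetcher interface {
	Fetch(url string) (links []string, err error)
}

// Crawl visits every page reachable from start that shares its URL prefix,
// one goroutine per page, and returns a shallow sitemap: each visited URL
// mapped to the links found on it (no real graph is built).
func Crawl(start string, f Fetcher) map[string][]string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		sitemap = map[string][]string{}
	)

	var visit func(url string)
	visit = func(url string) {
		defer wg.Done()
		mu.Lock()
		if _, seen := sitemap[url]; seen {
			mu.Unlock()
			return
		}
		sitemap[url] = nil // mark as visited before fetching
		mu.Unlock()

		links, err := f.Fetch(url)
		if err != nil {
			return
		}
		mu.Lock()
		sitemap[url] = links
		mu.Unlock()
		for _, l := range links {
			if strings.HasPrefix(l, start) { // stay local to one domain
				wg.Add(1)
				go visit(l)
			}
		}
	}

	wg.Add(1)
	go visit(start)
	wg.Wait()
	return sitemap
}

// fakeFetcher serves a tiny in-memory site for demonstration.
type fakeFetcher map[string][]string

func (f fakeFetcher) Fetch(url string) ([]string, error) {
	if links, ok := f[url]; ok {
		return links, nil
	}
	return nil, fmt.Errorf("not found: %s", url)
}

func main() {
	site := fakeFetcher{
		"https://example.com/":  {"https://example.com/a", "https://example.com/b"},
		"https://example.com/a": {"https://example.com/"},
		"https://example.com/b": {"https://other.org/"},
	}
	sitemap := Crawl("https://example.com/", site)

	// Print the sitemap in a stable order.
	urls := make([]string, 0, len(sitemap))
	for u := range sitemap {
		urls = append(urls, u)
	}
	sort.Strings(urls)
	for _, u := range urls {
		fmt.Println(u, "->", sitemap[u])
	}
}
```

Marking a URL as visited before fetching it (rather than after) is what keeps two goroutines from crawling the same page; note that the off-domain link `https://other.org/` is recorded in the sitemap entry for the page that contains it but is never followed.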