In SEO, crawling is the acquisition of data about a website. It is the process by which search engine crawlers (also called spiders or bots) scan a website and collect details about each page: titles, images, keywords, linked pages, and so on. Crawling also discovers updated content on the web, such as new sites or pages, changes to existing sites, and dead links.
According to Google:
“The crawling process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As our crawlers visit these websites, they use links on those sites to discover other pages.”
Search engine crawlers pay special attention to new sites, changes to existing sites, and dead links. Computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
A search engine crawler scans a web page from top left to bottom right and collects every link (internal as well as external) on the page. These links are added to a list of pages to visit next. The crawler then moves to the next page in its list, collects the links on that page, and repeats. Web crawlers also revisit pages they have already seen once in a while to check whether anything has changed.
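The visit-collect-repeat loop above is a breadth-first traversal of the link graph. Here is a minimal sketch in Python; the `LINK_GRAPH` dictionary and the URLs in it are hypothetical stand-ins for live pages, so the sketch only models the queueing logic, not network fetching or politeness rules.

```python
from collections import deque

# Hypothetical in-memory link graph standing in for live web pages:
# each URL maps to the list of links found on that page.
LINK_GRAPH = {
    "https://example.com/": ["https://example.com/about", "https://example.com/blog"],
    "https://example.com/about": ["https://example.com/"],
    "https://example.com/blog": ["https://example.com/blog/post-1"],
    "https://example.com/blog/post-1": [],
}

def crawl(seed):
    """Breadth-first crawl: visit a page, queue its links, repeat."""
    visited = []
    queue = deque([seed])
    seen = {seed}
    while queue:
        url = queue.popleft()
        visited.append(url)               # "fetch" the page
        for link in LINK_GRAPH.get(url, []):
            if link not in seen:          # skip pages already queued or visited
                seen.add(link)
                queue.append(link)
    return visited
```

Starting from the homepage, `crawl("https://example.com/")` discovers all four pages even though only the seed URL was known up front, which is exactly how links on crawled sites lead crawlers to new pages.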
Any website owner can give crawling instructions to search engines through Google Search Console, and can block crawlers from parts of a site with a robots.txt file. Google never accepts any form of payment to crawl a website more often.
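Python's standard library can check a robots.txt rule the same way a well-behaved crawler would. The robots.txt content and paths below are made up for illustration; a real crawler would fetch the file from the site's root.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks all crawlers from /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

blog_ok = parser.can_fetch("Googlebot", "https://example.com/blog")        # allowed
secret_ok = parser.can_fetch("Googlebot", "https://example.com/private/x")  # blocked
```

Note that robots.txt is a request, not an access control: compliant crawlers such as Googlebot honour it, but it does not technically prevent a page from being fetched.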
Crawling is the first phase of how a search engine like Google works. After crawling, the search engine organizes and stores the data it has collected; that process is called indexing. Don't confuse crawling with indexing: they are two different steps.