Crawler is a built-in Scraper APIs feature that lets you crawl any website based on your criteria and returns the complete data to you. It’s useful when you need to collect a list of URLs in a specific category or across an entire website, or to receive parsed data in bulk.
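As a rough illustration, a crawl job is typically described by a starting URL, filters that control which pages are followed and which are returned, and an output type. The payload below is a sketch only; the field names (`url`, `filters`, `max_depth`, `output`) are hypothetical placeholders, not the actual API schema, so refer to the documentation for the real endpoints and parameters.

```python
import json

# Hypothetical Crawler job payload. All field names here are
# illustrative assumptions, not the real API schema.
payload = {
    "url": "https://example.com",            # starting point of the crawl
    "filters": {
        "crawl": [".*"],                     # which discovered URLs to follow
        "process": [".*/category/.*"],       # which URLs to return results for
        "max_depth": 2,                      # how many links away from the start URL to go
    },
    "output": {"type": "parsed"},            # parsed data in bulk, rather than raw HTML
}

print(json.dumps(payload, indent=2))
```

The `crawl` and `process` filters shown here express the "based on your criteria" idea: one set of patterns steers the crawl, while another selects which pages end up in the returned data.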

For more information on how Crawler works and what its endpoints, filters, and parameters are, check the documentation.
