Scraper APIs features
Written by Josh

Our Scraper APIs come with freely available features that you can use to scale, speed up, and improve your public data-gathering efforts. Refer to the following list of features and visit their respective documentation pages for in-depth configuration steps.

Cloud integration

The cloud integration feature automatically delivers job results straight to your Amazon S3 bucket or Google Cloud Storage bucket. This way, you don’t have to make additional requests to fetch the data from us.
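As a rough sketch of how such a job payload might look, the snippet below builds a request body that asks for results to be pushed to your own bucket. The field names (`storage_type`, `storage_url`) and values are illustrative assumptions; confirm the exact parameter names on the cloud integration documentation page.

```python
# Sketch: requesting delivery of job results to your own cloud bucket.
# Field names ("storage_type", "storage_url") are assumptions for
# illustration - verify them against the documentation.

def build_cloud_delivery_payload(target_url: str, bucket: str) -> dict:
    """Build a job payload asking for results to be pushed to storage."""
    return {
        "url": target_url,
        "storage_type": "s3",   # assumed value; e.g. "gcs" for Google Cloud
        "storage_url": bucket,  # your bucket name or path
    }

payload = build_cloud_delivery_payload("https://example.com", "my-results-bucket")
print(payload)
```

With delivery configured this way, polling for results becomes unnecessary: the job finishes and the output lands in your bucket.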

Batch queries

For efficient scraping operations, Scraper APIs let you submit up to 1000 `query` or `url` parameter values in a single batch. Head to our documentation to learn more.
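A minimal sketch of batching, assuming the batch endpoint accepts a list under a `url` key (an illustrative name; the real payload shape is described in the documentation). The 1000-entry cap from the text is enforced locally before submission:

```python
def build_batch_payload(urls: list[str]) -> dict:
    """Group many URLs into one batch submission (1000 max per batch)."""
    if len(urls) > 1000:
        raise ValueError("a batch may contain at most 1000 entries")
    return {"url": urls}  # a "query" list could be used for search terms instead

batch = build_batch_payload([f"https://example.com/page/{i}" for i in range(3)])
print(len(batch["url"]))
```

Submitting one batch instead of 1000 separate requests saves round trips and makes job tracking simpler on your side.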

Headless Browser

With the Headless Browser feature, you can render JavaScript on web pages, manipulate the DOM, and execute browser actions such as entering text, clicking elements, scrolling, and more.
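To make the idea concrete, here is a hedged sketch of a render request that chains the browser actions mentioned above. The parameter names (`render`, `browser_instructions`) and the action schema are assumptions for illustration; the exact format is covered in the Headless Browser documentation.

```python
# Sketch of a JavaScript-rendering request with chained browser actions.
# "render" and "browser_instructions" are assumed parameter names.

def build_render_payload(target_url: str, search_text: str) -> dict:
    """Build a payload that renders a page and performs browser actions."""
    return {
        "url": target_url,
        "render": "html",  # assumed flag: execute JavaScript before returning
        "browser_instructions": [                    # assumed action format
            {"type": "input", "value": search_text}, # enter text
            {"type": "click"},                       # click an element
            {"type": "scroll"},                      # scroll the page
        ],
    }

payload = build_render_payload("https://example.com", "laptops")
print(len(payload["browser_instructions"]))
```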

Custom Parser

When you want to parse the HTML of a web page, you can do so with Custom Parser by crafting your own parsing and data processing logic. This feature is especially valuable in scenarios where you want to retrieve parsed data, but we don’t have a dedicated parser for a specific target website.
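The shape of such parsing logic might look like the sketch below: each output field maps to a pipeline of extraction functions. The instruction schema (`_fns`, `_fn`, `_args`) and function names are assumptions here; verify the exact format on the Custom Parser documentation page.

```python
# Sketch of custom parsing logic: extract a title and a price from HTML.
# The "_fns"/"_fn"/"_args" schema is an assumption for illustration.

parsing_instructions = {
    "title": {
        "_fns": [  # pipeline of functions applied to the fetched document
            {"_fn": "xpath_one", "_args": ["//h1/text()"]},
        ]
    },
    "price": {
        "_fns": [
            {"_fn": "xpath_one", "_args": ["//span[@class='price']/text()"]},
        ]
    },
}

payload = {
    "url": "https://example.com",
    "parsing_instructions": parsing_instructions,
}
print(sorted(payload["parsing_instructions"]))
```

Defining the logic yourself means you get structured fields back even for targets we don’t have a dedicated parser for.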

Web Crawler

If you want to discover URLs, crawl pages, index all URLs on a website, or perform other crawling tasks, you can utilize Web Crawler. With it, you can crawl any domain, pick relevant content, and receive it in bulk.
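A crawl job definition might be sketched as below: a starting URL, plus filters describing which links to follow and which pages to return in bulk. The field names (`filters`, `crawl`, `process`) are illustrative assumptions; see the Web Crawler documentation for the real schema.

```python
# Sketch of a crawl job: start somewhere, follow links, keep what matches.
# Field names ("filters", "crawl", "process") are assumptions.

def build_crawl_job(start_url: str, url_pattern: str) -> dict:
    """Describe a crawl: where to start and which URLs to return."""
    return {
        "url": start_url,
        "filters": {
            "crawl": [".*"],           # follow every discovered link
            "process": [url_pattern],  # but only return matching pages
        },
    }

job = build_crawl_job("https://example.com", ".*/products/.*")
print(job["filters"]["process"])
```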


Scheduler

For automatic execution of recurring scraping and parsing jobs, you can leverage the Scheduler feature to create schedules. We recommend using this feature together with cloud integration to retrieve data at specified intervals.
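A schedule definition combining recurring jobs with cloud delivery might be sketched like this. The field names (`cron`, `items`, `end_time`) are assumptions for illustration; the Scheduler documentation describes the actual format.

```python
# Sketch of a schedule: run the given job payloads on a cron cadence
# until end_time. Field names are assumptions for illustration.

def build_schedule(cron: str, jobs: list[dict], end_time: str) -> dict:
    """Run the given job payloads repeatedly until end_time."""
    return {
        "cron": cron,          # e.g. "0 6 * * *" = every day at 06:00
        "items": jobs,         # the job payloads to execute each run
        "end_time": end_time,  # timestamp after which the schedule stops
    }

schedule = build_schedule(
    "0 6 * * *",
    [{
        "url": "https://example.com",
        "storage_type": "s3",        # pair scheduling with cloud delivery
        "storage_url": "my-bucket",  # so each run lands in your bucket
    }],
    "2032-12-31 00:00:00",
)
print(schedule["cron"])
```

Pairing each scheduled job with cloud delivery, as recommended above, means fresh data simply appears in your storage on every run.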

🙌 Need assistance? Contact support via live chat or send a message to [email protected].

🎯 Want a custom solution or a free trial? Contact sales by booking a call. For any questions, such as custom pricing, advice, or a free trial, drop us a line at [email protected].
