How to start using E-Commerce Scraper API
Written by Josh

Create an API user

Head to the Oxylabs dashboard and create an account. Pick a subscription plan that suits you, or go with a free trial, and then create an API user.

Make a cURL request

You can use this cURL code for testing:

curl 'https://realtime.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "amazon_search", "query": "shoes", "domain": "com", "geo_location": "90210", "parse": true}'

Replace USERNAME and PASSWORD with the API credentials you’ve just created and run the request from a terminal, Postman, or any other setup. The output will contain scraped and parsed search results for the term “shoes” on the amazon.com domain. Since the API connects through a proxy server located in Beverly Hills, California, the results will also be localized for that ZIP code.
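If you have jq installed, you can extract just the parsed data from the response. This is a minimal sketch assuming the Realtime response wraps the output in a results array whose first element holds a content field:

# Pipe the Realtime response to jq and keep only the parsed content
curl 'https://realtime.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "amazon_search", "query": "shoes", "domain": "com", "geo_location": "90210", "parse": true}' \
  | jq '.results[0].content'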

See this quick video tutorial on how to use E-Commerce Scraper API:

🔬 Test our APIs in the Scraper APIs Playground for free via the Oxylabs dashboard.

Integration methods

You can integrate E-Commerce Scraper API into your infrastructure using three different methods:

  • Realtime – a synchronous method where you keep the connection open until the job is finished.

  • Push-Pull – an asynchronous method where you make a second request to the API to retrieve the results once the job is finished (see the sketch after this list).

  • Proxy Endpoint – a synchronous method where you use our endpoint as a proxy.

To learn more about these integration methods, visit the respective pages in our documentation or see this detailed blog post.
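As a quick illustration, here is what the Push-Pull flow and the Proxy Endpoint can look like with cURL. Treat this as a minimal sketch based on our Push-Pull and Proxy Endpoint documentation: JOB_ID stands for the id value returned by the first request, and the -k flag skips certificate verification, which the proxy entry point requires.

# 1. Push-Pull: submit the job; the API responds immediately with the job's metadata
curl 'https://data.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "amazon_search", "query": "shoes", "parse": true}'

# 2. Once the job status is "done", pull the results using the id from the first response
curl 'https://data.oxylabs.io/v1/queries/JOB_ID/results' \
  --user 'USERNAME:PASSWORD'

# Proxy Endpoint alternative: send the target URL through our entry point as a proxy
curl -k \
  -x 'realtime.oxylabs.io:60000' \
  --user 'USERNAME:PASSWORD' \
  'https://www.amazon.com/s?k=shoes'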

Sources and parameters

E-Commerce Scraper API has dedicated parsers for the most popular online marketplaces; each source has its own documentation page listing the parameters it accepts.

🔎 For parameters applicable across all APIs, such as JavaScript rendering and user agents, refer to the Global Parameter Values page.
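For example, switching the source parameter selects a different dedicated parser. The sketch below assumes the amazon_product source, which takes an ASIN as the query (B07FZ8S74R is a placeholder ASIN; check the source's documentation page for its exact parameters), and adds the global user_agent_type parameter:

# Scrape and parse a single Amazon product page, requesting a desktop user agent
curl 'https://realtime.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "amazon_product", "query": "B07FZ8S74R", "domain": "com", "parse": true, "user_agent_type": "desktop"}'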

Additional free features

E-Commerce Scraper API also offers additional free features that ease your projects and add flexibility:

  • Headless Browser – executes JavaScript rendering and lets you craft your own browser instructions, such as entering text, clicking elements, scrolling pages, and more (see the sketch after this list).

  • Custom Parser – allows you to create your own parsing and data processing logic that’s executed on a raw scraping result.

  • Web Crawler – enables you to crawl any site, select the useful content, and receive it in bulk. Use cases include discovering URLs, crawling all pages on a site, and indexing every URL on a domain.

  • Scheduler – gives you the capability to automate recurring scraping and parsing jobs by creating schedules.
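As an illustration of the Headless Browser, adding render: html to a payload asks the API to load the page in a headless browser (executing its JavaScript) before returning it. The sketch below assumes the universal_ecommerce source and uses a hypothetical target URL:

# Render the page in a headless browser before returning the HTML
curl 'https://realtime.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "universal_ecommerce", "url": "https://www.example.com/product/123", "render": "html"}'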


🙌 Need assistance? Contact support via live chat or send a message to [email protected].

🎯 Want a custom solution or a free trial? Book a call with our sales team, or drop us a line at [email protected] with any questions about custom pricing, advice, or a free trial.
