How to start using Web Scraper API
Written by Josh
Updated over 2 months ago

It's simple to start using Oxylabs' Web Scraper API and all of its features. Just follow a few easy steps, and you'll be all set.

Create an API user

Begin by creating an account on the Oxylabs dashboard. After that, select a pricing plan or pick a free trial and create an API user. You will use these credentials to authorize your API requests later on.

Make a cURL request

After creating a user, you'll see a few code examples for testing Web Scraper API. Copy and paste the code into your terminal or preferred tool; for manual testing, we recommend Postman. You can start scraping Amazon, Google, or any other target right away. Here are cURL examples for your first request:

Amazon:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "amazon_product", "query": "B07FZ8S74R", "geo_location": "90210", "parse": true}'

Google:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "google_search", "query": "adidas", "geo_location": "California,United States", "parse": true}'

Other:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "universal", "url": "https://sandbox.oxylabs.io/"}'

Replace the USERNAME and PASSWORD with the API credentials you’ve just created, and then run the request via terminal, Postman, or any other setup.
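
If you want to inspect the output later, you can also write the response straight to a file. Below is a convenience sketch reusing the Google request above; with "parse": true the Realtime response comes back as JSON, so a .json extension is a natural fit:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "google_search", "query": "adidas", "geo_location": "California,United States", "parse": true}' -o response.json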

You can also watch our short video tutorial demonstrating how to use Web Scraper API.

🔬 Test our Web Scraper API in the Playground, which is accessible via the dashboard.

Integration methods

There are three different methods you can use to integrate Web Scraper API:

  • Realtime – provides a synchronous method where you have to keep the connection open until the job is finished.

  • Push-Pull – enables an asynchronous method where you’ll have to make another request to the API to retrieve results once the job is finished (see the sketch after this list).

  • Proxy Endpoint – offers a synchronous method where you can use our endpoint as a proxy.
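
To make the difference concrete, here's a minimal sketch of the Push-Pull flow and a Proxy Endpoint request. The data.oxylabs.io endpoint, the JOB_ID results path, and the realtime.oxylabs.io:60000 proxy address below are based on our documentation, so confirm them on the respective integration pages before building on them.

Submit a job via Push-Pull:

curl 'https://data.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "universal", "url": "https://sandbox.oxylabs.io/"}'

Retrieve the results once the job is finished (replace JOB_ID with the id returned by the call above):

curl 'https://data.oxylabs.io/v1/queries/JOB_ID/results' --user 'USERNAME:PASSWORD'

Send the same request through the Proxy Endpoint:

curl -k -x realtime.oxylabs.io:60000 -U 'USERNAME:PASSWORD' 'https://sandbox.oxylabs.io/'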

🛠️ For more information, please refer to the respective integration pages in our documentation or this comprehensive blog post.

Targets and parameters

Web Scraper API can effectively gather public data from any website, including e-commerce, search engines, online travel agencies, real estate platforms, and others. Visit our documentation to learn more about the parameters you can use and to see code examples.

🧠 To use features such as JavaScript rendering and custom user agents, explore the Features pages in our documentation.
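
As a quick illustration, JavaScript rendering and a device-type user agent are requested through extra parameters in the request body. The render and user_agent_type values below reflect the Features pages, so verify the exact options there before relying on them:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "universal", "url": "https://sandbox.oxylabs.io/", "render": "html", "user_agent_type": "desktop"}'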

Additional free features

If you want to extend the API functionality in your projects, you can use the following features integrated into Web Scraper API:

  • OxyCopilot – generates ready-to-use web scraping code in seconds. This feature can be accessed via the API Playground on our dashboard.

  • Headless Browser – renders JavaScript and lets you define custom browser instructions, such as entering text, clicking elements, scrolling pages, and more.

  • Custom Parser – enables you to create your own parsing and data processing logic that’s executed on a raw scraping result (see the sketch after this list).

  • Web Crawler – crawls any site, allowing you to select useful content and receive it in bulk. You can use it to perform URL discovery, crawl all pages on a site, index all URLs on a domain, and more.

  • Scheduler – provides a way for you to automate recurring scraping and parsing tasks by creating schedules.
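
As one illustration of these features, below is a sketch of a Custom Parser request that extracts a page title with an XPath expression. The parsing_instructions structure (the _fns, _fn, and _args keys and the xpath_one function) is an assumption based on the Custom Parser documentation; check the exact schema there before relying on it.

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "universal", "url": "https://sandbox.oxylabs.io/", "parse": true, "parsing_instructions": {"title": {"_fns": [{"_fn": "xpath_one", "_args": ["//title/text()"]}]}}}'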


🙌 Need assistance? Contact support via live chat or send a message to [email protected].

🎯 Want a custom solution or a free trial? Contact sales by booking a call. For any questions, such as custom pricing, advice, or a free trial, drop us a line at [email protected].
