`maxDepth`. You can also limit which paths the crawler follows by passing a regular expression in `includePaths`.
This scraper job is asynchronous: you'll receive a `jobId`, and results can be fetched by polling or delivered to a webhook.
Example Request
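The original request example is not reproduced here. Below is a minimal sketch of creating a crawler job: only `maxDepth` and `includePaths` are documented parameters; the base URL, the `POST /scrapers/jobs` path, the `api_key` query parameter, and the other payload fields are assumptions for illustration.

```python
import json
import urllib.request

API_BASE = "https://api.example-scraper.com/v1"  # placeholder base URL

job = {
    "url": "https://example.com",    # assumed start-URL field
    "maxDepth": 2,                   # follow links at most 2 levels deep
    "includePaths": ["^/blog/.*"],   # regex filter on which paths are followed
}

# Build (but do not send) the POST request for the crawler job.
req = urllib.request.Request(
    f"{API_BASE}/scrapers/jobs?api_key=YOUR_API_KEY",
    data=json.dumps(job).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method, req.full_url)
# Sending it with urllib.request.urlopen(req) would return a JSON body
# containing the jobId used for polling or webhook delivery.
```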
Use Web Scraping API Params
You can use any parameter from the Web Scraping API inside a Websites Crawler job, including `extractRules`, `aiExtractRules`, `headers`, `proxyType`/`proxyCountry`, `blockResources`, `jsScenario`, `outputFormat`, and more.
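As a sketch, a crawler job mixing crawler options with Web Scraping API parameters might look like the payload below. The parameter names come from the list above, but the sample values and the flat nesting alongside the crawler options are assumptions.

```python
import json

# Hypothetical crawler job payload; values shown are illustrative only.
job = {
    "url": "https://example.com",              # assumed start-URL field
    "maxDepth": 1,
    # Web Scraping API parameters, applied to every crawled page:
    "extractRules": {"title": "h1"},           # CSS-selector extraction rule
    "headers": {"Accept-Language": "en-US"},   # custom request headers
    "proxyType": "premium",                    # assumed value
    "proxyCountry": "us",
    "blockResources": True,                    # skip images/CSS for speed
    "outputFormat": "json",                    # assumed value
}
print(json.dumps(job, indent=2))
```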
Get Scraper Job Status
To get the status of an existing scraper job, make a GET request to the endpoint `/scrapers/jobs/:jobId`:
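A simple polling loop over this endpoint can be sketched as follows. The `status` field and its `finished`/`error` values are assumptions about the response shape; `fetch_status` stands in for the actual HTTP GET so the logic is self-contained.

```python
import time

def wait_for_job(fetch_status, interval: float = 5.0, max_polls: int = 120) -> dict:
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status is a zero-argument callable that performs the
    GET /scrapers/jobs/:jobId request and returns the parsed JSON.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status.get("status") in ("finished", "error"):  # assumed states
            return status
        time.sleep(interval)  # wait before polling again
    raise TimeoutError("scraper job did not finish in time")
```

In production, `fetch_status` would wrap an HTTP GET to `/scrapers/jobs/:jobId` with your API key; for long-running crawls, a webhook avoids polling altogether.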
Response
Webhook
The webhook will notify you of events related to the scraper job. Here is an example webhook payload for the `scraper.data.scraped` event:
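The payload itself is not reproduced here. As a sketch, a receiver might dispatch on the event name like this; the `scraper.data.scraped` event name is from the docs, but the surrounding fields (`event`, `jobId`, `data`) are assumed payload structure.

```python
import json

def handle_webhook(raw_body: str) -> str:
    """Dispatch a webhook delivery based on its event name."""
    payload = json.loads(raw_body)
    event = payload.get("event")
    if event == "scraper.data.scraped":
        # A crawled page's data arrived; store or process it here.
        return f"scraped data for job {payload.get('jobId')}"
    return f"ignored event {event}"

# Hypothetical example delivery:
example = json.dumps({
    "event": "scraper.data.scraped",
    "jobId": "job_123",  # hypothetical id
    "data": {"url": "https://example.com", "title": "Example"},
})
print(handle_webhook(example))
```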