You can stop a running scraper job at any time using its jobId. This is useful if you submitted a job with the wrong parameters or no longer need the data.
Stop Request
Only active jobs (in_progress status) can be stopped.
cURL
curl --request DELETE \
--url 'https://api.hasdata.com/scrapers/jobs/:jobId' \
--header 'Content-Type: application/json' \
--header 'x-api-key: <your-api-key>'
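The same DELETE request can be issued from any HTTP client. Below is a minimal Python sketch using only the standard library; the `build_stop_request` and `stop_job` helper names are illustrative, and you supply your own API key and jobId:

```python
import json
import urllib.request

API_BASE = "https://api.hasdata.com/scrapers/jobs"


def build_stop_request(job_id: str, api_key: str) -> dict:
    """Assemble the URL and headers for a stop (DELETE) request."""
    return {
        "url": f"{API_BASE}/{job_id}",
        "headers": {
            "Content-Type": "application/json",
            "x-api-key": api_key,
        },
    }


def stop_job(job_id: str, api_key: str) -> dict:
    """Send the DELETE request and return the parsed JSON response.

    Requires a valid API key and network access.
    """
    info = build_stop_request(job_id, api_key)
    req = urllib.request.Request(
        info["url"], headers=info["headers"], method="DELETE"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Placeholder credentials; replace with your own values.
    result = stop_job("dd1a8c53-2d47-4444-977d-8d653a6a3c82", "<your-api-key>")
    print(result)
```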
Behavior
If the job is still running, all in-progress pages will finish scraping, but no new pages will be started.
Any data collected up to that point is preserved and can still be fetched.
Credits are only charged for successfully scraped pages, even if the job was stopped early.
If the job has already finished or failed, the stop request has no effect.
Response Example
{
  "jobId": "dd1a8c53-2d47-4444-977d-8d653a6a3c82",
  "status": "stopped",
  "creditsSpent": 200,
  "dataRowsCount": 20,
  "input": {
    /* job parameters */
  }
}
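Since stopping an already finished or failed job has no effect, it can be useful to branch on the returned status rather than assume the job was stopped. A small Python sketch; the `summarize_stop_response` helper is illustrative, and status values other than "stopped" are assumptions not confirmed by this page:

```python
def summarize_stop_response(payload: dict) -> str:
    """Turn a stop-request response into a human-readable summary."""
    status = payload["status"]
    if status == "stopped":
        # Data collected before the stop is preserved, and credits
        # are charged only for successfully scraped pages.
        return (
            f"Job {payload['jobId']} stopped: "
            f"{payload['dataRowsCount']} rows preserved, "
            f"{payload['creditsSpent']} credits charged."
        )
    # Per the behavior notes, the stop request has no effect on jobs
    # that have already finished or failed.
    return f"Job {payload['jobId']} was not running (status: {status})."


example = {
    "jobId": "dd1a8c53-2d47-4444-977d-8d653a6a3c82",
    "status": "stopped",
    "creditsSpent": 200,
    "dataRowsCount": 20,
    "input": {},
}
print(summarize_stop_response(example))
```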