Track SERP rankings, monitor competitors, research keywords, and audit SEO presence at scale. Returns position, keyword, title, link, displayedLink, snippet, highlighted words, source, rich snippet blocks, and sitelinks for every query × location combination. This scraper job is asynchronous: you'll receive a jobId and can fetch results via polling or webhook delivery.

Documentation Index

Fetch the complete documentation index at: https://docs.hasdata.com/llms.txt
Use this file to discover all available pages before exploring further.
Request Cost

Each row of data returned consumes 2 credits from your balance.

Example Request
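No example body survives on this page, so here is a minimal sketch of what a job payload might look like, built from the parameters described under Job Parameters. The field names (`keywords`, `location`, `resultsPerKeyword`) and the submission call are assumptions for illustration; check the API reference for the exact endpoint and parameter names.

```python
import json

# Sketch of a job payload. Field names are assumptions, not confirmed
# parameter names from the API reference.
payload = {
    "keywords": "best running shoes\ntrail running shoes",  # one keyword per line
    "location": "New York,New York,United States",          # Google canonical location
    "resultsPerKeyword": 10,                                # min. 10; 0 means no limit
}

# Submitting the job would be an authenticated POST, e.g. (endpoint assumed):
# resp = requests.post(JOBS_ENDPOINT, headers={"x-api-key": API_KEY}, json=payload)
# job_id = resp.json()["jobId"]
print(json.dumps(payload, indent=2))
```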
Job Parameters
Put each keyword on a separate line.
Google canonical location for the search.
Results per keyword (min. 10). The default value of 0 means no limit.
Supported Enrichments
Request any of the fields below via the `enrichments` array in your job payload.
| ID | Title | Description | Cost per Request |
|---|---|---|---|
| email | Email Address | Website-associated email address | 5 credits |
| phone | Phone Number | Website-associated phone number | 5 credits |
| revenue | Revenue | Company revenue | 5 credits |
| traffic | Website Traffic | Website traffic | 5 credits |
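Putting the costs together, here is a back-of-envelope credit estimate. It assumes each requested enrichment is billed once per returned row, which is a reading of "Cost per Request" above, not a documented billing rule; verify the exact billing unit against your account.

```python
# Rough credit estimate: 2 credits per returned row, plus 5 credits per
# enrichment request (assumed to be one request per row per enrichment).
def estimate_credits(rows: int, enrichments: list[str]) -> int:
    ROW_COST = 2
    ENRICHMENT_COST = 5  # same for email, phone, revenue, traffic
    return rows * ROW_COST + rows * ENRICHMENT_COST * len(enrichments)

# 100 rows with email + phone enrichment:
print(estimate_credits(100, ["email", "phone"]))  # → 1200
```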
Getting Results
Webhooks
Receive real-time updates when your scraper job starts, completes, or collects data.
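A hedged sketch of consuming such an update on your side. The payload fields (`jobId`, `event`) are assumptions for illustration, not the documented webhook schema; check the webhook documentation for the real field names before relying on them.

```python
import json

# Parse a (hypothetical) webhook body and report the event.
# Field names "jobId" and "event" are assumptions, not the real schema.
def handle_webhook(raw_body: bytes) -> str:
    update = json.loads(raw_body)
    job_id = update.get("jobId", "unknown")
    event = update.get("event", "unknown")  # e.g. started / data collected / completed
    return f"job {job_id}: {event}"

print(handle_webhook(b'{"jobId": "abc123", "event": "completed"}'))
```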
Results API
Use the Results API to fetch your data using the jobId, with support for polling and pagination.

Stopping a Job
Cancel an active scraper job early if it’s no longer needed or you want to save credits.
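The polling-then-cancel flow above can be sketched with a generic helper. Here `fetch_status` stands in for a real Results API call keyed by the jobId, and the terminal status names are assumptions, not documented values.

```python
import time

# Generic polling loop around the Results API. `fetch_status` is any
# callable returning the job's current status string; the terminal
# status names below are assumptions, not documented values.
def wait_for_job(fetch_status, interval_s=5.0, timeout_s=600.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("finished", "error", "canceled"):
            return status
        time.sleep(interval_s)
    # Past the deadline: a real client might stop the job here to save
    # credits, e.g. via the stop-job endpoint described above.
    raise TimeoutError("job did not finish before the timeout")
```

For example, `wait_for_job(lambda: get_status(job_id), interval_s=10)` would block until the job reaches a terminal state or ten minutes pass (where `get_status` is your own wrapper around the Results API).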