
Documentation Index

Fetch the complete documentation index at: https://docs.hasdata.com/llms.txt

Use this file to discover all available pages before exploring further.

Google SERP Scraper

Track SERP rankings, monitor competitors, research keywords, and audit SEO presence at scale. Each result row includes position, keyword, title, link, displayedLink, snippet, highlighted words, source, rich snippet blocks, and sitelinks for every query × location combination. This scraper job is asynchronous: you’ll receive a jobId and can fetch results via polling or webhook delivery.

Request Cost

Each row of data returned consumes 2 credits from your balance.
Credits are deducted only for successful rows.
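Since a job's cost scales with the number of rows returned, you can put an upper bound on spend before submitting. A minimal sketch (the 2-credits-per-row figure is from this page; the helper name is ours):

```python
def estimate_credits(num_keywords: int, limit_per_keyword: int,
                     credits_per_row: int = 2) -> int:
    """Upper bound on credits for a job: rows = keywords x limit, 2 credits each."""
    return num_keywords * limit_per_keyword * credits_per_row

# 2 keywords, up to 100 results each -> at most 400 credits
print(estimate_credits(2, 100))
```

The actual charge may be lower, since credits are deducted only for rows that are successfully returned.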

Example Request

curl --request POST \
  --url 'https://api.hasdata.com/scrapers/google-serp/jobs' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <your-api-key>' \
  --data '{"keywords":["pizza in new york","beef in new york"],"location":"Austin,Texas,United States","limit":0}'
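The same request can be built in Python with only the standard library. A sketch mirroring the curl example above (the payload fields are from this page; the exact response schema beyond jobId is not shown here, so the commented-out parsing is an assumption):

```python
import json
import urllib.request

API_KEY = "<your-api-key>"  # replace with your key

payload = {
    "keywords": ["pizza in new york", "beef in new york"],
    "location": "Austin,Texas,United States",
    "limit": 0,  # 0 = no limit
}

req = urllib.request.Request(
    "https://api.hasdata.com/scrapers/google-serp/jobs",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    method="POST",
)
# resp = urllib.request.urlopen(req)   # uncomment to actually submit the job
# job_id = json.load(resp)["jobId"]    # field name per this page; full schema assumed
```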

Job Parameters

keywords
string[]
required
Put each keyword on a separate line.
location
string
required
Google canonical location for the search.
limit
number
Results per keyword (minimum 10). The default value of 0 means no limit.
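The parameter rules above can be checked client-side before submitting a job. A sketch (the validation helper is ours, not part of the API; the rules it enforces are the ones documented above):

```python
def validate_job(keywords, location, limit=0):
    """Check a job payload against the documented parameter rules."""
    if not keywords or not all(isinstance(k, str) and k for k in keywords):
        raise ValueError("keywords must be a non-empty list of strings")
    if not location:
        raise ValueError("location is required")
    if limit != 0 and limit < 10:
        raise ValueError("limit must be 0 (no limit) or at least 10")
    return {"keywords": keywords, "location": location, "limit": limit}

print(validate_job(["pizza in new york"], "Austin,Texas,United States")["limit"])
```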

Supported Enrichments

Request any of the fields below via the enrichments array in your job payload.
ID        Title            Description                         Cost per Request
email     Email Address    Website-associated email address    5 credits
phone     Phone Number     Website-associated phone number     5 credits
revenue   Revenue          Company revenue                     5 credits
traffic   Website Traffic  Website traffic                     5 credits
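Enrichments are requested by adding their IDs to the enrichments array in the job payload. A sketch (the IDs and 5-credit costs are from the table above; how enrichment costs combine with the 2-credit row cost is not spelled out on this page, so the sum below is only the enrichment portion):

```python
ENRICHMENT_COSTS = {  # credits per request, from the table above
    "email": 5,
    "phone": 5,
    "revenue": 5,
    "traffic": 5,
}

payload = {
    "keywords": ["pizza in new york"],
    "location": "Austin,Texas,United States",
    "limit": 10,
    "enrichments": ["email", "phone"],  # extra fields to collect per result
}

extra = sum(ENRICHMENT_COSTS[e] for e in payload["enrichments"])
print(extra)  # enrichment credits per request for this payload
```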

Getting Results

Webhooks

Receive real-time updates when your scraper job starts, completes, or collects data.

Results API

Use the Results API to fetch your data using the jobId, with support for polling and pagination.
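A polling loop against the Results API can be sketched generically. Here `fetch` is any callable that returns the job's status payload as a dict; in real use it would GET the Results API with your jobId and API key. The "status"/"finished"/"data" shape below is illustrative, not the real response schema — check the Results API page for the actual fields:

```python
import time

def poll_results(fetch, job_id, interval=0.0, max_attempts=10):
    """Poll until the job reports a terminal status, then return its data."""
    for _ in range(max_attempts):
        state = fetch(job_id)
        if state.get("status") == "finished":
            return state.get("data", [])
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in {max_attempts} polls")

# Stub fetcher: pretends the job finishes on the third poll.
calls = {"n": 0}
def fake_fetch(job_id):
    calls["n"] += 1
    if calls["n"] < 3:
        return {"status": "in_progress"}
    return {"status": "finished", "data": [{"position": 1, "title": "Example"}]}

print(poll_results(fake_fetch, "job-123"))
```

For long-running jobs, a webhook (previous section) avoids polling entirely.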

Stopping a Job

Cancel an active scraper job early if it’s no longer needed or you want to save credits.
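Cancellation is another authenticated request against the job. A sketch only — the /cancel path and POST method below are hypothetical; check the Stopping a Job page for the real endpoint:

```python
import urllib.request

def cancel_request(job_id: str, api_key: str) -> urllib.request.Request:
    """Build a cancellation request. NOTE: the /cancel path and method
    are assumptions for illustration, not the documented endpoint."""
    return urllib.request.Request(
        f"https://api.hasdata.com/scrapers/google-serp/jobs/{job_id}/cancel",
        headers={"x-api-key": api_key},
        method="POST",
    )

req = cancel_request("job-123", "<your-api-key>")
print(req.get_method())
```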