The Google SERP Scraper extracts search result data from Google, including titles, URLs, snippets, and more. A single job can cover multiple keywords for a specified location.

The scraper runs asynchronously and returns a jobId, which you can use to poll for results or receive updates via a webhook.

Example Request

curl --request POST \
  --url 'https://api.hasdata.com/scrapers/google-serp/jobs' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <your-api-key>' \
  --data '{
    "keywords": ["best coffee beans", "how to brew espresso"],
    "location": "United States"
  }'
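A successful request returns a jobId. The exact response shape below is an assumption for illustration (the `id` field name and sample values are not taken from this page); in practice you would capture the jobId from the real response for later polling:

```shell
# Sample response body for illustration only — the "id" field name
# and values are assumptions, not the documented schema.
response='{"id": "abc123", "status": "pending"}'

# Extract the jobId with jq for use in later Results API calls.
jobId=$(echo "$response" | jq -r '.id')
echo "$jobId"   # → abc123
```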

Job Parameters

keywords
string[]
required

List of search queries, like “best running shoes” or “top tech blogs”

location
string
required

Google canonical location string, e.g. “United States”, “Germany”, “New York, NY”

Getting Results

Webhooks

Receive real-time updates when your scraper job starts, completes, or collects data.
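If webhooks are registered at job creation, the request might look like the sketch below. The `webhook` parameter name and callback URL are assumptions, not the documented API; check the Webhooks reference for the actual field:

```shell
# Sketch only: "webhook" is an assumed parameter name, and
# https://example.com/hasdata-callback is a placeholder endpoint.
curl --request POST \
  --url 'https://api.hasdata.com/scrapers/google-serp/jobs' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <your-api-key>' \
  --data '{
    "keywords": ["best coffee beans"],
    "location": "United States",
    "webhook": "https://example.com/hasdata-callback"
  }'
```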

Results API

Use the Results API to fetch your data using the jobId, with support for polling and pagination.
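A polling loop against the Results API could be sketched as follows. The results endpoint path and the `status` field with a `finished` value are assumptions; substitute the routes and fields from the Results API reference:

```shell
# Sketch of polling by jobId. The URL path and the "status"/"finished"
# response fields are assumptions — verify them in the Results API docs.
jobId="<your-job-id>"
while true; do
  status=$(curl -s "https://api.hasdata.com/scrapers/google-serp/jobs/$jobId" \
    --header 'x-api-key: <your-api-key>' | jq -r '.status')
  [ "$status" = "finished" ] && break
  sleep 5   # back off between polls to avoid hammering the API
done
```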

Stopping a Job

Cancel an active scraper job early if it’s no longer needed or you want to save credits.
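A cancellation request might follow the shape below. The DELETE method and endpoint path are assumptions, not the documented route; see the Stopping a Job reference for the actual call:

```shell
# Hypothetical cancel request — method and path are assumptions.
curl --request DELETE \
  --url 'https://api.hasdata.com/scrapers/google-serp/jobs/<jobId>' \
  --header 'x-api-key: <your-api-key>'
```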