HasData offers two ways to extract data: APIs and Scraper Jobs. Both are powered by the same backend, but differ in how they’re triggered, what they’re best suited for, and how results are delivered.

When to Use APIs

Use APIs when you need:

  • Fast, real-time responses
  • One-off requests with a known URL or query
  • Integration into apps, bots, dashboards, or workflows that expect immediate data

APIs are synchronous: you send a request and get the result back in the same HTTP response, as in the sketch below.
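For illustration, here is a minimal sketch of that pattern in Python using the `requests` library. The endpoint path, the `x-api-key` header, and the query parameters are placeholders rather than HasData's documented values; check the API reference for the real ones.

```python
import requests

API_KEY = "YOUR_API_KEY"

# Hypothetical SERP endpoint and parameters -- consult the API
# reference for the actual path, header name, and query fields.
response = requests.get(
    "https://api.hasdata.com/scrape/google/serp",  # placeholder URL
    headers={"x-api-key": API_KEY},
    params={"q": "coffee shops in austin"},
    timeout=30,
)
response.raise_for_status()

results = response.json()  # the scraped data arrives in this same response
print(results)
```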

Example Use Cases

  • Search Google SERP and get results back immediately
  • Fetch product data from Amazon or a listing from Zillow
  • Grab metadata from a specific page

When to Use Scraper Jobs

Use Scraper Jobs when you need to:

  • Scrape a large number of pages or listings
  • Crawl through paginated results
  • Extract data from complex platforms (e.g. Google Maps, Zillow)
  • Run structured scraping at scale

Scraper Jobs are asynchronous: you submit a job with parameters such as URLs, filters, or crawl depth, and it runs in the background. To get the results, you can (see the sketch after this list):

  • Receive the result via webhook
  • Or poll for status and download the result when it’s ready
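Here is a sketch of the submit-then-poll flow, again in Python with `requests`. The job endpoints, the `x-api-key` header, and the status and result fields are assumptions made for illustration; the actual paths and payload shapes are defined in the Scraper Jobs reference.

```python
import time

import requests

API_KEY = "YOUR_API_KEY"
HEADERS = {"x-api-key": API_KEY}  # placeholder header name
BASE = "https://api.hasdata.com"  # placeholder base URL

# 1. Submit the job with its parameters (endpoint and fields hypothetical).
job = requests.post(
    f"{BASE}/scrapers/google-maps/jobs",
    headers=HEADERS,
    json={"query": "restaurants in New York", "maxResults": 500},
    timeout=30,
).json()
job_id = job["id"]

# 2. Poll for status until the job finishes.
while True:
    status = requests.get(
        f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30
    ).json()
    if status["status"] in ("finished", "failed"):  # assumed status values
        break
    time.sleep(10)  # back off between polls

# 3. Download the result when it's ready.
if status["status"] == "finished":
    rows = requests.get(status["resultUrl"], headers=HEADERS, timeout=30).json()
    print(f"Got {len(rows)} rows")
```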

Example Use Cases

  • Crawl all listings for “restaurants in New York” from Google Maps
  • Scrape paginated product results from Amazon or Redfin
  • Extract 1,000+ real estate listings from Zillow with filters
  • Crawl all pages from a website and extract structured content (e.g. blog posts, articles)

Jobs are designed for bulk extraction, crawling, or anything that can’t be done in a single API call.
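If you opt for webhooks instead of polling, a small HTTP handler is enough to receive the payload when a job finishes. This sketch uses Flask, and the payload fields (`jobId`, `data`) are hypothetical, not the documented schema.

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical webhook receiver: HasData would POST the job result
# (or a link to it) to this URL when the job completes. The payload
# fields used here are illustrative, not the documented schema.
@app.route("/hasdata-webhook", methods=["POST"])
def hasdata_webhook():
    payload = request.get_json(force=True)
    job_id = payload.get("jobId")
    rows = payload.get("data", [])
    print(f"Job {job_id} finished with {len(rows)} rows")
    return "", 204  # acknowledge receipt with no body

if __name__ == "__main__":
    app.run(port=8000)
```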

Summary

Feature      | API              | Scraper Job
------------ | ---------------- | ----------------------------------
Execution    | Real-time        | Queued / background
Response     | Sync (immediate) | Async (webhook or polling)
Best for     | Single queries   | Multi-page or high-volume scraping
Credit Model | Per request      | Per data row