Build local lead lists, compare competitor offerings, and aggregate review data. Each result includes the business name, URL, place ID/alias, phone, website, menu, price level, rating, review count, full address and location, categories, services, features, highlights, operating hours, thumbnail, and full image gallery. This scraper job is asynchronous: you'll receive a jobId and can fetch results via polling or webhook delivery.

Request Cost

Each row of data returned consumes 3 credits from your balance.
Credits are deducted only for successful rows.
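The total deduction can be estimated from the row count: 3 credits per successful row, plus 5 credits per enrichment field (see the enrichment table below). A minimal sketch, assuming enrichments are billed per row:

```python
def estimate_credits(rows: int, enrichments: int = 0) -> int:
    """Estimate the credit cost of a job.

    3 credits per successful row, plus 5 credits per enrichment
    field per row (per-row enrichment billing is an assumption).
    """
    ROW_COST = 3
    ENRICHMENT_COST = 5
    return rows * (ROW_COST + enrichments * ENRICHMENT_COST)
```

For example, 100 rows with no enrichments would consume 300 credits; 10 rows with two enrichment fields would consume 130.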

Example Request

curl --request POST \
  --url 'https://api.hasdata.com/scrapers/yelp/jobs' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <your-api-key>' \
  --data '{"limit":100,"keyword":"Pizza","locations":[],"domain":"www.yelp.com"}'
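The same request in Python, using only the standard library. The endpoint, headers, and payload fields are taken from the curl example above; the jobId response field follows the description at the top of this page:

```python
import json
import urllib.request

API_URL = "https://api.hasdata.com/scrapers/yelp/jobs"

def build_job_payload(keyword, locations, limit=0, domain="www.yelp.com"):
    """Assemble the job payload from the parameters documented below."""
    return {"limit": limit, "keyword": keyword,
            "locations": locations, "domain": domain}

def submit_job(api_key, payload):
    """POST the job and return the jobId from the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["jobId"]
```

Usage: `submit_job("<your-api-key>", build_job_payload("Pizza", [], limit=100))` mirrors the curl call above and returns the jobId to poll with.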

Job Parameters

limit (number, required)
Maximum number of rows to return (0 = unlimited).

keyword (string, required)
Search query for Yelp businesses to scrape.

locations (array of strings, required)
Locations to search for businesses with the given keyword. Use predefined Country–State pairs, or enter your own locations, one per line (as supported by Yelp).

domain (string, required)
Yelp domain to scrape (e.g. www.yelp.com).

Supported Enrichments

Request any of the fields below via the enrichments array in your job payload.
ID             Title                 Description                      Cost per Request
email          Email Address         Business email address           5 credits
website        Website URL           Business website URL             5 credits
phone          Phone Number          Business phone number            5 credits
linkedinUrl    LinkedIn Profile      Business LinkedIn page URL       5 credits
facebookUrl    Facebook Profile      Business Facebook page URL       5 credits
instagramUrl   Instagram Profile     Business Instagram profile URL   5 credits
xUrl           X (Twitter) Profile   Business X profile URL           5 credits
revenue        Revenue               Business revenue                 5 credits
traffic        Website Traffic       Business website traffic         5 credits
funding        Funding Info          Business funding information     5 credits
founded        Founded Year          Business founded year            5 credits
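A small helper for attaching enrichments to a job payload. The enrichments field name comes from the note above; the validation against the table's field IDs is just a convenience, not part of the API:

```python
def with_enrichments(payload: dict, fields: list) -> dict:
    """Return a copy of a job payload with the enrichments array set.

    Field IDs come from the enrichment table above (e.g. "email",
    "linkedinUrl"). The client-side validation here is optional.
    """
    allowed = {"email", "website", "phone", "linkedinUrl", "facebookUrl",
               "instagramUrl", "xUrl", "revenue", "traffic", "funding",
               "founded"}
    unknown = set(fields) - allowed
    if unknown:
        raise ValueError(f"unknown enrichment IDs: {sorted(unknown)}")
    return {**payload, "enrichments": fields}
```

Remember that each requested enrichment adds 5 credits per request on top of the 3-credit row cost.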

Getting Results

Webhooks

Receive real-time updates when your scraper job starts, completes, or collects data.
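A sketch of how a receiver might dispatch on those events. The actual webhook payload schema is not documented on this page, so every field name and status value below is an illustrative assumption:

```python
def handle_webhook(event: dict) -> str:
    """Dispatch on a hypothetical 'status' field.

    The real payload schema may differ -- consult the webhook
    documentation before relying on these names.
    """
    status = event.get("status")
    if status == "started":
        return f"job {event.get('jobId')} started"
    if status == "data_collected":
        return f"job {event.get('jobId')} collected a batch"
    if status == "completed":
        return f"job {event.get('jobId')} finished; fetch results"
    return "ignored"
```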

Results API

Use the Results API to fetch your data using the jobId, with support for polling and pagination.
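A polling loop can be sketched independently of the exact Results API route by injecting the fetch function. The terminal status values used here are assumptions, not the documented schema:

```python
import time

def poll_until_done(fetch, job_id, interval=5.0, max_attempts=60):
    """Call fetch(job_id) until it reports a terminal status.

    `fetch` should return a dict for the job; the status values
    checked below are illustrative placeholders.
    """
    for _ in range(max_attempts):
        result = fetch(job_id)
        if result.get("status") in ("finished", "failed", "cancelled"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in time")
```

In practice `fetch` would wrap a GET to the Results API with your x-api-key header, and a paginated loop would follow once the job is finished.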

Stopping a Job

Cancel an active scraper job early if it’s no longer needed or you want to save credits.
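A hedged sketch of a cancel call. The route below is a guess for illustration only, since this page does not show the stop endpoint; verify the real method and path against the Stopping a Job reference:

```python
import json
import urllib.request

BASE_URL = "https://api.hasdata.com/scrapers/yelp/jobs"

def cancel_url(job_id: str) -> str:
    # Hypothetical route -- confirm against the Stopping a Job docs.
    return f"{BASE_URL}/{job_id}/cancel"

def cancel_job(api_key: str, job_id: str) -> dict:
    """Request early cancellation of an active job (route assumed)."""
    req = urllib.request.Request(
        cancel_url(job_id), headers={"x-api-key": api_key}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```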