

Research competitor catalogs, track price changes over time, and build product comparison feeds. Each result row returns the product id, name, handle, description, vendor, product type, min/max price, variants with full variant data, images, options, bestseller flag, and create/publish/update timestamps. This scraper job is asynchronous: you'll receive a jobId and can fetch results via polling or webhook delivery.

Request Cost

Each row of data returned consumes 1 credit from your balance.
Credits are deducted only for successful rows.

Example Request

curl --request POST \
  --url 'https://api.hasdata.com/scrapers/shopify/jobs' \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <your-api-key>' \
  --data '{"url":"https://b2bdemoexperience.myshopify.com/","currency":"USD"}'
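
The same request can be built in Python with only the standard library. This is a sketch of the curl call above; the send step is commented out so it stays side-effect free, and the assumption that the response contains a jobId comes from the description above.

```python
import json
import urllib.request

API_URL = "https://api.hasdata.com/scrapers/shopify/jobs"

def build_job_request(store_url: str, currency: str, api_key: str) -> urllib.request.Request:
    """Build the POST request that creates a Shopify scraper job."""
    payload = json.dumps({"url": store_url, "currency": currency}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_job_request("https://b2bdemoexperience.myshopify.com/", "USD", "<your-api-key>")
# To actually submit the job, uncomment:
# with urllib.request.urlopen(req) as resp:
#     job = json.load(resp)  # assumed to include the jobId used for results
```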

Job Parameters

url       (string, required)   Shopify store URL
currency  (string, required)   Three-letter currency code (e.g. USD)

Supported Enrichments

Request any of the fields below via the enrichments array in your job payload.
ID            Title                 Description                            Cost per Request
email         Email Address         Vendor company email address           5 credits
website       Website URL           Vendor company website URL             5 credits
phone         Phone Number          Vendor company phone number            5 credits
linkedinUrl   LinkedIn Profile      Vendor company LinkedIn page URL       5 credits
facebookUrl   Facebook Profile      Vendor company Facebook page URL       5 credits
instagramUrl  Instagram Profile     Vendor company Instagram profile URL   5 credits
xUrl          X (Twitter) Profile   Vendor company X profile URL           5 credits
githubUrl     GitHub Profile        Vendor company GitHub profile URL      5 credits
revenue       Revenue               Vendor company revenue                 5 credits
traffic       Website Traffic       Vendor company website traffic         5 credits
funding       Funding Info          Vendor company funding information     5 credits
founded       Founded Year          Vendor company founded year            5 credits
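
A job payload that requests enrichments might look like the sketch below. The field IDs come from the table above; the assumption that enrichments sit alongside url and currency in the same job payload is based on the description of the enrichments array, so verify the exact shape against the API reference.

```python
import json

# Base job fields plus an enrichments array (IDs taken from the table above).
# Each enriched field costs 5 credits per request on top of the per-row credit.
payload = {
    "url": "https://b2bdemoexperience.myshopify.com/",
    "currency": "USD",
    "enrichments": ["email", "website", "linkedinUrl"],
}
body = json.dumps(payload)
```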

Getting Results

Webhooks

Receive real-time updates when your scraper job starts, completes, or collects data.

Results API

Use the Results API to fetch your data using the jobId, with support for polling and pagination.
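
Polling with the jobId can be sketched as below. The status field name and the terminal values are assumptions (check the Results API reference for the actual response shape); the fetch step is injected as a callable so the loop itself contains no endpoint details.

```python
import time
from typing import Callable

def poll_job(job_id: str, fetch: Callable[[str], dict],
             interval: float = 5.0, max_attempts: int = 60) -> dict:
    """Poll until the job reaches a terminal status, then return the response.

    `fetch` takes a jobId and returns the decoded Results API response.
    The "status" field and "finished"/"failed" values are assumed names.
    """
    for _ in range(max_attempts):
        result = fetch(job_id)
        if result.get("status") in ("finished", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in time")
```

In practice, `fetch` would GET the Results API endpoint with your x-api-key header and decode the JSON body; pagination can then be handled on the returned result.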

Stopping a Job

Cancel an active scraper job early if it's no longer needed or to save credits.
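
A cancellation call might be built like this. The DELETE route on the job resource is an assumption, not documented on this page; see the Stopping a Job reference for the actual endpoint and method.

```python
import urllib.request

def build_cancel_request(job_id: str, api_key: str) -> urllib.request.Request:
    """Build a request to stop an active scraper job.

    DELETE /scrapers/shopify/jobs/{jobId} is an assumed route; confirm it
    against the Stopping a Job documentation before relying on it.
    """
    return urllib.request.Request(
        f"https://api.hasdata.com/scrapers/shopify/jobs/{job_id}",
        headers={"x-api-key": api_key},
        method="DELETE",
    )
```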