Documentation Index

Fetch the complete documentation index at: https://docs.hasdata.com/llms.txt
Use this file to discover all available pages before exploring further.

Monitor hiring trends, benchmark pay bands, and power candidate-facing job boards. Returns job title, company, location, posted date, salary min/max and period, benefits, full description (HTML and text), job details, and apply URL. This scraper job is asynchronous: you’ll receive a jobId and can fetch results via polling or webhook delivery.
Request Cost

Each row of data returned consumes 10 credits from your balance.

Example Request
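A minimal sketch of submitting an asynchronous scraper job. The endpoint path, header name, and payload field names (`keywords`, `locations`, `limit`, `sortBy`, `domain`, `enrichments`) are assumptions inferred from the parameters on this page, not verified against the live API; consult the documentation index above for the exact request shape.

```python
# Hypothetical job submission sketch -- endpoint and field names are assumed.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder

payload = {
    "keywords": ["data engineer", "machine learning engineer"],  # one position per entry
    "locations": ["Austin, TX", "Remote"],                       # one location per entry
    "limit": 0,                          # 0 = unlimited results
    "sortBy": "date",                    # assumed sort option
    "domain": "www.indeed.com",          # regional Indeed domain
    "enrichments": ["email", "website"], # optional, billed per enrichment
}

def submit_job() -> str:
    """POST the payload; the response is expected to contain a jobId."""
    req = urllib.request.Request(
        "https://api.hasdata.com/scrapers/indeed/jobs",  # assumed endpoint path
        data=json.dumps(payload).encode(),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["jobId"]
```

Keep the returned jobId: both polling and webhook delivery (described under Getting Results) identify the job by it.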
Job Parameters

- **Results Limit** — Maximum number of rows to collect (0 = unlimited).
- **Job Positions** — Enter the keywords of the job positions to search for, one keyword per line.
- **Locations** — Enter the locations where you want to search for and extract job listings, one location per line.
- **Sort By** — Order in which results are returned.
- **Indeed Domain** — Regional Indeed domain to scrape.
Supported Enrichments
Request any of the fields below via the `enrichments` array in your job payload.
| ID | Title | Description | Cost per Request |
|---|---|---|---|
| email | Email Address | Hiring company email address | 5 credits |
| website | Website URL | Hiring company website URL | 5 credits |
| phone | Phone Number | Hiring company phone number | 5 credits |
| linkedinUrl | LinkedIn Profile | Hiring company LinkedIn page URL | 5 credits |
| facebookUrl | Facebook Profile | Hiring company Facebook page URL | 5 credits |
| instagramUrl | Instagram Profile | Hiring company Instagram profile URL | 5 credits |
| xUrl | X (Twitter) Profile | Hiring company X profile URL | 5 credits |
| githubUrl | GitHub Profile | Hiring company GitHub profile URL | 5 credits |
| revenue | Revenue | Hiring company revenue | 5 credits |
| traffic | Website Traffic | Hiring company website traffic | 5 credits |
| funding | Funding Info | Hiring company funding information | 5 credits |
| founded | Founded Year | Hiring company founded year | 5 credits |
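Enrichments add to the 10-credit-per-row base cost. A small estimator sketch, assuming each enrichment's 5-credit "Cost per Request" applies once per enriched row (the table does not state this explicitly, so treat the per-row assumption as unverified):

```python
# Credit-cost estimator based on the pricing stated on this page.
BASE_COST_PER_ROW = 10   # credits per returned row
ENRICHMENT_COST = 5      # credits per enrichment (assumed: per enriched row)

def estimate_cost(rows: int, enrichments: list[str]) -> int:
    """Estimated total credits for a job returning `rows` rows."""
    return rows * (BASE_COST_PER_ROW + ENRICHMENT_COST * len(enrichments))

# Example: 100 rows with email + website enrichment
# 100 * (10 + 5 * 2) = 2000 credits
```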
Getting Results
Webhooks
Receive real-time updates when your scraper job starts, completes, or collects data.
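A minimal webhook receiver sketch using only the standard library. The event body's field names (`jobId`, `status`) and status values are assumptions about the delivery payload, not confirmed by this page:

```python
# Hypothetical webhook receiver -- payload field names are assumed.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event: dict) -> str:
    """Route an incoming job-status event by its (assumed) status field."""
    status = event.get("status", "unknown")
    if status == "finished":
        return f"job {event.get('jobId')} complete: fetch results"
    return f"job {event.get('jobId')} status: {status}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(handle_event(event))
        self.send_response(200)  # acknowledge delivery
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Respond with 200 promptly; do any heavy processing of the delivered data outside the request handler.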
Results API
Use the Results API to fetch your data using the jobId, with support for polling and pagination.

Stopping a Job
Cancel an active scraper job early if it’s no longer needed or you want to save credits.
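A polling sketch against the Results API. The endpoint path, header name, response fields (`status`, `data`), and status values are assumptions for illustration, not the confirmed API contract:

```python
# Hypothetical polling loop -- endpoint path and response fields are assumed.
import json
import time
import urllib.request

API_BASE = "https://api.hasdata.com"  # assumed base URL
API_KEY = "YOUR_API_KEY"              # placeholder

def is_terminal(status: str) -> bool:
    """Statuses after which polling should stop (assumed status values)."""
    return status in {"finished", "error", "cancelled"}

def fetch_results(job_id: str, interval: float = 5.0, timeout: float = 600.0):
    """Poll the job until it reaches a terminal status, then return its rows."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{API_BASE}/scrapers/jobs/{job_id}/results",  # assumed path
            headers={"x-api-key": API_KEY},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if is_terminal(body.get("status", "")):
            return body.get("data", [])  # may be empty on error/cancel
        time.sleep(interval)  # job still running; wait before the next poll
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

For paginated result sets, repeat the fetch with the API's page parameters until no rows remain; webhooks avoid polling entirely if you can expose an endpoint.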