Overview
Submit a new job to scrape a single URL. This endpoint queues your job for processing by available workers.

Request Body
- The URL to scrape (must be a valid HTTP/HTTPS URL). Example: https://example.com
- The type of scraping job to perform. Options:
  - spiderSite - Website scraping using Crawl4AI
  - spiderMaps - Google Maps business scraping
- Optional AI instructions for content extraction (spiderSite only). Example: "Extract all product names and prices"

Response
- Whether the job was successfully queued
- Unique identifier for the submitted job (UUID format)
- Type of job submitted (spiderSite or spiderMaps)
- Initial job status (always queued)
- Human-readable confirmation message
Example Request
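A minimal sketch of what a request body might look like, based on the fields described above. The key names (`url`, `jobType`, `aiInstructions`) are assumptions, since the exact schema is not shown on this page.

```python
import json

# Hypothetical request payload -- key names are assumptions,
# not the API's confirmed schema.
payload = {
    "url": "https://example.com",       # must be a valid HTTP/HTTPS URL
    "jobType": "spiderSite",            # or "spiderMaps"
    "aiInstructions": "Extract all product names and prices",  # spiderSite only
}

# Serialize to JSON for the POST body.
body = json.dumps(payload)
print(body)
```

In practice this body would be sent as JSON in a POST request to the job-submission endpoint.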
Example Response
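A sketch of a possible response, inferred from the response fields listed above. The key names and the sample UUID are illustrative assumptions.

```python
import json

# Hypothetical response body -- key names inferred from the
# field descriptions above, not a confirmed schema.
raw = """
{
  "success": true,
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "job_type": "spiderSite",
  "status": "queued",
  "message": "Job queued for processing"
}
"""

resp = json.loads(raw)
# Initial status is always "queued"; the job_id is used for polling.
print(resp["job_id"], resp["status"])
```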
Next Steps
After submitting a job:
- Poll for status using GET /api/v1/jobs/{job_id}/status
- Retrieve results when status is completed using GET /api/v1/jobs/{job_id}/results

Jobs are processed asynchronously. Use the /status endpoint to monitor progress.

Job Types Explained
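The polling flow above can be sketched as follows. `get_status` is a stand-in for an HTTP GET against the `/status` endpoint; the `{"status": ...}` response shape is an assumption.

```python
import time

def wait_for_completion(job_id, get_status, interval=2.0, max_polls=30):
    """Poll a job's status until it completes.

    get_status: callable taking a job_id and returning a dict like
    {"status": "queued" | "running" | "completed" | "failed"} --
    in practice, a GET to /api/v1/jobs/{job_id}/status.
    """
    for _ in range(max_polls):
        status = get_status(job_id)["status"]
        if status == "completed":
            return True   # results can now be fetched from /results
        if status == "failed":
            return False
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish after {max_polls} polls")
```

Injecting `get_status` as a callable keeps the polling logic testable without a live server.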
spiderSite
- Scrapes website content using the Crawl4AI library
- Supports AI-powered content extraction with custom instructions
- Best for: Blogs, articles, product pages, documentation
spiderMaps
- Scrapes Google Maps business data
- Extracts: Name, address, phone, website, hours, reviews, etc.
- Best for: Local business research, competitor analysis
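To illustrate the fields spiderMaps extracts, here is a hypothetical result record; the key names and values are illustrative assumptions, not the API's confirmed output schema.

```python
# Hypothetical spiderMaps result record -- keys mirror the extracted
# fields listed above (name, address, phone, website, hours, reviews).
business = {
    "name": "Example Coffee Co.",
    "address": "123 Main St, Springfield",
    "phone": "+1 555 0100",
    "website": "https://example.com",
    "hours": {"mon-fri": "07:00-18:00", "sat-sun": "08:00-16:00"},
    "reviews": [{"rating": 5, "text": "Great espresso"}],
}

# Example of working with a record: average review rating.
avg_rating = sum(r["rating"] for r in business["reviews"]) / len(business["reviews"])
print(business["name"], avg_rating)
```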
