POST /api/v1/jobs/submit
Submit Job
curl --request POST \
  --url https://spideriq.di-atomic.com/api/v1/jobs/submit \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "url": "<string>",
  "job_type": "<string>",
  "instructions": "<string>"
}'
{
  "success": true,
  "job_id": "<string>",
  "type": "<string>",
  "status": "<string>",
  "message": "<string>"
}

Overview

Submit a new job to scrape a single URL. This endpoint queues your job for processing by available workers.

Request Body

url
string
required
The URL to scrape (must be a valid HTTP/HTTPS URL)
Example: https://example.com
job_type
string
required
Type of scraping job to perform
Options:
  • spiderSite - Website scraping using Crawl4AI
  • spiderMaps - Google Maps business scraping
instructions
string
Optional AI instructions for content extraction (spiderSite only)
Example: "Extract all product names and prices"
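
The request body can be assembled and validated client-side before submission. A minimal sketch, mirroring the constraints above; `build_job_payload` is an illustrative helper, not part of any official SDK:

```python
from urllib.parse import urlparse

# Valid job_type values, per the Options list above.
VALID_JOB_TYPES = {"spiderSite", "spiderMaps"}

def build_job_payload(url, job_type, instructions=None):
    """Assemble a request body for /api/v1/jobs/submit, enforcing a
    valid HTTP/HTTPS url and a known job_type."""
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(f"url must be HTTP/HTTPS, got scheme {scheme!r}")
    if job_type not in VALID_JOB_TYPES:
        raise ValueError(f"job_type must be one of {sorted(VALID_JOB_TYPES)}")
    payload = {"url": url, "job_type": job_type}
    if instructions is not None:  # only meaningful for spiderSite jobs
        payload["instructions"] = instructions
    return payload
```

Omitting `instructions` for spiderMaps jobs keeps the payload to the two required fields.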

Response

success
boolean
Whether the job was successfully queued
job_id
string
Unique identifier for the submitted job (UUID format)
type
string
Type of job submitted (spiderSite or spiderMaps)
status
string
Initial job status (always queued)
message
string
Human-readable confirmation message

Example Request

curl -X POST https://spideriq.di-atomic.com/api/v1/jobs/submit \
  -H "Authorization: Bearer <your_token>" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "job_type": "spiderSite",
    "instructions": "Extract all contact information"
  }'

Example Response

{
  "success": true,
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "type": "spiderSite",
  "status": "queued",
  "message": "Job queued successfully"
}

Next Steps

After submitting a job:
  1. Poll for status using GET /api/v1/jobs/{job_id}/status
  2. Retrieve results when status is completed using GET /api/v1/jobs/{job_id}/results
Jobs are processed asynchronously. Use the /status endpoint to monitor progress.
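
The submit-then-poll flow can be sketched as follows. This is a minimal stdlib-only example; `poll_job`, `BASE_URL`, and the exact placement of the job ID in the status path are assumptions based on the steps above, not a documented SDK:

```python
import json
import time
import urllib.request

BASE_URL = "https://spideriq.di-atomic.com/api/v1"

def _get_json(url, token):
    # Minimal GET helper using the same Bearer auth as the submit call.
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def poll_job(job_id, token, interval=5.0, timeout=300.0, fetch=_get_json):
    """Poll the job status endpoint until it reports a terminal state
    ('completed' or 'failed'), then return the final status payload."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(f"{BASE_URL}/jobs/{job_id}/status", token)
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Passing `fetch` as a parameter keeps the polling logic testable without network access; in production the default HTTP helper is used.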

Job Types Explained

spiderSite

  • Scrapes website content using Crawl4AI library
  • Supports AI-powered content extraction with custom instructions
  • Best for: Blogs, articles, product pages, documentation

spiderMaps

  • Scrapes Google Maps business data
  • Extracts: Name, address, phone, website, hours, reviews, etc.
  • Best for: Local business research, competitor analysis
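
A spiderMaps submission uses the same endpoint; the body differs only in `job_type`, and `instructions` is omitted since AI extraction instructions apply to spiderSite only. An illustrative request body (the URL shown is a placeholder, not a real listing):

```json
{
  "url": "https://www.google.com/maps/place/<business>",
  "job_type": "spiderMaps"
}
```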