Core Concepts
Meter is built around a few key abstractions that make web scraping and monitoring simple and cost-effective. Understanding these concepts will help you get the most out of the platform.

The big picture
1. Strategy: A reusable extraction plan generated by AI that defines how to scrape a website. Created once, used many times.
2. Job: A single execution of a scrape using a strategy. Jobs run asynchronously and return extracted data.
3. Schedule: Automated recurring jobs that run at specified intervals or cron times. Perfect for monitoring websites.
4. Change Detection: Intelligent diffing that compares jobs to detect meaningful content changes, filtering out noise.
Key concepts
- Strategies: Learn how AI-generated extraction strategies work and when to use them
- Jobs: Understand job execution, status checking, and result retrieval
- Schedules: Set up automated monitoring with intervals or cron expressions
- Change Detection: Discover how Meter detects meaningful content changes
How it all fits together
Example workflow
- Generate a strategy for extracting product data from an e-commerce site
- Create a schedule to scrape the site every hour
- Jobs run automatically, extracting current product data
- Changes are detected by comparing content hashes and structural signatures
- You get notified via webhook or pull from the changes API
- Update your database only with changed content
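Sketched in Python below (the base URL, endpoint paths, and field names are illustrative assumptions, not Meter's documented API):

```python
import requests

API = "https://api.meter.example/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Generate a reusable strategy for the product pages (the one-time cost)
strategy = requests.post(f"{API}/strategies", headers=HEADERS, json={
    "url": "https://shop.example.com/products/123",
    "prompt": "Extract product name, price, and availability",
}).json()

# 2. Schedule hourly scrapes that reuse the strategy (job runs are free)
schedule = requests.post(f"{API}/schedules", headers=HEADERS, json={
    "strategy_id": strategy["id"],
    "interval": "1h",
}).json()

# 5-6. Later: pull detected changes and update only what changed
changes = requests.get(f"{API}/changes", headers=HEADERS,
                       params={"schedule_id": schedule["id"]}).json()
for change in changes.get("items", []):
    print(change["url"], change["summary"])   # e.g. upsert into your database
```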
Cost model
Understanding Meter’s cost structure helps you optimize usage:

| Action | Cost | Frequency |
|---|---|---|
| Strategy generation | ~$0.02-0.06 | Once per site/pattern |
| Job execution | Free* | Unlimited |
| Change detection | Free | Automatic |
| API calls | Free* | Unlimited |
*During beta, all features are free. Production pricing will be announced before beta ends.
Why strategy-based is cheaper
Traditional LLM scraping costs scale with usage:

- Traditional: Pay per scrape ($0.02-0.10 each)
- Meter: Pay once for the strategy ($0.02-0.06), then scrape unlimited times for free

For 100 scrapes of the same site, that works out to:

- Traditional LLM scraping: $2-10
- Meter: $0.02-0.06 (97-99% savings)
Data model
Understanding the data model helps you work with the API:
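As an illustrative sketch of those relationships in Python (every field name here is an assumption, not Meter's documented schema):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: field names are assumptions, not Meter's schema.

@dataclass
class Strategy:
    id: str
    url_pattern: str                 # the page structure this plan can scrape
    created_at: str                  # generated once, reused by many jobs

@dataclass
class Job:
    id: str
    strategy_id: str                 # each job executes exactly one strategy
    status: str                      # e.g. "queued" | "running" | "completed" | "failed"
    result: Optional[dict] = None    # extracted data once the job completes

@dataclass
class Schedule:
    id: str
    strategy_id: str                 # the strategy every recurring job reuses
    interval: Optional[str] = None   # e.g. "1h" ...
    cron: Optional[str] = None       # ... or a cron expression instead

@dataclass
class Change:
    id: str
    schedule_id: str
    previous_job_id: str             # change detection compares two job results
    current_job_id: str
    summary: str                     # what meaningfully changed, noise filtered out
```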
Best practices

Reuse strategies across similar pages
If multiple pages have the same structure (e.g., product pages, blog posts), you can reuse the same strategy with different URLs:
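A minimal sketch of that pattern, assuming a hypothetical jobs endpoint and field names:

```python
import requests

API = "https://api.meter.example/v1"   # hypothetical base URL and endpoints
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

STRATEGY_ID = "strat_abc123"           # generated once from one product page

# The same extraction plan works for every page with the same structure,
# so only the URL changes between jobs.
product_urls = [
    "https://shop.example.com/products/123",
    "https://shop.example.com/products/456",
    "https://shop.example.com/products/789",
]

for url in product_urls:
    job = requests.post(f"{API}/jobs", headers=HEADERS, json={
        "strategy_id": STRATEGY_ID,
        "url": url,
    }).json()
    print(job["id"], url)
```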
Use pull-based API for batch processing
Instead of receiving a webhook for every change, poll the changes API periodically. This reduces webhook traffic and allows batching updates:
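A minimal polling loop, again with hypothetical endpoint, parameter, and field names:

```python
import time
import requests

API = "https://api.meter.example/v1"   # hypothetical base URL and endpoints
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def apply_to_database(change: dict) -> None:
    """Placeholder for your own batch-update logic."""
    print("updating record for", change["url"])

cursor = None
while True:
    # One request pulls everything since the last poll, instead of one
    # webhook per change.
    params = {"cursor": cursor} if cursor else {}
    page = requests.get(f"{API}/changes", headers=HEADERS, params=params).json()

    for change in page.get("items", []):
        apply_to_database(change)

    cursor = page.get("next_cursor", cursor)
    time.sleep(15 * 60)   # poll every 15 minutes, batching whatever accrued
```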
Set appropriate monitoring intervals

Match the interval to how often the content actually changes (see the sketch after this list):

- Faster intervals (15-30 min): stock prices, sports scores, breaking news, and other high-priority monitoring
- Moderate intervals: e-commerce products, job listings, and most monitoring use cases
- Slower intervals: documentation, blog posts, policies, and other low-frequency content
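A sketch of both ends of that spectrum, assuming the same hypothetical schedules endpoint as above:

```python
import requests

API = "https://api.meter.example/v1"   # hypothetical base URL and endpoints
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# High-priority monitoring: a short fixed interval.
requests.post(f"{API}/schedules", headers=HEADERS, json={
    "strategy_id": "strat_prices",     # illustrative ID
    "interval": "15m",
})

# Low-frequency content: a cron expression, e.g. daily at 06:00 UTC.
requests.post(f"{API}/schedules", headers=HEADERS, json={
    "strategy_id": "strat_docs",       # illustrative ID
    "cron": "0 6 * * *",
})
```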
Handle job failures gracefully
Jobs can fail if sites are down or block requests:
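One way to handle this is retries with exponential backoff; the endpoint, statuses, and field names below are assumptions:

```python
import time
import requests

API = "https://api.meter.example/v1"   # hypothetical base URL and endpoints
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def run_job_with_retry(strategy_id: str, url: str, attempts: int = 3) -> dict:
    """Retry failed jobs with exponential backoff before giving up."""
    for attempt in range(attempts):
        job = requests.post(f"{API}/jobs", headers=HEADERS, json={
            "strategy_id": strategy_id,
            "url": url,
        }).json()

        # Jobs run asynchronously, so poll until the status settles.
        while job["status"] in ("queued", "running"):
            time.sleep(5)
            job = requests.get(f"{API}/jobs/{job['id']}", headers=HEADERS).json()

        if job["status"] == "completed":
            return job["result"]

        # Site may be down or blocking: back off (10s, 20s, 40s) and retry.
        time.sleep(10 * 2 ** attempt)

    raise RuntimeError(f"Job for {url} failed after {attempts} attempts")
```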
Next steps
- Strategies Deep Dive: Learn how AI generates extraction strategies
- Jobs Deep Dive: Master job execution and result handling
- Schedules Deep Dive: Set up automated monitoring
- Change Detection: Understand how diffing works