
Quick Start

This guide will get you up and running with Meter in under 5 minutes. You’ll generate an extraction strategy, run your first scrape, and set up monitoring.
Prerequisites: a working Python installation with pip, and a Meter API key from the Meter dashboard.

1. Install the SDK

Install the Meter Python SDK:
pip install meter-sdk

2. Set up authentication

Store your API key securely as an environment variable:
export METER_API_KEY="sk_live_your_api_key_here"
Get your API key from the Meter dashboard. Never commit API keys to version control.
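Because os.getenv returns None silently when a variable is unset, it's worth failing fast before constructing a client. A small sketch — require_api_key is our own helper, not part of the SDK:

```python
import os

def require_api_key() -> str:
    """Return the Meter API key from the environment, failing fast if unset."""
    key = os.getenv("METER_API_KEY")
    if not key:
        raise RuntimeError(
            'METER_API_KEY is not set. Run: export METER_API_KEY="sk_live_..."'
        )
    return key
```

Passing the result straight to MeterClient(api_key=require_api_key()) turns a missing key into a clear error at startup instead of a 401 later.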

3. Generate your first strategy

Create a Python file and generate an extraction strategy:
from meter_sdk import MeterClient
import os

# Initialize the client
client = MeterClient(api_key=os.getenv("METER_API_KEY"))

# Generate a strategy for Hacker News
result = client.generate_strategy(
    url="https://news.ycombinator.com",
    description="Extract post titles and scores",
    name="HN Front Page"
)

# Print the results
strategy_id = result["strategy_id"]
print(f"Strategy created: {strategy_id}")
print(f"\nPreview data ({len(result['preview_data'])} items):")
for item in result['preview_data'][:3]:
    print(f"  - {item}")
What’s happening here:
  1. Meter’s AI analyzes the page structure
  2. Generates CSS selectors for extracting titles and scores
  3. Returns a preview of extracted data
  4. Saves the strategy for reuse (no LLM costs on future scrapes)

4. Run a scrape job

Use your saved strategy to run a scrape:
# Create a job using the strategy
job = client.create_job(
    strategy_id=strategy_id,
    url="https://news.ycombinator.com"
)

# Wait for completion
completed_job = client.wait_for_job(job["job_id"])

# Print results
print(f"\nScraped {len(completed_job['results'])} items")
for item in completed_job['results'][:5]:
    print(f"  - {item}")
Jobs run asynchronously. The wait_for_job() method polls automatically until completion.
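wait_for_job() is a convenience over a plain polling loop; if you need custom backoff, logging, or cancellation, you can poll yourself. A minimal sketch — poll_until_done is our helper, and the "completed"/"failed" terminal status values are an assumption about the job payload:

```python
import time

def poll_until_done(fetch_job, poll_interval=2.0, timeout=120.0):
    """Call fetch_job() repeatedly until the job reaches a terminal status.

    fetch_job: zero-arg callable returning a job dict with a "status" key;
    "completed" and "failed" are assumed to be the terminal values.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_interval)
    raise TimeoutError("job did not reach a terminal status in time")

# Against the SDK this would look like:
#   done = poll_until_done(lambda: client.get_job(job["job_id"]))
```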

5. Set up monitoring

Schedule automatic scrapes to monitor for changes:
# Run every hour
schedule = client.create_schedule(
    strategy_id=strategy_id,
    url="https://news.ycombinator.com",
    interval_seconds=3600
)

print(f"Schedule created: {schedule['id']}")
print(f"Next run: {schedule['next_run_at']}")
Now Meter will automatically scrape Hacker News every hour. You can check for changes using the pull-based API or set up webhooks.
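If you'd rather be pushed than poll, a webhook receiver is just an HTTP endpoint that accepts Meter's POST requests. A stdlib-only sketch — the JSON payload shape is an assumption; check the webhook documentation for the real contract:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept JSON POSTs and acknowledge them with 204 No Content."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # The payload shape is an assumption; inspect what Meter actually sends.
        print("change event:", event)
        self.send_response(204)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

Returning 204 quickly and doing any heavy processing elsewhere keeps the sender from retrying due to timeouts.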

6. Check for changes

Use the pull-based API to get changes:
# Get changes for a schedule
changes = client.get_schedule_changes(
    schedule_id=schedule['id'],
    mark_seen=True  # Mark as seen after reading
)

if changes['count'] > 0:
    print(f"\n{changes['count']} jobs with changes:")
    for change in changes['changes']:
        print(f"  - Job {change['job_id']}: {change['item_count']} items")
else:
    print("\nNo changes detected")
Set mark_seen=False to preview changes without marking them as read.
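To act on changes continuously, wrap the pull call in a loop. A sketch — watch_changes is our helper; only the {"count": ..., "changes": [...]} response shape comes from the example above:

```python
import time

def watch_changes(get_changes, handle_change, poll_interval=60.0, max_polls=None):
    """Pull change batches and pass each change entry to a callback.

    get_changes: zero-arg callable returning a dict shaped like the
    get_schedule_changes() response: {"count": int, "changes": [...]}.
    max_polls: stop after this many pulls (None = run forever).
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        batch = get_changes()
        for change in batch.get("changes", []):
            handle_change(change)
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_interval)

# With the SDK:
#   watch_changes(
#       lambda: client.get_schedule_changes(schedule_id=schedule["id"], mark_seen=True),
#       lambda change: print("changed:", change["job_id"]),
#   )
```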

Complete example

Here’s the complete code:
from meter_sdk import MeterClient
import os

# Initialize client
client = MeterClient(api_key=os.getenv("METER_API_KEY"))

# Step 1: Generate strategy
result = client.generate_strategy(
    url="https://news.ycombinator.com",
    description="Extract post titles and scores",
    name="HN Front Page"
)
strategy_id = result["strategy_id"]
print(f"✓ Strategy created: {strategy_id}")

# Step 2: Run initial scrape
job = client.create_job(strategy_id=strategy_id, url="https://news.ycombinator.com")
initial = client.wait_for_job(job["job_id"])
print(f"✓ Initial scrape: {len(initial['results'])} items")

# Step 3: Set up monitoring
schedule = client.create_schedule(
    strategy_id=strategy_id,
    url="https://news.ycombinator.com",
    interval_seconds=3600
)
print("✓ Monitoring enabled: every hour")

# Step 4: Check for changes
changes = client.get_schedule_changes(schedule_id=schedule['id'])
print(f"✓ Changes detected: {changes['count']} jobs")


Troubleshooting

Problem: Strategy generation fails
Possible causes:
  • URL is not accessible
  • Description is too vague or too complex
  • Page requires authentication
Solutions:
  • Verify the URL loads in your browser
  • Make your description more specific: “Extract product names and prices from the grid” instead of “Get products”
  • For auth-required pages, contact support
Problem: Jobs time out or return incorrect results
Possible causes:
  • Target website is slow or down
  • Strategy is incorrect
Solutions:
  • Increase the timeout: client.wait_for_job(job_id, timeout=300)
  • Check job status manually: client.get_job(job_id)
  • Refine the strategy if results are incorrect
Problem: ModuleNotFoundError: No module named 'meter_sdk'
Solution: Install the SDK: pip install meter-sdk

Problem: 401 Unauthorized
Solutions:
  • Verify your API key is correct
  • Check that METER_API_KEY environment variable is set
  • Ensure your API key hasn’t expired

Need help?

Email me at [email protected]