How to Automate Anything with Claude API and Python (2026 Guide)

AlphaFlux AI April 2, 2026 12 min read

Most Claude API tutorials show you how to send a message and get a response. That's the "hello world" of AI automation — and it's about as useful as knowing how to turn on a computer.

This guide covers 6 production patterns we actually run in systems processing 2,200+ automated actions daily. Every example includes working Python code you can deploy today.

What You Need

pip install anthropic python-dotenv python-telegram-bot playwright pandas
playwright install chromium  # one-time browser download for Pattern 2

Pattern 1: AI-Powered Telegram Bot

The most requested automation. A bot that actually holds conversations — with per-user memory, rate limiting, and graceful error handling.

Why this matters

Every business needs a way for customers to ask questions 24/7. A Claude-powered Telegram bot costs ~$0.01 per conversation and handles unlimited users simultaneously.

import asyncio
import os

from anthropic import Anthropic
from dotenv import load_dotenv
from telegram.ext import Application, MessageHandler, filters

load_dotenv()  # reads ANTHROPIC_API_KEY and TELEGRAM_BOT_TOKEN from .env
client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
user_histories = {}  # Per-user conversation memory

async def handle_message(update, context):
    user_id = update.effective_user.id
    text = update.message.text

    # Maintain conversation history per user
    user_histories.setdefault(user_id, []).append({"role": "user", "content": text})

    # Keep last 20 messages to control token usage
    history = user_histories[user_id][-20:]

    # The Anthropic client is synchronous -- run it in a worker thread
    # so it doesn't block the bot's event loop
    response = await asyncio.to_thread(
        client.messages.create,
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a helpful customer support agent.",
        messages=history,
    )

    reply = response.content[0].text
    user_histories[user_id].append({"role": "assistant", "content": reply})
    await update.message.reply_text(reply)

app = Application.builder().token(os.getenv("TELEGRAM_BOT_TOKEN")).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
app.run_polling()
💡 Production tip
Add rate limiting (5 messages/minute/user) and a /clear command to reset conversation history. Token costs add up fast without these guardrails.
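That rate limit is a few lines of standard-library Python. A minimal sketch (function and variable names are ours, not part of the bot code above): track each user's recent message timestamps and refuse the sixth message inside a rolling 60-second window.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_MESSAGES = 5

_recent = defaultdict(deque)  # user_id -> timestamps of recent messages

def allow_message(user_id, now=None):
    """Return True if the user is under the 5-messages-per-minute limit."""
    now = time.monotonic() if now is None else now
    q = _recent[user_id]
    # Drop timestamps that have aged out of the rolling window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_MESSAGES:
        return False
    q.append(now)
    return True
```

Call `allow_message(user_id)` at the top of the handler and reply with a "slow down" message when it returns False — the expensive API call never happens.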

Pattern 2: Intelligent Web Scraping

Traditional scraping breaks when websites change layouts. Claude + Playwright is different — you describe what data you want, and Claude extracts it from any page structure.

from playwright.sync_api import sync_playwright
from anthropic import Anthropic

client = Anthropic()

def scrape_with_ai(url, extraction_prompt):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"Extract the following from this HTML: {extraction_prompt}\n\n"
                       f"Return as JSON.\n\nHTML:\n{html[:50000]}"
        }]
    )
    return response.content[0].text

# Example: Extract product data from any e-commerce page
result = scrape_with_ai(
    "https://example.com/products",
    "product name, price, rating, number of reviews, key features"
)

Why this beats BeautifulSoup: No CSS selectors to maintain. No XPath to debug. The extraction prompt stays the same even when the website redesigns completely.
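One practical wrinkle: models sometimes wrap the JSON in markdown fences or add a sentence of preamble. A small helper like this (an illustrative addition, not part of the scraper above) pulls the first JSON object or array out of the raw response before parsing:

```python
import json
import re

def parse_json_response(text):
    """Extract and parse the first JSON object or array in a model response."""
    match = re.search(r"\{.*\}|\[.*\]", text, re.DOTALL)
    if not match:
        raise ValueError("No JSON found in response")
    return json.loads(match.group(0))
```

Run `scrape_with_ai`'s return value through this before handing it to the rest of your pipeline, and a stray "Here's the data:" preamble won't crash anything.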

Pattern 3: Content Generation Pipeline

One topic list in, full SEO-optimized articles out. With meta descriptions, proper heading structure, and batch processing.

import json
from anthropic import Anthropic

client = Anthropic()

def generate_article(topic, style="professional"):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        system=f"""You are an expert content writer. Write in a {style} style.
Every article must include:
- An engaging H1 title (60 chars max)
- A meta description (155 chars max)
- 3-5 H2 sections with actionable content
- Specific examples and data points
- A conclusion with clear next steps""",
        messages=[{"role": "user", "content": f"Write a comprehensive article about: {topic}"}]
    )
    return response.content[0].text

# Batch process a topic list
topics = [
    "How to use AI for lead qualification",
    "Automating customer onboarding with AI",
    "AI-powered email triage for small businesses"
]

for i, topic in enumerate(topics):
    article = generate_article(topic)
    with open(f"article_{i+1}.md", "w") as f:
        f.write(article)
    print(f"✅ Generated: {topic}")
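Numbered filenames work, but slugged filenames are easier to find once you have a few hundred articles. A small optional helper (ours, not part of the pipeline above) turns each topic into a safe filename:

```python
import re

def slugify(topic, max_length=60):
    """Turn an article topic into a lowercase, hyphenated filename slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return slug[:max_length].rstrip("-")

# slugify("How to use AI for lead qualification")
# -> "how-to-use-ai-for-lead-qualification"
```

Then write to `f"{slugify(topic)}.md"` instead of `article_{i+1}.md`.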

Pattern 4: Email Classification + Auto-Reply

Incoming emails get classified (support, sales, billing, spam) and Claude drafts context-aware replies. A human-in-the-loop approval mode lets you review before sending.

from anthropic import Anthropic

client = Anthropic()

def classify_and_reply(email_subject, email_body, sender):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="""Classify this email and draft a reply.
Return JSON with:
- category: support|sales|billing|partnership|spam
- priority: critical|high|medium|low|none
- sentiment: positive|neutral|frustrated|angry
- suggested_reply: (draft reply text, or null for spam)
- action: reply_within_2h|reply_within_24h|forward_to_finance|escalate|archive""",
        messages=[{"role": "user", "content": f"From: {sender}\nSubject: {email_subject}\n\n{email_body}"}]
    )
    return response.content[0].text

# Process an incoming email
result = classify_and_reply(
    "Integration question about your API",
    "Hi, we're evaluating your platform for our 50-person team. Can you walk us through the API integration options?",
    "sarah@techstartup.io"
)
📊 Real-world results
In our production system, this pattern takes about 45 seconds per email and hits 94% classification accuracy. The suggested replies need minor edits roughly 20% of the time.
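The human-in-the-loop step can be sketched as a small router: parse the classification JSON and decide whether the draft goes to an approval queue, straight to a human, or to the archive. The destination names here are illustrative assumptions, not part of the code above:

```python
import json

def route_email(classification_json):
    """Return (destination, draft_reply) for a classified email."""
    data = json.loads(classification_json)
    if data["category"] == "spam":
        return ("archive", None)
    if data["action"] == "escalate" or data["sentiment"] == "angry":
        # Skip the approval queue -- a person should see this immediately
        return ("human_now", data.get("suggested_reply"))
    # Everything else waits in the approval queue with Claude's draft attached
    return ("approval_queue", data.get("suggested_reply"))
```

Nothing gets sent automatically; a human approves every draft, which is what makes this pattern safe to run against a real inbox.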

Pattern 5: Bulk Data Enrichment

Take a CSV of company names, and Claude adds: founding year, HQ location, employee count, funding total, tech stack, competitors, and a lead score. One caveat: without a web-search tool, Claude answers from its training knowledge, so spot-check volatile fields like funding totals and headcount before acting on them.

import json

import pandas as pd
from anthropic import Anthropic

client = Anthropic()

def enrich_row(company_name, website):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"""Research this company and return only a JSON object:
Company: {company_name} ({website})
Fields needed: founded, hq_location, employee_count, funding_total,
latest_round, key_product, ideal_customer, tech_stack,
competitor_1, competitor_2, lead_score (1-100)"""
        }]
    )
    return response.content[0].text

# Enrich a CSV
df = pd.read_csv("companies.csv")
for i, row in df.iterrows():
    enrichment = enrich_row(row["company_name"], row["website"])
    # Parse the JSON and merge it into the dataframe
    # (assumes a bare JSON response; wrap in try/except for production)
    for key, value in json.loads(enrichment).items():
        df.loc[i, key] = value
    print(f"✅ Enriched: {row['company_name']}")

df.to_csv("companies_enriched.csv", index=False)

Cost: ~$0.003 per row with Claude Sonnet. A 1,000-row CSV costs about $3 to fully enrich — compared to $50-200/month for enrichment SaaS tools.

Pattern 6: Multi-Agent Orchestration

This is the most powerful pattern. A coordinator agent breaks down complex tasks and delegates to specialized agents — each with its own system prompt and expertise.

from anthropic import Anthropic

client = Anthropic()

AGENTS = {
    "research": "You are a research specialist. Gather facts, cite sources, flag uncertainties.",
    "analysis": "You are an analyst. Score items 1-10, identify patterns, make recommendations.",
    "writer":   "You are a professional writer. Clear, direct, no fluff. 800-1500 words.",
    "qa":       "You are QA. Check accuracy, find contradictions, score quality 1-10."
}

def call_agent(agent_name, task, context=""):
    prompt = f"CONTEXT:\n{context}\n\nTASK:\n{task}" if context else task
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        system=AGENTS[agent_name],
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

# Run a full pipeline
task = "Research the top 5 CRM tools for small business"

research = call_agent("research", task)
analysis = call_agent("analysis", "Analyze and score these CRMs", research)
report   = call_agent("writer", "Write a recommendation report", analysis)
qa       = call_agent("qa", "Review this report for accuracy", report)

print(qa)  # Final QA'd output

Four Claude calls. Each builds on the previous one's output. The research agent gathers data, the analysis agent scores it, the writer produces a polished report, and the QA agent catches errors. Total cost: ~$0.08 per pipeline run.
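If you run many different pipelines, the four explicit calls can be generalized into a data-driven loop that threads each stage's output into the next stage's context. A sketch, reusing the call_agent function above:

```python
PIPELINE = [
    ("research", "Research the top 5 CRM tools for small business"),
    ("analysis", "Analyze and score these CRMs"),
    ("writer",   "Write a recommendation report"),
    ("qa",       "Review this report for accuracy"),
]

def run_pipeline(stages, call_agent):
    """Run each (agent, task) stage, feeding its output to the next as context."""
    context = ""
    for agent_name, stage_task in stages:
        context = call_agent(agent_name, stage_task, context)
    return context
```

Passing `call_agent` in as a parameter keeps the orchestration logic testable with a stub function — no API calls needed until you wire in the real one.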

What to Build Next

These 6 patterns cover most real-world AI automation needs. Pick the one closest to your use case, run it, modify the system prompts for your domain, and ship it.

The key insight: Claude is not a chatbot — it's a reasoning engine. Every pattern above uses Claude to make decisions, not just generate text. Classification, extraction, scoring, delegation — these are the automations businesses actually pay for.

Want the full production-ready code?

Get all 6 scripts with error handling, retry logic, rate limiting, example outputs, and a multi-agent orchestrator you can customize for any workflow.

Get the Claude API Toolkit — $29 →

FAQ

How much does the Claude API cost?

Claude Sonnet costs $3 per million input tokens and $15 per million output tokens. In practice, most automations cost $0.01-0.10 per run. A Telegram bot conversation costs about $0.01. A full multi-agent pipeline costs ~$0.08.
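Those prices make per-run costs easy to sanity-check yourself. A quick estimator using the numbers above (the token counts in the comment are illustrative):

```python
INPUT_PRICE_PER_M = 3.00    # USD per million input tokens (Sonnet)
OUTPUT_PRICE_PER_M = 15.00  # USD per million output tokens (Sonnet)

def estimate_cost(input_tokens, output_tokens):
    """Estimated USD cost of one Claude Sonnet call."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A typical bot turn -- roughly 1,500 input tokens (system prompt + history)
# and 300 output tokens -- costs estimate_cost(1500, 300) = $0.009,
# which is where the ~$0.01-per-conversation figure comes from.
```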

Can I use these patterns commercially?

Yes. The Anthropic API terms allow commercial use. Build products, sell services, automate client work — all fair game.

What's the difference between Claude Sonnet and Opus?

Sonnet is faster and cheaper — ideal for automations where speed matters (bots, scrapers, email processing). Opus is more capable for complex reasoning — use it for analysis, strategy, and multi-step pipelines where accuracy matters more than speed.

Do I need to worry about rate limits?

Yes. Anthropic enforces rate limits based on your tier. The production scripts in our toolkit include built-in rate limiting with exponential backoff — your automations won't crash when they hit limits.
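The core of that retry logic fits in a dozen lines. A minimal sketch (not the toolkit's implementation): a generic wrapper that retries a callable with exponential backoff, which you would point at the SDK's `anthropic.RateLimitError` in practice.

```python
import time

def with_backoff(fn, max_retries=5, base_delay=1.0, retryable=(Exception,)):
    """Call fn(), retrying with exponential backoff on retryable errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries -- let the caller handle it
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s, ...

# Usage against the API (assumes an existing `client`):
# response = with_backoff(
#     lambda: client.messages.create(model="claude-sonnet-4-20250514",
#                                    max_tokens=1024, messages=messages),
#     retryable=(anthropic.RateLimitError,),
# )
```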