Overview

Weavr's AI agent action (ai.agent) enables workflows to leverage large language models for complex reasoning, analysis, and task execution. Agents can use tools, search the web, and interact with external systems through MCP servers.

Claude & GPT

Support for Anthropic Claude and OpenAI models with automatic tool use.

Tool Use

Agents can read files, make HTTP requests, search the web, and more.

MCP Support

Extend capabilities with Model Context Protocol servers.

Configuration

Configure your AI provider in ~/.weavr/config.yaml:

# Anthropic (recommended)
anthropicKey: sk-ant-...
model: claude-sonnet-4-20250514

# Or OpenAI
openaiKey: sk-...
model: gpt-4o

# Web search (optional)
# Set via environment: BRAVE_API_KEY or TAVILY_API_KEY

Environment variables: You can also set ANTHROPIC_API_KEY or OPENAI_API_KEY instead of adding keys to the config file.
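
If you keep a config file but want the secret itself to live in the environment, the {{ env.* }} interpolation shown for MCP servers below should also work for provider keys (an assumption; setting the variables directly is the documented path):

# ~/.weavr/config.yaml
# Assumes {{ env.* }} interpolation is also honored for top-level config keys
anthropicKey: "{{ env.ANTHROPIC_API_KEY }}"
model: claude-sonnet-4-20250514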

Basic Usage

Use the ai.agent action in your workflow steps:

steps:
  - id: analyze
    action: ai.agent
    with:
      task: |
        Analyze this error log and suggest fixes:
        {{ trigger.error }}

      # Optional: customize the agent
      model: claude-sonnet-4-20250514
      maxTokens: 4096
      temperature: 0.7
      system: "You are a helpful debugging assistant."

Accessing Results

The agent's response is available in subsequent steps:

- id: respond
  action: slack.post
  needs: [analyze]
  with:
    channel: "#alerts"
    text: "{{ steps.analyze.response }}"

Built-in Tools

Agents have access to these tools by default:

Tool              Description
read_file         Read contents of a file from the filesystem
write_file        Write content to a file
list_directory    List files in a directory
http_request      Make HTTP requests to APIs
web_search        Search the web (requires BRAVE_API_KEY or TAVILY_API_KEY)
run_workflow      Execute another Weavr workflow
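
For example, a step that leans on web_search (a sketch; it assumes BRAVE_API_KEY or TAVILY_API_KEY is set as described above, and the task text is illustrative):

- id: search
  action: ai.agent
  with:
    task: |
      Search the web for recent articles about the Model Context Protocol
      and summarize the three most relevant results with links.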

Enabling/Disabling Tools

All built-in tools are enabled by default. To restrict an agent, list only the tools it may use:

- id: safe-agent
  action: ai.agent
  with:
    task: "Analyze this code"
    tools:
      - read_file        # Only allow file reading
      - http_request     # And HTTP requests

MCP Servers

Model Context Protocol (MCP) servers extend agent capabilities with custom tools. Configure them in ~/.weavr/config.yaml:

# ~/.weavr/config.yaml
mcpServers:
  filesystem:
    command: npx
    args: [-y, "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]

  postgres:
    command: npx
    args: [-y, "@modelcontextprotocol/server-postgres"]
    env:
      DATABASE_URL: postgresql://user:pass@localhost/db

  github:
    command: npx
    args: [-y, "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "{{ env.GITHUB_TOKEN }}"

Tools from MCP servers are automatically available to agents.
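
For example, with the postgres server above configured, an agent step can simply reference that capability in its task (a sketch; the exact tool names the agent sees are whatever the server exposes):

- id: db-report
  action: ai.agent
  with:
    task: |
      Using the postgres tools, count how many orders were created in the
      last 24 hours and report the result in one sentence.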

LLM Providers

Anthropic Claude

model: claude-sonnet-4-20250514      # Recommended
model: claude-opus-4-20250514        # Most capable
model: claude-3-5-haiku-20241022     # Fastest

OpenAI

model: gpt-4o                        # Recommended
model: gpt-4o-mini                   # Faster/cheaper
model: o1                            # Reasoning model
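
The model in ~/.weavr/config.yaml acts as the default; individual steps can set their own via the model parameter shown under Basic Usage. For example, routing a lightweight summarization step to a cheaper model (a sketch: fetch-report is a hypothetical upstream agent step, and the matching provider key must be configured):

- id: summarize
  action: ai.agent
  needs: [fetch-report]        # hypothetical upstream ai.agent step
  with:
    task: |
      Summarize the following report in three bullet points:

      {{ steps.fetch-report.response }}
    model: gpt-4o-mini         # use a cheaper model for a simple task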

Examples

Code Review Agent

name: pr-review
trigger:
  type: github.pull_request
  with:
    repo: your-org/repo
    events: [opened, synchronize]

steps:
  - id: get-diff
    action: http.request
    with:
      url: "{{ trigger.pullRequest.url }}.diff"
      headers:
        Accept: text/plain

  - id: review
    action: ai.agent
    needs: [get-diff]
    with:
      task: |
        Review this pull request diff and provide feedback:

        {{ steps.get-diff.body }}

        Focus on:
        - Potential bugs
        - Security issues
        - Code style
        - Performance concerns

  - id: comment
    action: github.create_comment
    needs: [review]
    with:
      repo: your-org/repo
      issue_number: "{{ trigger.number }}"
      body: "{{ steps.review.response }}"

Research Assistant

name: research-topic
trigger:
  type: telegram.message

steps:
  - id: research
    action: ai.agent
    with:
      task: |
        Research the following topic and provide a comprehensive summary:

        {{ trigger.text }}

        Include:
        - Key facts and findings
        - Recent developments
        - Relevant sources

  - id: respond
    action: telegram.send
    needs: [research]
    with:
      chatId: "{{ trigger.chatId }}"
      text: "{{ steps.research.response }}"

Daily Digest

name: daily-digest
trigger:
  type: cron.schedule
  with:
    expression: "0 8 * * *"

steps:
  - id: gather
    action: ai.agent
    with:
      task: |
        Create a morning briefing with:
        1. Top tech news from Hacker News
        2. Weather for San Francisco
        3. Any interesting AI developments

        Keep it concise and actionable.

  - id: send
    action: slack.post
    needs: [gather]
    with:
      channel: "#daily-digest"
      text: "{{ steps.gather.response }}"