Overview

Weavr's AI agent action (ai.agent) lets workflows use large language models for complex reasoning, analysis, and task execution. Agents can use tools, search the web, and interact with external systems through MCP servers.

Claude & GPT

Support for Anthropic Claude and OpenAI models with automatic tool use.

Tool Use

Agents can read files, make HTTP requests, search the web, and more.

MCP Support

Extend capabilities with Model Context Protocol servers.

Configuration

Configure your AI provider in ~/.weavr/config.yaml:

# Anthropic (recommended)
anthropicKey: sk-ant-...
model: claude-sonnet-4-20250514

# Or OpenAI (configure one provider at a time)
openaiKey: sk-...
model: gpt-4o

# Web search (optional)
# Set via environment: BRAVE_API_KEY or TAVILY_API_KEY

Environment variables: You can also set ANTHROPIC_API_KEY or OPENAI_API_KEY instead of storing keys in the config file.
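As a sketch, the environment-variable route looks like this in a shell profile. The variable names come from the note above; the key values are placeholders, not real keys:

```shell
# Set provider keys in the environment instead of ~/.weavr/config.yaml.
# Values below are placeholders; substitute your own keys.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"

# Optional: enables the built-in web_search tool.
export BRAVE_API_KEY="brave-placeholder"
```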

Basic Usage

Use the ai.agent action in your workflow steps:

steps:
  - id: analyze
    action: ai.agent
    with:
      task: |
        Analyze this error log and suggest fixes:
        {{ trigger.error }}

      # Optional: customize the agent
      model: claude-sonnet-4-20250514
      maxTokens: 4096
      temperature: 0.7
      system: "You are a helpful debugging assistant."

Accessing Results

The agent's response is available in subsequent steps:

- id: respond
  action: slack.post
  needs: [analyze]
  with:
    channel: "#alerts"
    text: "{{ steps.analyze.response }}"

Memory Blocks

Define memory blocks in your workflow to assemble context from files, URLs, search results, trigger data, and step outputs. Use them in prompts with {{ memory.blocks.* }} or attach them directly to ai.agent.

memory:
  - id: incident-context
    sources:
      - id: runbook
        type: file
        path: "docs/runbook.md"
      - id: alert
        type: trigger
        path: "error"

steps:
  - id: analyze
    action: ai.agent
    with:
      task: "Diagnose the issue using available context."
      memory: [incident-context]
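Blocks can also be interpolated straight into a prompt rather than attached. This sketch assumes a block is addressable by its id under the {{ memory.blocks.* }} syntax mentioned above; check your Weavr version for the exact path shape:

```yaml
steps:
  - id: analyze
    action: ai.agent
    with:
      task: |
        Diagnose the issue using this context:
        {{ memory.blocks.incident-context }}
```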

Built-in Tools

Agents have access to these tools by default:

Tool             Description
read_file        Read the contents of a file from the filesystem
write_file       Write content to a file
list_directory   List the files in a directory
http_request     Make HTTP requests to external APIs
web_search       Search the web (requires BRAVE_API_KEY or TAVILY_API_KEY)
web_fetch        Fetch and extract content from a URL
shell_exec       Execute local shell commands (use with caution)

Enabling/Disabling Tools

- id: safe-agent
  action: ai.agent
  with:
    task: "Analyze this code"
    tools:
      - read_file        # Only allow file reading
      - http_request     # And HTTP requests

MCP Servers

Model Context Protocol (MCP) servers extend agent capabilities with custom tools. Weavr includes 20+ pre-configured MCP servers that can be managed from the web UI.

Enable via Web UI (Recommended)

The easiest way to manage MCP servers is through Settings in the web app:

  1. Navigate to Settings
  2. Scroll to the MCP Servers section
  3. Toggle servers on or off; changes take effect immediately
  4. View connection status (running/not connected) in real time

No restart required: MCP servers can be enabled and disabled dynamically, and new workflow runs automatically use the updated tool set.

Available Server Categories

Filesystem

filesystem, everything-search

Git & Code

git, github, gitlab

Database

postgres, sqlite, mysql

Browser

puppeteer, playwright, browserbase

Search

brave-search, exa

Productivity

google-drive, slack, linear, obsidian

Manual Configuration

You can also configure MCP servers in ~/.weavr/config.yaml:

# ~/.weavr/config.yaml
mcp:
  servers:
    filesystem:
      command: npx
      args: [-y, "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]

    postgres:
      command: npx
      args: [-y, "@modelcontextprotocol/server-postgres"]
      env:
        DATABASE_URL: postgresql://user:pass@localhost/db

    github:
      command: npx
      args: [-y, "@modelcontextprotocol/server-github"]
      env:
        GITHUB_PERSONAL_ACCESS_TOKEN: "{{ env.GITHUB_TOKEN }}"

Using MCP Tools in Workflows

Once enabled, MCP tools appear in the Workflow Builder's tool selector for AI agent blocks:

steps:
  - id: backup-files
    action: ai.agent
    with:
      task: "Backup all .md files from ~/Documents to ~/Backup"
      tools:
        - read_file           # From filesystem MCP
        - write_file          # From filesystem MCP
        - list_directory      # From filesystem MCP

The Workflow Builder groups tools by source (Built-in vs MCP) and shows the server name for each MCP tool, making it easy to see which capabilities are available.

LLM Providers

Anthropic Claude

model: claude-sonnet-4-20250514      # Recommended
model: claude-opus-4-20250514        # Most capable
model: claude-3-5-haiku-20241022     # Fastest

OpenAI

model: gpt-4o                        # Recommended
model: gpt-4o-mini                   # Faster/cheaper
model: o1                            # Reasoning model

Examples

Code Review Agent

name: pr-review
trigger:
  type: github.pull_request
  with:
    repo: your-org/repo
    events: [opened, synchronize]

steps:
  - id: get-diff
    action: http.request
    with:
      url: "{{ trigger.pullRequest.url }}.diff"
      headers:
        Accept: text/plain

  - id: review
    action: ai.agent
    needs: [get-diff]
    with:
      task: |
        Review this pull request diff and provide feedback:

        {{ steps.get-diff.body }}

        Focus on:
        - Potential bugs
        - Security issues
        - Code style
        - Performance concerns

  - id: comment
    action: github.create_comment
    needs: [review]
    with:
      repo: your-org/repo
      issue_number: "{{ trigger.number }}"
      body: "{{ steps.review.response }}"

Research Assistant

name: research-topic
trigger:
  type: telegram.message

steps:
  - id: research
    action: ai.agent
    with:
      task: |
        Research the following topic and provide a comprehensive summary:

        {{ trigger.text }}

        Include:
        - Key facts and findings
        - Recent developments
        - Relevant sources

  - id: respond
    action: telegram.send
    needs: [research]
    with:
      chatId: "{{ trigger.chatId }}"
      text: "{{ steps.research.response }}"

Daily Digest

name: daily-digest
trigger:
  type: cron.schedule
  with:
    expression: "0 8 * * *"

steps:
  - id: gather
    action: ai.agent
    with:
      task: |
        Create a morning briefing with:
        1. Top tech news from Hacker News
        2. Weather for San Francisco
        3. Any interesting AI developments

        Keep it concise and actionable.

  - id: send
    action: slack.post
    needs: [gather]
    with:
      channel: "#daily-digest"
      text: "{{ steps.gather.response }}"