# Build & Finetune Agents
## Framework Agnostic
Weaver is deeply optimized for the NexAU and NexRL workflow, providing seamless integration and best practices for building and finetuning agentic models. That said, Weaver remains framework-agnostic and can be integrated with other agent frameworks to suit your needs.
## NexAU Quick Start: Build a Web Research Agent
NexAU is a general-purpose agent framework for building tool-using LLM agents. It provides declarative configuration, flexible tool systems, and seamless tracing for both standalone usage and reinforcement learning.
We're going to build a web research agent based on `nexau/examples/deep_research/`:

```yaml
type: agent
name: deep_research_agent
max_context_tokens: 100000
system_prompt: |
  Date: {{date}}
  You are a research agent. Search for information using web_search and web_read.
system_prompt_type: jinja
tool_call_mode: openai
llm_config:
  temperature: 0.7
  max_tokens: 4096
  api_type: openai_chat_completion
tools:
  - name: web_search
    yaml_path: ./tools/WebSearch.yaml
    binding: nexau.archs.tool.builtin.web_tool:web_search
  - name: web_read
    yaml_path: ./tools/WebRead.yaml
    binding: nexau.archs.tool.builtin.web_tool:web_read
```

Run it:
```python
from nexau import Agent, AgentConfig, LLMConfig
from datetime import datetime
import os

# Load the declarative agent definition and point it at your LLM endpoint.
agent_config = AgentConfig.from_yaml("deep_research_agent.yaml")
agent_config.llm_config = LLMConfig(
    model=os.getenv("LLM_MODEL"),
    base_url=os.getenv("LLM_BASE_URL"),
    api_key=os.getenv("LLM_API_KEY"),
)

agent = Agent(agent_config)

# `context` supplies values for the Jinja variables in the system prompt.
response = agent.run(
    "What are the latest developments in quantum computing?",
    context={"date": datetime.now().strftime("%Y-%m-%d")},
)
print(response)
```

That's all you need to build a web research agent. For more advanced features such as sub-agents, skills, and middlewares, please refer to NexAU's documentation.
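Each `binding` entry in the agent YAML points at a plain Python callable, resolved from a `module.path:function_name` string. As a purely hypothetical sketch of the shape such a function might take (this is not the actual `web_tool` implementation, and NexAU's real tool signature may differ):

```python
# my_tools/word_count.py -- a hypothetical custom tool.
# A binding of `my_tools.word_count:word_count` would resolve to this
# function; treat the signature as an illustrative assumption only.

def word_count(text: str) -> str:
    """Return the number of words in `text`."""
    return f"{len(text.split())} words"
```

The accompanying `yaml_path` file (like `./tools/WebSearch.yaml` above) presumably declares the tool's name and parameter schema so the model knows how to call it.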
## Finetune Agents using NexRL
NexAU agents can be trained with reinforcement learning via NexRL.
### Agent Workspace Setup
For NexRL integration, organize your agent in a workspace:
```
recipe/my_task/agent_workspace/
├── agent_config.yaml   # NexAU agent definition
├── evaluator.py        # Reward computation
├── custom_worker.py    # Optional: custom formatting
└── tools/              # Tool implementations
```

### Agent Configuration for RL
Enable tracing for NexRL:
```yaml
llm_config:
  temperature: 0.7
  max_tokens: 8192
  api_type: openai_chat_completion
  # RL-specific settings (optional)
  logprobs: true
  extra_body:
    train_mode: true
    include_stop_str_in_output: true
    skip_special_tokens: false
    return_tokens_as_token_ids: true

# Required for NexRL
tracers:
  - import: nexau.archs.tracer.adapters.in_memory:InMemoryTracer
```
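Since `api_type` is `openai_chat_completion`, the `extra_body` fields appear to be passed through to the inference server as non-standard request parameters. For intuition, the equivalent raw request through an OpenAI-compatible client would look roughly like the sketch below; the endpoint, model id, and server-side handling of these fields are assumptions (a vLLM-style server is one plausible target):

```python
from openai import OpenAI

# Sketch of the request implied by the config above, assuming an
# OpenAI-compatible server behind base_url (placeholders throughout).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="my-model",  # placeholder model id
    messages=[{"role": "user", "content": "What is reinforcement learning?"}],
    temperature=0.7,
    max_tokens=8192,
    logprobs=True,
    # Non-standard fields are forwarded to the server untouched.
    extra_body={
        "train_mode": True,
        "include_stop_str_in_output": True,
        "skip_special_tokens": False,
        "return_tokens_as_token_ids": True,
    },
)
```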
### NexRL Integration
In your NexRL recipe, reference the agent:
```yaml
rollout_worker:
  type: "nexau"
  nexau_agent_config_path: "recipe/my_task/agent_workspace/agent_config.yaml"
  evaluator_module_path: "recipe/my_task/agent_workspace/evaluator.py:MyEvaluator"
  nexau_agent_workspace: "recipe/my_task/agent_workspace"
  task_name: "my_task"
```

Example recipe: `NexRL/recipe/weaver_nexau_deepsearch_qwen3_8b/`
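The `evaluator_module_path` points at a class that turns a finished rollout into a reward. The exact interface NexRL expects is not shown here; as a purely hypothetical sketch of the idea (method name and arguments are assumptions):

```python
# recipe/my_task/agent_workspace/evaluator.py
# Hypothetical evaluator sketch -- the method name and arguments NexRL
# actually expects may differ; see the NexRL guide for the real interface.

class MyEvaluator:
    def evaluate(self, response: str, ground_truth: str) -> float:
        """Return a scalar reward for one rollout."""
        # Toy reward: 1.0 if the reference answer appears in the response.
        return 1.0 if ground_truth.strip() in response else 0.0
```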
For complete details on NexRL integration, see the NexRL guide.
## Next Steps
- Full documentation: NexAU GitHub
- Advanced features: Skills, MCP integration, context compaction
- RL training: NexRL guide for reinforcement learning with Weaver
- Examples: Check NexAU Examples for complete agent implementations