Installation
Get Weaver installed and run your first training in minutes.
System Requirements
- Python: 3.10+
- OS: Linux, macOS, or Windows
Note: The Weaver SDK runs on your local machine (CPU-only is fine). The actual training happens on remote GPU infrastructure that Weaver manages for you.
Weaver SDK Setup
Install the Weaver SDK from PyPI:
```bash
pip install nex-weaver
```
API Key Setup
Before using Weaver, you need to register an account and generate an API key.
Once you have your API key, set it as an environment variable:
```bash
export WEAVER_API_KEY=<your-api-key>
```
You can add this line to your .bashrc or .zshrc file to make it permanent.
Verify Installation
Check that Weaver is installed correctly:
```bash
python -c "import weaver; print('Weaver installed successfully')"
```
You should see "Weaver installed successfully" printed.
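For a slightly stronger check, the short script below (a minimal sketch; the `weaver` import and the WEAVER_API_KEY variable come from the steps above) confirms both that the package imports and that your API key is visible to Python:

```python
import os
import sys

import weaver  # fails here if the SDK is not installed

# Confirm the API key from the earlier step is visible to Python.
if not os.getenv("WEAVER_API_KEY"):
    sys.exit("WEAVER_API_KEY is not set; export it before running training.")

print("Weaver imported and WEAVER_API_KEY is set.")
```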
Quick Start: Your First Training
Let's train a simple Pig Latin translator as a demonstration.
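If you want more training pairs than the two hard-coded examples in the script below, a plain-Python helper like this one (hypothetical; not part of the Weaver SDK) generates them from ordinary English:

```python
def to_pig_latin(text: str) -> str:
    """Convert each word to the hyphenated Pig Latin used in the examples."""
    words = []
    for word in text.split():
        if word[0] in "aeiou":
            words.append(f"{word}-way")
        else:
            # Move the first consonant to the end and add "ay"
            words.append(f"{word[1:]}-{word[0]}ay")
    return " ".join(words)

# e.g. to_pig_latin("hello world") -> "ello-hay orld-way"
examples = [
    {"input": s, "output": to_pig_latin(s)}
    for s in ["hello world", "banana split", "coffee break"]
]
```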
Step 1: Create a Training Script
Create train.py:
```python
import os

import torch

from weaver import ServiceClient, types

# Create the service client with a context manager;
# the API key comes from the environment variable set earlier
with ServiceClient(
    api_key=os.getenv("WEAVER_API_KEY"),
) as service_client:
    # Create a training client for Qwen3-8B
    training_client = service_client.create_model(
        base_model="Qwen/Qwen3-8B"
    )

    # Get the tokenizer
    tokenizer = training_client.get_tokenizer()

    # Prepare training data
    examples = [
        {"input": "hello world", "output": "ello-hay orld-way"},
        {"input": "banana split", "output": "anana-bay plit-say"},
    ]

    # Process examples into training format
    def process_example(example):
        prompt = f"English: {example['input']}\nPig Latin:"
        prompt_tokens = tokenizer.encode(prompt, add_special_tokens=True)
        completion_tokens = tokenizer.encode(f" {example['output']}\n\n", add_special_tokens=False)
        tokens = prompt_tokens + completion_tokens
        # Zero weights mask the prompt so only the completion contributes to the loss
        weights = [0.0] * len(prompt_tokens) + [1.0] * len(completion_tokens)
        # Shift by one position: the model at position i predicts token i+1
        input_tokens = tokens[:-1]
        target_tokens = tokens[1:]
        weights = weights[1:]
        return types.Datum(
            model_input=types.ModelInput.from_ints(input_tokens),
            loss_fn_inputs={
                "target_tokens": torch.tensor(target_tokens, dtype=torch.int64),
                "weights": torch.tensor(weights, dtype=torch.float32),
            },
        )

    processed_examples = [process_example(ex) for ex in examples]

    # Training loop
    adam_params = types.AdamParams(learning_rate=1e-4)
    for step in range(10):
        # Forward and backward pass with the cross-entropy loss
        training_client.forward_backward(
            processed_examples,
            "cross_entropy",
            wait=True,
        )
        # Optimizer step
        training_client.optim_step(adam_params, wait=True)
        print(f"Step {step} completed")

    print("Training complete!")
```
Step 2: Run Training
```bash
python train.py
```
That's it! Weaver will train your model on distributed GPU infrastructure.
Sampling from Your Trained Model
After training, you can sample from your model (run this in the same session, while training_client and tokenizer are still available):
```python
# Save the trained weights and get a sampling client
sampling_client = training_client.save_weights_and_get_sampling_client(
    name="my-model"
)

# Build the prompt
prompt_tokens = tokenizer.encode("English: coffee break\nPig Latin:", add_special_tokens=True)
prompt = types.ModelInput.from_ints(prompt_tokens)

# Greedy decoding, stopping at the first newline
params = types.SamplingParams(max_tokens=20, temperature=0.0, stop=["\n"])
result = sampling_client.sample(
    prompt=prompt,
    sampling_params=params,
    num_samples=1,
)

# Decode and print the response
response_tokens = result["sequences"][0]["tokens"]
response = tokenizer.decode(response_tokens)
print(f"Response: {response}")
```
Next Steps
Now that you have Weaver installed:
- Learn about Training and Sampling - Core training APIs
- Explore Loss Functions - Available loss functions
- Check out Model Lineup - Supported models
- Read about Saving and Loading - Model persistence
Troubleshooting
Common Issues
Issue: ImportError: No module named 'weaver'
Solution: Make sure you installed the package:
```bash
pip install nex-weaver
```
Issue: Authentication errors
Solution: Verify your API key is set:
```bash
echo $WEAVER_API_KEY
```
Issue: Connection errors
Solution: Check your network connection and API endpoint availability.
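To surface authentication or connection problems early, you can wrap client creation in a quick check (a minimal sketch; the SDK's specific exception types are not documented here, so this catches broadly):

```python
import os

from weaver import ServiceClient

try:
    with ServiceClient(api_key=os.getenv("WEAVER_API_KEY")) as client:
        print("Connected to Weaver.")
except Exception as exc:  # the SDK may raise narrower exception types
    print(f"Could not reach Weaver: {exc}")
```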
For more help, check the specific API documentation or contact support.