Model Card for Proxy Lite

A mini, open-weights version of Proxy.

Model Description

  • Developed by: Convergence AI
  • Model type: 3B Vision-Language Model
  • Agent type: Web-browsing Agent
  • License: CC-BY-NC-4.0
  • Finetuned from model: Qwen/Qwen2.5-VL-3B-Instruct

Running Proxy on the web

See https://github.com/convergence-ai/proxy-lite for the full code to run Proxy Lite in a browser:

git clone https://github.com/convergence-ai/proxy-lite.git
cd proxy-lite
make proxy
proxy "Find some markets near Kings Cross and tell me their ratings."

Uses

Proxy Lite is designed and trained to complete automated tasks in a web browser.

Full code for running the model is available in the GitHub repository. This includes a CLI tool for running the model, as well as a Streamlit app.

For small-scale testing, you can use the hosted demo endpoint shown in the OpenAI-client example below.


Direct Use

We recommend hosting your own endpoint with vLLM; you can use the following command:

vllm serve convergence-ai/proxy-lite-3b \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --port 8008

The tool arguments are important: they let vLLM parse the tool calls from the model's output appropriately.
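With --tool-call-parser hermes, vLLM extracts tool calls that the model emits between Hermes-style tags. A raw completion containing a tool call looks roughly like the following (the tool name and arguments here are hypothetical, shown for illustration only):

<tool_call>
{"name": "click", "arguments": {"mark_id": 0}}
</tool_call>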

Important: at the time of writing, Qwen-2.5-VL support in transformers is not yet available in the latest release, so be sure to install it from source (e.g. pip install git+https://github.com/huggingface/transformers.git).

Message History

For details on using and prompting Proxy Lite, please refer to the repository; in brief, the model expects a message history of the form:

message_history = [
    {
        "role": "system", 
        "content": "You are Proxy Lite...", # Full system prompt in src/proxy_lite/agents/proxy_lite_agent.py
    }, # System prompt
    {
        "role": "user", 
        "content": "Find some markets near Kings Cross and tell me their ratings.",
    }, # Set the task
    {
        "role": "user", 
        "content": [
            {"type": "image_url", "image_url": {base64_encoded_screenshot} },
            {"type": "text", "text": "URL: https://www.google.com/ \n- [0] <a>About</a> \n- [1] <a>Store</a>...."}
        ] # This is the observation from the environment
    },
]
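Here, base64_encoded_screenshot is assumed to hold a base64-encoded PNG, and the data-URL wrapping follows the standard OpenAI image format. A minimal sketch of building the image part from a saved screenshot:

import base64

# Encode a previously captured screenshot as a base64 data URL,
# the standard OpenAI-style image payload.
with open("screenshot.png", "rb") as f:
    base64_encoded_screenshot = base64.b64encode(f.read()).decode("utf-8")

observation_image = {
    "type": "image_url",
    "image_url": {"url": f"data:image/png;base64,{base64_encoded_screenshot}"},
}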

The message history then builds up over the course of a task, alternating between the assistant (which takes actions) and the user (which provides observations).

Context-Window Management: When making calls to the model, all observations other than the current one are discarded to reduce the large number of image tokens required. Since the model's responses include reflections on the observations, and all responses are kept in the message history, the model remains aware of the full history when planning new actions.
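As an illustration (a minimal sketch, not the repository's actual implementation), this trimming amounts to keeping only the latest observation message:

def trim_message_history(message_history: list[dict]) -> list[dict]:
    """Keep only the most recent observation message; drop older ones."""
    def is_observation(message: dict) -> bool:
        # Observations are user messages whose content is a list of parts
        # (screenshot + text), as in the example above.
        return message["role"] == "user" and isinstance(message.get("content"), list)

    observation_indices = [i for i, m in enumerate(message_history) if is_observation(m)]
    latest = observation_indices[-1] if observation_indices else -1
    return [
        m for i, m in enumerate(message_history)
        if not is_observation(m) or i == latest
    ]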

Tools

You should also pass the tools that the model has access to; these define the action space available to the model. You can do this with transformers:

from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor

from proxy_lite.tools import ReturnValueTool, BrowserTool
from proxy_lite.serializer import OpenAICompatableSerializer

processor = AutoProcessor.from_pretrained("convergence-ai/proxy-lite-3b")
tools = OpenAICompatableSerializer().serialize_tools([ReturnValueTool(), BrowserTool(session=None)])

templated_messages = processor.apply_chat_template(
    message_history, tokenize=False, add_generation_prompt=True, tools=tools
)

image_inputs, video_inputs = process_vision_info(message_history)

batch = processor(
    text=[templated_messages],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
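From here you can generate an action locally; a minimal sketch, assuming a CUDA-capable GPU and a source install of transformers with Qwen2.5-VL support:

import torch
from transformers import Qwen2_5_VLForConditionalGeneration

# Illustrative local-generation sketch; reuses the `batch` and `processor`
# built above.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "convergence-ai/proxy-lite-3b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

batch = batch.to(model.device)
output_ids = model.generate(**batch, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
new_tokens = output_ids[:, batch["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])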

Alternatively, you can send the messages to the endpoint directly, and it will handle the formatting:

from openai import OpenAI

client = OpenAI(
    base_url="http://convergence-ai-demo-api.hf.space/v1",
    api_key="EMPTY",  # the client requires a key even if the endpoint ignores it
)

response = client.chat.completions.create(
    model="convergence-ai/proxy-lite-3b",
    messages=message_history,
    tools=tools,
    tool_choice="auto",
)
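The response follows the standard OpenAI tool-calling schema, so the model's chosen action can be read from the first tool call:

# Read the model's chosen action off the standard OpenAI response fields.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)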

Evaluation

Proxy Lite scored 72.4% on the WebVoyager benchmark, placing it first among all available open-weights models.

A breakdown of the results by website is shown below:

Website            Success Rate (%)   Finish Rate (%)   Avg. Steps
Allrecipes         87.8               95.1               10.3
Amazon             70.0               90.0                7.1
Apple              82.1               89.7               10.7
ArXiv              60.5               79.1               16.0
BBC News           69.4               77.8               15.9
Booking            70.0               85.0               24.8
Cambridge Dict.    86.0               97.7                5.7
Coursera           82.5               97.5                4.7
ESPN               53.8               87.2               14.9
GitHub             85.0               92.5               10.0
Google Flights     38.5               51.3               34.8
Google Map         78.9               94.7                9.6
Google Search      71.4               92.9                6.0
Huggingface        68.6               74.3               18.4
Wolfram Alpha      78.3               93.5                6.1

Out-of-Scope Use

Proxy Lite is specifically designed to automate routine tasks within a web browser environment. However, it should not be used for:

  • High-Stakes or Safety-Critical Applications:
    Avoid using Proxy Lite for tasks such as financial transactions, healthcare operations, legal decision-making, or emergency responses, where any error could lead to serious harm or significant financial loss.

  • Unauthorized or Invasive Data Extraction:
    Automated scraping or extraction of data from websites should only be performed with explicit permission. Proxy Lite should not be used to bypass websites' terms of service, copyright restrictions, or privacy policies.

  • Interactions with Malicious or Unverified Websites:
    Using the model to navigate or interact with suspicious or untrusted websites may expose the system to security threats such as malware, phishing attacks, or other forms of cyber exploitation.

  • Compliance-Regulated or Legally Sensitive Actions:
    Tasks that require adherence to strict legal or regulatory standards (e.g., processing personal data or sensitive information) should employ additional safeguards beyond what the model provides.


Citation

BibTeX:

@article{proxy-lite,
  title={Proxy Lite - A Mini, Open-weights, Autonomous Assistant},
  author={Convergence AI},
  year={2025}
}