---
title: "Best Practices"
---
Most settings, such as model architecture and GPU offloading, can be adjusted through your LLM provider, like [LM Studio](https://lmstudio.ai/).
**However, `max_tokens` and `context_window` should be set via Open Interpreter.**
In local mode, smaller context windows use less RAM, so we recommend trying a much shorter window (~1000) if the model is failing or running slowly.
<CodeGroup>
```bash Terminal
interpreter --local --max_tokens 1000 --context_window 3000
```
```python Python
from interpreter import interpreter
interpreter.offline = True # Disables online features like Open Procedures
interpreter.llm.model = "openai/x" # Tells OI to send messages in OpenAI's format
interpreter.llm.api_key = "fake_key" # LiteLLM, which we use to talk to LM Studio, requires this
interpreter.llm.api_base = "http://localhost:1234/v1" # Point this at any OpenAI compatible server
interpreter.llm.max_tokens = 1000
interpreter.llm.context_window = 3000
interpreter.chat()
```
</CodeGroup>
<br />
<Info>Make sure `max_tokens` is less than `context_window`.</Info>
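If you set these values programmatically, a small guard can catch a misconfiguration before you start chatting. Below is a minimal sketch; the `validate_token_limits` helper is hypothetical and not part of Open Interpreter's API.

```python Python
from interpreter import interpreter

# Hypothetical helper (not part of Open Interpreter): a defensive check
# that the completion budget fits inside the context window.
def validate_token_limits(llm):
    if llm.max_tokens >= llm.context_window:
        raise ValueError(
            f"max_tokens ({llm.max_tokens}) must be less than "
            f"context_window ({llm.context_window})"
        )

interpreter.llm.max_tokens = 1000
interpreter.llm.context_window = 3000
validate_token_limits(interpreter.llm)  # passes: 1000 < 3000
```

Since `max_tokens` only caps the completion, keeping it well below the context window leaves room for the prompt and conversation history.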