---
title: Introduction
---

**Open Interpreter** works with both hosted and local language models.

Hosted models are faster and more capable, but require payment. Local models are private and free, but are often less capable.

For this reason, we recommend starting with a **hosted** model, then switching to a local model once you've explored Open Interpreter's capabilities.
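For example, a minimal sketch of switching between models from Python (assuming the `interpreter.llm.model` attribute and LiteLLM-style model strings; the hosted and local setup pages below cover the exact options):

```python
from interpreter import interpreter

# Hosted model (requires an API key for the provider)
interpreter.llm.model = "gpt-4"

# Or a local model, e.g. Mistral served through Ollama
# interpreter.llm.model = "ollama/mistral"

interpreter.chat("What operating system am I on?")
```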

<CardGroup>

<Card
  title="Hosted setup"
  icon="cloud"
  href="/language-models/hosted-models"
>
  Connect to a hosted language model like GPT-4 **(recommended)**
</Card>

<Card
  title="Local setup"
  icon="microchip"
  href="/language-models/local-models"
>
  Set up a local language model like Mistral
</Card>

</CardGroup>

<br/>
<br/>

<span class="opacity-50">Thank you to the incredible [LiteLLM](https://litellm.ai/) team for their efforts in connecting Open Interpreter to hosted providers.</span>