alabulei committed (verified)
Commit 46cfdd4 · Parent(s): 1cad725

Update README.md

Files changed (1): README.md (+31 −1)
README.md CHANGED
@@ -7,4 +7,34 @@ sdk: static
  pinned: false
  ---
 
- Edit this `README.md` markdown file to author your organization card.
+ ![](https://github.com/second-state/LlamaEdge/raw/dev/assets/logo.svg)
+
+
+ Run open-source LLMs and create OpenAI-compatible API services for the Llama2 series of LLMs locally with LlamaEdge!
+
+ ## Give it a try
+
+ Run a single command in your terminal:
+
+ ```
+ bash <(curl -sSfL 'https://code.flows.network/webhook/iwYN1SdN3AmPgR5ao5Gt/run-llm.sh')
+ ```
+
+ Follow the on-screen instructions to install the WasmEdge Runtime and download your favorite open-source LLM. Then choose whether you want to chat with the model via the CLI or via a web UI.
+
+ [See it in action](https://youtu.be/Hqu-PBqkzDk) | [Docs](https://www.secondstate.io/articles/run-llm-sh/)
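
"OpenAI-compatible" means the local server exposes the same HTTP endpoints as the OpenAI API, so any OpenAI-style client can talk to the model running on your machine. Below is a minimal sketch with `curl`, assuming the API server started by the script is listening on the LlamaEdge default of `http://localhost:8080`; the `model` value is illustrative and simply names whichever GGUF model the server loaded.

```
# Query the locally running, OpenAI-compatible chat endpoint.
# Adjust the port if the installer reported a different address.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "llama-2-7b-chat",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is LlamaEdge?"}
        ]
      }'
```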
+
+ ## Why?
+
+ [LlamaEdge](https://github.com/second-state/LlamaEdge), powered by Rust and WasmEdge, provides a strong alternative to Python for AI inference.
+
+ * Lightweight. The total runtime size is 30MB.
+ * Fast. Full native speed on GPUs.
+ * **Portable. Single cross-platform binary on different CPUs, GPUs, and OSes.**
+ * Secure. Sandboxed and isolated execution on untrusted devices.
+ * Container-ready. Supported in Docker, containerd, Podman, and Kubernetes.
+
+ ## Learn more
+
+ Please visit the [LlamaEdge](https://github.com/second-state/LlamaEdge) project to learn more.
+
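
For readers who want to see roughly what the one-line installer automates, the LlamaEdge project documents a manual flow: install WasmEdge with its GGML (llama.cpp) plugin, download a GGUF model plus one of the prebuilt Wasm apps, and run it with `wasmedge`. The commands below are a sketch based on the LlamaEdge README; the plugin name, model file name, and prompt-template flag are assumptions that may differ across releases.

```
# 1. Install the WasmEdge Runtime with the GGML plugin
#    (installer flags follow the WasmEdge docs; verify against the current release).
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml

# 2. Download a quantized Llama2 chat model in GGUF format
#    (the exact file name is an example; any GGUF model works).
curl -LO https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/resolve/main/Llama-2-7b-chat-hf-Q5_K_M.gguf

# 3. Download the CLI chat app and/or the API server (web UI + OpenAI-compatible API).
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-chat.wasm
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-api-server.wasm

# 4a. Chat with the model on the command line ...
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-chat.wasm -p llama-2-chat

# 4b. ... or start the local API server (listens on port 8080 by default).
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-api-server.wasm -p llama-2-chat
```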