[WIP] Upload folder using huggingface_hub (multi-commit 50f58f06ead177f88836447a20b19f26d917b831c1a3f9b9bac44727fbfd886e)

#2
by sharpenb - opened
Files changed (5)
  1. README.md +5 -6
  2. config.json +2 -2
  3. model.safetensors +2 -2
  4. plots.png +0 -0
  5. smash_config.json +1 -1
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+library_name: pruna-engine
 thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
 metrics:
 - memory_disk
@@ -7,8 +8,6 @@ metrics:
 - inference_throughput
 - inference_CO2_emissions
 - inference_energy_consumption
-tags:
-- pruna-ai
 ---
 <!-- header start -->
 <!-- 200823 -->
@@ -22,7 +21,7 @@ tags:
 [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
 [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
 [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
-[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)
+[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)
 
 # Simply make AI models cheaper, smaller, faster, and greener!
 
@@ -30,11 +29,11 @@ tags:
 - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
 - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
 - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
-- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
+- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
 
 ## Results
 
-![image info](./plots.png)
+Detailed efficiency metrics coming soon!
 
 **Frequently Asked Questions**
 - ***How does the compression work?*** The model is compressed with llm-int8.
@@ -61,7 +60,7 @@ You can run the smashed model with these steps:
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-xglm-564M-bnb-4bit-smashed",
-                                             trust_remote_code=True, device_map='auto')
+                                             trust_remote_code=True)
 tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
 
 input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
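This commit also narrows the README usage snippet: `device_map='auto'` is dropped from the `from_pretrained` call. Below is a minimal end-to-end sketch of the updated snippet; the `generate` call, its parameters, and the final decode step are assumptions, since the diff shows only the loading and tokenization lines.

```python
# Minimal sketch of the updated README usage. The generate/decode lines are
# assumptions; the diff above only shows loading and tokenization.
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is kept; device_map='auto' is dropped in this commit,
# so weight placement falls back to the default loading behavior.
model = AutoModelForCausalLM.from_pretrained(
    "PrunaAI/facebook-xglm-564M-bnb-4bit-smashed",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")

input_ids = tokenizer("What is the color of prunes?,",
                      return_tensors="pt").to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=64)  # assumed parameters
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```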
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/tmp/tmpgk6v805a",
+  "_name_or_path": "/tmp/tmpefd80oyd",
   "activation_dropout": 0,
   "activation_function": "gelu",
   "architectures": [
@@ -22,7 +22,7 @@
   "quantization_config": {
     "bnb_4bit_compute_dtype": "bfloat16",
     "bnb_4bit_quant_type": "fp4",
-    "bnb_4bit_use_double_quant": false,
+    "bnb_4bit_use_double_quant": true,
     "llm_int8_enable_fp32_cpu_offload": false,
     "llm_int8_has_fp16_weight": false,
     "llm_int8_skip_modules": [
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f0f33f43b53b8dc0965d37aa2c883cfa627dffb944e418f68e68b21b33cceaf6
-size 694930096
+oid sha256:e3122b51cea3efb16f5ea48225496917166269d0fb1774a765e6126d7b6a8f49
+size 681040709
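The new Git LFS pointer records the re-quantized weights' SHA-256 and a smaller size (681,040,709 bytes, down from 694,930,096). A sketch, not part of the commit, for checking a downloaded `model.safetensors` against this pointer:

```python
# Verify a downloaded model.safetensors against the LFS pointer above:
# both the byte size and the SHA-256 digest must match.
import hashlib
import os

EXPECTED_OID = "e3122b51cea3efb16f5ea48225496917166269d0fb1774a765e6126d7b6a8f49"
EXPECTED_SIZE = 681040709

def verify_lfs_object(path: str) -> bool:
    """Return True if the file matches the pointer's size and oid."""
    if os.path.getsize(path) != EXPECTED_SIZE:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_OID

print(verify_lfs_object("model.safetensors"))
```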
plots.png DELETED
Binary file (427 kB)
 
smash_config.json CHANGED
@@ -8,7 +8,7 @@
   "compilers": "None",
   "task": "text_text_generation",
   "device": "cuda",
-  "cache_dir": "/ceph/hdd/staff/charpent/.cache/models8vfpiksw",
+  "cache_dir": "/ceph/hdd/staff/charpent/.cache/modelsuf8rvw4z",
   "batch_size": 1,
   "model_name": "facebook/xglm-564M",
   "pruning_ratio": 0.0,