A newer version of this model is available: AINovice2005/LeEmpereur_70-Base

Model Name:

  • LeEmpereur_70


Model Description

LeEmpereur_70 is a pruned version of argilla/notus-7b-v1. The pruning was performed with the PruneMe library from Arcee.ai, reducing the parameter count by approximately 70% and significantly shrinking the model's size.
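As a rough sanity check of the "approximately 70%" figure: the 2.09B parameter count comes from this card's metadata, while the ~7.24B figure for notus-7b-v1 is an assumption based on its Mistral-7B architecture.

```python
# Rough sanity check of the pruning ratio.
# Assumption: notus-7b-v1 has ~7.24B parameters (typical for Mistral-7B-based
# models); this card reports 2.09B parameters for the pruned model.
base_params = 7.24e9
pruned_params = 2.09e9

reduction = 1 - pruned_params / base_params
print(f"Parameters removed: {reduction:.1%}")  # ~71%, i.e. "approximately 70%"
```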

Configuration:

The following YAML configuration was used to produce this model:

slices:
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [0, 1]
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [2, 10]
merge_method: passthrough
dtype: bfloat16

๐‘๐ž๐ฌ๐ฎ๐ฅ๐ญ๐ฌ: Firstly, the ideal number of parameters to be pruned should be much lower in future iterations.Secondly, sizeable amount of finetuning should be done if model parameters are reduced to a greater extent.

๐๐จ๐ญ๐ž: This model is made with the intention to be used for fine-tuning. It should not to be used for inference as is.

Format: Safetensors
Model size: 2.09B params
Tensor type: F32
