
# BigWeave v7.1 124b

The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.

## Prompting Format

Both Vicuna and Alpaca formats work.
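For reference, the bare skeletons of the two templates (a sketch; system prompts, where used, go before the first turn and are not fixed by this card):

```
# Alpaca
### Instruction:
{prompt}

### Response:

# Vicuna
USER: {prompt}
ASSISTANT:
```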

## Merge process

This is a merge of Xwin-LM/Xwin-LM-70B-V0.1 and Sao10K/Euryale-1.3-L2-70B. It uses the same slice layout as alpindale/goliath-120b, but with the layer ranges "fixed" so that no layers are skipped: goliath's original ranges omit a few layers at the slice boundaries (see this thread). The two models are interleaved in alternating 16-layer slices, with each adjacent pair of slices overlapping by 8 layers.

Merge configuration:

```yaml
slices:
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [0,16]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [8,24]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [16,32]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [24,40]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [32,48]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [40,56]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [48,64]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [56,72]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [64,80]
merge_method: passthrough
dtype: float16
```
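
To reproduce the merge, this configuration can be fed to mergekit. A minimal sketch, assuming mergekit is installed and the config above is saved as bigweave-v7.1.yml (the file and output names here are placeholders):

```sh
# Run the passthrough merge on GPU; output lands in ./BigWeave-v7.1-124b
mergekit-yaml bigweave-v7.1.yml ./BigWeave-v7.1-124b --cuda
```

Because passthrough copies the listed slices verbatim rather than interpolating weights, the output simply stacks 9 × 16 = 144 transformer layers, which is where the roughly 124B parameter count comes from.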