Update README.md
# Summary

> In this repo, we release the LoRA modules and the gates of the 7B models trained in our paper in HuggingFace format.

# Introduction
LoRA-Flow provides an efficient way to fuse different LoRA modules that significantly outperforms existing fusion methods. The following figure illustrates our approach: we use layer-wise fusion gates, which project each layer's input hidden states into fusion weights, to enable dynamic LoRA fusion. For more details, please refer to our paper.
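To make the gating idea concrete, here is a minimal NumPy sketch of a layer-wise fusion gate. All names (`fusion_gate`, `fuse_loras`, `W`) and details such as the softmax normalization are illustrative assumptions, not the exact implementation from the paper: the gate projects a layer's input hidden states into one weight per LoRA module, and the per-LoRA outputs are combined with those dynamic weights.

```python
import numpy as np

def fusion_gate(hidden, W):
    """Sketch of a layer-wise fusion gate (assumed details, not the
    paper's exact code): project hidden states into one logit per LoRA
    module, then softmax the logits into fusion weights."""
    logits = hidden @ W                              # (seq_len, num_loras)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)         # rows sum to 1

def fuse_loras(hidden, lora_outputs, W):
    """Combine per-LoRA outputs using the gate's token-wise weights."""
    w = fusion_gate(hidden, W)                       # (seq_len, num_loras)
    # Weighted sum over the LoRA axis: (n, s, d) -> (s, d)
    return np.einsum("sn,nsd->sd", w, lora_outputs)

rng = np.random.default_rng(0)
d, n, s = 16, 3, 5                                   # hidden size, #LoRAs, seq len
hidden = rng.normal(size=(s, d))                     # one layer's input states
lora_outputs = rng.normal(size=(n, s, d))            # one output per LoRA module
W = rng.normal(size=(d, n))                          # hypothetical gate projection
fused = fuse_loras(hidden, lora_outputs, W)
print(fused.shape)                                   # (5, 16)
```

Because the weights are computed from each token's hidden state at each layer, the mixture of LoRA modules can vary per token and per layer, which is what "dynamic" fusion refers to.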

# Citation
If you find our repo helpful, please cite the following: