Commit `53c0932` by Sheshera Mysore: "Small changes" (parent `340d0f5`). File changed: README.md.
result = aspire_bienc(**inputs)
clsrep = result.last_hidden_state[:, 0, :]
```

**`aspire-biencoder-compsci-spec-full`** can be used as follows: 1) download [`aspire-biencoder-compsci-spec-full.zip`](https://drive.google.com/file/d/1AHtzyEpyn7DeFYOdt86ik4n0tGaG5kMC/view?usp=sharing), and 2) use it per this example usage script: [`aspire/examples/ex_aspire_bienc.py`](https://github.com/allenai/aspire/blob/main/examples/ex_aspire_bienc.py)
### Variables and metrics

This model is evaluated on information-retrieval datasets with document-level queries. Performance here is reported on CSFCube (computer science/English); this is detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). CSFCube presents a finer-grained query via selected sentences in a query abstract, based on which a finer-grained retrieval must be made from candidate abstracts. The bi-encoder above ignores the finer-grained query sentences and uses the whole abstract; this serves as a baseline in the paper.

We rank documents by the L2 distance between the query and candidate documents.
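Given CLS representations for a query and its candidate documents, the ranking step amounts to sorting candidates by L2 distance. A minimal sketch with NumPy, using toy 4-dimensional vectors in place of real 768-dimensional CLS embeddings (the helper name `rank_by_l2` is illustrative, not from the aspire codebase):

```python
import numpy as np

def rank_by_l2(query_vec, cand_vecs):
    """Return candidate indices sorted by ascending L2 distance to the query."""
    dists = np.linalg.norm(cand_vecs - query_vec, axis=1)
    return np.argsort(dists)

# Toy 4-dim embeddings standing in for 768-dim CLS representations.
query = np.array([1.0, 0.0, 0.0, 0.0])
cands = np.array([
    [0.9, 0.1, 0.0, 0.0],  # nearly identical to the query
    [0.0, 1.0, 0.0, 0.0],  # orthogonal to the query
    [0.5, 0.5, 0.0, 0.0],  # in between
])
print(rank_by_l2(query, cands))  # → [0 2 1]
```

Smaller distance means a higher rank, so the most similar abstract comes first.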
### Evaluation results

The released model `aspire-biencoder-compsci-spec` (and `aspire-biencoder-compsci-spec-full`) is compared against `allenai/specter`. `aspire-biencoder-compsci-spec-full`<sup>*</sup> denotes the performance reported in our paper, averaged over 3 re-runs of the model. The released models `aspire-biencoder-compsci-spec` and `aspire-biencoder-compsci-spec-full` are the single best run among the 3 re-runs.
CSFCube aggregated results:

| Model                                            | MAP   | NDCG%20 |
|-------------------------------------------------:|:-----:|:-------:|
| `specter`                                        | 34.23 | 53.28   |
| `aspire-biencoder-compsci-spec-full`<sup>*</sup> | 37.90 | 58.16   |
| `aspire-biencoder-compsci-spec`                  | 37.17 | 57.91   |
| `aspire-biencoder-compsci-spec-full`             | 37.67 | 59.26   |
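The MAP column above is mean average precision over per-query rankings. A minimal, hypothetical sketch of that computation on toy binary relevance judgments (this is not the evaluation code used for the paper, and it assumes every relevant document appears in the ranking):

```python
def average_precision(ranked_rels):
    """AP for one query; ranked_rels[i] is 1 if the doc at rank i+1 is relevant."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at this relevant hit
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_query_rels):
    """MAP: average the per-query AP values."""
    return sum(average_precision(q) for q in per_query_rels) / len(per_query_rels)

# Two toy queries: relevant docs at ranks 1 and 3, and at rank 2.
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # → 0.6666666666666666
```

Each query's AP rewards placing relevant documents near the top; MAP then averages across queries, matching how performance is "measured per query and averaged" in retrieval benchmarks like CSFCube.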