JP-SystemsX committed
Commit: a5e27b5
Parent(s): 318c91b
Fixed some typos

nDCG.py CHANGED
@@ -13,7 +13,7 @@ score (Ideal DCG, obtained for a perfect ranking) to obtain a score between
 This ranking metric returns a high value if true labels are ranked high by
 ``predictions``.
 
-If a value for k is given to the metric it will only consider the k highest
+If a value for k is given to the metric, it will only consider the k highest
 scores in the ranking
 
 References
@@ -41,12 +41,12 @@ Args:
 
 predictions ('list' of 'float'): Either predicted relevance, probability estimates or confidence values
 
-k (int): If set to a value only the k highest scores in the ranking will be considered else considers all outputs.
+k (int): If set to a value, only the k highest scores in the ranking will be considered, else considers all outputs.
     Defaults to None.
 
 sample_weight (`list` of `float`): Sample weights Defaults to None.
 
-ignore_ties ('boolean'): If set to true assumes that there are no ties (this is likely if predictions are continuous)
+ignore_ties ('boolean'): If set to true, assumes that there are no ties (this is likely if predictions are continuous)
     for efficiency gains. Defaults to False.
 
 Returns:
@@ -64,7 +64,7 @@ Examples:
 >>> results = nDCG_metric.compute(references=[[10, 0, 0, 1, 5]], predictions=[[.1, .2, .3, 4, 70]], k=3)
 >>> print(results)
 {'nDCG@3': 0.4123818817534531}
-Example 3-There is only one relevant label but there is a tie and the model can't decide which one is the one.
+Example 3-There is only one relevant label, but there is a tie and the model can't decide which one is the one.
 >>> accuracy_metric = evaluate.load("accuracy")
 >>> results = nDCG_metric.compute(references=[[1, 0, 0, 0, 0]], predictions=[[1, 1, 0, 0, 0]], k=1)
 >>> print(results)
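The docstring's examples call nDCG_metric.compute(...) on an already loaded metric. A minimal end-to-end sketch; the repo id "JP-SystemsX/nDCG" is a guess from the author and filename above, not something the diff shows:

import evaluate

# Assumption: the Space's repo id; substitute the actual id if it differs.
nDCG_metric = evaluate.load("JP-SystemsX/nDCG")

# Example 2 from the docstring: only the 3 highest-ranked scores count.
results = nDCG_metric.compute(
    references=[[10, 0, 0, 1, 5]],
    predictions=[[.1, .2, .3, 4, 70]],
    k=3,
)
print(results)  # {'nDCG@3': 0.4123818817534531}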
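The k, sample_weight, and ignore_ties parameters mirror sklearn.metrics.ndcg_score. Assuming the metric wraps that function (the diff itself does not confirm this), the docstring's examples, including the tie in Example 3, can be sanity-checked against sklearn directly:

from sklearn.metrics import ndcg_score

# Example 2: top-3 truncation, same inputs as the docstring.
print(ndcg_score([[10, 0, 0, 1, 5]], [[.1, .2, .3, 4, 70]], k=3))
# 0.4123818817534531

# Example 3: the two highest scores are tied, one relevant and one not.
# With ignore_ties=False (the default), sklearn averages the gain over the
# tied orderings, so the relevant label occupies rank 1 half the time and
# nDCG@1 should come out to 0.5.
print(ndcg_score([[1, 0, 0, 0, 0]], [[1, 1, 0, 0, 0]], k=1))

Passing ignore_ties=True skips that averaging and trusts whatever order the sort happens to produce, which is the efficiency gain the ignore_ties line in the docstring refers to.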