---
configs:
- config_name: LFAI_RAG_niah_v1
  data_files:
  - split: test
    path: "LFAI_RAG_niah_v1.json"
  default: true
license: apache-2.0
---

# LFAI_RAG_niah_v1

This dataset aims to be the basis for RAG-focused Needle in a Haystack evaluations for [LeapfrogAI](https://github.com/defenseunicorns/leapfrogai)🐸.

## Dataset Details

LFAI_RAG_niah_v1 contains 120 context entries intended for use in Needle in a Haystack RAG evaluations.

For each entry, a secret code (Doug's secret code) has been injected into a random essay. This secret code is the "needle" that an LLM is asked to find.

Example:
```json
{
  "context_length": 512,
  "context_depth": 0.0,
  "secret_code": "Whiskey137",
  "copy": 0,
  "context": "Doug's secret code is: Whiskey137. Remember this. Venture funding works like gears. A typical startup goes through several rounds of funding, and at each round you want to take just enough money to reach the speed where you can shift into the next gear.\n\nFew startups get it quite right. Many are underfunded. A few are overfunded, which is like trying to start driving in third gear."
}
```

### Dataset Sources

Data was generated using the essays of [Paul Graham](https://www.paulgraham.com/articles.html) as the haystack into which a random secret code is injected.

## Uses

This dataset is ready to be used for Needle in a Haystack evaluations.
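
A typical evaluation asks an LLM a question over the `context` and checks whether its answer contains the `secret_code`. As a minimal sketch (the `score_response` helper is hypothetical, not part of this dataset), scoring one entry can be a simple substring check:

```python
# Hypothetical scoring helper: the evaluation passes if the model's
# answer contains the entry's secret code.
def score_response(entry: dict, model_answer: str) -> bool:
    return entry["secret_code"] in model_answer

# Entry mirroring the record format shown above (context shortened here).
entry = {
    "context_length": 512,
    "context_depth": 0.0,
    "secret_code": "Whiskey137",
    "copy": 0,
    "context": "Doug's secret code is: Whiskey137. Remember this. ...",
}

print(score_response(entry, "Doug's secret code is Whiskey137."))  # True
print(score_response(entry, "I could not find any secret code."))  # False
```

Exact substring matching is the simplest criterion; a real harness might normalize whitespace or casing before comparing.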

## Dataset Structure

Each entry in this dataset contains the following fields:
- `context_length`: the approximate length of the `context` field in characters, rounded to the nearest power of 2
- `context_depth`: approximately how far into the context the secret code phrase is injected, expressed as a fraction of the document depth
- `secret_code`: the secret code generated for the given entry, used to verify that the LLM found the correct code
- `copy`: the experiment is repeated a few times for each length and depth; this count identifies which repetition the entry belongs to
- `context`: the portion of text with the injected secret code

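To illustrate how `context_depth` relates to the `context` text, here is a sketch of injecting a needle at a given fractional depth. This is a hypothetical helper, not code shipped with the dataset, and the sentence-boundary snapping is an assumption made for readability:

```python
def inject_needle(haystack: str, needle: str, depth: float) -> str:
    """Insert `needle` at roughly `depth` (0.0 = start, 1.0 = end) of
    `haystack`, snapping to the nearest preceding sentence boundary."""
    target = int(len(haystack) * depth)
    # Snap back to the end of the nearest sentence before the target offset.
    cut = haystack.rfind(". ", 0, target)
    pos = cut + 2 if cut != -1 else target
    return haystack[:pos] + needle + " " + haystack[pos:]

text = "Venture funding works like gears. A typical startup goes through several rounds."
# depth 0.0 prepends the needle, matching the example entry above.
print(inject_needle(text, "Doug's secret code is: Whiskey137. Remember this.", 0.0))
```

With `depth=0.0` the needle lands at the very start of the context, as in the `context_depth: 0.0` example shown earlier; larger depths push it proportionally deeper into the haystack.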
## Dataset Card Authors

The LeapfrogAI🐸 team at [Defense Unicorns](https://www.defenseunicorns.com/)🦄

## Dataset Card Contact