I just got my first ChatGPT review on ARR! 😅 Any advice on how to prove it's AI-generated? Thanks!
I'm excited to announce that my internship paper at Parameter Lab was accepted to Findings of #NAACL2025 🎉 TL;DR: Determining whether an LLM was trained on a single sentence may not be possible 😥, but it is possible for large enough amounts of tokens, such as long documents or collections of documents! 🤯 Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models (arXiv:2411.00154) 🔗 Code: https://github.com/parameterlab/mia-scaling
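To make the TL;DR concrete, here is a minimal sketch of the intuition behind scaling up membership inference: instead of scoring a single sentence, aggregate a per-token signal over an entire document, where the member/non-member gap becomes separable. This uses a simple loss-based score as a stand-in; the paper's actual attacks, models, and calibration differ. The target model and threshold below are illustrative assumptions, not the paper's setup.

```python
# Illustrative loss-based membership-inference sketch (not the paper's exact method):
# score a whole document by its average per-token negative log-likelihood (NLL).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-1.4b"  # assumed target model (trained on The Pile)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def doc_score(text: str) -> float:
    """Average per-token NLL of `text` under the model (lower = more member-like)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

# Aggregating over many tokens (a long document, or many documents) is what
# makes the membership signal detectable, per the paper's scaling finding.
docs = ["...long candidate document 1...", "...long candidate document 2..."]
threshold = 2.5  # illustrative; in practice calibrated on known non-member data
for doc in docs:
    s = doc_score(doc)
    print(f"avg NLL = {s:.3f} -> {'member?' if s < threshold else 'non-member?'}")
```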
DCoT Models — models from the paper "Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models" (arXiv:2407.03181, published Jul 3, 2024):
- haritzpuerto/LLaMA2-7B-dcot (Text Generation, updated Jul 16, 2024)
- haritzpuerto/LLaMA2-13B-dcot (Text Generation, updated Jul 16, 2024)
- haritzpuerto/LLaMA2-70B-dcot (Text Generation, updated Jul 16, 2024)
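A hedged usage sketch for the DCoT checkpoints listed above, via the standard `transformers` API. The prompt wording is an assumption for illustration, not the paper's exact training template; check the model card for the intended format.

```python
# Sketch: generating with a DCoT-fine-tuned model. DCoT models are trained to
# produce several divergent chains of thought before a final answer, which
# enables self-correction within a single generation pass.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haritzpuerto/LLaMA2-7B-dcot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed prompt style; the actual DCoT template may differ.
prompt = (
    "Question: If a train travels 60 km in 45 minutes, what is its speed in km/h?\n"
    "Give multiple chains of thought, then the final answer.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```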
The Pile Samples — samples used for the NAACL 2025 Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models":
- haritzpuerto/the_pile_arxiv_1k_sample (updated Jul 5, 2024)
- haritzpuerto/the_pile_arxiv_50k_sample (updated Jul 16, 2024)
- haritzpuerto/the_pile_00_arxiv (updated Aug 15, 2024)
- haritzpuerto/the_pile_00_wiki (updated Aug 28, 2024)
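These datasets load with the standard `datasets` library, as in the short sketch below. The split name is an assumption; inspect the loaded dataset's features and splits before building an experiment on it.

```python
# Sketch: loading one of the Pile samples above for MIA experiments.
from datasets import load_dataset

ds = load_dataset("haritzpuerto/the_pile_arxiv_1k_sample", split="train")  # split assumed
print(ds)     # shows the features and number of rows
print(ds[0])  # inspect one record to confirm the text column name
```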