---
license: mit
task_categories:
- text-classification
tags:
- AI
- ICML
- ICML2024
---
ICML 2024 (International Conference on Machine Learning) Accepted Paper Meta Info Dataset
This dataset is collected from the ICML 2024 OpenReview website (https://openreview.net/group?id=ICML.cc/2024/Conference#tab-accept-oral) as well as from the DeepNLP paper arxiv index (http://www.deepnlp.org/content/paper/icml2024). Researchers interested in analyzing ICML 2024 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted to ICML 2024. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
Meta Information of the JSON File
{
  "abs": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
  "Download PDF": "https://raw.githubusercontent.com/mlresearch/v235/main/assets/abad-rocamora24a/abad-rocamora24a.pdf",
  "OpenReview": "https://openreview.net/forum?id=AZWqXfM6z9",
  "title": "Revisiting Character-level Adversarial Attacks for Language Models",
  "url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
  "authors": "Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios Chrysos, Volkan Cevher",
  "detail_url": "https://proceedings.mlr.press/v235/abad-rocamora24a.html",
  "tags": "ICML 2024",
  "abstract": "Adversarial attacks in Natural Language Processing apply perturbations in the character or token levels. Token-level attacks, gaining prominence for their use of gradient-based methods, are susceptible to altering sentence semantics, leading to invalid adversarial examples. While character-level attacks easily maintain semantics, they have received less attention as they cannot easily adopt popular gradient-based methods, and are thought to be easy to defend. Challenging these beliefs, we introduce Charmer, an efficient query-based adversarial attack capable of achieving high attack success rate (ASR) while generating highly similar adversarial examples. Our method successfully targets both small (BERT) and large (Llama 2) models. Specifically, on BERT with SST-2, Charmer improves the ASR in $4.84$% points and the USE similarity in $8$% points with respect to the previous art. Our implementation is available in https://github.com/LIONS-EPFL/Charmer."
}
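Below is a minimal sketch of how the cleaned-up JSON could be loaded and analyzed with plain Python. The file name `icml2024_papers.jsonl` and the one-record-per-line (JSON Lines) layout are assumptions for illustration; adjust them to the actual file(s) shipped with this dataset.

```python
import json
from collections import Counter

# Hypothetical file name; replace with the actual JSON/JSONL file in this dataset.
DATA_PATH = "icml2024_papers.jsonl"


def load_papers(path):
    """Load one paper record per line (JSON Lines format assumed)."""
    papers = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                papers.append(json.loads(line))
    return papers


if __name__ == "__main__":
    papers = load_papers(DATA_PATH)
    print(f"Loaded {len(papers)} ICML 2024 paper records")

    # Example trend analysis: most frequent words in paper titles.
    word_counts = Counter(
        word.lower().strip(",.:;")
        for paper in papers
        for word in paper["title"].split()
        if len(word) > 3
    )
    print(word_counts.most_common(20))
```

Each record follows the schema shown above, so fields such as `title`, `authors`, `abstract`, and `OpenReview` can be accessed directly by key.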
Related
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews