AgentDojo: Benchmarking the Capabilities and Adversarial Robustness of LLM Agents¶
Edoardo Debenedetti1, Jie Zhang1, Mislav Balunović1,2,
Luca Beurer-Kellner1,2, Marc Fischer1,2, Florian Tramèr1
1ETH Zurich and 2Invariant Labs
Read our paper here.
Quickstart¶
pip install agentdojo
Warning
Note that the API of the package is still under development and might change in the future.
If you want to use the PI detector, you need to install the transformers
extra:
pip install "agentdojo[transformers]"
FAQ¶
How do I run the benchmark?¶
To run the benchmark, you first need to install the package. Once this is done, the benchmark can be run with the benchmark script. Documentation on how to use the script can be obtained with the --help
flag.
For example, to run the workspace
suite on the tasks 0 and 1, with gpt-4o-2024-05-13
as the LLM, the tool filter as a defense, and the attack with tool knowlege, run the following command:
python -m agentdojo.scripts.benchmark -s workspace -ut user_task_0 \
-ut user_task_1 --model gpt-4o-2024-05-13 \
--defense tool_filter --attack tool_knowledge
To run the above, but on all suites and tasks, run the following:
python -m agentdojo.scripts.benchmark --model gpt-4o-2024-05-13 \
--defense tool_filter --attack tool_knowledge
How do I create a new defense or model?¶
Check out the documentation on how to create your own pipeline.
How do I create a new attack?¶
Check out the documentation on attacks.
How can I use the Dojo to create my own benchmark?¶
Take a look at our in-depth documentation on how to create a new task suite.
Development set-up¶
Take a look at the development set-up docs.
Citing¶
If you use AgentDojo in your research, please consider citing our paper:
@misc{debenedetti2024agentdojo,
title={AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents},
author={Edoardo Debenedetti and Jie Zhang and Mislav Balunović and Luca Beurer-Kellner and Marc Fischer and Florian Tramèr},
year={2024},
eprint={2406.13352},
archivePrefix={arXiv},
primaryClass={cs.CR}
}