The White House will use the notable hacking convention DEF CON 31, to be held August 10 to 13 in Las Vegas, as a springboard for evaluating a number of generative artificial intelligence models in collaboration with tech companies and the hacking community.
The collaboration is between the White House and a number of AI developers, including OpenAI, Google, Anthropic, Hugging Face, Microsoft, Nvidia, and Stability AI.
The event at DEF CON 31, meanwhile, will be hosted by AI Village, an AI hacking community.
The Biden-Harris Administration announced the initiative last Thursday, May 4, as part of a series of actions meant to promote responsible AI innovation in the United States.
The actions include further investment into responsible American AI research and development, releasing draft policy guidelines for public comment on the use of AI systems by the US government, and the aforementioned public assessments of existing generative AI systems.
According to the White House, the DEF CON 31 event “will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework.”
It added, “This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models.”
Red-teaming AI
AI Village, in a blog post, said it will be “hosting the first public generative AI red team event at DEF CON 31 with our partners at Humane Intelligence, SeedAI, and the AI Vulnerability Database. We will be testing models kindly provided by Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability with participation from Microsoft, on an evaluation platform developed by Scale AI.”
Ars Technica, in its report, explained that red-teaming is the “process by which security experts attempt to find vulnerabilities or flaws in an organization’s systems to improve overall security and resilience.”
The evaluations will be conducted on provided laptops, with timed access to multiple large language models from the vendors listed.
The event will feature a capture-the-flag-style point system to promote the testing of different types of potential harms, and participating community members are expected to abide by the hacker Hippocratic oath.
While the prize seemingly pales in comparison to the arduous task ahead – the participant with the highest number of points wins a high-end NVIDIA graphics card – the real winners will likely be ordinary people, who stand to be most affected by AI that is less secure, or more harmful, without the intervention of an ethical hacking community. – Rappler.com