The US government is building an AI sandbox to tackle cybercrime
Top US security agencies are developing a virtual environment that uses machine learning to understand cyber threats and share findings with both public and private organizations.
A joint effort between the Science and Technology Directorate (S&T), housed within the Department of Homeland Security (DHS), and the Cybersecurity and Infrastructure Security Agency (CISA) will design an AI sandbox in which researchers can collaborate on developing and testing analytical approaches and techniques to counter cyber threats.
CISA’s Advanced Analytics Platform for Machine Learning (CAP-M) will be used for this in both on-premises and multi-cloud scenarios.
Learning threats
“While initially supporting cyber missions, this environment will be flexible and extensible to support datasets, tools and collaboration for other infrastructure security missions,” DHS said.
In CAP-M, various experiments will be conducted, with data analyzed and correlated to help organizations of all kinds protect themselves against an ever-changing landscape of cybersecurity threats.
The experimental data will be made available to other government departments, as well as academic institutions and private sector companies. S&T gave assurances that privacy concerns will be taken into account.
Part of the experimentation involves testing AI and machine learning techniques for their ability to analyze cyber threats and their effectiveness as tools to help combat them. CAP-M will also create a machine learning loop to automate workflows such as exporting and reconciling data.
Speaking to The Register, Monti Knode, a director at pen-testing platform Horizon3.ai, said such a plan is long overdue and welcomed the opportunity to test analytical capabilities.
Knode pointed to past failures that “over the years have overwhelmingly contributed to alert fatigue, leading analysts and practitioners on wild goose chases and down rabbit holes, while true alerts that matter get buried.”
He added that “labs rarely replicate the complexity and noise of a live production environment, but [CAP-M] could be a positive step.”
Speculating how it might work, Knode suggested that simulated attacks could be executed automatically to train the AI on them to learn how they work and how to recognize them.
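To make that speculation concrete, here is a minimal, self-contained sketch of the idea: run simulated attack and benign sessions, then train a trivial nearest-centroid classifier on the resulting traffic features. The feature choices (packet rate, failed logins) and all function names are hypothetical illustrations, not anything documented about CAP-M.

```python
import random

random.seed(0)

def simulate_flow(is_attack):
    """Generate a toy network-flow feature vector: (packets/sec, failed logins).
    Simulated attack traffic is given higher rates -- purely illustrative."""
    if is_attack:
        return (random.gauss(900, 50), random.gauss(30, 5))
    return (random.gauss(100, 50), random.gauss(1, 1))

# "Execute" simulated attacks and benign sessions to build a labeled training set.
train = [(simulate_flow(label), label)
         for label in (True, False) for _ in range(200)]

def centroid(label):
    """Average the feature vectors of one class."""
    rows = [features for features, lbl in train if lbl == label]
    return tuple(sum(col) / len(rows) for col in zip(*rows))

centroids = {True: centroid(True), False: centroid(False)}

def classify(flow):
    """Label a new flow by whichever class centroid is closer (squared distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(flow, centroids[lbl]))

print(classify(simulate_flow(True)))   # classify an unseen simulated attack
```

A real system would use richer features and models, but the loop is the same: generate attacks automatically, label the traffic, fit a model, and use it to recognize similar activity later.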
Sami Elhini, a biometrics specialist at Cerberus Sentinel, was also optimistic that learning from and analyzing threats could lead to a better understanding of them, but cautioned that models could become too generalized and overlook threats to smaller targets, which end up filtered out as unimportant.
He also expressed security concerns, claiming that “in exposing [AI/ML] models to a larger audience, the likelihood of an exploit increases.” He said other countries could target CAP-M to learn more about how it works, or even to disrupt it.
However, the federal project has been met with mostly positive reactions. Craig Lurey, co-founder and CTO of Keeper Security, also told The Register that “Research and development projects within the federal government can help support and catalyze disparate R&D efforts within the private sector. … Cybersecurity is national security and as such should be prioritized.”
Tom Kellermann, a VP at Contrast Security, echoed these sentiments, stating that CAP-M is a “critical project to improve information sharing on TTPs [tactics, techniques, and procedures] and increase situational awareness in US cyberspace.”