May 28, 2024

Securing the Future: CISA's Pioneering Roadmap for AI in Cybersecurity

The Cybersecurity and Infrastructure Security Agency (CISA) is a federal agency within the United States Department of Homeland Security (DHS), established to enhance the security and resilience of the nation's critical infrastructure. CISA's history dates back to 2007, when it began as the National Protection and Programs Directorate (NPPD), with the original mission of reducing and eliminating threats to U.S. critical and cyber infrastructure.

Fast-forward to October 2023: U.S. President Joe Biden signed the landmark Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," stating that "Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society."

Today, CISA operates with a budget of $2.9 billion and has over 3,000 employees dedicated to defending against cyber threats and building more cyber-resilient infrastructure for the future. With the advent of Artificial Intelligence (AI) and President Biden's Executive Order, CISA has embraced the challenge, introducing new guidelines that promise to revolutionize the cybersecurity practices associated with AI. In this blog, we'll explain CISA's new guidelines for AI, including its goals, key pillars, and more.
What Are CISA's Goals for Securing Artificial Intelligence?

CISA's new guidelines for securing AI are focused on four main goals:

- Cyber Defense
- Risk Reduction and Resilience
- Operational Collaboration
- Agency Unification

CISA's Roadmap for Artificial Intelligence

To achieve its goals around secure AI, CISA developed and released a comprehensive roadmap for AI, outlining a strategic approach to harnessing AI's potential while mitigating its risks. The roadmap lays out CISA's objectives, how it plans to achieve them, and how progress will be evaluated. It includes five "lines of effort," each addressing a key aspect of AI in cybersecurity.

Line of Effort 1: Responsible Use of AI

CISA's goals for Line of Effort (LOE) 1 are to evaluate its cybersecurity programs for potential opportunities to use AI and to provide the resources, requirements, and oversight needed for appropriate AI integration. It also aims to proactively mitigate threats to critical infrastructure through the responsible use of AI tools, preventing damaging exploits before they occur. Specifically, CISA's six key objectives for this LOE are as follows:

Objective 1.1: Establish governance and oversight processes for CISA's use of AI.
Objective 1.2: Collect, review, and prioritize AI use cases to support CISA missions.
Objective 1.3: Develop an adoption strategy for the next generation of AI-enabled technologies.
Objective 1.4: Incorporate cyber defense, incident management, and redress procedures into AI systems and processes.
Objective 1.5: Examine holistic approaches to limiting bias in AI use at CISA.
Objective 1.6: Responsibly and securely deploy AI systems to support CISA's cybersecurity mission.

The success of this LOE will be measured by increased responsible use of AI software tools across CISA workflows.
Line of Effort 2: Assuring AI Systems

The second line of effort, Assuring AI Systems, aims to identify security risks and resilience challenges in order to proactively mitigate threats to critical infrastructure, adapt existing security guidance to AI software systems, and ensure that stakeholders understand how AI-specific threats fit into the existing vulnerability disclosure process. CISA plans to achieve these outcomes through the following objectives:

Objective 2.1: Assess cybersecurity risks of AI adoption in critical infrastructure sectors.
Objective 2.2: Engage critical infrastructure stakeholders to determine security and resilience challenges of AI adoption.
Objective 2.3: Capture the breadth of AI systems used across the Federal enterprise.
Objective 2.4: Develop best practices and guidance for acquisition, development, and operation of secure AI systems.
Objective 2.5: Drive adoption of strong vulnerability management practices for AI systems.
Objective 2.6: Incorporate AI systems into Secure by Design initiative.

This LOE's success will be measured by increased adherence to CISA risk guidance and best practices, not only for AI software deployment but also for red teaming and vulnerability management.

Line of Effort 3: Protecting Critical Infrastructure

CISA's third line of effort is geared toward partnering with other government agencies and industry partners that develop, test, and evaluate AI tools, in order to assess risk and recommend mitigations for AI-based threats to U.S. critical infrastructure. It aims to protect AI systems from exploitation, especially via AI-enhanced attacks, and to support the advancement of AI risk management practices across the critical infrastructure community.
CISA plans to achieve this outcome through the following core objectives:

Objective 3.1: Regularly engage industry stakeholder partners that are developing AI tools to assess and address security concerns to critical infrastructure, and evaluate methods for educating partners and stakeholders.
Objective 3.2: Use CISA partnerships and working groups to share information on AI-driven threats.
Objective 3.3: Assess AI risks to critical infrastructure.

CISA will measure the success of this LOE by the number of publications and engagements that generate awareness of emerging AI-related risks and advance AI risk management practices.

Line of Effort 4: Collaboration and Communication

CISA's fourth LOE outlines its plans to collaborate with other agencies, international partners, and the public on developing processes and policies around the use of AI-based software. The key outcome CISA aims to achieve here is ensuring that its stakeholders are aligned around clear guidance for AI security. CISA plans to make this happen through the following key objectives:

Objective 4.1: Support the development of a whole-of-DHS approach on AI policy issues.
Objective 4.2: Participate in interagency policy meetings and interagency working groups on AI.
Objective 4.3: Develop CISA policy positions that take a strategic, national-level perspective for AI policy documents, such as memoranda and other products.
Objective 4.4: Ensure CISA strategy, priorities, and policy framework align with interagency policies and strategy.
Objective 4.5: Engage with international partners on global AI security.

CISA will measure success here by the proportion of AI-centric guidance and policy documents developed with U.S. interagency and international partners.
Line of Effort 5: Expanding AI Expertise

CISA's fifth and final LOE centers on ensuring that CISA hires, trains, and retains a workforce with expertise in AI software systems and techniques. CISA plans to achieve this through the following four objectives:

Objective 5.1: Connect and amplify AI expertise that already exists in CISA's workforce.
Objective 5.2: Recruit interns, fellows, and staff with AI expertise.
Objective 5.3: Educate CISA's workforce on AI.
Objective 5.4: Ensure internal training not only reflects technical expertise, but also incorporates legal, ethical, and policy considerations of AI implementation across all aspects of CISA's work.

CISA's metric for the success of this LOE is increased AI expertise across the CISA workforce.

Many have noticed that CISA's measures of effectiveness aren't as specific and quantitative as they could be. CISA has acknowledged in its roadmap that identifying appropriate measures of effectiveness is challenging and will require ongoing effort and continuous improvement. It is currently developing more specific effectiveness measures, which are slated to be defined in its annual operating plan.

Conclusion

CISA's guidelines are a significant step toward a future where AI and cybersecurity are inextricably linked. As we look ahead, the importance of secure AI systems cannot be overstated; the safety of our digital world depends on it.

About BreachLock

BreachLock is a global leader in Continuous Attack Surface Discovery and Penetration Testing. Continuously discover, prioritize, and mitigate exposures with evidence-backed Attack Surface Management, Penetration Testing, and Red Teaming. Know your risk. Contact BreachLock today!