---
id: cybersecurity-regulatory-compliance
related:
  - cybersecurity-ai-threats
  - cybersecurity-enterprise-ai
  - global-non-us-landscape
  - ai-financial-services
key_findings:
  - "NIST AI RMF 1.0 remains the core framework — no 2.0 exists; vendor references to 2.0 conflate companion documents"
  - "OWASP LLM Top 10 2025 added system prompt leakage and vector/embedding weaknesses; new Agentic Top 10 (ASI01-10) published Dec 2025"
  - "MITRE ATLAS added 14 agentic-specific techniques in October 2025 via Zenity Labs collaboration"
  - "EU AI Act Article 15 names five AI-specific attack vectors as mandatory security requirements for high-risk systems"
---

# AI Security Regulatory & Compliance Frameworks

**Scope:** Regulatory and compliance frameworks governing AI security — NIST AI RMF ecosystem, OWASP LLM/Agentic Top 10, EU AI Act security provisions, US financial services regulatory guidance (SEC, FINRA, OCC, FDIC, Fed, NYDFS), emerging agentic AI standards, and MITRE ATLAS. Excludes general EU AI Act coverage (see global-non-us-landscape.md).

**Date:** March 23, 2026

**Credibility tiers used:** Tier 1 (NIST, OWASP, MITRE, SEC, FINRA, OCC, FDIC, Federal Reserve, NYDFS, EU Commission/EUR-Lex), Tier 2 (ISO publications, CSA, U.S. Treasury), Tier 4 (industry commentary for adoption context)

---

## 1. NIST AI Risk Management Framework (AI RMF)

### Current Status

**AI RMF 1.0** was released January 26, 2023 — the primary document remains at version 1.0. No RMF 2.0 has been formally released; references to "2.0" in vendor content conflate companion document expansions with the core framework. NIST expects RMF 1.1 guidance addenda and more granular evaluation methodologies through 2026.
**Companion document ecosystem (as of March 2026):**

| Document | Release | Notes |
|---|---|---|
| AI RMF 1.0 | Jan 2023 | Core framework: Govern, Map, Measure, Manage |
| AI RMF Playbook | Jan 2023, updated Feb 2025 | Suggested actions per function/subcategory |
| NIST AI 100-1 (Trustworthy AI) | Mar 2023 | Foundational principles |
| **NIST AI 600-1** (GenAI Profile) | **Jul 2024** | Companion resource for generative AI, per EO 14110 §4.1(a)(i)(A) |
| **NIST IR 8596** (Cyber AI Profile) | **Dec 16, 2025 — IPRD** | CSF 2.0 profile for AI cybersecurity; public comment closed Jan 30, 2026 |
| **NIST AI Agent Standards Initiative** | **Feb 17, 2026** | CAISI launch; RFI on agent security closed Mar 9, 2026 |
| COSAiS overlays (forthcoming) | In development | SP 800-53 control overlays for GenAI, Predictive AI, Agentic AI |

### Key Provisions — Core Framework (AI RMF 1.0)

Four functions operate as an iterative governance cycle, not a one-time checklist:

- **GOVERN**: Establish organizational structures, policies, accountability, and culture for AI risk management. Sets risk appetite, assigns roles, establishes oversight committees.
- **MAP**: Categorize AI systems, identify affected stakeholders, surface relevant risks (technical, ethical, regulatory). Includes AI system inventory and context documentation.
- **MEASURE**: Evaluate and analyze identified risks — bias, security vulnerabilities, performance drift, fairness. Requires metrics calibrated to deployment context.
- **MANAGE**: Allocate resources and implement risk treatment. Includes monitoring, incident response, and AI system retirement processes.

The framework is **voluntary and technology-neutral**. It does not prescribe specific technologies or mandate adoption, though US sector regulators (SEC, CFPB, FDA, FTC) increasingly reference AI RMF principles in supervisory expectations.
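The MAP-function inventory and GOVERN-side triage described above can be made concrete with a minimal sketch. The AI RMF prescribes no schema; the record fields, tier labels, and `needs_review` rule below are illustrative assumptions, not framework requirements.

```python
from dataclasses import dataclass, field

# Illustrative MAP-function inventory record. The AI RMF does not prescribe a
# schema; these fields are one plausible reading of the MAP subcategories
# (system categorization, stakeholders, context documentation).
@dataclass
class AISystemRecord:
    system_id: str
    purpose: str                      # deployment context (MAP)
    model_type: str                   # e.g. "LLM", "predictive"
    risk_tier: str                    # set under the GOVERN risk appetite
    stakeholders: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)  # feeds MEASURE

def needs_review(record: AISystemRecord) -> bool:
    """MANAGE-side triage: high-tier systems, or systems with open risks,
    get routed to the oversight committee established under GOVERN."""
    return record.risk_tier == "high" or bool(record.identified_risks)

inventory = [
    AISystemRecord("credit-scoring-v2", "consumer credit decisions", "predictive",
                   "high", ["applicants", "compliance"], ["disparate impact"]),
    AISystemRecord("doc-summarizer", "internal document summarization", "LLM",
                   "low", ["employees"]),
]
review_queue = [r.system_id for r in inventory if needs_review(r)]
```

Even a record this small demonstrates the cycle: GOVERN sets the tiering policy, MAP populates the inventory, MEASURE fills `identified_risks`, and MANAGE consumes the review queue.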
### Key Provisions — NIST AI 600-1 (GenAI Profile, July 2024)

Developed pursuant to [EO 14110](https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence). Maps AI RMF functions to 12 risk categories unique to or exacerbated by generative AI:

1. CBRN (chemical/biological/radiological/nuclear) information or capabilities
2. Confabulation (hallucination)
3. Dangerous, violent, or hateful content
4. Data privacy
5. Environmental impacts
6. Harmful bias and homogenization
7. Human-AI configuration
8. Information integrity
9. **Information security** — "Lowered barriers for offensive cyber capabilities, including ease of security attacks, hacking, malware, phishing, and offensive cyber operations through accelerated automated discovery and exploitation of vulnerabilities; increased available attack surface for targeted cyber attacks, which may compromise the confidentiality and integrity of model [infrastructure]"
10. Intellectual property
11. Obscene, degrading, and/or abusive content
12. Value chain and component integration

Security-relevant actions under this profile span: adversarial robustness testing, system prompt hardening, model access controls, supply chain integrity, and output validation.

[Source: NIST AI 600-1 PDF, July 2024](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf)

### Key Provisions — NIST IR 8596 (Cyber AI Profile, December 2025)

Released as an Initial Preliminary Draft (IPRD) December 16, 2025 with public comment closing January 30, 2026. This is the most operationally significant AI security document NIST has produced. Applies CSF 2.0 structure to AI cybersecurity across **three focus areas**:

1. **Secure** — Protecting AI systems, models, agents, data, and supply chains from compromise. Covers AI-specific attack surfaces: adversarial inputs, data poisoning, model drift, prompt injection, supply chain integrity.
2. **Defend** — Using AI to enhance cybersecurity operations.
   Covers AI-assisted threat detection, anomaly detection, UEBA, compliance automation, adversarial training, automated incident response.
3. **Thwart** — Building resilience against AI-enabled attacks. Covers deepfake phishing, AI-generated malware, autonomous adversarial agents operating from reconnaissance through exfiltration.

**Selected high-priority provisions (Priority 1 = highest):**

- Inventory AI models, APIs, keys, agents, data, and their integrations and permissions [IDENTIFY]
- Issue unique, traceable identities to AI systems (service accounts, not shared credentials) [PROTECT]
- Restrict arbitrary code execution by AI agent systems [PROTECT]
- Track and log AI traffic separately from human traffic [DETECT]
- Define conditions for disabling AI autonomy during risk response [IDENTIFY]
- Maintain protected, regularly tested backups of critical AI assets [PROTECT]
- Build AI-specific incident response procedures including model rollback and retraining [RESPOND/RECOVER]

Framework is not final — NIST is soliciting public comment before releasing an Initial Public Draft.

[Source: NIST IR 8596 IPRD, December 2025](https://nvlpubs.nist.gov/nistpubs/ir/2025/NIST.IR.8596.iprd.pdf)

### Practical Adoption

The AI RMF has become the de facto US AI governance standard for regulated industries and federal contractors.
Key adoption signals:

- **150+ organizations** engaged with MITRE ATLAS (which references ATLAS-NIST AI RMF interop as a design principle), per [MITRE ATLAS Overview, September 2025 at NIST CSRC](https://csrc.nist.gov/csrc/media/Presentations/2025/mitre-atlas/TuePM2.1-MITRE%20ATLAS%20Overview%20Sept%202025.pdf)
- **FINRA explicitly references AI RMF** in its 2025 Annual Regulatory Oversight Report as the closest available framework for broker-dealer AI governance
- **US Treasury FS AI RMF** (February 19, 2026) is a direct financial-services adaptation of NIST AI RMF with 230 control objectives — the clearest signal that regulators consider NIST AI RMF the foundational layer, not a standalone option
- **ISO/IEC 42001:2023** (AI management systems) and the EU AI Act technical documentation requirements are widely mapped to AI RMF, enabling dual compliance
- Sector-specific profiles in active development include financial services credit/fraud models and critical infrastructure monitoring

### Financial Services Relevance

The February 2026 [US Treasury Financial Services AI Risk Management Framework (FS AI RMF)](https://home.treasury.gov/news/press-releases/sb0401), developed through the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council's AI Executive Oversight Group (AIEOG), is the single most relevant document for FS practitioners.
Key characteristics:

- **Structural alignment with NIST AI RMF** but operationally specific — not just principles
- **230 Control Objectives** in a Risk and Control Matrix, organized by AI adoption stage
- Covers: AI lifecycle governance, data quality and provenance, third-party/vendor AI risk, cybersecurity and adversarial threats, human oversight of automated systems
- Scalable — controls calibrated by institution size, complexity, and adoption stage
- Companion AI Lexicon establishes common definitions for AI concepts across regulatory, technical, legal, and business functions
- Published by the [Cyber Risk Institute](https://cyberriskinstitute.org/artificial-intelligence-risk-management/), available for download

The FS AI RMF explicitly addresses cybersecurity and adversarial threats as a key risk theme, directly engaging the AI-specific threat surface (model manipulation, data poisoning, adversarial inputs) that existing bank examination guidance (OCC, FDIC, Fed) had addressed only indirectly.

---

## 2. OWASP Top 10 for LLM Applications

### Current Status

**Version 2025 (v2.0)** released November 18, 2024. This is the current production version. The 2023 version (v1.1) was the initial release. The project has since expanded into the [OWASP GenAI Security Project](https://genai.owasp.org/), which now includes an additional **Top 10 for Agentic Applications** released December 9, 2025.
[Source: OWASP LLM Top 10 PDF, v2025](https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-v2025.pdf)

### Key Provisions — 2025 Top 10

| ID | Name | Core Risk | Key Mitigations |
|---|---|---|---|
| **LLM01:2025** | **Prompt Injection** | User/external prompts alter LLM behavior; direct (user input) or indirect (external data sources, tool outputs) | Constrain model via system prompt instructions; privilege separation; RAG Triad validation; require human approval for high-risk actions; segregate and label external content |
| **LLM02:2025** | **Sensitive Information Disclosure** | PII, financial data, health records, API credentials, system prompt contents exposed in LLM outputs | Strict access controls (least privilege); data sanitization before ingestion; federated learning/differential privacy; conceal system preamble; tokenization/redaction |
| **LLM03:2025** | **Supply Chain** | Compromised training data, pre-trained models, LoRA adapters, plugins, deployment platforms | SBOM/AI-BOM (OWASP CycloneDX); vet all data sources; integrity checks and model signing; anomaly detection; patch management policy |
| **LLM04:2025** | **Data and Model Poisoning** | Manipulation of training/fine-tuning/embedding data to introduce backdoors, biases, or vulnerabilities | Data lineage tracking (DVC/ML-BOM); sandboxing; adversarial/red team testing; monitor training loss; RAG grounding at inference |
| **LLM05:2025** | **Improper Output Handling** | Insufficient validation of LLM outputs passed downstream — enables XSS, CSRF, SSRF, RCE | Zero-trust on model outputs; OWASP ASVS guidelines; context-aware encoding; parameterized queries; CSP for XSS prevention |
| **LLM06:2025** | **Excessive Agency** | LLM granted excessive functionality, permissions, or autonomy enabling damaging downstream actions | Minimize extensions and permissions; require explicit user approval for high-risk actions; complete mediation; rate-limiting; human-in-the-loop for critical actions |
| **LLM07:2025** | **System Prompt Leakage** | Sensitive data (credentials, business logic, operational details) in system prompts exposed through responses | Separate sensitive data from prompts; implement security controls outside the LLM (not prompt-dependent); privilege separation |
| **LLM08:2025** | **Vector and Embedding Weaknesses** | Weaknesses in RAG vector stores/embedding pipelines enabling injection, manipulation, unauthorized data retrieval | Fine-grained access controls per user/role; data validation and source authentication; monitor and log retrieval queries |
| **LLM09:2025** | **Misinformation** | LLM-generated false/misleading content appearing credible; overreliance on AI outputs | RAG grounding; fine-tuning with specific datasets; cross-verification + human oversight; automated factual validation |
| **LLM10:2025** | **Unbounded Consumption** | Excessive inference resource usage — DoS, unexpected costs, model extraction via query flooding | Rate limiting and quotas; input validation; resource throttling and timeouts; anomaly detection; watermarking; graceful degradation |

### 2023 vs. 2025 Changes

Material changes reflect two years of real-world LLM deployment experience:

**New in 2025 (not in 2023):**

- **LLM07: System Prompt Leakage** — Over 30 documented cases in 2024 exposed API keys and operational workflows through system prompt extraction
- **LLM08: Vector and Embedding Weaknesses** — Added because 53% of companies rely on RAG rather than fine-tuning, creating a new attack surface not present in 2023

**Renamed/Expanded:**

- *Insecure Output Handling* → **Improper Output Handling** (LLM05) — broader scope beyond injection to all downstream output vulnerabilities
- *Training Data Poisoning* → **Data and Model Poisoning** (LLM04) — now includes model-level threats
- *Denial of Service* → **Unbounded Consumption** (LLM10) — expanded to include cost exploitation and model extraction, not just availability attacks

**Removed from 2023:**

- *Insecure Plugin Design* — absorbed into LLM06 Excessive Agency (reflects maturation from plugin era to agentic framework era)
- *Overreliance* — integrated into LLM09 Misinformation
- *Model Theft* — incorporated into LLM10 Unbounded Consumption

**Priority shifts reflecting real-world data:**

- Sensitive Information Disclosure moved from #6 (2023) to #2 (2025) — breach data shows this is the highest-frequency realized harm
- Supply Chain moved to #3 — driven by increased reliance on third-party models, datasets, and APIs

### OWASP Top 10 for Agentic Applications (December 9, 2025)

Released December 9, 2025 by OWASP GenAI Security Project. 600+ contributors. Version 2026.1. First industry-standard framework specifically for autonomous AI agent security.
[Source: OWASP GenAI Security Project press release, December 2025](https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/)

| ID | Risk | Severity |
|---|---|---|
| ASI01 | Agent Goal Hijack — attacker redirects agent objectives via manipulated instructions or tool outputs | Critical |
| ASI02 | Tool Misuse & Exploitation — prompt injection or misalignment causes agents to misuse legitimate tools | Critical |
| ASI03 | Identity & Privilege Abuse — exploiting inherited credentials, delegated permissions, or agent-to-agent trust | Critical |
| ASI04 | Supply Chain Vulnerabilities — malicious/tampered tools, descriptors, models, agent personas | High |
| ASI05 | Unexpected Code Execution — agents generate or execute attacker-controlled code without validation | Critical |
| ASI06 | Memory & Context Poisoning — persistent corruption of agent memory, RAG stores, contextual knowledge | High |
| ASI07 | Insecure Inter-Agent Communication — spoofing or intercepting messages between agents | High |
| ASI08 | Cascading Failures — single agent error propagates through connected agents | Medium |
| ASI09 | Human-Agent Trust Exploitation — AI generates confident explanations that mislead human operators | Medium |
| ASI10 | (not fully listed in available press release content) | — |

### Financial Services Relevance

LLM02 (Sensitive Information Disclosure) is the highest-priority risk for FS given the volume of PII, account data, transaction records, and regulatory correspondence processed by AI systems. LLM06 (Excessive Agency) is critical for any agentic deployment with write access to financial systems. LLM08 (Vector/Embedding Weaknesses) applies to any RAG-based deployment over internal knowledge bases containing NPI or confidential client data.
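For the LLM08 exposure just described, the mitigation OWASP lists first (fine-grained access controls per user/role) amounts to filtering retrieved chunks in the pipeline rather than trusting the model to withhold restricted text. A minimal sketch, with all names hypothetical rather than drawn from any specific vector-store API:

```python
from dataclasses import dataclass

# Sketch of the LLM08 mitigation "fine-grained access controls per user/role":
# each chunk carries an ACL assigned at ingestion, and enforcement happens in
# the retrieval pipeline, never by instructing the model to withhold text.
@dataclass(frozen=True)
class Chunk:
    text: str
    source: str
    allowed_roles: frozenset[str]   # ACL attached at ingestion time

def retrieve_for_user(hits: list[Chunk], user_roles: set[str]) -> list[Chunk]:
    """Drop any hit the caller's roles do not authorize, and emit a denial
    record so retrieval queries can be monitored (another LLM08 mitigation)."""
    permitted = []
    for chunk in hits:
        if user_roles & chunk.allowed_roles:
            permitted.append(chunk)
        else:
            print(f"DENIED retrieval of {chunk.source} for roles {sorted(user_roles)}")
    return permitted

hits = [
    Chunk("Q3 client account balances...", "npi/accounts.md", frozenset({"wealth-ops"})),
    Chunk("Travel expense policy...", "policies/travel.md", frozenset({"all-staff"})),
]
visible = retrieve_for_user(hits, {"all-staff"})
# Only the policy chunk reaches the prompt; the NPI chunk is filtered out
# before the model ever sees it.
```

The design point is where enforcement lives: outside the model, in deterministic code, which is the same principle LLM07's mitigations apply to system prompts.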
FINRA's 2025 Annual Regulatory Oversight Report specifically flags LLM-enabled fraud risks (synthetic identity creation, deepfake account takeovers) that map directly to LLM01 and LLM09.

---

## 3. EU AI Act — Security Provisions

*Note: General EU AI Act coverage (scope, risk categories, timelines) is in global-non-us-landscape.md. This section covers only cybersecurity and security-specific obligations.*

### Current Status

The EU AI Act (Regulation (EU) 2024/1689) entered into force August 1, 2024. Application timeline from the [EU Commission official summary](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai):

| Date | Event |
|---|---|
| Feb 2, 2025 | Prohibited AI practices and AI literacy obligations active |
| Aug 2, 2025 | GPAI model obligations (Articles 53-56) became applicable; enforcement framework operational |
| 2026 (ongoing) | High-risk AI system obligations phased enforcement |
| Aug 2, 2026 | Full applicability date for most provisions |
| Aug 2, 2027 | Extended transition period ends for high-risk AI embedded in regulated products |

### Key Provisions — Article 15: Cybersecurity for High-Risk AI Systems

[Article 15](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689) is the primary cybersecurity requirement for high-risk AI systems:

**Article 15(1):** High-risk AI systems shall be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, performing consistently throughout their lifecycle.

**Article 15(4) — Robustness:** Systems shall be resilient against errors, faults, or inconsistencies. Technical redundancy solutions may include backup and fail-safe plans. Continuous-learning systems shall address feedback loops that could propagate biased outputs.

**Article 15(5) — Cybersecurity mandate (full text):** High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities.
Technical solutions shall be appropriate to the relevant circumstances and risks. Technical solutions addressing AI-specific vulnerabilities shall include, where appropriate, measures to:

- **Prevent, detect, respond to, resolve and control for attacks trying to:**
  - Manipulate the training data set (data poisoning)
  - Manipulate pre-trained components used in training (model poisoning)
  - Use inputs designed to cause the AI model to make a mistake (adversarial examples / model evasion)
  - Conduct confidentiality attacks
  - Exploit model flaws

This is the only place EU law explicitly names AI-specific attack vectors. The enumeration tracks closely with the NIST AI 100-2 adversarial ML taxonomy and MITRE ATLAS technique categories, enabling cross-framework mapping.

### Key Provisions — Articles 53-56: GPAI and Systemic Risk

**Article 55(1)** — Providers of GPAI models with systemic risk (threshold: >10²⁵ FLOPs training compute) must:

- **(a)** Perform model evaluation per standardized protocols, including adversarial testing to identify and mitigate systemic risks
- **(b)** Assess and mitigate systemic risks at Union level, including from development, market placement, or use
- **(c)** Track, document, and report without undue delay to the AI Office about serious incidents and corrective measures
- **(d)** Ensure adequate cybersecurity protection for the GPAI model and its physical infrastructure

Serious incident reporting: within 15 business days to the national market surveillance authority. Contrast with NIS2's 24-hour initial notification requirement — parallel obligations apply where both regimes are triggered.

**Article 56 — Codes of Practice:** Were to be finalized by May 2, 2025. If not finalized or deemed inadequate by August 2, 2025, the Commission may issue implementing acts with common rules. The AI Office has published draft guidelines (July 2025) clarifying key GPAI provisions.
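The parallel clocks noted above (AI Act serious-incident reporting in 15 business days, NIS2 initial notification in 24 hours) can be sketched as a deadline calculation from a single incident timestamp. The figures come from the text; the weekend-only business-day calendar, with no public-holiday handling, is a simplifying assumption.

```python
from datetime import datetime, timedelta

# One incident, two regimes, two clocks. The 15-business-day and 24-hour
# figures are those cited above; skipping only weekends (no holiday calendar)
# is a simplifying assumption for illustration.
def add_business_days(start: datetime, days: int) -> datetime:
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:       # Monday-Friday count as business days
            days -= 1
    return current

incident = datetime(2026, 3, 2, 9, 0)            # a Monday morning
nis2_initial = incident + timedelta(hours=24)    # NIS2 initial-notification clock
ai_act_report = add_business_days(incident, 15)  # AI Act serious-incident clock
```

The gap between the two deadlines is the practical point: an incident response plan tuned only to the AI Act clock would miss the NIS2 notification by nearly three weeks.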
### Security Obligations by Risk Tier

| Risk Category | Security Obligations Active as of March 2026 |
|---|---|
| Prohibited AI (Art. 5) | Prohibited as of Feb 2025; no security obligations, just prohibition |
| GPAI — standard | Transparency, copyright, technical documentation (Art. 53). Active Aug 2025 |
| GPAI — systemic risk | All Art. 53 + adversarial testing, incident reporting, cybersecurity protection (Art. 55). Active Aug 2025 |
| High-risk — regulated products | Art. 15 cybersecurity requirements; transition period until Aug 2027 |
| High-risk — Annex III | Art. 15 requirements; phased enforcement through 2026 |

### Interaction with EU Cybersecurity Law

Three EU frameworks create overlapping — and in some cases conflicting — obligations for AI systems:

**NIS2 Directive** (effective Oct 2024 for member state implementation):

- Applies to operators of essential services and digital infrastructure (including banking and financial market infrastructure under Annex I and Annex II)
- Requires: risk management measures, 24-hour initial incident notification to national CSIRTs, supply chain security
- AI systems used by NIS2-regulated entities inherit NIS2 obligations regardless of AI Act risk tier
- Conflict point: NIS2 "significant incident" vs. AI Act "serious incident" definitions diverge; a ransomware attack on an AI-powered banking service triggers both regimes with different evidence requirements and different deadlines

**Cyber Resilience Act (CRA)** (in force Dec 2024, main obligations apply Dec 2027):

- Covers products with digital elements — hardware and software
- Secure-by-design requirements, mandatory vulnerability disclosure, CE marking requirement
- AI software components are products with digital elements and therefore in scope
- Complements Article 15 by requiring security architecture decisions to be made at design time, not patched post-deployment

**Digital Operational Resilience Act (DORA)** (applicable Jan 2025):

- Applies to financial entities (banks, investment firms, insurers, payment service providers) and their ICT third-party service providers
- AI systems used by financial entities are ICT systems under DORA; AI vendors to EU financial entities may qualify as critical ICT third-party providers
- DORA's ICT risk management, incident reporting, resilience testing, and third-party oversight requirements apply in parallel with AI Act obligations

Practical implication for FS firms: A high-risk AI system deployed by an EU bank may simultaneously trigger EU AI Act Article 15 (cybersecurity), DORA ICT risk management, and NIS2 incident reporting. Compliance programs must be designed for parallel compliance, not sequential.

---

## 4. SEC, FINRA, and Federal Banking Agency AI Guidance

### SEC — Current Status (March 2026)

The SEC under Chair Atkins (Trump administration, confirmed April 2025) has moved away from prescriptive AI rulemaking but has maintained and intensified enforcement of existing frameworks.

**No AI-specific rules currently on the books.** The Gensler-era proposed rule on predictive data analytics (would have broadly regulated AI tools used by investment advisers and broker-dealers) was withdrawn in 2025.
Per Chair Atkins: "investors can rely on our current principles-based rules to inform them of how AI impacts companies" and "we should resist the temptation to adopt prescriptive disclosure requirements for every 'new thing.'"

**Active enforcement vector: AI washing.** The SEC has rebranded its enforcement unit and is actively investigating AI-related fraud:

- **February 20, 2025:** SEC announced creation of the [Cyber and Emerging Technologies Unit (CETU)](https://www.sec.gov/newsroom/press-releases/2025-42), replacing the Crypto Assets and Cyber Unit. 30 fraud specialists and attorneys. Priority areas include: fraud using AI/ML, regulated entities' compliance with cybersecurity rules, and public issuer fraudulent disclosure relating to cybersecurity.
- **April 9, 2025:** First Trump-era AI enforcement action. [SEC v. Nate, Inc. / Albert Saniger](https://www.sec.gov/enforcement-litigation/litigation-releases/lr-26282) — SEC and DOJ filed parallel civil and criminal charges. Saniger raised $42M+ claiming Nate's app used AI for automated shopping; in reality, purchases were manually processed by overseas contract workers. Charged with violations of Securities Exchange Act §10(b), Rule 10b-5, and Securities Act §17(a). Parallel DOJ criminal indictment.

**2025 SEC Examination Priorities** (October 2024 release for FY2025): Explicitly called out AI use by investment advisers — examiners may conduct in-depth review of compliance policies/procedures and disclosures to investors related to AI integration in portfolio management, trading, marketing, and compliance.

**SEC Investor Advisory Committee (December 4, 2025):** Recommended the SEC require issuers to: (1) adopt a definition of "Artificial Intelligence," (2) disclose board AI oversight mechanisms, (3) report separately on AI deployment effects on operations and consumer-facing matters. Current SEC leadership has not acted on this recommendation.
**Prior enforcement actions establishing precedent:**

- *In re Global Predictions, Inc.* — unsubstantiated AI performance claims + failure to disclose conflicts
- *In re Delphia (USA) Inc.* — falsely claimed AI used client data to provide investing advantage

### FINRA — Current Status

**FINRA Regulatory Notice 24-09** (June 27, 2024): Authoritative primary source. Key holding: FINRA rules are technology-neutral and continue to apply when firms use GenAI/LLMs. Does not create new requirements. [Source: FINRA Regulatory Notice 24-09](https://www.finra.org/rules-guidance/notices/24-09)

**FINRA 2025 Annual Regulatory Oversight Report** (January 28, 2025): More operationally specific than RN 24-09. [Source: FINRA 2025 Annual Report PDF](https://www.finra.org/sites/default/files/2025-01/2025-annual-regulatory-oversight-report.pdf)

Specific rules FINRA applies to AI/GenAI use:

| Rule | AI Application |
|---|---|
| **Rule 3110 (Supervision)** | Firms must supervise GenAI use at enterprise AND individual level. AI doing a supervised function is part of the supervisory chain. "The AI did it" is not a defense. |
| **Rule 2210 (Communications with Public)** | AI-generated content (chatbots, marketing) must accurately describe AI and balance benefits with risks. Must be supervised and retained. |
| **Rule 3310 (AML)** | Ongoing monitoring must detect GenAI-enabled fraud — synthetic IDs, deepfakes used for account takeover. |
| **Reg S-P (SEC Rule)** | Cybersecurity obligations; 72-hour breach notification; oversight of service providers. |
| **Rules 17a-3/17a-4 (Books & Records)** | LLM prompts, outputs, and AI-assisted decisions are records subject to retention. |

**FINRA GenAI observations (2025 Report):** Firms are proceeding cautiously, primarily exploring vendor-supported GenAI for internal efficiency (document summarization, transaction validation, policy retrieval). FINRA is specifically watching emergence of AI agents capable of autonomous multi-system action.
Agent-specific concerns flagged: excessive autonomy without human validation, agents exceeding intended scope, limited auditability, improper handling of sensitive data.

**Practical FINRA examination expectations (per Baker Donelson analysis of 2026 developments):** Examiners will ask to see AI inventory, supervisory policies covering AI outputs, logging and retention of prompts/responses, model version tracking, and human review procedures for regulated workflows. [Source: Baker Donelson, January 2026](https://www.bakerdonelson.com/finras-genai-playbook-real-accountability-for-broker-dealers)

### OCC, FDIC, Federal Reserve

No AI-specific interagency statement has been issued as of March 2026. Each agency's AI engagement has been limited:

- **OCC FY2025 Bank Supervision Operating Plan** (October 2024): AI is not listed as a standalone supervisory priority. Cybersecurity remains a priority with emphasis on incident response, operational resilience, and third-party risk management — all of which encompass AI systems. [Source: OCC FY2025 Supervision Plan](https://www.occ.gov/news-issuances/news-releases/2024/nr-occ-2024-111a.pdf)
- **OCC Acting Comptroller (April 29, 2025):** Delivered remarks on "ethical and responsible" AI use in banking at the National Fair Housing Alliance's Responsible AI Symposium. Discussed Project REACh (innovation pathway). No supervisory guidance issued. [Source: OCC Press Release 2025-38](https://www.occ.treas.gov/news-issuances/news-releases/2025/nr-occ-2025-38.html)
- **OCC AI Research Solicitation** (October 7, 2024): Solicited academic research papers on AI in banking, with selected papers presented June 6, 2025. Signals OCC is in research mode, not yet guidance mode. [Source: OCC News Release 2024-115](https://www.occ.gov/news-issuances/news-releases/2024/nr-occ-2024-115.html)
- **Federal Reserve:** No standalone AI guidance.
  AI risk falls under existing model risk management guidance (SR 11-7), operational risk frameworks, and third-party risk management guidance (interagency guidance finalized June 2023). SR 11-7 on model risk management is the de facto standard — banks treat LLMs as "models" under SR 11-7, triggering full model validation, documentation, and ongoing monitoring requirements.
- **Interagency Third-Party Risk Management Guidance** (June 9, 2023): [Federal Register final rule](https://www.federalregister.gov/documents/2023/06/09/2023-12340/interagency-guidance-on-third-party-relationships-risk-management) — applies to AI vendors accessing bank systems; covers due diligence, contract protections, ongoing monitoring.

**Practical reality:** Federal bank examination of AI currently occurs through the lens of existing frameworks — model risk management (SR 11-7), operational risk, third-party risk — not AI-specific guidance. Examiners will flag AI deficiencies under these frameworks, not under a new "AI rule."

### NYDFS — Most Specific US FS Regulator on AI Cybersecurity

The New York Department of Financial Services has issued the most actionable cybersecurity-specific AI guidance of any US state or federal regulator.

**Industry Letter: Cybersecurity Risks Arising from Artificial Intelligence (October 16, 2024):** Addressed to executives and CISO-level personnel at all NYDFS-regulated entities (banking, insurance, financial services). Primary source: [DFS Industry Letter IL20241016](https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks)

**Key holding:** Does not impose new requirements. Maps existing 23 NYCRR Part 500 (NYDFS Cybersecurity Regulation) obligations to AI risk contexts. Obligates covered entities to apply their cybersecurity program to AI-specific risks.
Specific control requirements as applied to AI risk contexts:

| Requirement | Part 500 Citation | AI-Specific Application |
|---|---|---|
| Risk Assessments | §§ 500.2, 500.3, 500.9 | Must account for deepfakes, organization's AI use, TPSP AI technologies. Annual update + update upon material change. |
| TPSP / Vendor Management | § 500.11(a) | AI vendor due diligence before access to systems/NPI. Contracts must prohibit unauthorized data ingestion. TPSPs must notify of AI-related cybersecurity events. |
| Multi-Factor Authentication | § 500.12 | MFA for all Authorized Users accessing systems/NPI. Extended to ALL users (not just privileged) as of November 2025. |
| Access Controls | §§ 500.7(1),(2) | Least privilege; access review at least annually; prompt termination upon departure. |
| Annual Training | § 500.14(a)(3) | Must include AI risks, mitigation procedures, and response to AI-enhanced social engineering. Simulated deepfake phishing/vishing/video attacks. |
| Specialized Cybersecurity Personnel Training | § 500.10(a)(2) | AI threats and defensive AI uses. Train on securing AI systems if deploying or using AI via TPSPs. |
| Data Minimization | § 500.13(b) | Dispose of NPI no longer needed, including AI use cases. |
| Data Inventory | § 500.13(a) | Inventory all Information Systems, including AI-reliant systems. Deadline: November 1, 2025. |
| Incident Response Plans | § 500.16(a) | Must cover AI-related Cybersecurity Events, including AI-enabled attacks. |
| Senior Governing Body | § 500.4(d) | Board must understand cybersecurity including AI risks; exercise authority; review regular AI-inclusive reports. |

**NYDFS Insurance Circular Letter No. 7 (July 11, 2024):** Separate guidance for insurers using AI for underwriting and pricing. Requires: board-level AI oversight, AI risk governance framework, annual testing (pre-production and at least annually thereafter), documentation of testing methodologies, audit function engagement.
Does not alter existing law; applies existing insurance law obligations to AI contexts. [Source: DFS Insurance Circular Letter No. 7](https://www.dfs.ny.gov/industry-guidance/circular-letters/cl2024-07) --- ## 5. Emerging Standards for Agentic AI ### Landscape Assessment As of March 2026, no formal regulatory standard specifically governs AI agent security. The space is in active standard-development mode. The practical risk is significant: a [February 2026 CISO survey](https://www.akto.io/blog/state-of-agentic-ai-security-2025) found 69% of enterprises piloting or running production agent deployments, but 79% have no formal governance policy. Only 21% have end-to-end visibility. ### NIST AI Agent Standards Initiative (February 17, 2026) The most significant formal government action on agentic AI security. Launched February 17, 2026 by CAISI (Center for AI Standards and Innovation at NIST). [Source: NIST CAISI announcement](https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure) and [ANSI summary](https://www.ansi.org/standards-news/all-news/2-18-26-nist-launches-ai-agent-standards-initiative) **Three pillars:** 1. **Industry-led standards development** — CAISI facilitating technical standards; U.S. leadership in international standards bodies (ISO/IEC JTC 1/SC 42, IEEE), coordination with NSF 2. **Open-source protocol development** — Community-driven protocols for agent interoperability; ensuring governance frameworks span all platforms 3. **Security and identity research** — Agent authentication, identity infrastructure, authorization controls, security evaluation. 
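The third pillar's identity-and-authorization concerns can be made concrete with a small sketch — issuing a short-lived, task-scoped credential to a uniquely identified agent rather than reusing a persistent shared service account. All names and the token format here are hypothetical illustrations, not a NIST-specified mechanism:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative; a real deployment would use a KMS/HSM

def issue_agent_token(agent_id: str, scopes: list[str], ttl_s: int = 300) -> dict:
    """Issue a short-lived, task-scoped token bound to one agent identity
    (just-in-time access, not a persistent permission)."""
    claims = {"agent": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def authorize(token: dict, required_scope: str) -> bool:
    """Check signature, expiry, and least-privilege scope before a tool call."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    if time.time() >= token["claims"]["exp"]:
        return False
    return required_scope in token["claims"]["scopes"]

tok = issue_agent_token("invoice-agent-7f3a", scopes=["crm:read"])
print(authorize(tok, "crm:read"))    # True — within scope and TTL
print(authorize(tok, "crm:delete"))  # False — least privilege blocks it
```

Checking scope at every tool call enforces least privilege, and the short TTL approximates just-in-time access; each denied call would also be written to the audit trail in a fuller design.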
Early identified threats: prompt injection, data poisoning, excessive write access, interaction with untrusted internet resources.

**Active deliverables:**

- RFI on AI Agent Security Threats — closed March 9, 2026
- NCCoE Concept Paper: *Accelerating Adoption of Software and AI Agent Identity and Authorization* — public comment deadline April 2, 2026
- Sector-specific listening sessions on AI agent adoption barriers beginning April 2026
- Forthcoming: COSAiS (Control Overlays for Securing AI Systems) on SP 800-53 for GenAI, Predictive AI, and Agentic AI

NIST's published framing for agent security requirements: least-privilege and just-in-time access (not persistent permissions), unique agent identities separate from shared service accounts, full audit trails of prompts/context/tool-calls/approvals/actions, post-deployment behavioral monitoring beyond uptime metrics, and prompt injection treated as an architectural security control problem.

### CSA MAESTRO (February 2025)

Cloud Security Alliance published **MAESTRO** (Multi-Agent Environment, Security, Threat, Risk, and Outcome) as a threat modeling framework specifically for Agentic AI. [Source: CSA blog, February 6, 2025](https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro)

**Seven-layer architecture** decomposes agent systems for threat analysis:

1. Foundation Models (LLM capabilities)
2. Data Operations (data management, RAG stores)
3. Agent Frameworks (reasoning loops, tool dispatch, orchestration)
4. Deployment & Infrastructure (compute, networking, storage)
5. Evaluation & Observability (monitoring, behavioral baselines, anomaly detection)
6. Security & Compliance (guardrails, access controls — a vertical layer spanning the others)
7. Agent Ecosystem (multi-agent interactions, external services)

MAESTRO's key insight: threats don't exist in single layers — the most dangerous attack paths chain across layers. A prompt injection at Layer 1 can cascade through tool invocation (Layer 3) to data exfiltration across the agent ecosystem (Layer 7).
STRIDE alone misses these cross-layer threat chains. As of February 2026, CSA is actively evolving MAESTRO based on real-world implementation feedback. [Source: CSA MAESTRO real-world applications blog, February 2026](https://cloudsecurityalliance.org/blog/2026/02/11/applying-maestro-to-real-world-agentic-ai-threat-models-from-framework-to-ci-cd-pipeline) ### CSA AI Controls Matrix (AICM, July 2025) [AICM v1.0](https://cloudsecurityalliance.org/artifacts/ai-controls-matrix) released July 9, 2025, updated October 30, 2025. Most comprehensive vendor-neutral AI security control framework currently available. - **243 control objectives** across **18 security domains** - Domains include both AI-specific (Model Security, AI Supply Chain, Transparency & Accountability, Human Oversight & Control) and traditional-security-enhanced-for-AI (IAM with AI-specific privileged access, Data Security with training/inference data) - Five-pillar analysis: Control Type, Ownership/Accountability (across CSP/Model Provider/Orchestrated Service Provider/Application Provider/Customer layers), Architectural Relevance, Lifecycle Relevance, Threat Category - Nine threat categories addressed: model manipulation, data poisoning, sensitive data disclosure, model theft, service failures, insecure supply chains, insecure apps/plugins, denial of service, loss of governance/compliance - Maps to: ISO 42001:2023, NIST AI RMF 1.0, NIST AI 600-1, BSI AIC4, ISO 27001/27002 - **STAR for AI Level 1** (self-assessment program) launched October 23, 2025 — enables organizations to publish standardized AI CAIQ self-assessments to the STAR Registry - Freely available for download; forms the basis for upcoming third-party AI security certification ### ISO Standards No ISO standard specifically for AI agent security exists as of March 2026. 
Relevant existing standards: - **ISO/IEC 42001:2023** — AI management systems; first global AI governance standard; focus on organizational structures for risk, transparency, accountability. Being mapped to EU AI Act compliance workflows. Not agent-specific. - **ISO/IEC 23894:2023** — AI risk management guidance; integrates risk management across AI lifecycle. Being extended to cover agentic use cases. - **ISO/IEC JTC 1/SC 42** — Primary ISO committee for AI standardization. Actively developing new standards; CAISI is engaging this committee for AI agent standards. ### Enterprise Pre-Deployment Requirements for AI Agents Synthesized from NIST IR 8596, OWASP ASI, CSA AICM, CISA secure-by-design, and FINRA 2025 guidance — the following constitute the emerging professional consensus on minimum controls before deploying an AI agent with system access: **Identity and Access:** - Unique cryptographic identity for each agent (not shared service accounts) - Least-privilege access scoped to specific task, not role-persistent - Just-in-time access provisioning for sensitive operations - Human approval gates for high-impact actions (financial transactions, data deletion, external communications) - Short-lived tokens requiring regular re-authentication **Inventory and Governance:** - AI agent inventory documenting: purpose, tool access, data access, permissions, owner, environment - Classification by action risk level (read-only → recommendation → autonomous action → multi-agent orchestration) - Formal governance policy approved by senior management/board - Defined acceptable use boundaries and prohibited action categories **Monitoring and Auditability:** - Full audit logging: prompts received, context retrieved, tool calls made, approvals obtained, actions executed, rollback activity - SIEM integration for AI agent telemetry (aligns with NIST Cyber AI Profile Detect provisions) - Behavioral baselines and anomaly detection - Agent-to-agent communication monitoring (only 17% of 
enterprises currently do this per 2025 survey) **Security Testing:** - Pre-deployment red teaming using MITRE ATLAS and OWASP ASI Top 10 as attack scenarios - Prompt injection testing across all external input surfaces - Authorization boundary testing (can agent exceed defined scope?) - Supply chain security review of underlying model, tools, and data sources --- ## 6. MITRE ATLAS ### Current Status [MITRE ATLAS](https://atlas.mitre.org) (Adversarial Threat Landscape for Artificial-Intelligence Systems) — as of October 2025: - **15 tactics** - **66 techniques** - **46 sub-techniques** - **26 mitigations** - **33 real-world case studies** - **150+ organizations** engaged in ATLAS use Originally launched in 2020 (initially "Adversarial ML Threat Matrix") through MITRE-Microsoft collaboration. Has evolved into a community-driven framework. **October 2024 update:** Added GenAI attack mitigations. **October 2025 update:** Added **14 new techniques** for AI agents and generative AI, in collaboration with Zenity Labs, Microsoft, Intel, Verizon, CrowdStrike, and 10+ other organizations. 
[Source: NIST CSRC MITRE ATLAS Overview, September 2025](https://csrc.nist.gov/csrc/media/Presentations/2025/mitre-atlas/TuePM2.1-MITRE%20ATLAS%20Overview%20Sept%202025.pdf)

### Taxonomy

**The 15 tactics (adversarial objectives):**

| Tactic | Description |
|---|---|
| Reconnaissance | Gather information on AI system architecture, data sources, vulnerabilities |
| Resource Development | Acquire infrastructure/capabilities for AI attacks |
| Initial Access | Entry to AI environment via compromised APIs, phishing, software vulnerabilities |
| ML Model Access | Gain access to target ML model inference APIs or model artifacts |
| Execution | Run adversarial code or inputs against AI system |
| Persistence | Maintain foothold in AI systems across sessions (e.g., memory manipulation) |
| Defense Evasion | Bypass AI-based security; evade ML-based detection systems |
| Credential Access | Harvest authentication credentials, API keys from AI configurations/RAG stores |
| Discovery | Map AI architecture — data flows, model versions, sensitive data locations |
| Lateral Movement | Expand access through connected AI systems, agents, or downstream models |
| Collection | Harvest training datasets, model parameters, sensitive user data |
| Command and Control | Remotely manage compromised AI systems |
| Exfiltration | Extract data via AI agent tools, query flooding (model extraction), or output channels |
| Impact | Disrupt AI functionality, corrupt outputs, cause downstream operational failures |
| *(one further tactic)* | Agentic-specific addition per the October 2025 update |

**October 2025 new agentic techniques (selected):**

1. AI Agent Context Poisoning — manipulate agent's LLM context persistently
2. Memory Manipulation — alter long-term LLM memory across sessions
3. Thread Injection — inject malicious instructions into a specific conversation thread
4. Modify AI Agent Configuration — change configuration files for persistent malicious behavior
5.
RAG Credential Harvesting — use LLM to find credentials inadvertently ingested into RAG databases 6. Credentials from AI Agent Configuration — access API keys from agent's own config 7. Discover AI Agent Configuration — probe to find config files revealing tool/service access 8. Embedded Knowledge Discovery — identify authorized data sources and knowledge bases 9. Tool Definitions Discovery — identify callable tools/APIs for lateral movement/exfiltration planning 10. Activation Triggers — identify keywords/events that trigger automated agent workflows 11. Data from AI Services — collect sensitive info by querying centralized AI-enabled services 12. RAG Database Prompting — prompt AI to retrieve sensitive internal documents from RAG 13. AI Agent Tool Invocation — force agent to use authorized tools for unauthorized actions 14. Exfiltration via AI Agent Tool Invocation — encode sensitive data into tool parameters (e.g., email body, CRM field) to exfiltrate ### Relationship to MITRE ATT&CK ATLAS uses the same matrix structure as ATT&CK (tactics → techniques → sub-techniques) deliberately, enabling security teams to apply familiar workflows: | Dimension | ATT&CK | ATLAS | |---|---|---| | Target | Traditional IT/OT/cloud/mobile infrastructure | AI/ML systems and AI-enabled systems | | Threat types covered | Network intrusion, endpoint compromise, C2, exfiltration via traditional channels | Data poisoning, model evasion, model extraction, adversarial examples, AI-specific credential theft | | Unique concepts | Lateral movement via AD, living off the land | ML Model Access, Training Data Manipulation, Adversarial Input Generation | | Red team use | Purple team exercises against IT infrastructure | Red team exercises against AI/ML pipelines and deployed models | | Cross-reference | Some ATLAS techniques map to ATT&CK techniques (marked with & in SAFE-AI mapping) | — | | Defense counterpart | MITRE D3FEND | ATLAS Mitigations (26 as of Oct 2025) | ATLAS ≠ a replacement for 
ATT&CK. Organizations deploying AI within IT infrastructure need both: ATT&CK for the underlying infrastructure compromise path, ATLAS for the AI-specific attack surface. A supply chain attack that compromises an AI model's training data (ATLAS) may begin with a spearphishing email (ATT&CK Initial Access). **AI Incident Sharing Initiative** (launched October 2024): Community platform for sharing anonymized AI security incidents. Reports map to ATLAS techniques. Integrates with CVE and CWE AI Working Groups. Enables trend analysis across reported incidents. Estimated financial impact of AI supply chain attack vector: over $1 billion as of March 2024, per MITRE internal analysis presented at NIST CSRC. ### Real-World Case Studies (from ATLAS database, 33 total) **Cylance Malware Scanner Bypass:** - Adversaries studied ML-based scanner via conference presentations, patents, and API documentation - Deduced detection logic via black-box query analysis - Created malware with adversarial characteristics (universal bypass) that consistently evaded ML classifier - Appended bypass to multiple malicious files — effectively a one-size bypass for all malware variants - ATLAS mapping: Reconnaissance → ML Model Access → Defense Evasion (adversarial examples) **Model Distillation Attack (case study pattern):** - Adversaries query target model extensively at scale - Train a surrogate model on the return values (model stealing) - Surrogate model replicates target model capability without authorization - ATLAS mapping: ML Model Access → Collection → Exfiltration - Defense gap exploited: unconstrained API access, no anomaly detection on query volume ### Practical Utility Assessment ATLAS is most useful as a red team driver and threat modeling accelerant, not as a compliance checklist. 
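In that red-team-driver role, ATLAS technique names can seed concrete test plans. A minimal planning sketch — the feature-to-technique mapping below is an illustrative assumption, not an official ATLAS artifact, using technique names from the October 2025 agentic additions:

```python
# Map deployment features to ATLAS-style techniques worth exercising.
# Illustrative mapping only; consult atlas.mitre.org for the current matrix.
FEATURE_TECHNIQUES = {
    "rag": ["RAG Database Prompting", "RAG Credential Harvesting"],
    "tools": ["AI Agent Tool Invocation", "Exfiltration via AI Agent Tool Invocation"],
    "memory": ["Memory Manipulation", "AI Agent Context Poisoning"],
    "public_api": ["Discover AI Agent Configuration", "Tool Definitions Discovery"],
}

def red_team_plan(features: set[str]) -> list[str]:
    """Expand a deployment's feature set into an ordered, de-duplicated
    list of techniques for a pre-deployment red-team round."""
    plan: list[str] = []
    for feature in sorted(features):
        for technique in FEATURE_TECHNIQUES.get(feature, []):
            if technique not in plan:
                plan.append(technique)
    return plan

# An agent with RAG retrieval and tool access yields four test scenarios.
print(red_team_plan({"rag", "tools"}))
```

Each resulting technique name then becomes a test-case heading — attack preconditions, payloads, and expected detections — rather than a checkbox.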
**High utility:**

- Red team exercise design: map intended AI deployment to ATLAS tactics, design test cases for each relevant technique
- Detection rule development: use ATLAS technique descriptions to write monitoring rules and behavioral baselines
- Vendor security assessment: evaluate AI vendors against ATLAS mitigations to identify control gaps
- Post-incident analysis: map observed behavior to ATLAS techniques for structured reporting

**Limitations:**

- The 33 case studies cover real-world attacks documented through early 2025 — actual attack volume is much higher but goes unreported
- Framework lags deployment reality: agentic AI technique additions (Oct 2025) were developed as practitioners were already deploying agents in production
- No quantitative likelihood scoring — unlike ATT&CK's data-driven technique prevalence, ATLAS techniques don't carry frequency weightings
- Mitigation guidance is less developed than ATT&CK's; most mitigation detail points to NIST SP 800-53 controls and the ATLAS Playbook, not action-specific technical guidance

---

## Source Index

### Tier 1 — Government/Official Standards Publications

- **NIST AI RMF 1.0 (January 2023):** https://airc.nist.gov/Home
- **NIST AI 600-1 GenAI Profile (July 2024):** https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
- **NIST AI RMF Playbook (updated February 2025):** https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook
- **NIST IR 8596 Cyber AI Profile IPRD (December 16, 2025):** https://nvlpubs.nist.gov/nistpubs/ir/2025/NIST.IR.8596.iprd.pdf
- **NIST CAISI AI Agent Standards Initiative (February 17, 2026):** https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure
- **NIST AI Agent Standards Initiative hub:** https://www.nist.gov/caisi/ai-agent-standards-initiative
- **NIST Cybersecurity Framework 2.0 (February 2024):** https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf
- **MITRE ATLAS framework:** https://atlas.mitre.org
-
**MITRE ATLAS Overview at NIST CSRC (September 2025):** https://csrc.nist.gov/csrc/media/Presentations/2025/mitre-atlas/TuePM2.1-MITRE%20ATLAS%20Overview%20Sept%202025.pdf - **OWASP Top 10 for LLM Applications 2025 (PDF):** https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-v2025.pdf - **OWASP GenAI Security Project (includes LLM Top 10 and Agentic):** https://genai.owasp.org/llm-top-10/ - **OWASP Agentic AI Top 10 announcement (December 9, 2025):** https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/ - **EU AI Act (Regulation EU 2024/1689) — EUR-Lex official text:** https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 - **EU AI Act — Article 15 (cybersecurity):** https://artificialintelligenceact.eu/article/15/ - **EU AI Act — Article 16 (high-risk AI provider obligations):** https://artificialintelligenceact.eu/article/16/ - **EU AI Act — Application Timeline (European Commission):** https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai - **SEC — Cyber and Emerging Technologies Unit announcement (February 20, 2025):** https://www.sec.gov/newsroom/press-releases/2025-42 - **SEC — Nate, Inc. 
enforcement action (April 9, 2025):** https://www.sec.gov/enforcement-litigation/litigation-releases/lr-26282 - **SEC — AI at the SEC (2025 Compliance Plan):** https://www.sec.gov/ai - **FINRA Regulatory Notice 24-09 (June 27, 2024):** https://www.finra.org/rules-guidance/notices/24-09 - **FINRA 2025 Annual Regulatory Oversight Report (January 28, 2025):** https://www.finra.org/sites/default/files/2025-01/2025-annual-regulatory-oversight-report.pdf - **FINRA Key Challenges on AI in Securities Industry:** https://www.finra.org/rules-guidance/key-topics/fintech/report/artificial-intelligence-in-the-securities-industry/key-challenges - **OCC FY2025 Bank Supervision Operating Plan:** https://www.occ.gov/news-issuances/news-releases/2024/nr-occ-2024-111a.pdf - **OCC AI Research Solicitation (October 2024):** https://www.occ.gov/news-issuances/news-releases/2024/nr-occ-2024-115.html - **OCC Acting Comptroller AI Remarks (April 29, 2025):** https://www.occ.treas.gov/news-issuances/news-releases/2025/nr-occ-2025-38.html - **Interagency Third-Party Risk Management Guidance (June 2023):** https://www.federalregister.gov/documents/2023/06/09/2023-12340/interagency-guidance-on-third-party-relationships-risk-management - **NYDFS AI Cybersecurity Industry Letter (October 16, 2024):** https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks - **NYDFS Insurance Circular Letter No. 
7 (July 11, 2024):** https://www.dfs.ny.gov/industry-guidance/circular-letters/cl2024-07 - **US Treasury FS AI RMF announcement (February 19, 2026):** https://home.treasury.gov/news/press-releases/sb0401 - **Cyber Risk Institute FS AI RMF:** https://cyberriskinstitute.org/artificial-intelligence-risk-management/ - **ANSI summary of NIST AI Agent Standards Initiative:** https://www.ansi.org/standards-news/all-news/2-18-26-nist-launches-ai-agent-standards-initiative ### Tier 2 — Industry Bodies and Official Standards Organizations - **CSA MAESTRO Agentic AI Framework (February 6, 2025):** https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro - **CSA AI Controls Matrix (AICM, July 2025):** https://cloudsecurityalliance.org/artifacts/ai-controls-matrix - **CSA AICM STAR for AI program:** https://cloudsecurityalliance.org/blog/2025/08/20/announcing-the-ai-controls-matrix-and-iso-iec-42001-mapping-and-the-roadmap-to-star-for-ai-42001 - **CSA MAESTRO real-world application (February 2026):** https://cloudsecurityalliance.org/blog/2026/02/11/applying-maestro-to-real-world-agentic-ai-threat-models-from-framework-to-ci-cd-pipeline - **ISO AI Standards Summit 2025:** https://www.iso.org/aisummit - **SEC Investor Advisory Committee AI recommendations (December 7, 2025):** https://www.dandodiary.com/2025/12/articles/securities-laws/sec-investor-advisory-committee-recommends-ai-related-disclosure-guidelines/ ### Tier 3 — Analysis Sources with High Factual Reliability - **FINRA GenAI Playbook analysis (Baker Donelson, January 2026):** https://www.bakerdonelson.com/finras-genai-playbook-real-accountability-for-broker-dealers - **NYDFS AI Cybersecurity Guidance analysis (White & Case, November 2024):** https://www.whitecase.com/insight-alert/nydfs-releases-artificial-intelligence-cybersecurity-guidance-covered-entities - **SEC AI washing enforcement (Holland & Knight, July 2025):** 
https://www.hklaw.com/en/insights/publications/2025/07/sec-and-doj-warm-up-to-enforcement-over-ai-washing - **EU AI Act August 2025 GPAI deadlines (Cranium AI, February 2026):** https://cranium.ai/resources/blog/navigating-the-eu-ai-act-august-2025-deadline-gpai-compliance-penalties-and-enforcement/ - **NIS2 / EU AI Act interaction (ISMS.online, October 2025):** https://www.isms.online/nis-2/vs/eu-ai-act/ - **NIS2 / CRA relationship (Hyperproof, February 2026):** https://hyperproof.io/understanding-the-relationship-between-nis2-and-the-eu-cyber-resilience-act/ - **OWASP LLM 2023 vs. 2025 comparison (Mindgard, January 2025):** https://mindgard.ai/blog/how-the-owasp-top-10-risks-for-llms-evolved-from-2023-to-2025-lessons-and-implications - **MITRE ATLAS Framework 2026 Guide (Practical DevSecOps):** https://www.practical-devsecops.com/mitre-atlas-framework-guide-securing-ai-systems/ - **MITRE ATLAS statistics and case studies (Vectra AI, February 2026):** https://www.vectra.ai/topics/mitre-atlas - **NIST AI Agent Standards Initiative analysis (MetricStream, March 2026):** https://www.metricstream.com/blog/nists-ai-agent-standards-initiative.html - **SEC AI compliance and Form ADV guidance (Kitces, November 2025):** https://www.kitces.com/blog/artificial-intelligence-compliance-considerations-investment-advisers-sec-securities-exchange-commission-legal-regulation-framework/ - **State of Agentic AI Security 2025 survey (Akto, February 2026):** https://www.akto.io/blog/state-of-agentic-ai-security-2025 - **FS AI RMF analysis (JD Supra, March 2026):** https://www.jdsupra.com/legalnews/u-s-treasury-releases-ai-risk-3993293/