Evaluating AI in Public Safety: Security & Governance Considerations Agencies Can’t Ignore

In public safety, technology decisions carry real-world consequences. AI is no exception. 

Agencies are actively exploring how AI can improve efficiency and outcomes, from report writing to investigative support. However, unlike traditional public safety systems such as RMS or CAD, these tools are often evaluated through a different lens. 

That’s where risk starts to creep in.  

It’s essential to consider both how AI is securely delivered within a system and how it is governed across the organization. This dual perspective reinforces a simple principle: AI adoption isn’t just about capability. It’s about control. 

This is the first in a three-part series from Mark43’s Governance, Risk and Compliance team on evaluating AI in public safety from a security and compliance perspective. We start with the most foundational question: how do these systems handle and protect mission-critical data? 

AI Isn’t Just Another Vendor Evaluation 

When agencies evaluate core systems, they ask the right questions: Where is the data stored? Who has access? How is it protected? 

Those same questions still apply to AI – but they don’t go far enough.  

AI introduces a set of considerations that traditional vendor evaluations weren’t built to address:  

  • Where is data processed in real time?  
  • Is it retained or used to train models?  
  • Can outputs be audited, explained, and defended in court or before city council? 

If a vendor can’t answer these questions clearly, you aren’t evaluating a solution. You are accepting risk you can’t fully understand or control. 

The Five Areas That Define AI Security in Public Safety 

Evaluating AI in public safety ultimately comes down to five key areas: 

  • Data Control – Where does agency data go? How is it handled? Is it retained or reused? 
  • Access Enforcement – Who can access what, and how is that access governed? 
  • Auditability – Can actions, inputs, and outputs be traced and defended? 
  • Vendor & Supply Chain Risk – Who is in the data path, including model providers, cloud infrastructure, and subprocessors? 
  • Operational Risk – What is the real-world impact of incorrect or misleading outputs? 

What Agencies Should Be Looking For 

  1. Data Exposure & Regulatory Implications 

AI systems often rely on external services or APIs. That means data may leave your controlled environment, even if only temporarily. For agencies handling sensitive law enforcement and personal data, this raises immediate concerns: 

  • Is data encrypted end-to-end? 
  • Is it processed within authorized jurisdictions? 
  • Is it retained, cached, or used for model training? 

If a vendor cannot clearly explain data handling, that’s a problem, not a detail. 

Leading AI implementations in public safety environments take a more controlled approach: keeping customer data within the platform boundary, avoiding model training on that data, and enforcing strict access, encryption, and contractual controls. 
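
To make that concrete, here is a minimal sketch of a pre-flight policy check, assuming a simple policy object. Every name here (DataHandlingPolicy, check_endpoint, the region string) is hypothetical and for illustration only; the point is that agency data is refused passage to any endpoint that cannot satisfy the agency’s documented handling requirements.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataHandlingPolicy:
    allowed_regions: frozenset[str]  # jurisdictions authorized to process agency data
    allow_model_training: bool       # should be False for agency data
    allow_retention: bool            # should be False: no caching outside the boundary

# Hypothetical agency policy; "us-gov-east-1" is an invented region name.
AGENCY_POLICY = DataHandlingPolicy(
    allowed_regions=frozenset({"us-gov-east-1"}),
    allow_model_training=False,
    allow_retention=False,
)

def check_endpoint(region: str, trains_on_data: bool, retains_data: bool,
                   policy: DataHandlingPolicy) -> None:
    """Refuse to send agency data to any endpoint that violates policy."""
    if region not in policy.allowed_regions:
        raise PermissionError(f"Region {region!r} is not an authorized jurisdiction")
    if trains_on_data and not policy.allow_model_training:
        raise PermissionError("Endpoint trains models on submitted data")
    if retains_data and not policy.allow_retention:
        raise PermissionError("Endpoint retains or caches submitted data")
```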

  2. Access Control Still Matters 

AI should not create a back door into your data. If a user doesn’t have access to a record, they should not be able to retrieve it through an AI prompt. Effective implementations enforce access at the user level, ensuring data visibility is limited to what each individual is authorized to see. Access should be tied to authenticated user identity, not shared credentials, so that all actions remain attributable and auditable. 

Look for AI tools that offer: 

  • Integration with single sign-on (SSO) and multi-factor authentication (MFA) 
  • Enforcement of role-based access control 
  • Clear tenant isolation 

AI should reinforce your access model, not bypass it. 
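
As a rough sketch of that principle, the example below enforces authorization before the model sees any context. The names (Record, call_model, answer_with_ai) are hypothetical rather than an actual platform API; what matters is the ordering: access is checked first, so a prompt can never surface a record the user couldn’t open directly.

```python
from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    content: str
    required_role: str  # role a user must hold to view this record

def call_model(prompt: str, context: str, acting_user: str) -> str:
    """Stand-in for the platform's model invocation."""
    return f"[answer for {acting_user} using {len(context)} chars of context]"

def answer_with_ai(user_id: str, user_roles: set[str], prompt: str,
                   candidates: list[Record]) -> str:
    # Enforce access BEFORE the model sees anything: the context can only
    # contain records this specific, authenticated user may already view.
    allowed = [r for r in candidates if r.required_role in user_roles]
    context = "\n".join(r.content for r in allowed)
    return call_model(prompt, context, acting_user=user_id)
```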

  3. Logging, Monitoring, and Auditability 

In public safety, “we have logs” isn’t enough. You need logs that hold up under scrutiny and provide meaningful insights. 

For AI, that means: 

  • Capturing prompts and outputs 
  • Attributing activity to individual users 
  • Maintaining traceability of data sources 

If you can’t reconstruct how an AI output was generated, you can’t defend it. 
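
A minimal sketch of what one such log entry might capture, assuming a simple append-only store (the field names are illustrative, not a specific product’s schema):

```python
import json
from datetime import datetime, timezone

AUDIT_LOG: list[str] = []  # placeholder for a write-once, tamper-evident log store

def log_ai_interaction(user_id: str, prompt: str, output: str,
                       source_record_ids: list[str], model_version: str) -> None:
    """Record enough detail to reconstruct how an AI output was produced."""
    AUDIT_LOG.append(json.dumps({
        "user_id": user_id,                       # authenticated individual, not a shared login
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,                         # exactly what was asked
        "output": output,                         # exactly what came back
        "source_record_ids": source_record_ids,   # data the model drew on (traceability)
        "model_version": model_version,           # which model produced the output
    }))
```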

  4. Vendor & Supply Chain Risk 

Most AI solutions are not standalone platforms. They rely on third-party model providers, cloud infrastructure, and additional subprocessors. 

Each additional provider or infrastructure layer expands your risk surface, increasing the number of entities that can access, process, or impact your data. 

Agencies should understand: 

  • Who is processing their data? 
  • What controls exist across that ecosystem? 
  • How is risk managed end-to-end? 
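
One lightweight way to keep those questions visible is an explicit inventory of every party in the data path, flagging any that touch agency data without the controls the agency requires. The sketch below is hypothetical; the vendor names and control labels are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subprocessor:
    name: str
    role: str                           # e.g. "model provider", "cloud host"
    sees_agency_data: bool
    attested_controls: tuple[str, ...]  # e.g. ("CJIS addendum", "SOC 2 Type II")

def unmitigated(entities: list[Subprocessor], required: set[str]) -> list[Subprocessor]:
    """Entities that touch agency data but lack a required control."""
    return [e for e in entities
            if e.sees_agency_data and not required.issubset(e.attested_controls)]

# Invented data path for illustration only.
DATA_PATH = [
    Subprocessor("ModelCo", "model provider", True, ("SOC 2 Type II",)),
    Subprocessor("CloudCo", "cloud host", True, ("SOC 2 Type II", "CJIS addendum")),
]
print(unmitigated(DATA_PATH, {"CJIS addendum"}))  # flags ModelCo
```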

  5. Operational Risk, Not Just Security Risk 

AI failure isn’t just about breaches. It’s about: 

  • Incorrect report narratives 
  • Misinterpreted data 
  • Overreliance on generated outputs 

In public safety, those aren’t technical issues; they’re operational and legal risks. This is why a human-in-the-loop approach is critical: AI supports decision-making, but trained personnel remain responsible for validating outputs, maintaining oversight, and taking final action. 
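
A minimal sketch of that gate, with invented names (DraftStatus, file_report); the mechanism, not the naming, is the point: an AI-generated narrative simply cannot be filed until a human has signed off.

```python
from enum import Enum

class DraftStatus(Enum):
    AI_DRAFT = "ai_draft"          # generated, not yet validated
    OFFICER_APPROVED = "approved"  # reviewed and signed off by a trained human

def file_report(narrative: str) -> str:
    """Stand-in for the records-system write."""
    return f"filed ({len(narrative)} chars)"

def submit_report(narrative: str, status: DraftStatus) -> str:
    # The model may draft, but only a human-approved narrative is ever filed.
    if status is not DraftStatus.OFFICER_APPROVED:
        raise PermissionError("AI draft requires officer review before filing")
    return file_report(narrative)
```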

What Best-in-Class Secure and Compliant AI Looks Like 

The strongest AI implementations in public safety share a common architecture. Regardless of vendor or platform, agencies should look for solutions that meet the following standard: 

  • Data stays within the platform boundary and is never routed through external services that move it outside the controlled environment 
  • Agency data is never retained or used to train AI models without explicit agreement 
  • All data is processed under strict encryption, access, and jurisdictional controls consistent with CJIS requirements, UK data protection laws, and other regulatory frameworks 
  • AI outputs respect the same role-based access controls that govern the rest of the platform — no user should be able to retrieve information through AI that they couldn’t access directly 
  • Every AI interaction is fully logged and attributable to individual authenticated users, supporting transparency and auditability 
  • AI permissions are configurable at a granular level, so agencies can control exactly where and how AI is applied across different incident types and workflows 
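
To illustrate the last bullet, here is one possible shape for such a configuration. The workflow and incident-type names are invented for the example; real platforms will expose this differently.

```python
# Hypothetical per-workflow, per-incident-type AI permission map: the agency
# decides exactly where AI assistance is available, not the vendor.
AI_PERMISSIONS = {
    "report_writing": {
        "traffic_incident": {"ai_drafting": True,  "requires_review": True},
        "homicide":         {"ai_drafting": False, "requires_review": True},
    },
}

def ai_allowed(workflow: str, incident_type: str, capability: str) -> bool:
    """Default-deny: AI runs only where the agency has explicitly enabled it."""
    return AI_PERMISSIONS.get(workflow, {}).get(incident_type, {}).get(capability, False)

assert ai_allowed("report_writing", "traffic_incident", "ai_drafting")
assert not ai_allowed("report_writing", "homicide", "ai_drafting")
```

The default-deny lookup is the design choice worth copying: anywhere the agency has not explicitly enabled AI, it is off.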

If a solution can’t meet these criteria, it isn’t ready for a public safety environment. 

AI adoption isn’t just about capability. It’s about control. Agencies that evaluate AI with the same rigor they apply to any mission-critical system — defined security requirements, clear governance, and verifiable controls — will be in a far stronger position than those that don’t. 

The Bottom Line 

Evaluating AI is only the first step. The real challenge is ensuring it is used in a way that is secure, compliant, and operationally sound over time.  

In our next post, we’ll explore how public safety agencies can build a practical AI governance program, covering data controls, acceptable use policies, oversight mechanisms, and how to integrate AI into existing security and compliance frameworks. 

Is your agency exploring AI-enabled capabilities but unsure how to implement them securely? Mark43 builds AI into the Public Safety Platform with a human-first, secure, and compliant approach, combining strong data protection, controlled architecture, and responsible AI safeguards. Explore our Trust Center to learn more and book a demo today.