Mark43’s Approach to Responsible AI


At Mark43, our mission is to empower public safety agencies with best-in-class technology so they can work faster and smarter while keeping their communities and responders safer. We view artificial intelligence as a powerful tool to support this mission, but only when it is designed and implemented thoughtfully and responsibly. In the demanding world of public safety, we believe AI must support human judgment rather than replace it, and must never compromise oversight, transparency, security, or ethical use. Mark43’s Approach to Responsible AI reflects this commitment and guides how we design, build, deploy, and govern AI capabilities across the platform.

Human-First by Design

Mark43 designs AI to support human judgment, not replace it.

Public safety work depends on human experience, intuition, empathy, and accountability, and these qualities must remain central to decision-making. Mark43 builds AI that reduces administrative and cognitive load while improving consistency, ensuring that public safety professionals maintain control through role-based permissions and human review of AI-generated outputs (such as drafts, suggestions, validations, and search results).

This human-first approach is reinforced through close partnerships with public safety agencies. Mark43 works alongside agency leaders and frontline professionals to ensure AI tools reflect real operational needs, policy requirements, and accountability standards, and align with how public safety work is performed. Structured feedback loops, including pilot programs, early adopter and evaluation programs, validation checks, and ongoing customer feedback, ensure AI systems continue to evolve alongside changing operational demands, staffing realities, and regulatory expectations.

Built into Existing Public Safety Workflows

Responsible AI must be built into the workflows public safety professionals rely on daily to ensure accountability, adoption, and governance.

Mark43 designs AI capabilities to align with existing public safety workflows by embedding them directly into the Mark43 platform. Rather than introducing separate tools or standalone systems, AI is built into the processes agencies already use for dispatch, reporting, case management, and review. This approach delivers the greatest benefit to public safety operations by reducing friction, improving adoption, and ensuring AI is applied in context with the data, policies, and workflows that govern agency work.

Embedding AI within the core platform also strengthens governance and accountability. By operating within established systems, AI use can be managed through existing role-based permissions, audit trails, supervisory controls, and compliance frameworks. This helps ensure AI is used consistently, responsibly, and in alignment with agency standards, while maintaining visibility, oversight, and trust across the organization.


“Mark43 is at the forefront of innovation and is embracing AI responsibly to bring us a solution that democratizes analytics and increases our efficiency and effectiveness in providing essential data for decision-making.”

Jessica Nezat, Director of Analytics, New Orleans Police Department

Transparency and Validation

Trust in AI depends on transparency and validation.

Mark43 designs AI systems to be transparent and auditable. Users can see what AI generates, understand the source data behind it, and validate outputs before use. AI-assisted content is not hidden or opaque, and outputs link back to underlying records for verification.

When information is incomplete, AI flags gaps rather than making assumptions. This approach supports supervisory review, legal defensibility, and organizational accountability, all of which are critical requirements in public safety environments.

Security, Privacy, Compliance, and Ethics

Responsible AI must meet the highest standards for data protection, governance, and oversight.

Mark43 builds AI on our secure, CJIS-aligned, cloud-native infrastructure and follows rigorous security, privacy, and compliance practices. Customer data remains customer-owned, and Mark43 does not train AI models on agency data unless explicitly agreed. The underlying AI services used within the platform are also designed to ensure customer data is not used to train foundation models. AI capabilities are developed and deployed within a security posture that has undergone extensive vetting as part of the broader Mark43 platform.

Mark43 also ensures transparency and accountability through auditability. When AI-generated content is used, agencies can review how that content was incorporated within existing report history and records, supporting internal review, supervision, and compliance requirements.

Beyond technical safeguards, Mark43 takes an ethical approach to AI deployment by embedding human oversight, enforcing appropriate use through system guardrails, and ensuring AI strengthens public trust rather than undermining it. Responsible AI is not a feature. It is a foundational design principle.


“AI is one of the most powerful tools shaping the future of public safety. At Mark43, we are intentional about how we build and apply AI, always focused on enhancing rather than replacing human judgment. Transparency and trust are central to our approach. Our goal is to help agencies use AI responsibly to increase efficiency, accuracy, and community confidence, ensuring technology truly serves those who serve their communities.”

Wendy Gilbert, Senior Vice President of Product, Mark43