Security and governance.
An AI that runs commands on production endpoints is only deployable if you can trust it. Here is how the trust model works.
Approval policies
Define what the AI engineer runs autonomously and what needs human sign-off. Configure per client, issue class, or risk tier.
- Low risk actions (restart a service, clear temp files) can auto-execute
- Higher risk actions require human sign-off, with single or multi-approver strategies
- Policies configurable per organization and per endpoint group
AI safety guardrails
Every proposed action passes through layered guardrails before it touches an endpoint.
- Agents can only invoke allow-listed actions. The allow-list is checked at execution time, not just at proposal, so a hallucinated command that names something off-list cannot run
- Three-tier risk classification (low, medium, critical) plus a fourth blocked tier. Each tier maps to a different execution path: auto-run under policy, route for approval, or refuse outright
- The blocked tier covers actions that cannot be safely executed in any context: filesystem destruction (rm -rf /, drive formatting), boot sector overwrites, registry hive deletion, BitLocker disable, fork bombs, and evidence destruction. These cannot be run and cannot be approved, by anyone, including admins
- Concrete examples of the medium tier: changing Windows Defender configuration, modifying HKLM registry keys, disabling a network interface, modifying firewall rules (iptables, pfctl, netsh advfirewall), restarting or shutting down a system, changing user passwords, modifying cron schedules, sudo package install/remove, and accessing sensitive files (shadow, sudoers, SSH keys). These never auto-execute regardless of confidence: they pause for the configured human approver
- Text arriving from outside the operator's control (ticket bodies, email content, attachments, webhook payloads, endpoint output) is isolated from the instruction layer of every AI prompt before the model sees it, so an attacker-crafted ticket or a malicious log line cannot override an operator's intent
- Risky reversible actions (privilege escalation, force-kill of system processes, network configuration changes, package installation, scheduled task changes) are routed for human approval through your approval policy. The AI proposes with full context, a designated approver signs off
- Pattern chaining detection: multiple low-risk patterns combined (download then decode then execute across separate lines) elevate the overall risk score even when no individual pattern is critical on its own
- Sensitive values (emails, phone numbers, card numbers, API keys, bearer tokens) are redacted before any prompt is sent to an LLM provider
- On LLM provider outage, in-flight investigations pause cleanly, new tickets queue, and operators see degraded mode in the dashboard. No partially-executed remediations
- Every command is logged before and after execution. The audit trail traces a bad decision to the exact endpoint, command text, output, risk classification, and approver, so root-cause analysis is fast. Reversal of side effects is not automatic: rollback is the operator's call, informed by the logged before-state
Full audit trail
Diagnostic runs, policy decisions, and approvals are all written down. Exportable for compliance reviews.
- Diagnostic commands and outputs recorded verbatim
- Approval decisions with timestamp and approver identity
- Post-remediation verification results
- Audit records are append-only: the application offers no path to modify or delete them
Bring your own model
When privacy or compliance requires that AI traffic never leave your environment, point GenticFlow at your own model. Tokens stay in your infrastructure, prompts never reach a third-party LLM provider, and the audit trail logs which model handled which command.
- OpenAI, Anthropic Claude, Google Gemini, any OpenAI API-compatible endpoint, or a self-hosted runtime via Ollama
- Per-tenant model selection: different clients or workloads can use different models
- Air-gapped deployments supported via self-hosted Ollama or any OpenAI-compatible runtime
- Bundled AI can be fully disabled at the policy level when BYOM is required
Platform foundations.
The governance story above rests on a platform built with the same discipline.
Identity and access
TOTP and FIDO2 passkeys protect user accounts. The client portal supports SSO via OpenID Connect. New users are added by signed, time-limited invitation, not shared credentials.
Tenant isolation
Each tenant has its own database credentials and its own object-storage credentials. Crossing the boundary between two tenants requires crossing a credential boundary, not just a code-level filter.
Encryption
TLS on every connection, with HSTS preload so browsers cannot downgrade to HTTP. Sensitive fields are encrypted at the application layer, with per-tenant key separation.
Where your data lives
Default deployment is in Frankfurt, Germany on a SOC 2 Type II certified hosting provider. Data does not cross regions in normal operation. Customers with specific residency requirements can request a regional deployment as part of their plan.
Your data, your exit
Audit trail, ticket history, and configuration are exportable in machine-readable formats via the platform and the API. After cancellation, customer data is retained for 30 days to support reactivation, then permanently deleted, with a deletion confirmation available on request.
Signed binaries
Windows agents are signed with an Extended Validation Authenticode certificate. macOS agents are code-signed and notarised by Apple. You can verify the signature chain on every endpoint you install.
Operational commitments.
How we treat your data, how we respond when something goes wrong, and how we work with the security community.
Your data is not training data
GenticFlow does not use customer data to train AI models. The bundled AI uses OpenAI's commercial API under terms that prohibit training on data submitted to the API. For stricter requirements, the Bring your own model pillar above keeps prompts and tokens in your environment.
Incident response
If a security incident affects your data, you are notified within 72 hours of confirmation, in line with GDPR Article 33. Notifications include scope, suspected cause, and immediate mitigation. A written post-incident summary follows once root cause analysis is complete.
Vulnerability disclosure
Security researchers and customers can report suspected vulnerabilities to [email protected]. We acknowledge reports within 24 hours, work toward a fix on a 90-day disclosure timeline, and credit researchers who request it. Coordinated disclosure preferred.
For your security review.
Detailed security documentation is available under NDA as part of customer security reviews and vendor risk assessments.
Available on request:
- Security architecture and data-flow diagrams
- Information security policy and procedure summaries
- Subprocessor inventory and data processing agreement
- Incident response and breach notification processes
- Vulnerability management and patching cadence
- Vendor risk questionnaires (CAIQ, SIG Lite) on request
To start a security review, request vendor documentation, or report a vulnerability, write to [email protected]. We will respond with an NDA and the relevant documentation package.