Einstein monotile background

Compliance

Security risk mitigation

We believe in security by design. Discover the Unless mitigation strategies to keep your data safe.

my image Some great alternative text

Unless adopted a layered security approach, combining robust validation processes, strict access controls, continuous monitoring, and educating users and developers about potential vulnerabilities. By focusing on least-privilege access, consistent data sanitization, and secure plugin designs, we enhance the resilience of our application, creating a safer, more reliable platform for users and organizations.

LLM security risk mitigation

OWASP Full Logo R Black

Below are the top ten risks for LLM applications as composed by OWASP, with a brief description of each and mitigation strategies that we apply at Unless.

  1. Prompt Injection
    Attackers may manipulate the LLM through crafted inputs, potentially leading to data exfiltration, social engineering, or backend system exposure.

    Mitigation: We enforce privilege control, validate and cleanse untrusted content, and limit backend access to only necessary API tokens.

  2. Insecure Output Handling
    This may occur when the outputs generated by an LLM are passed downstream without proper validation, leading to vulnerabilities like XSS or privilege escalation.

    Mitigation: We treat LLM responses with zero-trust, implement strict validation, and sanitize outputs.

  3. Training Data Poisoning
    Malicious actors may insert harmful or biased data into LLM training, which affects model behavior and output quality.

    Mitigation: We vet training data rigorously, establish trusted sources, and use anomaly detection to flag poisoned data that can be checked by a human before it gets used.

  4. Model Denial of Service (DoS)
    Attacks could cause resource-heavy LLM operations, degrading service quality or increasing operational costs.

    Mitigation: We limit context window usage, enforce API rate limits, and monitor usage patterns for abnormal resource consumption, on a per account basis.

  5. Supply Chain Vulnerabilities
    Risks arising from third-party model or dataset dependencies, which may introduce outdated or compromised components.

    Mitigation: we use secure model repositories, require a Content Security Policy (CSP), maintain a Software Bill of Materials (SBOM), and monitor suppliers for policy and security updates.

  6. Sensitive Information Disclosure
    LLMs might inadvertently reveal confidential information, exposing sensitive data.

    Mitigation: We employ data sanitization techniques, set strict access controls, and limit external data access within the LLM.

  7. Insecure Plugin Design
    LLM plugins can introduce vulnerabilities if they lack input validation or control over user actions, risking remote code execution.

    Mitigation: We use parameterized inputs, validate plugin authentication, and adopt least-privilege access control for plugin functions.

  8. Excessive Agency
    LLM-based systems with excessive permissions may take unintended actions due to inadequate user oversight.

    Mitigation: We minimize granted functions, require user approval for high-impact actions, and implement granular access levels for LLM plugins.

  9. Overreliance on LLMs
    Dependence on LLMs for critical decisions can lead to misinformation or insecure outputs without human review.

    Mitigation: We implement validation layers, use disclaimers in our UI, and limit the functional purpose of an LLM in our system.

  10. Model Theft
    Unauthorized access to a custom LLM's model, weights, or architecture, leading to intellectual property theft.

    Mitigation: we do not use custom models, hence theft would not be very useful, and by limiting the LLM scope within our services, we can switch our models instantly if a problem would occur.

Data storage

Before getting into the technical details, here are key points about our data handling:

  • We ensure consistent global performance by distributing data across 75 edge locations, but all customer and visitor data is stored exclusively in the EU.
  • User data is securely stored on Amazon Web Services in EU data centers. Our servers run in a Virtual Private Cloud, preventing any external connections to our database.
  • We retain data for up to 365 days and only use it to benefit our users and customers. Data is only used with explicit consent from an Unless account admin, with clear information on its intended use.

Architecture

We prioritize security by design, using tactics like operating in read-only mode, serving static data, and hiding it behind CDN endpoints for DDoS protection. Our edit mode is controlled via an API with similar protections.

This setup offers benefits such as quick DDoS mitigation through integrated systems and edge services. Techniques like stateless SYN Flood mitigation verify connections before they reach protected services. Auto-traffic engineering disperses or isolates DDoS attack impacts, and firewalls offer application layer defense.

Our infrastructure uses Lambda functions, which are triggered on demand and don’t run when idle, enhancing security compared to always-on servers. This serverless architecture eliminates the need for OS maintenance or server management.

We encrypt data-at-rest with FIPS 140-2 validated hardware modules, and data in transit is secured with TLS using SHA-256 with RSA Encryption. Data processed by Lambda functions is protected in a shielded environment.

Development

As a SaaS provider, Unless does not distribute new software versions in the way software vendors would. Our service represents one version that is constantly receiving our attention and focus regarding security protocol and development efforts.

Our software developers employ secure coding standards, with ongoing reverse reviews and automated unit testing. Our software is tested internally, and critical features are released in beta to a select number of test customers for live field testing.

Live service

After formal code reviews, all deployment procedures are automated, with human involvement only required for user testing. We continuously monitor performance, availability, and security through automated processes and occasional manual checks.

We implement additional firewalls to restrict open ports on internet-facing servers. An Intrusion Protection System (IPS) acts as a secondary security layer, blocking access upon detecting suspicious login attempts. Our threat detection service identifies and prevents unauthorized behavior to avert security breaches.

Only engineers who need access to perform their tasks efficiently are granted system access, with varying rights based on their responsibilities. Unique credentials are assigned, and SSH Key-Based authentication is used for server access. Security access rights are reviewed monthly.

Patch management is designed to avoid affecting security vulnerabilities. Our microservices architecture allows for independent updates of system components, enabling targeted patches and bug fixes. We conduct continuous data backups for point-in-time recovery (PITR), maintaining electronic copies for 35 days with encryption during transit and at rest.

A legal retention policy requires the deletion of personal data, including files, databases, and backups, with encryption using 256-bit Advanced Encryption Standard (AES-256).

Organizational aspects

We allow customers to further improve their organizational security, by offering different user access levels.

  • Admins: all rights within a customer account.
  • Users: everything except user management.
  • Support agents: access to customer support features only.

The customer has the sole right of granting access to anyone. Unless employees have no access by default. Passwords are always hashed and salted using bCrypt. Additionally, data at rest and in motion is always encrypted by using TLS with at least 128-bit AES encryption. Data transport is over TLS using SHA-256 with RSA Encryption.

my image Some great alternative text

Friendly support from real people

We’re here to help

We are known for our quick responses if you have an issue. Feel free to ask us anything. But you can also ask our conversational AI a question, of course!