Security & Data Protection Overview (Trust)
Last updated: 2025‑12‑08
This Security & Data Protection Overview (“Overview”) explains at a high level how we design and operate our products to protect the data you entrust to us. It is written for security, legal, IT and business stakeholders evaluating inAi and its products, including PageMind and Emplo.
This Overview is informational only. It is not itself a contract, does not create any legal warranties or guarantees, and may be updated from time to time. The binding terms governing our relationship with you are set out in:
- the applicable Terms of Service for each product,
- our Data Processing Agreement (DPA) where we act as processor,
- our Privacy Policy, and
- any separate Master Service Agreement or pilot agreement we sign with you.
If there is any conflict between this Overview and those binding documents, the binding documents prevail. This Overview does not create any rights or obligations beyond those expressly set out in the applicable contracts.
Nothing in this Overview is legal advice. Each customer remains responsible for its own legal, regulatory, and internal-policy compliance when using our services. You should consult your own legal counsel before relying on any part of this document for compliance decisions.
We reserve the right to update this Overview at our discretion (for example, to reflect new laws, regulations, or technical measures). Where changes are material for existing customers, we may highlight them via the website or direct communications, in addition to updating this Overview.
1. Who we are and where we operate
1.1 Company identity
inAi is a French AI company based in Lille. We are incorporated as INAI, a French Société par Actions Simplifiée à associé unique (SASU) with share capital of €1,000, registered with the RCS Lille Métropole under number 987 977 386.
- Registered office and principal establishment: 142 rue d’Iéna, apt. 21, 59000 Lille, France.
- Corporate purpose: conception, development, publishing and commercialisation of software, SaaS applications and services using artificial intelligence, together with related consulting, training and R&D.
- Official domains: inai.fr, inai.world, pagemind.fr and emplo.fr.
We operate as a deep‑tech studio that turns large‑language‑model research into practical, auditable AI systems. Our current public products are:
- PageMind, focused on retail & catalog content automation for retailers, brands and marketplaces.
- Emplo, an AI career agent for job‑seekers in Europe.
Unless explicitly stated otherwise, the security and data‑protection practices described here apply to all inAi products and to shared infrastructure.
1.2 Geographic and legal environment
We design our systems and processes with EU law and French law as our primary legal frameworks. That includes, in particular:
- The EU General Data Protection Regulation (GDPR) and corresponding French implementing rules.
- The emerging EU AI Act, especially where our systems are used in contexts that may be considered high‑risk (e.g. employment support, certain compliance‑sensitive retail workflows).
- The evolving European cybersecurity framework (including the NIS2 Directive and related national rules); we monitor whether and when these obligations apply to our services and will adjust our security and incident‑handling practices accordingly.
Our architecture is intended to keep primary customer data at rest in virtual private cloud (VPC) environments located in data centres within the European Union and run on infrastructure from top‑tier European or international providers. Our Legal Hub and FAQ describe this EU‑centric posture, including data‑location, retention and deletion commitments. In limited cases, some processing or access may involve locations outside the EEA (for example, through specific sub‑processors, remote support, or user access from abroad) under the safeguards described in Section 12.
1.3 Scope of this Overview
This Overview covers:
- How we govern security and data protection at company level.
- How we treat our roles as controller and processor for different product flows.
- What kinds of data we process and why.
- Our approach to data residency, sub‑processors and international transfers.
- Technical and organisational security measures.
- Data subject rights, incident response and shared responsibilities.
It does not:
- Replace or modify our Terms of Service, DPA, Privacy Policy or any signed contract.
- Provide detailed configuration guides for each customer environment.
- Guarantee compliance of your specific use case with any sector‑specific regulation (e.g. financial, health, food‑labelling, employment law). Those determinations remain your responsibility, although we can provide supporting information where appropriate.
2. Regulatory foundations and compliance posture
2.1 GDPR and data protection by design
We design our data‑protection programme around the EU General Data Protection Regulation (GDPR) wherever it applies, and we generally apply similar safeguards as a baseline to other personal data, subject to local law.
Key GDPR‑aligned principles that inform our design:
Lawfulness, fairness, transparency: We process personal data only on documented legal bases (for example, performance of a contract with a customer, legitimate interests, compliance with legal obligations, or consent where applicable). The specific legal bases for each product and data category are explained in the relevant Privacy Policy and, where we act as processor, in the DPA.
Purpose limitation: We process personal data only for specified, explicit and legitimate purposes, such as providing PageMind catalog workflows for a retailer or Emplo’s career‑assistant features for a candidate. Any reuse of data for secondary purposes (e.g. aggregated analytics, product improvement) is limited and described in our legal documentation.
Data minimisation: Our products are designed to require only the data necessary to perform the requested service. For example, PageMind workflows operate primarily on supplier product files and technical metadata rather than pervasive personal data; Emplo minimises the data needed for matching and application preparation.
Accuracy: We design PageMind to avoid inventing facts, to rely on evidence in source documents, and to expose inconsistencies through QA and flags, and Emplo to allow candidates to review and edit application materials before use. Nevertheless, customers remain responsible for validating any content before using it in production or for regulated purposes.
Storage limitation: We apply reasonable retention limits and deletion mechanisms (self‑service where possible; otherwise on request), with backups kept only for a limited period (approximately 30 days) before being overwritten, as described in our Legal Hub and FAQ.
Integrity and confidentiality: We implement technical and organisational measures designed to ensure an appropriate level of security, including encryption, access control, logging and monitoring (detailed in Section 6 below).
Data protection by design and by default: New features are evaluated for privacy impact, and we prefer privacy‑friendly defaults (e.g. limited retention, opt‑in for more invasive analytics or automation, no use of customer data for third‑party advertising).
Where a processing activity is likely to result in high risk to individuals (for example, certain employment‑related or extensive profiling scenarios), we may conduct or support a Data Protection Impact Assessment (DPIA) in line with GDPR requirements where they apply, and may carry out equivalent assessments for other processing where appropriate. For customer‑specific high‑risk use cases, the responsibility to conduct a DPIA lies primarily with the customer as controller, but we can provide information about our systems to support their assessment.
2.2 EU AI Act and AI governance
Our products incorporate AI models and orchestrations in ways that may fall under the EU AI Act as the regulation becomes fully applicable. We anticipate two main roles:
- As a provider of certain AI systems (e.g. Emplo’s candidate‑side AI assistant, some PageMind flows that perform compliance‑sensitive operations).
- As a deployer of general‑purpose AI and other third‑party models within our hosted platform.
Consistent with the AI Act’s direction of travel, we aim to implement the following practices, especially for higher‑risk contexts:
Risk management and documentation: Internal documentation of system purposes, data sources, model choices, and known limitations; structured evaluation protocols for orchestration and decision quality (e.g. our internal OVC‑1, ADA‑1, KCR‑1 frameworks).
Data and model governance: Preference for high‑quality, lawful data sources; minimal use of personal data for training; controls on vendor configurations so that customer data is not used to train public models by default.
Transparency and user information: Clear messaging that users are interacting with AI‑enabled features, especially when AI generates or materially influences outputs such as catalog texts or application materials.
Human oversight: Design of workflows where customers (for PageMind) or candidates (for Emplo) can review and correct AI‑generated outputs before relying on them, and where more automated behaviours (such as any future Emplo Auto‑Apply feature) are strictly opt‑in, rule‑based, and reversible.
Logging and auditability: Retention of relevant logs, run configurations, and evidence traces so that behaviour can be reconstructed for troubleshooting or legal review, particularly in PageMind and its Verify.EU module.
We do not position this Overview as a full AI‑Act compliance statement. Detailed mapping of specific systems to AI‑Act categories and obligations will be set out separately where required.
2.3 Cybersecurity regulations and standards
We align our security posture with relevant European cybersecurity expectations rather than any one certification alone:
- We design our internal security management system broadly in line with ISO/IEC 27001:2022 control families (information security policies, asset management, access control, cryptography, physical security, operations security, communications security, system acquisition/development/maintenance, supplier relationships, incident management, business continuity).
- As the NIS2 Directive and national transpositions take effect for digital and cloud service providers, we monitor their applicability to our services and will adjust our practices accordingly, particularly around risk management, incident classification and reporting, and governance oversight.
At this stage, we do not claim to hold any formal certifications such as ISO 27001 or SOC 2. Where we refer to those frameworks, we do so as alignment goals and internal benchmarks, not as completed independent attestations. Formal certifications, if and when obtained, will be clearly indicated in our Legal Hub and on this page.
3. Data roles and shared responsibilities
3.1 Controller vs processor roles
Our role under data‑protection law depends on the product and the specific processing context.
PageMind
For PageMind’s core catalog workflows:
- The customer (e.g. retailer, brand, marketplace) is typically the controller of personal data contained in the product catalog and related files.
- inAi acts primarily as a processor, processing data on behalf of and under the instructions of the customer to deliver PageMind’s functions (ingesting supplier files, extracting attributes, generating texts, producing catalog files and reports).
For some limited operations (e.g. security logs, aggregated analytics, platform monitoring), inAi may act as an independent controller of strictly necessary technical and usage data; this is described in our Privacy Policy.
Emplo
For Emplo:
- inAi generally acts as controller for personal data of candidates who sign up directly to use Emplo as an AI career assistant (CV, preferences, job search history, application status, connected job‑board credentials, etc.).
- Where Emplo is provided through a third party or integrated into another platform, the precise controller/processor split may differ and will be documented in the applicable contractual and privacy documentation.
Other interactions
For website visitors, partners, and other business contacts (e.g. investors, incubators), inAi also acts as controller of the personal data it collects (contact details, communication history, basic web analytics), as described in our Privacy Policy and Legal Hub.
3.2 Customer responsibilities as controller
Where you are the controller (for example, using PageMind on your own catalog data), you remain responsible for:
- Determining the purposes and legal bases for processing any personal data you upload or cause to be generated.
- Providing appropriate information and notices to data subjects (customers, suppliers, employees) whose data is included in content you process via our services.
- Ensuring you have a lawful basis (e.g. contract, legitimate interests, consent) to process such data and to engage inAi as your processor.
- Deciding whether and how to use any AI‑generated outputs (e.g. catalog texts, compliance notes) in your production systems, including validating them against your own quality, legal and compliance standards.
- Conducting DPIAs and other regulatory impact assessments for your specific use cases where required by law.
- Responding to data subject requests (access, rectification, erasure, etc.) in respect of the data for which you are controller, with our support as processor under the DPA.
Our DPA sets out in more detail how we assist you with security, data subject rights, and regulatory obligations and how we follow your documented instructions.
3.3 inAi responsibilities as processor and/or controller
Subject to the precise contract and legal role, we are responsible for:
- Implementing and maintaining technical and organisational measures appropriate to the risk, including access control, encryption, logging and monitoring.
- Ensuring our staff and sub‑processors are subject to confidentiality obligations and process data only in accordance with documented instructions.
- Not engaging sub‑processors for customer data without appropriate contracts and transparency on their roles.
- Supporting controllers in meeting their obligations (e.g. responding to data subject requests, performing DPIAs, handling incidents) within the scope and timeframes set out in the DPA.
- Informing customers without undue delay of relevant personal‑data breaches affecting their data, to allow them to meet their own notification obligations.
We do not take on responsibilities that belong to the customer as controller, such as deciding what data should be uploaded, which outputs should be published, or how those outputs are used in customer decision‑making.
3.4 Shared responsibility model
Use of our services follows a shared responsibility pattern:
We are responsible for:
- Security and availability of the platform (cloud infrastructure, core services, base application security).
- Designing and maintaining reasonable safeguards for confidentiality, integrity, and resilience of the systems processing your data.
- Providing you with tools and information (e.g. logs, configuration options, documentation, audit trails) to support your compliance efforts.
You are responsible for:
- Access management and configuration in your own organisation (e.g. identity provider, user provisioning, role assignments, MFA settings).
- Endpoint and network security for the devices and systems your users use to access our services.
- The content of the data you upload, including compliance with intellectual‑property, trade‑secret, confidentiality and platform‑terms obligations; the security and standing of the external accounts (job boards, email) you connect when using our automation features, including accepting the risk of third‑party account bans; and your decisions to rely on outputs in production or in regulated contexts.
- Performing any legal assessments (e.g. DPIAs, AI‑risk assessments, sector‑specific compliance checks) that depend on your broader systems and processes.
We cannot be responsible for security issues or legal violations caused by:
- Your misconfiguration or failure to implement reasonable security on your side (e.g. leaked credentials, disabled MFA, compromised devices).
- Uploading unlawful, infringing, or inappropriate content into the platform.
- Using outputs in ways that are inconsistent with applicable laws, regulations, or third‑party platform terms.
Except where expressly agreed otherwise in writing, you are solely responsible for complying with laws and regulations that apply to your own products, services and business processes, including any sector‑specific and consumer‑protection rules, even where you use our services as part of those processes.
4. What data we process (high‑level)
4.1 PageMind
PageMind is designed primarily for B2B product‑content workflows and operates mainly on non‑personal data contained in supplier files and catalog structures. However, some personal data may be present incidentally (e.g. contact details, signatures).
Typical categories include:
Supplier and product content
- Supplier PDFs, images, Word documents, spreadsheets and other files describing products and specifications.
- Product identifiers (SKUs, GTINs, manufacturer codes), attributes (dimensions, power, materials), titles, descriptions, technical and regulatory notes.
Customer configuration & metadata
- Catalog templates (column names, required fields).
- Glossaries and allowed lists (brand terms, colour/size enumerations).
- Workflow configurations (languages, rules, flags).
Operational and security logs
- Run metadata (timestamps, user IDs, number of items processed).
- Statuses and error codes, flags for anomalies, hashes of inputs/outputs and configuration signatures for auditability.
- Basic technical logs (IP addresses, user agent, authentication events) necessary for security and performance.
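To make the auditability items above more concrete, the following sketch shows one way run-level hashes of inputs/outputs and a configuration signature could be produced. It is illustrative only: the field names, run-record structure and signing-key handling are assumptions, not a description of our actual implementation.

```python
import hashlib
import hmac
import json

def content_hash(data: bytes) -> str:
    """Fingerprint an input or output file without storing its content."""
    return hashlib.sha256(data).hexdigest()

def configuration_signature(config: dict, signing_key: bytes) -> str:
    """Sign a run configuration so later audits can show it was not altered."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()

# Hypothetical run record: only hashes and metadata are logged, not raw content.
run_record = {
    "run_id": "run-0001",                     # illustrative identifier
    "input_hash": content_hash(b"<supplier file bytes>"),
    "output_hash": content_hash(b"<generated catalog CSV bytes>"),
    "config_signature": configuration_signature(
        {"languages": ["fr", "en"], "rules": ["no_evidence_no_publish"]},
        signing_key=b"example-secret-key",    # in practice, a managed secret
    ),
}
print(run_record)
```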
4.2 Emplo
Emplo processes more direct personal data because it operates on candidate profiles:
Identity and contact data
- Name, email address and other contact details supplied by the user.
- CV/resume and similar documents, including education and employment history.
Preference and profile data
- Stated job preferences (roles, locations, salary expectations, contract type, relocation, constraints).
- Derived profile features used internally for matching and filtering (e.g. seniority estimates, skill tags).
Job search and application data
- Lists of job postings considered, shortlisted, or applied to.
- Tailored CVs and cover letters generated for specific roles.
- Application statuses and notes as recorded in the dashboard.
Account and credential data
- Login credentials for Emplo itself.
- Where the user connects job‑board or email accounts to Emplo, encrypted credentials or tokens required to access those services on the user’s behalf (if and only if the user opts in).
Telemetry and logs
- Actions taken within Emplo (searches, approvals, edits).
- For any future automated features, logs of auto‑submitted applications (job link, time, materials used, rules applied).
We do not intentionally seek to collect special categories of personal data (e.g. health data, political opinions, religious beliefs) through Emplo or PageMind. Users are discouraged from including such data in free‑text fields, and customers should avoid uploading documents that are unnecessarily sensitive. In practice, CVs and application materials may nevertheless contain information that qualifies as sensitive under data‑protection law (for example, health‑related information, union membership, or references to political or religious activities in experience sections). Our handling of such data, including the applicable legal bases and additional safeguards, is governed by the Emplo‑specific Privacy Policy and any applicable local requirements.
Emplo is intended for adults; it is not directed to children under 16 (or any higher minimum age specified in the relevant jurisdiction), and we do not knowingly collect personal data from such individuals. If we become aware that we have collected personal data from a child in contravention of these intentions, we will take reasonable steps to delete it.
4.3 Website, support and business contacts
In addition to product data, we process limited personal data in other contexts:
- Website forms (contact, demo requests, pilot enquiries, partner forms).
- Support interactions via email or other channels.
- Business development, investor and incubator communications.
The types of data here are standard (names, emails, roles, messages) and are processed under our Privacy Policy and email/anti‑spam policy.
4.4 Ownership of data and outputs
- Customer inputs (e.g. supplier files, CVs, templates, configuration) remain the property of the customer or the user who provided them, subject to any third‑party rights.
- Outputs produced by our systems (e.g. catalog CSVs, run reports, evidence packs, tailored CVs and cover letters) are, by default, owned by the customer or user as described in the applicable Terms, with inAi retaining only those rights necessary to operate, maintain and improve the services as described in our contracts.
You are responsible for ensuring that you have all necessary rights (including intellectual‑property and confidentiality rights) to upload your data to our services and to use the outputs in your own systems and processes.
5. Data residency, hosting and sub-processors
5.1 Data location and EU-only VPC
We operate our products primarily on infrastructure located in the European Union, using a virtual private cloud (VPC) architecture. Our design goal is that primary customer data at rest, including derived artifacts such as embeddings, is stored on EU-based infrastructure.
In particular:
- Application servers, databases, and primary storage used for PageMind and Emplo are provisioned in EU data centres of our cloud providers.
- Backups and disaster-recovery replicas are also kept in the EU, subject to comparable access controls and encryption standards.
- Operational logs and monitoring data that may contain limited personal or pseudonymous identifiers are, by default, stored and processed within our EU-based environments unless explicitly stated otherwise in our Sub-processor List, Privacy Policy or your contract.
Where optional features rely on services outside the EU/EEA (for example, a specific third-party integration requested by a customer), those features will be:
- clearly marked as such,
- disabled by default unless explicitly enabled by the customer, and
- governed by appropriate contractual and transfer safeguards as described in Section 12.
5.2 Infrastructure providers
We build on large, established infrastructure and AI providers with strong security and compliance programmes. As of today:
- Our workloads are distributed across several major cloud platforms (e.g. Google Cloud, Microsoft Azure, AWS, Scaleway), supported by non-dilutive credit programmes and startup partnerships.
- We maintain relationships with multiple model and infrastructure vendors (including EU players) to remain vendor-agnostic and to support a resilient, low-cost orchestration layer.
All such providers:
- operate under data processing agreements that define their role as processors or sub-processors;
- commit to appropriate technical and organisational security measures;
- are contractually limited in their ability to use our data (for instance, they may not use customer content for their own advertising or unrelated product training without separate consent).
We do not publicly list every internal architecture detail on this page, as the exact setup can evolve. The authoritative list of sub-processors is maintained in our Legal Hub and, where applicable, in your DPA.
5.3 Sub-processors and transparency
We rely on sub-processors to provide core infrastructure (cloud hosting, storage, managed databases, monitoring) and supporting services (error tracking, customer support systems, email delivery, etc.).
We commit to:
Maintain an up-to-date Sub-processor List, accessible via the Legal or Data Protection section of our site or upon request.
Ensure each sub-processor:
- is bound by a written agreement that imposes obligations substantially equivalent to those in our DPA;
- processes personal data only for the purpose of providing the relevant service;
- implements adequate technical and organisational measures to protect personal data.
Notify customers of material changes to our sub-processor list (for example, the addition of a new sub-processor that will process customer personal data), via email, dashboard notice or other reasonable means.
Where required in enterprise contracts, we may offer:
- a right to object to certain sub-processors for well-founded data-protection reasons; and
- a defined process for addressing such objections (for example, by adjusting configuration or offering alternative solutions).
5.4 Customer-managed infrastructure (where applicable)
For certain enterprise deployments, we may support configurations where parts of the stack run in customer-managed environments (e.g. a dedicated VPC or “bring-your-own-cloud” setup). In such cases:
- Our responsibilities are limited to the components we control (agent orchestration, PageMind/Emplo application logic, etc.).
- The customer is responsible for the security and compliance of its own infrastructure (network configuration, VPNs, identity provider, endpoints, and any additional tools it deploys).
6. Security of processing – technical and organisational measures
We design our security controls to align with the requirements of GDPR Article 32 (security of processing) and with recognised frameworks such as ISO/IEC 27001:2022, while remaining pragmatic for a lean company.
6.1 Information Security Management
We operate an internal, lightweight Information Security Management System (ISMS) that includes:
- documented security policies (access control, change management, incident response, acceptable use) approved by management and reviewed periodically;
- an asset inventory of critical systems and data stores;
- basic risk-assessment processes to identify, prioritise and treat security risks, especially around product data, model orchestration and sub-processors;
- vendor risk management and due-diligence checks when onboarding new sub-processors or infrastructure vendors.
This ISMS is evolving and is intended to form the basis for future certification (e.g. ISO 27001) as the company grows; we do not currently claim any formal certification.
6.2 Access control and identity management
Access to production systems and customer data is intended to be strictly limited:
Principle of least privilege: each internal account and service is granted only the minimum permissions necessary to perform its function.
Role-based access control (RBAC):
- Separate roles for development, operations and support, with different access scopes.
- Time-limited, audited elevation when access to customer data is required (for example, during a support investigation).
Authentication:
- Strong password policies and secure password hashing for internal accounts.
- Multi-factor authentication (MFA) is required for access to production infrastructure and administrative tools wherever supported by providers, and we strongly encourage its use for other access paths.
- For customers, we either provide secure credential handling or integrate with external identity providers (where supported). Customers are encouraged to enable MFA and SSO options where available.
Sessions are subject to inactivity timeouts and can be terminated server-side if compromise is suspected.
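As an informal illustration of the least-privilege and role-based access control principles described in this subsection, the sketch below shows a minimal permission check. The role names and permission strings are hypothetical examples and do not reflect our production access model.

```python
# Minimal role-based access control sketch: each role carries only the
# permissions it needs, and every action is checked against that set.
ROLE_PERMISSIONS = {
    "developer": {"read:staging"},
    "operations": {"read:production_metrics", "deploy:production"},
    "support": {"read:customer_tickets"},        # no direct data access by default
    "support_elevated": {"read:customer_data"},  # time-limited, audited elevation
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert not is_allowed("support", "read:customer_data")       # denied by default
assert is_allowed("support_elevated", "read:customer_data")  # granted only when elevated
```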
6.3 Encryption and key management
We use encryption to protect data both in transit and at rest:
In transit: We aim to ensure that external and internal communications involving personal data or sensitive operational data use TLS (HTTPS or equivalent).
At rest:
- Databases, storage buckets and backups are encrypted using industry-standard algorithms (e.g. AES-256 or provider-equivalent).
- Encryption is enabled by default for all primary data stores and backup locations.
Cryptographic keys are:
- managed using cloud providers’ Key Management Services (KMS) or equivalent mechanisms;
- rotated periodically and when compromise is suspected;
- accessible only to a minimal set of privileged roles, with usage logged by the provider.
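The sketch below illustrates, in general terms, what AES-256 authenticated encryption of a stored record looks like, using the open-source Python cryptography package. In our environments the equivalent operations and key handling are performed by managed provider services (storage-level encryption and KMS), so this is an illustration of the concept rather than our implementation.

```python
# Illustrative only: AES-256-GCM encryption of a record before storage.
# In practice, keys are generated, stored and rotated by a cloud Key
# Management Service (KMS), not handled directly in application code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit data-encryption key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per encryption operation
plaintext = b"example customer record"
associated_data = b"workspace-123"          # bound to the ciphertext, not secret

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext
```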
6.4 Network and infrastructure security
Our infrastructure is segmented and hardened:
Network segmentation:
- Separation of environments (development, staging, production) through distinct projects/accounts and network boundaries.
- Use of virtual private networks (VPCs), security groups and firewall rules to restrict inbound and outbound traffic.
Perimeter protection:
- Use of reverse proxies and, where appropriate, web application firewalls (WAFs) to mitigate common web threats.
System hardening:
- Use of managed platform services where possible (managed databases, serverless functions, container orchestration) to reduce operating-system level exposure.
- Regular application of security patches to base images and runtime environments.
We monitor provider security advisories and aim to apply relevant updates in a timely manner.
6.5 Application security and SDLC
We follow secure-development practices appropriate for our size:
Code management:
- Version control with enforced code review for changes to critical components.
- Branch protection on main branches; no direct pushes to production.
Dependency and vulnerability management:
- Use of automated tools to detect known vulnerabilities in third-party dependencies.
- Regular review and update of dependencies.
Testing:
- Unit and integration tests covering key pipelines (ingestion, extraction, translation, QA, Verify.EU).
- Targeted end-to-end tests on representative datasets to detect regressions.
Change management:
- Structured deployment pipelines with staging environments.
- Rollback procedures for failed deployments.
From time to time, we may engage external security assessors or participate in structured reviews (e.g. through incubator programmes) to evaluate our security posture. Any serious findings are triaged and addressed according to severity.
6.6 Logging, monitoring and alerting
We maintain logs and metrics for security-relevant and operational events:
Logging:
- Authentication events (log-ins, failed attempts, password resets).
- Privilege changes and administrative actions.
- Key product events (pipeline launches, configuration changes, PageMind and Emplo runs).
- System errors and exceptions.
Logs may contain limited personal data (such as user IDs, email addresses or IP addresses) where necessary for security, troubleshooting and compliance. Retention periods are limited and aligned with our data-minimisation principles (see Section 9).
Monitoring and alerting:
- Metrics and health checks on infrastructure and key services.
- Alerts for unusual error patterns, degraded performance, or suspected abusive behaviour.
- Where feasible, detection of anomalous access patterns indicating possible credential compromise.
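As a simplified illustration of the kind of threshold-based alerting described above (for example, on repeated failed logins), the following sketch counts recent authentication failures per account. The event shape, window and threshold are illustrative assumptions.

```python
# Simple sketch of threshold-based alerting on authentication failures.
from collections import Counter
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # failed attempts per account within the window (example value)

def suspicious_accounts(events: list[dict], now: datetime) -> list[str]:
    """Return account IDs with an unusual number of recent failed logins."""
    recent_failures = Counter(
        e["account_id"]
        for e in events
        if e["type"] == "login_failed" and now - e["timestamp"] <= WINDOW
    )
    return [account for account, count in recent_failures.items() if count >= THRESHOLD]

now = datetime(2025, 1, 1, 12, 0)
events = [
    {"type": "login_failed", "account_id": "acct-42", "timestamp": now - timedelta(minutes=i)}
    for i in range(6)
]
print(suspicious_accounts(events, now))  # ['acct-42'] -> would trigger an alert
```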
6.7 Business continuity and disaster recovery
To preserve availability and resilience:
- We maintain regular backups of critical data stores, stored redundantly in EU regions.
- We periodically test restore procedures from backups to ensure they are usable.
- Our architecture is designed to tolerate failures of individual instances or components through the use of managed services and horizontal scaling where appropriate.
We define internal target objectives for Recovery Point Objective (RPO) and Recovery Time Objective (RTO) that are suitable for our current scale. Concrete SLA commitments, if any, are specified in product-specific contracts (MSA/ToS) rather than this Overview.
6.8 Organisational and personnel security
Security is also a people and process concern:
All employees and long-term contractors with access to systems:
- sign confidentiality and IP agreements;
- receive onboarding covering data protection, acceptable use, and incident reporting;
- are subject to offboarding procedures that remove access on departure.
Access rights are reviewed periodically, especially when roles change.
We require and encourage the use of secure devices and basic hygiene (e.g. full-disk encryption, up-to-date OS, security patches) for staff with access to sensitive environments, and may apply technical enforcement measures where appropriate.
6.9 Physical security
We rely on the physical security controls of our cloud and data-centre providers (badge access, CCTV, guards, environmental controls, redundancy). We do not operate our own physical data centres.
We never present any system as perfectly secure. No matter how carefully a system is designed, residual risk remains, and new classes of vulnerability can appear. Our goal is to keep risk proportionate and to respond rapidly and transparently when issues are detected.
Customers remain responsible for securing their own devices, networks and identity providers, and for protecting secrets such as API keys and passwords under their control (see Section 3.4).
7. Data protection by design and by default
We interpret “data protection by design and by default” (GDPR Article 25) as a combination of technical choices, product defaults, and customer-facing controls.
7.1 Minimisation and purpose limitation in products
We design PageMind and Emplo to collect and process only the personal data necessary for their core functions:
PageMind:
- primarily processes product and supplier documents for catalog workflows; personal data is typically incidental (e.g. contact details in PDFs) rather than central.
- we do not require customer teams to upload HR records, extensive customer lists or other unrelated personal data.
Emplo:
- collects candidate data (CV, preferences, application history) that is directly relevant to job search and application support.
We actively discourage:
- use of our services as a storage system for special-category data (e.g. detailed health records or political opinions) unless explicitly justified and legally grounded;
- inclusion of unnecessary personal data in free-text fields and attachments, especially where aggregated or shared across systems.
7.2 Product defaults and configuration
Wherever feasible, we choose default settings that minimise risk:
Retention:
- Default retention periods for workspaces, projects and logs are limited and documented; customers can request earlier deletion where appropriate (see Section 9).
Analytics and tracking:
- By default, we avoid invasive tracking in our products and do not rely on third-party behavioural advertising networks within PageMind or Emplo.
- Our marketing sites may use standard analytics or cookies as described in our separate Cookie and Privacy Policies.
- Where additional analytics features are introduced, they are either aggregated or opt-in.
Automation features:
- For Emplo, more automated behaviours (such as any Auto-Apply-style feature) are opt-in, rule-based and controllable by the user (on/off, limits), not forced defaults.
- For PageMind, publication of catalog outputs remains under customer control; our systems do not directly push changes to production channels without customer configuration and integration.
7.3 Pseudonymisation, aggregation and masking
Where practical, we:
- use pseudonymous identifiers (e.g. internal IDs, hashes) instead of direct personal identifiers in logs and analytics;
- aggregate metrics for monitoring and research (e.g. error rates, fill-rates, latency) to avoid exposing detailed individual data when not necessary;
- provide options to mask or omit certain fields from logs or diagnostic exports for highly sensitive contexts.
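The following minimal sketch illustrates field masking of the kind mentioned above for logs and diagnostic exports. The field names and masking policy are hypothetical examples, not a description of our actual export pipeline.

```python
# Illustrative field masking for diagnostic exports: fields treated as
# sensitive are replaced before an export leaves the system.
SENSITIVE_FIELDS = {"email", "phone", "full_name"}

def mask_for_export(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: ("***" if key in SENSITIVE_FIELDS else value)
        for key, value in record.items()
    }

record = {"full_name": "Jane Doe", "email": "jane@example.com", "role_applied": "Data Analyst"}
print(mask_for_export(record))
# {'full_name': '***', 'email': '***', 'role_applied': 'Data Analyst'}
```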
7.4 DPIAs and risk assessments
We perform internal risk analyses and, where appropriate, Data Protection Impact Assessments (DPIAs) or equivalent exercises for processing activities that:
- involve significant profiles or behavioural analytics;
- may be treated as high-risk under GDPR or the EU AI Act (for example, Emplo’s impact on job opportunities, or PageMind’s role in safety-critical product categories).
However:
- For use cases where you are the controller (e.g. using PageMind to feed your PIM and e-commerce channels), your organisation remains responsible for conducting any DPIAs required for your broader system, which includes your own tools, processes and decisions.
- We can provide documentation about our processing to support your DPIA (architecture descriptions, data flows, security measures, logs and audit information), typically under NDA and/or as part of a DPA.
7.5 Alignment with internal research
Our internal research themes (“Agentic Decision Systems”, “AI for Business Operations”, etc.) include metrics and methods for:
- verifying outputs against sources;
- measuring error rates, oversight minutes and incident frequencies;
- tracking how tasks move from “assist” to “supervise” to “run” autonomy levels.
These same mechanisms help enforce data protection by design, by:
- providing explicit gates and thresholds before automation is enabled;
- making it easier to explain and justify system behaviour to auditors and regulators.
8. AI models, data use and training
Our products rely on multiple AI models and orchestration layers. This section explains how we use AI, what data is sent to which models, and how that relates to privacy and compliance.
8.1 Use of third-party AI and model vendors
We are model-agnostic and use a mix of proprietary and open-source models, selected by task (OCR, extraction, translation, generation, re-ranking, etc.).
Key points:
- We integrate with several external AI providers (e.g. OpenAI, Anthropic, Mistral and others), often under startup or partner programmes.
- For API‑based or enterprise‑grade model services, we rely on vendor plans or documentation stating that customer data sent via those services is not used to train or improve their foundation models by default, or we configure available settings to disable such training where that option is offered. Vendor policies and features can evolve over time, so we review them periodically and adjust our configuration and provider choices as needed. Where technically and commercially feasible, we prefer EU/EEA processing locations or providers’ “EU data boundary” offerings, but such options may not be available for all models and use cases.
When we send data to external models, we:
- limit the content to what is needed for the specific call (for example, the relevant product text, not entire archives);
- avoid sending sensitive or high-risk fields when they are not necessary;
- apply additional pseudonymisation in some contexts.
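As an illustration of how content can be minimised before an external model call, the sketch below redacts obvious contact details and sends only the text needed for the task. The redaction patterns and payload structure are illustrative assumptions and do not describe any specific vendor API.

```python
# Illustrative minimisation before an external model call: only the relevant
# product text is sent, with obvious contact details redacted first.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d .-]{7,}\d")

def minimise(text: str) -> str:
    """Redact email addresses and phone-number-like strings."""
    text = EMAIL.sub("[email]", text)
    return PHONE.sub("[phone]", text)

def build_payload(product_text: str) -> dict:
    """Build a minimal request payload: the task and the needed text only."""
    return {"task": "extract_attributes", "text": minimise(product_text)}

payload = build_payload("Drill X200, 750 W. Contact: sales@supplier.example, +33 6 12 34 56 78")
print(payload)
```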
Details of each sub-processor relationship, including AI providers, are set out in our Sub-processor List and, where applicable, in your DPA.
8.2 Internal use of data to improve the service
We may use limited subsets of customer data and system outputs to improve our services, under strict controls:
For all products:
- Aggregated statistics (e.g. error rates, latency distributions, fill-rates) feed into our evaluation protocols (OVC-1, ADA-1, KCR-1) to measure orchestration quality and system reliability.
- We may inspect individual failure cases (e.g. mis-parsed tables, incorrect attribute extraction) for debugging, subject to strict access controls and only where necessary; where practicable we use pseudonymised or minimised datasets rather than raw personal data.
For PageMind:
- We may analyse anonymised records of extraction and QA flags to improve parsing rules, allowed lists and glossary enforcement.
For Emplo:
- We may analyse, in aggregate, which kinds of generated CV and letter variants perform better (e.g. based on user feedback), without using this to make independent decisions about candidates.
Unless explicitly permitted in a separate agreement:
- We do not sell customer data.
- We do not use individual customer content as generic training data to build a separate product for third parties.
- We do not use customer data for third-party advertising or cross-customer profiling.
Any broader training uses (for example, training a custom model on a customer’s own data) will be subject to explicit customer agreement and, if needed, additional contractual terms.
8.3 AI transparency and user information
We aim to be transparent about where and how AI is used:
PageMind:
- uses AI to extract attributes, translate/localise product texts, generate compliance-assistant texts, and to orchestrate QA and retry flows.
- outputs are accompanied by flags and evidence traces so that human reviewers can understand what the system did.
Emplo:
- uses AI to analyse CVs, search and pre-filter job postings, and generate tailored CVs and cover letters.
- the UI is designed to make clear when texts are AI-generated and to allow candidates to review and edit before applications are sent.
For more automated features (e.g. planned Auto-Apply-style functionality in Emplo):
we will provide clear explanations of:
- what is automated and what remains under user control;
- which criteria govern automated actions (e.g. salary thresholds, locations, match scores);
- how users can review logs and disable automation at any time.
We do not make fully automated decisions that, by themselves, produce legal or similarly significant effects on individuals (such as hiring decisions); those decisions are made by human controllers (employers, recruiters) using their own systems. This statement reflects our current product designs; if this changes in the future, we will update our documentation and, where required, implement appropriate safeguards and transparency measures.
8.4 Limits and responsibilities around AI outputs
AI-generated content and recommendations are inherently probabilistic and may contain errors or omissions. To manage this:
We design our systems to:
- minimise factual invention (e.g. PageMind’s “no evidence, no publish” stance in Verify.EU for compliance-critical fields);
- make it easy to audit and correct outputs (QA lists, flags, evidence packs).
However, customers and users are responsible for:
- validating outputs before using them in production contexts (catalog pages, marketing materials, compliance documents, job applications);
- ensuring that outputs meet applicable legal and regulatory standards (product-information rules, employment law, consumer protection, etc.);
- not relying on AI outputs as a substitute for professional advice (legal, regulatory, HR, medical, financial).
Our Terms of Service and product-specific policies contain more detailed disclaimers and limitations of liability for reliance on AI outputs.
9. Data retention, deletion and backups
9.1 Default retention principles
We apply retention rules according to the type of data, not a single global timeline. In general:
We keep data only as long as necessary to:
- provide the services you requested,
- meet our legitimate security and operational needs, and
- comply with legal obligations (for example, accounting or fraud-prevention rules).
We aim for shorter retention for:
- detailed logs and low-level telemetry,
- temporary internal artefacts (e.g. intermediate parsing results),
- transient caches.
We allow longer retention for:
- core workspace and project data (PageMind),
- candidate accounts and histories (Emplo),
- business records required by law.
Concrete retention periods may evolve and are documented in product-specific policies and, where required, in the DPA. This Overview describes the high-level model only.
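The high-level model above can be pictured as a retention schedule keyed by data category, as in the following sketch. The categories and durations shown are illustrative examples only and are not contractual commitments.

```python
# Illustrative retention schedule: retention is driven by data category,
# not by a single global timeline. Values are examples, not commitments.
from datetime import date, timedelta

RETENTION_DAYS = {
    "transient_cache": 1,
    "low_level_logs": 30,
    "security_logs": 365,
    "workspace_data": None,   # kept while the workspace or account is active
}

def purge_due(category: str, created: date, today: date) -> bool:
    """Return True when a stored item has exceeded its retention period."""
    days = RETENTION_DAYS.get(category)
    if days is None:
        return False  # lifecycle tied to workspace/account status instead
    return today - created > timedelta(days=days)

print(purge_due("low_level_logs", date(2025, 1, 1), date(2025, 3, 1)))  # True
```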
9.2 PageMind retention
For PageMind, we typically distinguish:
Workspace and project data
Supplier files, parsed representations, configuration, and generated outputs (catalog CSVs, descriptions, evidence packs).
Retained for as long as the workspace or project is active, subject to customer configuration.
When a workspace or project is deleted, we:
- delete associated primary data from active systems, and
- let encrypted backups expire on their regular schedule.
Run reports and audit artefacts
- Run-level metadata (what was processed, when, by whom) and configuration signatures may be retained longer than raw inputs for auditability and troubleshooting.
- Where these artefacts contain personal data, we minimise and pseudonymise where possible.
Operational logs
- Low-level logs (e.g. transient application and infrastructure logs) are kept for shorter periods, typically days to a few months.
- Some higher-level logs (e.g. security logs, critical error logs) may be kept longer where required for incident investigation, legal defence or fraud detection.
Customers can request early deletion of specific projects or workspaces, subject to reasonable verification and to any legal duties we have to preserve data (for example, during a dispute or investigation).
9.3 Emplo retention
For Emplo, retention is tied more closely to the candidate’s account and activity:
Candidate account and profile
Stored as long as the account is active.
Users can request account deletion; this typically results in:
- deactivation of the account,
- deletion or anonymisation of profile data, CVs, and application histories from active systems, and
- eventual removal from backups once they age out.
Connected-service credentials
- Credentials or tokens for job boards or email services are kept only while the connection is active.
- If a user revokes a connection or deletes their account, we delete the corresponding credentials from our systems.
Telemetry and logs
- Interaction logs (e.g. which suggestions were accepted, basic usage events) are kept for a limited time to support product improvement and abuse detection.
- After this period, logs may be aggregated or anonymised.
Where Emplo is used in contexts subject to long-term retention requirements (for example, if an employer uses it as part of a recruitment process), those requirements are imposed and managed by the employer as controller, not by inAi.
9.4 Deletion requests and backups
When we delete data from active systems (due to account deletion, workspace deletion, or a valid data subject request):
- the data is removed or irreversibly anonymised in our production databases and storage;
- some remnants may persist in encrypted backups for a limited retention window (for example, up to about 30 days), after which they are overwritten in the course of regular backup cycles;
- we do not actively restore backups solely to delete individual records, except where legally required (for example, under a regulator’s order).
Where data has been exported or shared by you (e.g. downloaded catalog files, exported CVs, integrations into your systems), deletion on our side does not affect those external copies; managing those copies is your responsibility as controller.
9.5 Legal holds and exceptions
We may preserve certain data beyond normal retention where:
- required by law (e.g. tax or accounting records);
- reasonably necessary to establish, exercise or defend legal claims; or
- needed for the investigation of security incidents, abuse or fraud.
In those cases, access is restricted and data is used only for the relevant legal or security purpose.
10. Data subject rights and support
10.1 Rights under GDPR
For processing of personal data subject to GDPR, data subjects have the following rights (subject to conditions and exceptions):
- Right of access: to know whether their data is being processed and obtain a copy.
- Right to rectification: to correct inaccurate or incomplete data.
- Right to erasure (“right to be forgotten”): to delete data in certain circumstances.
- Right to restriction of processing: to limit processing in specific cases.
- Right to data portability: to receive certain data in a structured, commonly used format and transmit it to another controller.
- Right to object: to object to processing based on legitimate interests or direct marketing.
- Rights related to automated decision-making, including profiling, where decisions have legal or similarly significant effects.
We respect these rights within our role and legal obligations. For individuals whose data is processed outside the scope of GDPR, similar or different rights may be available under local law; we handle such requests in line with the applicable legislation and our Privacy Policy.
10.2 Where inAi is controller (e.g. Emplo, website)
Where we act as controller (for example, for Emplo candidate accounts or website visitors):
Data subjects can exercise their rights by contacting us via the privacy contact listed in Section 15 or, where available, through in-product controls (e.g. account deletion, profile editing).
We:
- authenticate the requester using reasonable measures (for instance, verifying control of the registered email address);
- assess the request according to GDPR and other applicable laws;
- respond within applicable statutory timeframes (typically one month, extendable in complex cases);
- explain when we must retain some data (e.g. logs required for security or legal obligations) even after an erasure request.
10.3 Where inAi is processor (e.g. PageMind for retailers)
Where we act as a processor (for example, processing catalog data for a retailer via PageMind):
Data subjects should direct their requests primarily to the controller (the retailer, brand or marketplace that owns the catalog).
We do not usually respond directly to access or erasure requests from data subjects concerning data for which we are purely a processor, unless instructed or expressly authorised by the controller or required by law.
Upon receiving a valid request from a controller, we:
- assist them with retrieval, correction or deletion of the relevant data from our systems, in line with the DPA;
- implement technical measures (e.g. search tools, targeted deletion routines) to make this practical.
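As an illustration of the targeted deletion routines mentioned above, the following sketch deletes or anonymises records tied to a single data subject, using an in-memory stand-in for real data stores. The table names, fields and anonymisation rule are hypothetical.

```python
# Illustrative targeted deletion routine for a controller-instructed request.
def handle_deletion_instruction(tables: dict, subject_id: str) -> dict:
    """Delete contact rows for one data subject and anonymise audit entries."""
    before = len(tables["catalog_contacts"])
    tables["catalog_contacts"] = [
        row for row in tables["catalog_contacts"] if row["subject_id"] != subject_id
    ]
    anonymised = 0
    for row in tables["run_audit_log"]:
        if row.get("subject_id") == subject_id:
            row["subject_id"] = None   # keep the audit trail, drop the identifier
            anonymised += 1
    return {
        "deleted_rows": before - len(tables["catalog_contacts"]),
        "anonymised_rows": anonymised,
    }

tables = {
    "catalog_contacts": [{"subject_id": "s-1", "email": "buyer@example.com"}],
    "run_audit_log": [{"run_id": "run-7", "subject_id": "s-1"}],
}
print(handle_deletion_instruction(tables, "s-1"))  # {'deleted_rows': 1, 'anonymised_rows': 1}
```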
10.4 Limits, security and identification
To protect all users:
We may decline or narrow a request where:
- we cannot adequately verify the requester’s identity;
- the request is manifestly unfounded or excessive (for example, repeated very frequently without additional justification);
- data must be retained under overriding legal obligations.
We may also redact information that would reveal trade secrets, confidential information about other users or customers, or security-sensitive details.
In all cases, we aim to be transparent about what we can and cannot do, and why.
11. Security incidents and personal-data breaches
11.1 Internal incident response
We maintain an internal incident response process that defines:
- what counts as a security incident or personal data breach;
- how incidents are identified, triaged and prioritised;
- roles and responsibilities (e.g. technical lead, communications lead, management escalation);
- steps for containment, eradication, recovery and post-incident review.
Typical steps include:
Detection and reporting: Incidents may be detected via automated alerts, internal monitoring, or external reports (e.g. from customers or security researchers). All staff are instructed on how to escalate suspected incidents quickly.
Triage and containment: We quickly assess severity and scope, then isolate affected systems or credentials where necessary to contain the issue.
Investigation and remediation: We collect logs and evidence, identify root causes and implement fixes (patches, configuration changes, access revocations, additional monitoring, etc.).
Recovery: We restore normal operations, including from backups if appropriate, and monitor for recurrence.
Post-incident review: We document the incident, impact, root causes and corrective actions. Where needed, we adjust processes or technical defences.
11.2 Personal-data breach notifications
If we become aware of a personal data breach affecting customer data:
We will notify affected customer controllers without undue delay after becoming aware of the breach, using the contact details and channels specified in your account or contract documentation; you are responsible for keeping those details accurate and up to date.
Our notification will aim to include:
- a description of the nature of the breach (categories and approximate number of data subjects and records concerned, where known);
- likely consequences of the breach;
- measures we have taken or propose to take to address the breach and mitigate its possible adverse effects;
- information needed to help controllers meet their own notification obligations toward supervisory authorities and data subjects, where applicable.
We may provide initial notifications with limited information (if the investigation is still in early stages) and follow up as we learn more.
Where we act as controller (for example, for Emplo users or website visitors), we handle breach notifications directly in accordance with GDPR and national law, including any obligations to notify supervisory authorities and affected individuals.
11.3 Customer responsibilities in incidents
Customers remain responsible for:
- promptly informing us if they suspect a compromise of credentials or misuse of accounts within our services;
- investigating and addressing incidents in their own systems (devices, networks, identity providers, integrated tools);
- carrying out their own regulatory notifications where they are the controller and our services are only one part of their broader system.
Our DPA and product terms may contain further detail on cooperation, logs we can provide, and limitations of liability in case of incidents.
11.4 Government and law-enforcement requests
From time to time we may receive binding requests from courts, law-enforcement agencies or other public authorities seeking access to data. We review such requests to assess their legal basis and scope. Where permitted by law and contract, we:
- seek to limit any disclosure to what is strictly necessary to comply with the request; and
- notify the affected customer or user before providing data, or as soon as we are legally allowed to do so.
Nothing in this Overview requires us to disclose data in circumstances where we are prohibited from doing so by law or where we successfully challenge an overbroad or invalid request.
12. International data transfers
12.1 Default EU-only posture
As described above, our default posture and design goal is that:
- all primary processing and storage of customer data at rest (including embeddings and backups) takes place within data centres located in the European Union;
- we prioritise EU-based or EEA-based providers and regions for our core infrastructure and logs.
This approach reduces the number of scenarios where international data transfers arise.
12.2 When transfers may occur
Despite our EU-centric design, some transfers of personal data outside the EEA may occur or become necessary, for example:
- where a sub-processor providing a specialised service (e.g. email delivery, abuse detection, model inference) operates from or routes data through non-EEA locations;
- where support involves staff or contractors located outside the EEA accessing systems or data (always under strict access controls and confidentiality obligations);
- where you or your users access the services from outside the EEA (in which case certain usage data is inherently transferred to and from their devices).
We aim to keep such transfers limited and proportionate. For many customers, especially those based in the EU, it is possible to operate with little or no non-EEA transfer for core data.
12.3 Safeguards for international transfers
Where personal data is transferred outside the EEA, we use recognised transfer tools such as adequacy decisions or Standard Contractual Clauses (SCCs) (or their successors), supplemented by additional technical and organisational measures where appropriate (for example, encryption, minimisation, and access limitations) and by contractual restrictions on onward transfers and processing purposes. We aim to limit such transfers to what is necessary to provide and support the services.
12.4 Customer choices
Enterprise customers with strict data localisation or data sovereignty requirements should:
- inform us of those requirements early in the engagement;
- review the Sub-processor List and negotiate any additional restrictions or configurations required (for example, use of only EU-region AI providers, or exclusion of specific vendors).
We will work in good faith to accommodate reasonable requirements, subject to technical feasibility and commercial terms.
13. Compliance roadmap and certifications
13.1 Current alignment and non-certification
At present:
- we align our internal security and privacy practices with key requirements of GDPR, the emerging EU AI Act, and widely recognised security standards such as ISO/IEC 27001:2022;
- we leverage security controls offered by major cloud providers and platform services (e.g. encryption, access control, monitoring, hardened managed services);
- we implement internal processes (policies, risk assessments, incident response, DPIA-like analyses) that reflect these frameworks.
We do not currently claim:
- ISO 27001 certification,
- SOC 2 attestation,
- or any similar independent certification.
Any such claims will be clearly communicated and supported by formal certificates or reports when and if obtained.
13.2 Roadmap intentions
As we grow and our systems become more widely used, we intend to:
- gradually formalise and expand our ISMS (roles, documentation, internal audits) to make external certification feasible;
- evaluate and, where appropriate, pursue:
  - ISO/IEC 27001:2022 certification for information security management;
  - relevant attestations or reports (e.g. SOC 2 Type I/II) if commercially justified;
- track developments in:
  - EU AI Act secondary legislation and guidance,
  - NIS2 implementation in France and across relevant sectors,
  - any new AI codes of practice or industry standards for LLM-based systems.
These roadmap intentions are not binding commitments or guarantees. They represent our internal planning direction and may evolve as laws, standards and business priorities change.
13.3 Customer audits and questionnaires
For enterprise customers, we may:
- complete security questionnaires and due-diligence forms under NDA;
- provide architecture summaries, data flow diagrams, and descriptions of controls relevant to their use case;
- participate in reasonable audit exercises as defined in the DPA or MSA, subject to:
  - advance notice,
  - scope limitations,
  - confidentiality obligations, and
  - proportionality (to avoid exposing other customers or our own trade secrets).
14. Product-specific disclaimers and limitations of responsibility
14.1 PageMind
PageMind is a tool for catalog content automation and verification, not a legal or compliance advisor.
In particular:
PageMind:
- helps ingest and structure supplier and product documents;
- generates catalog texts and attributes;
- highlights inconsistencies, missing fields and possible compliance issues;
- may enforce internal business rules such as “no evidence, no publish” for certain fields in the Verify.EU module.
PageMind does not:
- guarantee compliance with all applicable laws (e.g. consumer information rules, labelling directives, environmental or safety regulations);
- replace mandatory human review or internal quality processes;
- assume responsibility for the legal sufficiency, accuracy or completeness of your product information.
You remain responsible for:
- ensuring that:
  - products are correctly described and labelled,
  - mandatory information is present and accurate,
  - claims (technical, environmental, marketing) comply with applicable law;
- validating any PageMind outputs (attributes, texts, evidence packs) before publishing them to your PIM, e-commerce platforms or other channels.
Where regulators, marketplaces or other third parties challenge product information, that is generally a matter between them and you as the controller and publisher of that information. Our role is to provide tools that support your workflows and reduce errors, not to underwrite legal compliance.
14.2 Emplo
Emplo is an AI assistant for job-seekers, not:
- an employer,
- a recruitment agency placing candidates with specific employers, or
- a provider of legal or immigration advice.
In particular:
Emplo:
- helps candidates analyse their CVs and profiles;
- suggests and prioritises job opportunities on third-party platforms;
- drafts or refines application materials (CV variants, cover letters);
- may, in future, automate parts of the application process under explicit user control.
Emplo does not:
- guarantee any particular employment outcome (interviews, offers, salary levels);
- make hiring decisions;
- represent candidates to employers as an agency;
- provide binding legal advice about employment rights, visas or work status.
Candidates remain responsible for:
- the truthfulness, accuracy and legality of the information they provide (CVs, claims of skills and experience, documents);
- ensuring that automated actions (if enabled) align with job-board terms and do not constitute spam or abusive behaviour;
- seeking qualified professional advice where required (e.g. on visas, employment law, discrimination issues).
Emplo is intended for adult users capable of entering into employment contracts in their jurisdiction. It is not directed to children under 16 (or any higher minimum age defined in the applicable local law), and we do not knowingly offer the service to such individuals.
Pricing models and fee structures for Emplo may vary by jurisdiction in order to comply with local rules on employment services and fees charged to job seekers. The applicable terms and any jurisdiction-specific restrictions are described in the Emplo Terms of Service.
14.3 General limitations and disclaimers
Across all products:
- No system can be guaranteed error-free or invulnerable to attack.
- AI-generated outputs, even with strong safeguards, may be incomplete, outdated, misleading or simply unsuitable for a particular context.
- We provide tools and infrastructure; how you use them, and how you interpret and act on outputs, is largely under your control.
Our Terms of Service, DPA and any MSA contain the binding and detailed provisions on:
- warranties and disclaimers,
- caps and exclusions of liability,
- indemnities,
- specific remedies (e.g. service credits),
- governing law and jurisdiction.
Those documents prevail over any simplified or high-level descriptions in this Overview.
15. Contacts, vulnerability reporting and document updates
15.1 Security contact and vulnerability disclosure
If you believe you have discovered a security vulnerability or incident affecting our systems or your data:
- please contact our security team at the address indicated on our website (e.g. security@[domain]),
- include as much detail as reasonably necessary for us to investigate (affected components, steps to reproduce, any evidence),
- avoid public disclosure until we have had a reasonable opportunity to investigate and address the issue.
We aim to:
- acknowledge receipt promptly,
- triage the report and assess its severity,
- keep you informed in general terms (subject to confidentiality and security considerations),
- credit responsible reporters appropriately if we later publish a security notice or changelog.
15.2 Privacy and data protection contact
For questions about this Overview, data protection, or data subject requests where we act as controller, you can reach our privacy/data protection contact at the address indicated in our Privacy Policy (for example, privacy@[domain] or dpo@[domain]).
For controller/processor questions (DPAs, sub-processors, data flows) in a B2B context:
- your primary point of contact is usually your commercial or customer-success contact, who can involve security/legal as needed;
- for formal notices, please follow the procedures and addresses in the MSA/ToS.
15.3 Abuse and misuse reporting
If you encounter or suspect:
- abuse or misuse of our products (e.g. Emplo accounts used for spam or fraud),
- content that violates our Acceptable Use Policy,
- attempts to compromise other users via our services,
you can report it to an abuse contact (e.g. abuse@[domain] or via a dedicated form). We will assess the report and, where needed, take action such as account suspension, IP blocking, or notifying affected parties.
15.4 Updates to this Overview
We may update this Security & Data Protection Overview to:
- reflect changes in our products or infrastructure;
- incorporate new legal or regulatory requirements;
- clarify or expand explanations of existing controls.
Each version will include:
- a “last updated” date at the top; and
- a brief change summary in a changelog section or at the bottom of the page, highlighting materially significant changes (for example, new categories of data processed, significant shifts in hosting locations, or changes to key sub-processor arrangements).
Where changes are likely to materially affect existing customers’ risk assessments, we will use reasonable efforts to draw attention to them (e.g. via in-product notices or email), in addition to updating the page itself.
