Last updated: November 03, 2025
Purpose of artificial intelligence (AI) use at idloom
idloom integrates AI in a limited and responsible way to improve the user experience across our SaaS platform.
The first AI feature currently available is the idloom Help Center Assistant — an intelligent support tool that helps users find relevant answers in our documentation more quickly and intuitively.
The assistant:
Our role under the EU AI Act
Under the EU Artificial Intelligence Act, idloom acts as the Provider of a limited-risk AI system.
This means we design, deploy, and maintain the AI assistant in compliance with transparency, safety, and ethical requirements defined by EU law.
Our customers who use the platform interact with the assistant as Deployers of the system.
When you interact with the AI assistant, you will always see a clear notice informing you that you are communicating with an AI system.
Example message:
“You are chatting with an AI assistant trained on idloom’s official documentation. Its responses may not always be accurate — please verify important information or contact our support team.”
This notice fulfils Article 50 of the EU AI Act (transparency obligations for AI systems that interact with people or generate text).
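For illustration only, the TypeScript sketch below shows one way a chat front end could surface this notice before any AI-generated reply. It is not idloom's implementation; the message type and the startAssistantSession function are assumptions, and only the disclosure text itself is taken from this policy.

```typescript
// Minimal sketch: prepend the Article 50 disclosure to every new chat session.
// The ChatMessage shape and startAssistantSession() are hypothetical.

type ChatMessage = {
  role: "system-notice" | "user" | "assistant";
  text: string;
};

const AI_DISCLOSURE: ChatMessage = {
  role: "system-notice",
  text:
    "You are chatting with an AI assistant trained on idloom’s official documentation. " +
    "Its responses may not always be accurate — please verify important information " +
    "or contact our support team.",
};

// Every session starts with the disclosure, so the user is informed
// before any AI-generated content is shown.
function startAssistantSession(): ChatMessage[] {
  return [AI_DISCLOSURE];
}

// Example usage: the transcript always begins with the notice.
const transcript = startAssistantSession();
console.log(transcript[0].text);
```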
The Help Center assistant only accesses idloom’s own documentation and knowledge base (user guides, FAQs, release notes, and help articles).
It does not use customer event data, attendee data, or personal information.
Interactions are logged for security and performance monitoring only.
Personal data (if any) contained in user queries is neither stored nor used to train AI models.
All processing occurs in secure environments compliant with GDPR and ISO 27001 standards.
For more information about how we process personal data, please consult our Privacy notice.
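As an illustration of the logging commitment above, the sketch below shows one way a service could record only security and performance metadata for each query, so that the query text itself (and any personal data it might contain) is never written to logs or reused for training. This is a minimal sketch under those assumptions; QueryLogEntry and buildLogEntry are hypothetical names, not part of idloom's codebase.

```typescript
// Hypothetical sketch: log only non-personal metadata about assistant queries.
// The query text, which could contain personal data, is never persisted,
// and nothing here is fed back into model training.

import { createHash } from "node:crypto";

interface QueryLogEntry {
  timestamp: string;      // when the query was handled
  sessionHash: string;    // opaque hash, not reversible to a user identity
  latencyMs: number;      // performance monitoring
  status: "ok" | "error"; // security and reliability monitoring
}

function buildLogEntry(
  sessionId: string,
  latencyMs: number,
  status: "ok" | "error"
): QueryLogEntry {
  return {
    timestamp: new Date().toISOString(),
    // Hash the session identifier so log entries cannot be linked back to a person.
    sessionHash: createHash("sha256").update(sessionId).digest("hex"),
    latencyMs,
    status,
  };
}

// Example: the user's question is handled in memory and never logged.
const entry = buildLogEntry("session-1234", 420, "ok");
console.log(JSON.stringify(entry));
```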
The AI assistant uses large language models (LLMs) provided by reputable vendors such as OpenAI, Gemini (Google), or Mistral, integrated through secure APIs.
All AI components are hosted under idloom’s ISO 27001-certified Information Security Management System (ISMS).
Access to the AI environment is restricted, logged, and regularly reviewed by idloom’s security team.
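Purely as a sketch of this kind of integration, the TypeScript below shows a thin provider-agnostic layer: the assistant depends on a single LlmClient interface, and the concrete vendor (OpenAI, Gemini, Mistral) is selected through configuration. The interface, function names, and stubbed responses are assumptions for illustration, not idloom's actual integration code.

```typescript
// Hypothetical sketch of a provider-agnostic LLM gateway.
// The assistant code depends only on LlmClient, so the underlying vendor
// can be swapped via configuration without touching the assistant itself.

interface LlmClient {
  complete(prompt: string): Promise<string>;
}

type Provider = "openai" | "gemini" | "mistral";

function createClient(provider: Provider): LlmClient {
  // A real adapter would call the vendor's HTTPS API with a server-side key;
  // the response is stubbed here so the sketch stays self-contained.
  return {
    complete: async (prompt: string) => `[${provider} response to: ${prompt}]`,
  };
}

// Example usage: the active provider comes from configuration, not code.
async function answerFromDocs(question: string): Promise<string> {
  const client = createClient("mistral");
  // In a real system the prompt would be grounded in idloom's documentation only.
  return client.complete(`Answer using idloom documentation: ${question}`);
}

answerFromDocs("How do I duplicate an event?").then(console.log);
```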
The AI assistant is designed to support — not replace — human expertise.
Users can always reach a human support agent for verification or clarification.
idloom’s support and compliance teams monitor AI outputs for accuracy, relevance, and ethical consistency.
Any detected issue triggers a review under our internal incident-response process.
AI operations are integrated into idloom’s ISMS, which includes:
Continuous monitoring and vulnerability testing;
Role-based access controls and encryption of data in transit and at rest;
Regular audits under ISO 27001.
idloom’s AI systems are developed and operated in line with the EU’s Ethics Guidelines for Trustworthy AI.
We adhere to the following principles:
Human-centric design: AI assists users but never replaces their judgment.
Fairness and non-discrimination: Outputs are based only on verified technical content, avoiding personal or sensitive data.
Transparency: Users are always informed of AI interaction.
Accountability: idloom remains fully responsible for the behaviour and outcomes of its AI systems.
Additional AI-powered features (such as event configuration assistants or automated analysis tools) might be developed within the “Code Barnum” project, based on customer requests.
Each new AI component will undergo a dedicated risk assessment and compliance review before release.
We will update this page to reflect any changes in scope or functionality.
idloom is committed to deploying artificial intelligence responsibly — enhancing efficiency and accessibility without compromising ethics, privacy, or trust.
If you have questions about AI use at idloom or wish to report an issue, please contact:
privacy@idloom.com (Data Protection)
compliance@idloom.com (Compliance)
I hereby accept the AI Policy.
Read and approved,
Date:

For idloom
Full name:
Title:

The customer
Company:
Full name:
Title: