Ledgy AI Policy
1. Introduction
Policy Statement
Ledgy is committed to using Artificial Intelligence (AI) technologies in a responsible and ethical manner.
We recognise the potential of AI to enhance our operations and improve the experiences of all participants in our ecosystem. This Policy outlines our approach to ensuring that AI systems are developed and used in a manner that aligns with our organisational values, meets applicable regulatory requirements and delivers value to our clients while mitigating potential risks.
Scope and Applicability
This Policy:
- Applies to all Ledgy services that utilise AI
- Must be adhered to by all Ledgy employees, contractors and partners involved in AI-related activities or utilising AI technologies
- Provides guidance to Ledgy clients on how to utilise the AI capabilities within Ledgy’s platform
- Governs the entire AI lifecycle at Ledgy, from conception to retirement
Regulations
We maintain an up-to-date registry of relevant AI regulations in Switzerland, the UK and Germany, and regularly review our AI systems and processes against these regulations.
General Usage Principles
- AI use cases across all aspects of our business: We enable teams to adopt generative AI in a variety of ways to accelerate their work, including but not limited to content generation, code generation, chatbots, workflow automation, automated answers and product-embedded AI features.
- Human-centred approach: We prioritise a human-centred approach to AI that empowers individuals and teams. AI technologies will assist and augment human capabilities, not replace human creativity and decision-making.
- Transparency: Through this Policy, we inform users when and how AI is being used as part of their Ledgy experience. Where AI-generated output is produced on the Ledgy platform, we also explicitly inform users of this.
- Governance and Accountability: Ledgy has an established governance framework ensuring accountability for AI usage and development within the business. We ensure that humans remain at the centre of all AI applications, providing critical oversight and judgment.
- Bias and Fairness: We take steps to mitigate bias in our systems and ensure that our use of AI is fair and equitable. We monitor our systems and workflows to ensure that we have not inadvertently introduced bias that could result in any group or population being discriminated against.
- Data Privacy and Security: All data used within our AI systems is handled in a secure and responsible manner, in compliance with all relevant data privacy and security regulations and with the requirements of our security certifications.
- Protection of IP and Confidential Data: We prohibit the input of sensitive Ledgy or client data into public or open-source generative AI tools without proper vetting and approval.
- Continuous Monitoring and Improvement: We continuously monitor our use of AI and evaluate its impact on our operations and clients, using this information to make improvements to our systems and processes.
Below, we describe how we embed these principles into our policies and processes at Ledgy.
2. Transparency and Use Cases
This Policy is itself designed to provide transparency about Ledgy’s use of AI.
Ledgy has developed and utilises AI within its platform in the following ways:
- Structured data extraction from PDF documents
- Chatbot for company admin support (powered by Intercom’s Fin AI)
For further transparency, where AI is used to generate output through chatbot functionality, users can ask the AI to explain the logic it used to provide an answer; Ledgy nonetheless maintains the confidentiality of its proprietary AI algorithms.
Ledgy also utilises off-the-shelf third-party AI tools for business efficiency. Such tools are pre-vetted and approved by the Data Governance Committee, and their use must comply with our data privacy and security policies.
3. AI Governance Structure
Ledgy has established a Data Governance Committee whose remit includes AI usage and development across the business, comprising the Head of InfoSec, Head of Legal, Head of Engineering, Principal Platform Engineer and Head of Revenue Operations. This Committee meets at least quarterly to review AI initiatives, assess risks, and ensure alignment with company values and evolving regulatory requirements.
Ledgy will, via the Data Governance Committee, perform regular audits of AI outputs and of questions raised by users, and will analyse feedback provided by clients and users to improve the ethics and efficacy of its AI systems.
4. Bias and Fairness
Ledgy’s AI systems are currently built simply to summarise existing data and/or to direct users to sections of the platform, rather than to provide advice. We therefore do not expect any meaningful risk of bias in their outputs.
However, clients should ensure that their users refrain from asking Ledgy’s AI systems for advice and do not treat any output of Ledgy’s AI systems as advice. Clients must also adhere to the data privacy requirements of our Terms of Service, which include refraining from entering any special category or sensitive personal data into Ledgy’s platform (including any AI systems), such as data relating to the race, ethnicity, religious beliefs, political affiliations or sexual orientation of stakeholders.
Clients should also report to Ledgy any identified incidents arising out of Ledgy’s current AI use cases - including any instances of inaccurate responses - for our review and resolution.
5. Data Privacy and Security
Ledgy complies with all applicable data privacy and security laws in the countries in which it operates, and maintains certification to recognised security standards, including ISO 27001.
Data Classification
Our data governance documentation identifies data that must be treated with additional privacy and/or security measures before being input into an AI tool.
Experimentation on public data is encouraged. Systems using information with a more confidential or sensitive classification must be vetted by the Data Governance Committee.
Please see the addendum to our Data Processing Addendum for a detailed description of our full security and data handling procedures.
6. IP and Confidentiality
Ledgy respects intellectual property rights and confidentiality by ensuring that any client data input into AI tools within our platform is processed strictly within that client’s secure tenant environment and is not used to train any external or general-purpose AI models.
Additionally, where our employees utilise third-party AI tools, we have robust policies and oversight in place to prevent the disclosure of confidential or proprietary information, and to ensure such data is not used to train or improve external AI systems.
7. Third Party Management
We assess all AI vendors and third-party tools against our AI ethics and performance standards and conduct periodic audits of third-party AI systems used in Ledgy’s operations.
8. Continuous Monitoring and Improvement
Ledgy adopts a standardised AI development lifecycle as follows:
- Planning and requirements scoping
- Testing and validation
- Deployment
- Monitoring and maintenance
We conduct rigorous testing of AI systems across a variety of scenarios relevant to each system’s domain.
A formal validation process is implemented before any AI system is made available to clients.
Our Data Governance Committee is responsible for continuous monitoring and improvement across the business, including human expert review of AI-assisted outputs on a sample basis.
We also collect and analyse user feedback and incorporate validated feedback into system improvements.
9. Policy Review and Contact
Ledgy periodically reviews and updates this Policy as needed to ensure it remains effective and complies with applicable laws and regulations.
Clients should notify Ledgy immediately by emailing security@ledgy.com if Ledgy’s AI has led to a clear error, whether as a result of incorrect information, misrepresentation, misinformation, a lack of transparency or a lack of fairness.