Artificial intelligence has transformative potential for deriving value and insights from data, the core element of our business. Our commitment to maintaining the highest standards of privacy, security, and responsibility will guide a thoughtful and just approach to AI that builds trust while accelerating innovation, ensuring customer success, and demonstrating a commitment to excellence. InterSystems has developed a set of principles to guide the incorporation of artificial intelligence into our products, driving innovation with AI systems and clarifying AI's purpose as an advisor, an assistant, and an enabler.
InterSystems AI Ethics Principles
To support our commitment to the ethical use of artificial intelligence, we have established a set of AI ethics principles emphasizing transparency, responsibility, and explainability. These principles are integral to fulfilling our social and ethical responsibilities while complying with relevant laws.
To ensure the practical application of these principles, we will actively promote employee awareness of AI ethics through comprehensive training programmes. We have also implemented a rigorous evaluation and governance process that every AI project and product must undergo, designed to avoid harmful bias and discrimination, guard privacy and data ownership, and ensure safety, responsibility, and security.
Our commitment to these principles also includes collaboration with healthcare and technology industry professionals, groups, and external consortia to ensure a comprehensive approach to promoting trust and responsibility in artificial intelligence.
Our Commitment
- We recognise AI as a facilitator of human potential and productivity, enhancing human creativity and decision-making.
- We prioritise education, training, and deployment of AI technologies while safeguarding against the potential for automation to undermine or displace human workers.
- Automation facilitated by AI should complement and support human workers. We are committed to safeguarding against any potential negative impacts on employment and workforce dynamics.
- We advocate for transparency, responsibility, and explainability as foundational elements in establishing trust and fostering innovation within AI systems.
- To build trust in artificial intelligence while encouraging collaboration and experimentation, we are committed to transparency around the integration of AI in our products and projects, contributing to the development of more innovative and ethically sound AI solutions.
- Upholding transparency, responsibility, and explainability is integral to safeguarding patient safety and welfare in AI-powered healthcare products. Transparency around AI-generated content and its source data, including any modifications made to AI-generated content, will instil trust and ensure clarity around the use of AI in healthcare settings.
- We avoid harmful bias and discrimination by ensuring that projects and products using artificial intelligence utilise data from diverse sources and actively seek to mitigate under-representation or marginalisation of certain groups.
- AI systems should prioritise fairness and equity in their outcomes, ensuring that decisions do not disproportionately harm or disadvantage specific individuals or groups.
- We are committed to ensuring that our AI systems incorporate robust mechanisms to identify, analyse, and mitigate biases present in datasets, algorithms, and decision-making processes.
- We will continue to prioritise the protection of privacy and ownership rights concerning data, including data utilised in artificial intelligence systems. Our Global Trust programme serves as a robust framework to fortify data security and privacy measures, fostering trust within our customer relationships.
- InterSystems is fully committed to ensuring compliance with applicable regulations, standards, and laws pertaining to artificial intelligence.
- This commitment underscores our dedication to upholding data protection and privacy standards across all facets of AI development and deployment, ensuring that products and projects are regularly assessed in line with evolving regulatory and legal frameworks.
- We will validate AI algorithms and technologies through rigorous testing and evaluation to ensure their effectiveness and reliability in clinical and other settings.
- We ensure patient safety and welfare through the clinical validity and efficacy of our healthcare products.
- All solutions with AI capabilities must comply with the regulations associated with their scope, use case, and limitations, particularly where they are used for diagnostic purposes subject to medical device regulations.
- We will prioritise evidence-based approaches and collaboration with healthcare professionals to validate AI-driven interventions.