Ethical AI: Pre-regulation considerations

The AI leviathan continues to tower over every data center, with organizations rushing to deploy AI-based solutions for immediate benefits, or building the infrastructure and models to deliver ambitious long-term returns on research projects. Regardless of where an organization is on its AI journey, the rapid development of this technology has left regulators playing catch-up on how AI should be governed to ensure it is used ethically. There is an urgent need for clarity on liability in the event of errors or unintended consequences, and for legal frameworks that provide guidance on determining responsibility when AI systems cause harm or fail to meet expected standards.

Alex McMullan

CTO International at Pure Storage.

What is ethical AI?

Ethical AI means supporting the responsible design and development of AI systems and applications that do not harm people or society as a whole. While this is a noble goal, it is not always easy to achieve, and it requires in-depth planning and constant vigilance. For developers and designers, key ethical considerations should include, at a minimum, protecting sensitive training data and model parameters from manipulation. They must also provide genuine transparency into how AI models work and how they are affected by new data, which is essential for proper oversight. Whether ethical AI is being tackled by the C-suite of a private company, a government, or a regulatory agency, it can be difficult to know where to start.
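To make the first of those considerations concrete, one common starting point is recording cryptographic checksums of training data and model weights so that tampering or corruption is detectable before a training or inference run begins. The sketch below is illustrative only, assuming a simple file-based workflow; the file names and manifest format are hypothetical, not a prescribed implementation.

```python
# A minimal sketch: record SHA-256 digests of known-good artifacts once,
# then verify them before each run. All paths are illustrative placeholders.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(artifacts: list[Path], manifest: Path) -> None:
    """Write a manifest of known-good digests (run once, at release time)."""
    manifest.write_text(
        json.dumps({str(p): sha256_of(p) for p in artifacts}, indent=2)
    )

def verify_manifest(manifest: Path) -> bool:
    """Return True only if every artifact still matches its recorded digest."""
    expected = json.loads(manifest.read_text())
    return all(sha256_of(Path(p)) == digest for p, digest in expected.items())

# Example usage (file names are hypothetical):
# record_manifest([Path("train.parquet"), Path("model.safetensors")],
#                 Path("manifest.json"))
# assert verify_manifest(Path("manifest.json")), "artifact changed since release"
```

A check like this does not make a pipeline ethical on its own, but it gives auditors and operators a verifiable baseline: any unexplained change to training data or weights is surfaced rather than silently absorbed.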

Transparency as a foundation