For AI-driven companies, the growing interest from different stakeholders has made the responsible application of AI an urgent concern. Responsible artificial intelligence (RAI) emerged as a practice that guides the design, development, deployment, and use of AI systems so that they benefit users and those impacted by the systems’ outcomes. This benefit is achieved through trustworthy models and strategies that embed ethical principles, ensuring compliance with regulations and standards and building long-term trust. However, RAI faces a lack of standardization regarding which principles to adopt, what they mean, and how to operationalize them.
This survey aims to bridge the gap between principles and practice by studying the different approaches taken in the literature and proposing a foundational framework.
The paper will soon appear in a special issue of IEEE Intelligent Systems dedicated to AI Ethics and Trust: From Principles to Practice. Unfortunately, it will only be available to subscribers, but do not hesitate to contact us for more information or for the pre-print version!
Maryem Marzouk, Cyrine Zitoun, Oumaima Belghith, and Sabri Skhiri, "The Building Blocks of a Responsible AI Practice: An Outlook on the Current Landscape," IEEE Intelligent Systems (2023).