From the course: Ethics in the Age of Generative AI

Preparing C-Suite in directing responsible AI

- CEOs and the C-suite play a critical role in building cultures of responsible AI. They set the tone by establishing practices and principles, and they ensure that every individual in the organization feels like they're a part of making ethical decisions. Earlier in this course, we discussed the example of Alice Wong and her company, which faced a critical dilemma around the deployment of an AI chatbot. In that instance, if we were advising the C-suite of the company, we'd start with the following recommendations. First, make sure that a responsible AI policy and governance framework is in place. This is a statement from the C-suite about how the organization should design and manage AI technologies. It should describe how to make ethical decisions, it should protect privacy, and it should focus on eliminating or reducing bias. For example, the C-suite might mandate that AI tools are trained on diverse data sets, or they might require that chatbots are always identified as AI rather than impersonating a human customer support agent. These guiding principles create a shared set of values that everyone, from data scientists to supervisors to field staff, can use to evaluate and guide the deployment of artificial intelligence.

Next, we might advise the C-suite to provide, and maybe even mandate, responsible AI training and education for every person in the organization. Democratizing decision-making around AI tools in this way can be very powerful, because it brings the business knowledge present in frontline service roles to bear on training and developing internal models. For example, Alice Wong relied on customer service agents with years of direct experience solving consumer challenges to validate the recommendations of the AI models. Empowering these agents to understand the limitations of the model can increase the quality of their feedback.
Then, the C-suite should insist on building ethical AI elements into all of their technologies and conducting regular audits. The C-suite can identify specific metrics, such as customer satisfaction, and create regular reporting mechanisms to ensure that the company's AI practices are aligned with responsible AI principles. Here's an example of what that might look like: setting up monthly standups where technology executives join the C-suite and present ethical challenges that have emerged in the past month. This could be the start of a dialogue, with C-suite executives understanding and documenting ongoing interventions and improving the ethical nature of the product. Much like safety practices in other industries, this socializes ethical AI development and makes it a shared, accountable responsibility.

Finally, the C-suite might consider hiring a chief AI ethics officer. The company might establish a specific senior role focused on AI ethics that can develop and oversee the use of responsible AI practices and serve as a central audit function for other departments. This person should understand the intersection of the business, the technology, and the customer experience. They could provide a check and balance for technology development, ensure that community voices are present, and ensure that potential risks are identified early in the creation process.

The C-suite sets the tone for responsible AI across the organization: creating strong policies, ensuring that there's appropriate training, establishing monitoring and reporting mechanisms, and potentially creating roles focused on AI ethics. With the C-suite taking primary responsibility for guiding responsible AI, we can build cultures that focus on ethical decisions, even as we deploy great new products.