Managing the Risks of Generative AI

Generative AI (GenAI) has become immensely popular, but its adoption by businesses carries built-in risks. Organisations must have an ethics policy in place to guide users in applying GenAI responsibly.

GenAI technology has the potential to transform the way we learn and work. In business, it can change how organisations interact with their users, suppliers and other stakeholders, and drive growth. Functions across the business, such as HR, finance, sales, marketing, logistics and customer support, are exploring ways to unleash the potential of GenAI for the benefit of the organisation as a whole.

However, leaders need a secure and trusted way to let users access these new technologies. They carry potential security risks, and the way the underlying models are trained can introduce biases into the final outputs. Businesses need to ensure an ethical, transparent use of these technologies.

Business use of GenAI tools differs from the way individuals access the same tools. Businesses must adhere to the regulations of their industries, and they must keep the legal and ethical implications in mind when an output is inaccurate. They need a complete framework for how to use GenAI tools and how to align that use with existing regulations. This framework should be transparent, fair, responsible, accountable and, most importantly, reliable. These principles should be built into operations through responsible product development to mitigate potential harms and maximise potential benefits.

Here are a few guidelines that organisations can use to operationalise the responsible use of GenAI as they develop products and technology with it.

Guidelines for ethical use of GenAI.

These guidelines cover five important areas and help organisations consider the risks as these tools become mainstream.

  • Accuracy. Organisations should train GenAI models on their own data to deliver results that can be verified. It is important to communicate when there is uncertainty about GenAI outputs and when they need to be validated by people. This communication should include the sources that were used, explain why the AI produced the output it did, and highlight the uncertainty. This builds guardrails around the results and informs competent people so they can make the right decision (a minimal sketch of such a guardrail follows after this list).
  • Safety. Every effort must be made to eliminate biases, toxicity and harmful outcomes by conducting explainability assessments. Organisations must take measures to protect private information and prevent potential harm. Planned, periodic security assessments must be carried out to probe the system for vulnerabilities and prevent its exploitation.
  • Honesty. When collecting data to train GenAI models, respect the source of the data and ensure you are authorised to use it, whether it is open-source or user-provided. Results that are shared should state explicitly that they are AI-generated output.
  • Empowerment. GenAI should play a supporting role, though there may be use cases that justify its use in a fully automated process. In businesses where trust is an important requirement, such as finance, people must make the final decision, informed by the data-driven insights AI tools provide. The outputs of the program should also be accessible to all.
  • Sustainability. GenAI is generally built on large language models (LLMs). As companies develop their own models, they should focus less on the sheer volume of data and more on the accuracy of the model, using well-curated, high-quality data. Smaller, more accurate models consume less energy, which reduces the carbon footprint.
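
As a minimal illustration of the Accuracy principle above, the Python sketch below shows one way to bundle cited sources and an uncertainty score with each GenAI output and route anything weakly grounded to a human reviewer. The field names (`sources`, `confidence`) and the 0.8 threshold are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIAnswer:
    """A generated answer bundled with the context a reviewer needs."""
    text: str                                          # the generated answer itself
    sources: list[str] = field(default_factory=list)   # documents the answer was grounded on
    confidence: float = 0.0                            # hypothetical score from the model or a verifier

def route_answer(answer: GenAIAnswer, threshold: float = 0.8) -> str:
    """Decide whether an answer can be released directly or needs human validation."""
    if not answer.sources:
        return "needs_human_review"       # no cited sources: always escalate to a person
    if answer.confidence < threshold:
        return "needs_human_review"       # uncertain: flag it and say why
    return "publish_with_citations"       # confident and sourced: release with its sources

# An ungrounded, low-confidence draft gets routed to a human reviewer.
draft = GenAIAnswer(text="Q3 revenue grew 12%.", sources=[], confidence=0.55)
print(route_answer(draft))   # -> needs_human_review
```

The point of the wrapper is simply that an output never travels without its sources and its uncertainty, so the competent people mentioned above have what they need to decide.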

Integrating GenAI.

When businesses choose to use ready-made, off-the-shelf tools and integrate them with their systems and processes, the following guidelines describe the precautions to take so that the tools are integrated safely with internal systems.

  • Use zero-party or first-party data: Companies should use data that customers provide proactively (zero-party) and data they collect directly (first-party). The provenance of data sources is crucial: the data must be accurate, reliable, original and trusted. Relying on third-party data makes it very difficult to ensure accuracy (a small sketch after this list shows one way to filter datasets by provenance and age).
  • Keep data fresh and well-labelled: AI is only as good as the data it is trained on. Models will produce inaccurate results if they are based on old or incomplete data, and the data should also be free of biases. Businesses should review all the datasets and documents used to train their models and ensure they are free from bias, toxicity or inaccuracies.
  • Ensure human supervision in the process: Just because something is automated does not mean there should be no human involvement. Humans need to interpret the outputs, check them for accuracy and confirm the system is working as planned. GenAI programs should be seen as tools that augment human capabilities rather than displace them.
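
The following sketch applies the first two guidelines above: keeping only consented zero-party and first-party records that are recent enough to train or ground a model on. The helper name `usable_for_training` and the 365-day freshness limit are made-up assumptions for illustration, not a prescribed policy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    source: str         # "zero_party", "first_party" or "third_party"
    collected_on: date  # when the data was gathered
    consent: bool       # whether the customer authorised this use

def usable_for_training(rec: Record, max_age_days: int = 365) -> bool:
    """Keep only consented zero-/first-party data that is reasonably fresh."""
    fresh = (date.today() - rec.collected_on) <= timedelta(days=max_age_days)
    return rec.consent and fresh and rec.source in {"zero_party", "first_party"}

records = [
    Record("first_party", date.today() - timedelta(days=30), consent=True),
    Record("third_party", date.today() - timedelta(days=30), consent=True),   # dropped: third-party
    Record("zero_party", date.today() - timedelta(days=800), consent=True),   # dropped: stale
]
training_set = [r for r in records if usable_for_training(r)]
print(len(training_set))   # -> 1
```

A real pipeline would also record the bias and toxicity review of each dataset, but the filter above captures the provenance and freshness checks the guidelines describe.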

Businesses play a critical role in ensuring that these tools enhance the working experience of their employees. They should ensure AI is used responsibly: accurately, safely, without bias and with risks mitigated. This commitment should extend beyond individual business goals and encompass broader social responsibilities.

Test & test ….

GenAI needs to be tested for accuracy and other factors on an ongoing basis. Companies can automate the review process and develop standard mitigations for specific risk cases, while humans remain involved in checking the outputs for accuracy and bias. If resources are constrained, they should prioritise testing the models that could be the most harmful (a sketch of such an automated review pass follows below).
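
As a rough sketch of what automating part of that review could look like, the Python below checks a set of captured outputs against a hand-maintained list of risky phrases and reviews the higher-risk models first. `BLOCKED_TERMS`, the model names and the `risk_level` ranking are all hypothetical stand-ins for whatever checks and triage a company actually defines.

```python
from dataclasses import dataclass

# Illustrative list of phrases this business treats as high-risk in outputs.
BLOCKED_TERMS = {"guaranteed returns", "medical diagnosis"}

@dataclass
class ModelUnderTest:
    name: str
    risk_level: int             # higher = more potential for harm
    sample_outputs: list[str]   # outputs captured from a fixed set of test prompts

def review(model: ModelUnderTest) -> list[str]:
    """Flag outputs containing blocked terms; flagged items go to human reviewers."""
    return [o for o in model.sample_outputs
            if any(term in o.lower() for term in BLOCKED_TERMS)]

models = [
    ModelUnderTest("support_bot", risk_level=1, sample_outputs=["Your order has shipped."]),
    ModelUnderTest("advice_bot", risk_level=3,
                   sample_outputs=["This fund offers guaranteed returns."]),
]

# With limited resources, test the riskiest models first, as the article suggests.
for m in sorted(models, key=lambda m: m.risk_level, reverse=True):
    flagged = review(m)
    if flagged:
        print(f"{m.name}: {len(flagged)} output(s) routed to human review")
```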

Feedback.

Feedback from employees and impacted communities is crucial to identify risks and make the necessary corrections. Companies should create channels for employees to share their concerns and for users to report their feedback, and keep open lines of communication with stakeholders to avoid unintended consequences.

With GenAI becoming mainstream, businesses have to ensure that they use the technology ethically and mitigate potential harm quickly. Companies should stick to an ethical framework to navigate this period of rapid transformation while regulations are put in place to guard users against the harmful effects of inaccurate GenAI results.

Source: "Managing the Risks of Generative AI" by Kathy Baxter and Yoav Schlesinger, Harvard Business Review, June 2023.
