
A CCO Perspective on Artificial Intelligence

Artificial intelligence – you’re surely seeing a lot of stories and opinions on this very relevant topic. Many of the stories you’ve read cover both the good (optimizing business operations, incorporating predictive analytics, etc.) and the bad (large language model (LLM) “hallucinations”, plagiarism concerns, misinformation, bias, etc.). Between the evolving regulatory landscape and the powerful ways AI can enhance business operations, AI will continue to be a topic of conversation for business leaders – and compliance professionals should be at the center of those discussions.

Most risk and compliance leaders realize the potential for AI to enhance the business – but with great technological capability comes the need for proper governance. In fact, the topic of how to ethically use AI in the workplace is so salient that it is the subject of one of the keynote sessions at the NAVEX Next Virtual Conference on September 20, 2023, featuring Dr. Rumman Chowdhury, a lauded thought leader and expert on ethical AI.

Register for NAVEX Next Now

To further explore the nuances of this top-of-mind topic, I recently sat down with NAVEX to answer some burning questions about how to respond to AI use in business.

There is no “one-size-fits-all” approach to many things in business, and the same is certainly true for AI. So, how should organizations approach the journey to ethical AI given their use cases, business needs, etc.?

The first thing to do from a compliance perspective is to perform a risk assessment. We start by asking whether AI is currently being used in the business and, if so, how. Is there documentation? Are there current rules about its use?

Next, we ask whether there are planned AI projects, and whether any documentation, rules or policies exist that will apply to them.

Lastly, we evaluate what needs to be done to ensure the use of AI follows, or will follow, (1) the law, (2) the company’s values, and (3) ethical business practices.

When it comes to AI, the law is changing all the time. The proposed EU Artificial Intelligence Act will create new regulatory obligations. The U.S. recently held congressional hearings on the subject and is considering how to implement effective regulations. Compliance must pay close attention to these new laws so that it can advise appropriately on planned activities. The EU GDPR already applies to some AI-related activities, like the automated review of resumes to find the best candidates – there is a lot to follow.

The bottom line is that it’s critical for compliance to know the planned business activities so it can respond quickly and appropriately. Just because an activity is legal doesn’t mean that it fits the company’s values or ethical business practices. AI activities need to be reviewed using those criteria as well.

How can businesses effectively establish guardrails for their AI program that balance the excitement for AI’s possibilities with responsible use of the technology?

Any guardrail put around AI needs to include reference to the three issues named above – compliance with (1) the law, (2) the company’s values, and (3) ethical business practices.

Guardrails should never be too prescriptive, because business practices and activities in this space change constantly. People working with AI need to be trained to consider these three focal points, and the focal points should be written into a policy document or advisory note that is readily available on an intranet site or other internal repository.

Do organizations have an ethical obligation to help their employees understand the planned impacts of AI in the organization? If so, what’s the best way to approach this communication/conversation?

Absolutely – any organization using or considering AI has an ethical obligation to help employees understand its planned impacts on the organization. It also has an obligation to train employees on the pitfalls of AI use and the red flags to look out for based on the business activities occurring or being considered. Training can be done in person, via eLearning, or through webinars, and communication can take many forms. The important thing is reinforcement: people learn through the reinforcement of key ideas, and it is critical for the company to ensure everyone understands their ethical obligations with respect to AI usage.

Want to learn more?

The September 20th NAVEX Next Virtual Conference will explore the topic of AI in several ways, including ethical AI and how to leverage AI in the workplace to streamline operations and optimize compliance programs. To view the full agenda and save your spot:

Register Here

