With AI, Compliance Should Focus on the "Intelligence" not the "Artificial"

Since the publication of this blog, Google has disbanded ATEAC. Though the council was the catalyst for this post, its dissolution does not change the sentiment expressed here.


In June 2018, Google released an ethical charter in the form of seven AI principles to ensure the company’s artificial intelligence research and products would be responsibly developed. To build on those efforts, last month Google announced the Advanced Technology External Advisory Council (ATEAC) – a governance body charged with ensuring these ethical AI principles are truly embedded into the company’s innovation.

As one of the principal engines driving the evolution and ubiquity of artificial intelligence, Google is a rightful leader in the development of proper governance and oversight structures. But compliance professionals at any size company should take note: As the ethical stewards of the modern-day workplace, compliance officers can and should be at the forefront of developing the acceptable-use frameworks that guide organizations as they adopt new technologies, especially when that new tech may affect ethics and compliance program efforts and the workplace cultures they drive.

The Venn Diagram of Compliance Management & Artificial Intelligence

You’d be hard-pressed to find a compliance professional who is also a programmer, developer or engineer, so it might seem a bit outside the box to connect the compliance function deeply to AI strategy. But as mentioned in NAVEX Global’s 2019 Top 10 Ethics & Compliance Trends eBook, “…compliance professionals do not need to become AI experts…. Compliance professionals are experts on corporate culture, risk mitigation, and change management. It is that expertise that needs to be applied to the adoption of AI solutions.”

When we look at the risks associated with AI through the lens of organizational intelligence, familiar issues start to arise. For instance, unintended bias in facial recognition software can look a lot like workplace discrimination; automated employee performance scoring might create scenarios of digitized manipulation or retaliation; and complexities in accounting AI can create unnecessary shadows in which bribery and corruption take hold. The software and solutions themselves are not necessarily the problem. Problems arise when those solutions create opportunities for abuse or lack proper oversight from an expert with a keen understanding of ethics, compliance and corporate culture.

Here, the “artificial” side of AI can be separated from the “intelligence” side, creating clear intersections where E&C pros can identify and lean into their strengths.

While all of Google’s AI Principles provide insightful guidance applicable to ethics and compliance officers, I will highlight three that readily offer these points of intersection.

Be Socially Beneficial

This principle aims to ensure the benefits of new technology drastically exceed the downsides for all the groups and environments that can be affected directly or indirectly. “Can be” and “indirectly” are the operative words here. With every piece of software in our tech stacks, we know who will be affected directly. However, it is important to play out multiple scenarios of implementation and adoption to uncover groups that may be incidentally affected – positively or negatively – to deliver a more equitable outcome.

Avoid Creating or Reinforcing Unfair Bias

This has been and will continue to be a key area of oversight for compliance governance. Compliance professionals are not the engineers developing AI algorithms, so it is not their biases that will be embedded into the programs; however, they can ensure that the biases of others are not embedded either. This requires a careful and empathetic approach to understand multiple viewpoints on process and outcomes. Compliance professionals are instrumental here, with their diverse arsenal of knowledge and experience on employee perception and behavior.

Be Accountable to People

Here, the charter documents the need to create opportunities for human feedback on AI solutions. Compliance can take that a step further by turning human accountability into a key performance indicator (KPI) for program evaluation. New efficiencies should not create unintended burdens for the employee experience. Employee adoption, along with improved or neutral workplace experiences, should be considered an indicator of success.

People, People, People

You’ll notice that all the principles I highlight here, as well as a number of other principles listed by Google, are directly related to human interaction with AI, not necessarily the technology itself. This should put ethics and compliance practitioners in a seat of comfort when it comes to being leaders on technology within their organizations. Your real-world intelligence will guide the way for artificial intelligence.

Read More: Moving from Speculative to Realistic Conversations on Artificial Intelligence


