How does your compliance and ethics program deal with “conformity bias”?
In a blog post on the Ethics Unwrapped website, published by the University of Texas’ McCombs School of Business, Prof. Robert Prentice reviews some important recent research on the behavioralist phenomenon of “conformity bias” – “the tendency of people to take their cues as to the proper way to think and act from those around them.” As he describes, in one experiment conducted by Francesca Gino, “students were more likely to cheat when they learned that members of their ‘in-group’ did so, but less likely when learning the same about members of a rival group.” In a related vein, a study by Scott A. Wright, John B. Dinsmore and James J. Kellaris showed that the identity of the victim was also influential in forming individuals’ views of cheating – and specifically that “in-group members who scammed other in-group members were judged more harshly than in-group members who scammed out-group members.” (Citations/links to these and other studies on conformity bias can be found in Prentice’s post – which I encourage you to read.)
As with various other behavioral ethics concepts previously reviewed in the COI Blog, the ideas here may seem obvious (“When in Rome…”) – but being able to prove the points with data could help C&E officers get the attention they need in their companies to deal with conformity-bias-based ethical challenges. But even if the leaders in their organizations agree that something should be done about conformity bias, what is that something?
One step in this direction – which potentially covers a lot of ground – is to include a conformity bias perspective in C&E risk assessment. For instance, where, based on the findings of a risk assessment, the victims of a particular type of violation are likely to be seen more as out-group members than in-group ones, that may suggest the need for extra C&E mitigation measures (of various kinds) to address the risk area in question. Similarly, risk assessment surveys should (as many, but not all, currently do) target regional or business-line-based employee populations that may be setting a bad example for other employees. Additionally, one should – for the purposes of identifying conformity-bias-based risks – consider whether for some employee populations the most relevant in-group is defined less by the culture of your organization than by that of their industry, as industries (as much as companies or geographies) can have unethical cultures (as suggested most recently in this Wall Street Journal story on the LIBOR manipulation scandal).
More broadly, just as the sufficiency of internal controls (policies, procedures, etc.) needs to be assessed in any analysis of risk, so does that of “inner controls,” which is another way of thinking about how various behavioral-ethics-related factors diminish or enhance the risk of C&E violations. That is, the weaker the inner controls (based not only on conformity bias but on other risk-causing phenomena, behavioralist or otherwise), the greater the need for traditional internal controls.
A second such type of measure – which also is potentially broad – is in the realm of training and communications, and specifically finding ways to highlight the connections employees may have to those who otherwise are likely to be viewed as out-group members. The good news here, as Prentice writes, is that “[a]mong the most interesting findings in this entire line of research is how little it takes for us to view someone as part of our in-group, or of an out-group.”
At least in theory, this seems to underscore the benefits of a broad “stakeholder” approach to C&E. Ultimately, however, what may be needed here is less the skills of those who draft codes of conduct than of those who can reach us on a deeper level regarding how we should really view our “group” membership – as was perhaps most famously done by Charles Dickens.