Bias

Two very different types of bias topics will be examined in this blog: A) in what circumstances involving business organizations bias should be treated like a traditional COI; and B) how often-unrecognized biases (“cognitive biases”) can inhibit ethical decision making, which is one of the principal teachings of behavioral ethics.

Behavioral ethics, the board and C&E officers

In “Conflicts and Biases in the Boardroom,” recently posted on the Harvard Law School Forum on Corporate Governance and Financial Regulation, Frank Glassner of Veritas identifies five ways that cognitive bias can inhibit great governance:

– A board is reluctant to ask the right questions

– The group is unable to fully and effectively involve new board members

– Excessive deference is afforded to a few board members with a long company history

– Peer pressure and conformance minimize constructive dissent

– Inflexible adherence to tradition limits consideration of new initiatives.

He further writes: “Every board member must acknowledge that implicit biases impact his/her objectivity.”

Not surprisingly (given the focus of the COI Blog), I agree with this. But I also wonder whether there is a place for the compliance and ethics officer in helping to address this daunting area.

In an earlier post I wrote: “Ultimately, for a company to have not only a strong compliance program but also an ethics one, the CEO and other leaders would empower the C&E officer to identify and challenge decisions that may be based on bias. (Note that I don’t mean literally all such decisions, but those that are significant in potential impact and have a meaningful ethics/fairness dimension.) The leaders would do so because they would understand that being fair is not just a matter of good intentions; rather, it can also require expertise and effort – both of which the C&E officer can bring to a challenging set of circumstances.”

Here is another prior post addressing the issue: Behavioral ethics and compliance: what the board should ask.

Finally, here is an index of behavioral ethics and compliance posts generally.

Extra compliance for high-risk-potential employees

In “The Power Few of Corporate Compliance,” 53 Georgia Law Review 128 (2018), Todd Haugh of Indiana University’s Kelley School of Business argues: “Corporate compliance in most companies is carried out under the assumption that unethical and illegal conduct occurs in a more or less predictable fashion. That is, although corporate leaders may not know precisely when, where, or how compliance failures will occur, they assume that unethical employee conduct will be sprinkled throughout the company in a roughly normal distribution, exposing the firm to compliance risk but in a controllable manner. This assumption underlies many of the common tools of compliance—standardized codes of conduct, firmwide compliance trainings, and uniform audit and monitoring practices. Because regulators also operate under this assumption, what is deemed an ‘effective’ compliance program often turns on the program’s breadth and consistent application. But compliance failures—lapses of ethical decision making that are the precursors to corporate crime—do not necessarily conform to this baseline assumption.”

I agree that in many – but certainly not all – companies there is too much emphasis on C&E breadth and too little on depth. Indeed, many years ago, Joe Murphy – who can be justly called “the father of compliance” – cautioned against overreliance on employee compliance surveys with the memorable words “criminal conspiracies do not operate by majority rule.” I also recall a CEO advising me, as I set out to conduct a risk assessment of his company, “not to spread the peanut butter too thinly.” Moreover, compliance program evaluation standards recently issued by the Justice Department’s Criminal and Antitrust Divisions will likely cause more companies to make their respective programs risk-based.

Haugh further argues that: “Extreme failures are more likely the result of small groups of individuals acting unethically or illegally, who by virtue of their social and organizational networks account for an outsized amount of bad conduct, and therefore harm. These individuals are the power few of corporate compliance…Companies seeking to improve compliance, and therefore corporate governance, should no longer focus indiscriminately on organizational culture writ large. Instead of designing compliance programs aimed generally at promoting ethical culture as suggested by the Organizational Sentencing Guidelines and adopted by regulators and compliance professionals, compliance should be approached from a behavioral ethics risk management paradigm. Compliance efforts should target those individuals within the company whose unethical decision-making pose the greatest risk according to behavioral and organizational factors such as job task, leadership role, propensity to rationalize wrongdoing, and social and organizational networks. This risk-based approach may be consistent with current compliance efforts to improve companies’ ‘tone at the top,’ assuming that is where the behavioral ethics risk lies. But it also recognizes that the focus of these efforts may correctly bypass the C-suite in order to lessen the significant compliance risk caused by the power few, wherever they may be within an organization.”

Being of the behavioral persuasion, I generally agree with this too. But I do think there will always be a need for enterprise-wide compliance efforts, as both an operational and a symbolic matter.
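To make the “power few” idea a bit more concrete, below is a minimal, purely illustrative sketch (in Python) of how a compliance team might score employees on behavioral and organizational risk factors and flag the small group warranting enhanced monitoring or targeted training. The factor names, weights, and cutoff are hypothetical assumptions chosen for illustration; they are not drawn from Haugh’s article, and any real scoring model would require far more care in how the inputs are sourced and used.

```python
from dataclasses import dataclass

# Hypothetical weights for the kinds of behavioral/organizational factors Haugh
# mentions (job task, leadership role, propensity to rationalize, network
# position). The factor names and numbers are illustrative assumptions only.
FACTOR_WEIGHTS = {
    "high_risk_job_task": 3.0,     # e.g., sells into high-corruption-risk markets
    "leadership_role": 2.0,        # supervises others / sets the local "tone"
    "rationalization_score": 1.5,  # 0-5, from a hypothetical survey instrument
    "network_centrality": 2.5,     # 0-1, influence within the org's social network
}

@dataclass
class Employee:
    name: str
    high_risk_job_task: bool
    leadership_role: bool
    rationalization_score: float  # 0 (low) to 5 (high)
    network_centrality: float     # 0 (peripheral) to 1 (hub)

def risk_score(e: Employee) -> float:
    """Weighted sum of the illustrative risk factors for one employee."""
    return (
        FACTOR_WEIGHTS["high_risk_job_task"] * e.high_risk_job_task
        + FACTOR_WEIGHTS["leadership_role"] * e.leadership_role
        + FACTOR_WEIGHTS["rationalization_score"] * (e.rationalization_score / 5)
        + FACTOR_WEIGHTS["network_centrality"] * e.network_centrality
    )

def power_few(employees: list, top_fraction: float = 0.1) -> list:
    """Rank employees by risk score and return the top slice, i.e., the small
    group that would receive extra monitoring, training, and attention."""
    ranked = sorted(employees, key=risk_score, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]

if __name__ == "__main__":
    staff = [
        Employee("A", True, True, 3.5, 0.8),
        Employee("B", False, False, 1.0, 0.2),
        Employee("C", True, False, 4.5, 0.6),
    ]
    for e in power_few(staff, top_fraction=0.34):
        print(f"{e.name}: {risk_score(e):.2f}")
```

Of course, the hard part is not the arithmetic but gathering defensible inputs and managing the fairness and privacy concerns that any such list raises – a point I return to below.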

Finally, Haugh identifies other compliance program recommendations based on the “power few” theory, including:

– “Identify employee ethics during the hiring stage.”

– “Identify the power few in the organization…Once identified, these employees are monitored for risk-taking behavior, and how they manage it factors into promotion and compensation decisions.”

– “To ensure unethical employee decision-making is properly targeted, companies ‘need to frame [their] training around . . . specific, risky job tasks.’”

– “Finally, as ethical employees advance in the company and become hubs of influence themselves, they should be leveraged as ‘behavioral compliance ambassadors.’”

These are generally sound ideas and to some extent are already in use (as Haugh notes), particularly the risk-based training one. Still, having what is in effect an employee integrity watch list may be a tough sell in many companies.

Lawyers as compliance officers: a behavioral ethics perspective

What role do corporate lawyers play in preventing wrongdoing by executives in their client organizations? And how is this role impacted by behavioral ethics?

In “Behavioral Legal Ethics Lessons for Corporate Counsel,” to be published in the Case Western Reserve Law Review, Paula Schaefer of the University of Tennessee College of Law first examines “the corporate lawyer’s consciously held conceptions and misconceptions about duty owed to her corporate client when company executives propose a plan that will create substantial liability for the company—when and if it is caught.” As she shows, lawyers often have an unduly limited view of what that duty is.

Schaefer next “turns to behavioral science and highlights some of the key factors that corporate attorneys are unconsciously influenced by as they try to decide how (or if) to address client conduct that may amount to a crime or fraud.” Those factors are:

Attorney self-interest. A key point here: “Corporate advisors keep their jobs (as inside or outside counsel) when they keep executives happy; they do this by finding ways to implement corporate executives’ plans, and not by saying no.” Of course, on some level this is obvious, but, based on the research of Tigran W. Eldred of New England Law School, she notes that lawyers are often not aware of the extent to which self-interest corrupts their professional conduct vis-à-vis clients.

Obedience Pressure. A key point here: “Obedience research explains the power an authority figure or colleagues have to influence bad advice.” The best-known study in this area is, of course, that conducted by Stanley Milgram, which measured the extent to which participants were willing to inflict shocks on apparent learners in the experiment when instructed to do so by an apparent authority figure, and which demonstrated just how powerful obedience pressure can be. As Schaefer notes: “In the case of a corporate attorney addressing planned conduct that may be criminal or fraudulent, the authority figure is likely the corporate executive that the attorney reports to in the professional relationship.” And, as she notes, this is likely to create more pressure than the instruction of some man in a white coat in Milgram’s experiment.

Conformity Pressure. Here, Schaefer describes experiments by Solomon Asch concerning the extent to which participants gave knowingly incorrect answers to a question because other participants had done so. The results showed a high degree of such conformity. As she notes: “Asch’s research should be particularly concerning for lawyers. For Asch’s subjects, the stakes were low—the subjects likely did not know the other participants in the study and had no ongoing relationship with them. Further, the right answer was black and white, and they still felt pressured to choose the wrong answer selected by the majority. For a corporate lawyer addressing possibly fraudulent or criminal conduct, the group (with whom she feels pressure to conform) might be fellow attorneys or other decision makers at the corporation.”

Partisan Bias. Schaefer writes: “The research reveals that partisanship makes it difficult for a lawyer to filter and interpret information objectively. One study found that students who participated in a moot court competition overwhelmingly perceived that their assigned side had the better case. In another study, subjects were asked to play the role of attorney for plaintiff or defendant in determining the settlement value of a case. Even though both sides received identical information, those who were randomly assigned to play the plaintiff predicted an award substantially higher than that predicted by the defendant.”

Schaefer next considers “interventions to combat a corporate attorney’s wrongful obedience and conformity.” All of these seem sound, but I don’t have space to discuss them here.

However, I do want to add that – although it is not the focus of Schaefer’s paper – the research may also be relevant to the longstanding debate about whether the general counsel or another member of the law department should serve as chief ethics and compliance officer (CECO) or whether the individual in that role should be independent for reporting purposes. At least to me, the research suggests that it may be more difficult for in-house attorneys to rise above the potential conflicts in this role than is generally thought.

Of course, even an independent CECO would be subject to the various biases described in this article. However, they would still – in my view – stand a better chance of ethical success, since the notion of independence is truly foundational to their role; i.e., there is presumably not the same confusion about their duty as Schaefer found was the case with in-house attorneys.

Finally, note that I am not saying that this means that the General Counsel can never serve in a CECO role – only that the implications of this research should be considered along with various other factors in determining what approach makes the most sense for a given company.

For further reading:

– The Legal Ethics Blog

– An earlier post from the COI Blog with a different view on lawyers as compliance officers

Ethics made easy

We justly praise those who show true ethical heroism. But to protect business organizations and society generally from legal and ethical breaches, we need to aim our efforts more broadly.

In “How to Design an Ethical Organization,” in the May-June 2019 issue of the Harvard Business Review, Nicholas Epley, John Templeton Keller Professor of Behavioral Science at the University of Chicago Booth School of Business, and Amit Kumar, an assistant professor of marketing and psychology at the University of Texas at Austin, argue: “[F]ew executives set out to achieve advantage by breaking the rules, and most companies have programs in place to prevent malfeasance at all levels. Yet recurring scandals show that we could do better. Interventions to encourage ethical behavior are often based on misperceptions of how transgressions occur, and thus are not as effective as they could be. Compliance programs increasingly take a legalistic approach to ethics that focuses on individual accountability. They’re designed to educate employees and then punish wrongdoing among the ‘bad apples’ who misbehave. Yet a large body of behavioral science research suggests that even well-meaning and well-informed people are more ethically malleable than one might guess…Creating an ethical culture thus requires thinking about ethics not simply as a belief problem but also as a design problem. We have identified four critical features that need to be addressed when designing an ethical culture: explicit values, thoughts during judgment, incentives, and cultural norms.”

The first of these is “explicit values.” Among the key points here are the following:

Strategies and practices should be anchored to clearly stated principles that can be widely shared within the organization. A well-crafted mission statement can help achieve this, as long as it is used correctly. Leaders can refer to it to guide the creation of any new strategy or initiative and note its connection to the company’s principles when addressing employees, thus reinforcing the broader ethical system.

A mission statement should be simple, short, actionable, and emotionally resonant. Most corporate mission statements today are too long to remember, too obvious to need stating, too clearly tailored for regulators, or too distant from day-to-day practices to meaningfully guide employees.

The second design consideration is “thoughts during judgment.” Among the key points here are the following:

– Most people have less difficulty knowing what’s right or wrong than they do keeping ethical considerations top of mind when making decisions. Ethical lapses can therefore be reduced in a culture where ethics are at the center of attention. … Behavior tends to be guided by what comes to mind immediately before engaging in an action, and those thoughts can be meaningfully affected by context.

– Several experiments make this point… In a large field experiment of approximately 18,000 U.S. government contractors, simply adding a box for filers to check certifying their honesty while reporting yielded $28.6 million more in sales tax revenue than did a condition that omitted the box.

The third consideration is “incentives.” Here the authors note: “Along with earning an income, employees care about doing meaningful work, making a positive impact, and being respected or appreciated for their efforts… An ethical culture not only does good; it also feels good.”

The final design consideration is “cultural norms.” Here the authors recount the results of several experiments showing the often underappreciated power of such norms in creating ethical risk.

The authors conclude the article with several helpful suggestions for putting ethical design into practice – including in the contexts of hiring, personnel evaluation, and compensation.

Note that there is a lot more that could be said about how behavioral ethics can inform and fortify compliance programs. (See this index of prior posts on this subject.) But the ideas and information in this article are very helpful, and the overall point – that “[o]rganizations should aim to design a system that makes being good as easy as possible” – seems exactly right to me.

Point-of-risk compliance

Here is my latest column from C&E Professional – on “point-of-risk compliance.”

I hope you find it useful.

Behavioral ethics program assessments

Rebecca Walker and I have an article in the Spring 2019 issue of Ethical Boardroom on behavioral ethics program assessments.

We hope you find it interesting.

Being a parent as a source of ethics risk

In 1973, in speaking to colleagues on the Cook County Democratic Committee, Mayor Richard Daley of Chicago defended his having directed a million dollars of insurance business to an agency on behalf of his son John with the immortal words: “If I can’t help my sons, then [my critics] can kiss my ass. I make no apologies to anyone.”

I thought of this when I read about the college admission bribery scandal that emerged this past week. The scandal called to mind other cases where parents violated the law to benefit their children. A famous instance of this sort from the 1980s concerned the hiring (by a former Miss America) of a New York judge’s daughter to influence the judge’s decision on a pending case. In 2016, JP Morgan settled a “Princeling” case, which involved the bank’s hiring the sons and daughters of important Chinese officials in return for business. And there are many other cases like these – presumably going back to our early history.

The behavioral ethics and compliance perspective focuses on structural causes of wrongdoing. There are many fruitful avenues for behavioral ethics inquiry suggested by the college admission bribery scandal. The one that most interests me is whether it is easier to commit a crime when one is doing so not to help oneself but to help one’s child. (Note that I understand that there are also personal reputational benefits that parents get from having their child admitted to a prestigious university but still think that the principal beneficiaries of this corruption are the children.)

Given how powerful the drive to help one’s offspring is – both as a matter of the instincts we are born with and the social norms that we adopt – the answer is almost certainly Yes, at least as a general proposition. If this turns out to be an operative fact in the admissions bribery scandal, then I hope a lesson will be that parents should refrain from doing things for their children that ethically they wouldn’t do for themselves.

As the scandal unfolds, I’ll also be interested in learning what role risk assessment and auditing played – or failed to play – in the respective compliance programs of the universities involved. Based on the press accounts, it seems as if this kind of corruption was probably fairly common. If that is so, where were the compliance programs?

Behavioral ethics and compliance assessments

New from the Compliance Program Assessment Blog.

Rebecca Walker and I hope you find it interesting.

Behavioral ethics and compliance index 2019

While in the more than seven years of its existence the COI Blog has been devoted primarily to examining conflicts of interest, it has also run quite a few posts on what behavioral ethics might mean for corporate compliance and ethics programs. Below is an updated version of a topical index to these latter posts. Note that a) to keep this list to a reasonable length I’ve put each post under only one topic, but many in fact relate to multiple topics (particularly the risk assessment and communication ones); and b) there is some overlap between various of the posts.

INTRODUCTION 

– Business ethics research for your whole company (with Jon Haidt)

– Overview of the need for behavioral ethics and compliance

– Behavioral ethics and compliance: strong and specific medicine

– Behavioral C&E and its limits

– Another piece on limits

– Behavioral compliance: the will and the way

– Behavioral ethics: back to school edition

– A valuable behavioral ethics and compliance resource

BEHAVIORAL ETHICS AND COMPLIANCE PROGRAM COMPONENTS

Risk assessment

– Being rushed as a risk

– Too big for ethical failure?

– “Inner controls”

– Is the Road to Risk Paved with Good Intentions?

– Slippery slopes

– Senior managers

– Long-term relationships

– How does your compliance and ethics program deal with “conformity bias”? 

– Money and morals: Can behavioral ethics help “Mister Green” behave himself? 

– Risk assessment and “morality science”

– Advanced tone at the top

Communications and training

– “Point of risk” compliance

– Publishing annual C&E reports

– Behavioral ethics and just-in-time communications

– Values, culture and effective compliance communications

– Behavioral ethics teaching and training

– Moral intuitionism and ethics training

– Reverse behavioral ethics

– The shockingly low price of virtue

– Imagine the real

Positioning the C&E office

– What can be done about “framing” risks

– Compliance & ethics officers in the realm of bias

Accountability

– Behavioral Ethics and Management Accountability for Compliance and Ethics Failures

– Redrawing corporate fault lines using behavioral ethics

– The “inner voice” telling us that someone may be watching

– The Wells Fargo case and behavioral ethics

Whistle-blowing

– Include me out: whistle-blowing and a “larger loyalty”

Incentives/personnel measures

– Hiring, promotions and other personnel measures for ethical organizations

Board oversight of compliance

– Behavioral ethics and C-Suite behavior

– Behavioral ethics and compliance: what the board of directors should ask

Corporate culture

– Is Wall Street a bad ethical neighborhood?

– Too close to the line: a convergence of culture, law and behavioral ethics

– Ethical culture and ethical instincts

Values-based approach to C&E

– A core value for our behavioral age

– Values, structural compliance, behavioral ethics …and Dilbert

Appropriate responses to violations

– Exemplary ethical recoveries

BEHAVIORAL ETHICS AND SUBSTANTIVE AREAS OF COMPLIANCE RISK

Conflicts of interest/corruption

– Does disclosure really mitigate conflicts of interest?

– Disclosure and COIs (Part Two)

– Other people’s COI standards

– Gifts, entertainment and “soft-core” corruption

– The science of disclosure gets more interesting – and useful for C&E programs

– Gamblers, strippers, loss aversion and conflicts of interest

– COIs and “magical thinking”

– Inherent conflicts of interest

– Inherent anti-conflicts of interest

– Conflict of interest? Who decides?

– Specialty bias

– Disclosure’s two-edged sword

– Nonmonetary conflicts of interest

– Charitable contributions and behavioral ethics

Insider trading

– Insider trading, behavioral ethics and effective “inner controls” 

– Insider trading, private corruption and behavioral ethics

Legal ethics

– Using behavioral ethics to reduce legal ethics risks

OTHER POSTS ABOUT BEHAVIORAL ETHICS AND COMPLIANCE

– New proof that good ethics is good business

– How ethically confident should we be?

– An ethical duty of open-mindedness?

– How many ways can behavioral ethics improve compliance?

– Meet “Homo Duplex” – a new ethics super-hero?

– Behavioral ethics and reality-based law

– Was the Grand Inquisitor right (about compliance)?

Hire the guilt prone

In a recent edition of Knowledge at Wharton, Maurice Schweitzer of that school discusses a paper, “Who is Trustworthy? Predicting Trustworthy Intentions and Behavior,” he co-authored with T. Bradford Bitterly, a postdoctoral research fellow at the University of Michigan’s Ross School of Business, Taya R. Cohen, a professor at Carnegie Mellon University’s Tepper School of Business, and Emma Levine, a professor at the University of Chicago’s Booth School of Business. Schweitzer notes:

“We tapped into a personality trait that hasn’t received as much attention as, say, the ‘Big Five’ personality traits [extraversion, openness, agreeableness, neuroticism and conscientiousness]. The personality trait we tapped into is something called guilt proneness, or how prone someone is to feeling guilty. Imagine you’re out at a party. You have a glass of red wine, and you spill some red wine onto a white carpet. How would you feel? The people who would feel extremely guilty about that are the people who are prone to feeling guilt. Now what’s interesting is that people who are prone to feeling guilt, they don’t actually experience a lot more guilt because they spend a lot of effort trying to avoid putting themselves in that position. Those are the people who would say, if I’m going to be drinking wine over a white carpet, I’m having white wine. Those are the people that are thinking ahead to make sure they’re not missing deadlines. They’re not falling short of your expectations. They’re going to take their time and work extra hard to take other precautions. Those are the guilt-prone people. And it turns out that those people are pretty reliable. And when it comes to being trustworthy, those are the people we should be trusting.”

This makes sense to me as an intuitive matter. But more than that, we have only to look at the example set by President Trump, who seems to show no guilt about anything – and who is as untrustworthy as any leader can be.

I’m not sure how compliance officers can operationalize this research. But for citizens the implications couldn’t be clearer.