Bias

Two very different types of bias topics will be examined in this blog: A) under what circumstances bias involving business organizations should be treated like a traditional COI; and B) how often-unrecognized biases (i.e., "cognitive biases") can inhibit ethical decision making, which is one of the principal teachings of behavioral ethics.

Lawyers as compliance officers: a behavioral ethics perspective

What role do corporate lawyers play in preventing wrongdoing by executives in their client organizations? And how is this role impacted by behavioral ethics?

In “Behavioral Legal Ethics Lessons for Corporate Counsel,” to be published in the Case Western Reserve Law Review, Paula Schaefer of the University of Tennessee College of Law first examines “the corporate lawyer’s consciously held conceptions and misconceptions about duty owed to her corporate client when company executives propose a plan that will create substantial liability for the company—when and if it is caught.” As she shows, lawyers often have an unduly limited view of what that duty is.

Schaefer next “turns to behavioral science and highlights some of the key factors that corporate attorneys are unconsciously influenced by as they try to decide how (or if) to address client conduct that may amount to a crime or fraud.” Those factors are:

Attorney self-interest. A key point on this: “Corporate advisors keep their jobs (as inside or outside counsel) when they keep executives happy; they do this by finding ways to implement corporate executives’ plans, and not by saying no.” Of course, on some level this is obvious, but, based on the research of Tigran W. Eldred of New England Law School, she notes that lawyers are often unaware of the extent to which self-interest corrupts their professional conduct vis-à-vis clients.

Obedience Pressure. A key point here: “Obedience research explains the power an authority figure or colleagues have to influence bad advice.” The best-known study in this area is, of course, the one conducted by Stanley Milgram, which measured the extent to which participants were willing to inflict shocks on apparent learners when instructed to do so by an apparent authority figure – and which demonstrated just how powerful obedience pressure can be. As Schaefer notes: “In the case of a corporate attorney addressing planned conduct that may be criminal or fraudulent, the authority figure is likely the corporate executive that the attorney reports to in the professional relationship.” And, as she notes, this is likely to create more pressure than did the instructions of a man in a white coat in Milgram’s experiment.

Conformity Pressure. Here, Schaefer describes experiments by Solomon Asch concerning the extent to which participants gave knowingly incorrect answers to a question because other participants had done so. The results showed a high degree of such conformity. As she notes: “Asch’s research should be particularly concerning for lawyers. For Asch’s subjects, the stakes were low—the subjects likely did not know the other participants in the study and had no ongoing relationship with them. Further, the right answer was black and white, and they still felt pressured to choose the wrong answer selected by the majority. For a corporate lawyer addressing possibly fraudulent or criminal conduct, the group (with whom she feels pressure to conform) might be fellow attorneys or other decision makers at the corporation.”

Partisan Bias. Schaefer writes: “The research reveals that partisanship makes it difficult for a lawyer to filter and interpret information objectively. One study found that students who participated in a moot court competition overwhelmingly perceived that their assigned side had the better case. In another study, subjects were asked to play the role of attorney for plaintiff or defendant in determining the settlement value of a case. Even though both sides received identical information, those who were randomly assigned to play the plaintiff predicted an award substantially higher than that predicted by the defendant.”

Schaefer next considers “interventions to combat a corporate attorney’s wrongful obedience and conformity.” All of these seem sound, but I don’t have space to discuss them here.

However, I do want to add that – although it is not the focus of Schaefer’s paper – the research may also be relevant to the longstanding debate about whether the general counsel or another member of the law department should serve as chief ethics and compliance officer (CECO), or whether the individual in that role should be independent for reporting purposes. At least to me, the research suggests that it may be more difficult for in-house attorneys to rise above the potential conflicts in this role than is generally thought.

Of course, even an independent CECO would be subject to the various biases described in this article. However, they would still – in my view – stand a better chance of ethical success, since the notion of independence is truly foundational to their role; i.e., there is presumably not the same confusion about their duty as Schaefer found was the case with in-house attorneys.

Finally, note that I am not saying that the general counsel can never serve in a CECO role – only that the implications of this research should be considered along with various other factors in determining what approach makes the most sense for a given company.

For further reading:

– The Legal Ethics Blog

– An earlier post from the COI Blog with a different view on lawyers as compliance officers

Ethics made easy

We justly praise those who show true ethical heroism. But to protect business organizations and society generally from legal and ethical breaches we need to aim our efforts more broadly.

In “How to Design an Ethical Organization” in the May-June 2019 issue of the Harvard Business Review, Nicholas Epley, John Templeton Keller Professor of Behavioral Science at the University of Chicago Booth School of Business, and Amit Kumar, an assistant professor of marketing and psychology at the University of Texas at Austin, argue: few executives set out to achieve advantage by breaking the rules, and most companies have programs in place to prevent malfeasance at all levels. Yet recurring scandals show that we could do better. Interventions to encourage ethical behavior are often based on misperceptions of how transgressions occur, and thus are not as effective as they could be. Compliance programs increasingly take a legalistic approach to ethics that focuses on individual accountability. They’re designed to educate employees and then punish wrongdoing among the “bad apples” who misbehave. Yet a large body of behavioral science research suggests that even well-meaning and well-informed people are more ethically malleable than one might guess… Creating an ethical culture thus requires thinking about ethics not simply as a belief problem but also as a design problem. We have identified four critical features that need to be addressed when designing an ethical culture: explicit values, thoughts during judgment, incentives, and cultural norms.

The first of these is “explicit values.” Among the key points here are that:

– Strategies and practices should be anchored to clearly stated principles that can be widely shared within the organization. A well-crafted mission statement can help achieve this, as long as it is used correctly. Leaders can refer to it to guide the creation of any new strategy or initiative and note its connection to the company’s principles when addressing employees, thus reinforcing the broader ethical system.

– A mission statement should be simple, short, actionable, and emotionally resonant. Most corporate mission statements today are too long to remember, too obvious to need stating, too clearly tailored for regulators, or too distant from day-to-day practices to meaningfully guide employees.

The second design consideration is “thoughts during judgment.” Among the key points here are that:

– Most people have less difficulty knowing what’s right or wrong than they do keeping ethical considerations top of mind when making decisions. Ethical lapses can therefore be reduced in a culture where ethics are at the center of attention. … Behavior tends to be guided by what comes to mind immediately before engaging in an action, and those thoughts can be meaningfully affected by context.

– Several experiments make this point… In a large field experiment of approximately 18,000 U.S. government contractors, simply adding a box for filers to check certifying their honesty while reporting yielded $28.6 million more in sales tax revenue than did a condition that omitted the box.

The third consideration is incentives. Here the authors note: Along with earning an income, employees care about doing meaningful work, making a positive impact, and being respected or appreciated for their efforts… An ethical culture not only does good; it also feels good.

The final design consideration is cultural norms. Here the authors recount the results of several experiments showing the often underappreciated power of such norms in creating ethical risk.

The authors conclude the article with several helpful suggestions for putting ethical design into practice – including in the contexts of hiring, personnel evaluation and compensation.

Note that there is a lot more that could be said about how behavioral ethics can inform and fortify compliance programs. (See this index of prior posts on this subject.) But the ideas and information in this article are very helpful, and the overall point – that “[o]rganizations should aim to design a system that makes being good as easy as possible” – seems exactly right to me.

Point-of-risk compliance

Here is my latest column from C&E Professional – on “point-of-risk compliance.”

I hope you find it useful.

Behavioral ethics program assessments

Rebecca Walker and I have an article in the Spring 2019 issue of Ethical Boardroom on behavioral ethics program assessments.

We hope you find it interesting.

Being a parent as a source of ethics risk

In 1973, speaking to colleagues on the Cook County Democratic Committee, Mayor Richard Daley of Chicago defended having directed a million dollars of insurance business to an agency on behalf of his son John with the immortal words: “If I can’t help my sons, then [my critics] can kiss my ass. I make no apologies to anyone.”

I thought of this when I read about the college admission bribery scandal that emerged this past week. The scandal called to mind other cases where parents violated the law to benefit their children. A famous instance of this sort from the 1980s concerned the hiring (by a former Miss America) of a New York judge’s daughter to influence the judge’s decision in a pending case. In 2016, JP Morgan settled a “Princeling” case, which involved the bank’s hiring the sons and daughters of important Chinese officials in return for business. And there are many other cases like these – presumably going back to our early history.

The behavioral ethics and compliance perspective focuses on structural causes of wrongdoing. There are many fruitful avenues for behavioral ethics inquiry suggested by the college admission bribery scandal. The one that most interests me is whether it is easier to commit a crime when one is doing so not to help oneself but to help one’s child. (Note that I understand that there are also personal reputational benefits that parents get from having their child admitted to a prestigious university, but I still think that the principal beneficiaries of this corruption are the children.)

Given how powerful the drive to help one’s offspring is – both as a matter of the instincts we are born with and the social norms that we adopt – the answer is almost certainly Yes, at least as a general proposition. If this turns out to be an operative fact in the admissions bribery scandal, then I hope a lesson will be that parents should refrain from doing things for their children that ethically they wouldn’t do for themselves.

As the scandal unfolds I’ll also be interested in learning what role – if any – risk assessment and auditing played in the respective compliance programs of the universities involved. Based on the press accounts, it seems as if this kind of corruption was probably fairly common. If that is so, where were the compliance programs?

Behavioral ethics and compliance assessments

New from the Compliance Program Assessment Blog.

Rebecca Walker and I hope you find it interesting.

Behavioral ethics and compliance index 2019

While in the more than seven years of its existence the COI Blog has been devoted primarily to examining conflicts of interest, it has also run quite a few posts on what behavioral ethics might mean for corporate compliance and ethics programs. Below is an updated version of a topical index to these latter posts. Note that a) to keep this list to a reasonable length I’ve put each post under only one topic, though many in fact relate to multiple topics (particularly the risk assessment and communication ones); and b) there is some overlap between various of the posts.

INTRODUCTION 

– Business ethics research for your whole company (with Jon Haidt)

– Overview of the need for behavioral ethics and compliance

– Behavioral ethics and compliance: strong and specific medicine

– Behavioral C&E and its limits

– Another piece on limits

– Behavioral compliance: the will and the way

– Behavioral ethics: back to school edition

– A valuable behavioral ethics and compliance resource

BEHAVIORAL ETHICS AND COMPLIANCE PROGRAM COMPONENTS

Risk assessment

– Being rushed as a risk

– Too big for ethical failure?

– “Inner controls”

– Is the Road to Risk Paved with Good Intentions?

– Slippery slopes

– Senior managers

– Long-term relationships

– How does your compliance and ethics program deal with “conformity bias”? 

– Money and morals: Can behavioral ethics help “Mister Green” behave himself? 

– Risk assessment and “morality science”

– Advanced tone at the top

Communications and training

– “Point of risk” compliance

– Publishing annual C&E reports

– Behavioral ethics and just-in-time communications

– Values, culture and effective compliance communications

– Behavioral ethics teaching and training

– Moral intuitionism and ethics training

– Reverse behavioral ethics

– The shockingly low price of virtue

– Imagine the real

Positioning the C&E office

– What can be done about “framing” risks

– Compliance & ethics officers in the realm of bias

Accountability

– Behavioral Ethics and Management Accountability for Compliance and Ethics Failures

– Redrawing corporate fault lines using behavioral ethics

– The “inner voice” telling us that someone may be watching

– The Wells Fargo case and behavioral ethics

Whistle-blowing

– Include me out: whistle-blowing and a “larger loyalty”

Incentives/personnel measures

– Hiring, promotions and other personnel measures for ethical organizations

Board oversight of compliance

– Behavioral ethics and C-Suite behavior

– Behavioral ethics and compliance: what the board of directors should ask

Corporate culture

– Is Wall Street a bad ethical neighborhood?

– Too close to the line: a convergence of culture, law and behavioral ethics

–  Ethical culture and ethical instincts

Values-based approach to C&E

– A core value for our behavioral age

– Values, structural compliance, behavioral ethics …and Dilbert

Appropriate responses to violations

– Exemplary ethical recoveries

BEHAVIORAL ETHICS AND SUBSTANTIVE AREAS OF COMPLIANCE RISK

Conflicts of interest/corruption

– Does disclosure really mitigate conflicts of interest?

– Disclosure and COIs (Part Two)

– Other people’s COI standards

– Gifts, entertainment and “soft-core” corruption

– The science of disclosure gets more interesting – and useful for C&E programs

– Gamblers, strippers, loss aversion and conflicts of interest

– COIs and “magical thinking”

– Inherent conflicts of interest

– Inherent anti-conflicts of interest

– Conflict of interest? Who decides?

– Specialty bias

– Disclosure’s two-edged sword

– Nonmonetary conflicts of interest

– Charitable contributions and behavioral ethics

Insider trading

– Insider trading, behavioral ethics and effective “inner controls” 

– Insider trading, private corruption and behavioral ethics

Legal ethics

– Using behavioral ethics to reduce legal ethics risks

OTHER POSTS ABOUT BEHAVIORAL ETHICS AND COMPLIANCE

– New proof that good ethics is good business

– How ethically confident should we be?

– An ethical duty of open-mindedness?

– How many ways can behavioral ethics improve compliance?

– Meet “Homo Duplex” – a new ethics super-hero?

– Behavioral ethics and reality-based law

– Was the Grand Inquisitor right (about compliance)?

Hire the guilt prone

In a recent edition of Knowledge at Wharton, Maurice Schweitzer of that school discusses a paper, “Who is Trustworthy? Predicting Trustworthy Intentions and Behavior,” he co-authored with T. Bradford Bitterly, a postdoctoral research fellow at the University of Michigan’s Ross School of Business, Taya R. Cohen, a professor at Carnegie Mellon University’s Tepper School of Business, and Emma Levine, a professor at the University of Chicago’s Booth School of Business. Schweitzer notes:

We tapped into a personality trait that hasn’t received as much attention as, say, the “Big Five” personality traits [extraversion, openness, agreeableness, neuroticism and conscientiousness]. The personality trait we tapped into is something called guilt proneness, or how prone someone is to feeling guilty. Imagine you’re out at a party. You have a glass of red wine, and you spill some red wine onto a white carpet. How would you feel? The people who would feel extremely guilty about that are the people who are prone to feeling guilt. Now what’s interesting is that people who are prone to feeling guilt, they don’t actually experience a lot more guilt because they spend a lot of effort trying to avoid putting themselves in that position. Those are the people who would say, if I’m going to be drinking wine over a white carpet, I’m having white wine. Those are the people that are thinking ahead to make sure they’re not missing deadlines. They’re not falling short of your expectations. They’re going to take their time and work extra hard to take other precautions. Those are the guilt-prone people. And it turns out that those people are pretty reliable. And when it comes to being trustworthy, those are the people we should be trusting.

This makes sense to me as an intuitive matter. But more than that, we have only to look at the example set by President Trump, who seems to show no guilt about anything – and who is as untrustworthy as any leader can be.

I’m not sure how compliance officers can operationalize this research. But for citizens the implications couldn’t be clearer.

Imagine the real

An early post on this blog noted that among the more interesting phenomena of behavioral ethics was the impact that knowing or not knowing a party could have on how one treated that party.

A set of circumstances that is relatively likely to lead to an ethical shortfall is where we do not know who will be impacted by a contemplated act. As described in this paper by Deborah A. Small and George Loewenstein, in one study “subjects were more willing to compensate others who lost money when the losers had already been determined than when they were about to be” and in another “people contributed more to a charity when their contributions would benefit a family that had already been selected from a list than when told that the family would be selected from the same list.” Beyond their direct application to the area of charitable giving, these findings may be relevant to a broader range of ethics issues and, for instance, could help explain the relative ease with which so many individuals engage in offenses where the victims are not identifiable.

One example of this is insider trading – a crime which, although widely known to be wrong, seems utterly pervasive (based, among other things, on the extent of trading in securities right before public disclosure of market-moving events). A behavioral ethics perspective suggests that (at least part of) the reason for this “inner controls” failure is that the victims of insider trading are essentially anonymous market participants.

Another offense of this sort is government contracting fraud (where the victim is, in effect, everyone), and indeed Ben Franklin famously described the risks of an ethics shortfall here as well as anyone could: “There is no kind of dishonesty into which otherwise good people more easily and more frequently fall than that of defrauding the government.” Understanding why “otherwise good people” do bad things is much of what behavioral ethics is about.

But what about COIs? The picture there is mixed, as some COIs do involve identifiable victims – such as the job applicant who does not get hired because the position was filled by the boss’s son. Similarly, an organization might suffer identifiable harm when its procurement process is corrupted by a COI – e.g., paying too much or getting too little.

However, with other sorts of COIs the harm is less apparent: it is the damage to trust in key relationships.

For this reason, organizations might consider including the following question in their COI resolution protocols: “How likely is it that the COI would diminish the trust that stakeholders (shareholders, employees, customers, business partners, suppliers or regulators) have in the Company or otherwise adversely impact the Company’s reputation?”

Of course, this thought experiment works only if you truly try to put yourself in the shoes of one of these parties. Or, to use the memorable words (albeit from another setting) of the philosopher Martin Buber: “Imagine the real.”

Compliance & ethics officers in the realm of bias

Bias and conflicts of interest are, of course, related to each other, but they also differ: the former can be based purely on thoughts (or feelings or beliefs), whereas the latter generally requires something tangible, such as an economic or familial relationship. Prior postings on bias – particularly those underpinning the field of behavioral ethics – can be found here. But the world of bias is a vast one, and there is much still to be explored about it.

A study recently summarized on the Harvard Law School Forum on Corporate Governance and Financial Regulation offers an interesting example of one type of bias among CEOs. The author of the study – Scott E. Yonker, of Cornell University – sought to determine “Do Managers Give Hometown Labor an Edge?”, based on a review of certain employment-related decisions affecting company operations at varying distances from the hometowns of the companies’ respective CEOs. The answer – somewhat unsurprisingly, at least to me – was Yes: “The results show that following periods of industry distress, units located near CEOs’ hometowns experience fewer employment and pay reductions, and are less likely to be divested relative to other units within the same firm. Units located closer to CEO birthplaces experience 4.1% greater employment growth and 2.4% greater wage growth compared to similar units. Since employment and wages fall by 3.0% at the average firm unit following industry distress, these findings suggest that hometown units are largely spared. Moreover, these differences have seemingly permanent effects, the wage differences last at least three years, while employment differences revert about three years after industry downturns. With regard to divestitures, units that are more distant from CEO birthplaces are about 6% more likely to be divested.”

Is this at all relevant to the work of compliance & ethics professionals? I think the answer to that is Yes, as well. Or more accurately, It should be.

Of course, “hometown” forms of bias are not as pernicious as those concerning race, gender and other categories of individuals who have historically been the victims of societal oppression. But the true promise of C&E programs extends to addressing all forms of unfairness, both because non-merit-based decision making in the workplace is (from an economic-efficiency perspective) presumptively bad for business (i.e., an inefficient use of resources), and because such decisions can lead to the demoralization of a workforce (adversely impacting, among other things, the ethical conduct of those so affected).

Ultimately, for a company to have not only a strong compliance program but also an ethics one, the CEO and other leaders would need to empower the C&E officer to identify and challenge decisions that may be based on bias. (Note that I don’t mean literally all such decisions, but those that are significant in potential impact and have a meaningful ethics/fairness dimension.) The leaders would do so because they would understand that being fair is not just a matter of good intentions; rather, it can also require expertise and effort – both of which the C&E officer can bring to a challenging set of circumstances.

The C&E movement has made a lot of progress in the past quarter century, but we are a long way from getting to such a place. Still, as is often said, it is good to have a goal.