Bias

This blog will examine two very different types of bias topics: A) under what circumstances bias in business organizations should be treated like a traditional COI; and B) how often-unrecognized biases – “cognitive biases” – can inhibit ethical decision making, which is one of the principal teachings of behavioral ethics.

Meet “Homo Duplex” – a new ethics super-hero?

In “Behavioral Ethics for Homo Economicus, Homo Heuristicus and Homo Duplex” – published in the March 2014 issue of Organizational Behavior and Human Decision Processes – Jesse Kluver, Rebecca Frazier and Jonathan Haidt describe three views of human nature and consider the implications of each for the field of business ethics:

– The traditionally dominant “Homo economicus” model, which sees human nature as based on “rational self-interested actors within systems of economic or social exchange” and which views incentive alignment as the key motivator for human behavior.

– A more recently emerged “Homo heuristicus” approach, which posits that heuristics (ingrained mental shortcuts) and biases “drive decision making behavior, including ethical decision and behavior.” The authors view this model as more psychologically realistic than the Homo economicus approach and believe it offers a variety of insights that can be useful for shaping “ethical systems” (including, presumably, C&E programs).

– “Homo duplex,” a term coined by the sociologist/psychologist/philosopher Emile Durkheim, which posits that we operate on (or shift between) two levels: a lower one – which he deemed “the profane” – in which we largely pursue individual interests, and a higher, more group-focused level, which he called “the sacred.” The authors see this view as an extension of – not a contradiction of – Homo heuristicus.

This last model has considerable potential, the authors believe, for promoting ethical behavior. That is because various studies have shown that “some of the neurobiological adaptations humans have developed for moral behavior work explicitly at the group level rather than the individual level,” “above and beyond what might be expected under the Homo economicus or Homo heuristicus models.” Yet, the authors argue, Homo duplex has received far too little attention to date, and the paper offers ways in which this model of human behavior could be used to promote ethical conduct in businesses, as well as avenues for further research.

There is much more to this paper – concerning, among other things, lessons for organizations seeking to build what Haidt calls “moral capital,” as well as the importance of designing “ethical systems” that bring employees of an organization to the above-described higher state – and I wholeheartedly commend the piece to readers of the COI Blog. Indeed, I hope to explore some of these possibilities in future posts.

Having said all this, I should note that there may be limits to how far this thinking can take a company in promoting ethical and compliant behavior, given that so many major business crimes emanate from the “C-Suite,” the inhabitants of which may be both less likely to act ethically as a general matter (as discussed in this post) and less inclined than the rank-and-file to participate in what the authors call “ego-dissolving activities” – i.e., the basis for Homo duplex’s higher level. Indeed, the most famous corporate example of an attempt to build team spirit is, the authors note, “Wal-Mart, where each day employees participate in the Wal-Mart chant…” While presumably effective in reducing petty theft by store employees, this doesn’t appear to have done much to deter massive bribery by the company – which, based on various press accounts, seems on some level to have involved some of its senior managers.

In a related vein: while I am a big fan of Homo heuristicus (as reflected in my many earlier posts on “behavioral ethics and compliance”), and while – based partly on my deep admiration of Haidt’s landmark book, The Righteous Mind – I embrace the authors’ agenda of conducting more research into how a Homo duplex view can be used to promote ethical behavior, I think it important to continue to work with the central insight of the much-maligned Homo economicus framework too (and believe that the authors – who note that we need not rely solely on one view of human nature – would agree). That is, while the incentive-based approach to promoting ethical behavior is as old as the Code of Hammurabi, at least in the modern corporate crime setting it has been hobbled by moral-hazard-related infirmities – i.e., it has not, in my view, had a real chance to live up to its own potential to be an ethical super-hero.

For further reading see:

Scott Killingsworth’s excellent paper on C-Suite behavior, discussed and linked to in this earlier post.

My recent “Ethics Exchange” with Steve Priest about “Ethics, Compliance and Human Nature” on ECOA Connects.


Exemplary ethical recoveries

F. Scott Fitzgerald famously said, “There are no second acts in American lives,” but in the C&E world the second act may count for more than the first – for better or worse.

Instances of the latter – tragic second acts – include various cases where a company engaged in criminal conduct but failed either to fully “come clean” when it was prosecuted or to do what was necessary to prevent future violations. Prominent examples of this sort of failure go as far back as the first massive penalties under the Federal Sentencing Guidelines for Organizations – the $340 million fine against Daiwa Bank for banking-related offenses in 1996 and the $500 million fine against Hoffmann-La Roche for antitrust crimes in 1999 – and have occurred as recently as last month, when a $425 million criminal fine was imposed on Bridgestone for antitrust offenses, in part because the company had failed to disclose those offenses at the time of an earlier plea. (The logic of severe punishment for a company that fails a second time is fairly obvious and, although he doubtless would have been mortified to be quoted in a compliance blog, is perhaps best expressed by these words of Oscar Wilde: “To lose one parent may be regarded as a misfortune; to lose both looks like carelessness.”)

But while failures such as these make the headlines, a second act need not be tragic. For the good news, we turn from law to psychology and to a just-published paper, “Better than ever? Employee reactions to ethical failures in organizations, and the ethical recovery paradox,” by Marshall Schminke, James Caldwell, Maureen L. Ambrose and Sean R. McMahon. In it, the authors review the results of a laboratory study and a field study showing “an ethical recovery paradox, in which exemplary organizational efforts to recover internally from ethical failure may enhance employee perceptions of the organization to a more positive level than if no ethical failure had occurred.”

These results are very encouraging even if – while perhaps paradoxical in the way the authors describe – they do not seem totally surprising. After all, a C&E failure can also be seen as presenting a test, and ethical standards at a company that have fared well on a test could seem more meaningful to employees than standards that haven’t been tested at all. Of course, the same could be said of nearly any attribute of an organization – but it would be hard to find another area where the gap between what is proclaimed and what is practiced is, generally speaking, as wide as it is in the field of business ethics. So there is every reason for much weight to be placed on the results of the sort of test that ethical failures offer.

Still, and beyond the important headline finding about the possibilities of ethical recovery, the paper should be useful to C&E practitioners for a variety of reasons:

– It offers an extensive review of relevant literature, such as research showing that “ethics-based failures may have a more generalized impact [on employee perceptions of an organization] than other types of failures” – in part because of the strong negative emotions the former often trigger. This information should be helpful for briefing directors and senior managers on the importance of strong C&E measures generally (i.e., not just in the wake of failures).

– The authors note that the results “raise a host of possibilities for considering additional implications of ethical repair, those even further downstream from the unethical event.” I agree that this is an important area to explore. Indeed, a company I know that has succeeded as well as any in maintaining an exemplary corporate culture has done so in part by staying mindful of a scandal that occurred literally decades earlier – i.e., very far “downstream.” But too many companies take the opposite approach – burying, rather than learning from, their failures.

– The authors identify other implications for practitioners, including the need to have “systems, structures, processes, controls and policies…in place to stage a successful recovery in the event an ethical failure happens.” I agree with this as well, but note that perhaps more helpful than planning for true ethical crises is having systems for making the most of the small-scale ethics failures that occur on a routine basis – such as publicizing the extent to which the company rigorously investigates employee reports of suspected C&E transgressions and imposes meaningful discipline for violations.

Indeed, in this sense exemplary ethical recovery should not be viewed as a once- (or twice-) in-a-lifetime event for a company, but as an active ingredient of its very culture.

Finally, note that the research did not look closely at what makes an ethical recovery exemplary; rather, it was based broadly on the reported degrees of satisfaction of the study’s subjects. “We know little about the attributes of an effective recovery,” the authors write. One hopes that other researchers (or perhaps these same authors) will build on this study to develop knowledge in that key area.


Too close to the line: a convergence of culture, law and behavioral ethics

To “walk the line” means something very different to those who prosecute business crime cases than it did to Johnny Cash. For instance, in a speech given last year, Steven L. Cohen, Associate Director for Enforcement at the Securities and Exchange Commission, said: “Where we find fraud, there are often early warning signs that may have suggested a corporate compliance culture that is not meeting appropriate standards… Risk-taking in the area of legal and ethical obligations invariably leads to bad outcomes. Any company or person prepared to come close to the line when it comes to legal and ethical standards is already on dangerous ground. Tolerating close-to-the-line behavior sends a terrible message throughout an organization that pushing the envelope is acceptable.” Similarly, and also in a speech last year, New York federal prosecutor Preet Bharara said: “A single-minded focus on remaining an inch away from the legal line is just asking for trouble. It’s a dangerous thing to walk the line – and to train others to do it. Walking the line is like a driver constantly trying to game just how close to the legal alcohol limit he can come without getting a DUI. Now, one can do that. But how long do you think before that driver gets pulled over? How long before that driver blows the legal limit? And how long before that driver hurts someone on the highway?”

Keeping employees and agents from getting too close to the line has long been a focus of – and particular challenge for – C&E programs. Part of the difficulty comes from the fact that – at least in the U.S. – the lines separating criminal from lawful conduct are often not clearly drawn. These lines can also be subject to change without notice. Additionally, under doctrines of conspiracy and accessorial liability, those who pay a brief visit to the other side of the line – or indeed are pulled over it by a colleague – can be punished as if they were major offenders. (For more information on this aspect of U.S. law see the Ethical Systems web site.)

Added to this challenge are the various lessons of behavioral ethics research suggesting that individuals may have real – but unappreciated – difficulty identifying and steering clear of law- and ethics-related lines. Discussions of some of these studies are collected here, and include the following posts:

– How “conformity bias” adversely impacts our ability to see the wrongfulness of our behavior.

– The blurring impact that the “distance” of victims of wrongdoing has on our ethical vision.

– The various ways in which we are vulnerable to ethical “slippery slopes.”

– The considerable difficulty we often have in recognizing wrongful behavior in others when it is in our interest not to do so.

– The particular challenges that individuals in positions of power have in identifying their behavior as wrongful.

In sum, business people – particularly those in organizations that lack healthy cultures – often face a wicked one-two punch of a treacherous legal landscape and many unappreciated human ethical frailties that make navigating that landscape difficult indeed.

There is no easy way to deal with this.  But, at a minimum, C&E officers should train employees generally and managers in particular on all that they are up against.

More generally, understanding these risks should be seen – in business schools and the business world alike – as supporting the need for a high degree of ethical awareness. Only when business people view being ethically alert – rather than just relying on what they may see as their innate goodness – as an indispensable professional skill will companies live up to the high expectations articulated by Messrs. Cohen and Bharara and doubtless shared by many others in the enforcement community.

The science of disclosure gets more interesting – and useful for C&E programs

In “Nothing to Declare: Mandatory and Voluntary Disclosure Leads Advisors to Avoid Conflicts of Interest,” published last month in Psychological Science, Sunita Sah and George Loewenstein note that “[p]rior research documents situations in which advisors—subject to unavoidable COIs—feel morally licensed to give more-biased advice when their conflict is disclosed,” as well as other factors suggesting that disclosure is often a less effective mitigant than might be imagined. (For more information on some of this research see this post on moral licensing and this one on the pressure that individuals to whom disclosure is made might feel to accept the conflict.) However, the authors argue – and support with the results of several experiments that they conducted – that “[w]hen COIs are avoidable … the situation can change dramatically because the ability to avoid conflicts brings other motives into play.”

One of these motives is that “disclosure becomes a potential vehicle for demonstrating one’s own ethics…to signal to themselves and to others that they are honest and moral…and that they prioritize others’ interests over their own.” A second motive is that “in many situations advisors benefit financially when advisees follow their advice… [and] disclosing the absence of conflicts increases the likelihood that the advice will be followed.”

Sah and Loewenstein also note: “Evidence from the field complements [their] findings. The American Medical Student Association’s PharmFree Scorecards program (which grades COI policies at U.S. academic medical centers…) has been successful in encouraging many centers to implement stronger COI policies. Similarly, mandatory disclosure of marketing costs for prescription drugs in the District of Columbia produced a downward trend in marketing expenditures by pharmaceutical companies, including gifts to physicians, from 2007 to 2010…”

The authors’ findings make sense to me. Indeed, in one of the above-noted earlier posts I suggested that the research indicating that disclosure could be harmful in the professional advisor context – because it creates pressure to accept the COI – may not apply to the same extent “in the setting of a business organization – with defined and enforced ethical standards regarding COIs, where one might be more concerned about looking bad to one’s colleagues (or bosses) than to the conflicted party.”

That is, the first of the two motivations that Sah and Loewenstein identify as relevant to disclosure – the desire to show one’s trustworthiness – is likely to be a powerful force in many business organizations, given the often strong enforcement of COI rules that began with the Sarbanes-Oxley Act and given the general importance of “organizational justice” to C&E program efficacy and the specific relevance of COI enforcement to organizational justice. (The other motivation, however, is much less applicable outside of the professional advisor context, and indeed the notion of mandatory versus avoidable COIs may also be more relevant to the advisor context than to business organizations.)

So, the results of this study seem like good news. But is it news that C&E professionals – who operate in the business organization rather than the professional advisor context – can use to make their companies’ C&E programs stronger? Or is it – as one C&E professional I know recently said of much behavioral ethics – the stuff of “parlor games”? (Note: I don’t agree with this critique, but it is worth noting that C&E practitioners, as a group, don’t seem to be doing much with behavioral ethics findings.)

I think that this knowledge can in fact be put to use for C&E purposes. That is, it suggests that in policies, training and other C&E communications, companies should emphasize how timely and complete COI disclosure may be important to an employee’s being seen as trustworthy within an organization – as well as by other important parties (e.g., customers or suppliers).

More broadly, C&E professionals should find ways to address this motivation in helping employees understand the business case (in terms of their careers) not just for full COI disclosure but for ethical excellence generally. Of course, this approach already exists – to varying, generally modest, degrees – in some C&E programs, but there is plenty of room for many organizations to do more in this regard.

Moral intuitionism and ethics training

In their recent article in the Journal of Management – “Moral Intuition: Connecting Current Knowledge to Future Organizational Research and Practice” – Gary R. Weaver of the University of Delaware, Scott J. Reynolds of the University of Washington and Michael E. Brown of the Pennsylvania State University review “a rapidly growing body of social science research [that] has framed ethical thought and behavior as driven by intuition,” literature which they describe as “incredibly rich, fruitful, and meaningful to a wide range of audiences.” Among the process components of moral intuitionism are non-inferential judgments, meaning that “moral judgment and behavior can take place without prior deliberative reasoning”; “the automaticity of moral action,” meaning that ethical judgments can be essentially instantaneous; dual-process thinking, made famous by Daniel Kahneman’s Thinking, Fast and Slow; and “intuitive primacy,” meaning that “although sometimes the rational deliberation model accurately characterizes moral behavior, in the large majority of cases moral intuition rules.” The content of moral intuition – made famous by Jon Haidt’s The Righteous Mind – is often said to include five areas: “(a) care (vs. harm), (b) fairness, or justice (vs. cheating), (c) in-group loyalty (vs. betrayal), (d) authority (vs. subversion), and (e) sanctity, or purity,” and perhaps a sixth – “liberty (vs. oppression).”

As the authors describe: “Although the value of the moral intuition perspective has been demonstrated in multiple fields (e.g., psychology, anthropology, evolutionary psychology, cognitive science, behavioral economics), its application in organizational contexts is limited.” In this article they explore the significance of this body of knowledge from four perspectives – “leadership, organizational corruption, ethics training and education, and divestiture socialization” – looking at process and content for each. In this post, I review parts of what the authors discuss with respect to ethics training (leaving the related area of ethics education to professional educators), and I hope to return in the not too distant future to their discussion of the import of moral intuition for organizational corruption.

Turning first to the process of ethics training, the authors express considerable skepticism about the value of computer-based training, which, as they note, is the most prevalent form of ethics training in businesses today: “moral intuition often involves a strong emotional component. Can computer exercises engage intuition by creating truly emotional experiences for participants? Can they trigger processes that make cognitive reappraisal of intuitions more likely? Similarly, moral intuitions are theorized to be multidimensional, involving many different types of information beyond just sights and sounds … The limited dimensionality of computer-based training likely is a substantial constraint on this format. Moreover, reappraisal and change of moral intuition often involve interaction within trusting relationships (in this case, trainer and trainee), which impersonal technology might be hard-pressed to simulate. Computer-based training might be incredibly efficient and serves purposes of external legitimation, but whether it engages moral intuition is open to question.”

They note further regarding moral intuitionism and training process: “At a deeper, developmental level, an intuitionist understanding of moral judgments and their origins looks more akin to long-term habit development than to immediate learning of information. In this, the ‘training’ of moral intuitions is closer to considerations of character education than to analytical exercises of reason.” Finally, they suggest: “Education and training might also focus on teaching about the process of moral intuition as well as the factors that influence it, so that students can learn to recognize when intuition or deliberation are likely and/or appropriate in a given context. If moral judgments typically are intuitive, and largely automatic, perhaps one key element of ethics training is developing an ability to exert some degree of cognitive control over intuition, so that trained individuals are better prepared to manage their immediate intuitive reactions to situations.”

Turning from process to content, they note: “Business ethics training and education has not typically treated concepts like authority and loyalty as moral ideals or ends in themselves (vs. pragmatic matters), and considerations of purity are highly uncommon. But some business practices and issues could be framed in those terms.” However, this would be a major and uncertain step for many business organizations, and they further note that research is needed to determine: “are some foundational intuitions, and efforts to link business practice to them, more conducive than others for ethically successful and productive employees, or is success a matter of context, such that some foundational categories are better suited for some industries, markets, or organizational contexts?”

All of the authors’ suggestions do seem valuable to me but – having been involved in corporate compliance and ethics training for more than two decades – also incredibly daunting. However, at a minimum their thoughts should provide the basis for a dialogue – perhaps even a “rich, fruitful and meaningful” one – between researchers and C&E professionals on how to apply the results of recent moral intuitionism studies to the task of making business organizations more ethical. And one of my new year’s resolutions is to try to be part of that discussion.

Insider trading, private corruption and behavioral ethics

Both the contours and the purposes of the prohibitions against insider trading have been the subject of considerable dispute – indeed, the lack of clarity regarding insider trading enforcement may be unique among major laws in the business crime field, at least in the U.S. Needless to say, uncertainty regarding any criminal law is unfortunate, as it can serve to deter desirable, as well as undesirable, activity, and can also be a cause of unfairness – which, beyond being harmful to those touched directly by a prosecution, can delegitimize the law in question.

In “Insider Trading as Private Corruption” – which will be published next year in the UCLA Law Review – Prof. Sung Hui Kim offers what she describes as a new doctrinal approach to insider trading law: such law should be viewed as “a form of private corruption, defined as the use of an entrusted position for self-regarding gain.” She explains: “The corruption theory not only provides answers to the normative skeptics but, as compared to the two leading alternatives, the property theory and the unjust enrichment theory, better fits the core features of the received doctrine… Even better, the corruption theory provides relatively concrete guidance in hard cases, which is the sort of pragmatic theory that the SEC and the courts desperately need.”

Although I handled quite a few insider trading cases in the 1980s and 1990s (as a defense lawyer, before switching to full-time C&E work), I’m not familiar enough with the types of “hard cases” that those who trade securities (particularly professional traders/investors) currently face to have an informed view of how pragmatic this theory is. But I do find the private corruption approach compelling from a doctrinal perspective because, as described in this recent post:

– there are powerful behavioral-ethics-related challenges to promoting compliance with insider trading law, having to do with the lack of immediacy of the harm in the offense; and

– a corruption/conflict-of-interest-based approach to promoting compliance in this area seems well suited to addressing these challenges.

In particular, such an approach can help show – hopefully in a powerful way that overcomes such obstacles – what the harm really is with insider trading. Related to that point, Kim describes (on page 32 of the article) a behaviorist experiment “which found strong correlations between the high levels of perceived public sector corruption in the country and the tendency to view insider trading as acceptable. The more corrupt citizens viewed the country, the less objectionable were the inside trades, and vice versa. Although far from definitive, these correlations provide additional support to the idea that insider trading is best understood as a species of private corruption.” She also notes: “A key benefit of seeing insider trading as private corruption is that it allows us to see the harms of insider trading more generally as the harms of corruption.”

More generally, given the unprecedented worldwide campaign against public sector corruption, I think broader law enforcement/compliance strategies using (where reasonably applicable) a corruption-based approach – as Kim has done with insider trading – should be considered. Indeed, that is what I have tried to suggest in this piece about abuses in the gifts and entertainment area being viewed as “soft-core corruption.”

For a post on private sector corruption generally please click here.  And here is one on the somewhat related topic of “informal” fiduciary duties.  Finally, here is a post on implications of behavioral ethics for the securities law notion of scienter.

Behavioral ethics teaching and training

In “Teaching Behavioral Ethics” – which will be published next year by the Journal of Legal Studies Education, and a draft of which can be found here – Robert Prentice of the McCombs School of Business at the University of Texas presents his pedagogical approach to behavioral ethics. The paper should be useful not only to other business school professors in preparing their own ethics classes but also to C&E professionals who are considering training business people on “‘the next big thing’ in ethics…”

Prentice’s article describes in considerable detail what he covers in each session of his course. The first session addresses why it is important to be ethical, including the many positive as well as negative reasons; the second addresses the sources of ethical judgments, with a key point being that such judgments tend to be more emotion-based than is commonly realized.

The next few classes are about “breaking down the defenses”: they make the overarching behaviorist point that “we are not as ethical as we think” and explore many key concepts in the field, including self-serving bias; role morality; framing; the effect of various environmental factors – such as time pressure and transparency – on ethical behavior; obedience to authority; conformity bias; overconfidence; loss aversion; incrementalism; the tangible and the abstract; bounded ethicality; ethical fading; fundamental attribution error; and moral equilibrium. Prentice also discusses research showing that “people are of two minds” and “tend to be very good at thinking of themselves as good people who do as they should while simultaneously doing as they want,” as well as the related facts that we often don’t do a very good job of predicting the ethicality of our future actions and are not especially accurate in remembering the ethicality of our past actions. At various points in the paper he illustrates these phenomena not only with behavioral studies but also with well-known cases of legal/ethical transgression (e.g., Martha Stewart’s conviction for obstruction of justice as a possible manifestation of loss aversion).

The final part of Prentice’s course is aimed at helping students be their “best selves.” This begins with teaching the differences between the “should self” and the “want self,” and the importance of incorporating the needs of the want self in advance, e.g., by rehearsing what one would do if faced with a particular ethical dilemma. Also important to being one’s best self is “keeping one’s ethical antennae up…[to] always be looking for the ethical aspect of a decision so that [one’s] ethical values can be part of the frame through which” a problem is examined. As well, Prentice exhorts his students to “monitor their own rationalizations” and to use pre-commitment devices to decrease the influence of the “want self.” Finally, he discusses research by Mary Gentile showing that, more often than is appreciated, “one person can, even in the face of peer pressure or instructions from a superior, turn things in an ethical direction if only they will try.”

All told, this seems like a great course, and I wish it could be taught in every company as well as in business school. Of course, those providing C&E training in the workplace typically are not given a semester’s worth of time to do so, and indeed there seems to be a recent trend in the field of C&E training – particularly given the “training fatigue” one finds in some companies – to try to do more with even less. However, I do think some of the behavioral notions discussed in Prentice’s article can be the basis of compelling workplace training.

First, the fact that behavioral ethics is a relatively new area of knowledge, that it is science-based and that it is clearly interesting can make it more appealing to business people than a lot of traditional C&E training. Indeed, using behavioral ethics ideas and information can be a welcome relief from “training fatigue.”

Second, the lessons about how to become our “best selves” are indeed quite practical, and for that reason should be welcome in the workplace.  Indeed, given the many careers that have been damaged/destroyed by  business people not keeping their “ethical antennae up,” these lessons should be seen as business survival skills.

Third, the totality of these studies showing we’re not as ethical as we think helps make the case – as well as any legal imperative ever could – for the need for companies to have strong C&E programs. This should be part of any C&E training (as well as, in my view, business school ethics classes), but is particularly important to include in training of boards of directors and senior managers.

Finally, directors and senior managers have an especially strong need to learn about behavioral ethics research showing that those with power tend to be more ethically at risk than are others, as discussed in various prior posts – such as this one (review of an important paper by Scott Killingsworth), this one and this one, to which should be added this recently posted paper about a study showing that “employees higher in a hierarchy are more likely to engage in deception…” than are others. To my mind, the prospect of helping companies with the politically sensitive task of bringing sufficient compliance focus to bear on their heavy hitters is as important as any of the other possible real-world contributions of this promising and fascinating new field of knowledge.

Behavioral ethics and C-suite behavior

As the COI Blog has discussed previously, CEOs often have different conflicts of interest from you and me. More generally, we have seen from behavioral research that those with power may be at greater risk of engaging in unethical behavior than are others. In “’C’ Is for Crucible: Behavioral Ethics, Culture, and the Board’s Role in C-suite Compliance,” Scott Killingsworth carries this latter point forward a good distance, and does so in a way that should be of considerable interest to members of corporate boards, C&E officers and others with responsibility for promoting ethical and law-abiding behavior in business organizations.

Killingsworth’s paper – which was presented at a RAND symposium in May and which can be downloaded here  (and will be published later this year in a proceedings book from the symposium) – first describes “the powerful forces [that] converge in the C-suite to test the mettle of executives and the board that supervises them.”  In this “crucible” one often finds greater temptations and pressures to engage in misconduct than typically face those in other parts of a company; a lack of effective controls to restrain those at the top; and the fact that “the winnowing process [for the C-suite] … selects, in some cases, for a much stronger-than-usual attraction to perquisites … that may be strong enough to overpower allegiance to ethical or legal rules.”

All of this is, of course, reasonably well known.  But much less well known is the behaviorist (and other) research reviewed by Killingsworth that suggests a considerable amplification of the already substantial ethical risks of being in the C-Suite crucible.  Within this body of work are studies concerning conflicts of interest, “motivated blindness” and “framing,” time pressure, irrationality and loss avoidance, overconfidence, power, and group dynamics.  Of course, the risks identified in this research (some of which have been discussed in other posts in this blog ) do not affect only denizens of the C-suite, but the author does make a compelling case that overall the risks are significantly higher the higher one goes up the corporate ladder.

Killingsworth is also quick to point out that none of this suggests that boards of directors should micromanage their companies’ senior executives.  Rather, he urges: “The greatest impact will be achieved if the board focuses on selecting executive leaders with unblemished records of integrity, working supportively with the [chief compliance officer] and other internal-control officers, maintaining continuity of ‘tone at the top’ as executives come and go, and promoting ethical leadership within the C-suite and ethical culture throughout the organization,” and he provides useful guidance with respect to each of these general areas.  For instance, he offers a three-part strategy for “harness[ing] organizational culture as a means of effectively monitoring and governing the C-suite: by modeling and articulating the culture the board wishes to instantiate (and thereby sending a powerful implicit message to management); by explicitly engaging the C-suite with cultural and ethical-leadership responsibilities; and by taking advantage of a positive culture’s potential as a compliance ‘information and reporting system’ for the board.”

I urge you to read “’C’ Is for Crucible: Behavioral Ethics, Culture, and the Board’s Role in C-suite Compliance.” As was also the case with an earlier paper  by the same author, it makes a substantial contribution to the C&E field by showing how the many compelling research findings of behavioral ethics can be put to use to make C&E programs more effective.

 

Using behavioral ethics means to reduce legal ethics risks

In various prior posts the COI Blog has explored the potential impact of “behavioral ethics” on how compliance and ethics programs are designed and deployed, and separately has asked whether law firms should have C&E programs to address legal-practice-related risks.  So, I was delighted to learn recently of a soon-to-be-published paper which more or less seeks to connect these two topics, and also does much more than that.

In “Behavioral Legal Ethics” – which will soon appear in the Arizona State Law Journal and a draft of which is available for free download here – Jennifer K. Robbennolt, Professor of Law and Psychology at the University of Illinois, and Jean R. Sternlight, Director of the Saltman Center for Conflict Resolution and Michael and Sonja Saltman Professor of Law, William S. Boyd School of Law, University of Nevada Las Vegas, offer what is apparently the first comprehensive overview ever published of the many implications of behavioral psychology for legal ethics. They initially describe how – through “ethical blind spots,” slippery slopes, “ethical fading” and other behavioral ethics phenomena – lawyers (as well as others) are affected by “bounded ethicality.” They next review how various professional norms and contexts (such as the principal/agent relationship) can lead to unethical conduct by attorneys, as can the intense economic pressures of legal practice and the relatively high status and power of many members of the profession. Added to this parade of horribles are various factors – such as the “illusion of courage” – that give attorneys (and others) a misleading sense of comfort that they will respond appropriately when faced with the misconduct of others.

Additionally, unlike many other behavioral ethics studies, Robbennolt and Sternlight offer detailed and – to my mind – compelling possible solutions to the ethics risks they identify. On an individual level, these include attorneys: maintaining an awareness of the impact of psychology on ethical issues they may face, being more active in considering ethics in their professional lives and more self-critical, planning ahead as to how they would deal with ethical dilemmas, and recognizing and confronting others’ unethical conduct.

Most important from my perspective are the article’s recommendations on an organizational – i.e., C&E program – level. Among other things, the authors propose enhancing the ethical culture of the entities in which lawyers practice (i.e., firms, corporate law departments, government agencies, etc.), such as by discussing and modeling appropriate professional conduct and improving ethics education (with the latter effort including helping lawyers understand behaviorist risks). With respect to the important (and challenging) area of C&E-related incentives, the authors recommend that organizations do more both to protect lawyers from the various stresses – financial and other – that can contribute to ethical failures, and to reward ethical behavior (i.e., use positive incentives).

The authors suggest as well that organizations take greater steps to promote attorneys’ reporting of suspected ethics violations, including by:

–          making “clear that ensuring organization-wide ethical compliance is part of attorneys’ job responsibilities and will benefit the organization”;

–          providing many channels through which to report suspected violations – including the appointment of an ethics counsel, an ethics committee, or an ethics ombudsperson; and

–          “publiciz[ing] instances in which reporting led to positive change, while at the same time being careful to protect confidentiality and not to spark retaliation.”

Finally, they argue that law firms should monitor the ethical conduct of their attorneys (such as using “software to monitor billing patterns…”).

For readers of this blog who share my interest both in behavioral ethics and compliance programs for lawyers, “Behavioral Legal Ethics” is an important article indeed (and I am looking forward to the publication of the final version in the coming months).

Is your C&E program ready for a behavioral ethics upgrade?

Back to school time is almost upon us, so now’s as good a time as any for C&E officers to ask: what should we learn from the many scholars conducting behavioral ethics research that can help strengthen our C&E programs?  Here – by way of links to some recent posts – are thoughts on specific ways in which programs can be upgraded using behavioral ethics ideas and information:

Promotion of whistleblowing

Gifts and entertainment policies

Imposing discipline and promoting accountability  

Risk assessment  (also discussed in this post)

Training  and other communications  (discussing and linking to an important article by Scott Killingsworth).

Note that those are just 2013 pieces from the blog. Many earlier postings about what behavioral ethics can mean for C&E programs are collected in this article from the Hong Kong-based governance journal CSj.

Will behavioral ethics be on your company’s “final exam?”  That’s not for me to say – since I don’t prepare the “tests.”  For that you’ll have to ask your (friendly?) local prosecutor.