Bias

Two very different types of bias will be examined in this blog: A) in what circumstances involving business organizations bias should be treated like a traditional COI; and B) how often-unrecognized biases (i.e., “cognitive biases”) can inhibit ethical decision making, which is one of the principal teachings of behavioral ethics.

Behavioral ethics and C-suite behavior

As the COI Blog has discussed previously, CEOs often face different conflicts of interest than you and I do. More generally, behavioral research has shown that those with power may be at greater risk of engaging in unethical behavior than are others. In “’C’ Is for Crucible: Behavioral Ethics, Culture, and the Board’s Role in C-suite Compliance,” Scott Killingsworth carries this latter point forward a good distance, and does so in a way that should be of considerable interest to members of corporate boards, C&E officers and others responsible for promoting ethical and law-abiding behavior in business organizations.

Killingsworth’s paper – which was presented at a RAND symposium in May and which can be downloaded here  (and will be published later this year in a proceedings book from the symposium) – first describes “the powerful forces [that] converge in the C-suite to test the mettle of executives and the board that supervises them.”  In this “crucible” one often finds greater temptations and pressures to engage in misconduct than typically face those in other parts of a company; a lack of effective controls to restrain those at the top; and the fact that “the winnowing process [for the C-suite] … selects, in some cases, for a much stronger-than-usual attraction to perquisites … that may be strong enough to overpower allegiance to ethical or legal rules.”

All of this is, of course, reasonably well known.  But much less well known is the behaviorist (and other) research reviewed by Killingsworth suggesting a considerable amplification of the already substantial ethical risks of being in the C-suite crucible.  Within this body of work are studies concerning conflicts of interest, “motivated blindness” and “framing,” time pressure, irrationality and loss avoidance, overconfidence, power, and group dynamics.  Of course, the risks identified in this research (some of which have been discussed in other posts in this blog) do not affect only denizens of the C-suite, but the author does make a compelling case that, overall, the risks increase significantly the higher one goes up the corporate ladder.

Killingsworth is also quick to point out that none of this suggests that boards of directors should micromanage their companies’ senior executives.  Rather, he urges: “The greatest impact will be achieved if the board focuses on selecting executive leaders with unblemished records of integrity, working supportively with the [chief compliance officer] and other internal-control officers, maintaining continuity of ‘tone at the top’ as executives come and go, and promoting ethical leadership within the C-suite and ethical culture throughout the organization,” and he provides useful guidance with respect to each of these general areas.  For instance, he offers a three-part strategy for “harness[ing] organizational culture as a means of effectively monitoring and governing the C-suite: by modeling and articulating the culture the board wishes to instantiate (and thereby sending a powerful implicit message to management); by explicitly engaging the C-suite with cultural and ethical-leadership responsibilities; and by taking advantage of a positive culture’s potential as a compliance ‘information and reporting system’ for the board.”

I urge you to read “’C’ Is for Crucible: Behavioral Ethics, Culture, and the Board’s Role in C-suite Compliance.” As was also the case with an earlier paper  by the same author, it makes a substantial contribution to the C&E field by showing how the many compelling research findings of behavioral ethics can be put to use to make C&E programs more effective.

 

Using behavioral ethics to reduce legal ethics risks

In various prior posts the COI Blog has explored the potential impact of “behavioral ethics” on how compliance and ethics programs are designed and deployed, and separately has asked whether law firms should have C&E programs to address legal-practice-related risks.  So, I was delighted to learn recently of a soon-to-be-published paper which more or less seeks to connect these two topics, and also does much more than that.

In “Behavioral Legal Ethics” – which will soon appear in the Arizona State Law Journal and a draft of which is available for free download here – Jennifer K. Robbennolt, Professor of Law and Psychology at the University of Illinois, and Jean R. Sternlight, Director of the Saltman Center for Conflict Resolution and Michael and Sonja Saltman Professor of Law, William S. Boyd School of Law, University of Nevada Las Vegas, offer what is apparently the first comprehensive overview ever published of the many implications of behavioral psychology for legal ethics.  They initially describe how – through “ethical blind spots,” slippery slopes, “ethical fading” and other behavioral ethics phenomena – lawyers (as well as others) are affected by “bounded ethicality.”  They next review how various professional norms and contexts (such as the principal/agent relationship) can lead to unethical conduct by attorneys, as can the intense economic pressures of legal practice and the relatively high status and power of many members of the profession.  Added to this parade of horribles are various factors – such as the “illusion of courage” – that give attorneys (and others) a misleading sense of comfort that they will respond appropriately when faced with the misconduct of others.

Additionally, unlike many other behavioral ethics studies, Robbennolt and Sternlight also offer detailed and – to my mind – compelling possible solutions to the ethics risks they identify.  On an individual level, these include attorneys maintaining an awareness of the impact of psychology on the ethical issues they may face, doing more to actively consider ethics in their professional lives and to be more self-critical, planning ahead as to how they would deal with ethical dilemmas, and recognizing and confronting others’ unethical conduct.

Most important from my perspective are the article’s recommendations on an organizational – i.e., C&E program –  level.  Among other things, the authors propose enhancing the ethical culture of the entities in which lawyers practice (i.e., firms, corporate law departments, government agencies, etc.),  such as by discussing and modeling appropriate professional conduct  and improving  ethics education (with the latter effort including helping lawyers understand behaviorist risks).  With respect to the important (and challenging)  area of C&E-related incentives, the authors recommend  that organizations do more both to protect lawyers from the various stresses – financial and other – that can contribute to ethical failures, and also to reward ethical behavior (i.e., use of positive incentives).

The authors suggest as well that organizations take greater steps to promote attorneys’ reporting of suspected ethics violations, including by:

– making “clear that ensuring organization-wide ethical compliance is part of attorneys’ job responsibilities and will benefit the organization”;

– providing many channels through which to report suspected violations – including the appointment of an ethics counsel, an ethics committee, or an ethics ombudsperson; and

– “publiciz[ing] instances in which reporting led to positive change, while at the same time being careful to protect confidentiality and not to spark retaliation.”

Finally, they argue that law firms should monitor the ethical conduct of their attorneys (such as by using “software to monitor billing patterns…”).
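The article does not spell out how such billing-pattern monitoring would actually work. Purely as a hypothetical sketch (the data format, history requirement and threshold below are my own assumptions, not the authors’), a simple screen might flag days on which an attorney’s billed hours are far out of line with that attorney’s own history:

```python
# Hypothetical billing-pattern screen; illustrative only, not the authors' method.
# Flags days on which an attorney's billed hours are unusually high relative to
# that attorney's own history: a prompt for human review, not proof of misconduct.
from collections import defaultdict
from statistics import mean, stdev

def flag_unusual_billing(entries, z_threshold=3.0, min_history=20):
    """entries: iterable of (attorney, date, hours_billed) tuples."""
    by_attorney = defaultdict(list)
    for attorney, date, hours in entries:
        by_attorney[attorney].append((date, hours))

    flags = []
    for attorney, days in by_attorney.items():
        hours = [h for _, h in days]
        if len(hours) < min_history:      # need some history before flagging anyone
            continue
        mu, sigma = mean(hours), stdev(hours)
        for date, h in days:
            if sigma > 0 and (h - mu) / sigma > z_threshold:
                flags.append((attorney, date, h))
    return flags
```

Real tools would presumably look at more than daily totals (e.g., round-number billing or duplicate entries), but even a crude screen of this kind illustrates the monitoring idea.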

For readers of this blog who share my interest both in behavioral ethics and compliance programs for lawyers, “Behavioral Legal Ethics” is an important article indeed (and I am looking forward to the publication of the final version in the coming months).

Is your C&E program ready for a behavioral ethics upgrade?

Back-to-school time is almost upon us, so now’s as good a time as any for C&E officers to ask: what should we learn from the many scholars conducting behavioral ethics research that can help strengthen our C&E programs?  Here – by way of links to some recent posts – are thoughts on specific ways in which programs can be upgraded using behavioral ethics ideas and information:

Promotion of whistleblowing

Gifts and entertainment policies

Imposing discipline and promoting accountability  

Risk assessment  (also discussed in this post)

Training  and other communications  (discussing and linking to an important article by Scott Killingsworth).

Note that those are just 2013 pieces from the blog.  Many earlier postings about what behavioral ethics can mean for C&E programs are collected in this article from the Hong Kong-based governance journal CSj.

Will behavioral ethics be on your company’s “final exam?”  That’s not for me to say – since I don’t prepare the “tests.”  For that you’ll have to ask your (friendly?) local prosecutor.

Include me out: whistleblowing and a “larger loyalty”

Loyalty plays a profound role in many aspects of modern civilization, including in the business world. Samuel Goldwyn – the font of much memorably expressed folk wisdom – was doubtless speaking for many business leaders in saying, “I’ll take fifty percent efficiency to get one hundred percent loyalty.” But when it comes to C&E, pure loyalty can be a mixed blessing.

In their soon-to-be-published paper “The Whistleblower’s Dilemma and the Fairness-Loyalty Tradeoff,”  Adam Waytz, of Northwestern University,  and James Dungan  and Liane Young,  both of Boston College, examine the powerful psychological conflict facing many potential whistleblowers: “Whistleblowing promotes justice and fairness but can also appear disloyal.” They note that prior “studies have shown that fairness norms typically dominate behavior but may be overwritten in contexts that pit fairness against loyalty,” and  show through five studies they conducted that “differences in valuing fairness over loyalty predict willingness to report unethical behavior.”  Their “findings offer recommendations for how to promote fairness and to encourage whistleblowing,” including reframing whistleblowing to be seen as reflecting a “’larger loyalty’… toward a more universal social circle…”

Of course, none of this will be a great revelation to C&E professionals. But, as with other studies that this blog has covered from the realm of behavioral ethics  or moral intuitionism,  having the data to back up what is anecdotally known from “field work”  can be helpful in focusing management and boards on key C&E program needs –  in this case the need to reframe  whistleblowing as reflecting a “larger loyalty.” 

C&E practitioners have long looked for ways to do just that.  For instance, years ago I helped to develop a short C&E training video that sought to evoke feelings of a larger loyalty by showing the faces of colleagues laid off in the wake of an accounting scandal that could have been, but wasn’t, stopped in the early stages by a potential whistleblower, and I imagine that other training programs have taken a similar approach.  In a somewhat similar vein, Scott Killingsworth has recently published a very fine paper in The Georgetown Journal of Legal Ethics – discussed and linked to here – on compliance communications strategies that can help companies transcend the “us-versus-them” mindset that is harmful from a compliance perspective.

A related facet of promoting a larger loyalty is striving – through various measures – to maintain “organizational justice” at a company. As described in this earlier post, “According to research conducted by the Corporate Executive Board (the “CEB”) of about 600,000 employees of more than 140 companies, one of the most important steps to promoting compliance is maintaining ‘organizational justice.’  The CEB notes: ‘A firm’s culture has organizational justice when employees agree that 1) their firm responds quickly and consistently to proven unethical behavior and 2) that unethical behavior is not tolerated in their department.’” The significance of this research is that it suggests that individuals are more likely to embrace their company’s shareholders and fellow employees (rather than just a small circle of co-workers) within a larger loyalty if the loyalty is seen as mutual.

One of Goldwyn’s other celebrated sayings was “Include me out,” but – although the context for the remark is unknown to me –  I doubt this concerned whistleblowing.  (Okay – I basically quoted it just for fun.)  However, clearly programs need to find ways to make potential whistleblowers want to be included “in,” so that they share key information about wrongdoing with companies rather than letting the problems fester and grow. And the Waytz-Dungan-Young paper should help C&E officers make the case to leaders in their respective companies for finding effective ways to do that.

A final point: because this post is more about means than ends, I have deliberately taken a relatively narrow view of what a “larger loyalty” might mean, i.e., essentially one that is dictated solely by fiduciary duties.  However, my own view is that – as an ethical, if not legal, matter – we need to create a larger set of loyalties than this, as reflected in an earlier piece about our duties to future generations.

For further reading:

A post on whistleblower policies and procedures.

Other ways to use behavioral ethics knowledge and ideas to upgrade your C&E program.

And here’s the agenda for the Thomson Reuters Corporate Whistleblowing Forum being held next month in NYC, at which I’ll be speaking.

 

 

Gifts, entertainment and “soft-core” corruption

I once asked students in an executive MBA ethics class if they thought that their employer organizations should have restrictive policies on gift receiving.  Nearly all said that such policies were unnecessary – as the students were sure that they wouldn’t be corrupted by gifts from suppliers or customers.  I then asked if the school should allow teachers to receive gifts and entertainment from students. As you can imagine, the response was very different.

The ethical challenges of dealing with gifts have been with us since at least around 1500 B.C. when, according to this piece on the Knowledge at Wharton web site, “Gimil-Ninurta — a poor citizen of the city of Nippur in Mesopotamia — tried to enlist the assistance of the mayor of Nippur by offering him a goat. The mayor accepted the goat, but rather than providing assistance ordered that Gimil-Ninurta be beaten.” However, the extraordinary focus in present times on preventing bribery has drawn unprecedented attention to more “soft-core” versions of the problem, including traditional gift giving.  (For instance, in the past week, several large companies in Malaysia adopted a “no festive gift” policy.)

Global companies addressing issues of gift giving and receiving in the current environment indeed have a lot to deal with.

First, there is a growing body of laws and rules from around the world governing gift giving that must be complied with.  (The co-publisher of this blog – ethiXbase – maintains an extensive database of these standards for its members.)  For many companies and individuals, what previously had been in the realm of ethics/good-to-do has moved squarely into the province of law/need-to-do.

Second, one needs to be mindful of different cultural standards relating to gift giving and other COI-related issues, as discussed in this guest post  by Lori Tansey Martens of the International Business Ethics Institute.  A gifts-and-entertainment policy that is culturally narrow-minded can be ineffective.

Third, the operational aspects of compliance/ethics in this area can be daunting. Among other things, not only global companies but also organizations in highly regulated businesses may need to use technology to promote and track compliance to a sufficient degree, as described in this guest post by Bill Sacks of HCCS.
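Neither this post nor the guest post describes how such technology works under the hood. Purely as an illustration (the limits, recipient categories and function below are hypothetical, not drawn from any vendor’s product), a gift-and-entertainment tool might check each proposed gift against a per-gift limit and the recipient’s running annual total before routing it for approval:

```python
# Hypothetical gift-and-entertainment pre-approval check; illustrative only.
# The limits and categories are invented; a real policy would reflect local
# law, currency conversion and the company's own thresholds.
ANNUAL_LIMIT_PER_RECIPIENT = 250.0                      # assumed ceiling (USD)
PER_GIFT_LIMIT = {"default": 100.0, "government_official": 0.0}

def review_gift(recipient_type, value_usd, prior_total_usd):
    """Return 'approve', 'escalate', or 'deny' for a proposed gift."""
    per_gift_limit = PER_GIFT_LIMIT.get(recipient_type, PER_GIFT_LIMIT["default"])
    if value_usd > per_gift_limit:
        return "deny" if per_gift_limit == 0.0 else "escalate"
    if prior_total_usd + value_usd > ANNUAL_LIMIT_PER_RECIPIENT:
        return "escalate"                               # cumulative giving needs sign-off
    return "approve"

# Example: a $75 dinner for a customer who has already received $200 of
# hospitality this year is escalated for compliance review, not auto-approved.
print(review_gift("customer", 75.0, 200.0))             # -> "escalate"
```

The value of such a tool lies less in the arithmetic than in creating a record of gifts given and received that can be monitored and audited.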

Moreover, all companies – regardless of where they operate or what they do – should have well-thought-out compliance standards for gift giving and receiving. This post describes some of the considerations that might go into such a policy, and this survey conducted by the Society of Corporate Compliance and Ethics in 2012 (available to members on the organization’s web site) could be useful for policy drafting, as well. For global companies, this recent piece by Tom Fox on FCPA cases involving gifts, entertainment and travel should be a helpful resource.

Finally, one should consider the role of behavioral ethics in developing/implementing gift-related compliance measures, and particularly the fact that we tend to underestimate how much COIs can impact our judgment. For instance, last year, the Wall Street Journal reported on a study in which different groups of professionals were asked to assess the necessity of conflict-of-interest standards of conduct both for other professions and their own: “Doctors participating [in the study] tended to think [certain COI-related] strictures sounded pretty reasonable [when applied to financial planners]. However, when ‘financial planners’ was replaced by ‘doctors,’ and ‘investment companies’ by ‘pharmaceutical companies,’ the doctors started to raise objections — that the supposed conflicts were hypothetical, for example, and that no one’s views about which drugs to prescribe could ever be swayed by a coffee mug. And investment managers surveyed by the researchers reacted similarly: The rules for doctors sounded fine to them, but the ones for investment professionals seemed petty and unnecessary.”

Is there a behaviorist-based cure for this aspect of “soft-core” corruption? Dan Ariely’s column in this weekend’s Wall Street Journal – although not specifically about COIs/gifts – may be instructive on that score.  He was asked the broad question, “What is the best way to inject some rationality into our decision-making?” and responded, “I am not certain of the best way, but here is one approach that might help: When we face decisions, we are trapped within our own perspective—our own special motivations and emotions, our egocentric view of the world at that moment. To make decisions that are more rational, we want to eliminate those barriers and look at the situation more objectively. One way to do this is to think not of making a decision for yourself but of recommending a decision for somebody else you like. This lets you view the situation in a colder, more detached way and make better decisions.” His piece also describes the results of a fascinating experiment that helps demonstrate this.

One can readily see how this framework could be useful for promoting ethical and law-abiding behavior relating to gift giving and receiving, where our instincts might not be a reliable guide for identifying appropriate behavior.  Indeed, Ariely’s recommendation could help business people address many other areas of ethical challenge too.

Redrawing corporate fault lines using behavioral ethics

At various points in time – such as 1909, when the Supreme Court held that corporations could be criminally liable for the offenses of their employees; 1943, when the Court developed the “responsible corporate officer” doctrine; and 1991, when the Federal Sentencing Guidelines for Organizations went into effect – U.S. law has changed to meet new or newly appreciated risks of misconduct by or in corporations.  Is it now time for a legal rewrite using a behavioral ethics perspective?

In her recently published “Behavioral Science and Scienter in Class Action Securities Fraud Litigation”  in the Loyola University Chicago Law Journal, Ann Morales Olazábal of the University of Miami School of Business Administration argues that various behavioral economics/ethics research findings  warrant revisiting the intent requirements of securities fraud law.   She reviews studies showing that the “systematic cognitive errors and mental biases” of “overconfidence, over-optimism, attribution error and illusion of control, the anchoring and framing effects, and loss aversion” can impact decision making in business contexts, and notes: “In the aggregate, these biases establish a human mental environment that is not perfectly rational, but boundedly so.  Like other humans, executives and those who surround and support their decision making are subject to flaws in their thinking—subconscious predispositions to see things in self-serving ways that result in a failure to actively and accurately perceive risks and warning signs.”

These factors can contribute in various ways, she argues, to businesses committing securities fraud – but as currently applied the intent element of that offense is inadequate to address risks of this nature.  So, she proposes that courts utilize a more objective approach to proving that defendants were reckless in securities fraud cases. The enhanced accountability brought about by such a change in the law, Olazábal says, should help promote greater honesty in corporate financial disclosures by incenting organizations and executives to overcome the effects of cognitive biases – a danger that was unappreciated until the advent of behavioral studies.

Of course, changing the law is not within the job descriptions of most C&E officers, but setting internal standards of accountability generally is.  As discussed in this prior post, behavioral ethics research “underscores the importance of the Sentencing Guidelines expectation that organizations should impose discipline on employees not only for engaging in wrongful conduct but ‘for failing to take reasonable steps to prevent or detect wrongdoing by others’ – something relatively few companies do well (and some don’t do at all).”  The post also describes five steps C&E officers can take to strengthen their programs in this regard: “build the notion of supervisory accountability into their policies – e.g., in the managers’ duties section of a code of conduct; speak forcefully to the issue in C&E training and other communications for managers; train investigators on the notion of managerial accountability and address it in the forms they use so that they are required to determine in all inquiries if a manager’s being asleep at the switch led to the violation in question; publicize (in an appropriate way) that managers have in fact been disciplined for supervisory lapses; [and] have auditors take these requirements into account in their audits of investigative and disciplinary records.”  Indeed, at least some of these reflect the types of steps that Olazábal’s suggested change in the law might well inspire companies and executives to take.

Money and morals: Can behavioral ethics help “Mister Green” behave himself?

The Bible says that the love of money is the root of all kinds of evil (or, depending on the translation, some variation on that thought). On the other hand, noted historian Niall Ferguson has shown that money is the foundation of much human progress.

“Mister Green” has, of course, long been known for both his bad and good sides, with the latest view coming from a recent behaviorist study, “Seeing green: Mere exposure to money triggers a business decision frame and unethical outcomes,” by Maryam Kouchaki, Kristin Smith-Crowe, Arthur P. Brief and Carlos Sousa in Organizational Behavior and Human Decision Processes.  As described in the abstract of the piece: “In four studies, we examined the likelihood of unethical outcomes when the construct of money was activated through the use of priming techniques. The results of Study 1 demonstrated that individuals primed with money were more likely to demonstrate unethical intentions than those in the control group. In Study 2, we showed that participants primed with money were more likely to adopt a business decision frame. In Studies 3 and 4, we found that money cues triggered a business decision frame, which led to a greater likelihood of unethical intentions and behavior. Together, the results of these studies demonstrate that mere exposure to money can trigger unethical intentions and behavior, and that decision frame mediates this effect.”

The paper is well worth reading, not only for the above-described compelling findings but also for its review of other research on the impact of money on our behavior generally and social relationships in particular.  For instance, the authors describe one study in which participants “who were primed with money were more likely to choose an individual activity (e.g., four personal cooking lessons) over a group activity (e.g., an in-home catered dinner for four)” and also note that “researchers have demonstrated that activating the construct of money leads… to taking on more work for oneself, reduced helpfulness, and placing more distance between the self and others.”  This latter point is critical to the field of ethics because “[g]enerally speaking, morality has been said to be embedded in social relationships … The more tenuous the relationship, or social bond, the less morality matters.”  

Finally, the authors state: “Given the power of subtle environmental cues—such as the idea of money, discussed in this paper—organizations should identify the structural, institutional, and systematic factors that promote unethical behavior” and – citing a 2003 paper by Ann Tenbrunsel and others – further recommend that organizations attempt to establish “an ‘ethical infrastructure,’…formal and informal systems of communication, surveillance, and sanctioning mechanisms that should be aligned with strong organizational climates pertaining to ethics, justice, and respect.”

Of course, compliance and ethics programs are “ethical infrastructures,” and in the past ten years many organizations have indeed implemented such programs.  But far fewer have done so in an effective manner, and the general point of behavioral ethicists about our natural, but underappreciated, infirmities in ethical decision making suggests that more should be done in this regard – both by organizations on their own initiative and by government agencies in incenting companies to take such measures. (For a fuller discussion of why the government needs to step up its efforts in this regard see this paper from a 2012 RAND symposium.)

But beyond this general relevance of behavioral ethics to C&E programs, there are specific ways in which such programs can be fortified using behavioral ethics insights.  In this article published a few months ago in the Hong Kong-based corporate governance journal CSj, I offer a preliminary catalogue of such possibilities, and the study described by Kouchaki and her colleagues suggests that one should seek to identify such opportunities from the perspective of the particular risks posed by Mister Green.

To begin, organizations should consider as part of their periodic C&E risk assessments which, if any, of their “parts” – e.g., business, geographical or functional units – are extensively exposed to money. This does not mean, of course, that individuals in such units are likely to act unethically. However, the socially corrosive effects of money should be taken into account along with other C&E risk-relevant facts in designing and implementing mitigation measures, and not all parts of a business organization are likely to feel those effects to a uniform degree.
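Purely by way of hypothetical illustration (the units, metrics and weights below are invented, not taken from any actual risk assessment), such a review might score each unit on a few money-exposure indicators and surface the outliers for extra mitigation:

```python
# Hypothetical money-exposure screen for a C&E risk assessment; illustrative only.
# Units, metrics and weights are invented for the example.
UNITS = {
    "trading_desk":  {"cash_proximity": 0.9, "incentive_pay_share": 0.8, "customer_contact": 0.2},
    "field_service": {"cash_proximity": 0.1, "incentive_pay_share": 0.2, "customer_contact": 0.9},
    "procurement":   {"cash_proximity": 0.6, "incentive_pay_share": 0.3, "customer_contact": 0.4},
}
# Customer contact is treated as mitigating, reflecting the study's point that
# morality is embedded in social relationships.
WEIGHTS = {"cash_proximity": 0.5, "incentive_pay_share": 0.3, "customer_contact": -0.2}

def exposure_score(metrics):
    """Weighted sum of money-exposure indicators for one unit."""
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

ranked = sorted(UNITS, key=lambda u: exposure_score(UNITS[u]), reverse=True)
print(ranked)   # units at the top are candidates for targeted mitigation measures
```

The point is not the particular numbers but the discipline of asking, unit by unit, where Mister Green looms largest.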

To take an example from C&E history, energy utilities traditionally have benefitted ethically from a focus on service to customers – i.e., this is a business where relationships with others are strong. But in the 1990s, some of these companies acquired or developed trading businesses, in which there were counterparties to trade against rather than customers to be served, and where money had a far greater presence than it would for other utility employees, e.g., someone whose job was fixing electrical lines.  No surprise then that while some of these companies had generally strong C&E programs, their trading operations proved to be an ethical Achilles’ heel, often at great cost to their shareholders.

But how should one address these sorts of risks? One way is through C&E-related communications.  The idea here is not to try to banish any mention of money in the workplace; as the authors of the study note, “Money is a ubiquitous feature of modern life and business organizations, in particular.” However, for parts of a company identified by the above-described risk assessment, an organization may wish to deploy communications designed to remind employees, in an impactful way, of the many important relationships that the organization has with customers, stockholders, co-workers and others.  And that – combined with other C&E measures (e.g., targeted monitoring, auditing) – could help keep the lesser angels of Mister Green’s nature in check.

Finally, as with other posts on behavioral ethics, I don’t want to oversell the impact of this field of study on C&E, which should be based on knowledge from various disciplines – including management, philosophy, economics, law and, loosely speaking, anthropology – as well as psychology.  (Indeed, in my energy trading example, economics – meaning compensation structures – probably would have been a more important area of focus than behavioral ethics in preventing wrongdoing, although I do think the latter is relevant here, too.)  However, behavioral ethics does have the potential, for some organizations, to enhance various specific areas of such programs. More generally – by setting an example of using cutting-edge social science research to address real C&E risks – perhaps it will help elevate the field as a whole.

How does your compliance and ethics program deal with “conformity bias”?

In his blog on the Ethics Unwrapped website published by the University of Texas’ McCombs School of Business,  Prof. Robert Prentice reviews some important recent research on the behavioralist phenomenon of “conformity bias” – “the tendency of people to take their cues as to the proper way to think and act from those around them.”  As he describes, in one experiment conducted by Francesca Gino, “students were more likely to cheat when they learned that members of their ‘in-group’ did so, but less likely when learning the same about members of a rival group.”  In a related vein, a study by Scott A. Wright, John B. Dinsmore and James J. Kellaris showed that the identity of the victim was also influential in forming individuals’ views of cheating – and specifically that “in-group members who scammed other in-group members were judged more harshly than in-group members who scammed out-group members.” (Citations/links to these and other studies on conformity bias can be found in Prentice’s post – which I encourage you to read.)

As with various other behavioral ethics concepts previously reviewed in the COI Blog, the ideas here may seem obvious (“When in Rome…”) – but being able to prove the points with data could help C&E officers get the attention they need in their companies to deal with conformity-bias-based ethical challenges.  But even if the leaders in their organizations agree that something should be done about conformity bias, what is that something?

One step in this direction – which potentially covers a lot of ground – is to include a conformity bias perspective in C&E risk assessment. For instance, where, based on the findings of a risk assessment, the victims of a particular type of violation are likely to be seen more as out-group members than in-group ones, that may suggest the need for extra C&E mitigation measures (of various kinds) to address the risk area in question.  Similarly, risk assessment surveys should (as many, but not all, currently do) target regional or business-line-based employee populations that may be setting a bad example for other employees. Additionally, for the purposes of identifying conformity-bias-based risks, one should consider whether for some employee populations the most relevant in-group is defined less by the culture of their organization than by that of their industry, as industries (as much as companies or geographies) can have unethical cultures (as suggested most recently in this Wall Street Journal story on the LIBOR manipulation scandal).

More broadly, just as the sufficiency of internal controls (policies, procedures, etc.) needs to be assessed in any analysis of risk, so does that of “inner controls,” which is another way of thinking about how various behavioral-ethics-related factors diminish or enhance the risk of C&E violations. That is, the weaker the inner controls (based not only on conformity bias but on other risk-causing phenomena, behaviorist or otherwise), the greater the need for traditional internal controls.

A second such type of measure – which also is potentially broad – is in the realm of training and communications, and specifically finding ways to highlight the connections employees may have to those who otherwise are likely to be viewed as out-group members.   The good news here, as Prentice writes, is that “[a]mong the most interesting findings in this entire line of research is how little it takes for us to view someone as part of our in-group, or of an out-group.”

At least in theory, this seems to underscore the benefits of a broad “stakeholder” approach to C&E. Ultimately, however, what may be needed here is less the skill of those who draft codes of conduct than that of those who can reach us on a deeper level regarding how we should really view our “group” membership – as was perhaps most famously done by Charles Dickens.

 

Insider trading, behavioral ethics and effective “inner controls”

Late last week the U.S. Securities and Exchange Commission announced that it had reached a settlement with a hedge fund involving the largest penalty ever imposed for insider trading. But it is a fair bet that even this record-breaking fine will do little to deter future insider trading, because of the unique compliance challenges raised by this area of the law.

One challenge is that insider trading can be enormously difficult to detect.  This is particularly so where the individual misusing the information is neither an insider herself nor tied to one in an obvious way. Insider trading is sometimes described as a “perfect crime” and, sad to say, in many instances it doubtless proves to be just that. (Part of the way we know this is that “numerous academic studies [have] … [u]ncover[ed] indicators like spikes in a stock’s trading volume just before key information, such as quarterly earnings, is made public…” which suggest that there is a fair bit of insider trading going on – yet the number of actual prosecutions in this area is relatively low.)
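The academic studies quoted above do not, of course, set out their methodologies in this post. As a rough, hypothetical sketch of the kind of indicator described (the window sizes and threshold are my own assumptions), one could compare trading volume in the days just before an announcement with the stock’s usual volume:

```python
# Hypothetical pre-announcement volume-spike indicator; illustrative only,
# not the methodology of any study referenced in the quote above.
from statistics import mean, stdev

def volume_spike_score(daily_volumes, pre_days=3, baseline_days=60):
    """daily_volumes: chronological daily share volumes ending the day before
    the announcement. Returns a z-score of the average pre-announcement volume
    relative to the preceding baseline period."""
    if len(daily_volumes) < baseline_days + pre_days:
        raise ValueError("not enough trading history")
    baseline = daily_volumes[-(baseline_days + pre_days):-pre_days]
    recent = daily_volumes[-pre_days:]
    mu, sigma = mean(baseline), stdev(baseline)
    return (mean(recent) - mu) / sigma if sigma > 0 else 0.0

# A score well above, say, 3.0 would mark the announcement as worth a closer
# look; by itself it proves nothing about any particular trader.
```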

The other challenge  (which is germane to the behavioral ethics aspect of this blog) is that the opportunity for insider trading may fail to trigger the sorts of “inner controls” – meaning an individual’s moral restraints – that the prospect of committing various other crimes typically does.  Part of the reason for this is that while in most instances (at least under U.S. law) insider trading involves a breach of fiduciary duty – i.e., improper disclosure of a corporate secret –  often the individual benefitting from that transgression is several steps removed from the original wrongdoing (due to the information being passed along or “tipped”).  Per several behavioral ethics experiments, “distance” between the wrongful act itself and the beneficiary of the transgression increases the likelihood of wrongdoing, and in insider trading that distance can be significant indeed.

A related problem is that the specific victims of insider trading – typically anonymous market participants – are not evident to would-be violators. Per  other behavioral research, this second type of distance also tends to diminish internal moral restraints.  Moreover, this sense that insider trading is harmless is, in my view, exacerbated by the arguments of some commentators that such conduct should actually be lawful, to make markets more efficient.

Can any of this be remedied?  I’ll leave the detection issue to others, but on the behavioral ethics side I think it is imperative that these two types of distance be addressed by, among other things, imagining what things would be like if insider trading were not in fact a crime. Using this sort of “what if?” approach – as we did earlier with conflicts of interest generally – one can envision a world in which businesses are reluctant to engage in transactions that require confidentiality to be successful, which would hurt productivity in many ways.  This thought experiment also suggests that individuals and organizations would be reluctant to invest in capital markets that they fear may be rigged by insiders, which, in turn, would substantially raise the cost of equity to businesses.

Like “conflict of interest world,” insider trading world “is a place of needlessly diminished lives, resources and opportunities.” In my view, effective deterrence in this area requires greater recognition of these harms so that they can fully inform the operation of our inner controls.

Values, culture and effective compliance communications – the role of behavioral ethics

Compliance-related communications constitute a large part of the day-to-day work of many compliance-and-ethics departments.  But is this work being done in the most effective manner reasonably possible?

“Modeling the Message: Communicating Compliance through Organizational Values and Culture” – published last fall by attorney Scott Killingsworth in The Georgetown Journal of Legal Ethics – provides a thoughtful examination of what we can learn about compliance communications from various findings of behavioral science.  The article critiques the traditional approach to compliance communications – which focuses on avoidance of personal risks – as being premised on a “rational actor” theory that in recent years has been seriously undermined by the results of behavioral economics/ethics research. In this regard, Killingsworth argues: “Instead of conveying the message that compliance is non-negotiable, [the personal risk versus reward approach] implies that it may be negotiable if the price is right.”  An additional source of concern is that this way of communicating may send the implicit message “that management does not trust employees. Potential side effects of this message range from resentment, to an ‘us-versus-them’ attitude towards management, to a reverse-Pygmalion effect in which employees may tend to ‘live down’ to the low expectations that are projected upon them.”

As an alternative, Killingsworth draws upon the behaviorist concept of “framing” to suggest that communications framed in terms of values and ethics are more likely to be effective in reducing wrongdoing than are traditional compliance communications. In that connection, he describes a study showing “that over eighty percent of compliance choices [in the workplace] were motivated by internal perceptions of the legitimacy of the employer’s authority and by a sense of right and wrong, while less than twenty percent were driven by fear of punishment or expectation of reward.” A second benefit to the values-based approach is that it can better serve as “a source of internal guidance in novel situations” than does the traditional alternative.   Third, communications framed from the former perspective may enhance companies’ efforts to promote internal reporting of violations (obviously an important consideration in the Dodd-Frank era),  a contention that he bases on a study which showed that “the reporting of compliance violations encountered dramatically different effects depending on whether the subjects considered a particular infraction morally repugnant or not.”

As well as discussing communications per se, Killingsworth’s piece examines “the messages implicit in key company behaviors, which can either reinforce, undermine, or obliterate explicit compliance messages.”   So, while explicit communications are important, C&E officers must also “reach across functional boundaries to executive management and the human resources group and, if necessary, educate them about the principles of employee engagement and the value of consistent explicit and behavioral messaging that activates the employees’ values and brings out their [employees’] better natures.” The piece concludes with a list of other practical recommendations – concerning, among other things, culture assessments and communications strategies – for making all these good things happen.

Finally, I should emphasize that this posting only scratches the surface of what is in “Modeling the Message: Communicating Compliance through Organizational Values and Culture,” and I strongly encourage both C&E professionals seeking to up their respective companies’ communications efforts and behavioral scientists seeking to learn more about how their work can be put to practical use in compliance programs to read the piece in full.