Bias

Two very different types of bias will be examined in this blog: A) the situations in which bias involving business organizations should be treated like a traditional COI; and B) the ways in which often-unrecognized biases can inhibit ethical decision making, which is one of the principal teachings of behavioral ethics (i.e., “cognitive biases”).

Gifts, entertainment and “soft-core” corruption

I once asked students in an executive MBA ethics class if they thought that their employer organizations should have restrictive policies on gift receiving.  Nearly all said that such policies were unnecessary – as the students were sure that they wouldn’t be corrupted by gifts from suppliers or customers.  I then asked if the school should allow teachers to receive gifts and entertainment from students. As you can imagine, the response was very different.

The ethical challenges of dealing with gifts have been with us since at least around 1500 B.C. when, according to this piece on the Knowledge at Wharton web site, “Gimil-Ninurta — a poor citizen of the city of Nippur in Mesopotamia — tried to enlist the assistance of the mayor of Nippur by offering him a goat. The mayor accepted the goat, but rather than providing assistance ordered that Gimil-Ninurta be beaten.” However, the extraordinary focus in present times on preventing bribery has drawn unprecedented attention to more “soft-core” versions of the problem, including traditional gift giving.  (For instance, in the past week, several large companies in Malaysia adopted a “no festive gift” policy.)

Global companies addressing issues of gift giving and receiving in the current environment indeed have a lot to deal with.

First, there is a growing body of laws and rules from around the world governing gift giving with which companies must comply. (The co-publisher of this blog – ethiXbase – maintains an extensive database of these standards for its members.) For many companies and individuals, what previously had been in the realm of ethics/good-to-do has moved squarely into the province of law/need-to-do.

Second, one needs to be mindful of different cultural standards relating to gift giving and other COI-related issues, as discussed in this guest post  by Lori Tansey Martens of the International Business Ethics Institute.  A gifts-and-entertainment policy that is culturally narrow-minded can be ineffective.

Third, the operational aspects of compliance/ethics in this area can be daunting. Among other things, not only global companies but also organizations in highly regulated businesses may need to use technology to promote and track compliance to a sufficient degree, as described in this guest post by Bill Sacks of HCCS.

Moreover, all companies – regardless of where they operate or what they do – should have well thought out compliance standards for gift giving and receiving. This post  describes some of the considerations that might go into such a policy and this survey conducted by the Society of Corporate Compliance and Ethics in 2012 (available to members on the organization’s web site) could be useful for policy drafting, as well. For global companies, this recent piece by Tom Fox on FCPA cases involving gifts, entertainment and travel should be a helpful resource.

Finally, one should consider the role of behavioral ethics in developing/implementing gift-related compliance measures, and particularly the fact that we tend to underestimate how much COIs can impact our judgment. For instance, last year, the Wall Street Journal reported on a study in which different groups of professionals were asked to assess the necessity of conflict of interest standards of conduct both for other professions and their own: “Doctors participating in [the study] tended to think [certain COI-related] strictures sounded pretty reasonable [when applied to financial planners]. However, when ‘financial planners’ was replaced by ‘doctors,’ and ‘investment companies’ by ‘pharmaceutical companies,’ the doctors started to raise objections — that the supposed conflicts were hypothetical, for example, and that no one’s views about which drugs to prescribe could ever be swayed by a coffee mug. And investment managers surveyed by the researchers reacted similarly: The rules for doctors sounded fine to them, but the ones for investment professionals seemed petty and unnecessary.”

Is there a behaviorist-based cure for this aspect of “soft-core” corruption? Dan Ariely’s column in this weekend’s Wall Street Journal – although not specifically about COIs/gifts – may be instructive on that score.  He was asked the broad question, “What is the best way to inject some rationality into our decision-making?” and responded, “I am not certain of the best way, but here is one approach that might help: When we face decisions, we are trapped within our own perspective—our own special motivations and emotions, our egocentric view of the world at that moment. To make decisions that are more rational, we want to eliminate those barriers and look at the situation more objectively. One way to do this is to think not of making a decision for yourself but of recommending a decision for somebody else you like. This lets you view the situation in a colder, more detached way and make better decisions.” His piece also describes the results of a fascinating experiment that helps demonstrate this.

One can readily see how this framework could be useful for promoting ethical and law abiding behavior relating to gift giving and receiving, where our instincts might not be a reliable guide for identifying appropriate behavior.  Indeed, Ariely’s recommendation could help business people address many other areas of ethical challenge too.

Redrawing corporate fault lines using behavioral ethics

At various points in time – such as 1909, when the Supreme Court held that corporations could be criminally liable for the offenses of their employees; 1943, when the Court developed the “responsible corporate officer” doctrine; and 1991, when the Federal Sentencing Guidelines for Organizations went into effect – U.S. law has changed to meet new or newly appreciated risks of misconduct by or in corporations. Is it now time for a legal rewrite using a behavioral ethics perspective?

In her recently published “Behavioral Science and Scienter in Class Action Securities Fraud Litigation”  in the Loyola University Chicago Law Journal, Ann Morales Olazábal of the University of Miami School of Business Administration argues that various behavioral economics/ethics research findings  warrant revisiting the intent requirements of securities fraud law.   She reviews studies showing that the “systematic cognitive errors and mental biases” of “overconfidence, over-optimism, attribution error and illusion of control, the anchoring and framing effects, and loss aversion” can impact decision making in business contexts, and notes: “In the aggregate, these biases establish a human mental environment that is not perfectly rational, but boundedly so.  Like other humans, executives and those who surround and support their decision making are subject to flaws in their thinking—subconscious predispositions to see things in self-serving ways that result in a failure to actively and accurately perceive risks and warning signs.”

These factors can contribute in various ways, she argues, to businesses committing securities fraud – but as currently applied the intent element of that offense is inadequate to address risks of this nature.  So, she proposes that courts utilize a more objective approach to proving that defendants were reckless in securities fraud cases. The enhanced accountability brought about by such a change in the law, Olazábal says, should help promote greater honesty in corporate financial disclosures by incenting organizations and executives to overcome the effects of cognitive biases – a danger that was unappreciated until the advent of behavioral studies.

Of course, changing the law is not within the job descriptions of most C&E officers, but setting internal standards of accountability generally is.  As discussed in   this prior post, behavioral ethics research “underscores the importance of the Sentencing Guidelines expectation that organizations should impose discipline on employees not only for engaging in wrongful conduct but ‘for failing to take reasonable steps to prevent or detect, wrongdoing by others’  – something relatively few companies do well (and some don’t do at all).”  The post also describes five steps C&E officers can take to strengthen their programs in this regard: “build the notion of supervisory accountability into their policies – e.g., in the managers’ duties section of a code of conduct; speak forcefully to the issue in C&E training and other communications for managers; train investigators on the notion of managerial accountability and address it in the forms they use so that they are required to determine in all inquiries if a manager’s being asleep at the switch led to the violation in question; publicize (in an appropriate way) that managers have in fact been disciplined for supervisory lapses; [and] have auditors take these requirements into account in their audits of investigative and disciplinary records.”  Indeed, at least some of these reflect the types of steps that Olazábal’s suggested change in the law might well inspire companies and executives to take.

Money and morals: Can behavioral ethics help “Mister Green” behave himself?

The Bible says that the love of money is the root of all kinds of evil (or, depending on the translation, some variation on that thought). On the other hand, noted historian Niall Ferguson has shown that money is the foundation of much human progress.

“Mister Green” has, of course, long been known for both his bad and good sides, with the latest view coming from a recent behaviorist study, “Seeing green: Mere exposure to money triggers a business decision frame and unethical outcomes,” by Maryam Kouchaki, Kristin Smith-Crowe, Arthur P. Brief and Carlos Sousa in Organizational Behavior and Human Decision Processes. As described in the abstract of the piece: “In four studies, we examined the likelihood of unethical outcomes when the construct of money was activated through the use of priming techniques. The results of Study 1 demonstrated that individuals primed with money were more likely to demonstrate unethical intentions than those in the control group. In Study 2, we showed that participants primed with money were more likely to adopt a business decision frame. In Studies 3 and 4, we found that money cues triggered a business decision frame, which led to a greater likelihood of unethical intentions and behavior. Together, the results of these studies demonstrate that mere exposure to money can trigger unethical intentions and behavior and that decision frame mediates this effect.”

The paper is well worth reading, not only for the above-described compelling findings but also for its review of other research on the impact of money on our behavior generally and social relationships in particular.  For instance, the authors describe one study in which participants “who were primed with money were more likely to choose an individual activity (e.g., four personal cooking lessons) over a group activity (e.g., an in-home catered dinner for four)” and also note that “researchers have demonstrated that activating the construct of money leads… to taking on more work for oneself, reduced helpfulness, and placing more distance between the self and others.”  This latter point is critical to the field of ethics because “[g]enerally speaking, morality has been said to be embedded in social relationships … The more tenuous the relationship, or social bond, the less morality matters.”  

Finally, the authors state: “Given the power of subtle environmental cues—such as the idea of money, discussed in this paper—organizations should identify the structural, institutional, and systematic factors that promote unethical behavior” and – citing a 2003 paper by Ann Tenbrunsel and others – further recommend that organizations attempt to establish “an ‘ethical infrastructure,’…formal and informal systems of communication, surveillance, and sanctioning mechanisms that should be aligned with strong organizational climates pertaining to ethics, justice, and respect.”

Of course, compliance and ethics programs are “ethical infrastructures”  and in the past ten years many organizations have indeed implemented such programs.  But far fewer have done so in an effective manner, and the general point of behavioral ethicists about our natural, but underappreciated, infirmities in ethical decision making suggests that more should be done in this regard – both by organizations on their own initiative and by government agencies in incenting companies to take such measures. (For a fuller discussion of why the government needs to step up its efforts in this regard see this paper from a 2012 Rand Symposium. )  

But beyond this general relevance of behavioral ethics to C&E programs, there are specific ways in which such programs can be fortified using behavioral ethics insights.  In this article published a few months ago in the Hong Kong based corporate governance journal CSj, I offer a preliminary catalogue of such possibilities  and the  study described by Kouchaki  and her colleagues suggests that one should seek to identify such opportunities from the perspective of the particular risks posed by Mister Green.

To begin, organizations should consider as part of their periodic C&E risk assessments which, if any, of their “parts” – e.g., business, geographical or functional units – are extensively exposed to money. This does not mean, of course, that individuals in such units are likely to act unethically. However, the socially corrosive effects of money should be taken into account along with other C&E-risk-relevant facts in designing and implementing mitigation measures, and not all parts of a business organization are likely to feel those effects to a uniform degree.
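To make the weighting idea concrete, here is a minimal sketch of how an assessment team might fold a “money exposure” rating into unit-level risk scoring. It is purely illustrative: the unit names, scores, weight and function name are all hypothetical, not drawn from any actual assessment methodology.

```python
def rank_units_by_money_risk(units, money_weight=0.3):
    """Illustrative only: combine a unit's baseline C&E risk score with a
    separate money-exposure rating, then rank units for mitigation focus.

    units: list of dicts with hypothetical keys 'name', 'base_risk' and
    'money_exposure', each rated 0-10 by the assessment team.
    """
    def score(unit):
        # Heavier money exposure nudges the unit up the risk ranking.
        return unit["base_risk"] + money_weight * unit["money_exposure"]

    return sorted(units, key=score, reverse=True)

# Hypothetical assessment inputs for three units of a utility:
units = [
    {"name": "customer service", "base_risk": 4, "money_exposure": 2},
    {"name": "line maintenance", "base_risk": 3, "money_exposure": 1},
    {"name": "energy trading",   "base_risk": 6, "money_exposure": 9},
]
ranked = rank_units_by_money_risk(units)
```

In this made-up example the trading unit tops the ranking, which is the kind of signal that would prompt the targeted mitigation measures discussed below; any real assessment would, of course, rest on far richer inputs than a single weighted score.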

To take an example from C&E history, energy utilities traditionally have benefitted ethically from a focus on service to customers – i.e., this is a business where relationships with others are strong. But in the 1990s, some of these companies acquired or developed trading businesses, in which there were counterparties to trade against rather than customers to be served, and where money had a far greater presence than it would for other utility employees, e.g., someone whose job was fixing electrical lines. No surprise, then, that while some of these companies had generally strong C&E programs, their trading operations proved to be an ethical Achilles’ heel, often at great cost to their shareholders.

But how should one address these sorts of risks? One way is through C&E-related communications. The idea here is not to try to banish any mention of money in the workplace; as the authors of the study note, “Money is a ubiquitous feature of modern life and business organizations, in particular.” However, for parts of a company identified by the above-described risk assessment, an organization may wish to deploy communications designed to remind employees, in an impactful way, of the many important relationships that the organization has with customers, stockholders, co-workers and others. And that – combined with other C&E measures (e.g., targeted monitoring, auditing) – could help keep the lesser angels of Mister Green’s nature in check.

Finally, as with other posts on behavioral ethics, I don’t want to oversell the impact of this field of study on C&E, which should be based on knowledge from various disciplines – including management, philosophy, economics, law and, loosely speaking,  anthropology – as well as psychology.  (Indeed, in my energy trading example,  economics – meaning compensation structures  – probably would have been a more important area of focus than  behavioral ethics in preventing wrongdoing, although I do think the latter is relevant here, too.)   However,  behavioral ethics does have for some organizations the potential to enhance various specific areas of such programs. More generally –  by setting an example of using cutting edge social science research to address real C&E risks –  perhaps it will help  elevate the field as a whole.

How does your compliance and ethics program deal with “conformity bias”?

In his blog on the Ethics Unwrapped website published by the University of Texas’ McCombs School of Business,  Prof. Robert Prentice reviews some important recent research on the behavioralist phenomenon of “conformity bias” – “the tendency of people to take their cues as to the proper way to think and act from those around them.”  As he describes, in one experiment conducted by Francesca Gino, “students were more likely to cheat when they learned that members of their ‘in-group’ did so, but less likely when learning the same about members of a rival group.”  In a related vein, a study by Scott A. Wright, John B. Dinsmore and James J. Kellaris showed that the identity of the victim was also influential in forming individuals’ views of cheating – and specifically that “in-group members who scammed other in-group members were judged more harshly than in-group members who scammed out-group members.” (Citations/links to these and other studies on conformity bias can be found in Prentice’s post – which I encourage you to read.)

As with various other behavioral ethics concepts previously reviewed in the COI Blog, the ideas here may seem obvious (“When in Rome…”) – but being able to prove the points with data could help C&E officers get the attention they need in their companies to deal with conformity-bias-based ethical challenges. But even if the leaders in their organizations agree that something should be done about conformity bias, what is that something?

One step in this direction – which potentially covers a lot of ground – is to include a conformity bias perspective in C&E risk assessment. For instance, where, based on the findings of a risk assessment, the victims of a particular type of violation are likely to be seen more as out-group members than in-group ones, that may suggest the need for extra C&E mitigation measures (of various kinds) to address the risk area in question. Similarly, risk assessment surveys should (as many, but not all, currently do) target regional or business-line-based employee populations that may be setting a bad example for other employees. Additionally, one should – for the purposes of identifying conformity-bias-based risks – consider whether for some employee populations the most relevant in-group is defined less by the culture of your organization than by that of their industry, as industries (as much as companies or geographies) can have unethical cultures (as suggested most recently in this Wall Street Journal story on the LIBOR manipulation scandal).

More broadly, just as the sufficiency of internal controls (policies, procedures, etc.) needs to be assessed in any analysis of risk, so does that of “inner controls,” which is another way of thinking about how various behavioral-ethics-related factors diminish or enhance the risk of C&E violations. That is, the weaker the inner controls (based not only on conformity bias but other risk-causing phenomena, behaviorist or otherwise), the greater the need for traditional internal controls.

A second such type of measure – which also is potentially broad – is in the realm of training and communications, and specifically finding ways to highlight the connections employees may have to those who otherwise are likely to be viewed as out-group members.   The good news here, as Prentice writes, is that “[a]mong the most interesting findings in this entire line of research is how little it takes for us to view someone as part of our in-group, or of an out-group.”

At least in theory, this seems to underscore the benefits of a broad “stakeholder” approach to C&E. Ultimately, however, what may be needed here is less the skills of those who draft codes of conduct than of those who can reach us on a deeper level regarding how we should really view our “group” membership – as was perhaps most famously done by Charles Dickens.

 

Insider trading, behavioral ethics and effective “inner controls”

Late last week the U.S. Securities and Exchange Commission announced that it had reached a settlement with a hedge fund involving the largest penalty ever imposed for insider trading. But it is a fair bet that even this record-breaking fine will do little to deter future insider trading, because of the unique compliance challenges raised by this area of the law.

One challenge is that insider trading can be enormously difficult to detect.   This is particularly so where the individual misusing the information is neither an insider herself nor tied to one in an obvious way. Insider trading is sometimes described as a “perfect crime” and, sad to say, in many instances it doubtless proves to be just that. (Part of the way we know this is that “numerous academic studies [have] …. [u]ncover[ed] indicators like spikes in a stock’s trading volume just before key information, such as quarterly earnings, is made public…” which suggest that there is a fair bit of insider trading going on –  yet the number of actual prosecutions in this area is relatively low. )
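The volume-spike indicator that those academic studies rely on can be sketched in a few lines. What follows is a toy screen on made-up numbers, not any researcher's or regulator's actual method; the function name, window sizes and threshold are all assumptions.

```python
from statistics import mean, stdev

def pre_announcement_volume_flag(daily_volumes, announcement_index,
                                 baseline_days=30, window_days=3,
                                 z_threshold=3.0):
    """Toy screen: does average trading volume in the few days just before
    an announcement sit far above the preceding baseline period?

    Returns (flagged, z) where z measures the pre-announcement window's
    average volume in baseline standard deviations above the baseline mean.
    """
    start = announcement_index - window_days
    window = daily_volumes[start:announcement_index]
    baseline = daily_volumes[start - baseline_days:start]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False, 0.0
    z = (mean(window) - mu) / sigma
    return z > z_threshold, z

# Made-up volumes: 30 ordinary days, then a 3x spike right before "earnings".
volumes = [900_000, 1_100_000] * 15 + [3_000_000] * 3
flagged, z = pre_announcement_volume_flag(volumes, announcement_index=33)
```

On these invented numbers the spike is flagged easily; the studies' point is that such anomalies show up in market data far more often than prosecutions do.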

The other challenge  (which is germane to the behavioral ethics aspect of this blog) is that the opportunity for insider trading may fail to trigger the sorts of “inner controls” – meaning an individual’s moral restraints – that the prospect of committing various other crimes typically does.  Part of the reason for this is that while in most instances (at least under U.S. law) insider trading involves a breach of fiduciary duty – i.e., improper disclosure of a corporate secret –  often the individual benefitting from that transgression is several steps removed from the original wrongdoing (due to the information being passed along or “tipped”).  Per several behavioral ethics experiments, “distance” between the wrongful act itself and the beneficiary of the transgression increases the likelihood of wrongdoing, and in insider trading that distance can be significant indeed.

A related problem is that the specific victims of insider trading – typically anonymous market participants – are not evident to would-be violators. Per  other behavioral research, this second type of distance also tends to diminish internal moral restraints.  Moreover, this sense that insider trading is harmless is, in my view, exacerbated by the arguments of some commentators that such conduct should actually be lawful, to make markets more efficient.

Can any of this be remedied? I’ll leave the detection issue to others, but on the behavioral ethics side I think it is imperative that these two types of distance be addressed by, among other things, imagining what things would be like if insider trading were not in fact a crime. Using this sort of “what if?” approach – as we did earlier with conflicts of interest generally – one can envision a world in which businesses are reluctant to engage in transactions that require confidentiality to be successful, which would hurt productivity in many ways. This thought experiment also suggests that individuals and organizations would be reluctant to invest in capital markets that they fear may be rigged by insiders, which, in turn, would substantially raise the cost of equity to businesses.

Like “conflict of interest world,” insider trading world “is a place of needlessly diminished lives, resources and opportunities.” In my view, effective deterrence in this area requires greater recognition of these harms so that they can fully inform the operation of our inner controls.

Values, culture and effective compliance communications – the role of behavioral ethics

Compliance-related communications constitute a large part of the day-to-day work of many compliance-and-ethics departments.  But is this work being done in the most effective manner reasonably possible?

“Modeling the Message: Communicating Compliance through Organizational Values and Culture,” – published last fall by attorney  Scott Killingsworth in The Georgetown Journal of Legal Ethics  – provides a thoughtful examination of what we can learn about compliance  communications from various findings of behavioral science.  The article critiques the traditional approach to compliance communications – which focuses on avoidance of personal risks  – as being premised on a  “rational actor” theory that in recent years has been seriously undermined by the results of behavioral economics/ethics research. In this regard, Killingsworth argues: “Instead of conveying the message that compliance is non-negotiable, [the personal risk versus reward approach] implies that it may be negotiable if the price is right.”  An additional source of concern is that this way of communicating may send the implicit message “that management does not trust employees. Potential side effects of this message range from resentment, to an ‘us-versus-them’ attitude towards management, to a reverse-Pygmalion effect in which employees may tend to ‘live down’ to the low expectations that are projected upon them.”

As an alternative, Killingsworth draws upon the behaviorist concept of “framing” to suggest that communications framed in terms of values and ethics are more likely to be effective in reducing wrongdoing than are traditional compliance communications. In that connection, he describes a study showing “that over eighty percent of compliance choices [in the workplace] were motivated by internal perceptions of the legitimacy of the employer’s authority and by a sense of right and wrong, while less than twenty percent were driven by fear of punishment or expectation of reward.” A second benefit to the values-based approach is that it can better serve as “a source of internal guidance in novel situations” than does the traditional alternative.   Third, communications framed from the former perspective may enhance companies’ efforts to promote internal reporting of violations (obviously an important consideration in the Dodd-Frank era),  a contention that he bases on a study which showed that “the reporting of compliance violations encountered dramatically different effects depending on whether the subjects considered a particular infraction morally repugnant or not.”

As well as discussing communications per se, Killingsworth’s piece examines “the messages implicit in key company behaviors, which can either reinforce, undermine, or obliterate explicit compliance messages.”   So, while explicit communications are important, C&E officers must also “reach across functional boundaries to executive management and the human resources group and, if necessary, educate them about the principles of employee engagement and the value of consistent explicit and behavioral messaging that activates the employees’ values and brings out their [employees’] better natures.” The piece concludes with a list of other practical recommendations – concerning, among other things, culture assessments and communications strategies – for making all these good things happen.

Finally, I should emphasize that this posting only scratches the surface of what is in “Modeling the Message: Communicating Compliance through Organizational Values and Culture,” and I strongly encourage both C&E professionals seeking to up their respective companies’ communications efforts and behavioral scientists seeking to learn more about how their work can be put to practical use in compliance programs to read the piece in full.

How many ways can “behavioral ethics” help improve compliance programs?

This cover story from the February issue of CSj – a corporate governance journal published in Hong Kong – counts the ones that seem important to me. They include ideas for enhancing how companies address conflicts of interest, assess C&E risk, provide training and communications, and engage in investigations and discipline.

But it is only a start.

What would you add to the list?

 

An ethical duty of open-mindedness

In his latest post on the Ethics Unwrapped web site, Professor Robert Prentice of the University of Texas’ McCombs School of Business addresses the increasing polarization in U.S. politics from the perspective of some important emerging areas of psychology. He discusses The Righteous Mind, by Jonathan Haidt of NYU’s Stern School of Business, a favorite of the COI Blog (see posts here, here, and here), and a paper by Dan M. Kahan of Yale Law School, which I look forward to reading.

Near the end of the piece, Prentice proposes “that it is the ethical obligation of citizens of any democracy to seek out factual evidence on both sides of important policy debates, to study it carefully, and to evaluate it as objectively as possible,”  and asks readers if they agree with this.  I enthusiastically second the motion.

Of course, getting there from here is a daunting prospect.  Indeed, while enlightening, Haidt’s showing how much our current moral standards have their roots in evolution can also be discouraging, given how long evolutionary change can take and how pressing are some of the problems that can only be addressed through open-mindedness (particularly with respect to climate change and our national debt). So, an ethical crusade for objectivity needs all the help it can get.

As odd as it will sound to some, I do think that business ethics can offer a helping hand to political ethics.  That is, through the enforcement of standards relating to discrimination, conflicts of interest and other areas of law and ethics, individuals in the business world are routinely pushed and pulled toward non-biased thinking, to the point that it becomes a habit for many.

I don’t for a moment suggest that the mission of those standards has been accomplished.  Indeed, the COI Blog has had numerous posts on the forms of bias that underpin behavioral ethics in business settings and has also suggested that when it comes to one ethical duty – that of honesty – the world of work might benefit from the habits of mind developed at home.

But  compared to the realm of politics, the business world is a veritable temple of reason and objectivity.  And, the everyday experience of countless individuals in the workplace may provide a foundation upon which to build the important duty of open-mindedness that Prentice proposes.

 

Should dentists and lawyers be rotated, like auditors?

This is another post in our ongoing series on “behavioral ethics.” Prior posts in the series can be found here.

As a general matter, the professional relationships that expert service providers (e.g., doctors, lawyers, accountants) have with those they serve carry the potential for conflicts of interest, at least where the provider’s advice can impact how she is paid. Of course, we rely on providers’ professional standards of conduct to mitigate those conflicts. Beyond this, we tend to think that having a good personal relationship with an expert provider will serve as a useful “inner control,” and steel the provider against the potential for COIs inherent in the very economic nature of the relationship.

Too bad that it apparently doesn’t work.

Among the many fascinating experiments discussed in Dan Ariely’s recently published The Honest Truth About Dishonesty: How We Lie To Everyone – Especially Ourselves is one that he conducted with Janet Schwartz and Mary Frances Luce showing that in the health care context long-term relationships between expert providers and patients actually contribute to more expensive – but not necessarily better – treatment.  (The study itself is available for purchase here.)    As Ariely says, this finding is surprising: “After all, if we have known our advisors for many years, wouldn’t they start caring about us more?”  But, he notes: “Another possibility… is that as the relationship extends and grows, our paid advisers – intentionally or not – become more comfortable recommending treatments that are in their own best interest.” And, on top of this, the study shows that “long-term patients are more likely to accept the [provider’s] advice based on the trust that their relationship has engendered,” diminishing a key external control that can protect against conflicts in such settings.

While the part of the study that is the focus of the discussion in the book concerned dentists, the logic of it would seem applicable as well to other expert providers, including – I have to assume – lawyers (like me). Indeed, a study published by the Corporate Executive Board just a few weeks ago about what corporations spend on law firms found “that while many general counsel assume that more work or longevity with a single law firm will earn them a discount, the opposite is true… as time passes they often pay higher rates to the law firms they work with most or have used the longest.”

From the perspective of the patient or client, these two studies confirm the wisdom of seeking second opinions in long-term relationships with expert service providers.  And providers – at least, those seeking to act in good faith (and in accordance with their professional standards) – should more closely scrutinize recommendations made to long-time clients/patients.

Other people’s conflict of interest standards

The Wall Street Journal yesterday reported on the results of a fascinating experiment in which different groups of professionals were asked to assess the necessity of conflict of interest standards of conduct both for other professions and their own.  As recounted in the piece:

Doctors participating in [the study] tended to think those strictures sounded pretty reasonable [when applied to financial planners]. However, when “financial planners” was replaced by “doctors,” and “investment companies” by “pharmaceutical companies,” the doctors started to raise objections — that the supposed conflicts were hypothetical, for example, and that no one’s views about which drugs to prescribe could ever be swayed by a coffee mug. And investment managers surveyed by the researchers reacted similarly: The rules for doctors sounded fine to them, but the ones for investment professionals seemed petty and unnecessary.

This is an example of “motivational bias,” which is one of the cornerstones of behavioral ethics. As related to compliance and ethics programs, in addition to bearing on the issue of who should create ethical standards, it is also relevant to the enforcement of such standards.