Edited by Jeff Kaplan
Conflict of Interest Blog
A wonderful bit of dialogue sometimes attributed to George Bernard Shaw and sometimes to Winston Churchill goes as follows:
Shaw: Madam, would you sleep with me for a million pounds?
Actress: My goodness! Well, I’d certainly think about it.
Shaw: Would you sleep with me for a pound?
Actress: Certainly not! What kind of woman do you think I am?!
Shaw: Madam, we’ve already established that. Now we are haggling about the price.
I thought of this when reading about a study published today in JAMA Internal Medicine: Pharmaceutical Industry–Sponsored Meals and Physician Prescribing Patterns for Medicare Beneficiaries, by Colette DeJong and others (sent to me by friend of the blog Scott Killingsworth). As noted in the introduction, by way of background: “Physician-industry relationships—including sponsored meals and promotional speaking fees—are at the center of an international debate, intensified by recent transparency efforts in the United States and the European Union. In the United States, in the last 5 months of 2013, 4.3 million industry payments totaling $3.4 billion were made to more than 470,000 physicians and 1000 teaching hospitals. Although some argue that industry-sponsored meals and payments facilitate the discussion of novel treatments, others have raised concerns about their potential to influence prescribing behavior” (citations omitted).
The study was based on “Cross-sectional analysis of industry payment data from the federal Open Payments Program for August 1 through December 31, 2013, and prescribing data for individual physicians from Medicare Part D, for all of 2013” and the results were stunning: “As compared with the receipt of no industry-sponsored meals, we found that receipt of a single industry-sponsored meal, with a mean value of less than $20, was associated with prescription of the promoted brand-name drug at significantly higher rates to Medicare beneficiaries.”
Of course, this study has powerful ramifications for the life sciences industry. But, the implications presumably are relevant to any conflict of interest regime (as Scott noted in his email to me).
As well, these results are further proof of the oft-cited (at least in this blog) behavioral ethics learning that should be part of C&E messaging generally: we are not as ethical as we think. Indeed, there may be no better proof of it than this study.
Ethical Systems has posted videos of the sessions at the recent conference on “Ethics by design.”
Here’s the link to the panel with Donald Langevoort of Georgetown’s law school, Carsten Tams of Bertelsmann, Serina Vash of NYU’s program on Corporate Compliance and Enforcement and me.
And here’s a link to the conference web site, from which you can get to videos of the other sessions.
Last week, the Ethical Systems initiative, together with the Behavioral Science Policy Association, held a fascinating and well-received conference at NYU’s Stern School of Business on “Ethics By Design,” some of which was devoted to a favorite topic of this blog – “behavioral ethics and compliance.” While still in its infancy, this field increasingly has the potential (in my view) to contribute significantly to society’s effort to reduce wrongdoing by business organizations, and indeed ultimately to change how we think of good citizen companies. The Ethical Systems conference provides a fitting occasion to look at this area from a big-picture perspective and to consider its ecosystem.
Behavioral E&C rests on three pillars: recognizing and articulating E&C needs (e.g., protecting whistleblowers); conducting scientific research that can help companies meet those needs, even if the purpose of the research is not E&C-focused; and using field-based knowledge to apply the scientific knowledge in an effective manner, i.e., to make it as relevant as possible to E&C. The first of these is to a large extent the realm of the government, at least the “C” part of E&C; the second is clearly the domain of scholars; and the third is the territory of E&C practitioners and also scholars (e.g., “Behavioral Ethics, Behavioral Compliance,” by Professor Donald C. Langevoort of the Georgetown University Law Center).
To take one example, the government has for a long time indicated that not only those engaged in active wrongdoing but also supervisors who culpably failed to prevent or detect such wrongdoing should be disciplined. This need was first articulated in 1991, when the Federal Sentencing Guidelines for Organizations went into effect, and has been repeated as recently as April in a pilot policy of the Justice Department’s Fraud Section, which stated that E&C programs should include: “Appropriate discipline of employees, including those identified by the corporation as responsible for the misconduct, and a system that provides for the possibility of disciplining others with oversight of the responsible individuals, and considers how compensation is affected by both disciplinary infractions and failure to supervise adequately…” (emphasis added). The work of behavioral ethics scholars helps companies address this need through the notion of “motivated blindness.” As described by Max Bazerman and Ann Tenbrunsel in a piece from the Harvard Business Review Blog Network: “mounting research shows that we often fail to notice others’ unethical behavior if it’s in our interest not to notice. This failure of oversight — called ‘motivated blindness’ — is unconscious and common.” Note that what behavioral ethics research is doing here is not so much telling companies the specifics of how to meet the need in question (indeed, the authors’ focus was not on disciplinary policies at all) but, rather, making a more general point which helps show that however this is done will require strong medicine.
The role of E&C practitioners in this example is to identify what that medicine should be, as described in this earlier post: “To meet this important expectation, companies may wish to take the following measures: build the notion of supervisory accountability into their policies – e.g., in the managers’ duties section of a code of conduct; speak forcefully to the issue in C&E training and other communications for managers; train investigators on the notion of managerial accountability and address it in the forms they use so that they are required to determine in all inquiries if a manager’s being asleep at the switch led to the violation in question; publicize (in an appropriate way) that managers have in fact been disciplined for supervisory lapses; and have auditors take these requirements into account in their audits of investigative and disciplinary records.” Of course, companies could take these steps without the contribution of behavioral ethics, particularly as some recommended measures are obvious. But many do not because, in my view, they do not believe that they need “strong medicine.” Behavioral ethics can help E&C officers make the sale to skeptical managers.
In a somewhat different type of example, behavioral ethics researchers not only identify the need for strong medicine generally but also help companies identify the specific type of medicine needed to address the government’s expectation in question. For instance, a series of experiments (briefly described in this post) helped show that having employees sign an ethics-related certification just before (rather than after) engaging in risk-related activity can help reduce the likelihood of wrongdoing, a phenomenon sometimes called “just-in-time compliance.” But even in these sorts of examples there is a role for E&C professionals in finding opportunities to extend the core behavioral insight to other settings. For example, as described in the above post: “Opportunities for new or enhanced just-in-time communications exist for many C&E areas including (but definitely not limited to): anti-corruption – before interactions with government officials and third-party intermediaries; competition law – before meetings with competitors (e.g., at trade association events); insider trading/Reg FD – during key transactions, before preparing earnings reports; protection of confidential information – when receiving such information from third parties pursuant to an NDA; conflicts of interest – around procurement decisions; accuracy of sales/marketing – in connection with developing advertising, making pitches; and employment law – while conducting performance reviews.”
Both the “strong medicine” and the “specific medicine” types of behavioral ethics knowledge are important to E&C programs. Note that there are many more examples of the first type than the second. Hopefully, through the work of Ethical Systems collaborators and others, that imbalance will begin to be addressed.
A final point about the “three pillars”: in the above examples, the identification and articulation of need comes from the government, and the researchers and E&C practitioners are essentially in a responsive posture vis-à-vis such needs. But part of the promise of behavioral E&C is that the “flow” could begin to run the other way too. For instance, viewing E&C as not just about catching criminals but encouraging pro-social behavior – an important behavioral ethics idea (see sources here) – could, in my opinion, significantly improve the field. I wouldn’t expect this to happen in the short term, but as the three pillars begin to work more closely together in support of their many current common goals it certainly seems possible in the medium run. This, as much as the cases of strong and specific medicine, may be the best chance for behavioral E&C to change the world.
In his 2008 book Experiments in Ethics, Kwame Anthony Appiah made a strong and important case that behavioral science ideas and information should be used to address ethical challenges. But for me the most compelling ethics-related experiment of modern times comes from the realm of political – rather than behavioral – science: the experiment that began in 1991 with the advent of the Federal Sentencing Guidelines for Organizations and which continues to this day.
Although we have become accustomed to living in an “Age of Compliance,” the Guidelines were initially considered “developmental,” as the then-Chair of the Sentencing Commission put it. The notion of government providing businesses with incentives for C&E programs and direction on how to make such programs effective was largely new and untested at the time. Of interesting historical note to behavioral ethics aficionados: before the Sentencing Commission chose its current C&E-program-based approach to preventing corporate crime, it considered applying an “Optimal Penalties” strategy. The Commission’s ultimate rejection of that approach – which was premised on a hyper-rational (“Chicago School”) view of how business crime occurs – in favor of one that promotes strong C&E programs can be seen as an early (albeit presumably intuitive) official endorsement of the behavioral-science-based view of human nature.
A quarter of a century later, it is fair to ask: has the Guidelines experiment been a success?
It would be hard to prove or disprove success using traditional tools of measurement, since the Guidelines are, of course, a policy interacting with a wide range of real-world factors in an uncontrolled way, not a true self-contained experiment. But if the results were not positive to a significant degree then it is hard to imagine that other governmental bodies – in the U.S. and increasingly around the world – would have followed suit to the significant degree that they have. While “success breeds imitation” is not an iron-clad rule, it is a pretty good description of what happens much of the time including, I think, in this instance.
Another way to think about success here is to imagine a “counterfactual” world where C&E wasn’t as important as it has become under the Guidelines approach. Would we be better off with little or no sexual harassment training or protection of whistleblowers in corporations? Would we want to work for or do business with a company that made little or no effort to prevent its employees and agents from engaging in corruption, bid rigging or fraud? Indeed, one doesn’t have to strain one’s imagination to picture these counterfactual possibilities: they are the way things used to be before the Guidelines, at least in many companies.
Looking forward, while a compliance-based strategy for business crime prevention no longer faces a serious threat from the Optimal Penalties view of the world, one does hear occasional critiques of the C&E approach from a behavioral science perspective (which is somewhat ironic, given the above-described history). The argument goes that C&E programs – by treating employees with suspicion, and thereby making employees resentful – can actually spawn wrongdoing.
As described in an earlier post, this does not ring true to me, at least not insofar as it concerns serious offenses. Although there is no question that some companies engage in overkill with aspects of their C&E programs, employees should not (and I think do not) feel resentful that their employers try to help keep them safe from the risk of being sent to prison and having their careers destroyed. And even if there is some resentment, that is presumably a small price to pay for preventing serious harm to company, employees and others.
Finally, I am very aware that my musings are themselves not scientific, and I hope that over the next 25 years scholars and practitioners will find ways of assessing the efficacy of the many different strategies and tools used in C&E programs. There is lots of room for improvement in this area – and for experimentation. At least to me, that’s much of what makes the field exciting to be part of.
But as to the basic notion of C&E itself – I think that’s here to stay, not so much as a matter of proof but of logic. On this point I give the last word to Joe Murphy – the visionary lawyer who (together with Jay Sigler of Rutgers) first wrote about what was ultimately to become the Guidelines approach: “For those who ask ‘does compliance work,’ my response is to ask them, ‘does management work?’ One question makes as much sense as the other. C&E is a management commitment to do the right thing and management steps to make that happen. If you do not use management steps to do something in an organization, how on earth do you do so?”
How can a board of directors (or board committee) effectively oversee a C&E program without crossing the line into program management?
In my latest column in Compliance & Ethics Professional (see page 2 of PDF) I suggest six C&E areas on which boards should focus – to meet their fiduciary duties and leverage their power to promote program efficacy.
I hope you find it useful.
While in the more than four years of its existence the COI Blog has been devoted primarily to examining conflicts of interest, it has also run more than fifty posts on what behavioral ethics might mean for corporate compliance and ethics programs. Below is an updated version of a topical index to these latter posts. Note that a) to keep this list to a reasonable length I’ve put each post under only one topic, but many in fact relate to multiple topics (particularly the risk assessment ones); and b) there is some overlap among the articles. Also, on June 3 I’ll be speaking at a conference on behavioral ethics at NYU’s business school (see program agenda here) and will do a post summarizing compliance-related aspects of the program shortly thereafter. Finally, in 4Q 2016 I hope to flesh some of these ideas out into a Behavioral Ethics & Compliance Handbook.
– Business ethics research for your whole company (with Jon Haidt)
– Overview of the need for behavioral ethics and compliance
– Behavioral C&E and its limits
– Behavioral compliance: the will and the way
BEHAVIORAL ETHICS AND COMPLIANCE PROGRAM COMPONENTS
– Too big for ethical failure?
– “Inner controls”
– Is the Road to Risk Paved with Good Intentions?
– Slippery slopes
– Senior managers
– Long-term relationships
– How does your compliance and ethics program deal with “conformity bias”?
– Money and morals: Can behavioral ethics help “Mister Green” behave himself?
– Risk assessment and “morality science”
– Advanced tone at the top
Communications and training
– “Point of risk” compliance
– Publishing annual C&E reports
– Behavioral ethics and just-in-time communications
– Values, culture and effective compliance communications
– Behavioral ethics teaching and training
– Moral intuitionism and ethics training
Positioning the C&E office
– What can be done about “framing” risks
– Behavioral Ethics and Management Accountability for Compliance and Ethics Failures
– Redrawing corporate fault lines using behavioral ethics
– The “inner voice” telling us that someone may be watching
– Include me out: whistle-blowing and a “larger loyalty”
– Hiring, promotions and other personnel measures for ethical organizations
Board oversight of compliance
– Behavioral ethics and C-Suite behavior
– Behavioral ethics and compliance: what the board of directors should ask
– Is Wall Street a bad ethical neighborhood?
– Too close to the line: a convergence of culture, law and behavioral ethics
Values-based approach to C&E
– Values, structural compliance, behavioral ethics …and Dilbert
Appropriate responses to violations
– Exemplary ethical recoveries
BEHAVIORAL ETHICS AND SUBSTANTIVE AREAS OF COMPLIANCE RISK
Conflicts of interest/corruption
– Does disclosure really mitigate conflicts of interest?
– Disclosure and COIs (Part Two)
– Other people’s COI standards
– Gifts, entertainment and “soft-core” corruption
– The science of disclosure gets more interesting – and useful for C&E programs
– Gamblers, strippers, loss aversion and conflicts of interest
– COIs and “magical thinking”
– Inherent conflicts of interest
– Insider trading, behavioral ethics and effective “inner controls”
– Insider trading, private corruption and behavioral ethics
– Using behavioral ethics to reduce legal ethics risks
OTHER POSTS ABOUT BEHAVIORAL ETHICS AND COMPLIANCE
– New proof that good ethics is good business
– How ethically confident should we be?
– An ethical duty of open-mindedness?
– How many ways can behavioral ethics improve compliance?
– Meet “Homo Duplex” – a new ethics super-hero?
– Behavioral ethics and reality-based law
A long time ago, I learned of a company at which – I was told – an individual had been hired into a compliance role precisely because he would be unlikely to notice (and thus stop) the crimes in which the company was engaged. I could not then tell if this explanation – which seemed to be based more on informed speculation than hard fact – was true. Still, years later the company and its executives were prosecuted but the compliance person was not – perhaps because the government concluded that he had in fact been in the dark. (Presumably this wouldn’t be featured on a resume, but it beats going to prison.)
C&E programs are not machines that run by themselves. It takes the involvement of many – and, on some level, all – employees to make a program truly effective. But in any company the quality of the C&E officer is central to the effort. Oddly, however, this aspect of efficacy has historically not been part of the principal official definitions of an effective C&E program.
Last week, the Department of Justice issued a new pilot policy to encourage self-reporting of FCPA violations. Here is a link to the announcement. Much will doubtless be written about the self-reporting aspects of the policy but for me of greatest interest is the definition of an effective C&E program.
While it contains various items that are typical of lists of this sort (such as compliance culture, risk assessment, auditing and discipline for violations), it also includes the following two elements that are now to be considered by the government in assessing C&E programs:
– The quality and experience of the compliance personnel such that they can understand and identify the transactions identified as posing a potential risk.
– How a company’s compliance personnel are compensated and promoted compared to other employees.
The inclusion of these items on this list is – to my mind – a big step forward for the C&E profession.
Of course, the ways in which they are assessed by the government in investigations remain to be seen – and note that the one about promotions will be hard to apply to small organizations. But, as a general matter, placing these items on the assessment agenda should lead to companies having more top-notch programs by recruiting and retaining top-notch people.
I’m very happy to announce that Ethical Systems has chosen me as their featured collaborator for April. Here is an interview in which, among other things, I discuss with “Eth Sys” various research projects I’ve worked on (or am planning) and briefly look back at aspects of the C&E field’s early history.
I hope you enjoy it.
On April 12 at noon ECI will present a web cast in which Steve Priest and I answer questions – on ethical culture, effective programs and other topics – submitted in advance by attendees. We will follow the free-flowing (and at times edgy) approach taken in our recently issued e-book. And the web cast will use an audience response voting mechanism, which we’ll deploy for benchmarking some of the questions among attendees.
The web cast is free to all.
More information about it (including a link for registering) and the e-book (which is also free) can be found here.
We hope you can attend.
On April 6 Rebecca Walker and I will be presenting a webinar for PLI on assessing C&E programs.
More information is available here.
We hope you can attend.