Edited by Jeff Kaplan
|
Conflict of Interest Blog
|
While COI risks can exist at any level of an organization, as a general matter the risks increase (often sharply) as one proceeds up the org chart. Most obviously, this is because a) senior managers tend to have greater opportunities for COI-fraught relationships and activities than do other employees; and b) COIs at higher levels of an organization are likely to be more harmful than are lower-level conflicts. Given these and other factors, training managers on COIs can be an essential risk mitigant for many organizations.
COI training for managers can (and often should) be part of broader C&E training covering other significant areas of risk, as well as the roles of managers in the operation of the C&E program. Such training typically has two dimensions: an individual one – to help managers avoid having COIs themselves, and an organizational one – to assist managers in preventing/detecting/addressing COIs by colleagues and third parties.
More specifically, one might:

i) start the COI part of the training with an attention-getting hypothetical (or actual) case, perhaps showing how harmful even well-meant COIs can be;

ii) identify generally the types of COIs most relevant to the entity (individual COIs for all, organizational ones for some), as well as any special COI issues (such as, for certain types of entities, the need to avoid contributing to a COI by a third party);

iii) describe the legal and business imperatives for strong C&E efforts in these areas;

iv) discuss how employee perceptions of COIs by managers can undermine faith in the C&E program as a whole (an aspect of “normative compliance”);

v) review applicable company policies and procedures regarding COIs, perhaps using a hypothetical case or cases to illustrate how they should work – and what the risks of failure might be;

vi) examine particular compliance challenges for this risk area, including the tendency of individuals to rationalize conflicts-driven decision making (a facet of behavioral ethics) and the frequent difficulty of challenging individuals on matters that have a sensitive personal dimension (which COIs often do);

vii) explain what a manager’s specific role is in ensuring COI-related compliance;

viii) identify COI-related “red flags” to help managers meet those responsibilities; and

ix) connect COI issues to other risk areas of significance – such as corruption, fraud and insider trading/confidential information.
|
Many years ago, I heard federal judge J. Skelly Wright tell this wonderful story. He was presiding over a trial in Louisiana and asked one of the attorneys why the attorney referred to all the witnesses as “Colonel,” to which the attorney responded: “That don’t mean anything – it’s like when I call you ‘Your Honor.’”
There are lots of things wrong with painting with too broad a brush. This comes up in a variety of contexts. But it seems particularly problematic when the setting is compliance.
Last week, as reported in the NY Times, the “Supreme Court [gave] Goldman Sachs a Do-Over in Securities Fraud Suit. The justices said the bank may renew its arguments that its statements about honesty and integrity were too generic to support a class action for billions of dollars.” (The decision can be found here.)
I’m not sure what lessons should be drawn from this. However, some companies may be tempted to draft compliance policies and other documents that are little more than generic professions of ethicality. But while this approach might get past the Supreme Court I doubt it would do so at the Department of Justice.
|
In his 2008 book Experiments in Ethics, Kwame Anthony Appiah made a strong and important case that behavioral science ideas and information should be used to address ethical challenges. But for me the most compelling ethics-related experiment of modern times comes from the realm of political – rather than behavioral – science: the experiment that began in 1991 with the advent of the Federal Sentencing Guidelines for Organizations and which continues to this day.
Although we have become accustomed to living in an “Age of Compliance,” the Guidelines were initially considered “developmental,” as the then Chair of the Sentencing Commission put it. The notion of government providing businesses with incentives for C&E programs and direction on how to make such programs effective was largely new and untested at the time. Of interesting historical note to behavioral ethics aficionados: before the Sentencing Commission chose its current C&E-program-based approach to preventing corporate crime it considered applying an “Optimal Penalties” strategy. The Commission’s ultimate rejection of that approach – which was premised on a hyper-rational (“Chicago School”) view of how business crime occurs – in favor of one that promotes strong C&E programs can be seen as an early (albeit presumably intuitive) official endorsement of the behavioral science based view of human nature.
Thirty years later, it is fair to ask: has the Guidelines experiment been a success?
It would be hard to prove or disprove success using traditional tools of measurement, since the Guidelines are, of course, a policy interacting with a wide range of real-world factors in an uncontrolled way, not a true self-contained experiment. But if the results were not positive to a significant degree then it is hard to imagine that other governmental bodies – in the U.S. and increasingly around the world – would have followed suit to the significant degree that they have. While “success breeds imitation” is not an iron-clad rule, it is a pretty good description of what happens much of the time including, I think, in this instance.
Another way to think about success here is to imagine a “counterfactual” world where C&E wasn’t as important as it has become under the Guidelines approach. Would we be better off with little or no sexual harassment training or protection of whistleblowers in corporations? Would we want to work for or do business with a company that made little or no effort to prevent its employees and agents from engaging in corruption, bid rigging or fraud? Indeed, one doesn’t have to strain one’s imagination to picture these counterfactual possibilities: they are the way things used to be before the Guidelines, at least in many companies.
Looking forward, while a compliance-based strategy to business crime prevention no longer faces a serious threat from the Optimal Penalties view of the world, one does hear occasional critiques of the C&E approach from a behavioral science perspective (which is somewhat ironic, given the above-described history). The argument goes that C&E programs – by treating employees with suspicion, and thereby making employees resentful – can actually spawn wrongdoing.
As described in an earlier post, this does not ring true to me, at least not insofar as it concerns serious offenses. Although there is no question that some companies engage in overkill with aspects of their C&E programs, employees should not (and I think do not) feel resentful that their employers try to help keep them safe from the risk of being sent to prison and having their careers destroyed. And even if there is some resentment, that is presumably a small price to pay for preventing serious harm to company, employees and others.
Finally, I am very aware that my musings are themselves not scientific, and I hope that over the next 30 years scholars and practitioners will find ways of assessing the efficacy of the many different strategies and tools used in C&E programs. There is lots of room for improvement in this area – and for experimentation. At least to me, that’s much of what makes the field exciting to be part of.
But as to the basic notion of C&E itself – I think that’s here to stay, not so much as a matter of proof but of logic. On this point I give the last word to Joe Murphy – the visionary lawyer who (together with Jay Sigler of Rutgers) first wrote about what was ultimately to become the Guidelines approach: “For those who ask ‘does compliance work,’ my response is to ask them, ‘does management work?’ One question makes as much sense as the other. C&E is a management commitment to do the right thing and management steps to make that happen. If you do not use management steps to do something in an organization, how on earth do you do so?”
|
C&E program assessments sometimes have a general scope and sometimes are focused on a single substantive risk area – such as corruption or competition law. (Still others have elements of both approaches, i.e., general assessments and deep dives.)
For some companies it makes sense to do such a targeted/deep dive assessment for conflicts of interest. This is particularly so for those responding to a significant COI violation or “near miss,” but it is also the case where the likelihood of COI risks is heightened due to geographic, organizational or industry cultural considerations.
The scope and approach of such assessments for any given company at any given time should vary based on a variety of circumstances. However, for many companies the effort should not be time consuming or intrusive.
What does one look for in a COI program assessment? Hopefully, the following questions/comments could be helpful to some organizations seeking to determine whether/how to go down this road – and if so, how far.
– Risk Assessment. Has the company assessed COI risk? If so, has it done so in a documented way? Has it used the results of the assessment(s) in designing and implementing other aspects of the COI program? Beyond this, does the company have a good sense of its areas of jeopardy from what might be called “the risk assessment of everyday life”?
– Governance. Have the respective COI oversight roles of the board of directors and senior management been formalized? Do they receive appropriate reports of COI program activity? Are there sufficient escalation provisions regarding COIs?
– Culture. Are COI rules truly followed or are there double standards? What is the sense of “organizational justice” vis-à-vis COIs? Same question re: the “tone at the top.” Do employees – particularly senior ones – understand the harm that COIs could cause the company?
– Policies. Presumably nearly every business organization has a COI provision in its code of conduct. But there are also many that need but do not have a standalone policy as well. Is your company in this category? Also, is your COI policy well known and readily accessible? Is it reviewed periodically by the C&E officer?
– Procedures. Are disclosure and related COI procedures clear, easy to use and well known? Do those tasked with reviewing COIs have enough knowledge and independence for the job? Are the reviews sufficiently documented?
– Training/other communication. Is there enough training given relevant COI risks (which tend to be high for senior managers/board members and in certain functions, like procurement)? Is training reinforced through other communications, particularly from senior managers? Does the training/other communication use the learning from “actual cases”?
– Auditing and monitoring. Are the COI disclosure practice and other aspects of the program audited? Same question for monitoring (of conditionally approved COIs).
– Responding to allegations/requests for guidance. Do employees feel comfortable seeking guidance on possible COIs? Are investigations truly independent? Are violations of the COI policy treated with sufficient seriousness? Does the company conduct a “lessons learned” analysis of significant COI failures?
Of course, there is much more that could be included in a COI self-assessment (and I encourage you to browse the blog for ideas in this regard). But hopefully the above will be a useful foundation for starting.
|
President Biden recently received well-deserved attention for declaring the fight against foreign corruption to be a national security priority. Should conflicts of interest be viewed in the same manner?
In particular, how much does it matter that organizations, individuals and governments pay close attention to identifying and mitigating conflicts of interest? One way to answer this question is to consider – as I used to ask students in my business school ethics class to do – what the world would look like without such focus and sensitivity. Below are some of the observations that I have heard from them over the years.
In “Conflict of Interest World,”
– Individuals might be reluctant to take the medicines that their doctors recommend for fear that those recommendations are motivated more by the doctors’ financial relationships with pharma companies than by the patients’ well-being.
– Individuals and organizations might not use financial advisors for fear that the advice they receive is driven by hidden, adverse interests – and would instead devote otherwise productive time to trying to become their own financial experts, resulting in a significant misallocation of capital as well as time.
– Organizations could hesitate to take a wide range of everyday actions for which they need to trust their employees and agents to do what’s right by the organizations – or would proceed only with highly intrusive and costly surveillance-like measures in place.
In short, Conflict of Interest World is a place of needlessly diminished lives, resources and opportunities.
Bottom line: a short visit to this unhappy imaginary world – a place of “all against all” – is a reminder of the vital role that sufficient attention to COIs plays in our very real world. To my mind, this well deserves to be seen as a national priority.
|
Does offering financial incentives for reporting misconduct to the government in fact work? Apparently, there is not much data on this point, as noted in a post on the Harvard corporate governance blog by Aiyesha Dey, Jonas Heese, and Gerardo Pérez Cavazos, all of the Harvard Business School. However, they set out to fill this gap.
“To examine the effects of financial incentives on whistleblowing, we exploit staggered decisions taken by U.S. Courts of Appeals that increase the financial incentives for whistleblowing under the [False Claims Act] in specific judicial districts. We find the following three effects. First, we find that whistleblowers file a greater number of lawsuits in district courts following decisions that increase financial incentives for whistleblowing. However, we do not find a reduction in the fraction of allegations reported internally before the filing of a lawsuit. Second, we find that the DOJ increases the investigation length for allegations filed in treated courts. Third, we find an increase in the percentage of DOJ-intervened lawsuits and the percentage of settled lawsuits. In sum, these findings support the view that cash-for-information programs help to expose misconduct. Our findings show that whistleblowers respond to financial incentives by filing additional lawsuits, which the DOJ investigates for a longer period and that are more likely to result in a settlement. These findings are inconsistent with the critics’ view that greater financial incentives for whistleblowers primarily trigger meritless lawsuits.” (Emphasis added.) (Note: they also report on some interesting data on the financial impact of whistleblowing on the whistleblowers themselves.)
So, good news for supporters of incentives for whistleblowing.
But if offering financial incentives works for the government shouldn’t companies implement reward programs for internal reporting by employees?
To my knowledge the first business to try this was Bear Stearns, many years ago. The firm has long since gone out of business and I don’t recall whether this practice was seen as successful in its time. In any event, not many have followed since.
I have indeed cautioned clients against going this route. My primary concern is that paying for information can be seen as inconsistent with the important – indeed solemn – obligation that employees already have to protect their company. This obligation goes well beyond reporting suspected wrongdoing, touching the many ways employees can support and promote the efficacy of a C&E program.
|
Samuel Johnson once said: “It is more from carelessness about truth than from intentionally lying that there is so much falsehood in the world.” And carelessness is obviously at the root of many other types of wrongdoing too. But what does this have to do with the liability of boards of directors in connection with ESG failures?
In a recent posting in the Harvard Law School Forum on Corporate Governance the Wachtell Lipton law firm argues: “the Caremark doctrine—which requires directors to monitor enterprise-level risk and is newly invigorated by recent Delaware court rulings—is the likely tool of choice for plaintiffs complaining about board inaction in the face of climate-related exposure.”
How should companies and boards mitigate the risk of Caremark liability? Per the Wachtell memo: “Firms throughout the economy—anyone who manufactures, sells, or finances products that are implicated in environmental harm—should be preparing today for governance, regulatory, and litigation challenges. Thus, among other steps: Companies should focus on robust disclosure of climate-related economic and business risks. Management and boards should consider new playbooks and strategies for engaging with institutional shareholders, asset owners, and even activist investors focused on climate and other ESG-related issues. Boards should ensure regular consideration of climate-related risk, oversight structures, and robust documentation of risk-management and monitoring efforts. Companies that take these steps, and then tailor bespoke responses to any remaining climate-related risks, will earn goodwill with regulators and investors and be better prepared to weather the climate-litigation and climate-activism storm.”
This is sound advice from the perspective of companies, officers and directors. But is the underlying legal regime – particularly the Caremark doctrine – up to the gravely important task of protecting society as a whole from the ravages of climate change?
In A Simple Model of Corporate Fiduciary Duties: With an Application to Corporate Compliance, WC Bunting of Temple’s business school notes that compliance failures by boards are currently cognizable under Delaware law based on the duty of loyalty, not the duty of care – even though the latter would make more sense. He writes: “the optimal judicial approach would define the duty to monitor as a subset of due care – and not loyalty.”
The reason he suggests this is that “compliance is fundamentally about inducing effort” by managers more than it is about honesty. Lack of effort seems more a matter of failure of care than it is failure to be loyal.
Does this matter? To draw again from the Samuel Johnson treasure trove of memorable sayings, the change of law proposed by Bunting could help focus the minds of directors on a subject that is truly life or death.
|
We are entering an era of unprecedented Environmental, Social and Governance (“ESG”) imperatives. This could be hugely beneficial to millions of people in many ways. But those involved in ESG efforts (and others) need to be aware of the dangers of “moral licensing.”
As described in Rational Wiki: “Moral licensing or self licensing is a cognitive bias that occurs when a person uses their prior ‘good’ behavior to justify later bad behavior, often without explicitly using that logic. The effect has been demonstrated in numerous psychological studies.”
For instance, as noted in an article in the Irish Times: “One experimental study found people ‘are more likely to cheat and steal’ after purchasing green products than after purchasing conventional products.”
I should stress that this is not a reason to do less when it comes to ESG efforts. But it is worth knowing about from a risk assessment perspective and may be worth mentioning in training.
Also, moral licensing is relevant not only to ESG-related work but also to the work of governmental bodies, charities and other non-profits.
Finally, here is an earlier post on this issue.
|
Learned Hand – considered by many to be the greatest of all US judges – once famously said: “The spirit of liberty is the spirit which is not too sure that it is right.” This is a spirit which sadly seems as distant from us today as it ever has been before.
Of course, Hand’s primary concern was the realm of politics/governance, not business ethics. But, as discussed in prior posts, the various spheres in which ethics operates – not just political and business, but also personal – can overlap with and support each other, at least to some degree. They can also undercut each other, when not done right.
I believe that – at least for some companies – humility should be a core value. (I do see it at some companies, but not many.) As noted in an earlier post:
First, humility is a logical and arguably inevitable response to the vast body of behavioral ethics research showing “we are not as ethical as we think.” Thinking and acting with humility is indeed a way of operationalizing behavioral ethics. (For a list of behavioral ethics and compliance posts click here.)
Second, humility is well suited for addressing ethical challenges that are based not on the purposeful failure to be honest but on the less well-appreciated dangers of being careless. Recognizing the limits of one’s abilities – which is part of being humble – should help underscore the need for carefulness.
Finally, humility has the potential to resonate deeply in our political, as well as business, culture. By this I mean humility can help form part of a broader mutually supporting relationship between business ethics and ethics in other realms.
|