Conflict of Interest Blog

A culture of care

Samuel Johnson once said: “It is more from carelessness about truth than from intentionally lying that there is so much falsehood in the world.” And carelessness is obviously at the root of many other types of wrongdoing too.

In a keynote speech at the just-concluded SCCE 10th Annual Compliance and Ethics Institute, FBI Director James Comey spoke of the need for companies to have a “culture of care” when it comes to cyber security.  (Unfortunately, the speech is not yet published on the FBI web site, so I can’t link to the text.)  While focusing on cyber security, Comey did indicate that the concept of a culture of care might have broader application to the world of compliance and ethics.

I think the concept is indeed potentially quite useful for C&E professionals.  But what might be included in such a culture?

One example is suggested by a presentation – Beyond Agency Theory: The Hidden and Heretofore Inaccessible Power of Integrity, by Michael Jensen and Werner Erhard – discussed in this earlier post. The authors argue that honesty requires more than sincerity: “When giving their word, most people do not consider fully what it will take to keep that word.  That is, people do not do a cost/benefit analysis on giving their word.  In effect, when giving their word, most people are merely sincere (well-meaning) or placating someone, and don’t even think about what it will take to keep their word. This failure to do a cost/benefit analysis on giving one’s word is irresponsible.”  This argument makes sense to me – and I think it would to Samuel Johnson and James Comey as well.

And, as noted above, the need for carefulness goes beyond being honest.  More broadly, a culture of care would help shape an organization’s values, policies, procedures, risk assessment, approach to incentives and C&E training and communications.  As well, in such a culture carelessness would be sufficiently addressed through the investigations and disciplinary policy/process – something that too few companies do, as discussed here.

Finally, I asked Steve Priest, a true master at diagnosing and shaping corporate cultures, what he thinks about the “culture of care” concept.  He said: “Emphasizing a ‘culture of care’ makes great sense. However, for many who do not understand the full sense in which James Comey used the phrase, it will seem soft. It isn’t soft, but to balance it I encourage organizations to aim for these three in your culture: care, competence and courage. Organizations and leaders that demonstrate care, competence and courage may not win every sprint, but they will win most marathons.”

I agree with Steve that care alone cannot a culture make.  And, as with virtually any part of a C&E program, one has to guard against overdoing it.  In this connection, nearly 20 years ago, I was concerned that my then-eight-year-old daughter was running out into the street without checking for traffic – and so, to help make her more careful, I tried to get her to keep a “safety journal.”  I’m proud (in retrospect) to say that she refused, and this story from Kaplan family compliance history helps remind me that over-cautiousness has its own downsides.


Conflicts of interest, compliance programs and “magical thinking”

An article earlier this week in the New York Times takes on the issue of “Doctors’ Magical Thinking about Conflicts of Interest.”  The piece was prompted by a just-published study which examined “the voting behavior and financial interests of almost 1,400 F.D.A. advisory committee members who took part in decisions for the Center for Drug Evaluation and Research from 1997 to 2011” and found a powerful correlation between a committee member’s having a financial interest (e.g., a consulting relationship or ownership interest) in a drug company whose product was up for review and the member’s voting in favor of the company – at least in circumstances where the member did not also have interests in the company’s competitors.

Of course, this is hardly a surprise, and the Times piece also recounts the findings of earlier studies showing strong correlations between financial connections (e.g., receiving gifts, entertainment or travel from a pharma company) and professional decision making (e.g., prescribing that company’s drug). Nonetheless, some physicians “believe that they should be responsible for regulating themselves.”

However, such self-regulation can’t work, the article notes, because “our thinking about conflicts of interest isn’t always rational. A study of radiation oncologists found that only 5 percent thought that they might be affected by gifts. But a third of them thought that other radiation oncologists would be affected.  Another study asked medical residents similar questions. More than 60 percent of them said that gifts could not influence their behavior; only 16 percent believed that other residents could remain uninfluenced. This ‘magical thinking’ that somehow we, ourselves, are immune to what we are sure will influence others is why conflict of interest regulations exist in the first place. We simply cannot be accurate judges of what’s affecting us.”

While the findings of these and similar studies are, of course, most relevant to conflicts involving doctors and life science companies, there is a broader learning here which, I think, is vitally important to C&E programs generally.  That is, they help to show that “we are not as ethical as we think” – a condition hardly limited to the field of medicine or to conflicts of interest, as has been discussed in various prior postings on this blog.

One of the overarching implications of this body of knowledge is that we humans need structures – for business organizations this means C&E programs, but more broadly these have been called “ethical systems” – to help save us from falling victim to our seemingly innate sense of ethical over-confidence.  So, to make that case, C&E professionals should – in training or otherwise communicating with employees (particularly managers) and directors – address the issue of “magical thinking” head-on.

Moreover, using the example of COIs to prove the larger point here may be an effective strategy, because employees are more likely to have experience with ethical challenges in this area than with other major risks, such as corruption, competition law or fraud – which indeed may be so scary as to be largely unimaginable to many employees.  That is, these and other “hard-core” C&E risk areas might be subject to even more magical thinking than COIs are.  So, at least in some companies, discussing COIs might offer the most accessible “gateway” to addressing the larger topic of ethical over-confidence.

The conflict of interest case of the year

With less than four months to go, the corruption case against the governor of Virginia and his wife seems destined for 2014 COI case of the year honors.  But while much of the press coverage revolved around the Governor’s unsavory – and unsuccessful – trial strategy of throwing his wife/co-defendant “under the bus,” for COI aficionados what is noteworthy about the prosecution lies elsewhere.

First, on the public policy level, it highlights – as much as any case has in recent memory – the need for strong government ethics laws at the state level.  Perhaps states like Virginia (and NJ, where I live, which is infamous for its culture of corruption) will now look to those states that have been successful on this front – such as ethics front-runner Oregon – for guidance.

Second, on a law enforcement level, the case is precedent-setting.  As described in this Washington Post article: “[L]egal experts say the case — especially if it survives an appeal — could encourage prosecutors to pursue similar charges against officials who take not-so-obviously significant actions on behalf of their alleged bribers and make it easier for them to win convictions. ‘I think the case clearly pushes the boundary of “official act” out a bit farther, and I think that’s quite potentially important,’ said Patrick O’Donnell, a white-collar criminal defense lawyer at Harris, Wiltshire & Grannis. ‘It’s striking that here, McDonnell was not convicted on any traditional exercise of gubernatorial power. It wasn’t about a budget or a bill or a veto or appointment or a regulation.’ [Rather,] ‘[t]he McDonnells stand convicted of conspiring to lend the prestige of the governor’s office to Richmond businessman Jonnie R. Williams … by arranging meetings for him with state officials, allowing him to throw an event at the Virginia governor’s mansion and gently advocating for state studies of a product that Williams’s company sold.’”

Third, and most relevant to C&E professionals, the case appears to be a striking example of the behaviorist learning that “we are not as ethical as we think” – a principle that helps underscore the need for strong C&E programs in organizations of all kinds.  That is, based on McDonnell’s testimony, there seemed to me to be a real possibility that he genuinely believed that he was not corrupted by the gifts and loans from Williams, and there is indeed some indication that the jurors found him sincere, at least generally.  But believing yourself to be unaffected by a conflict of interest doesn’t make it true – given the results of various behavioral ethics studies showing that COIs impact us considerably more than we appreciate.  (Posts relating to some of these studies are collected here.)  Perhaps this makes the McDonnell case – although more about conflicts in government than in business – a teachable moment for C&E practitioners in all settings.

Prosecutors, massive fines and moral hazard

Many years ago, I lived next door to a young police officer and his family who, while presumably paid a modest salary, drove a pretty expensive car.   He was able to do this, I learned, because his department seized autos (and other property) of various suspected offenders and then let its officers drive the vehicles for their personal use.  Although he seemed in every respect like an honorable young man, the impact that this practice could have – and also appear to have – on law enforcement decisions left me feeling uneasy.

The latest issue of The Economist has a sweeping indictment of the US system of business law enforcement.  There are many components to this assault, including that: large fines are, in effect, extorted from companies, but the guilty individuals often go free (which, in my view, is quite true); settlements of these cases often obscure facts that should be made known to the public (with which I also agree); US laws are so numerous and complicated that companies face a grave risk of prosecution for conduct that they never could have suspected was wrongful (with which I agree only slightly); and part of the cost of this system is that “[e]normous amounts of time and money are now being put into compliance programmes that may placate judges, prosecutors, regulators and monitors but undermine innovation and customer services” (which I also think is an overstatement, but one with enough truth in it that companies should be careful not to go overboard in their compliance programs).  But the critique that interested me the most concerned the view that the prospect of recovering large fines influences law enforcement decisions, i.e., a corporate variation on the story in the first paragraph of this post.

This part of The Economist article relied in part on a paper in the January 2014 Harvard Law Review – “For-Profit Public Enforcement,” by Margaret H. Lemos (Professor, Duke University School of Law) and Max Minzner (Professor, University of New Mexico School of Law), in which the authors seek to show “that public enforcers often seek large monetary awards for self-interested reasons divorced from the public interest in deterrence. The incentives are strongest when enforcement agencies are permitted to retain all or some of the proceeds of enforcement – an institutional arrangement that is common at the state level and beginning to crop up in federal law. Yet even when public enforcers must turn over their winnings to the general treasury, they may have reputational incentives to focus their efforts on measurable units like dollars earned. Financially motivated public enforcers are likely to … undertake more enforcement actions [and] focus on maximizing financial recoveries rather than securing injunctive relief. … Those effects will often be undesirable, particularly in circumstances where the risk of over-enforcement is high.”

I don’t know if it is quite right to call this a conflict of interest, but it does seem close to a moral hazard, in that those with power to reduce risks (prosecutors) may have interests that are not well aligned with those who bear the consequences of their actions (the public).  Moreover, and independent of this concern, prosecutors’ sacrificing tomorrow’s interests (as the benefits of deterrence take place entirely in the future) for a quick buck today – the very trade-off for which guilty companies are often castigated – can itself be harmful because, as Justice Brandeis famously said: “Our government is the potent, the omnipresent teacher. For good or for ill, it teaches the whole people by its example.”

(For more on moral hazard see the posts collected here. And here is a post on implications for risk assessment of the government’s seeking large financial recoveries from corporate defendants.)

“The inner voice that warns us somebody may be looking”

Within the treasure trove of H.L. Mencken’s sayings, this definition of “conscience” may be my favorite.  And, various studies have indeed shown that the sense that somebody may be watching can help promote ethical behavior.  Among these are experiments exposing individuals to “eyespots” – drawings that create a vague sense of being watched, even among those who know as a factual matter that they aren’t being seen. (See, e.g., this study, showing that exposure to eyespots can promote generosity.)

While actually deploying eyespots around the workplace is hardly a viable option for most companies, various technological advances offer not only the appearance of being watched but the actuality of it.  Such monitoring technologies can be particularly promising for promoting compliance by parts of a workforce for whom supervision is relatively remote – which is often the case for sales people.

For two other risk-related reasons, sales people can be a logical choice for C&E monitoring:

- Their incentives may not align well with those of their respective companies – a “moral hazard” condition.  (Indeed, in a risk assessment interview I conducted last week, the interviewee responded to a question about conflicts of interest by saying – only somewhat in jest – that the company’s whole sales force had such conflicts.)

- Sales people tend to be in a position to cause legal/ethical violations – e.g., corruption, collusion and fraud – much more often than the average employee at a company.

But, while the case for monitoring sales people is strong as a general matter, obviously not all monitoring strategies are equally effective.  According to a paper published in the September 2014 issue of the Journal of Business Research – “Does transparency influence the ethical behavior of salespeople?” by John E. Cicala, Alan J. Bush, Daniel L. Sherrell and George D. Deitz (rentable on DeepDyve) – “it is not the perception of visibility that drives sales persons’ behavior, but rather the perception of the likelihood of negative consequences resulting from management use of knowledge and information gained from technologically increased visibility.”

Of course, these results – based on an on-line survey which is described in the paper – presumably won’t surprise any C&E professionals. (Nor, likely, would they have impressed Mencken, who also said: “A professor must have a theory as a dog must have fleas” – although I should add that that’s just another chance to quote the great man – not a reflection of my view of this paper.) But, as with much of the social science research discussed in this blog, having data to back up what is intuitively known may be useful, particularly when seeking to make C&E reforms in a company that are being resisted.

Most relevant here is the often-contentious issue of how open a company is about its discipline for violations (meaning violations not just by sales people but by any employee).  While C&E professionals typically understand that true “public hangings” – i.e., full identification of individual transgressions and transgressors – can be undesirable for all sorts of reasons, there is still a lot that their respective companies can do in a general way to show that negative consequences do exist for breaches of C&E standards. Hopefully, this new research can help C&E professionals make such a case.

Liability for faking compliance – a new-fashioned type of deterrence?

I have long felt that C&E programs should do more to appeal to the better angels of our nature. (For more information on how “pro-social” qualities can be built on to promote more ethical workplaces, see this research page from the Ethical Systems web site.) But at the end of the day there will always be a place for good old-fashioned deterrence.

Deterrence, in the business realm, traditionally operates by punishing those who engage in conduct that harms others (e.g., corruption, collusion, pollution). But as C&E program expectations themselves become more central to promoting responsible behavior by companies, it is inevitable that a more “upstream” form of deterrence should emerge – one in which faking compliance is itself the punishable (or otherwise addressable) wrong.  Indeed, this could be considered a “new-fashioned” type of deterrence.

The COI Blog has previously discussed two cases of this sort – one involving Goldman Sachs, the other S&P – both having to do with allegedly false claims by the defendant firms that they had taken strong compliance measures against conflicts of interest.  And at the end of last month, another case was brought in which faking compliance was itself treated as a punishable wrong.

The case – In the Matter of Marc Sherman – can be found here, but readers may find more useful a post about it on the Harvard corporate governance blog by attorneys from the Ropes & Gray law firm.  As they note:

“On July 30, 2014, the Securities and Exchange Commission (“SEC”) advanced a novel theory of fraud against the former CEO (Marc Sherman) and CFO (Edward Cummings) of Quality Services Group, Inc. …, a Florida-based computer equipment company that filed for bankruptcy in 2009. The SEC alleged that the CEO misrepresented the extent of his involvement in evaluating internal controls and that the CEO and CFO knew of significant internal controls issues with the company’s inventory practices that they failed to disclose to investors and internal auditors. This case did not involve any restatement of financial statements or allegations of accounting fraud, merely disclosure issues around internal controls and involvement in a review of the same by senior management. The SEC’s approach has the potential to broaden practical exposure to liability for corporate officers who sign financial statements and certifications required under Section 302 of the Sarbanes-Oxley Act (‘SOX’). By advancing a theory of fraud premised on internal controls issues without establishing an actionable accounting misstatement, the SEC is continuing to demonstrate that it will extend the range of conduct for which it has historically pursued fraud claims against corporate officers.” (Emphasis added.)

Of course, there is much more that could be said about the various connections that the legal system draws between violations of law and poor compliance than what’s in this and the other two cases mentioned above.  (See, for instance, this prior post about the SAC insider trading case brought last year – where the weakness of the company’s compliance program was used as a basis for finding corporate liability for insider trading by individual employees.) And, the notion of punishing fake (or otherwise weak) compliance efforts has long been part of enforcement strategies in highly regulated areas (e.g., broker-dealer compliance). But the Sherman case seems especially important, as it can be utilized in training corporate officers in public companies of all kinds on the need to be careful in executing their S-Ox certifications, which, in turn, should lead them to have a greater appreciation of the value of strong compliance generally.

Finally, the Ropes & Gray post concludes with the following observation: “this case, which includes fraud charges in an accounting case without any restatement of financials, seems to represent an application of SEC’s ‘Broken Windows’ strategy first announced by Robert Khuzami and reiterated by Mary Jo White—to pursue small infractions on the theory that minor violations lead to larger ones—to the public company disclosure and accounting space.”  To this I would add that a “Broken Windows” strategy for preventing wrongdoing is also supported by behavioral ethics research (see this post), and the Sherman case should also be a reminder for C&E officers to review whether their own companies’ deterrence systems take this approach into account to a sufficient degree.


New proof that good ethics is good business

In a simpler economic time, the tangible rewards to oneself from doing good for others were fairly self-evident. A memorable articulation of this (from a chronicler of Eskimo life who is quoted in Robert Wright’s book Nonzero: The Logic of Human Destiny): “‘the best place for [an Eskimo] to store his surplus is in someone else’s stomach.’”  But as we have progressed from hunter-gatherer societies – where it was clear that sharing food today could lead to life-saving reciprocation tomorrow – to the modern world of complex capital markets, more is now required to make the economic case for helping others.

That need, as described in a post earlier this year, arises in part “because of the enduring influence of a free-market critique of business ethics associated with Milton Friedman’s 1970 article ‘The Social Responsibility of Business is to Increase Profits.’  While I do not agree with his view, I understand its appeal: it has the virtue of simplicity – and hence being easy to apply; and, particularly with respect to public companies – where managers act as stewards of other people’s money – it can certainly be seen as fairness based.” Indeed, Friedman’s critique has special relevance to the COI Blog, as it suggests that managers’ acting in a socially responsible way may in fact constitute a conflict of interest vis-à-vis their shareholders.

However, as with many business ethics issues generally and COI issues in particular, resolving this one is less a matter of drawing from philosophy than from social science, as Friedman’s view is based largely on an essentially zero-sum notion that a company’s acting ethically tends to disadvantage its shareholders economically.  But what if that premise were factually questionable? Indeed, as also noted in the above-referenced prior post, a then just-published study – looking at promoting integrity values, a different but related aspect of business ethics than corporate social responsibility (“CSR”) – had helped to show that “‘high levels of perceived integrity are positively correlated with good outcomes, in terms of higher productivity, profitability, better industrial relations, and higher level of attractiveness to prospective job applicants,’” thereby at least partly undermining the view that good ethics is bad for business. Still, given how complex, contentious and consequential it is, this issue calls out for more research.

So, it is good news that another study – this one focused on CSR itself – has recently been added to the relevant literature in this area: “Socially Responsible Firms,” which is published by the European Corporate Governance Institute (ECGI) and authored by Allen Ferrell of Harvard University and ECGI, Hao Liang of Tilburg University and Luc Renneboog of Tilburg University and ECGI.  It is available on SSRN, and a summary of it can be found on the Harvard Law School Forum on Corporate Governance and Financial Regulation.

As noted in that summary, the authors’ focus was on the area of agency and particularly the Friedman-inspired critique that “socially responsible firms tend to suffer from agency problems which enable managers to engage in CSR that benefits themselves at the expense of shareholders.  Furthermore [the critique posits] managers engaged in time-consuming CSR activities may lose focus on their core managerial responsibilities… Overall, according to the agency view, CSR is generally not in the interests of shareholders.” Using “a rich and partly proprietary CSR dataset with global coverage across a large number of countries and covering thousands of the largest global companies, [the study’s authors] test [both this agency view and its opposite – which argues that CSR in fact is value enhancing for companies] by examining whether traditional corporate finance proxies for firm agency problems, such as capital spending cash flows, dividend payouts and leverage, are associated with increased CSR. [They also test] the relationship between CSR and managerial pay-for-performance.”

As noted in the Harvard blog summary, the findings from this research help support the notion that good ethics – in this particular instance, CSR – is good business: “We do not find empirical evidence that CSR is associated with ex ante agency concerns, such as abundance of cash and a weak connection between managerial pay and corporate performance. Rather, higher CSR performance is closely related to tighter cash—usually a proxy for better-disciplined managerial practice in the traditional corporate finance literature … and higher pay-for-performance sensitivity. In addition, firms in countries with better legal protection on shareholder rights receive higher CSR ratings…. Finally, we find that CSR can counterbalance the negative effects of managerial entrenchment, and lead to higher shareholder value…”

So, this is definitely more complicated than the adage about filling Eskimo tummies, but the bottom line is that these and other results of their research “suggest that good governance is associated with higher CSR, and that a firm’s CSR practice is consistent with shareholder wealth maximization.” While no one study could ever definitively make the case for strong CSR or other aspects of good business ethics (just as no one study could ever disprove such a case), the work of Ferrell and his colleagues should enhance the comfort that managers and boards of directors feel in moving in this direction.


Is compliance anti-capitalistic?

In 1990, at the dawning of what in retrospect can now be seen as an “age of compliance,” the senior partner in the law firm where I worked penned a note of dissent in an op-ed piece he published in the Wall Street Journal.  “Be a good corporate citizen,” he wrote, adding that by this he meant that companies should “fight the feds.” Although I saw great promise in the then-new notion of corporate compliance programs, I could also envision, as he did, the dangers in going overboard.

I still can.  Indeed, that is why – whether in my writing or advisory work – I promote a notion of “Goldilocks compliance.”

But a different issue is whether compliance should be seen broadly as anti-capitalistic.  This seems to be the gist of an argument against the Sunshine Act by libertarian commentator John Stossel, who recently asked: “[W]ithout government regulation, what prevents greedy doctors and greedy medical device makers or drug companies from colluding?” His answer: “Market competition. Other scientists will try to replicate dramatic findings and debunk false claims and sloppy scientists. Companies worry about scandal, lawsuits, the FDA and recalls. They can’t get rich unless their reputation is good.”

I wish it were that easy, but I also believe that the market in question is not as efficient as is suggested.  Rather, this seems to be an area of significant market failures – primarily “information deficiency” (but also public costs) – meaning that information needed by patients, health care providers and manufacturers of pharmaceutical and medical device products has not always been readily available/understandable enough for the markets to work their magic. Indeed, the many prosecutions of life science companies for fraud are by definition cases of information deficiency, and the very purpose of the Sunshine Act is, at least in part, to remedy deficiencies of this sort.  Also relevant here is the notion of moral hazard, and specifically the fact that for various reasons those who create COI risks in life science companies may not be the same individuals who bear the brunt of prosecution, scandal, etc. – further diminishing the efficacy of the market in question.

Additionally, I don’t think that it is in the interest of libertarians to broadly reject the notion of using a market failure analysis to help frame approaches to law or ethics (although I hasten to add that in his recent piece Mr. Stossel did not say that he was in fact doing this).  In that connection, I believe that part of the reason public debt has reached its current scandalous level has to do with various conflicts of interest and other market failures, as discussed in this earlier post.  More broadly, there is nothing inherently politically left-wing (let alone anti-capitalistic) about considering the impact of market failures.  Rather, a market failure analysis treats capitalism – appropriately – as an economic phenomenon, and not a theological imperative.

On the other hand, care must always be taken that a market failure analysis doesn’t lead to compliance/ethics overkill.  To twist the words of Einstein a bit, market-failure-based interventions (whether legal or ethical) should be undertaken to the extent necessary, but not more so. At least as a general matter, I believe that Mr. Stossel and I would agree on this.

Finally, compliance generally and mitigation of conflicts of interest in particular are not the only areas where business ethics can bump up against capitalism. For a look at this important and fascinating (at least to me) area through a broader lens, I encourage you to read this recent post on “Three stories about capitalism” by Jonathan Haidt on the Ethical Systems web site.

(For more on:

- market failures and conflicts of interest generally, see this post;

- the Sunshine Act, see this guest post by Bill Sacks and a recent post from another blog about how “[t]he federal government has made financial disclosure very easy with the Sunshine Act”;

- the many ways that COIs in fact corrupt the behavior of business people, including well-meaning professionals, see the various posts collected here;

- moral hazard, and its meaning for ethics and compliance, see posts collected here.)

How to “sell” a C&E program or risk assessment internally

C&E officers generally understand the need for risk and program assessments, but such understanding is less common in the top regions of the corporate org chart.

In our latest dialogue on ECOA Connects, Steve Priest and I examine four commonly encountered hurdles to getting buy-in for both sorts of assessments – and “sales” strategies for surmounting such hurdles.

We hope you find it useful.

Conflicts of interest and “the social nature of humans”

Private supply chain auditing continues to play an increasingly important role in compliance and ethics efforts worldwide.  A recent working paper from the Harvard Business School – “Monitoring the Monitors: How Social Factors Influence Supply Chain Auditors,” by Jodi Short, Professor of Law at the University of California Hastings College of the Law; Michael Toffel of the Technology and Operations Management Unit at the Harvard Business School; and Andrea Hugill of the Strategy Unit at the Harvard Business School – examines various factors that impact the efficacy of such audits.  The paper can be downloaded from SSRN, and a summary of it can be found on the Harvard Corporate Governance web site.

For this study, the authors conducted a review of “data for thousands of code-of-conduct audits conducted in over 60 countries between 2004 and 2009 by one of the world’s largest social auditing companies, …”  They found that “auditors’ decisions are shaped not only by the financial conflicts of interest that have been the focus of research to date, but also by social factors, including auditors’ experience, professional training, and gender; the gender diversity of their teams; and their repeated interactions with those whom they audit.”  The authors state that this “finer-grained picture suggests that audit designers should moderate potential bias and increase audit reliability by considering the auditors’ characteristics and relationships that we found significantly influencing their decisions,” and also that these findings “should likewise inform the broader literature on private gatekeepers such as accountants and credit rating agencies.”

Indeed, and beyond the scope of the paper, a focus on social – and not just economic – ties may be key to assessing various independence issues regarding boards of directors.  In an important decision from 2003 involving a derivative action brought by shareholders of Oracle Corp., then-Vice Chancellor Leo Strine noted: “Delaware law should not be based on a reductionist view of human nature that simplifies human motivations on the lines of the least sophisticated notions of the law and economics movement.  Homo sapiens is not merely homo economicus.  We may be thankful that an array of other motivations exist that influence human behavior; not all are any better than greed or avarice, think of envy, to name just one.  But also think of motives like love, friendship, and collegiality, think of those among us who direct their behavior as best they can on a guiding creed or set of moral values,” adding, “[n]or should our law ignore the social nature of humans.”

Finally, thanks to friend of the blog Scott Killingsworth for recently reminding me of the Oracle decision; here’s an earlier post about the Oracle case, albeit with a different focus; and here is a post briefly discussing (and linking to) a paper by Jon Haidt and colleagues about the business ethics implications of a model of human nature called “Homo Duplex” – a term coined by the sociologist/psychologist/philosopher Emile Durkheim – which posits that we operate on (or shift between) two levels: a lower one, which he deemed “the profane,” in which we largely pursue individual interests; and a higher, more group-focused, level, which he called “the sacred.”