Conflicts of interest, compliance programs and “magical thinking”
An article earlier this week in the New York Times takes on the issue of “Doctors’ Magical Thinking about Conflicts of Interest.” The piece was prompted by a just-published study which examined “the voting behavior and financial interests of almost 1,400 F.D.A. advisory committee members who took part in decisions for the Center for Drug Evaluation and Research from 1997 to 2011” and found a powerful correlation between a committee member having a financial interest (e.g., a consulting relationship or ownership interest) in a drug company whose product was up for review and the member’s voting in favor of the company – at least in circumstances where the member did not also have interests in the company’s competitors.
Of course, this is hardly a surprise, and the Times piece also recounts the findings of earlier studies showing strong correlations between financial connections (e.g., receiving gifts, entertainment or travel from a pharma company) and professional decision making (e.g., prescribing that company’s drug). Nonetheless, some physicians “believe that they should be responsible for regulating themselves.”
However, such self-regulation can’t work, the article notes, because “our thinking about conflicts of interest isn’t always rational. A study of radiation oncologists found that only 5 percent thought that they might be affected by gifts. But a third of them thought that other radiation oncologists would be affected. Another study asked medical residents similar questions. More than 60 percent of them said that gifts could not influence their behavior; only 16 percent believed that other residents could remain uninfluenced. This ‘magical thinking’ that somehow we, ourselves, are immune to what we are sure will influence others is why conflict of interest regulations exist in the first place. We simply cannot be accurate judges of what’s affecting us.”
While the findings of these and similar studies are, of course, most relevant to conflicts involving doctors and life science companies, there is a broader lesson here which, I think, is vitally important to C&E programs generally. That is, they help to show that “we are not as ethical as we think” – a condition hardly limited to the field of medicine or to conflicts of interest, as has been discussed in various prior postings on this blog.
One of the overarching implications of this body of knowledge is that we humans need structures – for business organizations this means C&E programs, but more broadly these have been called “ethical systems” – to help save us from falling victim to our seemingly innate sense of ethical over-confidence. So, to make that case, C&E professionals should – in training or otherwise communicating with employees (particularly managers) and directors – address the issue of “magical thinking” head-on.
Moreover, using the example of COIs to prove the larger point here may be an effective strategy, because employees are more likely to have experience with ethical challenges in this area than with other major risks, such as corruption, competition law or fraud – which indeed may be so scary as to be largely unimaginable to many employees. That is, these and other “hard-core” C&E risk areas might be subject to an even greater degree of magical thinking than COIs are. So, at least in some companies, discussing COIs might offer the most accessible “gateway” to addressing the larger topic of ethical over-confidence.