Checking

“Checking” – auditing, monitoring, certifications, self-assessments and questions in exit interviews (among other things) – can play an essential role in nearly any COI compliance regime. See the several subcategories below for more information about some of the principal checking tools.

Assessing your conflict of interest compliance program

Under the Department of Justice’s standards for evaluating compliance and ethics (C&E) programs, companies should undertake program self-assessments from time to time.

What does this entail? At a minimum, it should include assessing the general components of the C&E program (e.g., compliance office, helpline, training) as well as corporate culture. And, for many companies, a “deep dive” into substantive areas of high risk, such as anti-bribery and competition law, should be within the scope of the assessment.

Somewhat less common is companies assessing their conflict-of-interest (“COI”) compliance programs. This post will offer some ideas for use in conducting such an assessment.

Process

At the outset, I wish to stress that a COI program assessment need not be a standalone process. Rather, companies can – and in most instances, should – make it part of the larger program assessment.

Is COI included in your risk assessment?

Note that what this question asks is more than just whether there are actual COIs at the organization in question. Rather, the inquiry is about how likely and potentially impactful COI risks are.

As a practical matter this means:

– Determining how culture affects COI likelihood – as a matter of organizational, geographic and industry culture. Note that while the first two types of culture are commonly the focus of risk assessment, the third – industry culture – generally is not, but (in my view) should be.

– Determining what the opportunities for COIs are. This is a matter of having adequate financial controls, of course, but it also entails looking at the “supply side” of opportunities to enter into COIs.

Note that there is no particular formula for this. What is required is an act of “informed imagination.”

Also, it is particularly important to ask the impact question with COIs, because such impacts are often dismissed as “harmless.” Focusing on impacts in a COI risk assessment can help show why that is not the case.

COI policies and procedures

Presumably almost all companies have COI provisions in their respective codes of conduct, but not all have standalone policies. The latter aren’t typically mandatory but are generally a good idea where the subject may be too complex for a code provision to cover completely.

The most important topic for COI policies and procedures often concerns disclosure/approval. As a general matter, disclosure should be made to – and approval obtained from – compliance, legal or HR. Allowing approvals by line supervisors – if necessary – should still entail notice to compliance, legal or HR.

Training and communications

These should be driven by the risk assessment, and there is clearly no one size that fits all when it comes to COI training and communications. However, a fairly typical approach for a medium risk company would entail:

– COI as a module in code of conduct training for all employees delivered every year or two.

– Other training on a risk-based basis (such as for managers or procurement staff).

– Other communications on a risk-based basis (e.g., about gift giving – to be disseminated during the holidays).

Auditing and discipline

Companies often review COI case files as part of site audits. Whether to do this – or other auditing – should be informed by the risk assessment.

Finally, from an organizational justice perspective, it is important that COIs be handled in a fair way. While fairness is important to how all C&E issues are resolved, this is particularly so for COIs – given that COIs have an obvious personal dimension; e.g., hiring or promoting a relative arguably hurts other employees more than other offenses would.

Robots and conflicts of interest

From “Do You Have a Conflict of Interest? This Robotic Assistant May Find It First,” recently published in the NY Times:

What should science do about conflicts of interest? When they are identified, they become an obstacle to objectivity... Sometimes a conflict of interest is clear-cut. But other cases are more subtle, and such conflicts can slip through the cracks, especially because the papers in many journals are edited by small teams and peer-reviewed by volunteer scientists who perform the task as a service to their discipline.

The Times piece further notes: With such problems in mind, one publisher of open-access journals is providing an assistant to help its editors spot such problems before papers are released. But it’s not a human. Software named the Artificial Intelligence Review Assistant, or AIRA, checks for potential conflicts of interest by flagging whether the authors of a manuscript, the editors dealing with it or the peer reviewers refereeing it have been co-authors on papers in the past… (Note: prior coauthoring of an article by itself would not constitute a COI, but could be an indication of one.) The tool cannot detect all forms of conflict of interest, such as undisclosed funding sources or affiliations. But it aims to add a guard rail against situations where authors, editors and peer reviewers fail to self-police their prior interactions.
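The core of a co-authorship check of this kind is simple enough to sketch. The following is a minimal, illustrative example – the function names and data structures are my own assumptions, not AIRA’s actual design – of flagging reviewers who have previously published with a manuscript’s authors:

```python
# Illustrative sketch of a co-authorship conflict check.
# All names and structures here are hypothetical, not AIRA's real implementation.

def build_coauthor_index(papers):
    """Map each author to the set of people they have co-authored with.

    `papers` is a list of author lists, one per prior publication.
    """
    index = {}
    for authors in papers:
        for a in authors:
            index.setdefault(a, set()).update(x for x in authors if x != a)
    return index

def flag_reviewer_conflicts(manuscript_authors, reviewers, coauthor_index):
    """Return reviewers with a prior co-authorship tie to any manuscript author."""
    flags = []
    for r in reviewers:
        ties = coauthor_index.get(r, set()) & set(manuscript_authors)
        if ties:
            flags.append((r, sorted(ties)))
    return flags

# Hypothetical publication history and review assignment:
past_papers = [
    ["A. Alvarez", "B. Baker"],
    ["B. Baker", "C. Chen", "D. Dube"],
]
index = build_coauthor_index(past_papers)
print(flag_reviewer_conflicts(["C. Chen"], ["A. Alvarez", "B. Baker"], index))
# → [('B. Baker', ['C. Chen'])]
```

As the Times piece notes, a hit from a check like this is only an indication of a possible COI, not a finding of one – a human editor would still need to review each flag.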

Note that the use of data mining for COIs is not new. Indeed, for many years, auditors have looked for matches between the addresses of employees and vendors. And anti-corruption compliance programs increasingly involve data mining, as is true of competition law compliance too.
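The employee/vendor address test mentioned above can be sketched in a few lines. This is an illustrative example only – the normalization rules and names are my own assumptions, not any particular audit tool’s logic:

```python
# Illustrative sketch of the classic audit test: matching employee addresses
# against vendor addresses. The normalization rules here are assumptions.

import re

def normalize(addr):
    """Crudely canonicalize an address so trivial formatting differences
    (case, punctuation, common abbreviations) don't hide a match."""
    addr = addr.lower()
    addr = re.sub(r"[.,#]", " ", addr)
    replacements = {"street": "st", "avenue": "ave", "road": "rd", "suite": "ste"}
    words = [replacements.get(w, w) for w in addr.split()]
    return " ".join(words)

def address_matches(employees, vendors):
    """Return (employee, vendor) pairs whose addresses normalize identically."""
    by_addr = {}
    for name, addr in employees:
        by_addr.setdefault(normalize(addr), []).append(name)
    hits = []
    for vendor, addr in vendors:
        for emp in by_addr.get(normalize(addr), []):
            hits.append((emp, vendor))
    return hits

# Hypothetical master-file extracts:
employees = [("J. Smith", "12 Elm Street, Apt 4"), ("R. Jones", "9 Oak Ave")]
vendors = [("Acme Supplies LLC", "12 Elm St Apt 4")]
print(address_matches(employees, vendors))
# → [('J. Smith', 'Acme Supplies LLC')]
```

In practice auditors would use fuzzier matching (and additional fields, such as phone numbers or bank accounts), but even exact matching on normalized addresses can surface undisclosed employee-vendor relationships.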

Moreover, the specifics of efforts like these will vary by industry. (E.g., the co-author relationships of the type referenced above would presumably be relevant only to businesses where publishing plays an important role.)

But for any company it is worth considering – based upon the company’s risk profile – whether there  are any opportunities of this sort.


Assessing conflict of interest compliance programs

Here is a just-published article in Corporate Compliance Insights by Rebecca Walker and me on conducting assessments of conflict of interest compliance programs.

We hope you find it useful.

This is a test

In Testing Compliance (published on the Harvard corporate governance web site, with the full paper available at SSRN), Brandon L. Garrett, Professor of Law at Duke Law School, and Gregory Mitchell, Professor of Law at the University of Virginia School of Law, note that “what makes the compliance enterprise deeply uncertain and problematic is that the information generated by compliance efforts is simultaneously useful and dangerous. However, documenting problematic behaviors creates a record that may be used against the corporation in future administrative, criminal or civil proceedings, or may become the subject of a media exposé. Officers and directors, and the in-house compliance team, may sincerely hope compliance programs are effective, but they may quite rationally avoid testing that hope. The end result will often be rational ignorance with respect to the effectiveness of corporate compliance programs. This dynamic—the hope that greater attention to compliance will reap benefits drives more resources toward compliance efforts, yet fears about what examining the effects of those efforts might reveal hinders validation of compliance programs—creates a ‘compliance trap’ that can ensnare corporations and regulators alike.” The authors “explore ways out of this trap.”

Among other things:

– They argue for government policies to promote more information sharing by companies about what works and what doesn’t in terms of C&E. While there is already some such sharing via compliance conferences and through various professional organizations, there is clearly room for improvement here.

– They also note, based on compliance information published by Fortune 100 companies, that if such companies “are measuring the effectiveness of their compliance programs, they are not sharing it. It is also possible that what we see is what we get: active educational efforts focused on employee training and assessments of that training using employee surveys and reactive compliance efforts relying on whistleblower reporting and investigation of those reports. The public record reveals few active efforts to detect and remedy weaknesses within internal compliance systems.” I agree that sharing of this kind could be a powerful force in promoting strong C&E.

– They propose instituting a “legal mandate that organizations regularly test their compliance systems for effectiveness. But to incentivize companies to put in place strong compliance programs and audit those programs rigorously, the mandated reports should not increase their litigation exposure.” I think implementing legislation to help companies avoid the “compliance trap” in this way would be very beneficial, though getting to such a safe place would – in my view – be a lengthy and difficult journey.

– They note: “Companies need to proactively test whether their employees, when given the chance to misbehave, really do. Such testing need not involve comprehensive data collection or expensive analytics, although firms increasingly use such tools, and consultants may market AI approaches to compliance. Rather, experiments, relying on blind performance testing of randomly sampled employees, can quite inexpensively measure whether employees comply in realistic work situations.” I note (as do the authors) that some of this already happens but think there needs to be more of it. However, one must be careful to avoid the perception that employees are being treated as the subjects of experiments.

Finally, there is much more to this piece and I encourage you to read it in its entirety.


What is in a name?

A recent letter to the editor of Nature argues:

Transparency about competing interests is essential when reporting scientific data. However, use of the term ‘conflict of interests’ for such declarations can be misleading in some biomedical papers. A genuine example of a conflict of interest is when academic researchers are financially rewarded for their work by commercial partners. The situation can be more nuanced for reports of biomedical discoveries that could be applied in clinical situations. After all, developing such treatments for patients is a moral obligation for academic researchers, both to their funders and to society — even though it can mean working with biotechnology or pharmaceutical companies. Disclosing a financial arrangement as a ‘conflict of interest’ under such circumstances implies that engagement with for-profit companies is a nefarious activity, potentially at odds with what society expects from biomedical scientists. In that context, a ‘declaration of interest’ would be a more accurate term for a mandatory and transparent disclosure of financial relationships. A ‘conflict of interest’ should instead be reserved for authors who cannot document efforts to translate their discoveries to the clinic. (The author of the letter is  René Bernards of the Netherlands Cancer Institute.)

I do not know enough about biomedical research conflicts of interest to gauge the merits of this suggestion, but I would be surprised if some scientists couldn’t still have a conflict of interest even if they translated their discoveries to the clinic. However, the larger point – that a COI disclosure can unfairly suggest nefarious conduct – basically seems sound.

More generally, I believe that in organizations of all kinds, policies, training and disclosure documents should communicate that not all ostensibly conflicting interests are wrongful. (This point is sometimes made, but in my view not often enough.) The alternative may be to discourage desirable conduct and to drive other conduct underground.

Conflict of interest self assessments

C&E program assessments sometimes have a general scope and sometimes are focused on a single substantive risk area – such as corruption or competition law. For some companies it makes sense to do such a targeted assessment for conflicts of interest – particularly for those responding to a significant COI violation or “near miss.”

The scope and approach of such assessments for any given company at any given time should vary based on a variety of circumstances. Hopefully, however, the following questions/comments can be helpful to some organizations seeking to determine whether/how to go down this road.

Risk Assessment. Has the company assessed COI risk? If so, has it used the results of the assessment(s) in designing and implementing other aspects of the COI program?

Governance. Have the respective COI oversight roles of the board of directors and senior management been formalized? Do they receive appropriate reports of COI program activity? Are there sufficient escalation provisions regarding COIs?

Culture. Are COI rules followed or are there double standards? What is the sense of “organizational justice” vis a vis COIs?

Policies. Presumably nearly every business organization has a COI provision in its code of conduct – but many that need a standalone policy do not have one.

Procedures. Are disclosure procedures clear, easy to use and well known? Do those tasked with reviewing COIs have sufficient knowledge and independence for the job?

Training/other communication. Is there enough training given relevant COI risks (which tend to be high for senior managers/board members and in certain functions)? Is training reinforced through other communications?

Auditing and monitoring. Is the COI disclosure practice audited? The same question applies to monitoring (of conditionally approved COIs).

Responding to allegations/requests for guidance. Do employees feel comfortable seeking guidance on possible COIs? Are investigations truly independent? Are violations of the COI policy treated with sufficient seriousness? Does the company conduct a “lessons learned” analysis of significant COI failures?

Of course, there is much more that could be included in a COI self-assessment (and I encourage you to browse the blog for ideas in this regard). But hopefully the above will be a useful foundation for starting.


“Point-of-risk” compliance

Marketers have long known that “point-of-sale” display of products can be a powerful advertising tool.  But can its logic be put to work for promoting compliance and ethics?

I was recently asked by a client to fill out a vendor information form and noticed that in addition to seeking information from vendors the form required the employee proposing the hiring to certify that any conflict of interest involving the vendor had been disclosed and okayed by management and the C&E officer.  While I know that many companies have some form of COI certifications (see prior posts collected here), I can’t recall having seen one on a vendor information form of this sort before – even though the common sense of such a “point-of-risk” compliance approach seems pretty obvious.  Indeed, it is hard to think of any reason why a company wouldn’t do this.

Moreover, such an approach  is supported by behavioral science, as described in this earlier post.  And, as also noted in that post, beyond the COI risk area there is no shortage of  other “point-of-risk” compliance opportunities for many companies: “anti-corruption – before interactions with government officials and third-party intermediaries;  competition law – before meetings with competitors  (e.g., at trade association events);  insider trading/Reg FD – during key transactions, before preparing earnings reports;  protection of confidential information – when receiving such information from third parties pursuant to an NDA;  …  accuracy of sales/marketing – in connection with developing advertising, making pitches; and employment law – while conducting performance reviews…” (Note: in the earlier post I refer to this approach as “just-in-time” compliance, but on reflection think that “point of risk” is closer to the mark.)  Doubtless there are many others too.

I should stress that this suggestion does not imply an increase in the total amount of C&E education, which for some companies would be a non-starter.  Rather, a robust “point-of-risk” strategy might allow a company to decrease its use of less impactful communications, meaning principally those that  lack immediacy and context.

Thinking more broadly, a “point of risk” C&E communication strategy might work for teaching ethics in business schools and colleges. Writing last week in the Huffington Post, William Steiger of the University of Central Florida’s College of Business Administration argued that: “Business schools should use examples of ethical practices and decision-making throughout the curriculum, not just in the ethics class.” I agree (and indeed, when I was teaching business ethics years ago, I made a similar proposal; I hope Steiger has more success with this than I did).

Whether it is in the workplace or classroom, there is a growing need to  find ways to better communicate and otherwise support ethical expectations.  For many businesses and schools, a point-of-risk approach may be a good place to start.

The complicated and consequential world of compliance “checking”

Over time, companies should devote an increasingly greater amount of C&E program effort/resources to “checking” – auditing, monitoring and other forms of self-assessment. More than two decades after C&E checking became the law of the land, one can imagine how little sympathy the government would have for a company that tries to get “credit” for its C&E program but which had taken insufficient steps to determine whether that program was in fact fit for purpose.

However, if the need for checking is clear, where to start  (or what step to take next) may not be. Both as a conceptual and practical matter, this can be a daunting area to tackle given the many types and dimensions of checking available.

In a complimentary web cast sponsored by The Network on January 20, 2015 at 1:00 pm Eastern, I’ll try to survey the world of C&E checking, describing relevant legal expectations and best practices that apply to both the risk area and the general program dimensions.  I’ll also discuss practical measures that companies can take to begin or improve a regime of C&E checking – in effect, a needs assessment for one’s C&E auditing, monitoring, program assessment and risk assessment.  Finally, I’ll consider what the impact of “behavioral ethics” should be on C&E checking.

Postscript: More than 500 C&E folks attended the web cast live and another 400 are getting the recorded version. If you’d just like the slides, please click here.

“The inner voice that warns us somebody may be looking”

Within the treasure trove of H.L. Mencken’s sayings, this definition of “conscience” may be my favorite.  And, various studies have indeed shown that the sense that somebody may be watching can help promote ethical behavior.  Among these are  experiments exposing individuals to “eyespots” –  drawings which create a vague sense of being watched, even among those who know as a factual matter that they aren’t being seen. (See, e.g., this study, showing that exposure to eyespots can promote generosity.)

While actually deploying eyespots around the workplace is hardly a viable option for most companies, various technological advances offer not only the appearance of being watched but the actuality of it.  Such monitoring technologies can be particularly promising for promoting compliance by parts of a workforce for whom supervision is relatively remote – which is often the case for sales people.

For two other risk-related reasons, sales people can be a logical choice for C&E monitoring:

– Their incentives may not align well with those of their respective companies – a “moral hazard” condition.  (Indeed, in a risk assessment interview I conducted last week, the interviewee responded to a question about conflicts of interest by saying – only somewhat in jest – that the whole company sales force had such conflicts.)

– Sales people tend to be in a position to cause legal/ethical violations – e.g., corruption, collusion and fraud – much more than the average employee at a company.

But, while the case for monitoring sales people is strong as a general matter, obviously not all monitoring strategies are equally effective. According to “Does transparency influence the ethical behavior of salespeople?” – a paper by John E. Cicala, Alan J. Bush, Daniel L. Sherrell and George D. Deitz published in the September 2014 issue of the Journal of Business Research (rentable on DeepDyve): “it is not the perception of visibility that drives sales persons behavior, but rather the perception of the likelihood of negative consequences resulting from management use of knowledge and information gained from technologically increased visibility.”

Of course, these results – based on an on-line survey which is described in the paper – presumably won’t surprise any C&E professionals. (Nor, likely, would they have impressed Mencken, who also said: “A professor must have a theory as a dog must have fleas” – although I should add that that’s just another chance to quote the great man – not a reflection of my view of this paper.) But, as with much of the social science research discussed in this blog, having data to back up what is intuitively known may be useful, particularly when seeking to make C&E reforms in a company that are being resisted.

Most relevant here is the often-contentious issue of how open a company is with its discipline for violations (meaning not just of sales persons but of any employee). While C&E professionals typically understand that true “public hangings” – i.e., full identification of individual transgressions and transgressors – can be undesirable for all sorts of reasons, there is still a lot that their respective companies can do in a general way to show that negative consequences do exist for breaches of C&E standards. Hopefully, this new research can help C&E professionals make such a case.

An important real-world conflict of interest experiment

In today’s NY Times, Michael Greenstone, an economics professor at MIT, writes about a study on auditor COIs that he – together with Esther Duflo of MIT and Rohini Pande and Nicholas Ryan, both of Harvard – recently published. The study was conducted in Gujarat, India, where industrial plants with high pollution risks are required “to hire and pay auditors to check air and water pollution levels three times annually and then submit a yearly report to” a governmental body. In the study, for a randomly selected set of companies, but not for a control group, “auditors were paid a fixed fee from a central pool of money, a subset of the audits was chosen to have its findings re-examined, and auditors received payments for accurate reports, judged by comparisons with the re-examinations. The control group continued under the status quo system in which auditors were chosen and paid by the plants they were auditing.”

The results of this real-world experiment  powerfully demonstrate the impact on the ethicality of conduct that financial incentives can have – even on the judgment of individuals who, by virtue of their professional norms, are supposed to be resistant to COIs.  That is: “While many of the plants violated the pollution standards, few of the auditors in the control group reported these violations. In the case of particulate matter, an especially harmful air pollutant, auditors reported that only 7 percent of industrial plants violated the pollution standard. In reality, 59 percent of plants exceeded it.” However, “[t]he rules changes [in the experiment] caused the auditors to report more truthfully. In the restructured market, auditors were 80 percent less likely to falsely report a pollution reading as in compliance, and their reported pollution readings were 50 to 70 percent higher than when they were working in the status quo system. This difference was as large even when comparing reports of auditors working simultaneously under the two systems. Finally, and most important, the plants that were required to use the new auditing system significantly reduced their emissions of air and water pollution, relative to the plants operating in the status quo system. Presumably, this was because the plants’ operators understood that the regulators were receiving more accurate information and would follow up on it.”

Three comments on this important study.

First, while most directly relevant to auditors, these results can, I believe, be broadly applicable to COIs generally.  That is, if professionals who are trained to rise above COIs fare this poorly, one can only imagine the impact of COIs on the rest of us.

Second, the more important compliance and ethics program efforts become to society, the greater the need for not just C&E auditing but other forms of checking – such as monitoring, as was discussed in a piece in Corporate Compliance Insights.   But monitoring  (as a general matter) is even less independent than is auditing, so this recent study underscores  the considerable  challenges for making forms of checking beyond auditing effective.

Third, research to determine “what works” is vitally important for the C&E field to mature and realize its full promise, and real-world studies such as this one can be particularly valuable in that regard. Interestingly, another article in today’s NY Times describes how in the UK there is now a government-run effort (headed by a “Behavioral Insights Team”) to use research to determine what works with respect to various public policies, including some compliance-related ones. I hope that the US and other countries will follow the UK’s lead here.

Finally, here is a prior post on auditor COIs.