Investment Committee Discusses Due Diligence at Board Meeting

An investment committee member questions one of the components of the scorecard being used to rank managers.

Qualitative assessment methods can be incredibly powerful, and just as valuable as quantitative considerations in manager selection. However, they are much harder to validate than quantitative metrics. As a result, they are more difficult to implement correctly, and often lead to sub-optimal decisions – which in turn produce disappointing performance.

There are many reasons qualitative analysis is difficult to do well, but let’s start with the most fundamental: often, it’s not even entirely clear what conclusion should be drawn from a qualitative process.

A Recommendation to the Investment Committee

Let’s consider an example summary report from a due diligence process that will sound familiar to many institutional investors:

The Manager X Core Equity Strategy received a ranking of Very High along dimensions 1 and 2; a ranking of High along dimensions 3, 4 and 5; and a ranking of Below Average along dimension 6. As a result, Manager X received a score of 12 out of a possible 15 on our evaluation scorecard.

The final score was adjusted to 12.5 to take into account the manager’s specialized expertise in South Asian infrastructure, which would be valuable to our private equity team. This is the highest score on the shortlist, followed by Manager Y with a score of 11 and Manager Z with 9.5.

We also note that Managers X and Y are rated “Buy” by our investment consultant, whereas Manager Z has a rating of “Hold”.

Our consultant arranged for all three managers to present their strategies to the Investment Committee at the most recent meeting, for 45 minutes each. Overall, the Committee had the most positive feedback regarding managers X and Z, and mixed feedback on Manager Y. However, the Portfolio Manager of Y was unable to attend in person. The committee noted that this may have been why the presentation seemed less clear, but also questioned her level of interest in receiving the mandate.

We recommend awarding the mandate to Manager X, based on its highest score vs. peers, its “Buy” rating, and the positive feedback from the committee.

While this write-up sounds reasonable, the rationale used to arrive at Manager X raises numerous questions; let’s start with the high-level ones about the methodology:

  • How were the dimensions of evaluation determined?
  • How were the cutoffs (Very High, High, etc.) determined for each dimension?
  • What controls are in place to ensure that the qualitative ratings are being applied consistently between managers and between internal team members?
  • How does each dimension translate into the performance and non-financial objectives for the mandate?
  • What screening process was used to narrow the choice set to these three managers?

Moving to the optimality of the process and the interpretation of its results:

  • Has this scorecard been validated? Can you produce a scatter plot of historical strategy ratings and subsequent ex-post performance? Does it indicate a statistically significant relationship between rating and objectives? (A minimal sketch of such a test follows this list.)
  • What is the difference between a score of 12.5 and 11 in the scale of expected net-of-fees excess returns?
  • What is the estimated value of the manager’s infrastructure expertise in terms of expected performance boost in the private equity portfolio? Does this equate to the value of “0.5” assigned?
  • Is this process superior to alternative possible processes? For example, does it outperform quantitative solutions such as Empirically’s, which have the benefits of greater speed and transparency, and have been rigorously validated?
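
To make the first of these questions concrete, below is a minimal sketch in Python of the kind of validation test being asked for. It assumes a hypothetical history file with one row per previously rated strategy; the file name and column names are illustrative, not a description of any actual dataset or tool.

    # Minimal sketch: does the scorecard predict subsequent performance?
    # "scorecard_history.csv" and its columns are hypothetical; returns
    # are assumed to be stored as decimals (0.02 = 2%).
    import pandas as pd
    from scipy import stats

    history = pd.read_csv("scorecard_history.csv")  # one row per rated strategy

    # Regress each strategy's subsequent 3-year net-of-fees excess return
    # on the scorecard total it received at the time of rating.
    result = stats.linregress(history["score"], history["excess_return_3y"])

    print(f"slope: {result.slope:.4%} excess return per scorecard point")
    print(f"R^2: {result.rvalue ** 2:.3f}, p-value: {result.pvalue:.3f}")

A statistically significant slope would also answer the second question directly: multiplied by the 1.5-point gap between Manager X (12.5) and Manager Y (11), it gives the difference in expected excess return that the scores imply.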

Regarding the investment consultant:

  • All of the same questions discussed above apply to the investment consultant’s ratings. Most importantly, by how much are Buys expected to outperform Holds, and by what margin have they done so in the past? (A sketch of this back-test follows the list.)
  • Is an audited track record available which measures the value of the consultant’s recommendations?
  • How much weight is assigned to the scorecard relative to the opinion of the investment consultant?
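
The back-test suggested by the first bullet can be sketched in the same hedged spirit; again, the file and column names below are hypothetical. It compares the subsequent one-year excess returns of “Buy”-rated and “Hold”-rated strategies with a Welch two-sample t-test:

    # Minimal sketch: have the consultant's Buys outperformed its Holds?
    # "consultant_ratings.csv" and its columns are hypothetical.
    import pandas as pd
    from scipy import stats

    ratings = pd.read_csv("consultant_ratings.csv")  # one row per rated strategy
    buys = ratings.loc[ratings["rating"] == "Buy", "excess_return_1y"]
    holds = ratings.loc[ratings["rating"] == "Hold", "excess_return_1y"]

    t_stat, p_value = stats.ttest_ind(buys, holds, equal_var=False)  # Welch's t-test
    print(f"mean Buy excess return:  {buys.mean():.2%}")
    print(f"mean Hold excess return: {holds.mean():.2%}")
    print(f"p-value for the difference: {p_value:.3f}")

Without an audited history of this kind, a “Buy” rating is an opinion, not evidence.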

And finally, with respect to the manager interviews:

  • Is it reasonable to assume that Manager Y is less interested in the mandate because its Portfolio Manager was unable to attend in person? If so, does a manager’s level of interest in being awarded the mandate have any bearing on the value it will provide if awarded?
  • How should the investment committee’s feedback regarding Manager Y be weighted? What level of feedback would cause Y (with a score of 11) to rise above X (with 12.5) or fall below Z (with 9.5 and a “Hold”)?

Inside the Investment Committee Boardroom

Let’s now fast forward to the next investment committee meeting:

The board considered the proposal to award the mandate to the Manager X Core Equity Strategy. The CIO recapped the findings of the rating scorecard, and the investment consultant reviewed the three options and the process for reaching the conclusion that Manager X would be the best fit for the portfolio.

After reviewing the scorecard, committee member Mr. A expressed surprise at the relatively low weight being attributed to past performance. The CIO explained that experience has shown that past performance is an unreliable predictor of future returns, and that the other elements in the scorecard were designed to capture future performance potential.

While acknowledging these points, committee member Mr. B expressed similar concerns and, upon reviewing the performance figures, noted that it seemed counter-intuitive that Manager Z had the highest 3-year excess return but had received the lowest score. The consultant added further color on the reasons for Manager Z’s strong prior-year performance, which were not judged to be repeatable.

Committee member Ms. C then noted that while the proposed Manager X had a solid long-term track record, relative performance had slipped to the third quartile year-to-date. She questioned whether this indicated a potential issue developing with the investment process.

Mr. A remarked that on the other hand, perhaps the short-term “hiccup” meant that it was a good time to buy into Manager X.

After further discussion and debate, the committee ultimately gained comfort with the recommendation and voted unanimously to award the mandate to Manager X. “I don’t know about this one, but you guys are the experts,” joked Mr. B to the CIO and the consultant. Other matters were reviewed, and the meeting was adjourned.

Exiting the meeting, the CIO felt uncomfortable. The discussion had made her realize that multiple committee members doubted the reliability of the internal scorecard being used to evaluate managers, and did not necessarily buy into, or even understand, its methodology.

Furthermore, because the committee did not seem to trust the analysis as presented, they had made it clear that they were relying on her personal judgment in voting to appoint the manager. She had dismissed Manager X’s year-to-date underperformance as not meaningful, but what if it continued? The board would be closely monitoring this core allocation, and so for the sake of her reputation, it had now become critical that Manager X turn things around over the next year.

The Takeaways

This fictitious selection process exemplifies some of the issues that can arise when a manager due diligence process lacks sufficient rigor, objective rationale, and quantitative validation. In this case, because they were not presented with a compelling evidence-based argument for Manager X according to metrics they recognized as predictive of future outcomes, the committee did not have the confidence to trust the official process.

Instead, they based their decision on their trust in the judgment of the CIO and their external investment consultant. This is clearly a much weaker basis for a decision, and if performance falls short of expectations, this trust is liable to quickly erode. In that case, the committee is likely to attribute the majority of the blame to the people, not the process – which could result in their replacement.

At their worst, qualitative methods are sometimes used as a smokescreen to conceal low conviction and faulty logic, or to “hedge” one’s bet against multiple potential future outcomes. But even at their best, qualitative methods require enormous care to execute correctly (see, for example, our Insight regarding ESG questionnaires).

This is why, in weighing the costs and benefits, we’ve determined that a rigorously constructed quantitative approach to manager due diligence outperforms qualitative approaches in practice. While there are significant limitations in what information quantitative measures can capture, they are still rich enough to support powerful predictive analytics, and they compensate for those limitations with repeatability, transparency, reduced bias, and cost-efficiency.

Regardless of the mixture of qualitative and quantitative methods employed, each element of a decision process should have an evidence-based rationale. Being able to answer the “Why?” and “How do we know?” questions for each step of a due diligence exercise provides powerful support for the process, and aligns decision makers around a consistent, shared methodology. It also greatly improves fiduciaries’ ability to justify and defend their actions if they are subjected to scrutiny in the future.


Author Information: Jordan Boslego is a Partner at Empirically.

Updated August 2020.