Whoever said research was glamorous? OK, maybe it wasn’t you. It was probably a geek like me. But people can get Nobel Prizes for it, at least if they come up with something interesting enough. Lately, though, glory and honors seem to have been taking back seats to concerns about whether this stuff is worthwhile, or even worse, whether it’s trustworthy.

This applies to all kinds of research – medical, psychological, scientific, etc. – with particular focus on studies published in high-end journals that tend to prefer submissions claiming to discover new ideas (as opposed to refuting hypotheses), often submitted by researchers who need to get published if they expect to advance professionally.

Finance is very much part of the controversy. On August 17, 2015, the website Advisor Perspectives published an article entitled "Why You Shouldn't Trust Most Financial Research," and in its Summer 2015 issue, the Journal of Portfolio Management published a set of recommended improvements to enhance the credibility of empirical financial research.

It’s Important To Everybody

Obviously, anything that purports to teach investors how to invest more successfully is crucial, and good research does that. The 1920s and '30s are a great example of what the financial markets can look like if participants just wing it, and 2000-02 reminds us what can happen if we allow ourselves to feel too hip to worry about what we can learn from the ivory-tower crowd. On the other hand, 2008 stands as a sad reminder of what can happen if we go to the other extreme and act as if all research is good research.

Now, with venture capital pouring billions into robo-investing (expecting, obviously, fees paid by you to supply their returns) and with robo marketers trotting out big-pedigree research names and fancy “white papers” to demonstrate the credibility of their offerings, you have an especially important stake in knowing how to differentiate the terrific from the inept. But don’t fret. Even if you don’t have advanced degrees in this field, you’ll still be able to recognize what’s what.

Your Polygraph

Self-protection hinges on two oft-repeated but incredibly valuable platitudes:

  •  If it sounds too good to be true, it probably is.
  •  Past performance does not assure future outcomes.

I know these sound clichéd, especially the past-performance line that gets repeated constantly, often in the it's-not-on-me sections of documents labeled "Disclaimers." Isn't it wonderful how lawyers make excuses for providers before things even get started, much less before anything actually goes wrong? I get it. I'm also turned off by the boilerplate. But as irritating as it sounds, it's really needed, and I've written it many times. It's for real. So please, please, please don't get so jaded that you gloss over statements like these. To give you incentive to brush past the annoyance and take these sayings seriously, consider this: the common element in bad research, the stuff you should avoid, is that it violates these principles.

Too Good To Be True

We don’t know the future. Investing is risky. Sometimes we’re going to lose money. Period. If anybody even hints at anything to the contrary, that’s a clue you’re dealing with a lemon.

Nowadays, you’re most likely to encounter we-got-this rhetoric that uses words, acronyms and names like efficient frontier, optimization, MVO, asset allocation, Markowitz, Black-Litterman, covariance matrix, volatility, correlation, and so forth. This is heady stuff. The goal, a very worthy goal, is to offer you a portfolio, tailor-made for you, that will give you the most bang for the buck while losing as little sleep as possible. In financial jargon, this means balancing asset classes and/or securities in such a way as to deliver the maximum expected return for the standard deviation (risk) you can personally tolerate. The robos that offer this tend to do so via portfolios of ETFs, with the percentage of money allocated to each determined by their models.
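
To make the jargon a bit more concrete, here is a minimal sketch of the textbook Markowitz idea those models are built on: pick weights that maximize expected return while keeping portfolio volatility at or below a chosen level. The three "ETFs," their expected returns, the covariance matrix, and the risk target are all made-up numbers for illustration; real robo models layer on refinements (Black-Litterman views, constraints, rebalancing rules) that this sketch ignores.

```python
# Toy mean-variance (Markowitz-style) optimization with illustrative numbers.
import numpy as np
from scipy.optimize import minimize

# Hypothetical annual expected returns for three ETFs (stocks, bonds, REITs).
mu = np.array([0.08, 0.03, 0.06])

# Hypothetical covariance matrix of their annual returns.
cov = np.array([
    [0.0400, 0.0020, 0.0150],
    [0.0020, 0.0025, 0.0010],
    [0.0150, 0.0010, 0.0300],
])

target_vol = 0.12  # the most standard deviation (risk) this investor will tolerate

def neg_expected_return(w):
    # Minimizing the negative is the same as maximizing expected return.
    return -(w @ mu)

constraints = [
    # Weights must add up to 100% of the portfolio.
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},
    # Portfolio volatility, sqrt(w' C w), must not exceed the target.
    {"type": "ineq", "fun": lambda w: target_vol - np.sqrt(w @ cov @ w)},
]
bounds = [(0.0, 1.0)] * len(mu)  # long-only: no shorting any ETF

result = minimize(
    neg_expected_return,
    x0=np.full(len(mu), 1.0 / len(mu)),  # start from an equal-weight guess
    bounds=bounds,
    constraints=constraints,
    method="SLSQP",
)

weights = result.x
print("Weights:", np.round(weights, 3))
print("Expected return:", round(float(weights @ mu), 4))
print("Volatility:", round(float(np.sqrt(weights @ cov @ weights)), 4))
```

Notice that the tidy-looking weights are only as good as the expected returns and covariances fed into the model, which is exactly where the too-good-to-be-true test comes in.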