Interesting article in Nature on 20 weird tips "to improve policy-makers' understanding of the imperfect nature of science" and reduce belly fat. It's good, although even better IMNSHO would be directions on how to choose between dueling experts. I'd guess they'd say just discount the experts who run afoul of the most tips.
The first few tips are straightforward in theory if not always in practice, and need no comment. Moving on, regression to the mean and extrapolating beyond a data range strike me as things that smart people don't always understand. Replication versus pseudoreplication is also good - a replication that repeats the same errors as the original study, e.g. not accounting for confounding factors, will just give the same bad results.
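Regression to the mean is easy to see in a quick simulation. This is just an illustrative sketch (the "students," skill level, and noise level are all made-up numbers, not from the article): score a population twice with noisy tests, select the top performers on the first test, and watch their average fall back toward the population mean on the second test with no intervention at all.

```python
import random

random.seed(0)

# Hypothetical setup: every "student" has the same true skill (50),
# and each test score is that skill plus random noise.
N = 10_000
scores_1 = [50 + random.gauss(0, 10) for _ in range(N)]
scores_2 = [50 + random.gauss(0, 10) for _ in range(N)]

# Select the top 10% on the first test...
cutoff = sorted(scores_1)[-N // 10]
top = [i for i in range(N) if scores_1[i] >= cutoff]

# ...and compare that same group's averages on the two tests.
mean_1 = sum(scores_1[i] for i in top) / len(top)
mean_2 = sum(scores_2[i] for i in top) / len(top)
print(round(mean_1, 1), round(mean_2, 1))
# the group's second-test average regresses back toward 50,
# even though nothing about the group changed
```

Anyone who credits a remedial program (or blames a reward) for that drop has been fooled by regression to the mean.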
Separating no effect from nonsignificance can be huge, and appears to be what drove the stupidity over whether the Oregon Medicaid study showed that insurance did or did not improve health outcomes. I would also add to the tip "significance is significant" the coda "except when it's not." Just do enough studies and you'll eventually trip over the 5% threshold due to repeated dice-rolling. That's what gets us cancer clusters (or many of them, anyway), and over multiple decades may even result in global temperatures temporarily going below or above the 95% confidence levels.
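The repeated dice-rolling point is worth seeing in numbers. A minimal sketch (a crude two-sided z-test on made-up data, not anything from the article): run many "studies" where there is truly no effect, and about 5% of them will still come up significant at the 0.05 level just by chance.

```python
import math
import random

random.seed(42)

def one_null_study(n=50):
    """One 'study' with no real effect: two groups drawn from
    the same distribution, compared with a two-sided z-test
    (sigma is known to be 1 here, so a z-test is legitimate)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)
    z = abs(diff) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

p_values = [one_null_study() for _ in range(1000)]
false_alarms = sum(p < 0.05 for p in p_values)
print(false_alarms)  # roughly 50 of 1000 null studies "find" an effect
```

Scan a map for cancer clusters, or a century of temperatures for runs outside the confidence band, and you're running those 1000 studies whether you mean to or not.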
Cherrypicking and risk perception are also pretty obvious, but such huge problems that we really need tattoos.
Scientists are human and become biased, yes. The one trick I'd add from the legal profession is the declaration against interest - if an expert's statement goes against what that expert would like to conclude, it's more likely to be true.