The most frustrating aspect of working in computational probability and statistics is that it's essentially impossible to construct algorithms that return exact, well-defined probabilistic quantities, and this results in no end of chaos.
At best an algorithm can return an approximation with quantifiable error, but to understand when that approximation is useful you have to learn enough math to understand the exact result and how the algorithmic output relates to it. Many do not do this.
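As a concrete illustration (a minimal sketch, not any particular library's API), consider a Monte Carlo estimate of an expectation: the algorithm returns only an approximation, but the Monte Carlo standard error quantifies how that approximation relates to the exact result, here E[X^2] = 1 under a standard normal.

```python
import numpy as np

# Monte Carlo estimation of E[X^2] under a standard normal target.
# The exact answer is 1; the algorithm returns only an approximation,
# but one whose error is quantifiable via the standard error.
rng = np.random.default_rng(0)
n = 10_000
samples = rng.normal(size=n)

values = samples ** 2
estimate = values.mean()                      # Monte Carlo approximation of E[X^2]
std_error = values.std(ddof=1) / np.sqrt(n)   # quantifies the approximation error

print(f"estimate = {estimate:.4f} +/- {std_error:.4f} (exact value: 1)")
```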
More commonly, programmers project the heuristics that prove successful in other computing problems -- pattern matching, type consistency, unit testing, relying on compiler errors, etc. -- but these test only the algorithm itself, not the relevance of the algorithm to the statistical problem at hand.
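For example (a hypothetical sketch): a unit test can confirm that a mean function computes the arithmetic mean correctly, yet it says nothing about whether the sample mean is a meaningful summary of the data-generating process, say one with such heavy tails that the expectation doesn't exist.

```python
import numpy as np

def sample_mean(x):
    """Arithmetic mean of a sample."""
    return sum(x) / len(x)

# The unit test verifies the *algorithm* -- and it passes.
assert sample_mean([1.0, 2.0, 3.0]) == 2.0

# But it says nothing about the relevance of the algorithm to the
# statistical problem. For a standard Cauchy distribution the expectation
# does not exist, so the "estimate" never stabilizes no matter how much
# data we collect.
rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    draws = rng.standard_cauchy(size=n)
    print(n, sample_mean(draws))
```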
Unaware of these subtleties, many end up conceptually substituting the algorithm for the quantity it approximates, assuming that algorithmic properties are inherent, well-defined features of the underlying probabilistic/statistical system.
Needless to say, this generalizes...poorly. Even worse: without formal knowledge of what is being approximated, the poor generalization itself is easy to ignore, and naive applications drift ever so steadily away from any well-defined mathematical objective.