I am not sure I said or even implied that stats should have zero credibility. All I was saying is: stats can lie if you let them. People should report data in a way that improves the end result, not masks it.
I've bent analysis myself in the past.
I would test 10 to 20 phones every month and typically report 4 failures per batch, broken down by category:
1 transmitter, 1 receiver, 1 microphone, 1 keyboard. Netting a very low failure rate for each.
No one would make any attempt to collect the phones to investigate. When I bunched ALL the failures together and presented the larger combined failure rate, suddenly every group wanted to investigate their respective failures.
Result: FEWER failures in the following months.
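The arithmetic behind that shift in perception can be sketched quickly. This is a hypothetical illustration, assuming a batch of 15 phones (the midpoint of the 10 to 20 range) and the four single-category failures described above:

```python
# Hypothetical numbers matching the anecdote: 4 failures in a batch of 15.
failures = {"Transmitter": 1, "Receiver": 1, "Microphone": 1, "Keyboard": 1}
batch_size = 15

# Reported per category, each failure mode looks rare and ignorable:
for category, count in failures.items():
    print(f"{category}: {count / batch_size:.1%}")  # each about 6.7%

# Reported as one aggregate, the same data demands attention:
aggregate = sum(failures.values()) / batch_size
print(f"Overall: {aggregate:.1%}")  # about 26.7% of phones failed something
```

Same data either way; only the framing changes, and with it the urgency.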
When a new manager griped about the way it was presented, I was told to change the results back to per-category numbers, which in the end decreased the reliability of the phones.