It depends. Some things - critical biosensors, for example - are designed to give lots of false positives because they err on the side of caution (i.e. if a biosensor thinks it might have detected a bioterrorism agent, it raises the alarm, because the alternative - a false negative - is a hell of a lot worse than all the false positives you can imagine). That makes a biosensor well suited for what it's supposed to do - sense bioweapons releases - but if you instead tried to use it as a medical diagnostic tool...well, then it would suck. For the job they're actually used for, though, 90% false positives is pretty damned good.
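To see why, run some rough numbers. Here's a quick sketch of the base-rate arithmetic - the attack probability, sensitivity, and false-positive rate below are all made-up figures for illustration, not the specs of any real sensor:

```python
# Back-of-the-envelope Bayes: why a good rare-event detector still
# produces mostly false alarms. All numbers are hypothetical.
p_attack = 1e-6        # prior: chance any given sample holds a real agent
sensitivity = 0.999    # sensor catches 99.9% of real agents
fp_rate = 1e-5         # sensor alarms on 1 in 100,000 clean samples

# Total alarm rate = true alarms + false alarms
p_alarm = sensitivity * p_attack + fp_rate * (1 - p_attack)

# Of all alarms raised, what fraction are false?
false_alarm_fraction = fp_rate * (1 - p_attack) / p_alarm

print(f"Alarms that are false: {false_alarm_fraction:.1%}")  # ~90.9%
print(f"Real releases missed:  {1 - sensitivity:.1%}")       # 0.1%
```

With those numbers, roughly 91% of alarms are false even though the sensor misses only one real release in a thousand. Crank the threshold up to cut the false alarms and the miss rate climbs - which is exactly the trade the designers refused to make.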
UAVs for border patrol...could be pretty much the same thing. A "false alarm" might be a "good" thing in that context. Or it might not. It sure as sh-- isn't a measure of reliability on its own, though.