There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no test that allows us to declare an arbitrary system or technique secure. This implies that claims of necessary conditions for security are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient can always be refuted, the claim that they are necessary cannot. Thus, we ratchet upward: there are many ways to argue countermeasures in, but no possible observation argues one out. Once we go wrong, we stay wrong, and errors accumulate. I show that attempts to evade this difficulty lead to dead ends, and then explore the implications.
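To make the asymmetry concrete, consider a minimal formal sketch (the notation is mine, introduced only for illustration): let $S(x)$ mean "system $x$ is secure" and $C(x)$ mean "$x$ deploys a given countermeasure."
\[
\underbrace{C(x) \rightarrow S(x)}_{\text{sufficiency claim}} \ \text{is refuted by observing } C(x) \wedge \neg S(x); \qquad
\underbrace{\neg C(x) \rightarrow \neg S(x)}_{\text{necessity claim}} \ \text{only by observing } \neg C(x) \wedge S(x).
\]
Observation can supply the first kind of counterexample (a successful attack on a defended system), but the second requires declaring an undefended system secure, which is exactly what no test can do; hence claims of necessity resist refutation.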