Feser obviously thinks “no”, but his arguments are, as usual, full of errors.
There are good reasons why falsification is important: the law of conservation of expected evidence tells us, first, that we can't have evidence for a claim unless it is also possible to have evidence against it; and second, that when a theory makes precise predictions, an expected result is only weak evidence for the theory, while an unexpected result is strong evidence against it.
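Both points can be checked with simple arithmetic. A minimal sketch in Python, with illustrative numbers invented for the example (not taken from any real case): a precise theory assigns P(E | H) close to 1, and Bayes' theorem then makes the expected result weak confirmation, the unexpected result strong disconfirmation, while the posterior always averages back to the prior.

```python
# Conservation of expected evidence, with toy numbers.
# H: the hypothesis; E: the result the theory predicts.
p_h = 0.5                # prior P(H)
p_e_given_h = 0.99       # a precise theory: E is strongly predicted
p_e_given_not_h = 0.5    # without the theory, E is a coin flip

# Total probability of seeing E.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors for each possible outcome, via Bayes' theorem.
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# The expected result only nudges belief up; the unexpected
# result crushes it.
print(f"P(H | E)     = {p_h_given_e:.3f}")    # modest rise from 0.5
print(f"P(H | not E) = {p_h_given_not_e:.3f}")  # collapse toward 0

# Conservation: the posterior, averaged over outcomes, equals the
# prior. So if E would confirm H, not-E must disconfirm it; a claim
# that every possible observation supports has no evidence at all.
expected_posterior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
assert abs(expected_posterior - p_h) < 1e-12
```

The asymmetry comes entirely from the theory's precision: the closer P(E | H) is to 1, the less an expected result can teach us, and the more a surprise costs.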
Feser wants to categorize theology alongside mathematics and logic (and metaphysics) as things that don't need evidence. But this rather misses the point: we exempt pure mathematics from empirical testing not because it is prior to science but because it is the study of formal abstractions, and therefore “true” means something different for a mathematical theorem than it does for a scientific theory. One way to put this is that mathematics contains the same truths in every possible world, and therefore it can't tell you which possible world is the one that you're living in.
Applied mathematics, on the other hand, relies on picking out a relevant abstraction from the real world as a starting point; and once you do this, you start having to care about empirical consequences—because nothing else can tell you whether you picked the correct abstraction.
But metaphysics and theology purport (unlike pure mathematics) to tell us something non-trivial about the actual world we're living in. Accordingly they can't deny the applicability of empirical evidence.
Correlation and Causation
Feser also botches the correlation-vs-causation argument. He argues:
If it turned out that only five percent of people who smoke heavily over the course of many years ended up getting cancer, we could reasonably say that the claim had been falsified.
But it would be absurd, not to mention medically irresponsible, to conclude that the claim of a causal correlation between syphilis and paresis is falsified by the fact that actually developing paresis is rare. All the same, if there were on record only one or two cases, out of millions, of paresis following upon syphilis, it would—especially if no mechanism by which the one might lead to the other were proposed—be hard in that case to resist the conclusion that the claim of a causal correlation had been falsified.
Neither of these claims is even slightly correct. That only a tiny proportion of the people exposed to cause X develop condition Y does not falsify the theory that X causes Y (though it may make the theory harder to establish). What matters is not the value of P(Y | X), the conditional probability of seeing Y given that we observed X, but rather whether P(Y | do(X)), the conditional probability of seeing Y given that we force X to occur, is greater than P(Y). If we simply observe that 5% (or 50%) of smokers develop lung cancer, we don't know whether the smoking caused the cancer or whether, for example, some confounding third factor caused both. On the other hand, if we were to establish a value for P(lung cancer | do(smoking)) and find that it's 5% while P(lung cancer) is, say, 1%, we have clear evidence of causation.
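The gap between P(Y | X) and P(Y | do(X)) is easy to demonstrate by simulation. A toy Python sketch, with invented numbers rather than real epidemiology: a hidden factor G causes both the exposure X and the disease Y, while X itself has no effect on Y at all. Observationally P(Y | X) comes out well above P(Y), but forcing X (the do-operation, which severs the arrow from G to X) leaves P(Y | do(X)) equal to P(Y).

```python
import random

random.seed(0)

def draw(p):
    return random.random() < p

def sample(intervene_x=None):
    """One individual from a toy model in which a confounder G
    causes both X (exposure) and Y (disease); X has NO effect on Y.
    Passing intervene_x forces X, cutting the G -> X arrow: do(X)."""
    g = draw(0.2)
    x = draw(0.8 if g else 0.1) if intervene_x is None else intervene_x
    y = draw(0.5 if g else 0.02)   # Y depends only on G
    return x, y

N = 200_000

# Observational data: X and Y are strongly correlated via G.
obs = [sample() for _ in range(N)]
p_y = sum(y for _, y in obs) / N
p_y_given_x = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Interventional data: force X on everyone.
forced = [sample(intervene_x=True) for _ in range(N)]
p_y_do_x = sum(y for _, y in forced) / N

print(f"P(Y)         ~ {p_y:.3f}")
print(f"P(Y | X)     ~ {p_y_given_x:.3f}  # inflated by confounding")
print(f"P(Y | do(X)) ~ {p_y_do_x:.3f}  # ~ P(Y): X is not a cause")
```

Seeing Y follow X in 34% of exposed cases versus 12% overall looks like strong causation, yet the intervention reveals none; and the reverse construction (a genuine cause with a small effect) is just as easy. Frequency alone settles nothing.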
(Obviously, in the case of smoking, measuring P(cancer | do(smoking)) by experiment on humans would be unethical: it would mean forcing people to smoke. Instead we can estimate it indirectly, either by finding a suitable third variable to adjust for, or by looking at how lung cancer rates change when smoking rates change in a population.)
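The third-variable route is the back-door adjustment from causal inference: if a variable G accounts for every confounding path, then P(Y | do(X=1)) = Σ_g P(Y | X=1, G=g) · P(G=g), computable from purely observational data. A self-contained Python sketch with a hypothetical confounder G and toy numbers (here X does have a genuine, small causal effect on Y):

```python
import random

random.seed(1)

def draw(p):
    return random.random() < p

def sample():
    """Toy model: confounder G causes X and Y; X also genuinely
    raises P(Y) by 5 percentage points."""
    g = draw(0.2)
    x = draw(0.8 if g else 0.1)
    base = 0.5 if g else 0.02
    y = draw(base + (0.05 if x else 0.0))
    return g, x, y

N = 400_000
data = [sample() for _ in range(N)]  # observational only

# Naive conditional probability, biased upward by confounding.
naive = sum(y for _, x, y in data if x) / sum(x for _, x, _ in data)

# Back-door adjustment: P(Y | do(X=1)) = sum_g P(Y | X=1, g) P(g)
p_y_do_x = 0.0
for g in (False, True):
    stratum = [y for gg, x, y in data if gg == g and x]
    p_g = sum(gg == g for gg, _, _ in data) / N
    p_y_do_x += (sum(stratum) / len(stratum)) * p_g

print(f"naive P(Y | X=1)        ~ {naive:.3f}")
print(f"adjusted P(Y | do(X=1)) ~ {p_y_do_x:.3f}")
```

In this model the true interventional probability is 0.2 × 0.55 + 0.8 × 0.07 = 0.166, and the adjustment recovers it from observational data alone, while the naive estimate lands near 0.39. The hard part in practice, of course, is knowing that G really closes every back-door path.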
Other examples of rare causation include Reye's syndrome, which occurs in only a handful of cases per million children in the vulnerable age range, but for which the evidence of a link to aspirin use is strong enough (even though no causal mechanism is known) that aspirin is now officially contraindicated for childhood fevers. If we were limited to Feser's ideas of causation, we would find it impossible to conclude anything about cases like this.