This may be because your example doesn't take into account the characteristics of the diagnostic tests used to determine whether or not someone got sick - specifically, positive predictive value (PPV).
I'm not a statistician, so I can't speak to the math, but conceptually I think you are taking for granted that the "sick" and "not sick" people in your population are correctly identified 100% of the time. We know that PPV increases with prevalence (and NPV moves in the opposite direction), which means that as prevalence goes up, a positive COVID test is more likely to be a true positive, and a negative COVID test is more likely to be a false negative (and vice versa).
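To make the PPV/NPV point concrete, here's a quick sketch using Bayes' theorem. The sensitivity and specificity values are assumed for illustration only - they aren't from any particular COVID test.

```python
# Illustrative only: sensitivity/specificity are assumed numbers,
# not taken from any real COVID test.

def ppv(sens, spec, prev):
    """Positive predictive value: P(sick | positive test)."""
    tp = sens * prev              # true-positive fraction of those tested
    fp = (1 - spec) * (1 - prev)  # false-positive fraction
    return tp / (tp + fp)

def npv(sens, spec, prev):
    """Negative predictive value: P(not sick | negative test)."""
    tn = spec * (1 - prev)        # true-negative fraction
    fn = (1 - sens) * prev        # false-negative fraction
    return tn / (tn + fn)

sens, spec = 0.85, 0.98           # assumed, held fixed across prevalences
for prev in (0.01, 0.10, 0.30):   # low vs. high prevalence
    print(f"prev={prev:.2f}  PPV={ppv(sens, spec, prev):.3f}  "
          f"NPV={npv(sens, spec, prev):.3f}")
```

With the test characteristics held constant, PPV climbs from roughly 0.30 at 1% prevalence to about 0.95 at 30%, while NPV drifts down - exactly the "misclassification isn't constant across settings" issue above.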
This dependence on prevalence is related to spectrum bias, which also covers other problems in comparing the two trials. E.g., given that more easily transmissible strains were likely circulating while the Janssen vaccine was being tested, you would likely see more infections per unit time in both the treatment and placebo groups vs. earlier studies, even holding prevalence constant.
Again, I didn't get a PhD in stats, but some casual googling found this, which explains it well:
More academic, highlight mine:

"Although theoretically sensitivity and specificity will remain constant as a disease's prevalence changes, in reality this assumption frequently fails. The reason is that as a disease's prevalence changes, so does its severity, and the severity of disease has a significant impact on the sensitivity and specificity of the tests we use to diagnose it.
For example, rheumatoid arthritis is rare in family doctors' offices, but relatively common in the offices of rheumatologists. This shift in prevalence should not, by itself, affect the sensitivity of a test like hand inspection for joint deformity. However, the rheumatologists are also seeing sicker patients, which means the test is actually more sensitive in their hands."
Having defined "spectrum effect" as differences in the sensitivity or specificity of a diagnostic or screening test according to the patient's characteristics or to the features and severity of the disease, Goehring et al. showed that a "spectrum effect" can lead to a spectrum bias when subgroup variations in sensitivity or specificity also affect the likelihood ratios and thus post-test probabilities (see also [9,11,20]). Indeed, there are some situations for which subgroup analyses of sensitivity and specificity do not lead to the same conclusions as subgroup analyses for likelihood ratios. For example, conflicting results can be obtained when there is no variation in sensitivity and specificity between subgroups, but a higher prevalence of the disease in one subgroup than another.
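The likelihood-ratio point in that last excerpt can be sketched numerically: even when sensitivity and specificity (and hence the likelihood ratios) are identical across subgroups, different subgroup prevalences still produce different post-test probabilities. All numbers here are assumed for illustration.

```python
# Sketch: constant sensitivity/specificity, but different subgroup prevalence,
# yields different post-test probabilities. Numbers are assumed, not from the paper.

def post_test_prob(prev, lr):
    """Pre-test probability -> post-test probability via odds and a likelihood ratio."""
    pre_odds = prev / (1 - prev)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.85, 0.98
lr_pos = sens / (1 - spec)    # likelihood ratio for a positive result (42.5 here)
lr_neg = (1 - sens) / spec    # likelihood ratio for a negative result

for prev in (0.02, 0.25):     # e.g. family practice vs. specialist clinic
    print(f"prev={prev:.2f}  P(disease|+)={post_test_prob(prev, lr_pos):.3f}  "
          f"P(disease|-)={post_test_prob(prev, lr_neg):.3f}")
```

Same test, same likelihood ratios, yet a positive result means roughly a 46% chance of disease in the low-prevalence group vs. about 93% in the high-prevalence one - the subgroup-analysis discrepancy the quote describes.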