
Thread: Coronavirus thread

  1. #7121
    THE THIRST MUTILATOR Nephrology's Avatar
    Join Date
    Sep 2011
    Location
    West
    Quote Originally Posted by whomever View Post
    I'm still not grokking this.

    I fully agree that sample sizes can affect the power of the results; if you don't have enough data you can't draw conclusions. For a limiting case, if the disease prevalence is so low that none of the trial participants, vaccine or control, get sick, then you can't draw any conclusions :-)

    I also agree that different variants, etc, can make a difference.

    I am making a narrow, nitpicky point here - if you have two studies:

    Study1, 100k participants (50k each vacc/control) and 1k of the control and 500 of the vacc group got sick, and
    Study2, 100k participants (50k each vacc/control) and 2k of the control and 1000 of the vacc group got sick
    because the disease prevalence during Study2 was twice what it was during Study1, that is exactly what I think the math would predict if the efficacy was the same for both trials. I'm trying to find the math behind the assertion (that I think I heard in the Vox video) that those are not actually the expected values you would see for equal efficacy, for non-extreme values of disease prevalence (i.e., neither of the control groups had close to 0% or close to 100% get sick).

    I really do get the notion that you can have other kinds of differences between studies that can affect the results.

    I realize this is a narrow, nitpicky, theoretical point that may not be of general interest. But I went to the trouble to get a degree in this, and I don't understand the math behind the assertion, and I'd like to understand it, if it wasn't a mistake.
    This may be because your example doesn't take into account the characteristics of the diagnostic tests used to determine whether or not someone got sick - specifically, positive predictive value (PPV).

    Here is the math you might be looking for:



    I'm not a statistician, so I can't speak to the math, but conceptually I think you are taking for granted that the "sick" and "not sick" people in your population are correctly identified 100% of the time. We know that PPV increases with prevalence (and NPV moves in the opposite direction), which means that as prevalence goes up, a positive COVID test is more likely to be a true positive and a negative COVID test is more likely to be a false negative (and vice-versa as prevalence falls).

    This is a feature of spectrum bias, which could also be used to describe other problems in comparing the two. E.g., given that more easily transmissible strains were likely circulating while the Janssen vax was being tested, you are likely to have more infections per unit time in both the treatment and placebo groups vs. earlier studies, even if we hold prevalence to be the same.

    Again, I didn't get a PhD in stats, but some casual googling found this, which explains it well:

    Although theoretically sensitivity and specificity will remain constant as a disease prevalence changes, in reality this assumption frequently fails. The reason is that as a disease’s prevalence changes, so does its severity, and the severity of disease has a significant impact on the sensitivity and specificity of the tests we use to diagnose it.

    For example, rheumatoid arthritis is rare in family doctors’ offices, but relatively common in the offices of rheumatologists. This shift in prevalence should not affect the specificity of a test like hand inspection for joint deformity. However, the rheumatologists are also seeing sicker patients, which means the test is actually more sensitive in their hands.
    More academic, highlight mine:

    Having defined "spectrum effect" as differences in the sensitivity or specificity of a diagnostic or screening test according to the patient's characteristics or to the features and severity of the disease, Goehring et al. showed that a "spectrum effect" can lead to a spectrum bias when subgroup variations in sensitivity or specificity also affect the likelihood ratios and thus post-test probabilities (see also [9,11,20]). Indeed, there are some situations for which subgroup analyses of sensitivity and specificity do not lead to the same conclusions as subgroup analyses for likelihood ratios. For example, conflicting results can be obtained when there is no variation in sensitivity and specificity between subgroups, but a higher prevalence of the disease in one subgroup than another.
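
    To put very rough numbers on the PPV/NPV point above, here's a quick back-of-the-envelope sketch. The sensitivity/specificity values are made-up assumptions purely to show the direction of the effect, not the characteristics of any real COVID assay:

    Code:
    # Sketch: how PPV and NPV move with prevalence for a fixed test.
    # The sensitivity/specificity numbers are illustrative assumptions only.
    sensitivity = 0.85   # P(test positive | sick)
    specificity = 0.97   # P(test negative | not sick)

    def ppv_npv(prevalence):
        tp = sensitivity * prevalence              # true positives
        fp = (1 - specificity) * (1 - prevalence)  # false positives
        tn = specificity * (1 - prevalence)        # true negatives
        fn = (1 - sensitivity) * prevalence        # false negatives
        ppv = tp / (tp + fp)   # P(sick | test positive)
        npv = tn / (tn + fn)   # P(not sick | test negative)
        return ppv, npv

    for prev in (0.01, 0.02, 0.05, 0.10):
        ppv, npv = ppv_npv(prev)
        print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")

    Same test, same sensitivity and specificity, but as prevalence rises the PPV climbs and the NPV drops, which is the direction described above.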

  2. #7122
    Quote Originally Posted by Nephrology View Post
    Anyway, just the latest piece of interesting COVID news I've read...
    Ahem.



    I do not like interesting right now. I could use less interesting.

  3. #7123
    Site Supporter
    Join Date
    Jan 2012
    Location
    Fort Worth, TX
    Quote Originally Posted by whomever View Post
    I'm still not grokking this.

    I fully agree that sample sizes can affect the power of the results; if you don't have enough data you can't draw conclusions. For a limiting case, if the disease prevalence is so low that none of the trial participants, vaccine or control, get sick, then you can't draw any conclusions :-)

    I also agree that different variants, etc, can make a difference.

    I am making a narrow, nitpicky point here - if you have two studies:

    Study1, 100k participants (50k each vacc/control) and 1k of the control and 500 of the vacc group got sick, and
    Study2, 100k participants (50k each vacc/control) and 2k of the control and 1000 of the vacc group got sick

    because the disease prevalence during Study2 was twice what it was during Study1, that is exactly what I think the math would predict if the efficacy was the same for both trials. I'm trying to find the math behind the assertion (that I think I heard in the Vox video) that those are not actually the expected values you would see for equal efficacy, for non-extreme values of disease prevalence (i.e., neither of the control groups had close to 0% or close to 100% get sick).

    I really do get the notion that you can have other kinds of differences between studies that can affect the results.

    I realize this is a narrow, nitpicky, theoretical point that may not be of general interest. But I went to the trouble to get a degree in this, and I don't understand the math behind the assertion, and I'd like to understand it, if it wasn't a mistake.
    Your math is generally correct, but given the small number of positives involved, +/- 1 individual would have yielded a notably different result.
    Did we miss a few positives in the vax group because they were asymptomatic? Or because of test failure?

    The statistics are way more reliable, IMO, if you have a big enough trial to get at least a few dozen vax recipients who turn up positive. Enough to get 100 is even better (reliability-wise), because then if you miss 1 or 2 (a 1-2% difference) for whatever reason it won't skew the statistical calculations nearly as much as missing 1 out of only 8 positives (a 12.5% difference).
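
    To put rough numbers on that +/- 1 point, here's a quick sketch. The case counts are hypothetical, not from any actual trial:

    Code:
    # Sketch: how much the point estimate of efficacy moves if the count of
    # positives in the vaccine arm is off by one. All counts are hypothetical.
    def efficacy(vax_cases, placebo_cases, vax_n, placebo_n):
        # vaccine efficacy = 1 - (attack rate vaccinated / attack rate placebo)
        return 1 - (vax_cases / vax_n) / (placebo_cases / placebo_n)

    n = 20_000  # hypothetical participants per arm

    # small trial: ~8 positives in the vaccine arm, 80 in placebo
    for vax_cases in (7, 8, 9):
        print(f"small trial, {vax_cases} vax cases: {efficacy(vax_cases, 80, n, n):.1%}")

    # bigger trial: ~100 positives in the vaccine arm, 1000 in placebo
    for vax_cases in (99, 100, 101):
        print(f"big trial, {vax_cases} vax cases: {efficacy(vax_cases, 1000, n, n):.1%}")

    Being off by one positive moves the small-trial estimate by more than a full percentage point, but moves the bigger trial's estimate by only about a tenth of a point.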

    ... Not any kind of expert here... Lane assist is turned off.
    "No free man shall ever be debarred the use of arms." - Thomas Jefferson, Virginia Constitution, Draft 1, 1776

  4. #7124
    Site Supporter ccmdfd's Avatar
    Join Date
    Feb 2011
    Location
    Southeastern NC
    Biostatistics! Great!

    We've gone from ludicrous speed to full plaid!

    After my earlier post, 2 more post-COVID shortness-of-breath cases. Not asthma in these two; still trying to figure it out.

  5. #7125
    Got my first round of Pfizer last week after consultation with my PCP. Thankfully no reaction (I’ve had anaphylaxis before from flu shots and an IV dose of a drug).

  6. #7126
    Thanks, Nephrology and RoyGBiv.

    The test vs. prevalence angle is possible. My gut says the prevalences weren't different enough to have a large effect, but guts can be wrong; I'd have to do the math to be sure.

    RoyGBiv, indeed, sample sizes matter. I think there is good news on that front. In WA state more than a million people have been fully vaccinated, and something like 5 million haven't been vaccinated at all.

    Of the fully vaccinated, 102 have tested positive for covid, including 8 that were hospitalized and perhaps 2 deaths. So that's the 'vaccinated' group.

    Using the 5 million unvaccinated as the 'control' group (yes, not a rigorous control group, for all the reasons), and a lot of handwaving, I get very roughly 13K cases as a lower bound in the 5 million 'control' group (that's prorating new cases from 15Mar to 31Mar). So - and let me repeat this is just a 'Fermi estimate' kind of thing - that's an efficacy of 96ish%, not far off the efficacy reported in the smaller clinical trials.
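
    Spelling out the handwaving (same rounded inputs as above, so this is a Fermi estimate, not a real efficacy measurement):

    Code:
    # Back-of-the-napkin sketch of the WA numbers quoted above (rounded inputs)
    vaccinated_n       = 1_000_000   # "more than a million" fully vaccinated
    vaccinated_cases   = 102
    unvaccinated_n     = 5_000_000   # roughly, not vaccinated at all
    unvaccinated_cases = 13_000      # prorated lower bound, 15Mar to 31Mar

    vax_rate   = vaccinated_cases / vaccinated_n       # ~0.0102%
    unvax_rate = unvaccinated_cases / unvaccinated_n   # ~0.26%
    rough_efficacy = 1 - vax_rate / unvax_rate

    print(f"attack rate, vaccinated:   {vax_rate:.4%}")
    print(f"attack rate, unvaccinated: {unvax_rate:.3%}")
    print(f"rough 'efficacy':          {rough_efficacy:.1%}")   # ~96%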

    covid numbers: https://www.seattletimes.com/seattle...ive-for-covid/

    vaccinated covid numbers: https://www.seattletimes.com/seattle...ive-for-covid/

    (oops, the 'not vaccinated' number was from a news article that I closed because it kept restarting an annoying video while I was typing this)

  7. #7127
    Site Supporter
    Join Date
    Jul 2016
    Location
    Away, away, away, down.......
    Quote Originally Posted by ccmdfd View Post
    Biostatistics! Great!

    We've gone from ludicrous speed to full plaid!

    After my earlier post, 2 more post-COVID shortness-of-breath cases. Not asthma in these two; still trying to figure it out.
    That like was for the ludicrous speed post, not for the fact that you’re seeing more patients with respiratory problems.

    In other news, one of my friend's brothers, who is in residency to become a doc, got covid last summer and had a pretty much asymptomatic case. However, he’s since developed a random loss of equilibrium/balance that lasts from a few seconds to a minute-plus before he can recover. He’s about thirty and was in great shape.
    im strong, i can run faster than train

  8. #7128
    THE THIRST MUTILATOR Nephrology's Avatar
    Join Date
    Sep 2011
    Location
    West
    Quote Originally Posted by jh9 View Post
    Ahem.



    I do not like interesting right now. I could use less interesting.
    For what it's worth, I went through the paper again and I am not convinced that they are making an appropriate comparison. Too lazy to re-write, so here's an email I sent my ER preceptor (we emailed back and forth about it this AM):

    Yes, presumably Moderna/Pfizer must have used different tests, at least if the study authors are correctly representing their findings of 100% seroconversion after dose 1. That said, now that you mention it, I’m looking through the Pfizer EUA packet and I’m not sure if I am seeing where they are getting that conclusion.

    The data are presented differently – Pfizer plotted the quantitative ELISA results as continuous data, vs. the nominal “yes/no” of the JAMA letter. Nevertheless, I don’t think I agree that the Pfizer results suggest a 100% seroconversion rate after dose 1, though I may be interpreting their figures incorrectly.

    By my read, the results seem comparable to JAMA letter. It seems like at day 21 (presumably, just before or after dose #2), the antibody response in almost all groups is not substantially higher vs. day 1 (before/after dose #1). This is most striking in Fig 7, which may be a better comparison vs JAMA population, given its bias towards older individuals.

    See attached – let me know what you think.
    Basically I think the study is fatally flawed because they measured transplant patient antibody responses without using healthy matched controls. This would have been OK if their historical comparison (the Pfizer/Moderna data) had been measured and analyzed in the same fashion, but it was not.

    The FDA data used a "continuous" (i.e. numerical value) result to describe antibody presence (i.e. "how much antibody"), while the JAMA letter used discrete data (i.e. antibodies are "detectable" or not). The problem is that in the FDA data, even the placebo patients had "detectable antibody" (the instrument's baseline value). Thus the question is: what are the limits that define "detectable" for the two tests, and is that an appropriate comparison?
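
    A toy illustration of why the threshold matters. The titer values below are invented purely to show the comparison problem, not real assay data from either study:

    Code:
    # Toy example: the same continuous antibody results classified as
    # "detectable"/"not detectable" under different cutoffs. Values are invented.
    titers = [0.4, 0.5, 0.6, 0.9, 1.5, 3.0, 12.0, 45.0]   # arbitrary units

    def seroconversion_rate(values, cutoff):
        positive = sum(1 for v in values if v >= cutoff)
        return positive / len(values)

    for cutoff in (0.5, 1.0, 10.0):
        print(f"cutoff {cutoff}: {seroconversion_rate(titers, cutoff):.0%} 'detectable'")

    Depending on where each assay draws the "detectable" line, the same underlying responses can read as very different seroconversion rates, which is why the two analyses aren't directly comparable without knowing those limits.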

    Anyway, the letter isn't anything to get worked up over. I suspect that in reality the efficacy of the vaccine in solid organ tpx patients like myself is much higher than the result suggests, but we won't know until better studies come out.

  9. #7129
    THE THIRST MUTILATOR Nephrology's Avatar
    Join Date
    Sep 2011
    Location
    West
    Quote Originally Posted by ccmdfd View Post
    After my earlier post, 2 more post-COVID shortness-of-breath cases. Not asthma in these two; still trying to figure it out.
    Anecdotally it seems like post-COVID lung disease is a lot more common than all-cause ARDS (I think we've talked about this and it jibes with your experience), but I don't think it's been rigorously studied yet. I'm sort of curious to see how it shakes out in the long run.

    Same thing re: COVID-associated coagulopathy etc. (different/same/more/less prevalent vs. septic DIC?). The respiratory phenomenon seems increasingly unique to COVID, but I do wonder about the rest of it, and how measurably different it is from other weird sequelae of critical illness.

  10. #7130
    Site Supporter
    Join Date
    Jan 2012
    Location
    Fort Worth, TX
    Quote Originally Posted by whomever View Post
    RoyGBiv, indeed, sample sizes matter. I think there is good news on that front. In WA state more than a million people have been fully vaccinated, and something like 5 million haven't been vaccinated at all.

    Of the fully vaccinated, 102 have tested positive for covid, including 8 that were hospitalized and perhaps 2 deaths. So that's the 'vaccinated' group.

    Using the 5 million unvaccinated as the 'control' group (yes, not a rigorous control group, for all the reasons), and a lot of handwaving, I get very roughly 13K cases as a lower bound in the 5 million 'control' group (that's prorating new cases from 15Mar to 31Mar). So - and let me repeat this is just a 'Fermi estimate' kind of thing - that's an efficacy of 96ish%, not far off the efficacy reported in the smaller clinical trials.
    I wonder what the elapsed time was between vaccination and diagnosis for the vaccine group, but otherwise I find it interesting that your back-of-the-napkin numbers match closely with the manufacturers' claimed efficacy. Kinda neato.

    Thanks.
    "No free man shall ever be debarred the use of arms." - Thomas Jefferson, Virginia Constitution, Draft 1, 1776
