So we have now, in the USA, tested over 2M folks for Covid. For every test, we need some stats regarding accuracy and precision. Accuracy involves the idea of getting a "right" answer; precision involves the idea of not missing cases. Poor accuracy would mean, say, H1N1 or SARS or other coronaviruses giving a positive Covid test result when Covid is not actually the virus causing the positive. Poor precision would mean failing to get a positive Covid result some fraction of the time when Covid is indeed present.
We know nothing about either of these parameters for the current tests; we can only presume some values. However, we have judgment, from general experience, about all of our methods.
A diagnosis based on clinical observations..... acute pneumonia, fever, dry cough, little upper respiratory congestion..... might be 90% accurate/70% precise, if not obviously explained by something else known to be a cause....
Any kind of "crude" positive antibody or antigen assay might be 95% accurate/90% precise.
A good test developed with positive and negative controls in the panel could do much better, but still miss a case or two in a thousand, and still pick up a positive from some other source in a hundred or so tests. There will be no such thing as an inerrant test.
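To see what those per-test rates mean at scale, here is a quick sketch of how a "good" test as described above would behave across 2M tests. The miss rate and stray-positive rate are the illustrative figures from this post; the number of truly infected people among those tested is a pure assumption for the example, not a measured value.

```python
# Sketch: expected errors in 2,000,000 tests for a "good" test as described above.
# Rates are this post's illustrative figures; truly_infected is a hypothetical
# input chosen only to make the arithmetic concrete.
tests = 2_000_000
truly_infected = 450_000          # hypothetical: actual infections among those tested

miss_rate = 2 / 1000              # "miss a case or two in a thousand"
false_positive_rate = 1 / 100     # "pick up a positive ... in a hundred or so tests"

missed_cases = truly_infected * miss_rate                        # 900.0
false_positives = (tests - truly_infected) * false_positive_rate # 15500.0
reported_positives = (truly_infected - missed_cases) + false_positives  # 464600.0
```

Note the asymmetry: with these rates, the stray positives (thousands) swamp the missed cases (hundreds), so even a good test tilts the reported count upward.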
That said, I note a propensity in human psychology for seeing what we are looking for at the moment, for erring on the side of amplifying our concerns. I have no means to quantify this phenomenon, except to say it's possible it's significant..... maybe ten percent, maybe one percent, who knows.
But in 2M tests, we report 400k positives.
Considering the claim, with some supporting data, that as many as 80% of Covid cases are not serious enough to prompt a doctor visit..... and that tests are being done only on people who show up to ask for the test, sometimes for good reason..... and that we are reporting ALL the serious cases and deaths..... we can write an equation....
Tot#Covid = #Positive Test Results + Untested/Unknown Positives. The latter can be estimated from available data.
For reasons I discussed a few posts above, we are probably overstating #CovidDeaths, but that number does at least claim a positive Covid test result. Our stats on #Critical Covid Cases are likely understated, because some people just don't get that attention. And if our psychology is normal, and our tests are pretty good, we are likely overstating the #Positive Test Results by, perhaps.... in my judgment, around 2%. Not really a headliner there. But I will include it in the equation.
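Putting the equation above together with the 80% claim and the ~2% overstatement gives a back-of-the-envelope number. Every input here is an assumption taken from this discussion, not data:

```python
# Back-of-the-envelope using the figures discussed above; all inputs are
# assumptions from this thread, not measured values.
reported_positives = 400_000   # from 2M tests
overstatement = 0.02           # psychology + test error, a judgment call
untested_fraction = 0.80       # claimed share of cases too mild to prompt testing

corrected_positives = reported_positives * (1 - overstatement)  # 392,000
# If the corrected positives are the ~20% of cases that did get tested:
total_covid = corrected_positives / (1 - untested_fraction)     # 1,960,000
untested_unknown = total_covid - corrected_positives            # 1,568,000
```

So under these assumptions the untested/unknown positives in the equation would outnumber the tested positives roughly four to one.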
The next thing is to evaluate the probable extent of Covid cases out and about, walking around unknown and undetected. The best data for estimating this would be the percent of positives found among tested persons who have no signs or symptoms of Covid, or the rate of positives in the testing lines where, presumably, worried folks with some signs are hurrying to get tested. The first subset would give an underestimate, the second a fairly large overestimate....
Another useful test for evaluation of extent in the general public would be an antibody test rather than an antigen test like what we are now doing. The antibody result would tell us how many people have been exposed and lived to show it.