- How is the sensitivity of a test defined? What are highly sensitive tests used
for clinically?
Sensitivity is defined as the ability of a test to detect disease—mathematically, the number of
true positives divided by the number of people with the disease. Tests with high sensitivity
are used for disease screening. False positives occur, but the test does not miss many people
with the disease (low false-negative rate).
- How is the specificity of a test defined? What are highly specific tests used for clinically?
Specificity is defined as the ability of a test to detect health (or nondisease)—mathematically,
the number of true negatives divided by the number of people without the disease. Tests
with high specificity are used for disease confirmation. False negatives occur, but the test does
not call anyone sick who is actually healthy (low false-positive rate). The ideal confirmatory test
must have high sensitivity and high specificity; otherwise, people with the disease may be
called healthy.
- Explain the concept of a trade-off between sensitivity and specificity.
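The two definitions above can be checked against a toy 2 × 2 table; all of the counts below are invented purely for illustration:

```python
# Hypothetical 2 x 2 table for a diagnostic test (counts invented for illustration)
tp = 90  # people with the disease who test positive
fn = 10  # people with the disease who test negative (missed cases)
tn = 80  # healthy people who test negative
fp = 20  # healthy people who test positive (false alarms)

sensitivity = tp / (tp + fn)  # true positives / all people with the disease
specificity = tn / (tn + fp)  # true negatives / all people without the disease

print(f"sensitivity = {sensitivity:.2f}")  # sensitivity = 0.90
print(f"specificity = {specificity:.2f}")  # specificity = 0.80
```

Note that the two denominators partition the population by true disease status, not by test result; that distinction is what separates sensitivity and specificity from the predictive values discussed below.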
The trade-off between sensitivity and specificity is a classic statistics question. For example,
you should understand how changing the cut-off glucose value in screening for diabetes
(or changing the value of any of several screening tests) will change the number of true- and
false-negative as well as true- and false-positive results. If the cut-off glucose value is
raised, fewer people will be called diabetic (more false negatives, fewer false positives),
whereas if the cut-off glucose value is lowered, more people will be called diabetic (fewer false
negatives, more false positives).
- Define positive predictive value (PPV). On what does it depend?
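The cut-off behavior described above can be sketched numerically; the glucose values and cut-offs below are invented for illustration:

```python
# Illustrative only: invented fasting glucose values (mg/dL) for two groups
diabetics = [180, 150, 130, 115, 105]  # people who truly have diabetes
healthy = [110, 100, 95, 90, 85]       # people who are truly healthy

def screen(cutoff):
    """Return (false negatives, false positives) for a given cut-off."""
    false_neg = sum(1 for g in diabetics if g < cutoff)  # diabetics called healthy
    false_pos = sum(1 for g in healthy if g >= cutoff)   # healthy called diabetic
    return false_neg, false_pos

print(screen(100))  # (0, 2): low cut-off -> no missed diabetics, more false alarms
print(screen(140))  # (3, 0): high cut-off -> missed diabetics, no false alarms
```

Raising the cut-off trades false positives for false negatives, and vice versa, exactly as the answer above describes.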
When a test is positive for disease, the PPV measures how likely it is that the patient has
the disease (probability of having a condition, given a positive test). PPV is calculated
mathematically by dividing the number of true positives by the total number of people with a
positive test. PPV depends on the prevalence of a disease (the higher the prevalence, the
higher the PPV) and the sensitivity and specificity of the test (e.g., a test tuned for high
sensitivity at the expense of specificity gives more false positives and thus has a lower PPV).
- Define negative predictive value (NPV). On what does it depend?
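The dependence of PPV on prevalence can be demonstrated with a short sketch; the test characteristics (90% sensitivity, 80% specificity) and population size are invented for illustration:

```python
# PPV = true positives / all positive tests, for a hypothetical test
def ppv(prevalence, sensitivity=0.90, specificity=0.80, population=10_000):
    diseased = prevalence * population
    healthy = population - diseased
    true_pos = sensitivity * diseased
    false_pos = (1 - specificity) * healthy
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.01), 3))  # rare disease: most positive tests are false positives
print(round(ppv(0.30), 3))  # common disease: a positive test is far more believable
```

With 1% prevalence the PPV is only about 4%, despite a reasonably good test; at 30% prevalence it rises to about 66%.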
When a test comes back negative for disease, the NPV measures how likely it is that the
patient is healthy and does not have the disease (probability of not having a condition,
given a negative test). It is calculated mathematically by dividing the number of true negatives
by the total number of people with a negative test. NPV also depends on the prevalence of
the disease and the sensitivity and specificity of the test (the higher the prevalence, the
lower the NPV). In addition, a highly sensitive test misses few cases of disease (few false
negatives), which makes the NPV higher.
- Define attributable risk. How is it measured?
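The same hypothetical test (90% sensitivity, 80% specificity, invented numbers) shows NPV falling as prevalence rises:

```python
# NPV = true negatives / all negative tests, same hypothetical test
def npv(prevalence, sensitivity=0.90, specificity=0.80, population=10_000):
    diseased = prevalence * population
    healthy = population - diseased
    true_neg = specificity * healthy
    false_neg = (1 - sensitivity) * diseased
    return true_neg / (true_neg + false_neg)

print(round(npv(0.01), 3))  # rare disease: a negative test is very reassuring
print(round(npv(0.30), 3))  # common disease: NPV falls as prevalence rises
```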
Attributable risk is the number of cases of a disease attributable to one risk factor (in other
words, the amount by which the incidence of a condition is expected to decrease if the risk
factor in question is removed). For example, if the incidence rate of lung cancer is 1/100 in the
general population and 10/100 in smokers, the attributable risk of smoking in causing lung
cancer is 9/100 (assuming a properly matched control).
- Define relative risk. From what type of studies can it be calculated?
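The lung cancer numbers from the answer above reduce to a single subtraction:

```python
# Attributable risk = incidence in the exposed minus incidence in the unexposed
incidence_smokers = 10 / 100    # incidence of lung cancer in smokers
incidence_unexposed = 1 / 100   # incidence in the general (unexposed) population
attributable_risk = incidence_smokers - incidence_unexposed

print(round(attributable_risk, 2))  # 0.09 -> 9 cases per 100 attributable to smoking
```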
Relative risk compares the disease risk in people exposed to a certain factor with the disease
risk in people who have not been exposed to the factor in question. Relative risk can be
calculated only after prospective or experimental studies; it cannot be calculated from
retrospective data. If a Step 2 question asks you to calculate the relative risk from retrospective
data, the answer is “cannot be calculated” or “none of the above.”
- What is a clinically significant value for relative risk?
Any value for relative risk other than 1 is clinically significant. For example, if the relative risk is
1.5, a person is 1.5 times as likely to develop the condition if exposed to the factor in
question. If the relative risk is 0.5, the person is only half as likely to develop the condition
when exposed to the factor; in other words, the factor protects the person from developing the
disease.
- Define odds ratio. From what type of studies is it calculated?
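The relative risk calculation described above can be sketched with a hypothetical prospective (cohort) study; all counts are invented for illustration:

```python
# Relative risk = risk of disease in the exposed / risk in the unexposed
exposed_cases, exposed_total = 30, 200      # disease among the exposed cohort
unexposed_cases, unexposed_total = 10, 200  # disease among the unexposed cohort

risk_exposed = exposed_cases / exposed_total        # 0.15
risk_unexposed = unexposed_cases / unexposed_total  # 0.05
relative_risk = risk_exposed / risk_unexposed

print(round(relative_risk, 2))  # 3.0 -> exposed people are 3 times as likely to fall ill
```

A relative risk below 1 would instead indicate a protective factor, as the answer above notes.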
The odds ratio attempts to estimate relative risk from retrospective studies (e.g., case-control
studies). In a 2 × 2 table, the odds ratio is the cross-product ratio: the number of diseased
persons exposed to the factor multiplied by the number of nondiseased persons not exposed,
divided by the number of diseased persons not exposed multiplied by the number of
nondiseased persons exposed. As with relative risk, values
other than 1 are significant. The odds ratio is a less than perfect way to estimate relative
risk (which can be calculated only from prospective or experimental studies).
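The cross-product calculation can be sketched with a hypothetical case-control study; all counts are invented for illustration:

```python
# Odds ratio from a hypothetical case-control 2 x 2 table
#                 exposed  unexposed
# cases (disease)   a=40     b=60
# controls          c=20     d=80
a, b, c, d = 40, 60, 20, 80

odds_ratio = (a * d) / (b * c)  # cross-product ratio

print(round(odds_ratio, 2))  # 2.67 -> exposure is associated with the disease
```

When the disease is rare, this odds ratio approximates the relative risk that a prospective study would have measured.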