This is an archived version of the Handbook.

Obtaining standard errors from confidence intervals and P values: absolute (difference) measures

If a 95% confidence interval is available for an absolute measure of intervention effect (e.g. SMD, risk difference, rate difference), then the standard error can be calculated as

SE = (upper limit – lower limit) / 3.92.

For 90% confidence intervals divide by 3.29 rather than 3.92; for 99% confidence intervals divide by 5.15.
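The divisors above are simply twice the standard normal quantile for each confidence level (e.g. 2 × 1.96 = 3.92 for 95%). A minimal sketch of the calculation, using a hypothetical 95% confidence interval for a risk difference; the function name and the example limits are illustrative, not from the text:

```python
from statistics import NormalDist

def se_from_ci(lower, upper, level=0.95):
    """Standard error from a confidence interval for an absolute effect measure.

    The divisor is twice the standard normal quantile for the chosen level:
    approximately 3.92 for 95%, 3.29 for 90%, and 5.15 for 99%.
    """
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)  # e.g. 1.96 for 95%
    return (upper - lower) / (2 * z)

# Hypothetical example: a 95% CI for a risk difference of (0.01, 0.09)
se = se_from_ci(0.01, 0.09)  # (0.09 - 0.01) / 3.92, about 0.020
```

Computing the divisor rather than hard-coding 3.92 also handles any other confidence level reported in a study.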


Where exact P values are quoted alongside estimates of intervention effect, it is possible to estimate standard errors. While all tests of statistical significance produce P values, different tests use different mathematical approaches to obtain a P value. The method here assumes P values have been obtained through a particularly simple approach of dividing the effect estimate by its standard error and comparing the result (denoted Z) with a standard normal distribution (statisticians often refer to this as a Wald test). Where significance tests have used other mathematical approaches the estimated standard errors may not coincide exactly with the true standard errors.


The first step is to obtain the Z value corresponding to the reported P value from a table of the standard normal distribution. A standard error may then be calculated as

SE = intervention effect estimate / Z.

As an example, suppose a conference abstract presents an estimate of a risk difference of 0.03 (P = 0.008). The Z value that corresponds to a P value of 0.008 is Z = 2.652. This can be obtained from a table of the standard normal distribution or a computer (for example, by entering =abs(normsinv(0.008/2)) into any cell in a Microsoft Excel spreadsheet). The standard error of the risk difference is obtained by dividing the risk difference (0.03) by the Z value (2.652), which gives 0.011.
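The two steps above can be sketched in code. This assumes, as the text does, that the P value is two-sided and comes from a Wald test; the function name is illustrative:

```python
from statistics import NormalDist

def se_from_p(effect, p):
    """Standard error from an effect estimate and a two-sided P value,
    assuming the P value arose from a Wald test (Z = effect / SE)."""
    z = abs(NormalDist().inv_cdf(p / 2))  # Z corresponding to the P value
    return effect / z

# The worked example from the text: risk difference 0.03 with P = 0.008
se = se_from_p(0.03, 0.008)  # Z is about 2.652, so SE is about 0.011
```

`NormalDist().inv_cdf(p / 2)` plays the same role as Excel's `normsinv(0.008/2)`: both invert the standard normal cumulative distribution.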