When considering the results of a study that reports one treatment to be better than another, what the practicing ophthalmologist really wants to know is the magnitude of the difference between the treatment groups. If this difference is large enough, we may wish to offer the new treatment to our own patients. Even in well-executed studies, differences between the groups (the sample) may be due to chance alone. The "p" value is the probability that the observed difference between the groups could have occurred purely by chance. For many ophthalmologists, assessing this difference means a simple look at the "p" value to convince ourselves that a statistically significant result has indeed been obtained. Unfortunately, interpreting a study solely on the basis of a "p" value at an arbitrary cut-off (0.05 or any other value) limits our ability to fully appreciate the clinical implications of the results. In this article we use simple examples to illustrate how "confidence intervals" can be used to examine the precision and applicability of study results (means, proportions, and their comparisons). An attempt is made to demonstrate that the use of "confidence intervals" enables a more complete evaluation of study results than the "p" value alone.
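As a minimal worked illustration of the idea (the numbers are hypothetical and not taken from any study discussed in this article), a 95% confidence interval for a single proportion \(\hat{p}\) estimated from \(n\) patients can be obtained with the familiar normal approximation:

\[
\hat{p} \pm 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\]

For instance, if 40 of 100 treated eyes achieve a given visual outcome, then \(\hat{p} = 0.40\) and the interval is \(0.40 \pm 1.96\sqrt{0.40 \times 0.60 / 100} \approx 0.40 \pm 0.10\), i.e. roughly 30% to 50%. The width of this interval conveys the precision of the estimate directly, which a bare "p" value does not.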