Year : 2018  |  Volume : 1  |  Issue : 1  |  Page : 25-26

Assumptions and numbers: The balancing act of statistics

Department of Anaesthesiology, Vijayanagar Institute of Medical Sciences, Bellary, Karnataka, India

Date of Web Publication: 11-Jan-2019

Correspondence Address:
Prof. S Bala Bhaskar
Department of Anaesthesiology, Vijayanagar Institute of Medical Sciences, Bellary - 583 104, Karnataka

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/ARWY.ARWY_10_18


How to cite this article:
Bhaskar S B. Assumptions and numbers: The balancing act of statistics. Airway 2018;1:25-6


Evidence obtained from research guides modern medical practice. Anaesthesiologists and critical care physicians face practical dilemmas, especially in acute situations, and are often expected to conduct scientific research on relevant topics to improve patient care. Most medical practitioners do not carry out research themselves, but they are expected to stay updated on recent advances and evidence that can be applied to their practice. They should be able to critically appraise published material and judge its relevance and importance to patient care.

Medical research rests mainly on testing a hypothesis (an assumption) in a patient sample and determining whether the same results can be reproduced in the wider human population. This assumption itself may at times be wrong. Lack of knowledge of the methodology of scientific research has led to misreporting, misrepresentation and misinterpretation of results among young researchers. Statistics continues to be the Achilles' heel of research for most medical practitioners. Limited understanding of the definitions of data and their descriptions has further compounded this problem. Issues related to data collection, compilation and cleaning also contribute to the poor quality of output and its acceptability.

The last straw is when output data are subjected to the wrong statistical tests. Statistical tests are applied based on assumptions; conformity with, or deviation from, these assumptions is assessed according to the type of sample, the type of data and the data distribution. Even when data collection is perfect, the output may be marred by the application of wrong statistical tests. Hence, correct input to the statistician from the clinician, in terms of the study parameters and the hypothesis, is vital for a valid statistical outcome. At the most basic level, the speciality of statistics is built on certain assumptions and abstract values, but the majority of its tests have stood the test of time.
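The idea that the choice of test depends on the data distribution can be sketched in code. This is a minimal illustration, not from the article: the data are simulated, and the 0.05 threshold for the normality check is an assumption of this sketch.

```python
# Sketch: choosing a test after checking a distributional assumption.
# All data here are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=22, scale=4, size=30)   # roughly normal measurements
group_b = rng.exponential(scale=20, size=30)     # clearly non-normal data

# Shapiro-Wilk tests the normality assumption within each group.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Parametric test: assumes approximately normal distributions.
    result = stats.ttest_ind(group_a, group_b)
else:
    # Non-parametric alternative: makes no normality assumption.
    result = stats.mannwhitneyu(group_a, group_b)

print(type(result).__name__, result.pvalue)
```

Applying the t-test to the skewed data above, instead of letting the assumption check steer us to a rank-based test, is exactly the kind of "wrong test on correct data" the paragraph warns about.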

'There are three kinds of lies: lies, damned lies and statistics' (attributed to Benjamin Disraeli by Mark Twain) is an immoderate characterisation of statistics that is nonetheless cited in many publications. Statistics cannot lie; ill-informed people may, resulting in wrong statistical outcomes. Since the statistician is a specialist, a researcher who does not know the basics may be led down the garden path, with positive outcomes being projected that never actually existed. If the statistician lies, a novice researcher will find it very difficult to detect, because the statistician knows the subject and can couch the lie convincingly in statistical jargon. Thus, it is imperative that all researchers learn the basics of statistics. Before submitting the results of a study for statistical analysis, the methodology has to be sound, with the outcomes and the data defined so that they can be measured as precisely (free of random errors) and accurately (free of systematic errors) as possible.
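The distinction drawn above between precision (freedom from random error) and accuracy (freedom from systematic error) can be made concrete with a small simulation. The true value, offsets and scatter below are all invented for this sketch.

```python
# Sketch: precise-but-inaccurate vs accurate-but-imprecise measurements.
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0  # the quantity being measured, arbitrary units

# Precise but inaccurate: small random scatter, constant systematic offset (bias).
precise_biased = true_value + 5.0 + rng.normal(0, 0.5, size=1000)
# Accurate but imprecise: no bias, large random scatter.
accurate_noisy = true_value + rng.normal(0, 5.0, size=1000)

print(f"precise/biased : mean={precise_biased.mean():.2f}, sd={precise_biased.std():.2f}")
print(f"accurate/noisy : mean={accurate_noisy.mean():.2f}, sd={accurate_noisy.std():.2f}")
```

The first instrument clusters tightly around the wrong answer; the second scatters widely around the right one. Good study methodology aims to minimise both kinds of error.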

Data Manipulation: The trend in modern publishing is that studies with positive outcomes are more likely to be published than those with negative results.[1] Hence, young researchers tend to seek a positive statistical outcome from the biostatistician. They may, by themselves or with a little prodding from the statistician, misuse the data or torture it until they get the desired result.[2] An overall survey of the data obtained can give valuable information to an experienced analyst. At the other end of the spectrum, if too many tests are performed, a few may become significant purely by chance. Proper input to the statistician helps in applying the right test to the right data to obtain correct and valid levels of significance.
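The point that "too many tests" yield chance significance is easy to demonstrate. The simulation below runs 20 comparisons on pure noise, where no real effect exists anywhere; the group sizes and number of comparisons are arbitrary choices for this sketch.

```python
# Sketch: spurious significance from multiple testing.
# With 20 null comparisons at alpha = 0.05, roughly one 'significant'
# result is expected by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = 0.05
p_values = []
for _ in range(20):
    a = rng.normal(size=30)   # both groups drawn from the same population,
    b = rng.normal(size=30)   # so any 'difference' found is pure chance
    p_values.append(stats.ttest_ind(a, b).pvalue)

false_positives = sum(p < alpha for p in p_values)
print(f"{false_positives} of 20 null comparisons reached p < {alpha}")

# A Bonferroni correction (alpha divided by the number of tests)
# controls the family-wise error rate.
corrected = sum(p < alpha / 20 for p in p_values)
print(f"{corrected} remain significant after Bonferroni correction")
```

This is why the tests, and any correction for multiplicity, should be specified before the data are analysed.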

Cherry picking is another practice that should be deprecated in clinical research. It refers to suppressing evidence or projecting incomplete evidence: individual cases or data that seem to confirm a particular outcome are preferentially highlighted, while the contradictory cases or data are omitted.

The statistical tests to be applied to the results should be decided in advance, not at the end of the study. Without stringent and strong methodology, statistics will be neither relevant nor useful. We have come a long way in modern research practice, where increasing knowledge and critique have reduced the misuse of statistics. Still, in a developing country such as India, even though the volume of research and publications has increased drastically in the last decade, implementation of statistical methods, including sample size calculation, remains poor. It is disappointing, for example, to see a submitted manuscript with a calculated sample size of 60 (30 in each of 2 groups) in which exactly 60 patients were assessed against the inclusion criteria, the same number recruited, randomised and subjected to the trial (all 60 accounted for in a CONSORT flow chart), and exactly the same number included in the statistical analysis. A flow this perfect raises suspicion about the quality of the study methodology: it would imply that every patient approached fitted the inclusion criteria, all agreed without hesitation to participate, there were absolutely no dropouts during the study period, and perfect data collection was possible from all!

Abandoning error- and power-based sample size estimation in favour of a 'convenience sample' or a 'pilot-study-based' sample size is no longer acceptable. A sample size deduced from a previous publication is acceptable, provided the correct procedure is followed. The calculation for a randomised controlled trial is based on an effect size for the primary outcome measure, together with the α error (and the confidence interval) and the power of the study. These are vaguely mentioned or inadequately expressed in most research protocols. Sample size estimation for other types of studies (non-randomised trials) also needs to be clearly discussed.
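An a priori calculation of the kind described, combining effect size, α error and power, can be sketched with the `statsmodels` power module. The inputs below (standardised effect size d = 0.8, α = 0.05, power = 0.8, equal group sizes) are conventional example values, not taken from any particular study.

```python
# Sketch: sample size for a two-group RCT from effect size, alpha and power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.8,          # assumed standardised difference (Cohen's d)
    alpha=0.05,               # two-sided type I error
    power=0.8,                # 1 - beta (type II error)
    ratio=1.0,                # equal allocation to both groups
    alternative='two-sided',
)
print(f"Required sample size: about {n_per_group:.1f} per group")
```

For these conventional inputs the calculation gives roughly 26 patients per group; note that a smaller assumed effect size inflates the required sample size sharply, which is why the effect size for the primary outcome must be stated explicitly in the protocol.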

The P in Statistics: A statistical test looks at the likelihood that a certain result would have occurred given the assumptions/hypothesis about the underlying population and the outcomes being studied. A certain proportion of results favouring the hypothesis could occur by chance despite the best methodology, and this is what the P value measures – the effect of chance within a study. One should remember the old aphorism, 'Absence of evidence is not evidence of absence'. Beginners misinterpret the P value as the probability that the result of the study is true or correct. Too much significance cannot be attached to the P value in certain clinical outcome assessments. For example, consider a study comparing time to successful intubation with two videolaryngoscopes. If the time to intubation (the numerical variable) is 22 s with the existing device and 15 s with a new device, would a P = 0.05 obtained from the statistical test be clinically significant? Definitely not in routine practice, but it may be a factor in a patient with a severely compromised airway, where oxygen reserves are low, where there is a greater demand for oxygen (such as in sepsis), or in a morbidly obese patient. Misinterpretation of P values has been elaborately discussed in an essay by Greenland and others.[3] Things have come to such a pass that some journals discourage the use of 'statistical significance' in publications.[4] The ideal outcome is one that does not require any statistical support to prove or disprove anything, the results being obvious on eyeballing. However, owing to limited resources, very small sample sizes are often chosen that are insufficient to prove or disprove the hypothesis with statistical tests.
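The videolaryngoscope example can be worked through numerically. Only the 22 s and 15 s means come from the text above; the standard deviation, group sizes and distributional shape below are assumptions made purely for this sketch.

```python
# Sketch of the intubation-time comparison: simulated times centred on
# 22 s (existing device) and 15 s (new device). The sd of 6 s and the
# group size of 30 are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
existing = rng.normal(loc=22, scale=6, size=30)  # seconds
new_dev = rng.normal(loc=15, scale=6, size=30)   # seconds

t_stat, p_value = stats.ttest_ind(existing, new_dev)
print(f"mean difference = {existing.mean() - new_dev.mean():.1f} s, p = {p_value:.4f}")
```

The P value only quantifies how surprising the difference would be if the devices were truly equivalent; whether saving a few seconds matters clinically, as in the compromised-airway scenario above, is a separate judgement the number cannot make.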

Statistics is a science of probabilities, not conclusions. Probabilities have to be analysed logically in the clinical and epidemiological scenarios being studied, and only then should conclusions be drawn. Love it or hate it, research is not complete without statistics.

In modern practice, many software packages allow descriptive statistics and the important inferential tests to be performed. Popular packages include Statistical Package for the Social Sciences (SPSS; IBM Corporation, Armonk, USA), Statistical Analysis System (SAS; SAS Institute, North Carolina, USA), R (designed by Ross Ihaka and Robert Gentleman, R Core Team), Minitab (Minitab Inc.), Stata (StataCorp), MS Excel (Microsoft) and GraphPad (GraphPad Software, Inc., San Diego, USA).[5]

For the published literature, guidelines for reporting basic statistical methods and results are available (SAMPL – Statistical Analyses and Methods in the Published Literature) so that reporting deficiencies are reduced and proper analysis is promoted. Authors should follow these guidelines.[6] Encouragingly, there has been an improvement in the mention, discussion and sophistication of statistical methods used in published material.[7]

References

1. Pocock SJ, Stone GW. The primary outcome is positive – Is that good enough? N Engl J Med 2016;375:971-9.
2. Mills JL. Data torturing. N Engl J Med 1993;329:1196-9.
3. Greenland S, Senn SJ, Rothman KJ, Carlin JB, Poole C, Goodman SN, et al. Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations. Eur J Epidemiol 2016;31:337-50.
4. Lang JM, Rothman KJ, Cann CI. That confounded P value. Epidemiology 1998;9:7-8.
5. Ali Z, Bhaskar SB. Basic statistical tools in research and data analysis. Indian J Anaesth 2016;60:662-9.
6. Lang TA, Altman DG. Basic statistical reporting for articles published in biomedical journals: The "Statistical analyses and methods in the published literature" or the SAMPL guidelines. Int J Nurs Stud 2015;52:5-9.
7. Horton NJ, Switzer SS. Statistical methods in the journal. N Engl J Med 2005;353:1977-9.

Note from Editor in Chief, Airway
As part of an initiative from the Editorial Board of 'Airway', a series of articles titled 'Interpreting Clinical Research Through Statistics' will be published in the forthcoming issues of the journal. These articles will encompass basic statistical aspects of research and publications with special emphasis on airway-related research. We thank Prof. S. Bala Bhaskar for bringing his vast experience into simplifying and making this rather complex subject of statistics understandable to our readers.

