Why is an alpha level of 0.05 commonly used by the researcher?

Common statistical significance levels are 5%, 1%, and 0.1%, depending on the analysis.

Statistically significant results are required in many practical experiments across different branches of research. The significance level can be understood as the threshold below which the null hypothesis can be rejected. This means that if the researcher sets the significance level at 5% and the probability that the results arose by chance is 3%, then the researcher can reject the null hypothesis.
In this case, the researcher will call the results statistically significant. The lower the significance level, the higher the confidence in a rejection. The choice of significance level is influenced by a number of factors and varies between experiments. In most practical cases, the parameters or qualities of interest follow a normal distribution, but care should always be taken to account for other distributions within the given population. It is also important to note that statistics cannot prove that the difference between two parameters is exactly zero. This means that a non-significant result should not be interpreted as showing that there was no difference; the only thing the analysis can state is that the experiment failed to find one.
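
As a minimal sketch of this decision rule, the Python snippet below compares the p-value from a one-sample t-test against a 5% significance level. The sample data and the choice of test are assumptions made purely for illustration.

```python
from scipy import stats

# Illustrative data only: a small sample tested against a null mean of 0.
sample = [2.1, 1.8, 2.4, 1.9, 2.2, 2.0, 1.7, 2.3]
alpha = 0.05  # the 5% significance level discussed above

t_stat, p_value = stats.ttest_1samp(sample, popmean=0)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```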

Although 5%, 1% and 0.1% are common significance levels, it is not clear-cut which level to use in an actual study - it depends on the norms of the field, previous studies, and the amount of evidence needed. However, using a significance level higher than 5% is not recommended, because it leads to Type I errors too often.

Seeing as the alpha level is the probability of making a Type I error, it seems to make sense to make this area as tiny as possible. For example, if we set the alpha level at 10%, there is a large chance that we might incorrectly reject the null hypothesis, while an alpha level of 1% would make the area tiny. So why not use a tiny area instead of the standard 5%?

The smaller the alpha level, the smaller the area where you would reject the null hypothesis. So if you have a tiny area, there's more of a chance that you will NOT reject the null when in fact you should. This is a Type II error.
In other words, the more you try to avoid a Type I error, the more likely a Type II error could creep in. Scientists have found that an alpha level of 5% is a good balance between these two issues.
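
One way to see this trade-off concretely is a small simulation. The sketch below (the sample size, effect size, and number of trials are arbitrary assumptions) estimates the Type I error rate when the null is true and the Type II error rate when it is false, at alpha levels of 10%, 5%, and 1%.

```python
import numpy as np
from scipy import stats

# Illustrative settings only: group size, true effect size, and trial count.
rng = np.random.default_rng(0)
n, effect, trials = 20, 0.5, 2000

for alpha in (0.10, 0.05, 0.01):
    type1 = type2 = 0
    for _ in range(trials):
        a = rng.normal(0, 1, n)
        b = rng.normal(0, 1, n)       # null true: same distribution as a
        c = rng.normal(effect, 1, n)  # null false: shifted by a real effect
        if stats.ttest_ind(a, b).pvalue < alpha:
            type1 += 1                # false positive (Type I)
        if stats.ttest_ind(a, c).pvalue >= alpha:
            type2 += 1                # missed real effect (Type II)
    print(f"alpha={alpha:.2f}: Type I rate ~ {type1/trials:.3f}, "
          f"Type II rate ~ {type2/trials:.3f}")
```

Lowering alpha shrinks the simulated Type I rate but raises the Type II rate, which is exactly the balance the 5% convention tries to strike.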


There are two approaches (at least) to conducting significance tests. In one (favored by R. Fisher), a significance test is conducted and the probability value reflects the strength of the evidence against the null hypothesis. If the probability is below 0.01, the data provide strong evidence that the null hypothesis is false. If the probability value is below 0.05 but larger than 0.01, then the null hypothesis is typically rejected, but not with as much confidence as it would be if the probability value were below 0.01. Probability values between 0.05 and 0.10 provide weak evidence against the null hypothesis and, by convention, are not considered low enough to justify rejecting it. Higher probabilities provide less evidence that the null hypothesis is false.
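
A rough way to express these conventional evidence bands in code is shown below; the exact wording of the labels is a paraphrase of the description above, not a standard.

```python
def evidence_label(p):
    """Map a p-value to the rough evidence bands described above (illustrative)."""
    if p < 0.01:
        return "strong evidence against the null hypothesis"
    if p < 0.05:
        return "null hypothesis rejected, but with less confidence"
    if p < 0.10:
        return "weak evidence; by convention not enough to reject"
    return "little evidence against the null hypothesis"

for p in (0.004, 0.03, 0.07, 0.25):
    print(f"p = {p}: {evidence_label(p)}")
```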


However, the probability that the observed result arose simply by chance can be calculated, and a minimum threshold of statistical significance can be set. If the probability that the results are simply a chance process is less than this threshold, then we can say the results are unlikely to be due to chance alone.
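
For a concrete example of calculating that chance probability, the sketch below uses a binomial test: suppose a coin assumed fair under the null comes up heads 16 times in 20 flips (numbers chosen only for illustration), and the result is compared against a 5% threshold.

```python
from scipy import stats

# Hypothetical counts: 16 heads in 20 flips of a coin that is fair under the null.
result = stats.binomtest(k=16, n=20, p=0.5, alternative="two-sided")
threshold = 0.05

print(f"probability of a result this extreme by chance: {result.pvalue:.4f}")
if result.pvalue < threshold:
    print("below the threshold: unlikely to be chance alone")
else:
    print("above the threshold: consistent with chance")
```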
