Significance Levels

In statistics, the p-value and the significance level are central concepts in hypothesis testing. To begin an analysis, a researcher states a hypothesis of no difference or no effect, called the null hypothesis, which is then evaluated with a pre-defined statistical test. When a result is described as highly significant, this means that the observed data would be very unlikely if the null hypothesis were true, not that the finding itself has a high probability of being true.

Significance Level Definition

The significance level, also known as the alpha (α) level, is a statistical threshold used in hypothesis testing to determine whether the results of a study are statistically significant. It represents the probability of rejecting the null hypothesis (i.e., the hypothesis that there is no difference or effect) when that hypothesis is actually true.

In hypothesis testing, the significance level is typically set at 0.05 or 0.01, which corresponds to a 5% or 1% chance of making a Type I error, respectively. A Type I error occurs when the null hypothesis is rejected even though it is actually true.
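The link between the significance level and the Type I error rate can be illustrated with a short simulation. The Python sketch below (using NumPy and SciPy, with sample sizes and distributions chosen purely for illustration and not taken from the text) repeatedly draws data for which the null hypothesis is true and counts how often a one-sample t-test rejects at α = 0.05; the observed rejection rate should land close to 5%.

    import numpy as np
    from scipy import stats

    # Simulation: when the null hypothesis is true, the fraction of
    # p-values falling below alpha should be close to alpha -- this is
    # the Type I error rate. (Parameters here are illustrative only.)
    rng = np.random.default_rng(0)
    alpha = 0.05
    n_simulations = 10_000
    false_positives = 0

    for _ in range(n_simulations):
        # Data generated under the null: the population mean really is 0.
        sample = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value < alpha:
            false_positives += 1

    print(f"Observed Type I error rate: {false_positives / n_simulations:.3f}")
    # Prints a value close to 0.05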

To determine whether a result is statistically significant, researchers compare the p-value to the significance level. The p-value represents the probability of obtaining a result as extreme as or more extreme than the observed result, assuming that the null hypothesis is true. If the p-value is less than the significance level, the result is considered statistically significant, and the null hypothesis is rejected.
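As a rough illustration of this comparison, the Python sketch below runs a two-sample t-test on simulated data (the group sizes, means, and spreads are invented for the example) and then applies the p-value-versus-α decision rule just described.

    import numpy as np
    from scipy import stats

    # Hypothetical data for two groups; the parameters are made up for
    # illustration and do not come from a real study.
    rng = np.random.default_rng(42)
    group_a = rng.normal(loc=10.0, scale=2.0, size=40)
    group_b = rng.normal(loc=11.0, scale=2.0, size=40)

    alpha = 0.05  # significance level chosen before analysing the data

    # Two-sided two-sample t-test of the null hypothesis of equal means.
    t_statistic, p_value = stats.ttest_ind(group_a, group_b)

    print(f"p-value = {p_value:.4f}")
    if p_value < alpha:
        print("Reject the null hypothesis: the difference is statistically significant.")
    else:
        print("Fail to reject the null hypothesis.")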

For example, if a researcher conducts a study and obtains a p-value of 0.03 with a significance level of 0.05, the result is considered statistically significant, and the null hypothesis is rejected. This means that the researcher can conclude that there is evidence of a difference or effect in the population.

However, it is important to note that statistical significance does not necessarily imply practical significance or importance. A result may be statistically significant but have little practical significance, or it may be practically significant but not statistically significant due to factors such as sample size or measurement error.

In summary, the significance level is a statistical threshold used to determine whether the results of a study are statistically significant, and it plays a crucial role in hypothesis testing by controlling the risk of Type I errors.

Standard Significance Level

In many disciplines, such as the social sciences, an α level of 0.05 is standard, but there are situations in which a higher or lower α level may be desirable. Pilot studies (smaller studies performed before a larger study) often use a higher α level because their purpose is to gain information about the data that the larger study is likely to collect, not to support important decisions.

Studies in which making a Type I error would be more dangerous than making a Type II error may use smaller α levels. For example, in medical research studies where making a Type I error could mean giving patients ineffective treatments, a smaller α level may be set in order to reduce the likelihood of such a negative consequence. Lower α levels mean that smaller p-values are needed to reject the null hypothesis; this makes it more difficult to reject the null hypothesis, but this also reduces the probability of committing a Type I error.
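The effect of lowering α can be seen by applying the decision rule to the same result at two thresholds. Using the p-value of 0.03 from the earlier example, the result is significant at the conventional 0.05 level but not at the stricter 0.01 level:

    # Same p-value, two different significance levels.
    p_value = 0.03

    for alpha in (0.05, 0.01):
        decision = "reject" if p_value < alpha else "fail to reject"
        print(f"alpha = {alpha}: {decision} the null hypothesis")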

Type I and Type II Errors

Issues with Multiple Testing