Basic Statistics
- Data Science Essentials: 10 Statistical Concepts
- Cases, Variables, Types of variables
- Matrix and Frequency Table
- Graphs and shapes of Distributions
- Mode, Median and Mean
- Range, Interquartile Range and Box Plot
- Variance and Standard Deviation
- Z-score or Standardized Score
- Contingency Table, Scatterplot, Pearson’s r
- Basics of Regression
- Elementary Probability
- Random Variables and Probability Distributions
- Normal Distribution, Binomial Distribution & Poisson Distribution
10 Must-Know Statistical Concepts for Data Scientists
Essential Statistical Concepts for Mastering Data Science or Data Analytics
Basic statistics is the branch of mathematics that deals with the collection, analysis, interpretation, presentation, and organization of data. It includes a range of methods used to describe and summarize numerical data, including
- measures of central tendency (such as mean, median, and mode)
- measures of variability (such as range, variance, and standard deviation)
- graphical representations (such as histograms and box plots)
Basic statistics is used in a wide variety of fields, including business, data science, medicine, engineering, and more. It provides a foundation for making decisions based on data and understanding the relationships between different variables.
Data science involves a wide range of statistical concepts and techniques that are used to analyze and interpret data. Some of the key statistical concepts required for data science include:
➣Measures of central tendency
Central tendency is the central (or typical) value of a probability distribution. The most common measures of central tendency are the mean, median, and mode, illustrated in the short sketch after the list below.
- Mean is the average of the values in a series.
- Median is the value in the middle when values are sorted in ascending or descending order.
- Mode is the value that appears most often.
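As a minimal illustration, here is a short Python sketch using the standard library's statistics module on a small made-up list of values:

```python
import statistics

# A small made-up sample of values
values = [2, 3, 3, 5, 7, 10]

print(statistics.mean(values))    # arithmetic average -> 5.0
print(statistics.median(values))  # middle value of the sorted list -> 4.0
print(statistics.mode(values))    # most frequent value -> 3
```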
➣Variance and standard deviation
These concepts measure the spread of data around the central tendency. Variance is the average squared difference from the mean, while standard deviation is the square root of the variance.
Variance = (1/n) * Σ (xi – μ)^2
where n is the number of data points, xi is the ith data point, μ is the mean of the data set.
The standard deviation is the square root of the variance and is expressed in the same units as the original data. It is a measure of how much the data deviates from the mean.
Standard deviation = √ Variance
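To make this concrete, here is a minimal sketch that computes both measures for made-up data with Python's standard library, using the population (1/n) convention from the formula above:

```python
import math

# Made-up data points
data = [4, 8, 6, 5, 3, 7]
n = len(data)

mean = sum(data) / n
variance = sum((x - mean) ** 2 for x in data) / n  # population variance, (1/n) * Σ(xi - μ)^2
std_dev = math.sqrt(variance)                      # standard deviation = √variance

print(variance, std_dev)
```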
➣Covariance and correlation
Covariance and correlation are two statistical measures that describe the relationship between two variables.
Covariance is a measure of how much two variables change together. Specifically, it measures the extent to which two variables vary together, or co-vary.
The formula for the covariance between two variables X and Y is:
Cov(X, Y) = E[(X – E[X])(Y – E[Y])]
where E[X] and E[Y] are the expected values of X and Y, respectively. Covariance can take on positive or negative values, indicating whether the variables tend to move in the same or opposite directions. A covariance of zero indicates that the variables are uncorrelated.
Correlation is a standardized version of covariance that measures the strength and direction of the linear relationship between two variables. Correlation ranges from -1 to 1, with -1 indicating a perfect negative linear relationship, 0 indicating no linear relationship, and 1 indicating a perfect positive linear relationship.
The formula for the correlation coefficient between two variables X and Y is:
r(X, Y) = Cov(X, Y) / (SD[X] * SD[Y])
where SD[X] and SD[Y] are the standard deviations of X and Y, respectively.
In summary, covariance measures how two variables vary together, while correlation measures the strength and direction of the linear relationship between two variables.
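As a hedged illustration, the sketch below uses NumPy (assuming it is installed) on two made-up arrays; note that np.cov and np.corrcoef use the sample (n-1) convention by default:

```python
import numpy as np

# Two made-up variables
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# np.cov returns the covariance matrix; the off-diagonal entry is Cov(X, Y)
cov_xy = np.cov(x, y)[0, 1]

# np.corrcoef returns the correlation matrix; the off-diagonal entry is Pearson's r
r_xy = np.corrcoef(x, y)[0, 1]

print(cov_xy, r_xy)  # strong positive relationship, r close to 1
```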
➣Central limit theorem
The Central Limit Theorem (CLT) is a fundamental concept in statistics that states that given a sufficiently large sample size from a population with a finite mean and variance, the distribution of the sample means will be approximately normal, regardless of the distribution of the original population.
In other words, the Central Limit Theorem states that as the sample size increases, the distribution of sample means approaches a normal distribution, even if the underlying population distribution is not normal. This theorem is particularly important because it allows statisticians to make inferences about a population based on a sample, as long as the sample is sufficiently large and randomly selected.
The Central Limit Theorem has many applications in fields such as finance, economics, and engineering. For example, it can be used to estimate the mean or standard deviation of a population from a sample, or to construct confidence intervals and perform hypothesis testing. It is considered one of the most important concepts in statistics, as it allows us to make reliable inferences about a population based on a sample.
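A small simulation sketch (assuming NumPy is available) illustrates the theorem: even though the exponential population is heavily skewed, the means of many samples cluster in a roughly normal shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Skewed (non-normal) population: exponential with mean 1
sample_size = 50
n_samples = 10_000

# Draw many samples and record each sample's mean
sample_means = rng.exponential(scale=1.0, size=(n_samples, sample_size)).mean(axis=1)

# The distribution of sample means is approximately normal with
# mean ≈ 1 and standard deviation ≈ 1 / sqrt(sample_size)
print(sample_means.mean(), sample_means.std())
```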
➣P-value
P-value is a statistical measure used in hypothesis testing to determine the likelihood of observing a particular result, assuming that the null hypothesis is true. The null hypothesis is a statement that there is no significant difference between two groups or variables being compared.
The P-value represents the probability of obtaining a test statistic as extreme as, or more extreme than, the observed test statistic, assuming that the null hypothesis is true. A low P-value indicates that it is unlikely that the observed result was due to chance, and the null hypothesis can be rejected.
Typically, a significance level (alpha) is chosen prior to conducting the test, and if the P-value is lower than the significance level, the null hypothesis is rejected. The commonly used significance level is 0.05, which means that if the P-value is less than 0.05, the results are considered statistically significant.
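For illustration, here is a hedged sketch with SciPy (assuming it is installed): a one-sample t-test of whether a small made-up sample could plausibly come from a population with mean 50:

```python
from scipy import stats

# Made-up sample measurements
sample = [52.1, 49.8, 53.4, 51.0, 50.7, 52.9, 48.9, 51.6]

# Null hypothesis: the population mean is 50
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject the null hypothesis at the 5% significance level")
else:
    print("Fail to reject the null hypothesis")
```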
➣Expected value of random variables
Expected value is a concept in probability theory that represents the long-run average of the values of a random variable over repeated trials. It is a measure of the central tendency of a probability distribution.
The expected value of a discrete random variable X can be calculated as:
E(X) = Σ x * P(X = x)
where x is the value of the random variable and P(X = x) is the probability of X taking on the value x.
For example, if we toss a fair coin, the random variable X can take on the values of 0 or 1, where 0 represents tails and 1 represents heads. The probability of X being 0 is 0.5, and the probability of X being 1 is 0.5. Therefore, the expected value of X is:
E(X) = 0 * 0.5 + 1 * 0.5 = 0.5
which means that we can expect the long-run average value of X to be 0.5.
For continuous random variables, the expected value can be calculated using integration:
E(X) = ∫ x * f(x) dx
where f(x) is the probability density function of X.
The expected value is a useful tool in probability theory, as it provides a single number that summarizes the central tendency of a probability distribution, and can be used to compare different distributions or to make predictions about the outcomes of random events.
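The coin example above can be written directly as a sum over the distribution; a minimal sketch, also covering a fair six-sided die:

```python
# Expected value of a discrete random variable: E(X) = Σ x * P(X = x)

# Fair coin: 0 = tails, 1 = heads
coin = {0: 0.5, 1: 0.5}
e_coin = sum(x * p for x, p in coin.items())
print(e_coin)  # 0.5

# Fair six-sided die, each face with probability 1/6
die = {face: 1 / 6 for face in range(1, 7)}
e_die = sum(x * p for x, p in die.items())
print(e_die)  # 3.5
```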
➣Probability theory
Probability theory is a branch of mathematics that deals with the study of random events and the likelihood of their occurrence. It provides a framework for analyzing and understanding the uncertain or random nature of many phenomena, from games of chance to weather patterns to stock market fluctuations.
Probability theory involves the use of mathematical tools to measure and quantify the likelihood or probability of events occurring. The basic building block of probability theory is the concept of probability, which is a numerical measure of the likelihood of an event occurring, expressed as a number between 0 and 1.
The principles of probability theory are used in many areas, including statistics, physics, engineering, finance, computer science, and data science.
➣Conditional probability
Conditional probability is a measure of the likelihood of an event occurring, given that another event has already occurred. It is denoted by P(A|B), which represents the probability of event A occurring, given that event B has occurred.
The formula for conditional probability is:
P(A|B) = P(A and B) / P(B)
where P(A and B) is the probability of both A and B occurring, and P(B) is the probability of event B occurring.
For example, suppose we have a bag containing 3 red balls and 2 green balls. We randomly select a ball from the bag and without replacing it, we randomly select another ball from the bag. The probability of selecting a red ball on the first draw is 3/5, and if we selected a red ball on the first draw, the probability of selecting a red ball on the second draw is 2/4 (since there are now 4 balls left in the bag, of which 2 are red).
Therefore, the conditional probability of selecting a red ball on the second draw, given that a red ball was selected on the first draw, is:
P(Red on 2nd Draw | Red on 1st Draw) = P(Red on 1st Draw and Red on 2nd Draw) / P(Red on 1st Draw)
= (3/5 * 2/4) / (3/5)
= 2/4 = 1/2
which means that the probability of selecting a red ball on the second draw, given that a red ball was selected on the first draw, is 1/2.
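A quick simulation sketch (standard library only) that checks this result empirically:

```python
import random

trials = 100_000
red_first = 0
red_both = 0

for _ in range(trials):
    bag = ["red"] * 3 + ["green"] * 2
    random.shuffle(bag)
    first, second = bag[0], bag[1]  # drawing two balls without replacement
    if first == "red":
        red_first += 1
        if second == "red":
            red_both += 1

# P(red on 2nd | red on 1st) = P(red on both) / P(red on 1st) ≈ 0.5
print(red_both / red_first)
```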
➣Bayes’ theorem
Bayes’ theorem is a formula used in probability theory to calculate the probability of an event A, given the occurrence of another event B. It is named after Thomas Bayes, an 18th-century English statistician who developed the theorem.
The formula for Bayes’ theorem is:
P(A|B) = P(B|A) * P(A) / P(B)
where P(A|B) is the probability of event A given that event B has occurred, P(B|A) is the probability of event B given that event A has occurred, P(A) is the prior probability of event A, and P(B) is the prior probability of event B.
Bayes’ theorem can be applied to many real-world situations, such as medical diagnosis, spam filtering, and stock market forecasting. For example, in medical diagnosis, Bayes’ theorem can be used to calculate the probability of a patient having a disease, given their symptoms and test results. The prior probability of the disease is based on the prevalence of the disease in the general population, and the likelihood of the symptoms and test results given the disease (i.e., P(B|A)) is based on medical research and clinical experience.
Bayes’ theorem is a powerful tool in probability theory and statistics, as it allows us to update our beliefs and predictions based on new evidence or information. It is widely used in machine learning, data analysis, and decision-making in various fields.
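A minimal sketch of the medical-diagnosis example, with made-up numbers for the prevalence and test accuracy:

```python
# Made-up inputs for illustration
p_disease = 0.01             # prior probability of the disease, P(A)
p_pos_given_disease = 0.95   # probability of a positive test given the disease, P(B|A)
p_pos_given_healthy = 0.05   # false positive rate

# Total probability of a positive test, P(B)
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_positive = p_pos_given_disease * p_disease / p_positive

print(round(p_disease_given_positive, 3))  # ≈ 0.161
```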
➣Descriptive statistics
Descriptive statistics is a branch of statistics that deals with the analysis and summary of data. It is used to describe and summarize the characteristics of a set of data by organizing, displaying and summarizing the data in a meaningful way.
Descriptive statistics involves various statistical measures such as measures of central tendency (mean, median, and mode), measures of variability (standard deviation, variance, and range), and measures of association (correlation). These measures provide a way to understand the distribution of the data, the spread of the data, and the relationship between different variables.
Descriptive statistics is widely used in various fields such as social sciences, business, and medicine to summarize and communicate data in a clear and concise manner. It is an essential tool for understanding the nature of data, identifying patterns and trends, and making informed decisions based on the data.
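As a quick illustration, a hedged sketch with pandas (assuming it is installed) that summarizes a small made-up dataset:

```python
import pandas as pd

# Small made-up dataset
df = pd.DataFrame({
    "age": [23, 35, 31, 42, 28, 39],
    "income": [32_000, 54_000, 47_000, 61_000, 38_000, 58_000],
})

# Count, mean, standard deviation, min, quartiles, and max for each column
print(df.describe())

# Pairwise Pearson correlations between the columns
print(df.corr())
```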
➣Inferential statistics
Inferential statistics is a branch of statistics that uses statistical methods to draw conclusions and make inferences about a population based on a sample of data. It is used to make predictions or generalize findings beyond the specific data set that was analyzed.
The goal of inferential statistics is to provide a way to generalize findings from a sample to a larger population, while acknowledging the uncertainty and variability inherent in statistical analysis. This allows researchers and analysts to make informed decisions based on data, even when it is not possible to study an entire population directly.
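For example, a hedged sketch (assuming SciPy is installed) that uses a made-up sample to build a 95% confidence interval for the unknown population mean:

```python
import numpy as np
from scipy import stats

# Made-up sample drawn from a larger population
sample = np.array([12.1, 11.8, 13.4, 12.6, 12.9, 11.5, 13.1, 12.2])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean

# 95% confidence interval for the population mean (t distribution)
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(mean, (ci_low, ci_high))
```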
➣Hypothesis testing
Hypothesis testing is a statistical method used to determine whether there is enough evidence in a sample of data to support or reject a hypothesis about a population parameter. It involves making a statement, or hypothesis, about the parameter and then testing this hypothesis using sample data.
The hypothesis being tested is typically referred to as the null hypothesis, and it is evaluated against a competing claim called the alternative hypothesis.
- The null hypothesis usually represents the status quo or a default position
- The alternative hypothesis represents a deviation from the null hypothesis that the researcher is interested in testing.
The process of hypothesis testing involves several steps:
- First, the researcher formulates a null hypothesis and an alternative hypothesis.
- Then, a sample is collected, and a test statistic is calculated from the sample data.
- The test statistic is then compared to a critical value, which is derived from a probability distribution, to determine whether the null hypothesis can be rejected or not.
- If the test statistic falls outside the critical value, the null hypothesis is rejected, and the alternative hypothesis is accepted.
- If the test statistic falls within the critical value, the null hypothesis is not rejected, and no conclusion can be drawn about the alternative hypothesis.
Hypothesis testing is widely used in scientific research, social sciences, and business to make decisions based on data and to draw conclusions about populations based on sample data. It is an essential tool for evaluating the validity of hypotheses and for making decisions based on statistical evidence.
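A hedged sketch of these steps (assuming SciPy is installed), using a two-sample t-test on two made-up groups; it compares the p-value to the significance level, which is equivalent to comparing the test statistic to a critical value:

```python
from scipy import stats

# Step 1: H0: the two groups have the same mean; H1: the means differ
group_a = [14.2, 15.1, 13.8, 16.0, 14.9, 15.4]
group_b = [16.8, 17.2, 15.9, 18.1, 16.5, 17.7]

# Steps 2-3: compute the test statistic and p-value from the sample data
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Steps 4-5: compare against the chosen significance level
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```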
➣Regression analysis
Regression analysis is a statistical method used to model the relationship between a dependent variable and one or more independent variables. It is commonly used to predict or estimate the value of the dependent variable based on the values of the independent variables.
The goal of regression analysis is to find the best fit line or curve that can explain the relationship between the variables, such that the difference between the observed values and the predicted values is minimized.
Regression analysis can be used in various fields, including finance, economics, social sciences, engineering, and more. It is a powerful tool for modeling and predicting trends, patterns, and relationships between variables, and it is often used in decision-making processes and policy-making.
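As an illustration, a minimal sketch of simple linear regression with NumPy (assuming it is installed), fitting a best-fit line to made-up data:

```python
import numpy as np

# Made-up data: one independent variable x, one dependent variable y
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.2, 4.1, 5.9, 8.2, 9.8, 12.1])

# Least-squares fit of a degree-1 polynomial: y ≈ slope * x + intercept
slope, intercept = np.polyfit(x, y, 1)

# Predict the dependent variable for a new value of x
x_new = 7.0
print(slope, intercept, slope * x_new + intercept)
```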
➣Time series analysis
Time series analysis is the study of data over time, including the identification of trends, seasonal patterns, and other features. In a time series, data points are collected at regular intervals over time, and the analysis involves identifying patterns and trends in the data. Time series analysis can be used to make predictions and forecasts about future values of the data based on past patterns and trends.
The goal of time series analysis is to model the underlying processes that generate the data, so that predictions can be made about future values. The analysis involves examining the data for trends, seasonality, cyclical patterns, and other features that may influence the values over time.
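A small sketch with pandas (assuming it is installed) showing one basic technique, a rolling (moving) average that smooths short-term fluctuations so the underlying trend is easier to see:

```python
import pandas as pd

# Made-up monthly observations
dates = pd.date_range("2023-01-01", periods=12, freq="MS")
values = [112, 118, 130, 127, 140, 152, 160, 158, 149, 141, 135, 128]
series = pd.Series(values, index=dates)

# 3-month rolling mean smooths short-term noise and highlights the trend
rolling_mean = series.rolling(window=3).mean()
print(rolling_mean)
```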
➣Multivariate analysis
Multivariate analysis is the analysis of data involving more than one variable. It involves examining the relationships between two or more variables in a dataset to identify patterns, trends, and correlations among them. The goal of multivariate analysis is to understand the complex relationships between variables and to identify the underlying factors that influence them.
There are several types of multivariate analysis techniques, including principal component analysis, factor analysis, discriminant analysis, cluster analysis, and canonical correlation analysis. These techniques can be used to reduce the dimensionality of the data, identify the underlying factors that explain the patterns in the data, and classify or group the data based on similarities and differences.
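For example, a hedged sketch using scikit-learn (assuming it is installed) that applies principal component analysis to reduce four made-up correlated variables to two components:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Made-up dataset: 6 observations of 4 correlated variables
X = np.array([
    [5.1, 3.5, 1.4, 0.2],
    [4.9, 3.0, 1.4, 0.2],
    [6.3, 3.3, 6.0, 2.5],
    [5.8, 2.7, 5.1, 1.9],
    [6.4, 3.2, 4.5, 1.5],
    [5.7, 2.8, 4.1, 1.3],
])

# Standardize, then project onto the two directions of greatest variance
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                # (6, 2)
print(pca.explained_variance_ratio_)  # share of variance captured by each component
```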
➣Bayesian statistics
Bayesian statistics is a statistical approach that involves updating prior beliefs or probabilities based on new data. It provides a framework for updating our beliefs or assumptions about a hypothesis based on new evidence or data.
In Bayesian statistics, the probability of a hypothesis or event is not fixed, but instead is updated as new information becomes available. This updated probability is called a posterior probability, and it is based on both the prior probability and the likelihood of the observed data given the hypothesis.
Bayesian statistics is different from classical or frequentist statistics, which assumes that the probability of an event or hypothesis is fixed and only the data can change. In contrast, Bayesian statistics incorporates prior knowledge or beliefs about the hypothesis and updates them based on new evidence.
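A minimal sketch of the idea (assuming SciPy is available): a Beta prior over a coin's heads probability, updated to a posterior after observing made-up data:

```python
from scipy import stats

# Prior belief about the probability of heads: Beta(2, 2), centered at 0.5
prior_a, prior_b = 2, 2

# New evidence: 8 heads in 10 made-up flips
heads, tails = 8, 2

# Conjugate update: posterior is Beta(prior_a + heads, prior_b + tails)
post_a, post_b = prior_a + heads, prior_b + tails
posterior = stats.beta(post_a, post_b)

# Posterior mean shifts from the prior's 0.5 toward the observed rate of 0.8
print(posterior.mean())  # ≈ 0.714
```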
These are just some of the key statistical concepts required for data science. Other important concepts include data visualization, exploratory data analysis, and machine learning techniques such as classification and clustering.
In summary, basic statistical concepts are fundamental for anyone working in or planning to work in data science. Although there is much more to learn about statistics, understanding these basics is a good starting point for further learning and development. With a solid foundation in these concepts, one can gradually advance to more advanced topics.