====== Regression Models ======
===== Introduction =====
While AI and especially Deep Learning techniques have advanced tremendously, fundamental data analysis methods still provide a good and, in most cases, efficient way of solving many data analysis problems. Linear regression is one of those methods: it provides at least a good starting point for an informative and insightful understanding of the data. Linear regression models are relatively simple and in most cases do not require significant computing power, which makes them widely applied in different contexts.
The term regression, describing regression towards the mean value of a population, was widely promoted by Francis Galton, who also introduced the term "correlation" into modern statistics((Everitt, B. S. (2002). The Cambridge Dictionary of Statistics (2nd ed.). Cambridge University Press. ISBN 978-0521810999.)) ((Upton, Graham; Cook, Ian (2008). Oxford Dictionary of Statistics. Oxford University Press. ISBN 978-0-19-954145-4.)) ((Stigler, Stephen M. (1997). "Regression toward the mean, historically considered". Statistical Methods in Medical Research. 6 (2): 103-114. doi:10.1191/096228097676361431. PMID 9261910.)).
===== Linear regression model =====
Linear regression is an algorithm that computes the linear relationship between a dependent variable and one or more independent features by fitting a linear equation to observed data. In essence, linear regression builds a linear function – a model that approximates a set of numerical data in a way that minimises the squared error between the model prediction and the actual data. The data consist of at least one independent variable (usually denoted by x) and the function or dependent variable (usually denoted by y).
If there is just one independent variable, then it is known as Simple Linear Regression, while in the case of more than one independent variable, it is called Multiple Linear Regression. In the same way, in the case of a single dependent variable, it is called Univariate Linear Regression. In contrast, in the case of many dependent variables, it is known as Multivariate Linear Regression.
For illustration purposes, figure {{ref>Galton's_data_set}} below shows a simple data set that F. Galton used while studying the relationship between parents' and their children's heights. The data set might be found here: ((josephsalmon.eu/enseignement/TELECOM/MDI720/datasets/Galton.txt - Cited on 03.08.2024.))
If the fathers' heights are X and their children's heights are Y, the linear regression algorithm is looking for a linear function that, in the ideal case, will fit all the children's heights to their fathers' heights. So, the function would look like the following equation:
$$ y_i = \beta_0 + \beta_1 x_i $$
where:
* yi – ith child height
* xi – ith father height
* β0 and β1 – the intercept (y-axis crossing) and slope coefficients of the linear function, respectively
Unfortunately, in the context of the given example, finding such a function that fits all x-y pairs at once is not possible, since the x and y values differ from pair to pair. However, it is possible to find a linear function that, for all x-y pairs, minimises the distance between the given y and the y' produced by the function or model. In this case, y' is an estimated or forecasted y value, and the distance between each y-y' pair is called an error. Since the error might be positive or negative, the squared error is used to estimate it.
It means that the following equation might describe the model:
$$ y'_i = \beta'_0 + \beta'_1 x_i $$
where
* y'i – ith child height estimated by the model
* xi – ith father height
* β'0 and β'1 – the intercept and slope coefficient estimates of the linear function, respectively, which minimise the error term:
$$ \sum_{i=1}^{n} (y_i - y'_i)^2 $$
The estimated beta values might be calculated as follows:
$$ \beta'_1 = Cor(X, Y) \frac{\sigma_y}{\sigma_x}, \qquad \beta'_0 = \mu_y - \beta'_1 \mu_x $$
where:
* Cor(X, Y) – correlation between X and Y (capital letters denote vectors of the corresponding individual x and y values)
* σx and σy – standard deviations of the vectors X and Y
* µx and µy – mean values of the vectors X and Y
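To make the formulas concrete, the following sketch computes the closed-form estimates with NumPy on a small, hypothetical sample of father and child heights (the values are illustrative, not Galton's data):
<code python>
import numpy as np

# Hypothetical father (x) and child (y) heights in inches - illustrative only
x = np.array([65.0, 66.5, 68.0, 70.0, 72.0, 73.5])
y = np.array([66.0, 67.0, 68.5, 69.0, 71.0, 72.0])

# Closed-form estimates, matching the equations above
beta1 = np.corrcoef(x, y)[0, 1] * y.std() / x.std()
beta0 = y.mean() - beta1 * x.mean()

# The same model fitted in one call
beta1_fit, beta0_fit = np.polyfit(x, y, deg=1)

print(beta0, beta1)          # intercept and slope from the closed form
print(beta0_fit, beta1_fit)  # should match the values above
</code>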
Most modern data processing packages possess dedicated functions for building linear regression models with a few lines of code. The result is illustrated in figure {{ref>Galton's_data_set_with_model}}:
===== Errors and their meaning =====
As discussed previously, an error in the context of the linear regression model represents the distance between the true dependent variable values and the estimates provided by the model, which the following equation might represent:
$$ e_i = y_i - y'_i $$
where,
* y'i – ith child height estimated by the model
* yi – ith child height true value
* ei – error of the model's ith output
Since the error for a given yi might be positive or negative and the model itself minimises the overall error, one might expect the errors to be distributed around the model with a mean value of 0 and a sum close or equal to 0. Examples of the error for a few randomly selected data points are depicted in red in the following figure {{ref>Galton's_data_set_errors}}:
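Continuing the hypothetical NumPy sketch above, the residuals and their near-zero mean can be checked directly:
<code python>
# Residuals of the fitted model (continuing the sketch above)
y_est = beta0 + beta1 * x        # model estimates y'
errors = y - y_est               # e_i = y_i - y'_i

# For a least-squares fit the residuals are centred around zero
print(errors)
print(errors.mean())  # expected to be (numerically) close to 0
</code>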
Unfortunately, knowing these facts alone does not always provide enough information about the modelled process. In most cases, due to some dynamic features of the process, the distribution of the errors is as important as the model itself. For instance, a motor shaft wears out over time, and its fluctuations steadily increase from the centre of rotation. To estimate the overall wear of the shaft, a single maximum amplitude measurement is enough. However, it is not enough to understand the dynamics of the wearing process.
Another important aspect is the order of magnitude of the errors compared to the measurements: if the errors are small, they might be impossible to notice even when the model is plotted. The following figure {{ref>Error_distribution_example}} illustrates such a situation:
In figure {{ref>Error_distribution_example}}, both the small error magnitudes and their progression dynamics are illustrated. Another example, of a cyclic error distribution, is provided in the following figure {{ref>Error_distribution_example2}}:
From this discussion, a few essential notes have to be taken:
* Error distributions (around 0) should be treated as carefully as the models themselves;
* In most cases, the structure of the error distribution is difficult to notice even when the errors are plotted;
* It is essential to look into the distribution to ensure that there are no regularities.
If any regularities are noticed, whether a simple increase in variance or a cyclic pattern, they point to something the model does not consider. They might point to a lack of data, i.e., other factors that influence the modelled process but are not part of the model and are therefore exposed through the nature of the error distribution. They also might point to an oversimplified view of the problem, in which case more complex models should be considered. In any of the mentioned cases, a deeper analysis is warranted.
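One simple numerical check for such regularities is the lag-1 autocorrelation of the residuals: values far from 0 hint at a systematic pattern the model misses. A minimal sketch, assuming the residuals are ordered by the independent variable:
<code python>
import numpy as np

def residual_autocorrelation(errors: np.ndarray) -> float:
    """Lag-1 autocorrelation of residuals ordered by the independent variable.

    Values near 0 suggest unstructured noise; values far from 0 suggest
    a regularity (trend, cycle) the model does not capture.
    """
    return float(np.corrcoef(errors[:-1], errors[1:])[0, 1])

# Example: cyclic residuals produce a clearly non-zero autocorrelation
t = np.linspace(0.0, 6.0 * np.pi, 200)
cyclic_errors = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(200)
print(residual_autocorrelation(cyclic_errors))  # close to 1 for a slow cycle
</code>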
In a more general way, the linear model might be described with the following equation:
$$ y_i = \beta_0 + \beta_1 x_i + e_i, \qquad e_i \sim N(0, \sigma^2) $$
Here, the error is considered to be normally distributed around 0, with standard deviation σ and variance σ². The variance provides at least a numerical insight into the error distribution; therefore, it should be considered an indicator for further analysis. Unfortunately, the true value of σ is not known; thus, its estimated value should be used:
$$ \sigma'^2 = \frac{1}{n - 2} \sum_{i=1}^{n} (y_i - y'_i)^2 $$
Here, the expected value of the estimated variance equals the true variance value:
$$ E[\sigma'^2] = \sigma^2 $$
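Under these assumptions, the variance estimate can be computed from the residuals; note the division by n − 2, which accounts for the two estimated coefficients (a sketch continuing the example above):
<code python>
# Unbiased estimate of the error variance (continuing the sketch above)
n = len(x)
sigma2_est = np.sum((y - y_est) ** 2) / (n - 2)
print(sigma2_est)  # estimated variance of the error term
</code>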
===== Multiple linear regression =====
In many practical problems, the target variable Y might depend on more than one independent variable X – for instance, wine quality, which depends on its acidity, amount of sugars, and other factors. Applying a linear regression model in this case might not seem as straightforward, but it is still a linear model of the following form:
$$ y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_k x_{ik} + e_i $$
During the application of the linear regression model, the error term to be minimised is described by the following equation:
$$ \sum_{i=1}^{n} (y_i - y'_i)^2 = \sum_{i=1}^{n} \Big( y_i - \beta'_0 - \sum_{j=1}^{k} \beta'_j x_{ij} \Big)^2 $$
Unfortunately, due to the number of factors (dimensions), the results of multiple linear regression cannot be visualised in the same way as those of a simple linear regression. Therefore, a numerical analysis and interpretation of the model should be done. In many situations, the numerical analysis is complicated and requires a semantic interpretation of the data and the model. Visualising the relation between the dependent variable and each independent variable then results in multiple graphs; otherwise, the quality of the model is hard or even impossible to assess.
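A minimal sketch of a multiple linear regression fit, using a least-squares solve over a hypothetical data matrix (the feature names are illustrative, not a real wine data set):
<code python>
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample: 100 wines, 3 numeric features (sugar, acidity, pH)
X = rng.normal(size=(100, 3))
true_betas = np.array([0.8, -0.5, 0.3])
y = 5.0 + X @ true_betas + rng.normal(scale=0.1, size=100)

# Add an intercept column and solve the least-squares problem
X1 = np.column_stack([np.ones(len(X)), X])
betas, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(betas)  # approximately [5.0, 0.8, -0.5, 0.3]
</code>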
===== Piecewise linear models =====
Piecewise linear models, as the name suggests, allow splitting the overall data sample into pieces and building a separate model for every piece, thus achieving a better fit to the data sample. The formal representation of the model is as follows:
$$ y'_i = \begin{cases} \beta'_{0,1} + \beta'_{1,1} x_i, & x_i \le b_1 \\ \beta'_{0,2} + \beta'_{1,2} x_i, & b_1 < x_i \le b_2 \\ \dots \\ \beta'_{0,m} + \beta'_{1,m} x_i, & b_{m-1} < x_i \end{cases} $$
As might be noticed, the individual models are still linear and individually simple. However, the main difficulty is to set the threshold values b that split the sample into pieces.
To illustrate the problem better, one might consider the following artificial data sample (figure {{ref>Complex_data_example}}):
Intuition suggests splitting the sample into two pieces and, with the boundary b around 0, fitting a linear model for each of the pieces separately (figure {{ref>Piecewise_linear_model_two}}):
Since we do not know the exact best split, it might seem logical to play with different numbers of splits at different positions. For instance, a random number of splits might generate the following result (figure {{ref>Piecewise_linear_model_many}}):
It is evident from the figure above that some of the individual linear models do not reflect the overall trends, i.e., the slope steepness and direction (positive or negative) seem to be incorrect. However, it is also apparent that those individual models might fit their limited sample splits better. This simple example shows how confusing selecting the number of splits and their boundaries might be.
Unfortunately, there is no simple answer, and the possible solution might be one of the following:
* Using contextual information, the model developer might select a particular number of splits and boundaries based on the context.
* Some additional methods might be used to find the best split automatically; software packages usually provide tools for this. For Python developers, the handy mlinsights package ((https://pypi.org/project/mlinsights/)) provides a set of such tools, including regression trees and other methods. A manual variant is sketched below.
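A minimal sketch of a manual piecewise fit with a hand-picked boundary b = 0, mirroring the two-piece split discussed above (the data are synthetic and illustrative; packages such as mlinsights can search for the boundaries automatically):
<code python>
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample with a trend change at x = 0 - illustrative only
x = np.sort(rng.uniform(-3.0, 3.0, 200))
y = np.where(x <= 0.0, 1.0 + 2.0 * x, 1.0 - 1.5 * x) + rng.normal(scale=0.2, size=200)

# Hand-picked boundary; each piece gets its own linear model
b = 0.0
left, right = x <= b, x > b
slope_l, intercept_l = np.polyfit(x[left], y[left], deg=1)
slope_r, intercept_r = np.polyfit(x[right], y[right], deg=1)

print(intercept_l, slope_l)  # approximately 1.0 and 2.0
print(intercept_r, slope_r)  # approximately 1.0 and -1.5
</code>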