In most cases, data must be prepared before analysis or before applying processing methods. There might be different reasons for this, for instance, missing values, sensor malfunctions, different time scales, different units, a specific format required by a given method or algorithm, and many more. Therefore, data preparation is as necessary as the analysis itself. While data preparation is usually very specific to a given problem, some general cases and preprocessing tasks prove to be very useful. Data preprocessing also depends on the nature of the data: it is usually very different for data where the time dimension is essential (time series) than for data where it is not, such as a log of discrete cases for classification with no internal causal dependencies among entries. It must be emphasised that whatever preprocessing is done, it needs to be carefully noted, and the reasoning behind it must be explained to allow others to understand the results acquired during the analysis.
Some of the methods explained here might also be applied to time series, but this must be done with full awareness of the possible implications. Usually, the data should be formatted as a table consisting of rows representing data entries or events and fields representing features of the entry. For instance, a row might represent a room climate data entry, where the fields or factors represent air temperature, humidity level, CO2 level and other vital measurements. For the sake of simplicity, in this chapter it is assumed that data is formatted as a table.
One of the most common situations is missing sensor measurements, which might be caused by communication channel issues, IoT node malfunctioning or other reasons. Since most data analysis methods require complete entries, it is necessary to ensure that all data fields are present before applying them. There are several common approaches to dealing with missing values:
Scaling is a frequently used method for factors with continuous numerical values. The main reason for it is that different factors are observed on different value intervals. It is essential for methods like clustering, where a multi-dimensional Euclidean distance is used and, in the case of different scales, one of the dimensions might overwhelm the others simply because of the higher order of its numerical values. Usually, scaling is performed by applying a linear transformation of the data with set minimum and maximum values, which mark the desired value interval. In most software packages, like Python Pandas [1], scaling is implemented as a simple-to-use function. However, it might be done manually if needed as well:
$$ V_{new} = I_{min} + \frac{(V_{old} - m_{min}) \cdot (I_{max} - I_{min})}{m_{max} - m_{min}} $$

where:
$V_{old}$ – the old measurement
$V_{new}$ – the new (scaled) measurement
$m_{min}$ – minimum value of the measurement interval
$m_{max}$ – maximum value of the measurement interval
$I_{min}$ – minimum value of the desired interval
$I_{max}$ – maximum value of the desired interval
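As a minimal sketch of the manual approach, assuming a pandas DataFrame with hypothetical room-climate columns (the names and values are illustrative only), the transformation above can be implemented as follows:

```python
import pandas as pd

# Hypothetical room-climate readings; column names and values are illustrative only.
df = pd.DataFrame({
    "temperature": [20.1, 21.4, 19.8, 22.3],     # degrees Celsius
    "co2":         [415.0, 600.0, 550.0, 980.0]  # ppm
})

def min_max_scale(series: pd.Series, i_min: float = 0.0, i_max: float = 1.0) -> pd.Series:
    """Linearly map a series from its own [min, max] interval onto [i_min, i_max]."""
    m_min, m_max = series.min(), series.max()
    return i_min + (series - m_min) * (i_max - i_min) / (m_max - m_min)

scaled = df.apply(min_max_scale)  # each column now lies in the [0, 1] interval
print(scaled)
```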
Normalisation is effective when the data distribution is unknown or known to be non-Gaussian (not following the bell curve of the Gaussian distribution). It is beneficial for data with varying scales, especially when using algorithms that do not assume any specific data distribution, such as k-nearest neighbours and artificial neural networks. Normalisation does not change the scale of the values but reshapes their distribution to resemble a Gaussian distribution. This technique is mainly used in machine learning and is performed with appropriate software packages due to the complexity of the calculations compared to scaling.
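As an illustrative sketch only, one way to reshape a skewed factor towards a Gaussian distribution is a power transform; the use of scikit-learn's Yeo-Johnson transform and the synthetic data here are assumptions, not a prescribed choice:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Hypothetical, strongly skewed sensor readings (illustrative data only).
rng = np.random.default_rng(seed=0)
readings = rng.exponential(scale=5.0, size=1000).reshape(-1, 1)

# Yeo-Johnson power transform: reshapes the value distribution towards a Gaussian;
# standardize=True additionally centres it to zero mean and unit variance.
transformer = PowerTransformer(method="yeo-johnson", standardize=True)
normalised = transformer.fit_transform(readings)

print(readings.mean(), readings.std())      # skewed, mean and spread around 5
print(normalised.mean(), normalised.std())  # approximately 0 and 1
```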
Sometimes, it is necessary to emphasise a particular phenomenon in the data. For instance, it might be very helpful to amplify the changes in a factor value, i.e. values more distant from 0 should become even larger, while those closer to 0 should not be raised as much. In this case, a simple technique is to apply a power function to the factor values, for example squaring them or raising them to the fourth power. If negative values are present, odd (uneven) powers might be used so that the sign is preserved. A variation of the technique is to sum up different factor values before or after applying the power. In this case, a group of similar values representing the same phenomenon emphasises it. Any other function can be applied to represent the specifics of the problem.
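A small sketch with hypothetical deviation values, illustrating even and odd powers and the group-summing variant:

```python
import numpy as np

# Hypothetical deviations of a factor from its baseline (note the negative values).
deviations = np.array([-2.0, -0.5, 0.1, 0.4, 1.5, 3.0])

emphasised_even = deviations ** 2  # even power: large deviations dominate, but the sign is lost
emphasised_odd = deviations ** 3   # odd ("uneven") power: emphasis with the sign preserved

# Variation: sum a group of related factors first, then apply the power,
# so that several measurements of the same phenomenon reinforce each other.
group_emphasis = (deviations[:3].sum()) ** 2

print(emphasised_even)
print(emphasised_odd)
print(group_emphasis)
```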
Time series usually represent the dynamics of some process, and therefore the order of the data entries has to be preserved. This means that in most cases all of the mentioned methods might be used as long as the data order remains the same. A time series is simply a set of data entries, usually events, arranged by a time marker. Typically, time series are arranged in the order in which the events occur or are recorded. Several important consequences follow from this simple fact:
Therefore, there are several questions that data analysis typically tries to answer:
Autocorrelation - A process is autocorrelated if the similarity between the values of observations is a function of the time between them. In other words, the difference between the values of the observations depends on the interval between the observations. This does not mean that the process values are identical, but that the difference between them is similar. The process can equally well be decaying or growing in the mean value or amplitude of the measurements, but the difference between subsequent measurements is always the same (or close to it).
Seasonality - The process is seasonal if the deviation from the average value is repeated periodically. This does not mean the values must match perfectly, but there must be a general tendency to deviate periodically from the average value. A perfect example is a sinusoid.
Stationarity - A process is stationary if its statistical properties do not change over time. Generally, the mean and variance over a period serve as good measures. In practice, a certain tolerance interval is used to tell whether a process is stationary, since ideal (noise-free) cases do not tend to occur in practice. For example, temperature measurements over several years are stationary and seasonal. They are not autocorrelated because temperatures still vary considerably from day to day. Numerically, stationarity is evaluated with the so-called Dickey-Fuller test [2], which uses a linear regression model to measure change over time at a given time step. The model's t-test [3] indicates how statistically strong the hypothesis of process stationarity is.
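A minimal usage sketch of the Dickey-Fuller test, assuming the statsmodels package and a synthetic seasonal series as a stand-in for real temperature measurements:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Synthetic stand-in for multi-year temperature data: seasonal around a constant mean.
rng = np.random.default_rng(seed=1)
days = np.arange(3 * 365)
series = 10 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, size=days.size)

# Augmented Dickey-Fuller test; the null hypothesis is that the series has a
# unit root, i.e. that it is NOT stationary.
adf_stat, p_value, used_lag, n_obs, critical_values, _ = adfuller(series)

print(f"ADF statistic: {adf_stat:.2f}, p-value: {p_value:.4f}")
print("Critical values:", critical_values)
# A small p-value (e.g. below 0.05) supports rejecting the unit-root hypothesis,
# which is evidence in favour of stationarity.
```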
In many cases, it is necessary to emphasise the main pattern of the time series while removing the “noise”. In general, there are two main techniques – decimation and smoothing. Both are widely used but need to be treated carefully.
The essence of the method is to obtain an average value within a certain time window M, thereby giving inertia to the incoming signal and reducing the impact of noise on the overall analysis result. Different effects might be obtained depending on the size of the time window M.
$$ SMA_t = \frac{1}{M} \sum_{i=t-M+1}^{t} X_i $$

where:
$SMA_t$ – the new smoothed value at time instant t
$X_i$ – the i-th measurement at time instant i
$M$ – the time window (number of measurements)
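A minimal sketch of the simple moving average with pandas rolling windows, using a synthetic noisy signal as a stand-in for the freezer thermometer:

```python
import numpy as np
import pandas as pd

# Synthetic noisy freezer-thermometer signal, one reading per minute (illustrative only).
rng = np.random.default_rng(seed=2)
timestamps = pd.date_range("2024-01-01", periods=1000, freq="min")
signal = pd.Series(-18 + rng.normal(0, 0.5, size=timestamps.size), index=timestamps)

# Simple moving averages over two different window sizes M.
sma_10 = signal.rolling(window=10).mean()    # short window: follows changes, less smoothing
sma_100 = signal.rolling(window=100).mean()  # long window: much smoother, but lags behind

print(sma_10.tail(3))
print(sma_100.tail(3))
```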
The image below demonstrates the effects of time window sizes of 10 and 100 measurements on an incoming signal from a freezer's thermometer.
The exponential moving average is widely used in noise filtering, for example in the analysis of changes in stock markets. Its main idea is that each measurement's weight (influence) decreases exponentially with its age. Thus, the evaluation relies more on recent measurements and less on older ones.
$$ EMA_t = \alpha \cdot X_t + (1 - \alpha) \cdot EMA_{t-1} $$

where:
$EMA_t$ – the new smoothed value at time instant t
$X_t$ – the measurement at time instant t
$\alpha$ – smoothing factor between 0 and 1, reflecting the weight of the most recent measurement
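A minimal sketch using pandas' exponentially weighted mean; the alpha values and readings are illustrative:

```python
import pandas as pd

# Illustrative noisy readings; alpha is the weight of the most recent measurement.
signal = pd.Series([-18.1, -17.9, -18.4, -18.0, -17.8, -18.3, -18.2])

# adjust=False makes pandas apply the recursive formula above directly.
ema_fast = signal.ewm(alpha=0.5, adjust=False).mean()  # reacts quickly, less smoothing
ema_slow = signal.ewm(alpha=0.1, adjust=False).mean()  # more inertia, smoother output

print(ema_fast)
print(ema_slow)
```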
As seen in the picture below, the exponential moving average preserves the shape of the initial signal for different weighting factor values. It has minimal lag while removing the noise, which makes it a handy smoothing technique.
Decimation is a technique of excluding some entries from the initial time series to reduce overwhelming or redundant data. As the name suggests, to reduce the data by 10%, every tenth entry is excluded. It is a simple method that significantly benefits cases of over-measured processes with slow dynamics. With the time stamps preserved, the data still allows the application of general time-series analysis techniques like forecasting.
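A minimal sketch of decimation on a synthetic, over-sampled series; both the "drop every tenth entry" variant described above and a stronger "keep only every tenth entry" variant are shown for comparison:

```python
import numpy as np
import pandas as pd

# Synthetic over-sampled slow process, one reading per second (illustrative only).
timestamps = pd.date_range("2024-01-01", periods=100, freq="s")
series = pd.Series(np.sin(np.arange(100) / 20), index=timestamps)

# Drop every tenth entry, reducing the data by roughly 10%;
# the remaining entries keep their original time stamps.
keep_mask = np.arange(len(series)) % 10 != 9
reduced = series[keep_mask]

# A stronger variant keeps only every tenth entry (a factor-of-10 reduction).
every_tenth = series.iloc[::10]

print(len(series), len(reduced), len(every_tenth))
```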