
Data Preparation for Data Analysis

Introduction

In most cases, data must be prepared before analysing it or applying some processing method. There can be different reasons for this, such as missing values, sensor malfunctioning, different time scales, different units, a specific format required by a given method or algorithm, and many more. Therefore, data preparation is as necessary as the analysis itself. While data preparation is usually particular to a given problem, some general cases and preprocessing tasks are common to most of them. Data preprocessing also depends on the data's nature: preprocessing is usually very different for data where the time dimension is essential (time series) than for data where it is not, such as a log of discrete cases for classification with no internal causal dependencies among entries. It must be emphasised that whatever data preprocessing is done needs to be carefully noted and the reasoning behind it explained so that others can understand the results acquired during the analysis.

"Static data"

Some of the methods explained here might also be applied to time series, but this must be done with full awareness of the possible implications. Usually, the data should be formatted as a table, with rows representing data entries or events and fields representing the features of an entry. For instance, a row might represent a room climate data entry, where the fields or factors hold air temperature, humidity level, CO2 level and other vital measurements. For the sake of simplicity, in this chapter it is assumed that data is formatted as such a table.

Filling the missing data

One of the most common situations is missing sensor measurements, which might be caused by communication channel issues, IoT node malfunctioning or other reasons. Since most data analysis methods require complete entries, it is necessary to ensure that all data fields are present before applying the analysis methods. Usually, a few common approaches are used to deal with missing values: dropping incomplete entries, filling the gaps with a constant or a statistic such as the mean or median, or interpolating from neighbouring values, as sketched below.
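As an illustration, the following minimal pandas sketch shows how such strategies might look; the column names, example values and the choice of strategy are assumptions made for the example, not prescriptions from the text.

  import pandas as pd
  import numpy as np

  # Example climate data with missing (NaN) sensor readings
  data = pd.DataFrame({
      "temperature": [21.5, np.nan, 22.1, 22.4, np.nan],
      "humidity":    [40.0, 41.0, np.nan, 43.0, 44.0],
  })

  # Option 1: drop entries with any missing field
  complete_rows = data.dropna()

  # Option 2: fill gaps with a column statistic (here the mean)
  mean_filled = data.fillna(data.mean())

  # Option 3: interpolate from neighbouring values
  interpolated = data.interpolate(method="linear")

  print(interpolated)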

Scaling

Scaling is a very frequently used method for continuous numerical factors. The main reason is that different factors are observed on different value intervals. This is essential for methods like clustering, where a multi-dimensional Euclidean distance is used: in the case of different scales, one of the dimensions might overwhelm the others just because of the higher order of its numerical values. Usually, scaling is performed by applying a linear transformation of the data with set min and max values, which mark the desired value interval. In most software packages, like Python Pandas [1], scaling is implemented as a simple-to-use function. However, it might be done manually if needed as well:

V_{new} = I_{min} + \frac{(V_{old} - m_{min}) \cdot (I_{max} - I_{min})}{m_{max} - m_{min}}

Figure 1: Scaling

where:
V_old – the old (measured) value
V_new – the new (scaled) value
m_min – minimum value of the measurement interval
m_max – maximum value of the measurement interval
I_min – minimum value of the desired interval
I_max – maximum value of the desired interval
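A minimal sketch of the manual transformation above, assuming a pandas Series of measurements and an illustrative target interval of [0, 1] (the series values and function name are examples, not part of the original text):

  import pandas as pd

  def scale(values: pd.Series, i_min: float = 0.0, i_max: float = 1.0) -> pd.Series:
      """Linearly map measurements to the desired interval [i_min, i_max]."""
      m_min, m_max = values.min(), values.max()
      return i_min + (values - m_min) * (i_max - i_min) / (m_max - m_min)

  temperature = pd.Series([18.2, 19.5, 21.0, 22.4, 20.3])
  print(scale(temperature))  # values now lie between 0 and 1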

Normalisation

Normalisation is effective when the data distribution is unknown or is known to be non-Gaussian (not following the bell curve of the Gaussian distribution). It is beneficial for data with varying scales, especially when using algorithms that do not assume any specific data distribution, such as k-nearest neighbours and artificial neural networks. Normalisation changes not so much the scale of the values as their distribution, reshaping it towards a Gaussian distribution. This technique is mainly used in machine learning and is performed with appropriate software packages due to the complexity of the calculations compared to scaling.
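As one possible illustration (the choice of scikit-learn's PowerTransformer and the generated data are assumptions of this sketch, not a method prescribed by the text), a skewed feature can be reshaped towards a Gaussian distribution as follows:

  import numpy as np
  from sklearn.preprocessing import PowerTransformer

  # Skewed example data, e.g. CO2 readings with a long right tail
  rng = np.random.default_rng(42)
  co2 = rng.lognormal(mean=6.0, sigma=0.5, size=1000).reshape(-1, 1)

  # The Yeo-Johnson transform reshapes the distribution towards a Gaussian
  transformer = PowerTransformer(method="yeo-johnson", standardize=True)
  co2_normalised = transformer.fit_transform(co2)

  print(co2_normalised.mean(), co2_normalised.std())  # close to 0 and 1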

Adding dimensions

Sometimes, it is necessary to emphasise a particular phenomenon in the data. For instance, it might be very helpful to amplify changes in a factor value so that values more distant from 0 become even larger, while those closer to 0 are hardly raised at all. In this case, applying a power function to the factor values, such as squaring or raising to the power of 4, is a simple technique. If negative values are present, odd powers might be used to preserve the sign. A variation of the technique is summing up different factor values before or after applying the power; in this case, a group of similar values representing the same phenomenon emphasises it. Any other function can be used to represent the specifics of the problem.
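A small sketch of the idea, assuming a pandas DataFrame with a single illustrative factor column (the column names and values are hypothetical):

  import pandas as pd

  data = pd.DataFrame({"delta_t": [-2.0, -0.5, 0.1, 0.8, 3.0]})

  # Squaring emphasises values far from 0 (the sign is lost)
  data["delta_t_sq"] = data["delta_t"] ** 2

  # An odd power (e.g. cube) emphasises large deviations while keeping the sign
  data["delta_t_cubed"] = data["delta_t"] ** 3

  print(data)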

Time series

Time series usually represent the dynamics of some process, and therefore, the order of the data entries has to be preserved. This means that in most cases, all of the mentioned methods might be used as long as the data order remains the same. A time series is simply a set of data entries – usually events – arranged by a time marker. Typically, time series are arranged in the order in which the events occur or are recorded. Several significant consequences follow from this simple fact.

Time Series Analysis Questions

Therefore, there are several questions that time-series analysis typically tries to answer, such as whether the process is autocorrelated, seasonal or stationary, and how its future values can be forecast. The following definitions help to frame these questions.

Some definitions

Autocorrelation - A process is autocorrelated if the similarity between observations is a function of the time lag between them. In other words, the difference between the values of the observations depends on the interval between the observations. This does not mean that the process values are identical, but that their differences are similar. The process can equally well be decaying or growing in its mean value or in the amplitude of the measurements, but the difference between subsequent measurements stays the same (or close to it).
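For instance, pandas can estimate the autocorrelation of a series at a chosen lag; the generated series and the lag of 24 samples below are purely illustrative assumptions:

  import numpy as np
  import pandas as pd

  # A noisy periodic signal as an example process
  t = np.arange(200)
  series = pd.Series(np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(200))

  # Correlation between the series and itself shifted by 24 samples
  print(series.autocorr(lag=24))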

Seasonality - The process is seasonal if the deviation from the average value is repeated periodically. This does not mean the values must match perfectly, but there must be a general tendency to deviate from the average value regularly. A perfect example is a sinusoid.

Stationarity - A process is stationary if its statistical properties do not change over time. Generally, the mean and variance over a period serve as good measures. In practice, a certain tolerance interval is used to tell whether a process is stationary, since ideal (noise-free) cases do not tend to occur in practice. For example, temperature measurements over several years are stationary and seasonal. They are not autocorrelated because temperatures still vary considerably from day to day. Numerically, stationarity is evaluated with the so-called Dickey-Fuller test [2], which uses a linear regression model to measure the change over time at a given time step. The model's t-test [3] indicates how statistically strong the hypothesis of process stationarity is.
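As a sketch, the Dickey-Fuller test mentioned above is available in the statsmodels package; the white-noise series generated here is an illustrative assumption:

  import numpy as np
  from statsmodels.tsa.stattools import adfuller

  # A stationary example: white noise around a constant mean
  noise = np.random.randn(500)

  adf_statistic, p_value, *_ = adfuller(noise)
  print(f"ADF statistic: {adf_statistic:.3f}, p-value: {p_value:.3f}")
  # A small p-value supports rejecting the unit-root (non-stationarity) hypothesis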

Time series modelling

In many cases, it is necessary to emphasise the main pattern of the time series while removing the “noise”. In general, there are two main techniques – decimation and smoothing. Both are widely used but need to be treated carefully.

Moving average (sliding average)

The essence of the method is to obtain an average value within a particular time window M, thereby giving inertia to the incoming signal and reducing the impact of noise on the overall analysis result. Different effects might be obtained depending on the size of the time window M.

SMA_t = \frac{1}{M} \sum_{i=t-M+1}^{t} X_i

Figure 2: Moving Average

where:
SMA_t – the new smoothed value at time instant t
X_i – the i-th measurement at time instant i
M – the size of the time window
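In pandas, the same moving average can be computed with a rolling window; the noisy series below is an illustrative stand-in, while the window sizes of 10 and 100 follow the example discussed next:

  import numpy as np
  import pandas as pd

  # Illustrative noisy signal
  raw = pd.Series(np.sin(np.linspace(0, 20, 1000)) + 0.3 * np.random.randn(1000))

  sma_10 = raw.rolling(window=10).mean()    # light smoothing, follows the signal closely
  sma_100 = raw.rolling(window=100).mean()  # heavy smoothing, larger lag

  print(sma_10.tail(), sma_100.tail())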

The image in figure 3 demonstrates the effects of time window sizes of 10 and 100 measurements on an incoming signal from a freezer's thermometer.

Figure 3: Moving Average

Exponential moving average

The exponential moving average is widely used in noise filtering, for example, in analysing changes in stock markets. Its main idea is that each measurement's weight (influence) decreases exponentially with its age. Thus, the evaluation relies more on recent measurements and considers older ones less.

EMA_t = \alpha \cdot X_t + (1 - \alpha) \cdot EMA_{t-1}

Figure 4: Exponential Moving Average

where:
EMA_t – the new smoothed value at time instant t
X_t – the measurement at time instant t
α (alpha) – smoothing factor between 0 and 1, reflecting the weight given to the most recent measurement
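A minimal pandas sketch of the recursion above; the alpha value and the generated series are illustrative assumptions:

  import numpy as np
  import pandas as pd

  raw = pd.Series(np.sin(np.linspace(0, 20, 1000)) + 0.3 * np.random.randn(1000))

  # Exponentially weighted moving average; adjust=False applies the recursive formula
  ema = raw.ewm(alpha=0.1, adjust=False).mean()

  print(ema.tail())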

As seen in figure 5, the exponential moving average with different weighting factor values preserves the shape of the initial signal. It has minimal lag while removing the noise, which makes it a handy smoothing technique.

Figure 5: Exponential Moving Average

Decimation

Decimation is a technique of excluding some entries from the initial time series to reduce overwhelming or redundant data. As the name suggests, usually every tenth entry is excluded, reducing the data by 10%. It is a simple method that brings significant benefits for over-measured processes with slow dynamics. With preserved time stamps, the data still allows the application of general time-series analysis techniques like forecasting.
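A simple sketch of the idea as described above, dropping every tenth entry of an illustrative DataFrame while keeping the time stamps (the index, column name and values are assumptions for the example):

  import numpy as np
  import pandas as pd

  # Illustrative over-sampled measurements with preserved time stamps
  index = pd.date_range("2024-01-01", periods=1000, freq="min")
  data = pd.DataFrame({"temperature": np.random.randn(1000)}, index=index)

  # Exclude every tenth entry (positions 0, 10, 20, ...), reducing the data by 10%
  decimated = data.drop(data.index[::10])

  print(len(data), len(decimated))  # 1000 -> 900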


[2] Dickey, D. A.; Fuller, W. A. (1979). “Distribution of the Estimators for Autoregressive Time Series with a Unit Root”. Journal of the American Statistical Association. 74 (366): 427–431. doi:10.1080/01621459.1979.10482531. JSTOR 2286348.
[3] Blair, R. Clifford; Higgins, James J. (1980). “A Comparison of the Power of Wilcoxon's Rank-Sum Statistic to That of Student's t Statistic Under Various Nonnormal Distributions”. Journal of Educational Statistics. 5 (4): 309–335. doi:10.2307/1164905. JSTOR 1164905.