Clustering Models

Introduction

Clustering is a methodology that belongs to the class of unsupervised machine learning. It finds regularities in data when group or class labels are absent, relying only on the internal structure of the data itself. Because of this powerful feature, clustering is often used early in a data analysis workflow, before classification or other analysis steps, to discover natural groups that may exist in the data.

This provides insightful information about the data's internal organisation: the possible groups, their number and distribution, and other regularities that help us better understand the data. To illustrate, consider grouping customers by their estimated income. It is natural to assume some threshold values, such as 1 KEUR per month, 10 KEUR per month, and so on. However:

  • Do the groups reflect a natural distribution of customers by their behaviour?
  • For instance, does a customer earning 10 KEUR per month behave differently from one earning 11 KEUR per month?

Most probably, customers' behaviour depends on several factors, such as occupation, age, and total household income. While the need to consider these factors is obvious, the grouping itself is not: it is unclear how exactly the different factors interact to determine which group a given customer belongs to. That is where clustering shows its strength, revealing the natural internal structure of the data (the customers, in this example).

In this context, a cluster refers to a collection of data points aggregated together because of certain similarities [1]. Within this chapter, two different approaches to clustering are discussed:

  • Cluster centroid-based, where the main idea is to find a centroid point representing the “centre of mass” of the cluster. In other words, the centroid represents a “typical” member of the cluster, which in most cases is an imaginary point rather than an actual data point.
  • Cluster density-based, where the density of points around a given point determines its membership in a cluster. In other words, the defining feature of a cluster is its density.

In both cases, a distance measure is used to estimate how far apart points or objects are and how dense the points around a given one are. Therefore, all factors used should be numerical, assuming a Euclidean space.
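
For illustration, the minimal sketch below computes the Euclidean distance between two points described by numeric factors; the factor choice and values are invented for this example:

<code python>
import numpy as np

# Two customers described by numeric factors: monthly income (KEUR) and age.
# The values are assumptions made up for this illustration.
a = np.array([10.0, 35.0])
b = np.array([11.0, 42.0])

# Euclidean distance: square root of the sum of squared coordinate differences
distance = np.sqrt(np.sum((a - b) ** 2))
print(f"Euclidean distance: {distance:.2f}")  # 7.07
</code>

Note that the two factors here live on different scales (thousands of euros versus years), which is precisely the trap addressed by the preprocessing steps below.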

Data preprocessing before clustering

Before clustering, several preparatory steps have to be performed:

  • Check that the data is metric: In clustering, the primary measure is Euclidean distance (in most cases), which requires numeric data. While it is possible to encode arbitrary data using numerical values, the encoding must maintain the semantics of numbers, i.e. 1 < 2 < 3. Good examples of naturally metric data are temperature and exam assessments; bad examples are gender and colour.
  • Select the proper scale: For the same reasons as the distance measure, the values of each dimension should be on the same scale. For instance, customers' monthly incomes in euros and their credit ratios are typically on different scales: the incomes in thousands, the ratios between 0 and 1. If the scales are not adjusted, the income dimension will dominate the distance estimation, distorting the overall clustering results. To avoid this trap, a universal scale is usually applied to all dimensions (see the sketch after this list). For instance:
    • Unity interval (min-max scaling): The factor's minimum value is subtracted from the given point's value, and the result is divided by the factor's range, i.e. x' = (x - min) / (max - min), giving results between 0 and 1.
    • Z-scale (standardisation): The factor's average value is subtracted from the given point's value, and the result is divided by the factor's standard deviation, i.e. z = (x - mean) / std, giving results distributed around 0 with a standard deviation of 1.
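
A minimal NumPy sketch of both scalings, using invented income and credit-ratio values (scikit-learn's MinMaxScaler and StandardScaler implement the same transformations):

<code python>
import numpy as np

# Hypothetical customer data: monthly income in EUR and credit ratio,
# deliberately on very different scales
data = np.array([
    [1500.0, 0.30],
    [4200.0, 0.55],
    [9800.0, 0.80],
    [2700.0, 0.45],
])

# Unity interval (min-max) scaling: (x - min) / (max - min), results in [0, 1]
mins = data.min(axis=0)
maxs = data.max(axis=0)
unity_scaled = (data - mins) / (maxs - mins)

# Z-scale (standardisation): (x - mean) / std, results centred on 0 with std 1
means = data.mean(axis=0)
stds = data.std(axis=0)
z_scaled = (data - means) / stds

print(unity_scaled)
print(z_scaled)
</code>

After either transformation, both dimensions contribute comparably to the distance estimation.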

Summary about clustering

  • There are many clustering methods besides the ones discussed here; however, all of them require prior knowledge of the problem domain.
  • All clustering methods require setting parameters that drive the algorithms. In most cases, choosing the values is not intuitive and requires careful fine-tuning.
  • With proper data encoding, clustering may provide significant value even in complex application domains, including medicine, customer behaviour analysis, and the fine-tuning of other data analysis algorithms.
  • In data analysis, clustering is one of the first methods used to explore the internal structure of the data before applying more informed methods.

To illustrate the mentioned algorithm groups, the following algorithms are discussed in detail (a brief usage sketch follows the list):

  • K-Means - a widely used centroid-based algorithm that uses distance as the main measure for grouping objects;
  • DBSCAN - a good example of a density-based algorithm, widely used in signal processing.
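
As a preview, the following minimal scikit-learn sketch runs both algorithms on synthetic data; the parameter values (n_clusters, eps, min_samples) are assumptions chosen for this invented example, not general recommendations:

<code python>
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

# Synthetic 2-D data: three loose groups of points, invented for illustration
rng = np.random.default_rng(42)
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(50, 2)),
    rng.normal(loc=(0, 3), scale=0.3, size=(50, 2)),
])
data = StandardScaler().fit_transform(data)  # bring both dimensions to the same scale

# Centroid-based clustering: the number of clusters must be set in advance
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(data)

# Density-based clustering: eps and min_samples drive the density criterion;
# points that belong to no dense region are labelled -1 (noise)
dbscan_labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(data)

print("K-Means cluster labels:", np.unique(kmeans_labels))
print("DBSCAN cluster labels: ", np.unique(dbscan_labels))
</code>

Note the difference in the parameters each algorithm needs: K-Means asks for the number of clusters up front, while DBSCAN infers the number of clusters from the density settings.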

[1] Education Ecosystem (LEDU), “Understanding K-means Clustering in Machine Learning,” Towards Data Science. https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1 – Cited 07.08.2024.