===== K-Means =====
| + | |||
| + | The first method discussed here is one of the most commonly used – K-means. K-means clustering is a method that splits the initial set of points (objects) into groups, using distance measure, representing a distance from the given point of the group to the group' | ||
| + | The algorithm steps schematically are represented in the following figure {{ref> | ||
| + | |||
| + | <figure K-means_steps> | ||
| + | {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | In the figure: | ||
| + | * **STEP 1: | ||
| + | * **STEP 2: | ||
| + | * **STEP 3:** For each point, the closest cluster centre, which is the point marker, is selected. | ||
| + | * **STEP 4: | ||
| + | * **STEP 5:** The initial cluster centre is being refined to minimise the average distance to it from each cluster point. As a result, cluster centres might no longer be physical points; instead, they become imaginary. | ||
| + | * **STEP 6: | ||
| + | |||
| + | Steps 4-6 are repeated until changes in cluster positions do not change or changes are insignificant. The distance is measured using Euclidian distance: | ||
| + | |||
| + | <figure Euclidian distance> | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | where: | ||
| + | * **Data points** - points {x< | ||
| + | * **K** – number of clusters set by the user. | ||
| + | * **r< | ||
| + | * **m< | ||
| + | * **D** – Squared Sum of all distances di to their assigned cluster centroids. | ||
| + | * **Goal** is to find such values of variables r< | ||
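To make the iterative procedure concrete, a minimal NumPy sketch of the assignment and update loop is given below; the array ''X'', the cluster count ''K'' and the function name ''k_means'' are illustrative and not part of the original material.

<code python>
import numpy as np

def k_means(X, K, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # STEP 2: choose K existing points as the initial cluster centres.
    centres = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # STEPS 3-4: assign every point to its closest centre (Euclidean distance).
        distances = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # STEP 5: refine each centre as the mean of the points assigned to it
        # (empty clusters are not handled in this sketch).
        new_centres = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        # STEP 6: stop once the centres no longer move significantly.
        if np.linalg.norm(new_centres - centres) < tol:
            centres = new_centres
            break
        centres = new_centres
    return labels, centres
</code>

In practice, a library implementation such as scikit-learn's ''KMeans'' is normally used, since it also handles empty clusters and multiple restarts.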
| + | |||
| + | Example of initial data and assigned cluster marks with cluster centres after running the K-means algorithm (figure {{ref> | ||
| + | |||
| + | <figure K-means_example_1> | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | Unfortunately, | ||
| + | Example of setting different numbers of cluster centres (figure {{ref> | ||
| + | |||
| + | <figure K-means_example_2> | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | ==== Elbow method ==== | ||
| + | |||
In K-means clustering, a practical technique – the Elbow method – is used to select a suitable number of clusters. It is based on finding the point at which adding more clusters no longer significantly improves the model's performance.
As explained, K-means clustering optimises the sum of squared errors (SSE), i.e. the squared distances between each point and its corresponding cluster centroid. Since the optimal number of clusters (NC) is not known initially, it is wise to increase the NC iteratively. The SSE decreases as the number of clusters increases because the distances to the cluster centres also decrease. However, there is a point after which the improvement in SSE diminishes significantly. This point is referred to as the "elbow".
| + | |||
| + | Steps of the method: | ||
| + | - **Plot SSE against the number of clusters: | ||
| + | * Computing the SSE for different values of NC, typically starting from NC=2 reasonable maximum value (e.g., 10 or 20). | ||
| + | * Plotting the SSE values on the y-axis and the number of clusters NC on the x-axis. | ||
| + | - **Observe the plot:** | ||
| + | * As the number of clusters NC increases, the SSE will decrease because clusters become more specialised. | ||
| + | * Initially, adding clusters will result in a significant drop in SSE. | ||
| + | * After a certain point, the reduction in SSE will slow down, not showing a significant drop in the SSE. | ||
| + | - **The " | ||
| + | * The point on the curve where the rate of decrease sharply levels off forms the " | ||
| + | * This is where adding more clusters beyond this point doesn' | ||
| + | - **Select optimal NC:** | ||
| + | * The value of NC at the elbow point is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance. | ||
| + | |||
| + | Since the method requires iteratively running the K-means algorithm, which might be resource-demanding, | ||
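A sketch of the procedure, assuming scikit-learn and matplotlib are available and using an illustrative synthetic data set ''X'', might look as follows:

<code python>
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)  # illustrative data

cluster_range = range(2, 11)
sse = []
for nc in cluster_range:
    model = KMeans(n_clusters=nc, n_init=10, random_state=0).fit(X)
    sse.append(model.inertia_)  # inertia_ = sum of squared distances to centres

plt.plot(list(cluster_range), sse, marker="o")
plt.xlabel("Number of clusters (NC)")
plt.ylabel("SSE")
plt.title("Elbow method")
plt.show()
</code>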
| + | |||
| + | Limitations: | ||
| + | * The elbow point is not always obvious; in some cases, the curve may not show a distinct " | ||
| + | * The elbow method is heuristic and might not always lead to the perfect number of clusters, especially if the data structure is complex. | ||
| + | * Other methods, like the Silhouette score, can complement the elbow method to help determine the optimal NC. | ||
| + | |||
| + | <figure Elbow_example> | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | The figure above (figure {{ref> | ||
| + | |||
| + | ==== Silhouette Score ==== | ||
| + | The Silhouette Score is a metric used to evaluate the quality of a clustering result. It measures how similar an object (point) is to its own cluster (cohesion) compared to other clusters (separation). The score ranges from −1 to +1, where higher values indicate better-defined clusters ((Peter J. Rousseeuw (1987). " | ||
| + | |||
| + | The Silhouette score considers two main factors for each data point: | ||
| + | |||
| + | * **Cohesion (a(i))** - The cohesion measure for the ith point is the average distance between the point and all other points in the same cluster. It measures the point' | ||
| + | * **Separation (b(i))** – The separation measure for the ith point estimates the average distance between the point and points in the nearest neighbouring cluster - the cluster that is not its own but is closest to it. A large value for b(i) indicates that the point is far away from the closest other cluster, meaning it is well-separated. | ||
| + | |||
| + | The silhouette score for a point i is then calculated as: | ||
| + | |||
| + | <figure Silhouette score > | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | where: | ||
| + | * s(i) is the silhouette score for point i. | ||
| + | * a(i) is the average distance from point i to all other points in the same cluster. | ||
| + | * b(i) is the average distance from point i to all points in the nearest other cluster. | ||
| + | * s(i) ≈+1 indicated that the point i is well clustered. | ||
| + | * s(i) around 0 indicates that the point lies close to the boundary between clusters. | ||
| + | * s(i) ≈-1 indicated that the point i was most probably wrongly assigned to the cluster. | ||
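Both the per-point values s(i) and the overall average can be computed with scikit-learn; the following sketch again uses an illustrative synthetic data set:

<code python>
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # illustrative data

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

s_values = silhouette_samples(X, labels)   # s(i) for every point
s_mean = silhouette_score(X, labels)       # average silhouette score
print(f"mean silhouette score: {s_mean:.3f}, "
      f"worst point: {s_values.min():.3f}, best point: {s_values.max():.3f}")
</code>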
| + | |||
| + | Steps of the method: | ||
| + | - **Plot silhouette score (SC) against the number of clusters: | ||
| + | * Computing the SC for different values of NC, typically starting from NC=2 reasonable maximum value (e.g., 10 or 20). | ||
| + | * Plotting the SC values on the y-axis and the number of clusters NC on the x-axis. | ||
| + | - **Observe the plot:** | ||
| + | * As the number of clusters NC increases, the SC shows different score values, which may or may not gradually decrease, as in the case of the " | ||
| + | * The main goal is to observe the maximum SC value and the corresponding NC value. | ||
| + | - **Select optimal NC:** | ||
| + | * The value of NC at the maximum SC value is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance. | ||
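A sketch of this selection procedure, under the same illustrative assumptions as above:

<code python>
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)  # illustrative data

scores = {}
for nc in range(2, 11):
    labels = KMeans(n_clusters=nc, n_init=10, random_state=0).fit_predict(X)
    scores[nc] = silhouette_score(X, labels)

best_nc = max(scores, key=scores.get)
print(f"optimal number of clusters: {best_nc} (SC = {scores[best_nc]:.3f})")
</code>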
| + | |||
| + | |||
| + | Limitations: | ||
| + | * It may not perform well if the data does not have a clear structure or if the clusters are of very different densities or sizes. | ||
| + | * The Silhouette score might not always match intuitive or domain-specific clustering insights. | ||
| + | |||
| + | An example is provided in the following figure {{ref> | ||
| + | |||
| + | <figure Silhouette_example> | ||
| + | {{ {{ : | ||
| + | < | ||
| + | </ | ||
| + | |||
| + | The user should look for the highest score, which in this case is for the 3-cluster option. | ||