
Clustering Algorithms: K-Means, Hierarchical, and DBSCAN

By Shamsher Haider, Big Data

Clustering is a key concept in unsupervised machine learning that involves grouping data points into clusters based on their similarities. This article provides a detailed comparison of three popular clustering algorithms: K-Means, Hierarchical Clustering, and DBSCAN. We analyze their mathematical principles, computational complexities, and effectiveness in different data scenarios.

1. K-Means Clustering: A Centroid-Guided Paradigm

K-Means clustering operates under a centroid-based paradigm, minimizing the within-cluster sum of squared distances (WCSS), a metric quantifying intra-cluster variance. Mathematically, the WCSS objective function can be formulated as:

WCSS = Σ_{k=1..K} Σ_{x_i ∈ C_k} ||x_i - c_k||^2

where x_i denotes a data point assigned to cluster C_k, c_k represents the centroid of cluster k, and ||·||^2 signifies the squared Euclidean distance.
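As a concrete check of the objective, WCSS can be computed directly in a few lines of NumPy; the points, labels, and centroids below are illustrative values, not taken from any particular dataset:

```python
import numpy as np

def wcss(X, labels, centroids):
    """Within-cluster sum of squared distances to each point's assigned centroid."""
    return sum(
        np.sum((X[labels == k] - c) ** 2)
        for k, c in enumerate(centroids)
    )

# Two tight groups around (0, 0) and (10, 10).
X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[0.5, 0.0], [10.5, 10.0]])
print(wcss(X, labels, centroids))  # 4 points, each 0.25 from its centroid -> 1.0
```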

The K-Means algorithm follows an iterative refinement process:

  1. Initialization: Specify the desired number of clusters (k) and initialize k data points as initial centroids (often chosen randomly).
  2. Assignment: Assign each data point to the nearest centroid based on the Euclidean distance metric.
  3. Update: Recompute the centroids as the mean of the points assigned to each cluster, effectively relocating them to the center of their respective clusters.
  4. Repeat: Steps 2 and 3 until a convergence criterion is met (e.g., centroids stabilize, WCSS change falls below a threshold).
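The four steps above can be sketched as a bare-bones Lloyd's algorithm in NumPy. This is an illustration only, with no k-means++ seeding and no empty-cluster handling; production code should prefer a library implementation such as scikit-learn's KMeans:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm (no k-means++, no empty-cluster handling)."""
    rng = np.random.default_rng(seed)
    # 1. Initialization: pick k distinct data points as the starting centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. Assignment: each point goes to its nearest centroid (squared Euclidean).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # 3. Update: each centroid becomes the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. Repeat until the centroids stop moving.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated groups; the algorithm recovers them for any sane init.
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
labels, centroids = kmeans(X, k=2)
```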

Strengths:
  • Computational Efficiency: K-Means runs in O(n * k * T) time, where n is the number of data points, k the number of clusters, and T the number of iterations. Since k and T are typically small relative to n, the cost is effectively linear in the dataset size, making the algorithm highly efficient for large datasets.
  • Scalability: The algorithm scales well with increasing data size due to its linear time complexity.
  • Interpretability: The centroids offer a straightforward interpretation of cluster centers, facilitating human comprehension of the clustering results.

Limitations:
  • Sensitivity to Initialization: K-Means is highly susceptible to the initial placement of centroids. Suboptimal initial centroids can lead to local minima and potentially compromise the quality of the final clusters; smarter seeding schemes such as k-means++ are commonly used to mitigate this.
  • Predefined Cluster Number: The user must pre-define the number of clusters (k), which can be challenging when the inherent cluster structure of the data is not readily apparent (heuristics such as the elbow method or silhouette analysis can help here).
  • Shape Bias: K-Means assumes spherical clusters and might struggle with data exhibiting elongated or irregular shapes. Data preprocessing techniques like Principal Component Analysis (PCA) can be employed to mitigate this limitation in some cases.
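The shape bias is easy to demonstrate on scikit-learn's two-moons toy dataset (an illustrative sketch, assuming scikit-learn is available): K-Means with k = 2 cuts straight across the two crescents instead of recovering them.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons

# Two interleaved crescent-shaped clusters.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each K-Means cluster ends up mixing points from both true moons,
# because the spherical assumption splits the plane roughly in half.
for k in (0, 1):
    print(k, set(y[labels == k]))
```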

Illustrative Example: Consider a dataset representing customer purchase behavior. K-Means clustering, with an appropriate number of clusters (k) defined, could effectively segment customers into distinct purchasing groups based on their buying habits. However, if the customer behavior data exhibits non-spherical clusters (e.g., elongated purchasing patterns), K-Means might not capture the optimal cluster structure.
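A sketch of this scenario with scikit-learn, using synthetic "customer" features invented purely for illustration (annual spend and monthly purchase count):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical features: [annual spend in $, purchases per month].
budget = rng.normal([200.0, 2.0], [30.0, 0.5], size=(50, 2))
premium = rng.normal([900.0, 6.0], [50.0, 1.0], size=(50, 2))
X = np.vstack([budget, premium])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # one centroid near (200, 2), one near (900, 6)
print(km.inertia_)          # inertia_ is exactly the WCSS objective
```

Note that scikit-learn exposes the final WCSS as `inertia_`, which ties the library output directly back to the objective function above.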

2. Hierarchical Clustering: Unveiling the Cluster Hierarchy

Hierarchical clustering constructs a hierarchy of clusters, either in a top-down (divisive) or bottom-up (agglomerative) fashion. Here’s a breakdown of the two main approaches:

  • Divisive: This approach starts with all data points in a single cluster. It then iteratively splits the most dissimilar cluster into two sub-clusters based on a dissimilarity criterion (e.g., Euclidean distance combined with a linkage rule such as Ward's method) until a stopping criterion is met (e.g., desired number of clusters).
  • Agglomerative: This approach starts with each data point as an individual cluster. It then iteratively merges the two most similar clusters based on a distance metric, gradually building a hierarchy of clusters. The resulting hierarchy can be visualized as a dendrogram, a tree-like structure depicting the merging process.
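The agglomerative merge sequence and the dendrogram cut can be sketched with SciPy (the toy coordinates below are chosen for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six points forming two well-separated groups.
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]], dtype=float)

# Bottom-up merges with Ward linkage; Z encodes the full dendrogram.
Z = linkage(X, method="ward")

# Cut the tree so that at most 2 flat clusters remain.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 1 2 2 2]
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` plots the tree, which is the usual way to eyeball a sensible cut height.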

Strengths:
  • No Predefined Cluster Number: Hierarchical clustering does not require pre-specifying the number of clusters. The dendrogram provides a visual representation of the hierarchical relationships between clusters, aiding in the selection of an appropriate number of clusters based on the desired level of granularity.
  • Flexibility: This approach can handle clusters of various shapes and sizes due to its non-parametric nature. It is not restricted by assumptions about the underlying cluster geometry.

Limitations:
  • Computational Cost: Agglomerative hierarchical clustering is expensive for large datasets: the naive algorithm takes O(n^3) time and O(n^2) memory for n data points, and even optimized implementations require at least O(n^2), because pairwise distances between clusters must be computed and updated throughout the merging process.
  • Interpretability: While the dendrogram offers valuable insights into the hierarchical relationships between clusters, determining the optimal number of clusters from the dendrogram can be subjective.

3. DBSCAN: Density-Based Clustering for Discovering Arbitrary Shapes

Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a clustering algorithm that identifies clusters of arbitrary shapes by focusing on data point density. Unlike K-Means and hierarchical clustering, DBSCAN does not require pre-defining the number of clusters or assuming specific cluster shapes.

Here’s a simplified overview of DBSCAN:

  • Core points: Data points with at least a minimum number of neighbors (the minPts threshold) within a specified radius (epsilon).
  • Border points: Have fewer neighbors than the minimum threshold themselves, but lie within epsilon of a core point and are therefore attached to that core point's cluster.
  • Noise points: Neither core points nor border points, representing outliers or low-density regions.

DBSCAN iteratively explores the data, identifying core points and expanding clusters through chains of density-connected core points. Border points are assigned to the cluster of a nearby core point, and noise points are left unclustered.
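A minimal scikit-learn sketch on the two-moons dataset; the `eps` and `min_samples` values below are illustrative choices for this particular data, not universal defaults:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: non-spherical clusters.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
# Labels >= 0 are cluster ids; -1 marks noise points.
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters)
```

Unlike the K-Means run on the same data, DBSCAN follows the crescents' density and recovers both moons without being told how many clusters to find.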

Strengths:
  • Flexibility: Handles clusters of arbitrary shapes and sizes effectively.
  • Robustness to Noise: Can identify clusters in the presence of noise points.
  • No Predefined Clusters: Does not necessitate specifying the number of clusters beforehand.

Limitations:
  • Curse of Dimensionality: Performance can deteriorate with high-dimensional data, where pairwise distances become less discriminative and a single epsilon rarely suits all regions.
  • Parameter Sensitivity: The clustering results depend strongly on epsilon and the minimum-neighbor threshold; a k-distance plot is a common heuristic for choosing epsilon.
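Parameter sensitivity can be seen directly by sweeping epsilon on fixed data (the blob centers and eps values below are arbitrary illustrative choices):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Three tight blobs at known centers.
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [5, 5], [10, 0]],
                  cluster_std=0.5, random_state=0)

results = {}
for eps in (0.05, 0.5, 8.0):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_noise = int((labels == -1).sum())
    results[eps] = (n_clusters, n_noise)
    print(f"eps={eps}: {n_clusters} clusters, {n_noise} noise points")
# Too small an eps labels almost everything as noise;
# too large an eps density-connects all three blobs into one cluster.
```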

By incorporating DBSCAN into your clustering toolbox, you gain the ability to uncover valuable insights from data exhibiting complex cluster structures and containing noise.