Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a "good" hierarchical clustering is one that minimizes a particular cost function [23].
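The "particular cost function" referred to above is Dasgupta's cost: for a binary tree T over the data points, it sums, over every pair {i, j}, the similarity w(i, j) weighted by the number of leaves under the pair's least common ancestor, so dissimilar points should be split near the root. A minimal sketch of this computation, assuming a nested-tuple tree encoding (leaves are integer point indices, internal nodes are pairs) chosen here for illustration, not taken from the paper:

```python
def leaves(tree):
    """Return the set of leaf indices under `tree`."""
    if isinstance(tree, int):
        return {tree}
    left, right = tree
    return leaves(left) | leaves(right)

def dasgupta_cost(tree, sim):
    """cost(T) = sum over pairs {i, j} of sim[i][j] * |leaves(T[i v j])|,
    where T[i v j] is the subtree rooted at the least common ancestor of i and j."""
    if isinstance(tree, int):
        return 0
    left, right = tree
    n_here = len(leaves(tree))
    # Pairs separated at this node have this node as their LCA,
    # so they each contribute sim * (leaves under this node).
    split_cost = sum(sim[i][j] * n_here
                     for i in leaves(left) for j in leaves(right))
    return split_cost + dasgupta_cost(left, sim) + dasgupta_cost(right, sim)

# Toy instance: points 0,1 are very similar, as are 2,3; cross pairs are not.
sim = [[0, 9, 1, 1],
       [9, 0, 1, 1],
       [1, 1, 0, 9],
       [1, 1, 9, 0]]
good = ((0, 1), (2, 3))   # separates the dissimilar groups at the root
bad  = ((0, 2), (1, 3))   # splits similar pairs near the root
print(dasgupta_cost(good, sim), dasgupta_cost(bad, sim))  # -> 52 84
```

As the example shows, the tree that keeps similar points together deep in the hierarchy achieves the lower cost, matching the property described in the abstract.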

Bibliographic Details
Main Authors: Cohen-Addad, Vincent (Author), Kanade, Varun (Author), Mallmann-Trenn, Frederik (Contributor), Mathieu, Claire (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2019-06-27T18:01:44Z.
Online Access: Get fulltext
LEADER 02203 am a22002053u 4500
001 121430
042 |a dc 
100 1 0 |a Cohen-Addad, Vincent  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Mallmann-Trenn, Frederik  |e contributor 
700 1 0 |a Kanade, Varun  |e author 
700 1 0 |a Mallmann-Trenn, Frederik  |e author 
700 1 0 |a Mathieu, Claire  |e author 
245 0 0 |a Hierarchical Clustering: Objective Functions and Algorithms 
260 |b Association for Computing Machinery (ACM),   |c 2019-06-27T18:01:44Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/121430 
520 |a Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a "good" hierarchical clustering is one that minimizes a particular cost function [23]. He showed that this cost function has certain desirable properties: To achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy, and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining "good" objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a "natural" ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem and design algorithms for this scenario. 
546 |a en_US 
655 7 |a Article 
773 |t Journal of the ACM