Information theoretical clustering is hard to approximate
Author: | Eduardo Sany Laber, Ferdinando Cicalese |
---|---|
Year of publication: | 2018 |
Subject: |
FOS: Computer and information sciences
Current (mathematics); 02 engineering and technology; F.2.2; I.5.3; Library and Information Sciences; Measure (mathematics); Clustering; Combinatorics; Entropy (classical thermodynamics); Computer Science - Data Structures and Algorithms; 0202 electrical engineering, electronic engineering, information engineering; Partition (number theory); Data Structures and Algorithms (cs.DS); Time complexity; Physics; computational complexity; 020206 networking & telecommunications; Function (mathematics); Computer Science Applications; impurity measures; Norm (mathematics); Probability distribution; channel quantization; mutual information maximization; Information Systems |
DOI: | 10.48550/arxiv.1812.07075 |
Description: | An impurity measure $I: \mathbb{R}^{d} \mapsto \mathbb{R}^{+}$ is a function that assigns to a $d$-dimensional vector $\mathbf{v}$ a non-negative value $I(\mathbf{v})$ such that the more homogeneous $\mathbf{v}$ is, with respect to the values of its coordinates, the larger its impurity. A well-known example is the entropy impurity. We study the problem of clustering based on the entropy impurity measure. Let $V$ be a collection of $n$ $d$-dimensional vectors with non-negative components. Given $V$ and an impurity measure $I$, the goal is to find a partition $\mathcal{P}$ of $V$ into $k$ groups $V_{1},\ldots,V_{k}$ that minimizes the sum of the impurities of the groups in $\mathcal{P}$, i.e., $I(\mathcal{P}) = \sum_{i=1}^{k} I\left(\sum_{\mathbf{v} \in V_{i}} \mathbf{v}\right)$. Impurity minimization has been widely used as a quality-assessment measure in probability distribution clustering (KL-divergence) as well as in categorical clustering. However, in contrast to the case of metric-based clustering, the current knowledge of impurity-measure-based clustering, in terms of approximation and inapproximability results, is very limited. Here, we contribute to changing this scenario by proving that the problem of finding a clustering that minimizes the entropy impurity measure is APX-hard, i.e., there exists a constant $\epsilon > 0$ such that no polynomial-time algorithm can guarantee a $(1+\epsilon)$-approximation under the standard complexity hypothesis $P \neq NP$. The inapproximability holds even when all vectors have the same $\ell_{1}$ norm. This result places theoretical limitations on the computational efficiency achievable in the quantization of discrete memoryless channels, a problem that has recently attracted significant attention in the signal processing community.
In addition, it also settles a question that remained open in previous work on this topic [Chaudhuri and McGregor COLT 08; Ackermann et al. ECCC 11]. |
Database: | OpenAIRE |
External link: |
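To make the objective in the description concrete, the following is a minimal sketch of evaluating $I(\mathcal{P}) = \sum_{i=1}^{k} I\left(\sum_{\mathbf{v} \in V_{i}} \mathbf{v}\right)$ for the entropy impurity, taken here in the standard form $I(\mathbf{u}) = \|\mathbf{u}\|_{1} \cdot H(\mathbf{u}/\|\mathbf{u}\|_{1})$ with $H$ the Shannon entropy; the function names and the toy data are illustrative, not from the paper:

```python
import math

def entropy_impurity(u):
    """Entropy impurity of a non-negative vector u:
    I(u) = ||u||_1 * H(u / ||u||_1), with 0 * log 0 taken as 0.
    A vector concentrated on one coordinate has impurity 0."""
    total = sum(u)
    if total == 0:
        return 0.0
    return -sum(x * math.log2(x / total) for x in u if x > 0)

def partition_impurity(partition):
    """I(P): sum, over the groups of the partition, of the impurity
    of the component-wise sum of the vectors in each group."""
    cost = 0.0
    for group in partition:
        group_sum = [sum(coords) for coords in zip(*group)]
        cost += entropy_impurity(group_sum)
    return cost

# Toy instance: four 2-dimensional vectors, k = 2 groups.
V = [[4, 0], [3, 1], [0, 4], [1, 3]]
good = partition_impurity([[V[0], V[1]], [V[2], V[3]]])  # similar vectors together
bad = partition_impurity([[V[0], V[2]], [V[1], V[3]]])   # dissimilar vectors mixed
```

On this toy instance the grouping of similar vectors yields a strictly lower objective value (`good < bad`); the paper's APX-hardness result says that finding the minimizing partition cannot be approximated within some constant factor $1+\epsilon$ in polynomial time unless $P = NP$.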