Abstract: |
Since the representative capacity of graph-based clustering methods is usually limited by the graph constructed on the original features, it is attractive to investigate whether graph neural networks (GNNs), a powerful extension of neural networks to graphs, can be applied to augment the capacity of graph-based clustering methods. The core problems come from two aspects. On the one hand, the graph is unavailable in most general clustering scenarios, so how to construct a graph on non-graph data, and the quality of the resulting graph, are usually the most important concerns. On the other hand, given $n$ samples, graph-based clustering methods usually consume at least $\mathcal{O}(n^{2})$ time to build graphs, and graph convolution requires nearly $\mathcal{O}(n^{2})$ operations for a dense graph and $\mathcal{O}(|\mathcal{E}|)$ for a sparse one with $|\mathcal{E}|$ edges. Accordingly, both graph-based clustering and GNNs suffer from severe inefficiency. To tackle these problems, we propose a novel clustering method, AnchorGAE, with self-supervised graph estimation and efficient graph convolution. We first show how to convert a non-graph dataset into a graph dataset by introducing a generative graph model and anchors. A bipartite graph is built by generating anchors and estimating the connectivity distributions between the original points and the anchors. We then show that the constructed bipartite graph reduces the computational complexity of graph convolution from $\mathcal{O}(n^{2})$ and $\mathcal{O}(|\mathcal{E}|)$ to $\mathcal{O}(n)$. The succeeding steps for clustering can be easily designed as $\mathcal{O}(n)$ operations. Interestingly, the anchors naturally lead to a siamese architecture with the help of the Markov process.
Furthermore, the estimated bipartite graph is updated dynamically according to the features extracted by the GNN modules, so that the quality of the graph is promoted by exploiting the high-level information captured by GNNs. However, we theoretically prove that this self-supervised paradigm frequently results in a collapse, which in our experiments often occurs within 2-3 update iterations, especially when the model is well-trained. A specific strategy is accordingly designed to prevent the collapse. The experiments support the theoretical analysis and show the superiority of AnchorGAE.
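The complexity argument in the abstract can be made concrete with a minimal sketch. This is an illustration, not the paper's implementation: the uniform anchor sampling, the sparsity level `k`, and the closed-form nearest-anchor weighting below are all assumptions, whereas AnchorGAE estimates the connectivity distributions with a self-supervised generative model. The sketch only shows why an $n \times m$ bipartite graph makes both construction and convolution linear in $n$ for a fixed number of anchors $m$.

```python
import numpy as np

def build_anchor_bipartite_graph(X, m=16, k=4, seed=0):
    """Connect each of the n points to its k nearest anchors.

    Cost is O(n * m) rather than the O(n^2) of a full pairwise graph.
    Anchor choice (uniform sampling) and the closed-form k-NN weights
    are illustrative assumptions, not AnchorGAE's estimation scheme.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    anchors = X[rng.choice(n, size=m, replace=False)]
    # squared Euclidean distances from every point to every anchor: O(n * m)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    order = np.argsort(d2, axis=1)[:, :k + 1]  # k nearest anchors + one extra
    B = np.zeros((n, m))
    for i in range(n):
        d = d2[i, order[i]]
        # nearer anchors get larger weights; each row sums to (almost exactly) 1
        w = (d[k] - d[:k]) / (k * d[k] - d[:k].sum() + 1e-12)
        B[i, order[i, :k]] = w
    return anchors, B

def bipartite_convolution(B, X):
    """One propagation step through the bipartite graph.

    B @ (B.T @ X) costs O(n * m * d) -- linear in n for fixed m --
    versus O(n^2 * d) when a dense n-by-n adjacency is used.
    """
    return B @ (B.T @ X)
```

For example, with n = 1000 points and m = 16 anchors, `B` is a 1000-by-16 sparse, row-normalized matrix, and the implicit n-by-n adjacency `B @ B.T` is never formed explicitly; propagating features through it remains an $\mathcal{O}(n)$ operation.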