Can affinity propagation be viewed as just a good way to initialize standard methods, such as k-centers clustering?
Tricky question. If you plan on using k-centers clustering no matter what, then an exact clustering algorithm would be a terrific way to initialize it! However, if the question is whether k-centers clustering or affinity propagation would be doing the bulk of the work, the answer is affinity propagation. One thing we've tried is using the output of affinity propagation after each iteration to initialize k-centers clustering. While k-centers clustering always increased the objective function a little for the data we looked at, the final objective achieved by affinity propagation was significantly higher than that achieved by most of the k-centers clustering runs. This indicates that affinity propagation solves the problem in quite a different way from k-centers clustering.

Is it necessary to provide as input both s(i,j) and s(j,i) if it is always the case that s(i,j)=s(j,i), i.e., if the similarities are symmetric? (Here, s(i,j) is the similarity of point i to point j.)
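The initialization experiment described in the answer above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: it uses scikit-learn's AffinityPropagation on a precomputed similarity matrix of negative squared Euclidean distances (the convention from the affinity propagation paper), and a small hand-rolled k-centers (k-medoids-style) refinement loop seeded with the exemplars that affinity propagation found. The toy data, the seed, and the refinement loop are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy data (assumed): three well-separated 2-D clusters of 20 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2))
               for c in ([0, 0], [4, 4], [0, 4])])

# Similarity s(i, j) = -||x_i - x_j||^2 (symmetric, negative squared distance).
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

def net_similarity(S, exemplars):
    # Sum, over all points, of the similarity to the best-matching exemplar.
    return S[:, exemplars].max(axis=1).sum()

# Run affinity propagation and take its exemplars as the initialization.
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
exemplars = np.asarray(ap.cluster_centers_indices_)

def k_centers_refine(S, exemplars, max_iters=50):
    """Plain k-centers refinement: alternate assigning points to their most
    similar exemplar and re-picking each cluster's best exemplar. Neither
    step can decrease the net similarity."""
    exemplars = exemplars.copy()
    for _ in range(max_iters):
        labels = S[:, exemplars].argmax(axis=1)
        new = exemplars.copy()
        for k in range(len(exemplars)):
            members = np.flatnonzero(labels == k)
            if members.size:
                # The member with the largest total similarity to its cluster.
                new[k] = members[S[np.ix_(members, members)].sum(axis=0).argmax()]
        if np.array_equal(new, exemplars):
            break
        exemplars = new
    return exemplars

refined = k_centers_refine(S, exemplars)
print("net similarity (affinity propagation):", net_similarity(S, exemplars))
print("net similarity (after k-centers):     ", net_similarity(S, refined))
```

Because each refinement step is greedy with respect to the net similarity, feeding affinity propagation's exemplars into k-centers clustering can only hold or slightly raise the objective, which matches the small improvements described above.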
- When the clusters found by affinity propagation are fed into the standard k-centers clustering method, the net similarity sometimes increases (albeit only slightly). Why?
- Is there a connection between affinity propagation and other methods, like hierarchical agglomerative clustering and spectral clustering (normalized cuts)?