This study addresses an image-matching problem in challenging cases, such as large scene variations or textureless scenes. To gain robustness to such situations, most previous
studies have attempted to encode the global contexts of a scene via graph neural networks or transformers. However, these contexts do not explicitly represent high-level contextual
information, such as structural shapes or semantic instances; therefore, the encoded features are still not sufficiently discriminative in challenging scenes. We propose a novel image-matching method that applies a topic-modeling strategy to encode high-level contexts in images. The proposed method learns latent semantic instances, called topics, explicitly models an image as a multinomial distribution over these topics, and then performs probabilistic feature matching. This approach improves matching robustness by focusing on the same semantic areas across images. In addition, the inferred topics make the matching results interpretable, rendering our method explainable. Extensive experiments on outdoor and indoor datasets show that our method outperforms other state-of-the-art methods, particularly in challenging cases.
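To illustrate the idea, the sketch below shows one way topic-assisted probabilistic matching could look: each local feature is soft-assigned a multinomial distribution over latent topics, and pairwise similarity is modulated by topic co-assignment before normalizing into match probabilities. All names, sizes, and the random topic embeddings here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup -- dimensions and embeddings are illustrative stand-ins.
D, K = 8, 4                          # feature dimension, number of latent topics
topic_emb = rng.normal(size=(K, D))  # stand-in for learned topic embeddings

def topic_distribution(feats):
    # Soft-assign each local feature to a multinomial over K topics.
    return softmax(feats @ topic_emb.T, axis=-1)          # (N, K)

def match_probabilities(feats_a, feats_b):
    # Raw feature similarity between every pair of local features.
    sim = feats_a @ feats_b.T                             # (Na, Nb)
    # Topic co-assignment: how likely a pair lies in the same semantic area.
    co = topic_distribution(feats_a) @ topic_distribution(feats_b).T
    # Down-weight cross-topic pairs, then normalize each row to probabilities.
    return softmax(sim * co, axis=-1)                     # rows sum to 1

feats_a = rng.normal(size=(5, D))    # 5 local features from image A
feats_b = rng.normal(size=(7, D))    # 7 local features from image B
P = match_probabilities(feats_a, feats_b)
```

The topic co-assignment term also provides the interpretability mentioned above: inspecting `topic_distribution` shows which semantic area each feature was matched within.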
Related publications
1. K. T. Giang, S. Song, S. Jo, TopicFM+: Boosting Accuracy and Efficiency of Topic-Assisted Feature Matching, IEEE Transactions on Image Processing, 33: 6016-6028, Oct 2024 [LINK] [PDF]
2. K. T. Giang, S. Song, S. Jo, TopicFM: Robust and Interpretable Topic-Assisted Feature Matching, 37th AAAI Conference on Artificial Intelligence (AAAI-23), 2023 [PDF]