Volume 3, Issue 3
Fast Gradient Computation for Gromov-Wasserstein Distance

Wei Zhang, Zihao Wang, Jie Fan, Hao Wu & Yong Zhang

J. Mach. Learn., 3 (2024), pp. 282-299.

Published online: 2024-09

[An open-access article; the PDF is free to any online user.]

Abstract

The Gromov-Wasserstein distance is a notable extension of optimal transport. In contrast to the classic Wasserstein distance, it solves a quadratic assignment problem that minimizes the pairwise distance distortion under the transport of distributions, and it therefore applies to distributions supported on different spaces. These properties make the Gromov-Wasserstein distance widely applicable in fields such as computer graphics and machine learning. However, computing the Gromov-Wasserstein distance and its transport plan is expensive. The well-known entropic Gromov-Wasserstein approach has cubic complexity, because the matrix multiplications needed to compute the gradient of the Gromov-Wasserstein loss must be repeated in every iteration; this is the key bottleneck of the method. Existing acceleration methods focus on sampling and approximation, which leads to low accuracy or incomplete transport plans. In this work, we propose a novel method that accelerates accurate gradient computation via dynamic programming, reducing the complexity from cubic to quadratic. This breaks the original computational bottleneck, and the entropic solution can be obtained in quadratic total time, which is nearly optimal since the transport plan alone has a quadratic number of entries. Furthermore, the method extends easily to several variants. Extensive experiments validate the efficiency and effectiveness of our method.
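
To make the bottleneck concrete, below is a minimal sketch (ours, not the authors' implementation) of the standard entropic Gromov-Wasserstein scheme in the style of Peyré, Cuturi and Solomon (2016) for the squared loss; all function and parameter names are illustrative. The marked triple matrix product is the cubic gradient step that the paper replaces with a quadratic-time dynamic-programming computation. The helper dist_matvec_1d then illustrates the flavor of such speedups with a standard prefix-sum identity for products with a 1D distance matrix; whether this matches the paper's actual recursion is our assumption, as the abstract does not spell out the details.

import numpy as np

def entropic_gw(C1, C2, p, q, eps=1e-2, outer_iters=50, sinkhorn_iters=200):
    # Entropic GW coupling between metric-measure spaces (C1, p) and (C2, q).
    # Numerical stabilization (log-domain Sinkhorn) is omitted for brevity.
    n, m = len(p), len(q)
    T = np.outer(p, q)  # feasible initial transport plan
    # Constant part of the gradient for the squared loss L(a, b) = (a - b)^2.
    const = np.outer((C1 ** 2) @ p, np.ones(m)) + np.outer(np.ones(n), (C2 ** 2) @ q)
    for _ in range(outer_iters):
        # Gradient of the GW loss at T. The triple product below costs
        # O(n^2 m + n m^2), i.e. cubic when n = m -- the repeated matrix
        # multiplications the abstract identifies as the key bottleneck.
        G = const - 2.0 * C1 @ T @ C2.T
        # Sinkhorn projection with G as the ground cost.
        K = np.exp(-G / eps)
        u, v = np.ones(n), np.ones(m)
        for _ in range(sinkhorn_iters):
            u = p / (K @ v)
            v = q / (K.T @ u)
        T = u[:, None] * K * v[None, :]
    return T

def dist_matvec_1d(x, v):
    # Compute D @ v for D[i, j] = |x[i] - x[j]| without forming D:
    # sort once, then sweep with prefix sums -- O(n log n) total,
    # O(n) if x is already sorted.
    order = np.argsort(x)
    xs, vs = x[order], v[order]
    s = np.cumsum(vs)        # running sums of v
    t = np.cumsum(xs * vs)   # running sums of x * v
    # For sorted x: (D v)_i = x_i * (2 s_i - s_n) - 2 t_i + t_n.
    out_sorted = xs * (2 * s - s[-1]) - 2 * t + t[-1]
    out = np.empty_like(out_sorted)
    out[order] = out_sorted
    return out

# Sanity check of the prefix-sum identity against the dense product:
#   x, v = np.random.rand(100), np.random.rand(100)
#   np.allclose(dist_matvec_1d(x, v), np.abs(x[:, None] - x[None, :]) @ v)  # True

Applying such a linear-time pass column by column to T yields products like C1 @ T in quadratic total time when the cost matrix has this kind of structure, which is the order of improvement the abstract claims; the paper's actual recursion and the class of cost matrices it covers are given in the full text.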

Keywords: optimal transport, Gromov-Wasserstein distance, fast gradient computation algorithm, fast algorithm.


Copyright © Global Science Press

BibTeX:
@Article{JML-3-282,
  author  = {Zhang, Wei and Wang, Zihao and Fan, Jie and Wu, Hao and Zhang, Yong},
  title   = {Fast Gradient Computation for Gromov-Wasserstein Distance},
  journal = {Journal of Machine Learning},
  year    = {2024},
  volume  = {3},
  number  = {3},
  pages   = {282--299},
  issn    = {2790-2048},
  doi     = {10.4208/jml.240416},
  url     = {http://global-sci.org/intro/article_detail/jml/23417.html}
}

Citation:
Wei Zhang, Zihao Wang, Jie Fan, Hao Wu & Yong Zhang. (2024). Fast Gradient Computation for Gromov-Wasserstein Distance. Journal of Machine Learning. 3 (3). 282-299. doi:10.4208/jml.240416