Journal of Northeastern University(Natural Science) ›› 2023, Vol. 44 ›› Issue (3): 331-339.DOI: 10.12068/j.issn.1005-3026.2023.03.004

• Information & Control •

Two-Stage Inpainting Algorithm Based on U-net Edge Generation and Hypergraph Convolution

LI Hai-yan1, XIONG Li-chang1, GUO Lei1, LI Hai-jiang2   

  • Revised: 2022-01-07  Accepted: 2022-01-07  Published: 2023-03-24

Abstract: To achieve reasonable structure inpainting and fine texture reconstruction for large irregular missing areas with complex backgrounds, a two-stage inpainting algorithm based on U-net edge generation and hypergraph convolution is proposed. First, the image to be repaired is fed into a coarse inpainting network based on U-net gated convolution, in which skip connections propagate the contextual information of the image to deeper layers to obtain rich image detail. Down-sampling extracts the edge features of the missing area and up-sampling restores its edge details, while hybrid dilated convolution enlarges the receptive field to capture further texture information. Subsequently, the coarse result is fed into a refinement network with hypergraph convolution, which captures and learns the hypergraph structure of the input image; a cross-correlation matrix of spatial features is computed to capture the spatial feature structure and to further improve structural integrity and fine-grained detail. Finally, the refined result is passed to a discriminator for adversarial optimization to further improve the inpainting quality. Experiments are carried out on internationally published datasets. The results demonstrate that under large-area loss the proposed algorithm generates reasonable structures with good color consistency and abundant texture detail, and that its visual quality, PSNR, SSIM and L1 loss are superior to those of the compared algorithms.
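The coarse network's two key ingredients, gated convolution and hybrid dilated convolution, can be illustrated with a minimal NumPy sketch. This is not the paper's network: it is single-channel, uses random untrained weights, and omits the U-net encoder-decoder and skip connections; it only shows the gating mechanism (a learned soft mask scaling the features, which lets the network down-weight invalid pixels in the missing region) and how stacking dilation rates 1, 2, 3 enlarges the receptive field.

```python
import numpy as np

def conv2d(x, w, dilation=1):
    """Naive 2D convolution (valid padding) with optional dilation."""
    k = w.shape[0]
    span = dilation * (k - 1) + 1          # effective kernel extent
    H, W = x.shape
    out = np.zeros((H - span + 1, W - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = np.sum(patch * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv(x, w_feat, w_gate, dilation=1):
    """Gated convolution: a sigmoid gate in (0, 1) scales the feature
    response elementwise, acting as a learned soft validity mask."""
    feat = np.tanh(conv2d(x, w_feat, dilation))
    gate = sigmoid(conv2d(x, w_gate, dilation))
    return feat * gate

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
w_f = rng.standard_normal((3, 3)) * 0.1
w_g = rng.standard_normal((3, 3)) * 0.1

# Hybrid dilated convolution: stack gated convs with increasing dilation
# rates so the receptive field grows without pooling away detail.
y = x
for d in (1, 2, 3):
    y = gated_conv(y, w_f, w_g, dilation=d)
print(y.shape)   # (4, 4): 16 -> 14 (d=1) -> 10 (d=2) -> 4 (d=3)
```

With valid padding each 3x3 kernel at dilation d removes 2d rows and columns, so the 16x16 input shrinks to 4x4; a real inpainting network would pad to preserve spatial size.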
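For the refinement stage, the propagation step of a hypergraph convolution layer can be sketched as follows. This follows the standard HGNN-style formulation X' = σ(D_v^{-1/2} H W D_e^{-1} Hᵀ D_v^{-1/2} X Θ) with a fixed toy incidence matrix and identity hyperedge weights; the paper's refine network instead derives the hypergraph structure from the cross-correlation of spatial features, which is not reproduced here.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer.
    X:     (n_nodes, d)       node features
    H:     (n_nodes, n_edges) incidence matrix (H[i, e] = 1 if node i in edge e)
    Theta: (d, d_out)         learnable weight matrix
    """
    W = np.eye(H.shape[1])                  # hyperedge weights (identity here)
    Dv = np.diag(H.sum(axis=1))             # node degree matrix
    De = np.diag(H.sum(axis=0))             # hyperedge degree matrix
    Dv_inv_sqrt = np.linalg.inv(np.sqrt(Dv))
    De_inv = np.linalg.inv(De)
    # Normalized propagation: features flow node -> hyperedge -> node.
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)   # ReLU activation

# Toy example: 4 nodes grouped by 2 hyperedges.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 1]], dtype=float)
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
Theta = rng.standard_normal((3, 3))
out = hypergraph_conv(X, H, Theta)
print(out.shape)   # (4, 3)
```

Because a hyperedge can join any number of nodes, one propagation step mixes features among all members of a group at once, which is what lets the refinement stage relate distant spatial regions that share structure.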

Key words: image inpainting; U-net edge generation; hypergraph convolution; hybrid dilated convolution; two-stage network
