Generative Adversarial Network Based on Multi-scale Dense Feature Fusion for Image Dehazing
LIAN Jing¹˒², CHEN Shi¹, DING Kun³, LI Lin-hui¹˒²
1. School of Automotive Engineering, Dalian University of Technology, Dalian 116024, China; 2. State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024, China; 3. Applied Technology College, Dalian Ocean University, Dalian 116000, China.
LIAN Jing, CHEN Shi, DING Kun, LI Lin-hui. Generative Adversarial Network Based on Multi-scale Dense Feature Fusion for Image Dehazing[J]. Journal of Northeastern University(Natural Science), 2022, 43(11): 1591-1598.