COMO: Efficient Deep Neural Networks Expansion With COnvolutional MaxOut
Zhao, Baoxin [1]; Xiong, Haoyi [2]; Bian, Jiang [3]; Guo, Zhishan [4]; Xu, Cheng Zhong [5]; Dou, Dejing [6]
2020-06-15
Source Publication: IEEE Transactions on Multimedia
ISSN: 1520-9210
Abstract

In this paper, we extend the classic MaxOut strategy, originally designed for Multi-Layer Perceptrons (MLPs), into COnvolutional MaxOut (COMO), a new strategy that makes deep convolutional neural networks wider with parameter efficiency. Compared to existing solutions, such as ResNeXt for ResNet or Inception for VGG-like networks, COMO works well both on linear architectures and on those with skip connections and residual blocks. More specifically, COMO adopts a novel split-transform-merge paradigm that extends the layers performing spatial resolution reduction into multiple parallel splits. In a layer with COMO, each split passes the input feature maps through a 4D convolution operator with independent batch normalization operators for transformation; the split outputs are then merged into an aggregated output of the original size through max-pooling. This strategy is expected to counter the potential degradation of classification accuracy caused by spatial resolution reduction, by incorporating multiple splits with max-pooling-based feature selection. Our experiments with a wide range of deep architectures show that COMO significantly improves the classification accuracy of ResNet/VGG-like networks on a large number of benchmark datasets. COMO further outperforms existing widening solutions, e.g., Inception, ResNeXt, SE-ResNet, and Xception, and it dominates the comparison of accuracy versus parameter size.
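To make the split-transform-merge idea above concrete, the following is a minimal PyTorch sketch of a COMO-style block: several parallel convolutional splits, each with its own batch normalization, merged by an element-wise maximum over the splits (MaxOut-style feature selection). The kernel size, stride, and number of splits are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class COMOBlock(nn.Module):
    """Sketch of a COMO-style split-transform-merge block (assumed details)."""

    def __init__(self, in_channels, out_channels, num_splits=2, stride=2):
        super().__init__()
        # One conv + BN pair per split; stride > 1 models the
        # spatial-resolution-reducing layers that the abstract targets.
        self.splits = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          stride=stride, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
            )
            for _ in range(num_splits)
        )

    def forward(self, x):
        # Transform the input in each split, then merge by taking the
        # element-wise maximum across splits (MaxOut over the split axis).
        outs = torch.stack([split(x) for split in self.splits], dim=0)
        return outs.max(dim=0).values

# Usage: substitute a stride-2 convolution with a COMO block of the same shape.
block = COMOBlock(64, 128, num_splits=2, stride=2)
y = block(torch.randn(1, 64, 32, 32))  # -> torch.Size([1, 128, 16, 16])

In this sketch only the resolution-reducing layer is widened, which loosely mirrors the parameter-efficiency argument in the abstract: the merge keeps a single maximum response per position, so the output shape matches that of the layer being replaced.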

Keywords: Computer Architecture; Convolution; Convolutional Neural Networks; Deep Learning; Spatial Resolution; Transforms
DOI: 10.1109/TMM.2020.3002614
Language: English
Scopus ID: 2-s2.0-85087478813
Citation Statistics: Cited Times [WOS]: 0
Document Type: Journal article
Collection: University of Macau
Affiliations:
1. Cloud Computing Center, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China (e-mail: bx.zhao@siat.ac.cn)
2. Big Data Lab, Baidu Inc., Beijing 100085, China (e-mail: xhyccc@gmail.com)
3. ECE, University of Central Florida, Orlando, Florida, United States (e-mail: bjbj11111@knights.ucf.edu)
4. ECE, University of Central Florida, Orlando, Florida, United States (e-mail: zsguo@ucf.edu)
5. CIS, University of Macau, Taipa, Macau (e-mail: czxu@um.edu.mo)
6. Big Data Lab, Baidu Inc., Beijing, China (e-mail: doudejing@baidu.com)
Recommended Citation
GB/T 7714: Zhao, Baoxin, Xiong, Haoyi, Bian, Jiang, et al. COMO: Efficient Deep Neural Networks Expansion With COnvolutional MaxOut[J]. IEEE Transactions on Multimedia, 2020.
APA: Zhao, Baoxin, Xiong, Haoyi, Bian, Jiang, Guo, Zhishan, Xu, Cheng Zhong, & Dou, Dejing. (2020). COMO: Efficient Deep Neural Networks Expansion With COnvolutional MaxOut. IEEE Transactions on Multimedia.
MLA: Zhao, Baoxin, et al. "COMO: Efficient Deep Neural Networks Expansion With COnvolutional MaxOut." IEEE Transactions on Multimedia (2020).