
Why Are Transformers So Popular in Computer Vision? (2)

發(fā)布人:MSRAsia 時間:2021-10-21 來源:工程師 發(fā)布文章

Reason 2: Complementarity with convolution

Convolution is a local operation: a convolutional layer typically models only the relationships between neighboring pixels. A Transformer layer, by contrast, is a global operation that can model the relationships between all pixels, so the two are naturally complementary. The earliest work to exploit this complementarity was the non-local network [19], which inserted a small number of Transformer self-attention units at several places in an existing network to complement its convolutions; this proved broadly effective for object detection, semantic segmentation, and video action recognition.
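To make this concrete, here is a minimal PyTorch sketch of a non-local block in the spirit of [19], written as a residual module that can be dropped into a convolutional network. The 1×1 projections and halved embedding dimension follow common practice, but the exact configuration here is an illustrative assumption, not a transcription of the paper:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Global self-attention over all spatial positions, added as a residual."""
    def __init__(self, channels: int):
        super().__init__()
        inner = channels // 2  # reduced embedding dimension (illustrative)
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key projection
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.phi(x).flatten(2)                    # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw): every pixel attends to every pixel
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual, so it complements the conv backbone
```

Because the attention matrix is hw × hw, such a block is in practice inserted only at the lower-resolution stages of the network.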

Later work found that, in vision, non-local networks struggle to genuinely learn second-order pixel-to-pixel relations [28]. Researchers have since proposed improvements to the model, such as the disentangled non-local network [29].

Reason 3: Stronger modeling capability

Convolution can be viewed as template matching: the same template filters every position in the image. The attention unit in a Transformer is instead an adaptive filter, whose template weights are determined by the compatibility between pairs of pixels. Such adaptive computation gives the module stronger modeling capability.
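The distinction can be seen in a few lines of code. Below is a toy sketch with random data and identity query/key/value projections, all shapes illustrative: the convolution applies one fixed set of weights at every position, while the attention weights are recomputed from the content at each position.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)     # toy feature map: 8 channels, 16x16 pixels

# Convolution as template matching: one fixed 3x3 kernel, slid over every position.
w = torch.randn(8, 8, 3, 3)       # the same weights, regardless of image content
y_conv = F.conv2d(x, w, padding=1)

# Attention as adaptive filtering: the "kernel" at each position is the softmax
# of query-key compatibilities, so it changes with the input content.
t = x.flatten(2).transpose(1, 2)  # (1, 256, 8): one token per pixel
attn = torch.softmax(t @ t.transpose(1, 2) / t.shape[-1] ** 0.5, dim=-1)  # (1, 256, 256)
y_attn = attn @ t                 # content-dependent weighted sum over all pixels
```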

The earliest methods to use such an adaptive computation module as a visual backbone were the local relation network LR-Net [30] and SASA [31]. Both restrict self-attention to a local sliding window and outperform ResNet at the same theoretical computational complexity. In practice, however, they run much slower than ResNet despite the matching theoretical complexity. A major reason is that different queries use different key sets, as shown in Figure 2 (left), which is unfriendly to memory access.
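A rough sketch of where the slowdown comes from: realizing per-pixel sliding-window attention with standard tensor operations forces each query's neighborhood to be materialized separately, duplicating every key roughly win² times in memory. The projections and relative-position terms of LR-Net/SASA are omitted here; this only illustrates the memory-access pattern:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)   # (batch, channels, H, W)
win = 7                         # local window size, as in LR-Net / SASA

# Each query pixel attends to its own 7x7 neighborhood, so the key set differs
# per query. F.unfold materializes all overlapping neighborhoods explicitly,
# copying every pixel ~49 times -- the memory-unfriendly part.
keys = F.unfold(x, kernel_size=win, padding=win // 2)      # (1, 8*49, 256)
keys = keys.view(1, 8, win * win, 16 * 16)                 # one 49-key set per query
q = x.view(1, 8, 1, 16 * 16)                               # queries, one per pixel
attn = torch.softmax((q * keys).sum(1) / 8 ** 0.5, dim=1)  # softmax over each query's 49 keys
out = (attn.unsqueeze(1) * keys).sum(2).view(1, 8, 16, 16)
```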

Swin Transformer introduces a new local window design: shifted windows. It partitions the image into non-overlapping windows, so that within a window all queries share the same key set, which yields much better practical speed. In the next layer, the window configuration is shifted toward the lower right by half a window, establishing connections between pixels that fell in different windows in the previous layer.
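A minimal sketch of the two alternating window configurations follows (window partitioning plus a half-window shift via torch.roll; the attention computation itself and the masking of pixels that wrap around the border are left out):

```python
import torch

def window_partition(x: torch.Tensor, win: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping win x win windows."""
    b, h, w, c = x.shape
    x = x.view(b, h // win, win, w // win, win, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, c)

x = torch.randn(1, 16, 16, 8)   # toy feature map, channels-last
win = 4

# Layer l: regular windows. All queries inside a window share one key set, so
# attention becomes a dense batched matmul over (num_windows, win*win, C) tokens.
tokens = window_partition(x, win)                                  # (16, 16, 8)

# Layer l+1: shift the map by half a window before partitioning, so the new
# windows straddle the old boundaries and link pixels across them.
shifted = torch.roll(x, shifts=(-win // 2, -win // 2), dims=(1, 2))
tokens_shifted = window_partition(shifted, win)
```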

Reason 4: Scalability to large models and big data

在 NLP 領(lǐng)域,Transformer 模型在大模型和大數(shù)據(jù)方面展示了強(qiáng)大的可擴(kuò)展性。圖6中,藍(lán)色曲線顯示近年來 NLP 的模型大小迅速增加。大家都見證了大模型的驚人能力,例如微軟的 Turing 模型、谷歌的 T5 模型以及 OpenAI 的 GPT-3 模型。

The emergence of vision Transformers lays an important foundation for scaling up vision models. The largest vision model to date is Google's 15-billion-parameter ViT-MoE model [32], and such large models have set new records on ImageNet-1K classification.


Figure 6: The evolution of model sizes in NLP and in computer vision

Reason 5: Better bridging of vision and language

Earlier vision problems typically involved only tens or hundreds of object categories: the COCO detection task covers 80 object classes, and the ADE20K semantic segmentation task covers 150. The invention and development of vision Transformers brings vision models and NLP models closer together, which facilitates joint vision-language modeling and connects vision tasks to the full range of concepts expressible in language. The pioneering works in this direction are OpenAI's CLIP [33] and DALL-E [34].
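As a rough illustration of how such joint modeling works, here is a CLIP-style symmetric contrastive objective in miniature. The random tensors stand in for the outputs of an image encoder and a text encoder, and the temperature 0.07 is the value CLIP reports initializing from; everything else is a toy assumption:

```python
import torch
import torch.nn.functional as F

# Stand-ins for encoder outputs: 4 images and their 4 paired captions,
# each mapped to a 512-d embedding and L2-normalized onto the unit sphere.
img_emb = F.normalize(torch.randn(4, 512), dim=-1)
txt_emb = F.normalize(torch.randn(4, 512), dim=-1)

logits = img_emb @ txt_emb.t() / 0.07   # cosine similarities, scaled by temperature
labels = torch.arange(4)                # the i-th image matches the i-th caption
loss = (F.cross_entropy(logits, labels)         # image -> text direction
        + F.cross_entropy(logits.t(), labels)   # text -> image direction
        ) / 2
```

Because the supervision is free-form text rather than a fixed label set, the resulting vision model is not confined to 80 or 150 predefined classes.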

Given all these advantages, we believe vision Transformers will open a new era of computer vision modeling, and we look forward to academia and industry working together to further explore the new opportunities and challenges this modeling approach brings to the vision field.

References:

[1] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR 2021

[2] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. ICCV 2021

[3] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu. Video Swin Transformer. Tech report 2021

[4] Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu. Self-Supervised Learning with Swin Transformers. Tech report 2021

[5] Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao. Efficient Self-supervised Vision Transformers for Representation Learning. Tech report 2021

[6] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, Radu Timofte. SwinIR: Image Restoration Using Swin Transformer. Tech report 2021

[7] https://github.com/layumi/Person_reID_baseline_pytorch

[8] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, Manning Wang. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. Tech report 2021

[9] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. Training data-efficient image transformers & distillation through attention. Tech report 2021

[10] Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, Luc Van Gool. LocalViT: Bringing Locality to Vision Transformers. Tech report 2021

[11] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, Chunhua Shen. Twins: Revisiting the Design of Spatial Attention in Vision Transformers. Tech report 2021

[12] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. ICCV 2021

[13] Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan. Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet. Tech report 2021

[14] Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao. Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding. Tech report 2021

[15] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. CvT: Introducing Convolutions to Vision Transformers. ICCV 2021

[16] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo. CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows. Tech report 2021

[17] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao. Focal Self-attention for Local-Global Interactions in Vision Transformers. Tech report 2021

[18] Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu. Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer. Tech report 2021

[19] Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He. Non-local Neural Networks. CVPR 2018

[20] Yuhui Yuan, Lang Huang, Jianyuan Guo, Chao Zhang, Xilin Chen, Jingdong Wang. OCNet: Object Context for Semantic Segmentation. IJCV 2021

[21] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, Yichen Wei. Relation Networks for Object Detection. CVPR 2018

[22] Jiarui Xu, Yue Cao, Zheng Zhang, Han Hu. Spatial-Temporal Relation Networks for Multi-Object Tracking. ICCV 2019

[23] Yihong Chen, Yue Cao, Han Hu, Liwei Wang. Memory Enhanced Global-Local Aggregation for Video Object Detection. CVPR 2020

[24] Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, and Tao Mei. Relation distillation networks for video object detection. ICCV 2019

[25] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. End-to-End Object Detection with Transformers. ECCV 2020

[26] Jiayuan Gu, Han Hu, Liwei Wang, Yichen Wei, Jifeng Dai. Learning Region Features for Object Detection. ECCV 2018

[27] Cheng Chi, Fangyun Wei, Han Hu. RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder. NeurIPS 2020

[28] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, Han Hu. GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. ICCV workshop 2019

[29] Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, Han Hu. Disentangled Non-Local Neural Networks. ECCV 2020

[30] Han Hu, Zheng Zhang, Zhenda Xie, Stephen Lin. Local Relation Networks for Image Recognition. ICCV 2019

[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens. Stand-Alone Self-Attention in Vision Models. NeurIPS 2019

[32] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby. Scaling Vision with Sparse Mixture of Experts. Tech report 2021

[33] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. Learning Transferable Visual Models from Natural Language Supervision. Tech report 2021

[34] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever. Zero-Shot Text-to-Image Generation. Tech report 2021

*博客內(nèi)容為網(wǎng)友個人發(fā)布,僅代表博主個人觀點,如有侵權(quán)請聯(lián)系工作人員刪除。

電接點壓力表相關(guān)文章:電接點壓力表原理


關(guān)鍵詞: AI

相關(guān)推薦

技術(shù)專區(qū)

關(guān)閉