YOLO11 Improvements | Convolution | RFAConv: Innovating Spatial Attention and the Standard Convolution Operation



Table of Contents

    • 1. The RFAConv Convolution
      • 1.1 Introduction to RFAConv
      • 1.2 RFAConv Core Code
    • 2. Adding RFAConv
      • 2.1 STEP 1
      • 2.2 STEP 2
      • 2.3 STEP 3
      • 2.4 STEP 4
    • 3. YAML Files and Running
      • 3.1 YAML Files
      • 3.2 Screenshot of a Successful Run

1. The RFAConv Convolution

1.1 Introduction to RFAConv

RFAConv (Receptive-Field Attention Convolution) is a new convolutional unit that fuses a spatial attention mechanism with the standard convolution operation. It is designed to improve how well a model captures spatial context while preserving the efficiency of traditional convolution.
How RFAConv works

  1. Receptive-Field Feature Generation
    RFAConv begins by expanding every spatial position into the features of its k×k receptive field (via a grouped convolution, or equivalently an unfold operation). Whereas a traditional convolution applies one fixed set of kernel weights identically to every window of the input, RFAConv gives each window its own set of per-position weights, so the operation can adapt to local variations in the input rather than treating all windows the same.

  2. Spatial Attention Mechanism
    RFAConv incorporates a spatial attention mechanism to strengthen the network's attention to spatial position. A traditional convolution treats all input positions equally, whereas RFAConv's spatial attention lets the network concentrate on the positions that matter most for the task at hand (e.g., object detection or classification). In this way RFAConv can filter out redundant information and focus on the more informative feature regions.

  3. An Improvement over the Standard Convolution
    RFAConv retains the basic architecture of the standard convolution operation, but through the two innovations above it improves the model's ability to capture spatial relationships without a large increase in computational overhead. The structure keeps the efficiency of traditional convolution while gaining expressive power in complex scenes. A toy sketch of the receptive-field reweighting is given below the structure diagram.
    (Figure: structure diagram of RFAConv)
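A minimal toy sketch (my own illustration, not code from the paper) of that reweighting: each k×k window is flattened into k² per-position features, a softmax over those k² positions forms the receptive-field attention, and the reweighted features can then be summarized by a standard stride-k convolution. In the real RFAConv the attention comes from a learned AvgPool + 1×1 grouped-convolution branch; here it is derived from the features themselves purely for illustration:

import torch
import torch.nn.functional as F

b, c, h, w, k = 1, 4, 8, 8, 3
x = torch.randn(b, c, h, w)
# unfold every k x k receptive field: (b, c*k*k, h*w) -> (b, c, k*k, h, w)
windows = F.unfold(x, kernel_size=k, padding=k // 2).view(b, c, k * k, h, w)
attn = windows.softmax(dim=2)  # attention over the k*k positions of each window
weighted = windows * attn      # each window reweighted by its own attention map
print(weighted.shape)          # torch.Size([1, 4, 9, 8, 8])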

1.2 RFAConv Core Code


import torch
from torch import nn
from einops import rearrange


class RFAConv(nn.Module):  # RFAConv implemented with group convolution
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.get_weight = nn.Sequential(
            nn.AvgPool2d(kernel_size=kernel_size, padding=kernel_size // 2, stride=stride),
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=1,
                      groups=in_channel, bias=False))
        self.generate_feature = nn.Sequential(
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=in_channel, bias=False),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)),
            nn.ReLU())
        self.conv = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, stride=kernel_size),
            nn.BatchNorm2d(out_channel),
            nn.ReLU())

    def forward(self, x):
        b, c = x.shape[0:2]
        weight = self.get_weight(x)
        h, w = weight.shape[2:]
        # b, c*k**2, h, w -> b, c, k**2, h, w; softmax over the k**2 receptive-field positions
        weighted = weight.view(b, c, self.kernel_size ** 2, h, w).softmax(2)
        # receptive-field spatial features: b, c*k**2, h, w -> b, c, k**2, h, w
        feature = self.generate_feature(x).view(b, c, self.kernel_size ** 2, h, w)
        weighted_data = feature * weighted
        # b, c, k**2, h, w -> b, c, h*k, w*k
        conv_data = rearrange(weighted_data, 'b c (n1 n2) h w -> b c (h n1) (w n2)',
                              n1=self.kernel_size, n2=self.kernel_size)
        return self.conv(conv_data)


class RFAConv_U(nn.Module):  # RFAConv implemented with nn.Unfold
    def __init__(self, in_channel, out_channel, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.unfold = nn.Unfold(kernel_size=(kernel_size, kernel_size), padding=kernel_size // 2)
        self.get_weights = nn.Sequential(
            nn.Conv2d(in_channel * (kernel_size ** 2), in_channel * (kernel_size ** 2),
                      kernel_size=1, groups=in_channel),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)))
        self.conv = nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size,
                              padding=0, stride=kernel_size)
        self.bn = nn.BatchNorm2d(out_channel)
        self.act = nn.ReLU()

    def forward(self, x):
        b, c, h, w = x.shape
        unfold_feature = self.unfold(x)  # receptive-field spatial features: b, c*k**2, h*w
        x = unfold_feature
        data = unfold_feature.unsqueeze(-1)
        weight = (self.get_weights(data).view(b, c, self.kernel_size ** 2, h, w)
                  .permute(0, 1, 3, 4, 2).softmax(-1))
        # b, c, h, w, k**2 -> b, c, h*k, w*k
        weight_out = rearrange(weight, 'b c h w (n1 n2) -> b c (h n1) (w n2)',
                               n1=self.kernel_size, n2=self.kernel_size)
        # b, c*k**2, h*w -> b, c, h, w, k**2
        receptive_field_data = (rearrange(x, 'b (c n1) l -> b c n1 l', n1=self.kernel_size ** 2)
                                .permute(0, 1, 3, 2).reshape(b, c, h, w, self.kernel_size ** 2))
        # b, c, h, w, k**2 -> b, c, h*k, w*k
        data_out = rearrange(receptive_field_data, 'b c h w (n1 n2) -> b c (h n1) (w n2)',
                             n1=self.kernel_size, n2=self.kernel_size)
        conv_data = data_out * weight_out
        conv_out = self.conv(conv_data)
        return self.act(self.bn(conv_out))


class SE(nn.Module):
    def __init__(self, in_channel, ratio=16):
        super(SE, self).__init__()
        self.gap = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Linear(in_channel, ratio, bias=False),  # squeeze: c -> ratio
            nn.ReLU(),
            nn.Linear(ratio, in_channel, bias=False),  # excite: ratio -> c
            nn.Sigmoid())

    def forward(self, x):
        b, c = x.shape[0:2]
        y = self.gap(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return y


class RFCBAMConv(nn.Module):
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1, dilation=1):
        super().__init__()
        assert kernel_size % 2 == 1, "the kernel_size must be odd."
        self.kernel_size = kernel_size
        self.generate = nn.Sequential(
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=in_channel, bias=False),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)),
            nn.ReLU())
        self.get_weight = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=3, padding=1, bias=False),
            nn.Sigmoid())
        self.se = SE(in_channel)
        self.conv = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size, stride=kernel_size),
            nn.BatchNorm2d(out_channel),
            nn.ReLU())

    def forward(self, x):
        b, c = x.shape[0:2]
        channel_attention = self.se(x)
        generate_feature = self.generate(x)
        h, w = generate_feature.shape[2:]
        generate_feature = generate_feature.view(b, c, self.kernel_size ** 2, h, w)
        # b, c, k**2, h, w -> b, c, h*k, w*k
        generate_feature = rearrange(generate_feature, 'b c (n1 n2) h w -> b c (h n1) (w n2)',
                                     n1=self.kernel_size, n2=self.kernel_size)
        unfold_feature = generate_feature * channel_attention
        max_feature, _ = torch.max(generate_feature, dim=1, keepdim=True)
        mean_feature = torch.mean(generate_feature, dim=1, keepdim=True)
        receptive_field_attention = self.get_weight(torch.cat((max_feature, mean_feature), dim=1))
        conv_data = unfold_feature * receptive_field_attention
        return self.conv(conv_data)


class h_sigmoid(nn.Module):
    def __init__(self, inplace=True):
        super(h_sigmoid, self).__init__()
        self.relu = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu(x + 3) / 6


class h_swish(nn.Module):
    def __init__(self, inplace=True):
        super(h_swish, self).__init__()
        self.sigmoid = h_sigmoid(inplace=inplace)

    def forward(self, x):
        return x * self.sigmoid(x)


class RFCAConv(nn.Module):
    def __init__(self, inp, oup, kernel_size, stride, reduction=32):
        super(RFCAConv, self).__init__()
        self.kernel_size = kernel_size
        self.generate = nn.Sequential(
            nn.Conv2d(inp, inp * (kernel_size ** 2), kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=inp, bias=False),
            nn.BatchNorm2d(inp * (kernel_size ** 2)),
            nn.ReLU())
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))
        mip = max(8, inp // reduction)
        self.conv1 = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
        self.bn1 = nn.BatchNorm2d(mip)
        self.act = h_swish()
        self.conv_h = nn.Conv2d(mip, inp, kernel_size=1, stride=1, padding=0)
        self.conv_w = nn.Conv2d(mip, inp, kernel_size=1, stride=1, padding=0)
        self.conv = nn.Sequential(nn.Conv2d(inp, oup, kernel_size, stride=kernel_size))

    def forward(self, x):
        b, c = x.shape[0:2]
        generate_feature = self.generate(x)
        h, w = generate_feature.shape[2:]
        generate_feature = generate_feature.view(b, c, self.kernel_size ** 2, h, w)
        # b, c, k**2, h, w -> b, c, h*k, w*k
        generate_feature = rearrange(generate_feature, 'b c (n1 n2) h w -> b c (h n1) (w n2)',
                                     n1=self.kernel_size, n2=self.kernel_size)
        x_h = self.pool_h(generate_feature)
        x_w = self.pool_w(generate_feature).permute(0, 1, 3, 2)
        y = torch.cat([x_h, x_w], dim=2)
        y = self.conv1(y)
        y = self.bn1(y)
        y = self.act(y)
        h, w = generate_feature.shape[2:]
        x_h, x_w = torch.split(y, [h, w], dim=2)
        x_w = x_w.permute(0, 1, 3, 2)
        a_h = self.conv_h(x_h).sigmoid()
        a_w = self.conv_w(x_w).sigmoid()
        return self.conv(generate_feature * a_w * a_h)
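As a quick shape sanity check (my own snippet, not part of the original release), the YOLO-facing modules follow the usual Conv-style (in_channels, out_channels, kernel_size, stride) interface and produce the expected output resolution:

x = torch.randn(1, 64, 32, 32)
print(RFAConv(64, 128, kernel_size=3, stride=1)(x).shape)     # torch.Size([1, 128, 32, 32])
print(RFCBAMConv(64, 128, kernel_size=3, stride=2)(x).shape)  # torch.Size([1, 128, 16, 16])
print(RFCAConv(64, 128, kernel_size=3, stride=2)(x).shape)    # torch.Size([1, 128, 16, 16])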

2. Adding RFAConv

2.1 STEP 1

First, go to the ultralytics/nn directory and create a new Python package named Add_module (make sure it is a Python package rather than a plain folder, so that an __init__.py is generated automatically). If you have already created it while following an earlier article in this series, you can skip this step. Then create a new file named RFAConv.py inside the package and paste all of the module code from section 1.2 into it; the resulting layout is sketched below.
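Since the original screenshot is not reproduced here, the expected layout is roughly as follows (the file name RFAConv.py is this article's choice; any valid module name works):

ultralytics/
└── nn/
    └── Add_module/
        ├── __init__.py
        └── RFAConv.py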

2.2 STEP 2

In the __init__.py generated in STEP 1, import the code of the newly added module, along the lines of the sketch below.
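The original screenshot is not shown here; a minimal sketch of the import, assuming the file and class names used above:

# ultralytics/nn/Add_module/__init__.py
from .RFAConv import RFAConv, RFAConv_U, RFCBAMConv, RFCAConv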

2.3 STEP 3

Open the tasks.py file in the ultralytics/nn folder and import the new modules there as well, as sketched below.
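The screenshot's exact line is unavailable; assuming the package name from STEP 1, the import would look like this:

# ultralytics/nn/tasks.py
from ultralytics.nn.Add_module import RFAConv, RFCBAMConv, RFCAConv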

2.4 STEP 4

Locate the def parse_model(d, ch, verbose=True): # model_dict, input_channels(3) function in ultralytics/nn/tasks.py and register the new modules there as sketched below (if it is hard to locate, use Ctrl+F to search for it).

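The screenshot's exact code is not available; the usual pattern is to handle the new modules the same way parse_model already handles the built-in Conv, either by appending their names to the existing if m in (...) tuple or with a branch like this sketch:

        elif m in {RFAConv, RFCBAMConv, RFCAConv}:
            c1, c2 = ch[f], args[0]  # input channels, requested output channels
            if c2 != nc:  # apply the width multiplier, as for the built-in Conv
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, c2, *args[1:]]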

3. YAML Files and Running

3.1 YAML Files

RFAConv here effectively behaves like an attention mechanism. Since YOLO11's headline change is already its C3k2 module, I do not recommend swapping layers for plain RFAConv; this article mainly uses the two variants RFCAConv and RFCBAMConv for the improvement.
YAML file 1

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, RFCAConv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, RFCAConv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, RFCAConv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, RFCAConv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, RFCAConv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, RFCAConv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

YAML file 2

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, RFCBAMConv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, RFCBAMConv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, RFCBAMConv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, RFCBAMConv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, RFCBAMConv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, RFCBAMConv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

3.2 Screenshot of a Successful Run

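The success screenshot is not reproduced here; to verify the modified configuration yourself, a minimal run looks like this (the yaml filename and dataset are placeholders for your own files):

from ultralytics import YOLO

model = YOLO("yolo11-RFCAConv.yaml")  # hypothetical name for the modified yaml above
model.train(data="coco128.yaml", epochs=100, imgsz=640)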
The insertion points above are for reference only; judge the actual placement and its effect by the results on your own dataset.

OK, that is the whole process of adding RFAConv. More improvements will follow, so stay tuned.


