YOLO11 Improvements | Convolution | RFAConv: Innovating Spatial Attention and the Standard Convolution Operation



Contents

    • 1. The RFAConv Convolution
      • 1.1 Introduction to RFAConv
      • 1.2 RFAConv Core Code
    • 2. Adding RFAConv
      • 2.1 STEP 1
      • 2.2 STEP 2
      • 2.3 STEP 3
      • 2.4 STEP 4
    • 3. YAML Files and Running
      • 3.1 YAML Files
      • 3.2 Screenshot of a Successful Run

1. The RFAConv Convolution

1.1 Introduction to RFAConv

RFAConv (receptive-field attention convolution) is a novel convolutional unit that fuses a spatial attention mechanism with the standard convolution operation. It is designed to improve the model's ability to capture spatial context while retaining the efficiency of conventional convolution.
How RFAConv works

  1. Receptive-field feature generation
    RFAConv first expands the input with a grouped convolution so that every spatial position holds an explicit copy of its k×k receptive-field features. Standard convolution collapses each receptive field immediately with a shared kernel; by unfolding the features instead, RFAConv makes the individual positions inside every receptive field separately addressable, so they can be reweighted before aggregation.

  2. Spatial attention mechanism
    RFAConv incorporates a spatial attention mechanism that strengthens the network's focus on spatial locations. Standard convolution treats all positions inside a receptive field uniformly (the kernel weights are shared everywhere), whereas RFAConv computes a softmax attention weight for each of the k² positions of every receptive field, allowing the network to concentrate on the locations most important to the current task (e.g., object detection or classification). In this way RFAConv filters out redundant information and focuses on the more informative feature regions.

  3. An improvement over standard convolution
    RFAConv keeps the basic structure of the standard convolution operation, but the two innovations above improve its ability to model spatial relationships without adding much computational overhead. The result retains the efficiency of ordinary convolution while gaining expressive power in complex scenes.
    The structure of RFAConv is shown below.
    [Figure: RFAConv structure diagram]
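To make the data flow concrete, here is a minimal shape walkthrough of the reweight-and-unfold pipeline that the code in 1.2 below implements (a sketch on dummy tensors with kernel size k = 3; the variable names are illustrative only):

import torch
from einops import rearrange

b, c, h, w, k = 2, 16, 32, 32, 3              # batch, channels, height, width, kernel size

feature = torch.randn(b, c * k**2, h, w)      # k*k receptive-field features per position
weight = torch.randn(b, c * k**2, h, w)       # raw attention logits, same layout

# softmax over the k*k positions of each receptive field
attn = weight.view(b, c, k**2, h, w).softmax(dim=2)
feat = feature.view(b, c, k**2, h, w)

# reweight, then unfold each receptive field into a k x k spatial tile
tiles = rearrange(feat * attn, 'b c (n1 n2) h w -> b c (h n1) (w n2)', n1=k, n2=k)
print(tiles.shape)                            # torch.Size([2, 16, 96, 96])

# a k x k convolution with stride k mixes each tile back down to one output pixel
out = torch.nn.Conv2d(c, 32, kernel_size=k, stride=k)(tiles)
print(out.shape)                              # torch.Size([2, 32, 32, 32])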

1.2 RFAConv Core Code


import torch
from torch import nn
from einops import rearrange


class RFAConv(nn.Module):  # RFAConv implemented with grouped convolution
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1):
        super().__init__()
        self.kernel_size = kernel_size
        # attention branch: pooling, then a 1x1 grouped conv produces k*k logits per channel
        self.get_weight = nn.Sequential(
            nn.AvgPool2d(kernel_size=kernel_size, padding=kernel_size // 2, stride=stride),
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=1,
                      groups=in_channel, bias=False))
        # feature branch: a grouped conv generates the k*k receptive-field features
        self.generate_feature = nn.Sequential(
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=in_channel, bias=False),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)),
            nn.ReLU())
        # final k x k conv with stride k mixes each unfolded tile back to one output pixel
        self.conv = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, stride=kernel_size),
            nn.BatchNorm2d(out_channel),
            nn.ReLU())

    def forward(self, x):
        b, c = x.shape[0:2]
        weight = self.get_weight(x)
        h, w = weight.shape[2:]
        weighted = weight.view(b, c, self.kernel_size ** 2, h, w).softmax(2)  # b c*k**2 h w -> b c k**2 h w
        feature = self.generate_feature(x).view(b, c, self.kernel_size ** 2, h, w)  # receptive-field spatial features
        weighted_data = feature * weighted
        conv_data = rearrange(weighted_data, 'b c (n1 n2) h w -> b c (h n1) (w n2)',  # b c k**2 h w -> b c h*k w*k
                              n1=self.kernel_size, n2=self.kernel_size)
        return self.conv(conv_data)


class RFAConv_U(nn.Module):  # RFAConv implemented with Unfold
    def __init__(self, in_channel, out_channel, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.unfold = nn.Unfold(kernel_size=(kernel_size, kernel_size), padding=kernel_size // 2)
        self.get_weights = nn.Sequential(
            nn.Conv2d(in_channel * (kernel_size ** 2), in_channel * (kernel_size ** 2),
                      kernel_size=1, groups=in_channel),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)))
        self.conv = nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, padding=0, stride=kernel_size)
        self.bn = nn.BatchNorm2d(out_channel)
        self.act = nn.ReLU()

    def forward(self, x):
        b, c, h, w = x.shape
        unfold_feature = self.unfold(x)  # receptive-field spatial features: b c*k**2 h*w
        x = unfold_feature
        data = unfold_feature.unsqueeze(-1)
        weight = self.get_weights(data).view(b, c, self.kernel_size ** 2, h, w).permute(0, 1, 3, 4, 2).softmax(-1)
        weight_out = rearrange(weight, 'b c h w (n1 n2) -> b c (h n1) (w n2)',
                               n1=self.kernel_size, n2=self.kernel_size)  # b c h w k**2 -> b c h*k w*k
        receptive_field_data = rearrange(x, 'b (c n1) l -> b c n1 l', n1=self.kernel_size ** 2
                                         ).permute(0, 1, 3, 2).reshape(b, c, h, w, self.kernel_size ** 2)  # b c*k**2 h*w -> b c h w k**2
        data_out = rearrange(receptive_field_data, 'b c h w (n1 n2) -> b c (h n1) (w n2)',
                             n1=self.kernel_size, n2=self.kernel_size)  # b c h w k**2 -> b c h*k w*k
        conv_data = data_out * weight_out
        conv_out = self.conv(conv_data)
        return self.act(self.bn(conv_out))


class SE(nn.Module):
    def __init__(self, in_channel, ratio=16):
        super(SE, self).__init__()
        self.gap = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Linear(in_channel, ratio, bias=False),  # c -> ratio
            nn.ReLU(),
            nn.Linear(ratio, in_channel, bias=False),  # ratio -> c
            nn.Sigmoid())

    def forward(self, x):
        b, c = x.shape[0:2]
        y = self.gap(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return y


class RFCBAMConv(nn.Module):
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1, dilation=1):
        super().__init__()
        assert kernel_size % 2 == 1, "the kernel_size must be odd."
        self.kernel_size = kernel_size
        self.generate = nn.Sequential(
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=in_channel, bias=False),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)),
            nn.ReLU())
        self.get_weight = nn.Sequential(nn.Conv2d(2, 1, kernel_size=3, padding=1, bias=False), nn.Sigmoid())
        self.se = SE(in_channel)
        self.conv = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size, stride=kernel_size),
            nn.BatchNorm2d(out_channel),
            nn.ReLU())

    def forward(self, x):
        b, c = x.shape[0:2]
        channel_attention = self.se(x)
        generate_feature = self.generate(x)
        h, w = generate_feature.shape[2:]
        generate_feature = generate_feature.view(b, c, self.kernel_size ** 2, h, w)
        generate_feature = rearrange(generate_feature, 'b c (n1 n2) h w -> b c (h n1) (w n2)',
                                     n1=self.kernel_size, n2=self.kernel_size)
        unfold_feature = generate_feature * channel_attention
        max_feature, _ = torch.max(generate_feature, dim=1, keepdim=True)
        mean_feature = torch.mean(generate_feature, dim=1, keepdim=True)
        receptive_field_attention = self.get_weight(torch.cat((max_feature, mean_feature), dim=1))
        conv_data = unfold_feature * receptive_field_attention
        return self.conv(conv_data)


class h_sigmoid(nn.Module):
    def __init__(self, inplace=True):
        super(h_sigmoid, self).__init__()
        self.relu = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu(x + 3) / 6


class h_swish(nn.Module):
    def __init__(self, inplace=True):
        super(h_swish, self).__init__()
        self.sigmoid = h_sigmoid(inplace=inplace)

    def forward(self, x):
        return x * self.sigmoid(x)


class RFCAConv(nn.Module):
    def __init__(self, inp, oup, kernel_size, stride, reduction=32):
        super(RFCAConv, self).__init__()
        self.kernel_size = kernel_size
        self.generate = nn.Sequential(
            nn.Conv2d(inp, inp * (kernel_size ** 2), kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=inp, bias=False),
            nn.BatchNorm2d(inp * (kernel_size ** 2)),
            nn.ReLU())
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))
        mip = max(8, inp // reduction)
        self.conv1 = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
        self.bn1 = nn.BatchNorm2d(mip)
        self.act = h_swish()
        self.conv_h = nn.Conv2d(mip, inp, kernel_size=1, stride=1, padding=0)
        self.conv_w = nn.Conv2d(mip, inp, kernel_size=1, stride=1, padding=0)
        self.conv = nn.Sequential(nn.Conv2d(inp, oup, kernel_size, stride=kernel_size))

    def forward(self, x):
        b, c = x.shape[0:2]
        generate_feature = self.generate(x)
        h, w = generate_feature.shape[2:]
        generate_feature = generate_feature.view(b, c, self.kernel_size ** 2, h, w)
        generate_feature = rearrange(generate_feature, 'b c (n1 n2) h w -> b c (h n1) (w n2)',
                                     n1=self.kernel_size, n2=self.kernel_size)
        # coordinate attention over the unfolded feature map
        x_h = self.pool_h(generate_feature)
        x_w = self.pool_w(generate_feature).permute(0, 1, 3, 2)
        y = torch.cat([x_h, x_w], dim=2)
        y = self.conv1(y)
        y = self.bn1(y)
        y = self.act(y)
        h, w = generate_feature.shape[2:]
        x_h, x_w = torch.split(y, [h, w], dim=2)
        x_w = x_w.permute(0, 1, 3, 2)
        a_h = self.conv_h(x_h).sigmoid()
        a_w = self.conv_w(x_w).sigmoid()
        return self.conv(generate_feature * a_w * a_h)
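A quick shape sanity check of the three main modules (a sketch; it assumes torch and einops are installed and is run after the code above):

x = torch.randn(1, 64, 32, 32)

print(RFAConv(64, 128, kernel_size=3, stride=1)(x).shape)     # torch.Size([1, 128, 32, 32])
print(RFCBAMConv(64, 128, kernel_size=3, stride=1)(x).shape)  # torch.Size([1, 128, 32, 32])
print(RFCAConv(64, 128, kernel_size=3, stride=1)(x).shape)    # torch.Size([1, 128, 32, 32])

# with stride=2 the modules downsample, matching their use in the backbone yaml below
print(RFCAConv(64, 128, kernel_size=3, stride=2)(x).shape)    # torch.Size([1, 128, 16, 16])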

2. Adding RFAConv

2.1 STEP 1

First, go to the ultralytics/nn directory and create a new Python package named Add-module (make sure it is a Python package, so that an __init__.py is generated automatically). If you have already created it while following one of my earlier tutorials, you can skip this step. Then create a new file named RFAConv.py inside it and paste all of the module code from 1.2 into that file, as sketched below.
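The resulting layout should look roughly like this (a sketch; the package name Add-module follows this tutorial series, and RFAConv.py is the file created in this step):

ultralytics/
└── nn/
    ├── Add-module/
    │   ├── __init__.py   # generated automatically when the package is created
    │   └── RFAConv.py    # paste the code from 1.2 here
    └── tasks.py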

2.2 STEP 2

In the __init__.py created in STEP 1, import the new modules from the package, as shown below.
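A minimal sketch of the import line (assuming the file from STEP 1 is named RFAConv.py):

# ultralytics/nn/Add-module/__init__.py
from .RFAConv import RFAConv, RFAConv_U, RFCBAMConv, RFCAConv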

2.3 STEP 3

Find the tasks.py file in the ultralytics/nn folder and add the import statement there, as shown below.
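The added import might look like the following sketch. Note that a hyphen is not valid in a Python import path, so if the package is literally named Add-module you will need to rename it (e.g., to Addmodule) or import it another way; the package name below is therefore an assumption:

# near the top of ultralytics/nn/tasks.py (package name is an assumption, see note above)
from ultralytics.nn.Addmodule import RFAConv, RFAConv_U, RFCBAMConv, RFCAConv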

2.4 STEP 4

Still in ultralytics/nn/tasks.py, locate the def parse_model(d, ch, verbose=True): # model_dict, input_channels(3) function and register the new modules there, as shown below (if it is hard to find, press Ctrl+F and search for it).
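As a sketch of what the addition typically looks like (the surrounding code differs between ultralytics versions, so treat this as a guide rather than an exact diff), the new modules can reuse the channel-resolution logic of the built-in Conv-style modules:

# inside parse_model() in ultralytics/nn/tasks.py
elif m in {RFAConv, RFCBAMConv, RFCAConv}:
    c1, c2 = ch[f], args[0]  # input channels, requested output channels
    if c2 != nc:  # apply the width multiplier, as for the built-in Conv modules
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, c2, *args[1:]]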

3. YAML Files and Running

3.1 YAML Files

Note that RFAConv itself essentially acts as an attention module. Since YOLO11's headline changes (the C3k2 blocks and the C2PSA attention module) already cover this ground, simply substituting plain RFAConv is not recommended; the improvements here mainly use the two convolution variants RFCAConv and RFCBAMConv instead.
YAML file 1

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, RFCAConv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, RFCAConv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, RFCAConv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, RFCAConv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, RFCAConv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, RFCAConv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

YAML file 2

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, RFCBAMConv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, RFCBAMConv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, RFCBAMConv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, RFCBAMConv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, RFCBAMConv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, RFCBAMConv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
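Once one of the yaml files above is saved to disk, it can be loaded and trained like any other model configuration. A minimal sketch (the yaml file name and dataset below are placeholders):

from ultralytics import YOLO

model = YOLO("yolo11-RFCAConv.yaml")  # hypothetical path to YAML file 1 above
model.train(data="coco128.yaml", epochs=100, imgsz=640)  # substitute your own dataset yaml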

3.2 Screenshot of a Successful Run

[Screenshot of the successful run]
The placements above are for reference only; the actual insertion points and their effect should be judged by the results on your own dataset.

OK, that is the whole process of adding RFAConv. More improvements will follow, so stay tuned.


