Yahboom microROS Car - Native Ubuntu Support Series: 5 - Pose Detection


For an introduction to MediaPipe, see: Yahboom microROS Car - Native Ubuntu Support Series: 4 - Hand Detection (CSDN blog).

This post continues the migration, this time covering pose detection.

1. Background

The following is taken from the official Yahboom documentation.

MediaPipe Pose is an ML solution for high-fidelity body pose tracking. Built on the BlazePose research, which also powers the ML Kit Pose Detection API, it infers 33 3D landmarks and a full-body background segmentation mask from RGB video frames.

The landmark model in MediaPipe Pose predicts the positions of 33 pose landmarks (see the figure below).

[Figure: the 33 MediaPipe Pose landmarks]
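The figure is not reproduced here, but the same 33 indices and names can be listed straight from the MediaPipe API (a quick check, not part of the original article):

import mediapipe as mp

mp_pose = mp.solutions.pose
# PoseLandmark is an enum of the 33 landmarks, 0 = NOSE ... 32 = RIGHT_FOOT_INDEX
for lm in mp_pose.PoseLandmark:
    print(lm.value, lm.name)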

Similar to the hand detection example, a standalone test script looks like this:

import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_pose = mp.solutions.pose
pose = mp_pose.Pose(static_image_mode=False,
                    model_complexity=1,
                    smooth_landmarks=True,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)  # open the default camera
while True:
    ret, frame = cap.read()  # read one frame
    # convert the image format
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # the camera image is mirrored, so flip it horizontally
    # (skip the flip if your camera is not mirrored)
    frame = cv2.flip(frame, 1)
    # run pose inference
    results = pose.process(frame)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    if results.pose_landmarks:
        print(f'pose_landmarks:{results.pose_landmarks}')
        # visualize the keypoints
        mp_drawing.draw_landmarks(frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
    else:
        print('there are no person!')
        continue
    cv2.imshow('MediaPipe pose', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()

Running result:

[Demo: pose detection]
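The landmarks are returned as normalized coordinates (x and y in [0, 1] relative to the image, z as relative depth), so they can be converted to pixel positions or joint angles. As a small sketch of my own (not from the Yahboom docs), here is a left-elbow angle computed from the results of the script above:

import numpy as np

def landmark_px(landmarks, idx, width, height):
    # convert one normalized landmark to pixel coordinates
    lm = landmarks.landmark[idx]
    return np.array([lm.x * width, lm.y * height])

def left_elbow_angle(landmarks, width, height):
    # angle at the elbow, formed by left shoulder (11), elbow (13) and wrist (15)
    shoulder = landmark_px(landmarks, 11, width, height)
    elbow = landmark_px(landmarks, 13, width, height)
    wrist = landmark_px(landmarks, 15, width, height)
    v1, v2 = shoulder - elbow, wrist - elbow
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# inside the loop above, when results.pose_landmarks is not None:
# angle = left_elbow_angle(results.pose_landmarks, frame.shape[1], frame.shape[0])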

Create a new file 02_PoseDetector.py under the src/yahboom_esp32_mediapipe/yahboom_esp32_mediapipe/ directory:

#!/usr/bin/env python3
# encoding: utf-8
#import ros lib
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Point
import mediapipe as mp
#import define msg
from yahboomcar_msgs.msg import PointArray
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CompressedImage
#import commom lib
import cv2 as cv
import numpy as np
import time
from rclpy.time import Time
import datetime

print("import done")

class PoseDetector(Node):
    def __init__(self, name, mode=False, smooth=True, detectionCon=0.5, trackCon=0.5):
        super().__init__(name)
        self.mpPose = mp.solutions.pose
        self.mpDraw = mp.solutions.drawing_utils
        # initialize the pose model
        self.pose = self.mpPose.Pose(
            static_image_mode=mode,
            smooth_landmarks=smooth,
            min_detection_confidence=detectionCon,
            min_tracking_confidence=trackCon)
        self.pub_point = self.create_publisher(PointArray, '/mediapipe/points', 1000)
        # drawing styles for the output keypoints
        self.lmDrawSpec = mp.solutions.drawing_utils.DrawingSpec(color=(0, 0, 255), thickness=-1, circle_radius=6)
        self.drawSpec = mp.solutions.drawing_utils.DrawingSpec(color=(0, 255, 0), thickness=2, circle_radius=2)

    # pose detection
    def pubPosePoint(self, frame, draw=True):
        pointArray = PointArray()
        img = np.copy(frame)
        # convert the image format
        img_RGB = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
        self.results = self.pose.process(img_RGB)
        if self.results.pose_landmarks:
            # draw the keypoints
            if draw:
                self.mpDraw.draw_landmarks(frame, self.results.pose_landmarks, self.mpPose.POSE_CONNECTIONS, self.lmDrawSpec, self.drawSpec)
            self.mpDraw.draw_landmarks(img, self.results.pose_landmarks, self.mpPose.POSE_CONNECTIONS, self.lmDrawSpec, self.drawSpec)
            for id, lm in enumerate(self.results.pose_landmarks.landmark):
                point = Point()
                point.x, point.y, point.z = lm.x, lm.y, lm.z
                pointArray.points.append(point)
        self.pub_point.publish(pointArray)
        return frame, img

    def frame_combine(self, frame, src):
        if len(frame.shape) == 3:
            frameH, frameW = frame.shape[:2]
            srcH, srcW = src.shape[:2]
            dst = np.zeros((max(frameH, srcH), frameW + srcW, 3), np.uint8)
            dst[:, :frameW] = frame[:, :]
            dst[:, frameW:] = src[:, :]
        else:
            src = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
            frameH, frameW = frame.shape[:2]
            imgH, imgW = src.shape[:2]
            dst = np.zeros((frameH, frameW + imgW), np.uint8)
            dst[:, :frameW] = frame[:, :]
            dst[:, frameW:] = src[:, :]
        return dst

class MY_Picture(Node):
    def __init__(self, name):
        super().__init__(name)
        self.bridge = CvBridge()
        # subscribe to the image stream from the ESP32 camera
        self.sub_img = self.create_subscription(
            CompressedImage, '/espRos/esp32camera', self.handleTopic, 1)
        self.last_stamp = None
        self.new_seconds = 0
        self.fps_seconds = 1
        self.pose_detector = PoseDetector('pose_detector')

    # callback
    def handleTopic(self, msg):
        self.last_stamp = msg.header.stamp
        if self.last_stamp:
            total_secs = Time(nanoseconds=self.last_stamp.nanosec, seconds=self.last_stamp.sec).nanoseconds
            delta = datetime.timedelta(seconds=total_secs * 1e-9)
            seconds = delta.total_seconds() * 100
            if self.new_seconds != 0:
                self.fps_seconds = seconds - self.new_seconds
            self.new_seconds = seconds  # keep this value for the next frame
        start = time.time()
        frame = self.bridge.compressed_imgmsg_to_cv2(msg)
        frame = cv.resize(frame, (640, 480))
        cv.waitKey(10)
        frame, img = self.pose_detector.pubPosePoint(frame, draw=False)
        end = time.time()
        fps = 1 / ((end - start) + self.fps_seconds)
        text = "FPS : " + str(int(fps))
        cv.putText(frame, text, (20, 30), cv.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 1)
        dist = self.pose_detector.frame_combine(frame, img)
        cv.imshow('dist', dist)
        # print(frame)
        cv.waitKey(10)

def main():
    print("start it")
    rclpy.init()
    esp_img = MY_Picture("My_Picture")
    try:
        rclpy.spin(esp_img)
    except KeyboardInterrupt:
        pass
    finally:
        esp_img.destroy_node()
        rclpy.shutdown()

The main logic is similar to the earlier hand detection node: MY_Picture(Node) receives images from the camera and calls pubPosePoint(frame, draw=False) to detect the pose and publish the landmarks.
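Before the rebuild below, the new script also needs a console entry point so that ros2 run can find it. This is a sketch of the setup.py addition, not taken from the original article; the module path and executable name are assumptions inferred from the ros2 run command further down:

# setup.py (excerpt) -- assumed layout, adjust to your actual package
entry_points={
    'console_scripts': [
        # 'PoseDetector' is the executable name used with `ros2 run yahboom_esp32_mediapipe PoseDetector`
        'PoseDetector = yahboom_esp32_mediapipe.02_PoseDetector:main',
    ],
},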

Test:

Start the image agent:
docker run -it --rm -v /dev:/dev -v /dev/shm:/dev/shm --privileged --net=host microros/micro-ros-agent:humble udp4 --port 9999 -v4

Rebuild the workspace, then run:
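The rebuild itself is the usual colcon workflow; a sketch, assuming the workspace path shown in the log below:

cd ~/yahboomcar/yahboomcar_ws
colcon build --packages-select yahboom_esp32_mediapipe
source install/setup.bash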

bohu@bohu-TM1701:~/yahboomcar/yahboomcar_ws$ ros2 run yahboom_esp32_mediapipe PoseDetector 
import done
start it
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1737459931.889105   73213 gl_context_egl.cc:85] Successfully initialized EGL. Major : 1 Minor: 5
I0000 00:00:1737459931.892356   73266 gl_context.cc:369] GL version: 3.2 (OpenGL ES 3.2 Mesa 23.2.1-1ubuntu3.1~22.04.3), renderer: Mesa Intel(R) UHD Graphics 620 (KBL GT2)
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1737459931.986178   73249 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1737459932.041183   73256 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1737459932.068208   73256 landmark_projection_calculator.cc:186] Using NORM_RECT without IMAGE_DIMENSIONS is only supported for the square ROI. Provide IMAGE_DIMENSIONS or use PROJECTION_MATRIX.
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: premature end of data segment

Limited by the car's camera, a full body cannot be captured at close range, so I tested it by playing a gymnastics video on a phone.
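The node publishes all 33 landmarks as a PointArray on /mediapipe/points, so other nodes can build on them. Below is a minimal consumer sketch of my own (the node and callback names are made up; the topic and message type come from the publisher above):

import rclpy
from rclpy.node import Node
from yahboomcar_msgs.msg import PointArray

class PosePointListener(Node):  # hypothetical consumer node
    def __init__(self):
        super().__init__('pose_point_listener')
        # subscribe to the landmarks published by PoseDetector above
        self.create_subscription(PointArray, '/mediapipe/points', self.on_points, 10)

    def on_points(self, msg):
        # each Point carries the normalized x, y and relative depth z of one landmark
        if msg.points:
            nose = msg.points[0]  # index 0 is the nose in MediaPipe's ordering
            self.get_logger().info(f'nose: x={nose.x:.2f} y={nose.y:.2f} z={nose.z:.2f}')

def main():
    rclpy.init()
    node = PosePointListener()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

Without writing any code, the published stream can also be checked from another terminal with ros2 topic echo /mediapipe/points.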

 

