Changing the simulation world

In fishros_ws/src/fishbot_description/launch/gazebo_sim.launch.py, change the line


default_world_path = urdf_tutorial_path + '/world/red_room.world'

so that it points at your own simulation world file. If you don't know how to write a world file, ask an AI for help.
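
For orientation, the relevant lines in gazebo_sim.launch.py usually look something like the sketch below. The layout is assumed from the variable name used above, and 'my_room.world' is only a placeholder for your own world file; the surrounding code in your launch file may differ.

# Sketch of the relevant lines in gazebo_sim.launch.py (layout assumed;
# 'my_room.world' is a placeholder for your own world file)
from ament_index_python.packages import get_package_share_directory

urdf_tutorial_path = get_package_share_directory('fishbot_description')
default_world_path = urdf_tutorial_path + '/world/my_room.world'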

Understanding the topics

After launching the simulation, run ros2 topic list; you should see roughly the following topics:

/camera_sensor/camera_info
/camera_sensor/depth/camera_info
/camera_sensor/depth/image_raw
/camera_sensor/image_raw
/camera_sensor/points
/clock
/cmd_vel
/dynamic_joint_states
/fishbot_diff_drive_controller/transition_event
/fishbot_joint_state_broadcaster/transition_event
/imu
/joint_states
/odom
/parameter_events
/performance_metrics
/robot_description
/rosout
/scan
/tf
/tf_static

Of these, we use /camera_sensor/image_raw and /cmd_vel.

/camera_sensor/image_raw

We subscribe to this topic to receive camera images, then process the image data.

/cmd_vel

This topic carries the **linear and angular velocity commands that control the robot's motion**; after we finish processing the image, we publish our commands here to drive the robot.
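
For reference, here is a minimal, self-contained sketch of the geometry_msgs/Twist fields we set when publishing on /cmd_vel; the numbers are only illustrative, not the tutorial's tuned values.

# Sketch: the geometry_msgs/Twist fields used on /cmd_vel (values are illustrative)
from geometry_msgs.msg import Twist

twist = Twist()
twist.linear.x = 0.2    # forward speed in m/s
twist.angular.z = -0.1  # turn rate in rad/s (positive turns counter-clockwise)
# a node publishes this with something like: publisher.publish(twist)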

Key code

    def image_callback(self, msg):
        try:
            # Convert the ROS Image message to an OpenCV BGR image
            cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        except Exception as e:
            self.get_logger().error(f'Image conversion error: {e}')
            return

        # Color filtering (red spans two hue ranges in HSV)
        hsv = cv2.cvtColor(cv_image, cv2.COLOR_BGR2HSV)

        # Define the two red hue intervals
        lower_red1 = np.array([0, self.s_min, self.v_min])
        upper_red1 = np.array([self.h_min, 255, self.v_max])
        lower_red2 = np.array([self.h_max, self.s_min, self.v_min])
        upper_red2 = np.array([180, 255, self.v_max])

        # Build the masks and combine them
        mask1 = cv2.inRange(hsv, lower_red1, upper_red1)
        mask2 = cv2.inRange(hsv, lower_red2, upper_red2)
        mask = cv2.bitwise_or(mask1, mask2)

        # Morphological operations to remove noise
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.erode(mask, kernel, iterations=1)
        mask = cv2.dilate(mask, kernel, iterations=2)

        # Find contours in the mask
        contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

        if len(contours) > 0:
            # Pick the largest contour
            max_contour = max(contours, key=cv2.contourArea)
            ((x, y), radius) = cv2.minEnclosingCircle(max_contour)

            if radius > 10:  # filter out small regions
                # Draw the tracking result
                cv2.circle(cv_image, (int(x), int(y)), int(radius), (0, 255, 255), 2)

                # Compute the control command (simple proportional control)
                error = x - cv_image.shape[1] / 2
                kp = 0.01  # proportional gain

                twist = Twist()
                twist.linear.x = 0.2  # forward speed
                twist.angular.z = -float(error) * kp  # turning speed

                self.publisher_.publish(twist)

        # Show a debug window (for debugging only)
        cv2.imshow("Red Tracking", cv_image)
        cv2.waitKey(1)
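
The callback above is only one method: it relies on a CvBridge instance, HSV threshold attributes (self.h_min, self.s_min, and so on), a subscription to /camera_sensor/image_raw, and a publisher on /cmd_vel. Below is a hedged sketch of that surrounding node; the class name RedFollower and the threshold values are plausible assumptions for illustration, not the tutorial's exact code, and the image_callback method shown above drops in as a method of this class.

# Sketch of the surrounding node (names and threshold values are assumptions)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge
import cv2            # used by image_callback
import numpy as np    # used by image_callback


class RedFollower(Node):
    def __init__(self):
        super().__init__('red_follower')
        self.bridge = CvBridge()
        # HSV thresholds read by image_callback; assumed defaults for red
        self.h_min, self.h_max = 10, 170
        self.s_min = 100
        self.v_min, self.v_max = 100, 255
        # Receive raw camera frames from the Gazebo camera plugin
        self.subscription = self.create_subscription(
            Image, '/camera_sensor/image_raw', self.image_callback, 10)
        # Publish velocity commands that drive the robot
        self.publisher_ = self.create_publisher(Twist, '/cmd_vel', 10)

    # def image_callback(self, msg): ...  <- the method shown above goes here


def main(args=None):
    rclpy.init(args=args)
    node = RedFollower()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()
        cv2.destroyAllWindows()


if __name__ == '__main__':
    main()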