Forward-Looking Sonar Target Recognition and Localization (6): Code Walkthrough of Capturing and Transmitting Target Images

Forward-Looking Sonar Target Recognition and Localization (1): Fundamentals

Forward-Looking Sonar Target Recognition and Localization (2): The Target Recognition and Localization Module

Forward-Looking Sonar Target Recognition and Localization (3): Deployment to the Robot

Forward-Looking Sonar Target Recognition and Localization (4): Code Walkthrough of Launching the Recognition Module

Forward-Looking Sonar Target Recognition and Localization (5): Code Walkthrough of Modifying Sonar Parameters

Forward-Looking Sonar Target Recognition and Localization (6): Code Walkthrough of Capturing and Transmitting Target Images

Forward-Looking Sonar Target Recognition and Localization (7): Code Walkthrough of Recording Data Bags

Forward-Looking Sonar Target Recognition and Localization (8): Code Walkthrough of Inter-Module Communication

Forward-Looking Sonar Target Recognition and Localization (9): The Sonar Driver

        After a target is recognized, an image the size of the target's bounding box is cropped, compressed, and transmitted.

1. test_client.py

        python3 test_client.py 7 1

elif cmd == str(7):
    print("sent capture object img cmd...")
    arrBuff = bytearray(b'\xff\xbb\xff\xbb\xee\xff\xee\xff')

        In the protocol file, the command to start recognition and capture the target image is:

        The module therefore sends this packet to auv_server.
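As a minimal sketch of what test_client.py is doing here (the host and port are placeholders, not taken from the source), the 8-byte command can be built and sent over TCP like this:

```python
import socket

# Capture command from the protocol file: 4-byte head + 4-byte tail, no payload
CAPTURE_CMD = b'\xff\xbb\xff\xbb' + b'\xee\xff\xee\xff'

def send_capture_cmd(host: str, port: int) -> None:
    """Send the capture-object-image command to auv_server over TCP."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(CAPTURE_CMD)
```

The tail bytes `\xee\xff\xee\xff` are what center_server later uses to route the command.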

2. auv_server.py

def test_cmd(self):
    while True:
        self.test_conn, self.test_addr = self.test_server.accept()
        test_cmd_msg = self.test_conn.recv(1024)
        if len(test_cmd_msg)>0:
            self.conn.send(test_cmd_msg)

        The command is forwarded directly to center_server.

3. center_server.py

# send the control_center cmd from auv to the control_center module on nx
def recv_control_center_msg(self):
    while True:          
        cmd_msg = self.nx_client.recv(1024)
        if len(cmd_msg) > 3:
            # send the control_center cmd to the control_center module
            if cmd_msg[-4:] == b'\xee\xff\xee\xff':
                self.control_center_socket.send(cmd_msg)
            # send the sonar parameters cmd to the sonar module
            elif cmd_msg[-4:] == b'\xee\xaa\xee\xff':
                self.sonar_param_socket.send(cmd_msg)
            else:
                print("cmd from auv error, no such cmd...")

        A trailing '\xee\xff\xee\xff' marks a parameter-free command, which is forwarded to control_center.py.
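The dispatch rule above can be sketched as a small standalone function; the two tail signatures come from the snippet, everything else here is illustrative:

```python
def route_cmd(cmd_msg: bytes) -> str:
    """Return which module a command from the AUV should be forwarded to,
    based on its 4-byte tail, mirroring recv_control_center_msg."""
    if len(cmd_msg) <= 3:
        return "ignore"           # too short to carry a tail signature
    tail = cmd_msg[-4:]
    if tail == b'\xee\xff\xee\xff':
        return "control_center"   # parameter-free control command
    if tail == b'\xee\xaa\xee\xff':
        return "sonar"            # sonar-parameter command
    return "unknown"
```

For example, the capture command from test_client.py routes to control_center: `route_cmd(b'\xff\xbb\xff\xbb\xee\xff\xee\xff')` returns `"control_center"`.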

4. control_center.py

def rcev_cmd(self):
...

    # capture the object image and send it back to the auv
    elif recvmsg[0:4] == self.capture_object_img_msg[0:4]:
        # kill any previous capture subprocess, then launch a new one
        if self.capture_img_subprocess:
            self.capture_img_subprocess.kill()
        self.capture_img_subprocess = subprocess.Popen(self.capture_object_img_cmd, shell=True, executable="/bin/bash")

...

        The head bytes '\xff\xbb\xff\xbb' identify this as the command to start the capture program:

self.capture_object_img_cmd = 'ros2 topic pub --once /yolov5/capture_object_image std_msgs/msg/Bool data:\ true'

        This publishes a message on /yolov5/capture_object_image.

5. yolov5_sonar.py

        When a message arrives on the topic /yolov5/capture_object_image:

class Yolo_Dect(Node):
    def __init__(self):
    ...
        self.object_image_capture_sub = self.create_subscription(Bool, "/yolov5/capture_object_image", self.object_image_capture_callback, 1)
    ...

        
    def object_image_capture_callback(self, msg):
        if msg.data:
            self.object_image_capture_flag = True

        The image callback then grabs the frame to capture:

def image_callback(self, image):
    ...
    # capture the image to transport
    if self.object_image_capture_flag:
        self.image_capture = np.frombuffer(image.data, dtype=np.uint8).reshape(
                image.height, image.width, -1)
        self.capture_image_h = image.height
        self.capture_image_w = image.width
        # integer division: channel count must be an int
        self.capture_image_channel = len(image.data) // (image.height * image.width)

        Each target is then cropped and compression-encoded in turn. Every crop is at least 300 pixels in both height and width, set via image_min_length; image_quality (0-100) sets the JPEG compression quality, and the lower the quality, the smaller the compressed image. The result is then transmitted.
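The crop-sizing rule described above (small boxes padded out to image_min_length, larger ones given a fixed margin) can be sketched as a standalone helper; the names here are illustrative, not from the source:

```python
def crop_bounds(vmin, vmax, img_size, min_length=300, margin=50):
    """Compute [lo, hi) crop bounds along one axis, mirroring dectshow:
    boxes smaller than min_length/2 are centred and padded out to
    min_length; larger boxes get a fixed margin on each side.
    The result is clamped to the image extent."""
    if vmax - vmin < min_length / 2:
        centre = (vmin + vmax) / 2.0
        lo = int(centre - min_length / 2)
        hi = int(centre + min_length / 2)
    else:
        lo = int(vmin - margin)
        hi = int(vmax + margin)
    return max(0, lo), min(img_size, hi)

# a 40-px box in a 1000-px-wide image is padded to the full 300 px
assert crop_bounds(480, 520, 1000) == (350, 650)
```

Applied once for x against the image width and once for y against the image height, this reproduces the cut_xmin/cut_xmax/cut_ymin/cut_ymax logic below.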

        In the protocol file, the target-image packet is defined as:

def dectshow(self, org_img, boxs, sonar_azimuth, sonar_range):
    ...
    # tcp transport the cut and compressed object images data
    # ensure get the image
    if self.capture_image_h * self.capture_image_w > 0:
        object_image_num = len(self.objects_azimuth_range.object_azimuth_range)
        # ensure object in the image
        if object_image_num > 0:
        #if object_image_num > 1:   #ensure several object for test
            try:                    
                object_image_count = 0
                for object_a_r in self.objects_azimuth_range.object_azimuth_range:
                    # cut the object in the image
                    if object_a_r.xmax - object_a_r.xmin < (self.image_min_length / 2):
                        cut_xmin = int((object_a_r.xmin + object_a_r.xmax) / 2.0 - self.image_min_length / 2)
                        cut_xmax = int((object_a_r.xmin + object_a_r.xmax) / 2.0 + self.image_min_length / 2)
                    else:
                        cut_xmin = int(object_a_r.xmin - 50)
                        cut_xmax = int(object_a_r.xmax + 50)

                    if object_a_r.ymax - object_a_r.ymin < (self.image_min_length / 2):
                        cut_ymin = int((object_a_r.ymin + object_a_r.ymax) / 2.0 - self.image_min_length / 2)
                        cut_ymax = int((object_a_r.ymin + object_a_r.ymax) / 2.0 + self.image_min_length / 2)
                    else:
                        cut_ymin = int(object_a_r.ymin - 50)
                        cut_ymax = int(object_a_r.ymax + 50)
                    cut_image = self.image_capture[max(0, cut_ymin) : min(self.image_capture.shape[0], cut_ymax), max(0, cut_xmin) : min(self.image_capture.shape[1], cut_xmax)]
                            
                    # get the class, probability, azimuth and range of the object
                    img_buff = bytearray(b'\xff\xbb\xff\xbb')
                    img_buff += bytearray(object_image_num.to_bytes(4, byteorder='little'))
                    img_buff += bytearray(object_image_count.to_bytes(4, byteorder='little'))
                    object_class = self.object_name[object_a_r.class_name]
                    img_buff += bytearray(object_class.to_bytes(4, byteorder='little')) 
                    img_buff += struct.pack('<f', object_a_r.probability) 
                    img_buff += struct.pack('<f', object_a_r.object_azimuth)
                    img_buff += struct.pack('<f', object_a_r.object_range)

                    # get the height, width and channel of the object image
                    cut_img_height = cut_image.shape[0]
                    cut_img_width = cut_image.shape[1]
                    cut_img_channel = cut_image.shape[2]
                    img_buff += bytearray(cut_img_height.to_bytes(4, byteorder='little')) 
                    img_buff += bytearray(cut_img_width.to_bytes(4, byteorder='little')) 
                    img_buff += bytearray(cut_img_channel.to_bytes(4, byteorder='little'))                            
                            
                    # encode the object image
                    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), self.image_quality]
                    result, encode_image = cv2.imencode('.jpg', cut_image, encode_param)
                    # get object image data length
                    cut_image_data_len = len(encode_image)
                    img_buff += bytearray(cut_image_data_len.to_bytes(4, byteorder='little'))
                    # object image data
                    img_buff += bytearray(encode_image)

                    #print("the trans image length is", len(encode_image))
                        
                    img_buff += bytearray(b'\xff\xb0\xff\xb0')
                    self.yolo_client.sendall(img_buff)

                    #print("sent image " + str(object_image_count))
                    object_image_count += 1
                        
                self.capture_image_h = 0
                self.capture_image_w = 0
                self.image_capture = None
                self.object_image_capture_flag = False

            except Exception as e:
                self.capture_image_h = 0
                self.capture_image_w = 0
                self.image_capture = None
                print("image transport error... ", e)
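Putting the fields together, the per-object packet layout built above (head, counts, class/probability/azimuth/range, image shape, payload length, JPEG bytes, tail) can be sketched as a standalone builder; the JPEG bytes are passed in so no OpenCV is needed for the sketch:

```python
import struct

HEAD = b'\xff\xbb\xff\xbb'
TAIL = b'\xff\xb0\xff\xb0'

def build_object_packet(obj_num, obj_idx, obj_class, prob, azimuth, rng,
                        height, width, channel, jpeg_bytes):
    """Serialise one object image, mirroring the img_buff layout in dectshow.
    All integers are 4-byte little-endian; floats are little-endian float32."""
    buf = bytearray(HEAD)
    buf += struct.pack('<iii', obj_num, obj_idx, obj_class)
    buf += struct.pack('<fff', prob, azimuth, rng)
    buf += struct.pack('<iiii', height, width, channel, len(jpeg_bytes))
    buf += jpeg_bytes
    buf += TAIL
    return bytes(buf)

pkt = build_object_packet(1, 0, 2, 0.9, 15.0, 42.5, 300, 300, 3, b'\x00' * 10)
# fixed header is 4 + 10 * 4 = 44 bytes, then the payload, then the 4-byte tail
assert len(pkt) == 44 + 10 + 4
```

The 44-byte fixed header matches the offsets that auv_server.py unpacks on the receiving side.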

The target image data is then sent on to server_center.py.

6. server_center.py

# recv sonar img and send to auv
def rcv_sonar_img_msg(self):
    self.sonar_param_socket, self.sonar_param_addr = self.sonar_param_server.accept()
    while True:
        img_msg = self.sonar_param_socket.recv(65536)
        if img_msg[0:4] == b'\xff\xcc\xff\xcc':
            self.nx_client.sendall(img_msg)
        if img_msg == b'':
            self.sonar_param_socket, self.sonar_param_addr = self.sonar_param_server.accept()

The target images are then forwarded to auv_server.py.

7. auv_server.py

# receive the data from auv
def recv_msg(self):
    ...
    # object images
    if pkg_head == b'\xff\xbb\xff\xbb':
        # object num in one image capture
        object_num = struct.unpack('i',recv_msg[4:8])[0]

        # current object info in the objects of the captured image
        # (a bare object() cannot take attributes; SimpleNamespace,
        # from the standard-library types module, can)
        object_info = SimpleNamespace()
        cur_object_num = struct.unpack('i',recv_msg[8:12])[0]
        object_info.object_class = struct.unpack('i', recv_msg[12:16])[0]
        object_info.object_probability = struct.unpack('f', recv_msg[16:20])[0]
        object_info.object_azimuth = struct.unpack('f', recv_msg[20:24])[0]
        object_info.object_range = struct.unpack('f', recv_msg[24:28])[0]
        self.recv_images.object_info.append(object_info)
        # current object image info
        img_height = struct.unpack('i',recv_msg[28:32])[0]
        img_width = struct.unpack('i',recv_msg[32:36])[0]
        img_channel = struct.unpack('i',recv_msg[36:40])[0]
        img_data_len = struct.unpack('i',recv_msg[40:44])[0]

        img_data = recv_msg[44:44 + img_data_len]
        pkg_end = recv_msg[44 + img_data_len : 48 + img_data_len]
        #print(pkg_end)

        if pkg_end == b'\xff\xb0\xff\xb0':
            self.recv_images.object_image_data.append(img_data)

        if self.object_count < object_num:
            self.object_count += 1
                
        if self.object_count == object_num:
            self.display_image_flag = True
            self.object_count = 0
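The unpacking above can be exercised with a standalone parser that mirrors the same byte offsets. This is a sketch: the real receive loop also has to handle packets split across recv calls, which is omitted here.

```python
import struct

def parse_object_packet(msg: bytes) -> dict:
    """Parse one object-image packet using the offsets from recv_msg;
    returns the header fields plus the raw JPEG payload."""
    assert msg[0:4] == b'\xff\xbb\xff\xbb', "bad packet head"
    object_num, cur_index, object_class = struct.unpack('<iii', msg[4:16])
    probability, azimuth, rng = struct.unpack('<fff', msg[16:28])
    height, width, channel, data_len = struct.unpack('<iiii', msg[28:44])
    img_data = msg[44:44 + data_len]
    assert msg[44 + data_len:48 + data_len] == b'\xff\xb0\xff\xb0', "bad tail"
    return {"object_num": object_num, "index": cur_index,
            "class": object_class, "probability": probability,
            "azimuth": azimuth, "range": rng,
            "height": height, "width": width, "channel": channel,
            "image": img_data}

# round-trip a packet built with the sender's layout
sample = (b'\xff\xbb\xff\xbb' + struct.pack('<iii', 1, 0, 2)
          + struct.pack('<fff', 0.9, 15.0, 42.5)
          + struct.pack('<iiii', 300, 300, 3, 4)
          + b'JPEG' + b'\xff\xb0\xff\xb0')
info = parse_object_packet(sample)
assert info["object_num"] == 1 and info["image"] == b'JPEG'
```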

Finally, display_objects_image displays the target images.

In addition, the recognition module has one more function: streaming target information to the AUV in real time for aided navigation. See the protocol file for the target-information format.

To make log-based debugging easier, I have commented this feature out. If you do not need the image information, you can re-enable it in yolov5_sonar.py:

# self.yolo_client.sendall(pkg_buff)