25. Jetson Xavier NX: object detection and stream pushing with DeepStream 6.0


Basic idea: on the customer's dev board, set up DeepStream to run detection on a video stream and push the annotated video to a phone for real-time display. If the board's Python environment is broken, the model conversion can be done on a PC instead, without worrying about the PC's driver or TensorRT versions; once converted, the model is simply ported over to the board, since DeepStream does its inference purely in C++. Later I plan to add HTTP communication paired with a UI, so the terminal side handles drawing and data mapping while the board does only inference and acceleration.

First, set up a VNC connection or a TTL serial console.

1. State of the environment after flashing the board

li@li-desktop:~$ sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
li@li-desktop:~$ sudo apt-get install libssl-dev
li@li-desktop:~$ sudo apt-get install libgstrtspserver-1.0-0 libjansson4

li@li-desktop:~$ uname -a
Linux li-desktop 4.9.253-tegra #1 SMP PREEMPT Mon Jul 26 12:19:28 PDT 2021 aarch64 aarch64 aarch64 GNU/Linux
li@li-desktop:~$ jetson_release -v
- NVIDIA Jetson Xavier NX (Developer Kit Version)
* Jetpack 4.6 [L4T 32.6.1]
* NV Power Mode: MODE_20W_6CORE - Type: 8
* jetson_stats.service: active
- Board info:
* Type: Xavier NX (Developer Kit Version)
* SOC Family: tegra194 - ID:25
* Module: P3668 - Board: P3509-000
* Code Name: jakku
* CUDA GPU architecture (ARCH_BIN): 7.2
* Serial Number: 1422521039095
- Libraries:
* CUDA: 10.2.300
* cuDNN: 8.2.1.32
* TensorRT: 8.0.1.6
* Visionworks: 1.6.0.501
* OpenCV: 4.1.1 compiled CUDA: NO
* VPI: ii libnvvpi1 1.1.15 arm64 NVIDIA Vision Programming Interface library
* Vulkan: 1.2.70
- jetson-stats:
* Version 3.1.1
* Works on Python 3.6.9
li@li-desktop:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
li@li-desktop:~$ python3
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> import tensorrt
>>> exit()
li@li-desktop:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
li@li-desktop:~$ python3
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pycuda
>>>

The board's overall performance parameters, viewed with jtop (screenshot omitted).

2. The basic libraries are all in place; now install DeepStream (Motrix or axel works for the download)

li@li-desktop:~$ cd sxj731533730/
li@li-desktop:~/sxj731533730$ axel -n 100 https://developer.download.nvidia.com/assets/Deepstream/DeepStream_6.0.1/deepstream_sdk_v6.0.1_jetson.tbz2
li@li-desktop:~/sxj731533730$ sudo tar xpvf deepstream_sdk_v6.0.1_jetson.tbz2 -C /
li@li-desktop:~/sxj731533730$ cd /opt/nvidia/deepstream/deepstream-6.0/
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ./install.sh
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ldconfig
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo vim /etc/ld.so.conf
/opt/nvidia/deepstream/deepstream-6.0/lib
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ldconfig

Verify that DeepStream installed successfully:

li@li-desktop:~$ deepstream-app --version-all
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.0
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3

3. Test DeepStream stream pulling

li@li-desktop:~$ sudo vim /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

In [sink0], set enable to 0.

In [sink1], set enable to 1.

Run the DeepStream test:

li@li-desktop:~$ sudo deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
Opening in BLOCKING MODE
0:00:04.888175377 10973 0x38827120 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 6]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
....
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
**PERF: 66.34 (64.04) 66.34 (64.04) 66.34 (64.04) 66.34 (64.04)
**PERF: 58.75 (59.57) 58.75 (59.57) 58.75 (59.57) 58.75 (59.57)
**PERF: 59.46 (59.49) 59.46 (59.49) 59.46 (59.49) 59.46 (59.49)
**PERF: 63.21 (60.67) 63.21 (60.67) 63.21 (60.67) 63.21 (60.67)
**PERF: 65.88 (61.96) 65.88 (61.96) 65.88 (61.96) 65.88 (61.96)
** INFO: <bus_callback:217>: Received EOS. Exiting ...

Quitting
[NvMultiObjectTracker] De-initialized
App run successful

4. First test tensorrtx and convert the model (this section is optional; to test only DeepStream, skip ahead to the DeepStream configuration section)

For training the model itself, see the reference. The YOLOv5 version used is tag 6.0.

li@li-desktop:~/sxj731533730$ git clone https://github.com/wang-xinyu/tensorrtx.git
Cloning into 'tensorrtx'...
remote: Enumerating objects: 1883, done.
remote: Counting objects: 100% (551/551), done.
remote: Compressing objects: 100% (87/87), done.
remote: Total 1883 (delta 504), reused 464 (delta 464), pack-reused 1332
Receiving objects: 100% (1883/1883), 1.63 MiB | 3.14 MiB/s, done.
Resolving deltas: 100% (1226/1226), done.
li@li-desktop:~/sxj731533730$ cd yolov5/
li@li-desktop:~/sxj731533730/yolov5$ cp ../tensorrtx/yolov5/gen_wts.py .
li@li-desktop:~/sxj731533730/yolov5$ python3 gen_wts.py -w run/exp3/weights/best.pt -o yolov5s.wts
YOLOv5 🚀 v6.1-124-g8c420c4 torch 1.8.0 CPU

li@li-desktop:~/sxj731533730/yolov5$ ls
CONTRIBUTING.md exp3 LICENSE run utils
data export.py models setup.cfg val.py
detect.py gen_wts.py README.md train.py yolov5s.wts
Dockerfile hubconf.py requirements.txt tutorial.ipynb
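What gen_wts.py produces is tensorrtx's plain-text .wts format: a count line, then one line per tensor holding its name, element count, and each float value as big-endian IEEE-754 hex. A minimal stdlib-only sketch of that serialization (the format details are my reading of tensorrtx, not something shown in this log):

```python
import struct

def write_wts(weights, path):
    """Serialize {name: [float, ...]} to tensorrtx's .wts text format.

    Assumed layout: first line is the tensor count; each following line is
    "<name> <num_elements> <hex> <hex> ..." with floats packed big-endian.
    """
    with open(path, "w") as f:
        f.write(f"{len(weights)}\n")
        for name, values in weights.items():
            hexes = " ".join(struct.pack(">f", v).hex() for v in values)
            f.write(f"{name} {len(values)} {hexes}\n")

# Toy example with a made-up tensor name
write_wts({"model.0.conv.weight": [1.0, -2.5]}, "demo.wts")
```

The real gen_wts.py walks the PyTorch state dict and flattens every tensor this way; the loader on the C++ side reads the lines back with the same convention.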

Test TensorRT + YOLOv5 on the Jetson Xavier NX:

li@li-desktop:~/sxj731533730/yolov5$ cd ..
li@li-desktop:~/sxj731533730$ cd tensorrtx/
li@li-desktop:~/sxj731533730/tensorrtx$ cd yolov5
li@li-desktop:~/sxj731533730/tensorrtx/yolov5$ mkdir build
li@li-desktop:~/sxj731533730/tensorrtx/yolov5$ cd build/
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$ sudo vim ../yololayer.h
//static constexpr int CLASS_NUM = 80;
static constexpr int CLASS_NUM = 6;
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$ cmake ..
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found CUDA: /usr/local/cuda (found version "10.2")
-- Found OpenCV: /usr (found version "4.1.1")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/li/sxj731533730/tensorrtx/yolov5/build
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$ make -j8
[ 16%] Building NVCC (Device) object CMakeFiles/myplugins.dir/myplugins_generated_yololayer.cu.o
[100%] Linking CXX executable yolov5
[100%] Built target yolov5
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$ ./yolov5 -s ../../../yolov5/yolov5s.wts yolov5s.engine s
Loading weights: ../../../yolov5/yolov5s.wts
Building engine, please wait for a while...
[04/12/2022-21:13:45] [W] [TRT] Detected invalid timing cache, setup a local cache instead
Build engine successfully!
li@li-desktop:~/sxj731533730/tensorrtx/yolov5/build$ ./yolov5 -d yolov5s.engine ../custom/
inference time: 13ms
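The CLASS_NUM edit in yololayer.h above directly sets the size of the detection output: at every grid cell, each of the 3 anchors predicts 4 box coordinates, 1 objectness score, and CLASS_NUM class scores. A quick sketch of that arithmetic (standard YOLOv5 strides assumed):

```python
def yolo_output_elements(num_classes, input_size=640, strides=(8, 16, 32),
                         anchors_per_scale=3):
    """Total floats produced by YOLOv5's detection heads for one image.

    Each anchor at each grid cell predicts 4 box coords + 1 objectness
    + num_classes class scores.
    """
    per_anchor = 5 + num_classes
    cells = sum((input_size // s) ** 2 for s in strides)
    return cells * anchors_per_scale * per_anchor

# COCO (CLASS_NUM = 80) vs. the custom 6-class model built here
print(yolo_output_elements(80), yolo_output_elements(6))
```

For a 640x640 input that is 25200 anchor predictions per image; changing CLASS_NUM from 80 to 6 shrinks each prediction from 85 to 11 floats, which is why the engine must be rebuilt after the edit.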

The test labels are private, so I won't show them here (screenshot omitted).

5. Configure and run DeepStream

li@li-desktop:~/sxj731533730$ git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
Cloning into 'DeepStream-Yolo'...
remote: Enumerating objects: 570, done.
remote: Counting objects: 100% (492/492), done.
remote: Compressing objects: 100% (375/375), done.
remote: Total 570 (delta 303), reused 222 (delta 102), pack-reused 78
Receiving objects: 100% (570/570), 194.32 KiB | 710.00 KiB/s, done.
Resolving deltas: 100% (334/334), done.
li@li-desktop:~/sxj731533730$ cd DeepStream-Yolo/
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ ls
config_infer_primary.txt deepstream_app_config.txt readme.md
config_infer_primary_yolor.txt docs utils
config_infer_primary_yoloV2.txt labels.txt
config_infer_primary_yoloV5.txt nvdsinfer_custom_impl_Yolo
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ cp utils/gen_wts_yoloV5.py ../yolov5/
li@li-desktop:~/sxj731533730/yolov5$ python3 gen_wts_yoloV5.py -w run/exp3/weights/yolov5s.pt
YOLOv5 🚀 v6.1-124-g8c420c4 torch 1.8.0 CPU
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ cp ../yolov5/run/exp3/weights/yolov5s.cfg .
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ cp ../yolov5/run/exp3/weights/yolov5s.wts .

Test with my own video: I packed my images into a video, which a simple image2video script handles.
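The image-to-video step can also be done with ffmpeg; here is a sketch that only builds the command line (the frames directory, naming pattern, and fps are assumptions, adjust them to your data):

```python
import os

def image2video_cmd(frame_dir, out_path, fps=25, pattern="%06d.jpg"):
    """Build (not run) an ffmpeg command packing numbered frames into a video.

    Assumes frames are named 000001.jpg, 000002.jpg, ... inside frame_dir.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", os.path.join(frame_dir, pattern),
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out_path,
    ]

cmd = image2video_cmd("frames", "example.mp4")
# Run it with: subprocess.run(cmd, check=True)   (requires ffmpeg on PATH)
print(" ".join(cmd))
```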

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ sudo vim config_infer_primary_yoloV5.txt
#custom-network-config=yolov5n.cfg
#model-file=yolov5n.wts
custom-network-config=yolov5s.cfg
model-file=yolov5s.wts


li@li-desktop:~/sxj731533730/DeepStream-Yolo$ sudo vim deepstream_app_config.txt
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
uri=file:///home/li/sxj731533730/DeepStream-Yolo/example.mp4 # using image2video.py to test video
#config-file=config_infer_primary.txt
config-file=config_infer_primary_yoloV5.txt

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ sudo vim labels.txt
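labels.txt must contain one class name per line, in the same index order used during training, and its line count has to match CLASS_NUM (6 here). A tiny sketch with hypothetical class names:

```python
# Hypothetical class names; replace with the labels used in training, in order.
classes = ["person", "car", "truck", "bus", "bicycle", "dog"]

with open("labels.txt", "w") as f:
    f.write("\n".join(classes) + "\n")

# Sanity check: line count must equal CLASS_NUM
assert len(open("labels.txt").read().splitlines()) == len(classes)
```

A mismatch between this file and CLASS_NUM shows up as wrong or missing label text on the OSD, so the check is worth keeping.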

6. Run the app

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt

Using winsys: x11
0:00:00.676080735 22860 0x1a939530 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files

Loading pre-trained weights
Loading weights of yolov5s complete
Total weights read: 7054819
Building YOLO network
.....
**PERF: 33.01 (33.10)
**PERF: 33.09 (33.08)
**PERF: 33.14 (33.10)
**PERF: 33.14 (33.11)
**PERF: 33.10 (33.10)
**PERF: 33.10 (33.10)
**PERF: 33.12 (33.10)
**PERF: 32.99 (33.09)
**PERF: 33.08 (33.10)
**PERF: 33.07 (33.09)
**PERF: 33.09 (33.10)
**PERF: 33.04 (33.09)

**PERF: FPS 0 (Avg)
**PERF: 33.06 (33.09)
** INFO: <bus_callback:217>: Received EOS. Exiting ...

Quitting
App run successful

Screenshot of the test video (image omitted).

The video can also be converted to raw H.264 for inference. I suddenly realized how silly my earlier pipeline was: the video team pulled H.264 from a Hikvision camera, converted it to RGB frames, and wrote those into shared memory, and I then read them back out with OpenCV before handing them to DeepStream + TensorRT. The H.264 could simply have been written to shared memory and given to DeepStream directly.

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ ffmpeg -i example.avi -f h264 -vcodec libx264 -s 1280x720 -r 25 example.264

Change the config to read the H.264 file:

uri=file:///home/li/sxj731533730/DeepStream-Yolo/example.264
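The .264 file produced this way is a raw H.264 Annex-B elementary stream with no container, so it should begin with a NAL start code (00 00 00 01 or 00 00 01). A quick sanity-check sketch:

```python
def looks_like_annexb(first_bytes: bytes) -> bool:
    """Heuristic: Annex-B H.264 elementary streams begin with a NAL start code."""
    return (first_bytes.startswith(b"\x00\x00\x00\x01")
            or first_bytes.startswith(b"\x00\x00\x01"))

# e.g.: with open("example.264", "rb") as f: print(looks_like_annexb(f.read(4)))
```

If the check fails, the file is probably still inside a container (AVI/MP4) and the uri-based decoder path, not the raw-stream path, is what applies.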


7. Inspect the attached camera and the resolutions it supports

li@li-desktop:~$ v4l2-ctl --list-devices
HD WebCam 2MP (usb-3610000.xhci-2.3):
/dev/video0
li@li-desktop:~$ v4l2-ctl --list-formats-ext --device=0
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : Motion-JPEG
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x360
Interval: Discrete 0.033s (30.000 fps)

Index : 1
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUYV 4:2:2
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x360
Interval: Discrete 0.033s (30.000 fps)

Index : 2
Type : Video Capture
Pixel Format: 'H264' (compressed)
Name : H.264
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x360
Interval: Discrete 0.033s (30.000 fps)
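The v4l2-ctl listing above can also be parsed programmatically when picking a camera-width/camera-height pair for [source1]; a small sketch, run against a trimmed copy of that output:

```python
import re

def parse_v4l2_formats(text):
    """Parse `v4l2-ctl --list-formats-ext` output into {fourcc: [(w, h), ...]}."""
    formats, current = {}, None
    for line in text.splitlines():
        m = re.search(r"Pixel Format: '(\w+)'", line)
        if m:
            current = m.group(1)
            formats[current] = []
            continue
        m = re.search(r"Size: Discrete (\d+)x(\d+)", line)
        if m and current:
            formats[current].append((int(m.group(1)), int(m.group(2))))
    return formats

# Trimmed sample of the listing shown above
sample = """
Pixel Format: 'YUYV'
Size: Discrete 640x480
Size: Discrete 640x360
Pixel Format: 'MJPG' (compressed)
Size: Discrete 1920x1080
"""
print(parse_v4l2_formats(sample))
```

Here the YUYV path tops out at 640x480, which is why [source1] below uses camera-width=640 and camera-height=480.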

li@li-desktop:~$ cd sxj731533730/DeepStream-Yolo/
li@li-desktop:~/sxj731533730/DeepStream-Yolo$ sudo vim /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source2_csi_usb_dec_infer_resnet_int8.txt

The full file is pasted below; compare it against the original yourself.

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ cat /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source2_csi_usb_dec_infer_resnet_int8.txt 

The modified file contents (diff against the stock file to spot the changes):


[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=1
width=1280
height=720

[source0]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-csi-sensor-id=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary.txt

[tests]
file-loop=0

Test: pull frames from the USB camera and push the stream out

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source2_csi_usb_dec_infer_resnet_int8.txt

Then pull and display the stream on a Windows 11 PC with ffplay (first set up the ffmpeg environment on Windows 11, so that ffmpeg and ffplay work from cmd):

C:\Users\Administrator>ffplay rtsp://li:27122@192.168.10.188:8554/ds-test

The pulled stream, displayed (screenshot omitted).

Two windows side by side: the Jetson desktop shown over VNC, and the stream pulled locally on Windows 11 with ffmpeg; since this is the plain preview, no detection boxes are drawn (screenshot omitted).

8. First test my own model on a local video: read, detect, push (take it slowly, one step at a time)

li@li-desktop:~/sxj731533730/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt

Modify deepstream_app_config.txt as follows (diff against the stock file to spot the changes):

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///home/li/sxj731533730/DeepStream-Yolo/example.264
num-sources=1
gpu-id=0
cudadec-memtype=0



[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400


[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0
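Since these app configs are flat INI-style text, flipping which sources and sinks are enabled can be scripted instead of edited in vim each time; a sketch using a line-based edit that preserves comments (the demo file name is made up):

```python
def set_enable(path, section, value):
    """Set the `enable=` key in one section of a DeepStream-style config,
    preserving comments and formatting (plain line edit, no INI library)."""
    out, in_section = [], False
    for line in open(path).read().splitlines():
        stripped = line.strip()
        if stripped.startswith("["):
            in_section = stripped == f"[{section}]"
        elif in_section and stripped.startswith("enable="):
            line = f"enable={value}"
        out.append(line)
    with open(path, "w") as f:
        f.write("\n".join(out) + "\n")

# Demo: disable the local EGL window, keep the RTSP sink enabled
demo = "demo_config.txt"
with open(demo, "w") as f:
    f.write("[sink0]\nenable=1\ntype=2\n\n[sink2]\nenable=1\ntype=4\n")
set_enable(demo, "sink0", 0)
```

A deliberately dumb text edit is used here rather than configparser, which would rewrite the whole file and drop the # comment lines DeepStream configs rely on for documentation.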

Test screenshots (images omitted).

9. Detect and stream from the USB camera, with the local display window disabled

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=0
type=3
uri=file:///home/li/sxj731533730/DeepStream-Yolo/example.264
num-sources=1
gpu-id=0
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
num-sources=1
gpu-id=0
cudadec-memtype=0

#[sink0]
#enable=1
#type=2
#sync=0
#gpu-id=0
#nvbuf-memory-type=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV5.txt

[tests]
file-loop=0

The displayed picture, pulled with ffplay (screenshot omitted).

10. Write a PyQt UI. A simple one first; the full version will be released later.

Source:

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import sys

import cv2
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtGui import QPalette, QBrush, QPixmap
from PyQt5.QtWidgets import QApplication


class Ui_MainWindow(QtWidgets.QWidget):
    def __init__(self, parent=None):
        super(Ui_MainWindow, self).__init__(parent)
        self.num = 1
        # self.face_recong = face.Recognition()
        self.timer_camera = QtCore.QTimer()
        self.cap = None
        self.CAM_NUM = 0
        self.set_ui()
        self.slot_init()
        self.__flag_work = 0
        self.x = 0
        self.count = 0

    def set_ui(self):
        self.__layout_main = QtWidgets.QHBoxLayout()
        self.__layout_fun_button = QtWidgets.QVBoxLayout()
        self.__layout_data_show = QtWidgets.QVBoxLayout()

        self.button_open_camera = QtWidgets.QPushButton(u'Start')
        self.button_pause = QtWidgets.QPushButton(u'Pause')
        self.button_close = QtWidgets.QPushButton(u'Stop')

        # Button styling
        for button in (self.button_open_camera, self.button_pause, self.button_close):
            button.setStyleSheet("QPushButton{color:black}"
                                 "QPushButton:hover{color:red}"
                                 "QPushButton{background-color:rgb(78,255,255)}"
                                 "QPushButton{border:2px}"
                                 "QPushButton{border-radius:10px}"
                                 "QPushButton{padding:2px 4px}")
            button.setMinimumHeight(50)

        # Move the window to (500, 500) on screen
        self.move(500, 500)

        # Display area
        self.label_show_camera = QtWidgets.QLabel()
        self.label_move = QtWidgets.QLabel()
        self.label_move.setFixedSize(100, 100)

        self.label_show_camera.setFixedSize(641, 481)
        self.label_show_camera.setAutoFillBackground(False)

        self.__layout_fun_button.addWidget(self.button_open_camera)
        self.__layout_fun_button.addWidget(self.button_pause)
        self.__layout_fun_button.addWidget(self.button_close)
        self.__layout_fun_button.addWidget(self.label_move)

        self.__layout_main.addLayout(self.__layout_fun_button)
        self.__layout_main.addWidget(self.label_show_camera)

        self.setLayout(self.__layout_main)
        self.label_move.raise_()
        self.setWindowTitle(u'Camera')

        # Optionally set a background image:
        # palette1 = QPalette()
        # palette1.setBrush(self.backgroundRole(), QBrush(QPixmap('background.png')))
        # self.setPalette(palette1)

    def slot_init(self):
        self.button_open_camera.clicked.connect(self.button_open_camera_click)
        self.button_pause.clicked.connect(self.button_pause_camera_click)
        self.timer_camera.timeout.connect(self.show_camera)
        self.button_close.clicked.connect(self.close)

    def button_pause_camera_click(self):
        self.timer_camera.blockSignals(False)
        if self.timer_camera.isActive() and self.num % 2 == 1:
            self.button_pause.setText(u'Pause')
            self.num += 1
            cv2.imwrite("screenshot.jpg", self.image)
            self.timer_camera.blockSignals(True)
        else:
            self.num += 1
            self.button_pause.setText(u'Resume')

    def button_open_camera_click(self):
        if not self.timer_camera.isActive():
            # flag = self.cap.open(self.CAM_NUM)
            self.cap = cv2.VideoCapture("F:\\project_17\\test.avi")
            if not self.cap.isOpened():
                QtWidgets.QMessageBox.warning(
                    self, u"Warning", u"Please check the camera connection",
                    buttons=QtWidgets.QMessageBox.Ok,
                    defaultButton=QtWidgets.QMessageBox.Ok)
            else:
                self.timer_camera.start(30)
                self.button_open_camera.setText(u'Close camera')
        else:
            self.timer_camera.stop()
            self.cap.release()
            self.label_show_camera.clear()
            self.button_open_camera.setText(u'Open camera')

    def show_camera(self):
        flag, self.image = self.cap.read()
        if not flag:
            return
        self.show = cv2.resize(self.image, (640, 480))
        self.show = cv2.cvtColor(self.show, cv2.COLOR_BGR2RGB)
        showImage = QtGui.QImage(self.show.data, self.show.shape[1],
                                 self.show.shape[0], QtGui.QImage.Format_RGB888)
        self.label_show_camera.setPixmap(QtGui.QPixmap.fromImage(showImage))

    def closeEvent(self, event):
        ok = QtWidgets.QPushButton()
        cancel = QtWidgets.QPushButton()

        msg = QtWidgets.QMessageBox(QtWidgets.QMessageBox.Warning, u"Close", u"Really quit?")
        msg.addButton(ok, QtWidgets.QMessageBox.ActionRole)
        msg.addButton(cancel, QtWidgets.QMessageBox.RejectRole)
        ok.setText(u'OK')
        cancel.setText(u'Cancel')
        msg.exec_()
        # exec_() returns a button index, not a role, so check the clicked button
        if msg.clickedButton() is cancel:
            event.ignore()
        else:
            if self.timer_camera.isActive():
                self.timer_camera.stop()
            event.accept()


if __name__ == "__main__":
    App = QApplication(sys.argv)
    ex = Ui_MainWindow()
    ex.show()
    sys.exit(App.exec_())

The complete code is still to come.

11. Next step: hook up six cameras, pull all their streams, and realize the full plan (photo omitted).

To be continued.

Reference:

How to call USB and CSI cameras in deepstream-app (elecfans.com / 电子发烧友网)

