Running the Plumerai People Detection demo
Installing and running the Plumerai People Detection demo can be done in four simple steps, with two optional extra steps for video files and RTSP streams.
Step 0: Understand your system

If you run into issues with this step, visit the troubleshooting page.
The Plumerai People Detection demo works on Linux and on macOS. Depending on the operating system you are using, additional details might be required. Make a note of the following information, which might be needed in the next steps.

On Linux, open a terminal and run the `uname -a` command. The People Detection demo will only work if the output includes `x86_64` (i.e. a 64-bit Intel/AMD machine), `aarch64` (i.e. a 64-bit Arm machine), or `armv7l` (i.e. a 32-bit Arm machine). In all other cases, please contact Plumerai and include the output of the `uname -a` and `getconf LONG_BIT` commands.

Note that `uname -a` prints the kernel architecture, which might differ from the user-space architecture. In particular, on a 32-bit Raspberry Pi OS the kernel might be 64-bit while the user-space is 32-bit. In that case `uname -a` will print `aarch64`, while `getconf LONG_BIT` will print `32`, and thus the `armv7l` instructions on this page should be used.
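As a quick sketch, the Linux checks above can be combined into a small script; the mapping from command output to instruction set simply follows the rules just described:

```shell
# Determine which instructions on this page apply to this Linux machine.
arch="$(uname -m)"            # kernel architecture, e.g. x86_64 / aarch64 / armv7l
bits="$(getconf LONG_BIT)"    # user-space word size: 64 or 32

if [ "$arch" = "x86_64" ]; then
  echo "Follow the Linux x86-64 instructions"
elif [ "$arch" = "aarch64" ] && [ "$bits" = "64" ]; then
  echo "Follow the Linux aarch64 instructions"
else
  # Covers armv7l, and aarch64 kernels running a 32-bit user-space
  echo "Follow the Linux armv7l instructions"
fi
```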
The demo application uses the open-source GStreamer software, which supports different camera stacks. By default on Linux we will use the Video4Linux / V4L camera stack through GStreamer. There is one exception: if you use the PiCam camera (e.g. on a Raspberry Pi), we will use the libcamera camera stack instead.
On Apple's macOS, the People Detection demo will work with both the older Intel x86-64 models and the newer Apple Silicon Mx arm64 models. In case of the older Intel-based models, follow the instructions in the next steps under 'macOS x86-64'. For M1/M2 or newer models, follow the instructions under 'macOS arm64'.
Step 1: Install GStreamer

If you run into issues with this step, visit the troubleshooting page.
The demo application uses the open-source GStreamer software. This software is not developed by Plumerai, but it is a requirement to run the Plumerai People Detection demo. It can be installed as follows, depending on your system (see step 0 above).
If you use the Video4Linux / V4L camera stack (the default on Linux, see step 0), run the following command in a terminal window:
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-x \
gstreamer1.0-gl gstreamer1.0-plugins-base-apps v4l-utils
If you use the libcamera camera stack (i.e. with a PiCam on a Raspberry Pi), run the following commands in a terminal window:
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-x \
gstreamer1.0-gl gstreamer1.0-plugins-base-apps
sudo apt install libyaml-dev python3 python3-pip python3-yaml python3-ply \
python3-jinja2 ninja-build git build-essential libgnutls28-dev libssl-dev openssl
pip3 install meson
git clone https://git.libcamera.org/libcamera/libcamera.git
cd libcamera
git checkout 668a5e674aed65b8982b449b4bed58ff7e3e1413 # corresponds to v0.1.0
meson setup build
sudo ninja -C build install
After this install, we have to update the dynamic linker cache to let the system know about the newly installed libraries:

sudo ldconfig

If you forget to run this, you can expect errors saying that `libcamera.so.0.1` was not found. To test if the plugin is properly installed, you can run the following commands:
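Assuming the standard element name `libcamerasrc` provided by the libcamera GStreamer plugin (an assumption, since the exact commands are not listed here), a typical check is:

```shell
# Print the capabilities of the libcamera source element;
# an error here means GStreamer cannot find the plugin.
gst-inspect-1.0 libcamerasrc
```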
In case of other Linux distributions, we refer to the official GStreamer documentation. Make sure to install both the base GStreamer software and all the plugin sets: `good`, `bad`, and `ugly`.
Install two macOS universal packages as listed on the official download page:
1. The runtime package.
2. The development package.
To test your installation, you can run the following GStreamer test command in a terminal window:
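A standard GStreamer smoke test for this, assuming a default video sink is available on your system, is:

```shell
# Render a generated test pattern to a window
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
```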
This should display a test video source in a small window: a few different colours and a moving black/white area in the bottom-right.
Step 2: Install the demo

If you run into issues with this step, visit the troubleshooting page.
Now it is time to install the Plumerai People Detection demo itself. First unpack the contents of `plumerai_video_intelligence_demo.zip` to a folder of your own choice. Then, follow the instructions specific to your system:
Open a terminal window, navigate to the folder where the package was unzipped using `cd`, and then run the command for your system, after which `plumerai_demo` can be executed.
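As a sketch, assuming the folder layout shown later in step 3C for a Linux aarch64 system, making the binary executable would look like the following (the directory name is taken from step 3C; adjust it for your platform):

```shell
cd plumerai_video_intelligence/demo/linux_aarch64/
# Mark the demo binary as executable so it can be run
chmod +x plumerai_demo
```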
Step 3: Running with a camera

The Plumerai People Detection software is designed to be used with live camera data. This section covers that use case. However, it can also be used with video files from disk (step 4) or an RTSP stream (step 5); in those cases, step 3 can be skipped.
To make the demo run, we first need to determine the camera device, input format, and resolution.
Step 3A: Selecting the camera

If you run into issues with this step, visit the troubleshooting page.
Some systems might have multiple cameras attached. The demo therefore requires the user to select a camera to use. You can run the `gst-device-monitor-1.0 Video` command in a terminal to see which devices you have attached. The Plumerai demo application has a command-line argument `input_source`, which can be determined as follows:

Inspect the output of `gst-device-monitor-1.0 Video` to find the name of the camera under `device.path`. The first camera is typically at `/dev/video0`, and subsequent cameras at `/dev/videoN` where `N` is 1, 2 or higher. If e.g. `/dev/video1` does not work, try `/dev/video2`, since sometimes one index is skipped. This name should be supplied to the `input_source` argument in step 3C.
When using a PiCam, the demo currently does not support camera selection: it assumes there is only one PiCam attached. For the `input_source` argument in step 3C, use `PiCamV2` or `PiCamV3` depending on your model.
On macOS the demo simply requires an integer index for the camera ID as the `input_source` argument in step 3C. So for the first camera this will simply be `0`, and subsequent cameras are `1`, `2`, etc. Note that there might also be virtual cameras set up, so the first actual camera might not have ID 0. Use `gst-device-monitor-1.0 Video` to find the indices of the cameras to use.
To test your camera selection, you can run the following GStreamer test command in a terminal window:
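On Linux with the V4L stack, a typical preview pipeline built from standard GStreamer elements (the exact command is an assumption, as it is not listed here) is:

```shell
# Preview the selected camera; <YOUR_CAMERA> is a placeholder for the device
gst-launch-1.0 v4l2src device=<YOUR_CAMERA> ! videoconvert ! autovideosink
```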
Here `<YOUR_CAMERA>` needs to be changed to e.g. `/dev/video0` for the first camera.

Step 3B: Selecting the video input
If you run into issues with this step, visit the troubleshooting page.
Once the camera is chosen, the input format and resolution (width and height) need to be selected. Again, running the `gst-device-monitor-1.0 Video` command in a terminal can provide this information. For example, the command might output:
name : Integrated_Webcam_HD: Integrate
class : Video/Source
caps : image/jpeg, width=1280, height=720, framerate=30/1
image/jpeg, width=960, height=540, framerate=30/1
image/jpeg, width=640, height=480, framerate=30/1
video/x-raw, format=YUY2, width=640, height=480, framerate=30/1
video/x-raw, format=YUY2, width=320, height=240, framerate=30/1
In this example, two camera input formats are supported: compressed JPEG and raw YUY2, and several input resolutions are supported, of which 1280x720 in JPEG-mode is the highest.
The higher the input resolution, the better the detection results can become. However, note that this might slow down the entire demo application, because the camera capture and video-displaying might take up more resources. The frame rate displayed on screen is purely for the Plumerai People Detection algorithm itself, and does not count camera capture or displaying of the results.
The `--camera_input_format` argument in step 3C can be set to one of the supported formats according to `gst-device-monitor-1.0 Video`. However, there are system-specific restrictions: on some systems the argument can be set to either `YUY2` or `JPEG`, while on others it has to be set to `YUY2`.
Step 3C: Running the demo

If you run into issues with this step, visit the troubleshooting page.
After you have made a note of the camera device, video format, and width and height from the previous steps, you can run the demo as follows:
Open a terminal window, navigate to the folder where the package was unzipped using `cd`, and then run the commands for your system. For example, on Linux aarch64:
cd plumerai_video_intelligence/demo/linux_aarch64/
export GST_PLUGIN_PATH=/usr/local/lib/aarch64-linux-gnu/
./plumerai_demo <picam_model> <video_width> <video_height> --camera_input_format YUY2
If you use a PiCam V3, make sure to pass `PiCamV3` on the command-line, because this enables auto-focus mode. The demo will also run with `PiCam` or `PiCamV2` as argument, but the camera will keep its last-known focus setting, yielding bad results.

The demo supports additional optional arguments and has a built-in 'help' functionality. To see all options and documentation, run the demo binary as `./plumerai_demo --help`.
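For example, on a Linux x86-64 machine, using the first V4L camera from step 3A and the 1280x720 JPEG mode from step 3B, an invocation might look like the following (the concrete values here are illustrative, not prescriptive):

```shell
# Camera device, width, height, and format as determined in steps 3A and 3B
./plumerai_demo /dev/video0 1280 720 --camera_input_format JPEG
```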
Step 4 (optional): Running with a video file

If you run into issues with this step, visit the troubleshooting page.
Optionally, the Plumerai People Detection demo can be used with a pre-recorded video file instead of live camera input. In this case, locate your video file, open a terminal window, navigate to the folder where the package was unzipped using `cd`, and run the demo with the video file as input.
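As a hypothetical example, assuming the demo accepts a video file path as its `input_source` argument (verify the exact syntax with `./plumerai_demo --help`):

```shell
# my_recording.mp4 is a placeholder path to your own video file
./plumerai_demo my_recording.mp4
```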
Step 5 (optional): Running with an RTSP stream

If you run into issues with this step, visit the troubleshooting page.
Optionally, the Plumerai People Detection demo can be used with live camera input coming from a camera that streams its video using RTSP. In this case, determine the RTSP URL of your camera, open a terminal window, navigate to the folder where the package was unzipped using `cd`, and run the demo with the RTSP stream as input.