Running the Plumerai People Detection demo

Installing and running the Plumerai People Detection demo can be done in four simple steps (0 through 3), with two optional extra steps for running on video files and RTSP streams.

Step 0: Understand your system

In case of issues with this step, visit the troubleshooting page for this step.

The Plumerai People Detection demo works on Linux and on macOS. Depending on your operating system, some additional details might be required. Make a note of the following information, which might be needed in the next steps.

On Linux, open a terminal and run the uname -a command. The People Detection demo will only work if the output includes x86_64 (i.e. a 64-bit Intel/AMD machine) or aarch64 (i.e. a 64-bit Arm machine). In all other cases, please contact Plumerai and include the output of the uname -a command.
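The architecture check above can also be scripted. This is a minimal sketch around uname -m (the machine field of uname -a); the function name supported_arch is ours, not part of the demo:

```shell
# Report whether this machine's architecture is supported by the demo.
supported_arch() {  # $1 = output of `uname -m`
  case "$1" in
    x86_64|aarch64) echo supported ;;
    *)              echo unsupported ;;
  esac
}
supported_arch "$(uname -m)"
```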

The demo application uses the open-source GStreamer software, which supports different camera stacks. By default on Linux we will use the Video4Linux / V4L camera stack through GStreamer. There is one exception: if you use the PiCam camera (e.g. on a Raspberry Pi) we will use the libcamera camera stack instead.

On Apple's macOS, the People Detection demo works with both the older Intel x86-64 models and the newer Apple Silicon arm64 models. For the older Intel-based models, follow the instructions in the next steps under 'macOS x86-64'. For M1/M2 or newer models, follow the instructions under 'macOS arm64'.

Step 1: Install GStreamer

In case of issues with this step, visit the troubleshooting page for this step.

The demo application uses the open-source GStreamer software. This software is not developed by Plumerai, but it is a requirement to run the Plumerai People Detection demo. It can be installed as follows, depending on the system you are using (see step 0 above).

On Linux with the default Video4Linux / V4L camera stack, run the following command in a terminal window:

sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
  libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base \
  gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
  gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-x \
  gstreamer1.0-gl gstreamer1.0-plugins-base-apps v4l-utils

On Linux with a PiCam (libcamera camera stack), run the following command in a terminal window:

sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
  libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base \
  gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
  gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-x \
  gstreamer1.0-gl gstreamer1.0-plugins-base-apps
Next we need to install the libcamera package with its GStreamer plugin. This has to be done from source, following these instructions. In essence this means running the following commands (we recommend becoming root first):
apt install libyaml-dev python3 python3-pip python3-yaml python3-ply \
  python3-jinja2 ninja-build git build-essential libgnutls28-dev libssl-dev openssl
pip3 install meson
git clone https://git.libcamera.org/libcamera/libcamera.git
cd libcamera
git checkout 668a5e674aed65b8982b449b4bed58ff7e3e1413  # corresponds to v0.1.0
meson setup build
ninja -C build install
The above script checks out a particular commit, which corresponds to the latest release at the time of testing by Plumerai. Newer versions might work as well, but have not been tested. Older versions might also work, but for the PiCam v3 at least version 0.1.0 is required.
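To sanity-check the build, you can ask GStreamer whether the libcamerasrc element is now registered. A small sketch using gst-inspect-1.0 (which ships with gstreamer1.0-tools); the helper function check_element is ours:

```shell
# Print OK if the given GStreamer element is registered, MISSING otherwise.
check_element() {  # $1 = element name, e.g. libcamerasrc
  if command -v gst-inspect-1.0 >/dev/null 2>&1 \
      && gst-inspect-1.0 "$1" >/dev/null 2>&1; then
    echo OK
  else
    echo MISSING
  fi
}
check_element libcamerasrc
```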

For other Linux distributions, we refer to the official GStreamer documentation. Make sure to install both the base GStreamer software and all the plugin sets: good, bad, and ugly.

On macOS, install the two universal packages listed on the official download page:
1. The runtime package.
2. The development package.

To test your installation, you can run the following GStreamer test command in a terminal window:

gst-launch-1.0 videotestsrc ! autovideosink

This should display a test video source in a small window: a few different colours and a moving black/white area in the bottom-right.

Step 2: Install the demo

In case of issues with this step, visit the troubleshooting page for this step.

Now it is time to install the Plumerai People Detection demo itself. First unpack the contents of the plumerai_people_detection_demo.zip to a folder of your own choice. Then, follow the instructions specific to your system:

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_x86_64_v4l/
chmod +x plumerai_demo
This ensures plumerai_demo can be executed.

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_aarch64/
chmod +x plumerai_demo
This ensures plumerai_demo can be executed.

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_x86_64/
chmod +x plumerai_demo
xattr -d com.apple.quarantine plumerai_demo
xattr -d com.apple.quarantine libplumerai_gstreamer_plugin.dylib
xattr -d com.apple.quarantine libosxvideo.dylib
This ensures these files have the correct permissions and removes the quarantine attribute that macOS places on downloaded files.

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_arm64/
chmod +x plumerai_demo
xattr -d com.apple.quarantine plumerai_demo
xattr -d com.apple.quarantine libplumerai_gstreamer_plugin.dylib
xattr -d com.apple.quarantine libosxvideo.dylib
This ensures these files have the correct permissions and removes the quarantine attribute that macOS places on downloaded files.

Step 3: Running with a camera

The Plumerai People Detection software is designed to be used with live camera data, and this section covers that use case. However, the demo can also be used with video files from disk (step 4) or an RTSP stream (step 5); in those cases, step 3 can be skipped.

To run the demo, we first need to determine the camera device, input format, and resolution.

Step 3A: Selecting the camera

In case of issues with this step, visit the troubleshooting page for this step.

Some systems might have multiple cameras attached. The demo therefore requires the user to select a camera to use. You can run the gst-device-monitor-1.0 Video command in a terminal to see which devices you have attached. The Plumerai demo application has a command-line argument input_source, which can be determined as follows:

On Linux with the V4L camera stack, inspect the output of gst-device-monitor-1.0 Video to find the name of the camera under device.path. The first camera is typically at /dev/video0, and subsequent cameras at /dev/videoN where N is 1, 2 or higher. If e.g. /dev/video1 does not work, try /dev/video2, since sometimes an index is skipped. This name should be supplied to the input_source argument in step 3C.
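The device-node scan described above can be sketched as a small shell loop (this assumes the V4L stack; the list_cameras helper is ours, and v4l2-ctl --list-devices from the v4l-utils package gives richer output):

```shell
# List candidate V4L capture device nodes. If no camera is attached the
# glob matches nothing and the guard skips the iteration entirely.
list_cameras() {  # $1 = directory to scan, normally /dev
  for dev in "$1"/video*; do
    [ -e "$dev" ] && echo "$dev"
  done
  return 0
}
list_cameras /dev
```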

On Linux with a PiCam, the demo currently does not support camera selection: it assumes there is only one PiCam attached. Use picam as the input_source argument in step 3C.

On macOS, the demo simply requires an integer index for the camera ID as the input_source argument in step 3C. So for the first camera this will be simply 0, and subsequent cameras are 1, 2, etc. Note that there might also be virtual cameras set up, so the first actual camera might not have ID 0. Use gst-device-monitor-1.0 Video to find the index of the camera to use.

To test your camera selection, you can run the following GStreamer test command in a terminal window:

On Linux with the V4L camera stack:

gst-launch-1.0 v4l2src device=<YOUR_CAMERA> ! glimagesink

Here, <YOUR_CAMERA> needs to be changed to e.g. /dev/video0 for the first camera.

On Linux with a PiCam:

gst-launch-1.0 libcamerasrc ! glimagesink

On macOS:

gst-launch-1.0 avfvideosrc device-index=<YOUR_CAMERA_ID> ! glimagesink

Here, <YOUR_CAMERA_ID> needs to be changed to e.g. 0 for the first camera.

Step 3B: Selecting the video input

In case of issues with this step, visit the troubleshooting page for this step.

Once the camera is chosen, the input format and resolution (width and height) need to be chosen. Again, running the gst-device-monitor-1.0 Video command in a terminal can provide this information. For example, the command might output:

 name  : Integrated_Webcam_HD: Integrate
 class : Video/Source
 caps  : image/jpeg, width=1280, height=720, framerate=30/1
         image/jpeg, width=960, height=540, framerate=30/1
         image/jpeg, width=640, height=480, framerate=30/1
         video/x-raw, format=YUY2, width=640, height=480, framerate=30/1
         video/x-raw, format=YUY2, width=320, height=240, framerate=30/1

In this example, two camera input formats are supported (compressed JPEG and raw YUY2), as well as several input resolutions, of which 1280x720 in JPEG mode is the highest.
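If the caps list is long, a quick pipeline over saved gst-device-monitor-1.0 output can pull out just the supported resolutions. A sketch, using the sample caps from the example above:

```shell
# Extract the unique width/height pairs from gst-device-monitor-1.0 caps
# output (here stored in a variable for illustration).
caps='image/jpeg, width=1280, height=720, framerate=30/1
image/jpeg, width=640, height=480, framerate=30/1
video/x-raw, format=YUY2, width=640, height=480, framerate=30/1'
printf '%s\n' "$caps" | grep -o 'width=[0-9]*, height=[0-9]*' | sort -u
```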

The higher the input resolution, the better the detection results can become. However, note that this might slow down the entire demo application, because the camera capture and video-displaying might take up more resources. The frame rate displayed on screen is purely for the Plumerai People Detection algorithm itself, and does not count camera capture or displaying of the results.

The --camera_input_format argument in step 3C can be set to one of the supported formats according to gst-device-monitor-1.0 Video. However, there are system-specific restrictions:

On Linux with the V4L camera stack, the --camera_input_format argument can be set to YUY2 or JPEG.

On Linux with a PiCam, the --camera_input_format argument has to be set to YUY2.

On macOS, the --camera_input_format argument has to be set to YUY2.

Step 3C: Running the demo

In case of issues with this step, visit the troubleshooting page for this step.

After you have made a note of the camera device, video format, and width and height from the previous steps, you can run the demo as follows:

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_x86_64_v4l/
./plumerai_demo <input_source> <video_width> <video_height> --camera_input_format <video_format>
For example:
./plumerai_demo /dev/video0 1280 720 --camera_input_format JPEG

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_aarch64/
./plumerai_demo <input_source> <video_width> <video_height> --camera_input_format <video_format>
For example:
./plumerai_demo /dev/video0 1280 720 --camera_input_format JPEG

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_aarch64/
./plumerai_demo picam <video_width> <video_height> --camera_input_format YUY2
For example with a PiCam v2:
./plumerai_demo picam 800 600 --camera_input_format YUY2
For a PiCam v3, 1280x720 might be a better choice of resolution.

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_x86_64/
./plumerai_demo <input_source_id> <video_width> <video_height>
For example:
./plumerai_demo 0 1280 720

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_arm64/
./plumerai_demo <input_source_id> <video_width> <video_height>
For example:
./plumerai_demo 0 1280 720

The demo supports additional optional arguments and has a built-in 'help' functionality. To see all options and documentation, run the demo binary as ./plumerai_demo --help.
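As a convenience, the per-platform directory name used in the steps above can be derived from uname. This is a hypothetical helper (the directory names are taken from the steps above; the demo_dir function itself is not part of the demo package):

```shell
# Map OS and architecture to the matching demo directory name.
demo_dir() {  # $1 = output of `uname -s`, $2 = output of `uname -m`
  case "$1-$2" in
    Linux-x86_64)  echo linux_x86_64_v4l ;;
    Linux-aarch64) echo linux_aarch64 ;;
    Darwin-x86_64) echo macos_x86_64 ;;
    Darwin-arm64)  echo macos_arm64 ;;
    *)             echo unknown ;;
  esac
}
demo_dir "$(uname -s)" "$(uname -m)"
```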

Step 4 (optional): Running with a video file

In case of issues with this step, visit the troubleshooting page for this step.

Optionally, the Plumerai People Detection demo can be used with a pre-recorded video file instead of live camera input. In this case, locate your video file and run the demo as follows:

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_x86_64_v4l/
./plumerai_demo /path/to/video_file.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_aarch64/
./plumerai_demo /path/to/video_file.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_x86_64/
./plumerai_demo /path/to/video_file.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_arm64/
./plumerai_demo /path/to/video_file.mp4 <video_width> <video_height>

For example:

./plumerai_demo ~/Videos/test_video.mp4 1920 1080

Step 5 (optional): Running with an RTSP stream

In case of issues with this step, visit the troubleshooting page for this step.

Optionally, the Plumerai People Detection demo can be used with live camera input coming from a camera that streams its video over RTSP. In this case, determine the RTSP stream URL and run the demo as follows:

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_x86_64_v4l/
./plumerai_demo rtsp://stream_url.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/linux_aarch64/
./plumerai_demo rtsp://stream_url.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_x86_64/
./plumerai_demo rtsp://stream_url.mp4 <video_width> <video_height>

Open a terminal window, navigate to the folder where the package was unzipped using cd, and then run:

cd plumerai_people_detection/demo/macos_arm64/
./plumerai_demo rtsp://stream_url.mp4 <video_width> <video_height>

For example:

./plumerai_demo rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 240 160
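Before launching, it can help to verify that the argument really starts with the rtsp:// scheme, since the examples above distinguish RTSP streams from file paths by that prefix (an assumption on our part). A small hypothetical check; the is_rtsp_url function is ours:

```shell
# Return success only if the argument starts with the rtsp:// scheme.
is_rtsp_url() {
  case "$1" in
    rtsp://*) return 0 ;;
    *)        return 1 ;;
  esac
}
is_rtsp_url "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4" \
  && echo "looks like an RTSP URL"
```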