
Plumerai Video Intelligence JNI

This document describes the Java Native Interface (JNI) API for the Plumerai Video Intelligence software for videos.

Compilation

When running with the JNI, make sure to point the Java library path to the directory where the Plumerai Video Intelligence shared object can be found, e.g. java -Djava.library.path=path/to/plumerai/. Furthermore, be sure to include both Plumerai Java source files in your project: PlumeraiBoxPrediction.java and PlumeraiVideoIntelligence.java.
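
For example, compiling and running an application class (here a hypothetical VideoApp.java; adjust paths and names to your project) could look as follows:

javac PlumeraiBoxPrediction.java PlumeraiVideoIntelligence.java VideoApp.java
java -Djava.library.path=path/to/plumerai/ VideoApp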

Android

If you are using the JNI API from Android, simply place the libplumeraivideointelligence.so file in the jniLibs folder for your target, e.g. in app/src/main/jniLibs/arm64-v8a, and proceed as normal.
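
For example, for an arm64-v8a target the resulting project layout would contain:

app/src/main/jniLibs/
  arm64-v8a/
    libplumeraivideointelligence.so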

If you encounter errors regarding a missing libc++_shared.so file, then you have two options to solve it:

  1. The file libc++_shared.so is provided by the Android NDK, so you can simply copy that file from the NDK to the folder where you also put libplumeraivideointelligence.so and it will be loaded automatically. See also this suggestion. For example, this might be done as follows: cp ~/Android/ndk/26.1.10909125/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/aarch64-linux-android/libc++_shared.so ~/AndroidTestApp/app/src/main/jniLibs/arm64-v8a/ (modify as needed for your system and project).
  2. If you don't compile any C++ code yet, first add a dummy C++ source file to your Android project. Then, in your Gradle file, add arguments '-DANDROID_STL=c++_shared' and cppFlags '' to the android { defaultConfig { externalNativeBuild { cmake { section (you might have to create the externalNativeBuild and cmake sections yourself), as sketched below. That should trigger Gradle to add libc++_shared.so to your APK file. For more information about this solution, see this thread or this stackoverflow issue.
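
For option 2, the nested Gradle sections described above might look roughly as follows in the module-level build.gradle (Groovy DSL; a sketch only, adapt to your project):

android {
  defaultConfig {
    externalNativeBuild {
      cmake {
        arguments '-DANDROID_STL=c++_shared'
        cppFlags ''
      }
    }
  }
}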

The API

The JNI API consists of two Java source files which are self-documented. It is simple enough: there is a constructor that needs to be run once, and a processFrame method that must be called for every input frame. An example is given below.

PlumeraiVideoIntelligence

public PlumeraiVideoIntelligence(int height, int width);

Initializes a new video intelligence object.

This needs to be called only once at the start of the application.

Arguments

  • height int: The height of the input image in pixels.
  • width int: The width of the input image in pixels.

Returns

Nothing.

deletePlumeraiVideoIntelligence

public synchronized void deletePlumeraiVideoIntelligence();

Destructor; it should be called manually at the end to clean up.

Arguments

None.

Returns

Nothing.
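
Since the destructor is not called automatically, one possible pattern (a minimal sketch, not required by the API) is to wrap usage in a try/finally block so the clean-up always runs:

PlumeraiVideoIntelligence pvi = new PlumeraiVideoIntelligence(1200, 1600);
try {
  // ... call pvi.processFrame(...) for every frame ...
} finally {
  pvi.deletePlumeraiVideoIntelligence();
}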

processFrame

public int processFrame(ByteBuffer image_data, PlumeraiBoxPrediction[] results,
                        int max_results, float delta_t);

Process a single frame from a video sequence.

This version supports RGB888 input. Note that the algorithm comes with a built-in threshold (e.g. 0.6; this differs per model): bounding boxes with confidences lower than that value won't be reported by this function at all.

Arguments

  • image_data ByteBuffer: A ByteBuffer object with 3x8-bit RGB image data (1st byte red, 3rd blue) of size height * width * 3. The ByteBuffer needs to be allocated with allocateDirect.
  • results PlumeraiBoxPrediction[]: The resulting bounding-boxes found in the frame, should be preallocated by the user to size max_results.
  • max_results int: The maximum number of bounding-boxes that will be returned.
  • delta_t float: The time in seconds between this and the previous video frame (1/fps). If set to 0, the system clock will be used to compute this value.

Returns

int: an error code, which can be any of the following:

  • SUCCESS = 0: Everything was OK.
  • INTERNAL_ERROR = -1: Should not occur, contact Plumerai if this happens.
  • INVALID_DELTA_T = -2: The delta_t parameter should be >= 0.
  • INVALID_BUFFER_TYPE = -100: The image_data buffer is not a direct ByteBuffer.
  • INVALID_BUFFER_SIZE = -101: The image_data buffer is not the correct size.
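
As an illustration of the image_data requirements above, a raw RGB888 frame already held in a plain byte[] (a hypothetical frame_bytes of length height * width * 3) could be copied into a direct ByteBuffer before each call. A minimal sketch, assuming import java.nio.ByteBuffer:

ByteBuffer image_data = ByteBuffer.allocateDirect(height * width * 3);
image_data.put(frame_bytes);  // copy the RGB888 pixel data into the buffer
image_data.rewind();          // reset the position before passing it to processFrame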

PlumeraiBoxPrediction

public class PlumeraiBoxPrediction {
  public float y_min; // top coordinate between 0 and 1 in height dimension
  public float x_min; // left coordinate between 0 and 1 in width dimension
  public float y_max; // bottom coordinate between 0 and 1 in height dimension
  public float x_max; // right coordinate between 0 and 1 in width dimension
  public float confidence; // between 0 and 1, higher means more confident
  public int id; // the tracked identifier of this box
  public int class_id; // the class of the detected object, see below

  // Allowed values for the `class_id` variable above:
  public static final int DETECTION_CLASS_UNKNOWN = 0;
  public static final int DETECTION_CLASS_PERSON = 1;
  public static final int DETECTION_CLASS_HEAD = 2;
  public static final int DETECTION_CLASS_FACE = 3;
}

A structure representing a single resulting bounding box. Coordinates are between 0 and 1, with the origin at the upper left corner. Confidence values lie between 0 and 1. Note that the algorithm comes with a built-in threshold (e.g. 0.6; this differs per model): bounding boxes with confidences lower than that value won't be produced at all by the Plumerai Video Intelligence functions.
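
Since the coordinates are normalized, converting a prediction back to pixel coordinates is a simple multiplication by the input dimensions. A sketch, using a prediction r and the 1600x1200 input from the example below:

int left   = Math.round(r.x_min * 1600);  // width in pixels
int top    = Math.round(r.y_min * 1200);  // height in pixels
int right  = Math.round(r.x_max * 1600);
int bottom = Math.round(r.y_max * 1200);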

Example usage

Below is an example of using the JNI API described above.

import com.plumerai.box_prediction.PlumeraiBoxPrediction;
import com.plumerai.video_intelligence.PlumeraiVideoIntelligence;
import java.nio.ByteBuffer;

public class PlumeraiVideoIntelligenceExample {

  public static void main(String[] args) {

    // Initialize the Plumerai Video Intelligence algorithm with 1600x1200 input
    PlumeraiVideoIntelligence pvi = new PlumeraiVideoIntelligence(1200, 1600);
    final int max_results = 10;

    // Allocate storage for the input data, e.g. a camera frame
    ByteBuffer image_data = ByteBuffer.allocateDirect(1600 * 1200 * 3);

    // Loop over frames in a video stream
    while (true) {

      // Normally we would obtain the next frame here, e.g. from the camera
      image_data = ...

      // Prepare the results structure
      PlumeraiBoxPrediction[] results = new PlumeraiBoxPrediction[max_results];
      for (int i = 0; i < max_results; ++i) {
        results[i] = new PlumeraiBoxPrediction();
        results[i].id = -1; // to indicate invalid results, see below
      }

      // Run the Plumerai video intelligence algorithm
      final float delta_t = 0.0f; // time between frames in seconds; 0 means use the system clock
      final int error_code = pvi.processFrame(image_data, results,
                                              max_results, delta_t);
      if (error_code != PlumeraiVideoIntelligence.SUCCESS) {
        System.err.printf("Plumerai video intelligence: error %d\n", error_code);
        break; // exit the loop so the clean-up below still runs
      }

      // Print the results
      for (int i = 0; i < max_results; ++i) {
        PlumeraiBoxPrediction r = results[i];
        if (results[i].id != -1) {
          System.out.printf(
              "Box #%d with confidence %.2f @ (x,y) -> (%.2f,%.2f)-(%.2f,%.2f)\n",
              r.id, r.confidence, r.x_min, r.y_min, r.x_max, r.y_max);
        }
      }
    }

    // Clean-up
    pvi.deletePlumeraiVideoIntelligence();
  }
}