Video ingest in Python

With recent advancements in machine learning, robotics, and autonomy, video is now an essential test artifact for modern hardware development teams.

Nominal has first-class support for video ingestion, analysis, time synchronization across sensor channels, and automated checks that signal when a video feature is out of spec.

This guide demonstrates how to upload a video file to Nominal in Python. Other guides in the Video section demonstrate basic Python video analysis and computer vision for video pre-processing prior to Nominal upload.

Connect to Nominal

Get your Nominal API token from your User settings page.

See the Quickstart for more details on connecting to Nominal from Python.

import nominal.nominal as nm

nm.set_token(
    base_url = 'https://api.gov.nominal.io/api',
    token = '* * *'  # Replace with your Access Token from
                     # https://app.gov.nominal.io/settings/user?tab=tokens
)

If you’re not sure whether your company has a Nominal tenant, please reach out to us.

Download sample video

For convenience, Nominal hosts sample test data on Hugging Face. To download the sample data for this guide, copy-paste the snippets below.

dataset_repo_id = 'nominal-io/raptor-engine-fire'
dataset_filename = 'raptor_fire_first_5_seconds.mov'

from huggingface_hub import hf_hub_download

dataset_path = hf_hub_download(
    repo_id=dataset_repo_id,
    filename=dataset_filename,
    repo_type='dataset'
)

print(f"File saved to: {dataset_path}")

Make sure to first install huggingface_hub with pip3 install huggingface_hub.

Since our dataset is a video file, we’ll rename dataset_path:

video_path = dataset_path

Display video inline

If you’re working in a Jupyter notebook, here’s a shortcut to display the video inline:

from ipywidgets import Video

Video.from_file(video_path)

Upload video to Nominal

Once uploaded to Nominal, video test artifacts can be analyzed collaboratively in no-code workflows and integrated with checks to signal off-nominal video features.

Nominal requires a video start time for video file upload. If the absolute time that the video was captured is not important, you can use an arbitrary datetime like datetime.now() or 2011-11-11 11:11:11.

Video start times are used to align playback with other time-domain data in your run. Whichever absolute start time you choose for your video (for example, 2011-11-11 11:11:11), make sure that it aligns with the start times of the other data sources in your run.

import nominal.nominal as nm
from datetime import datetime

vid = nm.upload_video(
    file = video_path,
    name = 'Raptor Fire Test',
    start = datetime.strptime('2011-11-11 11:11:11', '%Y-%m-%d %H:%M:%S')
)
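
As a quick illustration of start-time alignment, the sketch below compares the video start against a hypothetical telemetry_start pulled from another data source in the same run; the variable names are assumptions for illustration only.

from datetime import datetime

# Hypothetical start time of another data source in the same run
telemetry_start = datetime.strptime('2011-11-11 11:11:11', '%Y-%m-%d %H:%M:%S')
video_start = datetime.strptime('2011-11-11 11:11:11', '%Y-%m-%d %H:%M:%S')

# Playback only lines up if the two start times agree
assert video_start == telemetry_start, "Video start does not match the run's other data sources"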

Add a Video to a Run

Once you’ve saved a Video to the Nominal platform, it’s easy to add the Video to a Run, Nominal’s container for multimodal datasets that belong to the same test run and time domain.

In Nominal, Runs are containers of multimodal test data - including Datasets, Videos, Logs, and database connections.

To see your organization’s latest Runs, head over to the Runs page.

Create an empty Run

The code below will create an empty Run container. Any type of data that Nominal supports (Video, CSV files, database connections, etc.) can be attached to this Run container, as long as it has an overlapping time domain.

First, we’ll use OpenCV to get the video length in seconds. Nominal Runs expect both a start and an end timestamp. In the example above, we used 2011-11-11 11:11:11 as an arbitrary start time; to get the Run end time, we’ll simply add the video duration (in seconds) to this start time. If you haven’t already, install OpenCV for Python with pip install opencv-python.

import cv2
from datetime import datetime, timedelta

video = cv2.VideoCapture(video_path)

# Get the frames per second and total frame count
fps = video.get(cv2.CAP_PROP_FPS)
frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)

# Calculate the duration
duration_seconds = frame_count / fps
print(f"Duration: {duration_seconds} seconds")

video.release()

start_time = datetime.strptime('2011-11-11 11:11:11', '%Y-%m-%d %H:%M:%S')
end_time = start_time + timedelta(seconds=duration_seconds)

Now that we have the Run start and end times as variables, we create our empty Run container:

import nominal.nominal as nm

engine_fire_run = nm.create_run(
    name = 'Engine Fire Run',
    start = start_time,
    end = end_time,
    description = 'Run for Raptor engine fire.',
)

engine_fire_run

Attach Nominal Video to Run

We’ll use add_dataset() to add the video file to the Run:

engine_fire_run.add_dataset(
    dataset = vid,
    ref_name = "engine fire video",
)

Ref names (reference names) are a namespace for data sources that share common channels, but do not necessarily belong to the same Run. They allow data sources with similar schema to be referenced as a group. For example, data sources with the same ref name can share Workbook templates and Checklists.
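
For example, a later engine fire test could attach its video to a second Run under the same ref name, so both Runs can share Workbook templates and Checklists. The snippet below is a minimal sketch of this pattern; the second Run, its timestamps, and the reused video object are hypothetical, and it only reuses calls already shown in this guide.

# Hypothetical follow-up test run (adjust names and times to your data)
second_run = nm.create_run(
    name = 'Engine Fire Run 2',
    start = start_time,
    end = end_time,
    description = 'Follow-up Raptor engine fire.',
)

# The same ref name lets both Runs' videos share Workbook templates and Checklists
second_run.add_dataset(
    dataset = vid,  # in practice, a different video uploaded for this run
    ref_name = "engine fire video",
)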

Retrieve a video

Retrieve a video with its RID (resource ID), which can be copied for any video from the Videos page.

vid = nm.get_video('ri.video.cerulean-staging.video.d6c6e12c-05b3-4bb0-9f45-97609b7c9da2')

Archive a video

Archiving a video prevents it from displaying on the Videos page.

vid.archive()

To unarchive a video:

vid.unarchive()

Now the video will display again on the Videos page.

Appendix

Inspect video metadata with OpenCV

The free OpenCV Python library is invaluable for video inspection and low-level video editing.

Below is a simple script that demonstrates how to capture basic video file properties such as video duration, frames per second, frame size, and total frame count.

You can install OpenCV for Python with pip install opencv-python.

import cv2

# Load the video
# video_path = 'raptor_fire_first_5_seconds.mp4'
# ☝️ Replace with your video_path or use the example video_path from above
cap = cv2.VideoCapture(video_path)

# Get video properties
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # Width of the video frames
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # Height of the video frames
fps = cap.get(cv2.CAP_PROP_FPS)                         # Frames per second (fps)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))    # Total number of frames

# Calculate the duration of the video in seconds
duration = frame_count / fps

# Print video properties
print(f"Video Size: {frame_width}x{frame_height} (width x height)")
print(f"FPS: {fps}")
print(f"Total Frames: {frame_count}")
print(f"Duration: {duration:.2f} seconds")

# Release the video capture object
cap.release()

Example output:

Video Size: 640x360 (width x height)
FPS: 30.0
Total Frames: 150
Duration: 5.00 seconds

If you’re interested in more advanced video processing in Python, check out the object identification guide.