How to write a custom V4L2 camera driver for Jetson Orin
Writing a V4L2 camera driver for Jetson Orin is a real project: kernel module, V4L2 subdev, sensor init sequence, and DTS integration — each a potential blocker on its own. Most engineers hit the same four or five failure points on the first attempt. This post covers the complete driver structure and what to do when it stops working.
Key Insights
- Jetson uses the tegra-camera-platform framework, not raw V4L2 subdev ops — the DTS structure and driver ops are Jetson-specific
- The sensor mode table in the DTS is as important as the C driver code — wrong timing values cause subtle streaming failures
- First goal: get any frame through the pipeline, then iterate on format, controls, and mode accuracy
- You need the sensor’s full register map including init sequences and streaming enable registers — public datasheets are rarely sufficient
- Test with v4l2-ctl before Argus to separate driver issues from Argus integration issues
Understanding the Jetson camera driver stack
Before writing a single line of code, it helps to understand what you are plugging into.
A Jetson camera driver sits at the bottom of a chain: your driver talks to the sensor via I2C, the sensor sends MIPI CSI-2 data to the NVCSI, the NVCSI feeds the VI (Video Input) capture engine, and V4L2 / Argus sits above the VI.
The tegra-camera-platform framework coordinates all of this. Your driver implements a set of ops that the framework calls during probe, stream start/stop, and mode changes. You do not directly configure NVCSI or VI — the framework does that based on your DTS mode table.
Your sensor driver (imx_custom.c)
↓ implements tegra_cam_platform_sensor_ops
tegra-camera-platform framework
↓ configures
NVCSI → VI → V4L2 / Argus
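Once the driver probes and the DTS binds, you can inspect the graph the framework built before writing any application code. A quick sanity check (media-ctl ships with v4l-utils; the /dev/media0 node number is a board-specific assumption):

```shell
# Show the media graph (sensor subdev -> NVCSI -> VI) the framework created
media-ctl -p -d /dev/media0

# List the resulting video and subdev nodes
v4l2-ctl --list-devices
```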
The driver skeleton
Start with the minimal driver structure:
#include <media/camera_common.h>
#include <linux/module.h>
#include <linux/i2c.h>
static int imx_custom_power_on(struct camera_common_data *s_data)
{
    /* 1. Enable VDD, VDDIO, VANA rails in the order the datasheet requires */
    /* 2. Deassert XCLR (reset) GPIO */
    /* 3. Wait for the sensor power-on settle time per datasheet */
    return 0;
}

static int imx_custom_set_mode(struct v4l2_subdev *sd, u32 val)
{
    struct camera_common_data *s_data = to_camera_common_data(sd->dev);

    /* Write the register table for mode 'val' */
    return imx_custom_write_table(s_data->client,
                                  imx_custom_mode_table[val]);
}

static int imx_custom_s_stream(struct v4l2_subdev *sd, int enable)
{
    struct camera_common_data *s_data = to_camera_common_data(sd->dev);
    u8 val = enable ? 0x01 : 0x00;

    return imx_custom_write_reg(s_data->client,
                                IMX_CUSTOM_STREAM_ENABLE_REG, val);
}

static struct camera_common_sensor_ops imx_custom_sensor_ops = {
    .numfmts          = ARRAY_SIZE(imx_custom_fmts),
    .fmts             = imx_custom_fmts,
    .power_on         = imx_custom_power_on,
    .power_off        = imx_custom_power_off,
    .s_stream         = imx_custom_s_stream,
    .set_mode         = imx_custom_set_mode,
    .g_frame_interval = imx_custom_g_frame_interval,
    .s_frame_interval = imx_custom_s_frame_interval,
};
The register write function is typically a straightforward I2C transfer. Use the camera_common infrastructure (camera_common_i2c_write) rather than raw i2c_transfer — it handles retry logic that sensor initialization sequences often need.
The mode table
The mode table is the most time-consuming part to get right. Each mode defines the sensor’s MIPI output timing for a given resolution and frame rate. Wrong values cause uncorr_err errors from the NVCSI, or frame rates that come out slightly off from the target.
The critical values — get these from the sensor datasheet, not by guessing:
mode0 { /* 1920x1080 @ 30fps */
    mclk_khz = "24000";               /* Input clock frequency */
    num_lanes = "4";                  /* MIPI data lanes */
    tegra_sinterface = "serial_a";    /* NVCSI interface */
    vc_id = "0";                      /* Virtual channel */
    discontinuous_clk = "no";
    dpcm_enable = "false";
    cil_settletime = "0";
    active_w = "1920";                /* Active pixel width */
    active_h = "1080";                /* Active pixel height */
    mode_type = "bayer";
    pixel_phase = "rggb";             /* Bayer pattern — check datasheet */
    csi_pixel_bit_depth = "10";       /* RAW10, RAW12, etc. */
    line_length = "2200";             /* Total line length in pixels */
    inherent_gain = "1";
    mclk_multiplier = "20.0";         /* PLL multiplier from MCLK */
    pix_clk_hz = "480000000";         /* = line_length × total rows × fps */
    min_gain_val = "1";
    max_gain_val = "16";
    min_framerate = "1.0";
    max_framerate = "30.0";
    min_exp_time = "28";              /* In microseconds */
    max_exp_time = "33000";
    embedded_metadata_height = "0";
};
The pix_clk_hz must be consistent with line_length, frame height, and frame rate: pix_clk_hz = line_length × (active_h + vblank) × framerate. Get this from the sensor PLL register table in the datasheet.
Sensor register tables
Sensor init sequences are typically long arrays of (address, value) pairs. Structure them as:
static struct reg_8 imx_custom_mode_1920x1080_30fps[] = {
    /* Streaming disable */
    {0x0100, 0x00},
    /* PLL settings for 24MHz MCLK, 4-lane, 960Mbps */
    {0x0301, 0x05},
    {0x0303, 0x02},
    /* ... many more registers ... */
    /* Streaming enable */
    {0x0100, 0x01},
    {IMX_TABLE_END, 0x00},
};
The table ends with a sentinel value (IMX_TABLE_END) that the write function uses to stop. Write a simple loop that processes the table entry by entry:
static int imx_custom_write_table(struct i2c_client *client,
                                  const struct reg_8 *table)
{
    int err;

    while (table->addr != IMX_TABLE_END) {
        err = camera_common_i2c_write(client, table->addr, table->val);
        if (err)
            return err;
        table++;
    }
    return 0;
}
Testing progression
Do not try to get a perfect frame immediately. Work through this progression:
- Driver probes. dmesg | grep imx_custom shows probe success. If probe fails, fix the DTS compatible string, I2C address, or GPIO/regulator references.
- V4L2 device appears. v4l2-ctl --list-devices shows your sensor. If absent, the VI-to-NVCSI link in the DTS is wrong.
- Format sets without error. v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=RG10 exits cleanly.
- Streaming starts. v4l2-ctl --stream-mmap --stream-count=10 captures frames without timeout. Any frame, even a corrupted one, means the pipeline is working.
- Frame is valid. Capture to file and view with a RAW image viewer. Wrong Bayer pattern? Fix pixel_phase in the mode table. Wrong resolution? Fix active_w / active_h.
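The progression above, as a command sequence. The /dev/video0 node number is a board-specific assumption — check v4l2-ctl --list-devices for yours:

```shell
# 1. Driver probed?
dmesg | grep imx_custom

# 2. V4L2 device present?
v4l2-ctl --list-devices

# 3. Format accepted?
v4l2-ctl -d /dev/video0 \
    --set-fmt-video=width=1920,height=1080,pixelformat=RG10

# 4. Frames flowing? Capture ten buffers to a file
v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=10 \
    --stream-to=frames.raw

# 5. Inspect frames.raw in a RAW viewer; fix pixel_phase or
#    active_w/active_h in the mode table if it looks wrong
```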
For debugging MIPI errors during this process, see V4L2 uncorr_err on Jetson: what it means and how to fix it. For sensor-specific issues like I2C detection without frames, see IMX sensor on Jetson: I2C detects but no frames — 5 root causes.
NVIDIA’s complete sensor driver programming guide for Jetson is in the Jetson Linux Developer Guide. Reference driver source code (IMX390, IMX219) is in the L4T kernel source at kernel/nvidia/drivers/media/i2c/.
NVIDIA Jetson Expert Support
Stuck on a Jetson bring-up?
We've debugged this failure mode before. BSP, device tree, camera pipelines, OTA — most blockers clear in the first session. No long retainers. No guessing.
Frequently Asked Questions
What is the difference between writing a V4L2 driver for Jetson vs. a standard Linux system?
On standard Linux, you implement raw V4L2 subdev ops (s_power, s_stream, get_fmt, set_fmt) directly. On Jetson, NVIDIA's tegra-camera-platform framework sits between your driver and V4L2 — you implement tegra_cam_platform_sensor_ops instead. The framework handles NVCSI configuration, Argus integration, and sensor mode negotiation. The DTS structure is also Jetson-specific, with sensor_modes tables that don't exist in mainline Linux.
How long does it take to write a custom V4L2 camera driver for Jetson?
A driver for a well-documented sensor (Sony IMX, OmniVision OV) with a published register map typically takes 3–7 days for an engineer with V4L2 experience: 1–2 days for the driver skeleton and register write functions, 1–2 days to write the mode table and get first light, and 1–3 days to implement V4L2 controls (gain, exposure) and validate with Argus. For a less-documented sensor or a sensor requiring unusual register sequences, plan 2–3 weeks.
Do I need to submit my Jetson camera driver to the mainline Linux kernel?
No. Jetson camera drivers use the tegra-camera-platform framework which is NVIDIA-specific and not in the mainline kernel. Your driver lives in the L4T kernel tree (out-of-tree or in nvidia/drivers/media/i2c/) and is built against the NVIDIA-provided L4T kernel source. Mainline submission is not required and would require a major rewrite to mainline-compatible V4L2 subdev ops.
What sensor register documentation do I need before starting a Jetson camera driver?
You need: the sensor initialization register sequence for each operating mode (resolution, frame rate), the streaming enable/disable registers, the exposure and gain control registers, and the PLL configuration registers if MCLK is different from your operating mode. For Sony IMX sensors, this documentation is available through Sony's partner portal. For OmniVision sensors, it is in the OV design guide. Publicly available datasheets rarely have the full register map.
Can I test my Jetson camera driver without Argus or nvarguscamerasrc?
Yes. Use v4l2-ctl to test directly: set the format with --set-fmt-video, then stream with --stream-mmap. You also need to set the subdev format using --set-subdev-fmt on the sensor's /dev/v4l-subdevN node. If v4l2-ctl streaming works, nvarguscamerasrc will also work (Argus uses the same underlying V4L2/NVCSI pipeline). Testing with v4l2-ctl first avoids Argus-specific issues confusing the diagnosis.
Written by
Andrés Campos — Co-Founder & CTO · ProventusNova
8 years deep in embedded systems — from underwater ROVs to edge AI. Andrés leads every technical delivery personally.
Connect on LinkedIn