
Upgrading from JetPack 5 to JetPack 6: what breaks and how to fix it

Andres Campos

Key Insights

  • JetPack 5 to 6 is a full platform jump: Ubuntu 20.04 → 22.04, CUDA 11.4 → 12.6, cuDNN 8 → 9, TensorRT 8 → 10, kernel 5.10 → 5.15
  • OTA from JP5 to JP6 is not supported — reflash is required
  • Everything compiled against JP5 must be rebuilt: TRT engines, camera drivers, kernel modules, Python environments
  • Custom camera drivers need porting — the tegracam sensor API changed between L4T R35 and R36
  • Xavier-series modules (AGX Xavier, Xavier NX) are not supported in JetPack 6 — JP5.1.x is the end of the line for Xavier

JetPack 5 to 6 is a platform upgrade, not a package update

The version bump from 5 to 6 understates what changed. The full platform delta:

Component           JetPack 5 (L4T R35)   JetPack 6 (L4T R36)
OS                  Ubuntu 20.04          Ubuntu 22.04
Kernel              Linux 5.10            Linux 5.15
CUDA                11.4                  12.6
cuDNN               8.x                   9.x
TensorRT            8.x                   10.x
Python (default)    3.8                   3.10
OpenCV (bundled)    4.5.4                 4.8.1
GStreamer           1.16                  1.22

This means OTA from JP5 to JP6 is not supported. You cannot apt upgrade your way there. Reflash is required. Teams that try OTA hit A/B slot errors or partial upgrades that leave the system unbootable.
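For reference, the reflash uses the standard L4T flashing tools from an x86 host, with the device in recovery mode. The board config and storage target below are examples for an AGX Orin devkit booting from eMMC; substitute your own carrier config and boot device:

    # Run from the unpacked JP6 Linux_for_Tegra/ directory on the host.
    # Board config and rootdev are examples -- adjust for your hardware.
    sudo ./flash.sh jetson-agx-orin-devkit mmcblk0p1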

Also worth knowing: JP6 dropped support for Xavier-series modules (AGX Xavier, Xavier NX). If you’re on Xavier, JP5.1.x is your last JetPack.

The complete break list when upgrading JetPack 5 to JetPack 6

Go through this before you start. Every item here has caused a blocked migration in the field.

1. CUDA-compiled binaries

Any binary compiled against CUDA 11.4 will not run on CUDA 12.6 without recompilation. This includes your own code and any third-party libraries you built from source. apt-installed NVIDIA libraries will update automatically, but binaries in /opt/, /usr/local/, or your home directory won’t.

Audit for CUDA-linked libraries outside the apt-managed paths:

    grep -r "libcuda" /opt /usr/local ~/.local 2>/dev/null | grep "\.so"
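For a more thorough sweep, the sketch below lists every binary or shared object under those prefixes that dynamically links a CUDA, cuDNN, or TensorRT library (assumes ldd is available and your deployments live under these paths):

    # List files that dynamically link the CUDA stack; ldd fails quietly
    # on scripts and non-ELF files, so they are skipped.
    find /opt /usr/local ~/.local -type f \( -executable -o -name '*.so*' \) 2>/dev/null |
    while read -r f; do
        ldd "$f" 2>/dev/null | grep -qE 'libcudart|libcudnn|libnvinfer' && echo "$f"
    done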

2. cuDNN 8 applications

JetPack 6 ships libcudnn.so.9. Any application that dynamically links libcudnn.so.8 will fail with “error while loading shared libraries: libcudnn.so.8: cannot open shared object file”.

Options: recompile against cuDNN 9, or install cuDNN 8 from the NVIDIA archive alongside 9 (not recommended long-term, but buys time during migration).
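To confirm which cuDNN sonames the dynamic linker can actually see, before or after a side-by-side install, query the linker cache:

    # Lists every libcudnn version registered with the dynamic linker
    ldconfig -p | grep libcudnn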

3. TensorRT engines

TensorRT serialized engines are not portable across TRT versions. Every engine built on JP5 (TRT 8.x) must be regenerated on JP6 (TRT 10.x). This is not optional — attempting to load a TRT 8 engine in TRT 10 will crash at deserialization.

Additionally, TRT 10 removed several deprecated APIs from TRT 8. If your code uses IPluginV2 directly or the legacy calibrator interfaces, it will fail to compile.
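Regeneration itself is mechanical once the ONNX files are on the target. A typical rebuild, assuming a model.onnx exported from your training framework and FP16 precision:

    # trtexec ships with JetPack under /usr/src/tensorrt/bin
    /usr/src/tensorrt/bin/trtexec \
        --onnx=model.onnx \
        --saveEngine=model_trt10.engine \
        --fp16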

4. PyTorch and TorchVision

The JP5-compatible PyTorch wheels (CUDA 11.4 builds) are incompatible with JP6’s CUDA 12.6. You need the JP6-specific wheels from NVIDIA’s PyPI or the Jetson Containers project. The version numbers don’t always match what you’d install from pip on a desktop — always use the Jetson-specific builds.
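A quick way to confirm you got a Jetson build and not a desktop one is to check whether the wheel actually sees the integrated GPU:

    # A correct JP6 wheel prints the version and True;
    # a desktop/CPU wheel prints False or fails to import.
    python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"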

5. Custom camera drivers

This is the hardest one. L4T R36 made changes to the V4L2 subdev framework and the tegracam sensor driver API. A camera driver that works perfectly on JP5 will often fail to compile on JP6, and even if it compiles, behavior differences in the capture subsystem may cause runtime failures.

Specific things that change:

  • Sensor driver registration APIs in tegracam_core
  • NVCSI configuration interfaces
  • mclk clock handling (moved into the framework in R36)

We cover this in more depth in the section below.

6. Out-of-tree kernel modules

The kernel version changed from 5.10 to 5.15. Any out-of-tree kernel module (.ko file) built for L4T R35 will fail to load on R36 with ERROR: could not insert module: Invalid module format. All kernel modules must be rebuilt against the L4T R36 kernel headers.
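You can verify the mismatch and do the rebuild from the shell. The module name below is a placeholder for your own .ko:

    # vermagic shows which kernel the module was built against
    modinfo ./mymodule.ko | grep vermagic
    uname -r    # must match the vermagic kernel release

    # Rebuild against the running R36 kernel headers (standard kbuild module)
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules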

7. Device tree changes on custom carrier boards

L4T R36 updated the reference DTS files significantly. If you have a custom carrier board DTS derived from the JP5 reference files, audit it before moving to JP6; a decompile-and-diff recipe follows the list. Known breaking changes:

  • DWC3 USB controller: ref clock became mandatory (see our DWC3 error -71 post)
  • PCIe clock references changed on some Orin variants
  • UART/SPI/I2C pinmux configurations may need revalidation
  • ODMDATA format changes for some carrier board profiles
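A practical way to run that audit, assuming you have both compiled DTBs on hand, is to decompile each with dtc and diff the results:

    # Decompile the JP5 and JP6 device trees and compare node-by-node
    dtc -I dtb -O dts -o board-jp5.dts board-jp5.dtb
    dtc -I dtb -O dts -o board-jp6.dts board-jp6.dtb
    diff -u board-jp5.dts board-jp6.dts | less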

8. GStreamer plugin compatibility

GStreamer moved from 1.16 to 1.22. Most pipelines work without changes, but some NVIDIA-specific elements (nvv4l2decoder, nvarguscamerasrc) had interface changes. Pipelines that rely on deprecated property names or pad caps negotiation behavior from 1.16 may need updating.
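A minimal post-flash sanity check is to run a short capture pipeline and confirm caps still negotiate. The resolution and framerate below are examples for a typical CSI sensor:

    # Captures 120 frames from the Argus camera stack and discards them;
    # a failure here usually points at caps or property changes in 1.22.
    gst-launch-1.0 nvarguscamerasrc num-buffers=120 ! \
        'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
        nvvidconv ! fakesink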

9. Package names and Python environment

The Ubuntu 22.04 jump changes some package names and defaults: python3-dev now targets Python 3.10, and libpython3.8-dev is no longer in the default repositories. Virtualenvs built against Python 3.8 must be recreated rather than copied over. Watch for distutils too: it is deprecated as of Python 3.10 (and removed entirely in 3.12), and setuptools behavior changed in the same window, so build scripts that import distutils directly should be migrated.
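Recreating an environment is faster than debugging a copied one. A minimal sketch, assuming your dependencies are pinned in a requirements.txt:

    # Fresh venv on the JP6 system's Python 3.10
    python3 -m venv ~/venvs/app-jp6
    source ~/venvs/app-jp6/bin/activate
    pip install --upgrade pip
    pip install -r requirements.txt    # re-resolves every wheel for 3.10/aarch64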

Camera driver porting from JetPack 5 to JetPack 6

This is the part most teams underestimate. If you have a custom CSI camera driver, plan for 2–5 days of porting work depending on how much the driver deviates from the NVIDIA sensor driver template.

The main changes in L4T R36 that break JP5 camera drivers:

tegracam API changes. The tegracam_device and tegracam_ctrl_ops structures changed. Functions that were called directly in R35 are now wrapped or renamed. Drivers built against the R35 headers will hit compile errors in tegracam_core.h.

MCLK handling. The sensor mclk clock acquisition changed. In R35, many drivers called clk_get directly. R36 moved this into the framework. Drivers that manage the mclk themselves need the clock handling rewritten.

V4L2 subdev pad config API. The 5.15-era kernel shipped with R36 replaced the struct v4l2_subdev_pad_config argument in v4l2_subdev_pad_ops callbacks with struct v4l2_subdev_state. This causes compile errors that look unrelated to camera bring-up but trace straight back to the subdev API change.

The fastest path: take the NVIDIA reference sensor driver for a similar sensor (the IMX219 or IMX477 reference drivers in the JP6 source tree), diff it against the JP5 version, and apply the same structural changes to your driver. Don’t try to patch your JP5 driver forward blindly.
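Concretely, with both source trees unpacked side by side. The paths below follow the public L4T source layout, which moves sensor drivers into the nvidia-oot tree in R36; adjust for your release:

    # Diff the IMX219 reference driver across R35 and R36 to extract the
    # structural changes, then apply the same pattern to your own driver
    diff -u r35/kernel/nvidia/drivers/media/i2c/nv_imx219.c \
            r36/nvidia-oot/drivers/media/i2c/nv_imx219.c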

For reference on the full camera bring-up process in JP6, see our CSI camera driver bring-up post.

ML stack and TensorRT migration

Rebuilding the ML stack in sequence matters. Do it in this order to avoid dependency conflicts (a verification sketch follows the list):

  1. Flash JP6 and boot clean
  2. Verify CUDA 12.6 and cuDNN 9 from the JP6 image (they come pre-installed)
  3. Install JP6-specific PyTorch wheels (from NVIDIA’s Jetson PyPI or dusty-nv/jetson-containers)
  4. Install TorchVision matching your PyTorch version (check the Jetson Containers compatibility matrix)
  5. Re-export ONNX models from your training framework
  6. Rebuild TensorRT engines on the target with trtexec or your engine builder
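Before layering wheels and engines on top, it is worth confirming what the flash actually installed. A quick check, assuming the standard JP6 apt packages and install paths:

    # Confirm the base JP6 stack before steps 3-6
    dpkg -l | grep -E 'nvidia-jetpack|cudnn|tensorrt'
    /usr/local/cuda/bin/nvcc --version | grep release      # expect 12.6
    ls /usr/lib/aarch64-linux-gnu/libnvinfer.so.*          # expect .so.10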

Do not install desktop PyTorch wheels. A plain pip install torch from PyPI pulls wheels built for desktop platforms: at best you get a CPU-only aarch64 build, at worst an x86 wheel that fails to install, and neither can use the Jetson GPU. Use dusty-nv/jetson-containers for pre-built, verified JP6-compatible PyTorch and TorchVision wheels; it is the most reliable source for the Jetson ML stack.

NVIDIA’s official JetPack 6 release notes document the full package delta and known issues for each point release.

What to audit before you start the migration

Area                   What to check                                Risk if skipped
Camera drivers         Do you have custom .c driver files?          2–5 days of porting work discovered mid-migration
TRT engines            List all .engine files in production         Runtime crashes at model load
Kernel modules         lsmod and /lib/modules/ on your JP5 system   Modules silently missing on JP6
Python environments    All virtualenvs, conda envs                  Import failures in production
Hardcoded CUDA paths   Grep for /usr/local/cuda-11                  Runtime errors in scripts
Carrier board DTS      Is your DTS derived from JP5 reference?      Boot failures after flash
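Most of these checks script reasonably well. A minimal pre-migration audit to run on the JP5 system, with the search paths as assumptions to adapt to your deployment:

    #!/bin/bash
    # Run on the JP5 device before reflashing; adjust paths to your layout.

    echo "== Serialized TRT engines (must be rebuilt on JP6) =="
    find /opt /srv /home -name '*.engine' 2>/dev/null

    echo "== Loaded out-of-tree kernel modules (must be rebuilt for R36) =="
    lsmod | awk 'NR>1 {print $1}' | while read -r m; do
        [ "$(modinfo -F intree "$m" 2>/dev/null)" = "Y" ] || echo "$m"
    done

    echo "== Hardcoded CUDA 11 paths in scripts and configs =="
    grep -rl 'cuda-11' /opt /etc /home 2>/dev/null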

Frequently Asked Questions

Can I upgrade from JetPack 5 to JetPack 6 via OTA?

No. JetPack 5 and JetPack 6 run on different base OS versions (Ubuntu 20.04 vs 22.04) and incompatible L4T versions (R35 vs R36). OTA from JP5 to JP6 is not supported. You need to reflash.

Will my JetPack 5 camera drivers work on JetPack 6?

Unlikely without porting. The V4L2 subdev API changed between L4T R35 and R36, and the tegracam sensor driver framework had breaking changes. Custom camera drivers need to be ported and recompiled against L4T R36 sources.

Why does “libcudnn.so.8 not found” appear after upgrading to JetPack 6?

JetPack 5 ships cuDNN 8.x. JetPack 6 ships cuDNN 9.x. The shared library filename changed from libcudnn.so.8 to libcudnn.so.9. Any binary compiled against cuDNN 8 needs to be recompiled, or cuDNN 8 must be installed separately from the NVIDIA archive.

Does JetPack 6 support the same TensorRT models as JetPack 5?

TensorRT engines are not portable between versions. JP5 used TensorRT 8.x; JP6 uses TensorRT 10.x. You need to re-export your ONNX models and rebuild TRT engines on the new version. Some deprecated TRT 8 APIs were removed in TRT 10.

What Python version does JetPack 6 use?

JetPack 6 defaults to Python 3.10 on Ubuntu 22.04. JetPack 5 used Python 3.8. This affects PyPI package compatibility and any virtualenvs or scripts that assume Python 3.8.


ProventusNova helps hardware startups migrate Jetson BSPs without losing weeks to dependency hell. If your JetPack 5 to 6 migration is stalled — camera drivers, TRT engines, custom BSP — book a scoping call.