Add an experimental docker image #171
Known issues when using with x86 + GPU

If you run GPU code in the container on an x86 host, it won't work. I suspect CUDA in L4T is ONLY compatible with Tegra chips. Run:

```
cp -r /usr/local/cuda/samples /tmp
cd /tmp/samples/1_Utilities/deviceQuery
make -j`nproc`
/tmp/samples/bin/aarch64/linux/release/deviceQuery
```

Then it emits the error below:

```
root@devenv:/tmp/samples/1_Utilities/deviceQuery# /tmp/samples/bin/aarch64/linux/release/deviceQuery
/tmp/samples/bin/aarch64/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
```

Next, run:

```
root@devenv:/tmp/samples/1_Utilities/deviceQuery# /tmp/samples/bin/aarch64/linux/release/deviceQuery
/tmp/samples/bin/aarch64/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

NvRmPrivGetChipIdLimited: Could not read Tegra chip id/rev
Expected on kernels without fuse support, using Tegra K1
NvRmPrivGetChipPlatform: Could not read platform information
Expected on kernels without fuse support, using silicon
libnvrm_gpu.so: NvRmGpuLibOpen failed
cudaGetDeviceCount returned 999
-> unknown error
Result = FAIL
```

But if you uncomment lines 58 to 59 of ouxt_automation/docker/docker_testbench/Dockerfile.perceptioncamtest (as of 632ca64) and run /tmp/samples/bin/aarch64/linux/release/deviceQuery in the container, the error code is 35, not 999.
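For context, a minimal sketch of how such a container might be started on the x86 host; the image name devenv and the use of the NVIDIA container runtime are assumptions for illustration, not taken from this PR:

```
# Hypothetical launch; "devenv" is a placeholder image name.
# --runtime nvidia requires nvidia-container-runtime on the host.
docker run --rm -it --runtime nvidia devenv:latest bash
```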
@hakuturu583 please ensure it can "colcon build" and "colcon test" on x86 correctly. If it works as expected, tensorrt_yolox will emit compilation errors, while the other packages can be built and tested properly.
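As a sketch of the expected x86 workflow, assuming tensorrt_yolox is the only package that fails to compile there (--packages-skip is a standard colcon option):

```
# Build and test everything except the CUDA-dependent package;
# tensorrt_yolox is expected to fail to compile on an x86 host.
colcon build --packages-skip tensorrt_yolox
colcon test --packages-skip tensorrt_yolox
```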
Can you add documentation here?
Please add documentation.
Run CPU-only tasks on any host (e.g., builds, tests), then run this in the container:

```
# colcon build --continue-on-error
```
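A possible follow-up for the test step, using standard colcon commands (the exact workflow is an assumption, not specified in this PR):

```
# Run the tests, then print a summary of any failures.
colcon test
colcon test-result --verbose
```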
Run with GPU (still a work in progress; this only works on a real Jetson host), then, for example, run this in the container.
If you use a GUI, run xhost +local:root before starting and xhost -local:root afterwards, as a non-root user.
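A minimal sketch of a GUI-capable launch on a Jetson host; the image name devenv and the exact flag set are assumptions for illustration:

```
# Allow the container's root user to talk to the local X server.
xhost +local:root

# Forward DISPLAY and the X11 socket so GUI tools in the container
# can render on the host; "devenv" is a placeholder image name.
docker run --rm -it --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  devenv:latest

# Revoke the access once you are done.
xhost -local:root
```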