I'm viewing multiple layers of dense point clouds in rviz. Rendering lags with just one layer and gets worse as more layers are activated. I have two RTX 3080s, but GPU 1 is never utilized; only GPU 0 is, usually at around 50%, while CPU load rarely exceeds 20% and RAM usage peaks at 25 GB.
Is there a way to improve performance by forcing rviz to use a specific graphics card, or to make it utilize more resources?
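Not an rviz setting, but one workaround worth trying on an NVIDIA multi-GPU system is the driver's PRIME render-offload environment variables, which let you steer a single process onto a chosen GPU. A hedged sketch follows; the provider name `NVIDIA-G1` is an assumption for this machine and should be checked against the output of `xrandr --listproviders`:

```shell
# List the render providers X knows about and note the name of the
# second GPU (provider names like NVIDIA-G0 / NVIDIA-G1 are an
# assumption -- verify locally).
xrandr --listproviders

# Ask the NVIDIA driver to render this process on the chosen provider.
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G1
export __GLX_VENDOR_LIBRARY_NAME=nvidia

# Launch rviz with those variables in its environment.
rosrun rviz rviz

# In another terminal, confirm which GPU is doing the work.
nvidia-smi
```

Note that this only moves rendering to the other card; rviz still renders on a single GPU, so it will not split the load across both.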
Your environment
- OS Version: Ubuntu 20.04
- CPU: AMD Ryzen Threadripper PRO 3995WX, 64 cores / 128 threads
- GPU: 2 x NVIDIA GeForce RTX 3080/PCIe/SSE2
- RAM: 128GB
- ROS Distro: Noetic
- RViz, Qt, OGRE, OpenGl version as printed by rviz:
[${node}.VisualizerApp::init]: rviz version 1.14.10
[${node}.VisualizerApp::init]: compiled against Qt version 5.12.8
[${node}.VisualizerApp::init]: compiled against OGRE version 1.9.0 (Ghadamon)
[/rviz_1637200062035959003.RenderSystem::forceGlVersion]: Forcing OpenGl version 0.
[/rviz_1637200062035959003.RenderWindow* rviz::RenderSystem::makeRenderWindow]: Stereo is NOT SUPPORTED
[/rviz_1637200062035959003.RenderSystem::detectGlVersion]: OpenGL device: NVIDIA GeForce RTX 3080/PCIe/SSE2
[/rviz_1637200062035959003.RenderSystem::detectGlVersion]: OpenGl version: 4.6 (GLSL 4.6).