Replies: 1 comment
I'm not super familiar, but WebGL allows you to query things like how many texture slots are available and a lot of other limits (supported extensions as well). Some of those probably correspond pretty well with higher-end cards, but I'm not sure which off the top of my head. Often all you need to do is optimize your model to get it to run well on all GPUs.
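Off the top of my head, something like this (a very rough, untested sketch; which limits actually correlate with "GTX 1060 class" performance is a guess on my part, and the `logWebGLCapabilities` helper is just a name I made up) would at least show you what the current context reports:

```ts
// Sketch: log a few WebGL limits plus the supported extensions.
// These queries only expose what the driver reports; deciding which
// thresholds mean "strong enough" is still up to you.
function logWebGLCapabilities(): void {
  const canvas = document.createElement("canvas");
  // Prefer WebGL 2, fall back to WebGL 1 if it is unavailable.
  const gl = canvas.getContext("webgl2") ?? canvas.getContext("webgl");
  if (!gl) {
    console.warn("WebGL is not available on this system.");
    return;
  }
  console.log("MAX_TEXTURE_SIZE:", gl.getParameter(gl.MAX_TEXTURE_SIZE));
  console.log("MAX_TEXTURE_IMAGE_UNITS:", gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));
  console.log("MAX_VERTEX_UNIFORM_VECTORS:", gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));
  console.log("MAX_RENDERBUFFER_SIZE:", gl.getParameter(gl.MAX_RENDERBUFFER_SIZE));
  console.log("Supported extensions:", gl.getSupportedExtensions());
}

logWebGLCapabilities();
```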
I'm trying to make a script that will help assess whether an end user's system is capable of playing various WebGL content. My projects play fine on higher-end GPUs but suffer from horrible aliasing, low FPS, and other undesirable artifacts on business systems or integrated graphics.
From testing on the systems at my disposal, I found that anything roughly equivalent to an NVIDIA GTX 1060 plays fine. On an Intel Mac I had OK results with an AMD Radeon Vega Frontier Edition, and an Apple Silicon M1 Mac mini played adequately. It also played well on an iPhone 12 Pro Max (other than not being designed for that screen size).
I am not much of a coder. Other than first determining whether the end user's system supports WebGL at all, my approach centers on accessing the "WEBGL_debug_renderer_info" extension and reading the "UNMASKED_RENDERER_WEBGL" parameter to identify the installed GPU. But I suspect this is not the best approach (because of all the different GPU names) and that it can miss a lot of qualifying cards depending on how each browser reports them.
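For reference, here is roughly what I have so far (just a sketch; the `getRendererString` name and the card names in the regex are placeholders I made up, which is exactly the part that feels fragile):

```ts
// Sketch of my current approach: read the unmasked renderer string and
// pattern-match against known GPU names. Browsers may mask or shorten this
// string, so a miss here does not necessarily mean a weak GPU.
function getRendererString(): string | null {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) return null; // WebGL not supported at all

  const debugInfo = gl.getExtension("WEBGL_debug_renderer_info");
  if (!debugInfo) return null; // extension unavailable or blocked by the browser

  return gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) as string;
}

const renderer = getRendererString();
// Example strings look like "ANGLE (NVIDIA, NVIDIA GeForce GTX 1060 ...)",
// but the exact format varies by browser, OS, and driver.
const placeholderPattern = /GTX 10[6-8]0|RTX \d{4}|Radeon.*Vega|Apple M\d/i;
const probablyStrongEnough = renderer !== null && placeholderPattern.test(renderer);
console.log(renderer, probablyStrongEnough);
```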
What would be better things to test for to tell me whether an end user's system is too weak? Again, my baseline is roughly an NVIDIA GTX 1060.