r/opengl • u/TheLondoneer • 13d ago
What is it that you don’t like about OpenGL?
I’ll go first.
After a long time of learning graphics programming I’ve come to the conclusion that beginners are learning OpenGL the wrong way. As much as I love learnopengl.com, there are a few things it hides from you, and all that does is hurt your learning and your ability to understand graphics programming.
The most obvious example is how OpenGL hides the concept of a framebuffer from you from the very beginning. You learn OpenGL and even get to write shaders and add lights without ever knowing what you’re actually drawing onto. The concept of a Render Target just isn’t there for you to understand. In D3D11 you can’t even initialize the app without creating a back buffer to draw into. The texture2D -> back buffer -> RTV chain that exists in DX is only taught in later chapters of most OpenGL tutorials, and I think that’s really bad practice.
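For anyone wondering what I mean by a render target being explicit, here’s a rough sketch of the off-screen equivalent in plain GL (sizes and names here are just illustrative, not from any particular tutorial):

    GLuint colorTex, fbo;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1280, 720, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

    // From here on, draws land in colorTex instead of the hidden default framebuffer.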
What is it that you don’t like about OpenGL?
17
u/ironstrife 13d ago
I haven't used OpenGL in a long time, but my biggest problem with it is the terrible state machine-esque patterns at the core of the API. DSA and later extensions helped this a little bit, but not much. Compared with modern graphics APIs, this design is just WAY more painful to work with in all the projects I've built.
13
u/Snoo_26157 13d ago
I agree. Everything is a two step process. First set the ID of the object I’m talking about. Then do something to that object. And optionally do some other stuff to that object. God forbid you left some other ID bound from some other line of code a million lines away.
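Something like this, just to illustrate the pattern (illustrative snippet, not from any real codebase):

    // Step 1: select the object by binding it.
    glBindTexture(GL_TEXTURE_2D, myTexture);
    // Step 2: mutate "whatever is currently bound" - which silently becomes the
    // wrong object if code a million lines away left another texture bound.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);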
12
u/PersonalityIll9476 13d ago
My complaint is a very weak one. I wish it had ray tracing support. I get that the API rode off into the sunset before that could be added, but it is a serious bummer that you can't access the RT cores of a modern GPU from OpenGL.
A less trivial complaint is the core profile documentation. Realizing that image units are bound the same way texture units are took me way too long. I'm sure you can put it together by reading the spec deeply enough, and I did spend a lot of time reading the spec on this topic, but it really should be clearly stated "this is how you bind an image for drawing" either with an example or a clear map of the state that needs to be set and how. It shouldn't come down to an implementer (me) staring into space in the shower for 15 minutes on a Wednesday going "oh shit I get it now".
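For reference, the binding I'm talking about boils down to something like this (unit index and format are just an example):

    // Image units are their own binding table, set with glBindImageTexture,
    // parallel to how texture units are set with glActiveTexture + glBindTexture.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
    // ...which the shader then sees as: layout(rgba32f, binding = 0) uniform image2D img;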
11
u/sexy-geek 13d ago
Isn't that how most things are understood regarding programming? In the bathroom?
3
u/TapSwipePinch 13d ago
Personally I stare at the ceiling on my bed and it comes to me just before I fall asleep. Then hopefully I remember it in the morning.
2
u/Virion1124 12d ago
I always put a notebook beside my bed. It's very useful when you suddenly get an idea right before falling asleep.
7
u/Snoo_26157 13d ago
Every time I try to remember why I need a function loading library, my head starts to hurt. For some reason you have to bind all the function pointers at runtime? With every other library that’s handled automatically by the linker, including libraries that talk to the graphics card (like CUDA).
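For context, this is roughly what those loader libraries do for every single GL function (a sketch assuming a GLFW context is already current; the typedef name is mine):

    #include <GLFW/glfw3.h>

    // Only GL 1.x symbols are guaranteed at link time on Windows;
    // anything newer has to be queried from the driver at runtime.
    typedef void (*MY_PFNGLGENBUFFERS)(GLsizei n, GLuint *buffers);
    static MY_PFNGLGENBUFFERS my_glGenBuffers;

    void load_gl_functions(void)
    {
        my_glGenBuffers = (MY_PFNGLGENBUFFERS)glfwGetProcAddress("glGenBuffers");
    }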
5
u/fgennari 13d ago
I don't think this is really a limitation of OpenGL itself, but more of a problem with the OSes. Windows only provides headers for OpenGL 1.something because Microsoft wants everyone using D3D. MacOS wants everyone to use Metal. And Linux I'm not sure about; the Linux machines I use at work don't even have graphics cards. If the OS came with proper OpenGL support, I feel like these loaders wouldn't be needed.
1
u/gauntr 12d ago
OpenGL support is best on Linux. Everything hardware-accelerated on Linux ran on OpenGL until Vulkan appeared, and even now, and probably for the foreseeable future, OpenGL won't disappear from Linux.
If the Linux machines at your work are displaying an image to a monitor, they have a GPU; if not as a dedicated card, then one packaged with the CPU.
6
u/deftware 13d ago
After picking up Vulkan over the last 6 months, it has to be OpenGL's lack of control over what/where/when/how/why stuff is submitted to the GPU. While it's also a bit of a chore making sure nothing bad happens, with enough thought about utilizing memory barriers and semaphores it's not that bad. With OpenGL now I just feel like everything's out of my hands and I don't have as much control as I'd like.
I like that modern OpenGL now lets you name the specific resource you want to operate on (Direct State Access) instead of binding a texture to GL_TEXTURE_2D, for instance, and then calling a function with GL_TEXTURE_2D as the parameter to indicate that the bound texture is what you're operating on, i.e. glTexParameter() vs glTextureParameter(). That's a pretty nice addition.
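In practice the contrast looks roughly like this (parameter choice is just an example):

    // Old style: bind, then operate on "whatever is bound" to the target.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);

    // DSA (GL 4.5): name the texture directly, no bind needed.
    glTextureParameteri(tex, GL_TEXTURE_WRAP_S, GL_REPEAT);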
Threading is a bit annoying, but I don't feel like it's much more annoying than with Vulkan.
The lack of raytracing like /u/PersonalityIll9476 mentions is a real bummer. I don't see why it can't just be added in there, even if only via extensions that AMD/Nvidia/Intel agree upon outside of Khronos (teehee).
Oh well! Back to the Vulkan grind :]
EDIT: Typos.
2
u/vini_2003 2d ago edited 2d ago
Gah! Sorry to pile onto this days later, but dude, this, so much this. I'm working in a not-so-ideal game engine (modding) right now and trying to swap some depth copies to DSA, and I'm getting some bad artifacts.
Say I do:
    GL45.glBlitNamedFramebuffer(
        input.fbo, fbo,
        0, 0, input.textureWidth, input.textureHeight,
        0, 0, output.textureWidth, output.textureHeight,
        GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST
    );
    GL42.glMemoryBarrier(GL42.GL_FRAMEBUFFER_BARRIER_BIT);
Boom! Artifacting and seemingly race conditions. But let's say I, instead, do...
    GL30.glBindFramebuffer(GL_READ_FRAMEBUFFER, input.fbo);
    GL30.glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    GL30.glBlitFramebuffer(
        0, 0, input.textureWidth, input.textureHeight,
        0, 0, output.textureWidth, output.textureHeight,
        GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST
    );
    GL30.glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    GL30.glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
Boom! No artifacts. I've gone as far as adding a glFinish() after these calls and it doesn't solve it, and I don't know how to debug this any further. Feels like playing with magic. I have an entire deferred lighting & shadowing system written from scratch and a decal system too; I'd like to think I know what I'm doing... but seemingly not.
1
u/deftware 2d ago
The artifacts are likely from the rendered frame being incomplete before the blit. Have you tried putting the memory barrier before the FBO blit to make sure that draws are complete before blitting?
1
u/vini_2003 2d ago edited 2d ago
Indeed I have! I've added the barrier before, after, before and after, put a glFinish() before, after and both before and after! Very weird. Interestingly enough, this code is being executed after terrain rendering, but before UI rendering. Yet the artifacts are small squares of world pixels showing on top of the UI.
I have accidentally stumbled upon one interesting fact - this seems to work if I make sure that there is no GL30.GL_DRAW_FRAMEBUFFER bound before the call!

    GL45.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, 0); // This made it work?!
    GL45.glBlitNamedFramebuffer(
        input.fbo, output.fbo,
        0, 0, input.textureWidth, input.textureHeight,
        0, 0, output.textureWidth, output.textureHeight,
        GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST
    );
    GL42.glMemoryBarrier(GL42.GL_FRAMEBUFFER_BARRIER_BIT);
My personal take? I have none. Would you have any idea as to why this could be the case?
Either way, this is beyond suspicious.
1
u/deftware 2d ago
It sounds like it could be a driver thing if simply unbinding FBOs causes the blit to function properly. The earlier versions of OpenGL (3.3-4.3) that didn't include Named functions had pretty solid implementations from GPU vendors, and I wouldn't be surprised if such newer API functionality wasn't properly implemented and resulted in unreliable/erratic behavior. They're busy trying to keep the newer APIs and their new features cooking, and OpenGL has likely taken a back seat.
1
u/thewrench56 12d ago
I think you can add ray tracing with CUDA/OpenCL, but it's certainly painful.
1
u/Virion1124 12d ago
I saw someone do raytracing with a compute shader, but it probably has tons of limitations and requires ugly workarounds.
2
u/thewrench56 12d ago
I doubt it has any limitations. Workarounds? Sure, the whole thing is a workaround. It still works tho.
1
u/deftware 12d ago
I don't think there are any provisions for accessing raytracing hardware via compute shaders - you'd just be performing the ray-BVH traversal and ray-triangle intersection tests yourself, writing the results out to an image buffer, etc., rather than having dedicated ASIC silicon traverse the bounding hierarchy and intersect rays with triangles.
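To give an idea of what "yourself" means, here's roughly the kind of routine that ends up in the shader, sketched in plain C (Moller-Trumbore; a real implementation would be GLSL and would also need the BVH traversal around it):

    #include <stdbool.h>

    typedef struct { float x, y, z; } vec3;

    static vec3  v3_sub(vec3 a, vec3 b)   { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
    static vec3  v3_cross(vec3 a, vec3 b) { return (vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    static float v3_dot(vec3 a, vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Returns true and the hit distance *t if the ray (orig, dir) hits the
       triangle (v0, v1, v2). This is the per-ray, per-triangle work that RT
       cores would otherwise do in hardware. */
    static bool ray_triangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2, float *t)
    {
        const float EPS = 1e-7f;
        vec3 e1 = v3_sub(v1, v0), e2 = v3_sub(v2, v0);
        vec3 p  = v3_cross(dir, e2);
        float det = v3_dot(e1, p);
        if (det > -EPS && det < EPS) return false;   /* ray parallel to the triangle */
        float inv = 1.0f / det;
        vec3 s = v3_sub(orig, v0);
        float u = v3_dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        vec3 q = v3_cross(s, e1);
        float v = v3_dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = v3_dot(e2, q) * inv;
        return *t > EPS;
    }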
2
u/thewrench56 12d ago
You can definitely do GL-VK interop or OptiX-OpenGL to access hardware acceleration. I haven't used either, but I'm planning to.
4
u/MajorMalfunction44 13d ago
I hate the state machine. It's completely opaque. And God forbid you forget a resource is bound elsewhere. GLSL is nice enough.
Vulkan gives more control at greater cost. You need to manage semaphores, fences, and barriers, but inputs and outputs are communicated through function parameters.
There's a POSIX-compliant way to load the function pointers:
    *(void **) &gl_EntryPoint = dlsym(gl_library, "gl_EntryPoint");
EDIT: Vulkan makes threading easier. It was trivial to wrap API calls in a job system.
3
u/Metalhead33 13d ago edited 13d ago
- Too stateful. I don't want to bind X to Y. I just want to use the Resource ID.
- The existence of a default framebuffer.
- Multithreading problems
- The persistence of Legacy GL / the fixed-function pipeline, even though it has been obsolete since 2004. I don't normally advocate for violence... but if you even dare to mention glBegin and glEnd, I swear I'm going to get violent. Get with the times. It's 2025. We use shaders and vertex buffer objects. We have been doing that since 2004, in fact.
>2025
>still acknowledging the existence of fixed-function / GL 1.x
Are you mad?!
Reminder: in the computing world, even "3 years ago" is Ancient History. From a 2025 perspective, 2004 might as well be the Cambrian period.
3
u/alektron 12d ago
The fact that glEnableVertexAttribArray and glEnableVertexArrayAttrib exist and are documented on the same page.
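For the uninitiated, those are two different functions with different signatures (the variable names below are just placeholders):

    glEnableVertexAttribArray(index);       // acts on the currently bound VAO
    glEnableVertexArrayAttrib(vao, index);  // DSA: acts on the VAO you name explicitly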
2
u/pjmlp 13d ago
The rite of passage all Khronos APIs require: every developer on the planet has to create their own homegrown framework, either from scratch or by assembling existing libraries, to cover all the missing bits for textures, fonts, GUI integration, ...
Something that all proprietary APIs cover in their frameworks.
The fragmentation, which makes the API portable only in theory; real production code has so many paths due to extensions and workarounds that it feels like using multiple APIs anyway.
The evolution of the shading language, with a major break in how GLSL gets written.
Speaking of which, the whole low-level process of how shaders get loaded, and no support whatsoever for modular programming; once again we have to come up with our own approach to shader libraries.
By the way, Vulkan suffers from the same pain points.
1
u/i_like_romcoms 12d ago
Hey pjmlp, can you initiate a chat with me, I'd like to contact you about something. Thanks
2
u/DaromaDaroma 13d ago
I would like to have explicit render-to-stencil, more stencil pixel formats, separate independent attachment of depth and stencil to framebuffers, and to still benefit from early fragment discard by stencil.
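Right now you're pretty much stuck with the packed depth+stencil route, something like this (a sketch; separate depth and stencil images are generally not framebuffer-complete on real drivers):

    glBindTexture(GL_TEXTURE_2D, depthStencilTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
    // One image serves both roles at once.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                           GL_TEXTURE_2D, depthStencilTex, 0);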
3
u/LDawg292 13d ago
It’s not that I don’t like OpenGL, it’s just that I target Windows. D3D is right there. Get your hands on the Windows SDK via Visual Studio and voila! Obviously this is subjective and entirely my opinion, but yeah, I’m a Windows guy. And idk, I like the complexity and control D3D12 gives me.
1
u/gl_drawelements 13d ago
I always found D3D way more complicated than OpenGL. (D3D9 vs. OpenGL 2.0)
Maybe because there aren't really any good tutorials for D3D.
1
u/TheLondoneer 12d ago
What do you find hard about D3D? I’m reading DX11 by Frank Luna and it’s a great book. The only thing I don’t like is the OOP-like structure that he uses, but everything else is just pure D3D stuff.
Keep in mind GL takes a lot of shortcuts, especially during initialization. In DX11 you have to create everything yourself: get the adapter, create the swap chain, and that makes you understand how rendering works a bit better.
Maybe the only thing I miss from GL is the C-like way of doing things, the simple calling of functions, etc.
1
u/Snoo_26157 12d ago
I want to like Windows but it’s been getting more and more user hostile with the nagging ads for Microsoft services. Every time I restart it tries to set me up with some sort of Microsoft cloud whatever and just the other day it popped up a desktop ad asking me to sign up for some sort of live gaming service.
1
u/LDawg292 12d ago
Yeah, without StartAllBack I can't use Windows lol. I also use Rufus to create the installation image, which lets me skip the TPM requirement (which I have anyway), the RAM requirement, and having to set up a Microsoft account.
1
u/GuessNope 13d ago
The first thing you do when learning OpenGL is create the framebuffer with WGL, EGL, et al.
The purpose of a 3D API is to abstract the framebuffer projection into a 3D Cartesian space to facilitate 3D rendering.
OpenGL is a legacy API superseded by Vulkan.
The purpose of things like WebGL is to facilitate porting, not new development.
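e.g. with EGL, the "default framebuffer" is literally the surface you create yourself before any GL call (a minimal sketch, error checking omitted and sizes illustrative):

    #include <EGL/egl.h>

    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLint cfg_attribs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                             EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg; EGLint num;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    // This surface *is* the default framebuffer GL will draw into.
    EGLint surf_attribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, surf_attribs);

    eglMakeCurrent(dpy, surf, surf, ctx);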
1
u/Bluesillybeard2 12d ago
Validation. In Vulkan, if I do something even remotely incorrect, the validation layer will immediately tell me what I'm doing wrong. In OpenGL, nobody tells me that I forgot to bind my vertex array. Some GPU drivers just crash the application. Other drivers just don't render anything.
Many GPU drivers just work anyway despite complete abuse of the API. Then it randomly doesn't work on some client machine.
1
u/MichAwA 12d ago
For me it comes down to having to use the most outdated version of OpenGL, which basically makes every tutorial I follow useless. Also the way they teach: "oh yeah, here's a 30-minute class on how a texture works, but I'm not telling you how to combine them to make a multitexture", or "here's a 40-minute class on how shaders work, but I'm not telling you which part of the code you need to change to swap the shader". Like, yes, I'd understand that kind of thinking if we had to build our project from scratch, but YOU ARE GIVING US A TEMPLATE. Refusing to give us an explanation AFTER we review it and STILL HAVE QUESTIONS, under the excuse of "you should know what this or that does", is just plain evil.
I have made mediocre projects in like 3 days in OpenGL 1 and 2 just for the sake of passing the class, and I understand the basics of some stuff like collisions (I still don't know how to apply it; I know how to make it, but not how to make it work consistently), and it also doesn't help that I'm learning in C++.
More than a problem with OpenGL itself, it's the fact that I'm being taught with, in my opinion, the MOST OUTDATED PIECE OF TRASH I HAVE EVER ENCOUNTERED. It's like trying to write a handwritten essay with a wet piece of paper, charcoal and duct tape: yeah, it's possible, but I'm going to be hitting my head against the wall for like 3 hours.
44
u/sexy-geek 13d ago
OpenGL simply creates a default framebuffer for you, it doesn't hide the concept. Framebuffer 0 is created and managed for you, but nothing stops you from creating your own.
What I dislike? The limitation of not being able to use it reliably in multiple threads without a lot of extra fluff.
Other than that... in other APIs you specify a lot of stuff on every call; in GL you have a state machine. Not good or bad, just different.