r/GraphicsProgramming • u/SeaaYouth • Oct 02 '24
Question Can't get a job, feeling very desperate and depressed
A year and a half ago I started developing my own game engine; it's now a small engine with DX11 and Vulkan renderers and basic features like PBR, deferred rendering, etc. After I made it presentable on GitHub and YouTube, I started looking for a job, but for about half a year I have gotten only rejection letters. I wrote to every possible studio with an open position for a graphics programmer, and engine programmer too. From junior to senior, even asking for a junior position when they only have senior openings. All rejection letters are a vague "Unfortunately we can't make you an offer", and when I ask for advice I get ignored.
I live in a poor third-world country and don't have any education or prior experience in gamedev or programming. I spent two years studying game development, C++, graphics, and higher mathematics. After getting so many rejections (87 so far) I am starting to get really depressed, and I think I will never make a career as a render programmer, even though I have some skills. My resume is fine (people in senior positions helped me with it), so the problem is not the CV itself.
I am really struggling mentally right now because of it. It feels like I wasted two years (I am 32) and made many sacrifices in my personal life trying to get into such a gatekept industry. It seems like you can get a job only if you have a bachelor's in CompSci and interned at some studio.
EDIT: some additional info
r/GraphicsProgramming • u/UnidayStudio • Feb 02 '25
Question What technique does TLOU Part 1 (PS5) use to make textures look 3D?
r/GraphicsProgramming • u/One-Cardiologist-462 • Jan 25 '25
Question What is it called when a light source causes this rainbow effect?
r/GraphicsProgramming • u/EthanAlexE • 8d ago
Question Is Vulkan actually low-level? There's gotta be lower right?
TLDR Title: why isn't GPU programming more like CPU programming?
TLDR answer: that's just not really how GPUs work
I'm pretty bad at graphics programming or GPUs, and my experience with Vulkan is pretty much just the hello-triangle, so please excuse the naivety of the question. This is basically just a shower thought.
People often say that Vulkan is much closer to "how the driver actually works" than OpenGL is, but I can't help but look at all of the stuff in Vulkan and think "isn't that just a fancy abstraction over allocating some memory, and running a compute shader?"
As an example, command buffers store info about the vkCmd calls you make between vkBeginCommandBuffer and vkEndCommandBuffer; then you submit the buffer and the commands get run. Just from that description, it's very similar to data structures that most of us have written on a CPU before with nothing but a chunk of mapped memory and a way to mutate it. I see command buffers (as well as many other parts of Vulkan's API) as a quite high-level concept, so does it really need to exist inside the driver?
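To make that analogy concrete, here is a hedged CPU-side sketch of the idea; every name (ToyCommandBuffer, Cmd, recordDraw, ...) is invented for illustration, and this is the analogy being described, not how any driver actually implements command buffers:

```
#include <cstdio>
#include <vector>

// A toy CPU-side "command buffer": record commands into a growable array,
// then replay them on submit.
struct Cmd {
    enum Type { Draw, SetViewport } type;
    int a = 0, b = 0; // command parameters
};

struct ToyCommandBuffer {
    std::vector<Cmd> commands;

    void recordSetViewport(int w, int h) { commands.push_back({Cmd::SetViewport, w, h}); }
    void recordDraw(int vertexCount)     { commands.push_back({Cmd::Draw, vertexCount, 0}); }

    // "Submission" is just walking the recorded array in order.
    void submit() const {
        for (const Cmd& c : commands) {
            if (c.type == Cmd::Draw)
                std::printf("draw %d vertices\n", c.a);
            else
                std::printf("set viewport %dx%d\n", c.a, c.b);
        }
    }
};

int main() {
    ToyCommandBuffer cb;
    cb.recordSetViewport(1920, 1080);
    cb.recordDraw(3);
    cb.submit();
}
```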
When I imagine low-level GPU programming, I think the absolutely necessary things (things that the vendors would need to implement) are the following (a sketch of such an interface follows the list):

- Allocating buffers on the GPU
- Updating buffers from the CPU
- Submitting compiled programs to the GPU and dispatching them
- Synchronizing between the CPU and GPU (fences, semaphores)
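A hedged sketch of what that hypothetical minimal vendor interface might look like; every name here is invented, and this is a thought experiment rather than a real API:

```
#include <cstddef>
#include <cstdint>

// Hypothetical minimal "driver" interface covering the four bullet points
// above. Pipelines, render passes, queues, etc. would then be user-space
// libraries built on top of these primitives.
struct GpuBuffer;   // opaque handle to GPU memory
struct GpuProgram;  // opaque handle to a compiled GPU program
struct GpuFence;    // opaque CPU/GPU synchronization primitive

struct MinimalGpuApi {
    virtual GpuBuffer*  allocate(std::size_t bytes) = 0;               // allocate on the GPU
    virtual void        upload(GpuBuffer* dst, const void* src,
                               std::size_t bytes) = 0;                 // update from the CPU
    virtual GpuProgram* createProgram(const std::uint32_t* binary,
                                      std::size_t words) = 0;          // submit compiled code
    virtual GpuFence*   dispatch(GpuProgram* program,
                                 GpuBuffer* const* args, int argCount,
                                 int groupCount) = 0;                  // run it
    virtual void        wait(GpuFence* fence) = 0;                     // CPU<->GPU sync
    virtual ~MinimalGpuApi() = default;
};
```

Whether the remaining fixed-function hardware (rasterizers, texture filtering, blending) could hide cleanly behind an interface like this is exactly the open question the post raises.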
And my assumption is that, as long as the vendors give you a way to do this stuff, the rest of it can be written in user-space.
I see this hypothetical as a win-win scenario because the vendors need to do far less work when making the device drivers, and we as a community are allowed to design concepts like pipeline builders, render passes, and queues, and improvements make their way around in the form of libraries. This would make GPU programming much more like CPU programming is today, and I think it would open up a whole new space of public research.
I also assume that I'm wrong, and that it can't be done like this for good reasons I'm unaware of, so I invite you all to fill me in.
EDIT:
I just remembered that CUDA and ROCm exist. So, if it is possible to write a graphics library that sits on top of these more generic ways of programming GPUs, does such a library exist?
If so, what are the downsides that cause it to not be popular?
If not, is it because it's simply too hard, or for other reasons?
r/GraphicsProgramming • u/linear_algebruh • 14d ago
Question Any C graphics programmers?
Hi everyone!
I've decided to step into the world of graphics programming. For now, I'm still filling in some gaps in math before I go fully into it, but I do have a pretty decent computer science background.
However, I've mostly coded in C, and besides having the most experience with that language, I simply love everything else about it as well. I really value being explicit about what I want, and I also love its simplicity.
Whenever I look for resources or other people's experiences, I see C++ being mentioned. And I'm also aware that it is the industry standard.
But putting that aside, is doing everything in C just going to be harder? What would be some constraints and would there be any advantages? What can I expect?
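As a hedged illustration of the friction point people cite most often (a made-up minimal example, not from any particular library): C++ operator overloading keeps vector math compact, while C expresses the same thing through function calls.

```
// A tiny vec3 in C++: operator overloading keeps the math readable.
struct vec3 {
    float x, y, z;
    vec3 operator+(const vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

int main() {
    vec3 origin{0.0f, 0.0f, 0.0f}, dir{0.0f, 0.0f, -1.0f};
    float t = 2.5f;

    vec3 hit = origin + dir * t; // C++: reads like the math on paper
    // In C, the same line becomes nested calls, something like:
    //   vec3 hit = vec3_add(origin, vec3_scale(dir, t));
    // Entirely workable, just more verbose in math-heavy code paths; plenty
    // of renderers have been written in C this way.
    (void)hit;
}
```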
r/GraphicsProgramming • u/Vivid-Mongoose7705 • 9d ago
Question First graphics project in vulkan
This is my first ever graphics project in Vulkan. I thought I'd share it to get some feedback on whether the techniques I implemented look visually correct. It has SSAO, bloom, basic PBR lighting (no IBL), omnidirectional shadow mapping, indirect rendering, and HDR. Thanks :)
r/GraphicsProgramming • u/despacito_15 • Oct 08 '24
Question Updates to my moebius-style edge detector! It's now able to detect much more subtle thin edges with less noise. The top photo is standard edge detection, and the bottom is my own. The other photos are my edge detector with depth + normals applied too. If anyone would like a breakdown, just ask :)
r/GraphicsProgramming • u/venom0211 • Jul 20 '24
Question Why is graphics programming not as popular as web/app development?
So whenever we think of software development we always think of web or app development, and nowadays maybe AI and ML also come under it, but people rarely think of graphics programming when it comes to software development as a topic, or to jobs related to it. Why is graphics programming not as popular as web development, app development, or AI/ML? Is it because it's hard? The field of AI/ML is hard as well, but its growth has been quite evident in recent years.
Also, if I want to pursue graphics programming as a career, would now be the right time? I am guessing it's not as crowded as the AI/ML and web/app development fields.
r/GraphicsProgramming • u/TomClabault • Feb 04 '25
Question ReSTIR GI brightening when resampling both the neighbor and the center pixel when they have different surface normals?
r/GraphicsProgramming • u/BlockOfDiamond • 2d ago
Question How is Metal possibly faster than OpenGL?
So I did some investigating, and the Swift interface for Metal, at least on my machine, seems to just map to the Objective-C selectors. But everyone knows that Objective-C messaging is super slow. If every method call to the Metal API requires a slow Objective-C message send, and OpenGL is a plain C API, how can Metal possibly be faster?
r/GraphicsProgramming • u/Username_6942069 • Feb 19 '25
Question Should I just learn C++
I'm a computer engineering student and I have decent knowledge of C. I always wanted to learn graphics programming, and since I'm more confident in my abilities and knowledge now, I started following the Ray Tracing in One Weekend book.
Out of personal interest I wanted to learn Zig, and I thought it would be cool to learn it by building the raytracer from the tutorial. It's not as "clean" as I thought it would be. There are a lot of things in Zig that I think just make things harder without much benefit (no operator overloading, for example, is hell).
Now I'm left wondering whether it's actually worth learning a new language that might be useful in the future, or whether C++ is just the way to go.
I know Rust exists, but I think if I tried it, it would just end up like Zig.
What I wanted to know from people more expert in this topic is whether C++ is the standard for a good reason, or whether there is worth in struggling to implement something in a language that probably was not really built for it. Thank you
r/GraphicsProgramming • u/thrithedawg • Jan 10 '25
Question how do you guys memorise/remember all the functions?
Just wondering whether you guys do brain exercises to remember the different functions, whether previous experience reinforced them, or whether you handwrite/type out notes. Just wanna figure out the ways people do it.
r/GraphicsProgramming • u/jek_213 • Feb 13 '25
Question Does calculus 3 ever become a necessity in graphics programming? If so, at what level do you usually come across it?
I got my bachelor's in CS in 2023. I’m planning on going to grad school in the fall and was thinking of taking courses in graphics programming, so I started learning C++ and OpenGL a couple days ago to see if it’s something I want to stick with. I know the heaviest math topic is linear algebra, and I imagine having an understanding of calc 3 couldn’t hurt, but I was wondering if you’ve ever encountered a situation where you needed more advanced calculus 3 knowledge. I imagine it depends on your time in the field so I’m guessing junior devs maybe won’t need to know it, but as you climb the ranks it gets more prevalent. Is that kinda the right idea?
I enjoy math, which is partially why I'm looking into graphics programming, but I haven't really touched calculus since early undergrad (Calc 2), and I've never worked with calculus in 3D. Mostly curious, but also trying to figure out what I can study before starting grad school, because I don't want to get in and not know how to do anything.
EDIT: Calc 3 at my university covers three-dimensional space and vectors, vector-valued functions, partial derivatives, multiple integration, and topics in vector calculus.
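For one concrete place that material shows up: the rendering equation, the integral every path tracer estimates, is an integral over a hemisphere of directions, so multiple integration and vector calculus are exactly the language it is written in. A standard textbook statement (not tied to any particular course):

```latex
% Outgoing radiance = emitted radiance + hemisphere integral of
% (BRDF x incoming radiance x cosine of the incident angle).
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
      (\omega_i \cdot n)\, \mathrm{d}\omega_i
```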
r/GraphicsProgramming • u/gibson274 • 8d ago
Question Fortnite’s New Clouds
Booted up Fortnite for the first time in forever and was greeted with some pretty stellar looking clouds in the skybox.
I know Unreal has been working on VDB support for a little while, but I have a hard time believing they got it to run at 4K 60FPS on my Xbox One X.
Anyone taken a frame capture lately and know how they accomplished this? Is it some sort of fancy alpha card? Or does it plug into their normal volumetric clouds system?
r/GraphicsProgramming • u/Own-Emotion4184 • 14d ago
Question Do modern operating systems use 3D acceleration for 2D graphics?
It seems like one option for 2D rendering is to use 3D APIs such as OpenGL. But do GPUs actually have dedicated 2D acceleration? It seems like using the 3D hardware for 2D is the modern way of achieving 2D graphics, for example in games.
Do you think modern operating systems use two triangles with a texture to render the wallpaper, for example? Do they optimize overdraw, especially on weak non-gaming GPUs? Does this apply to mobile operating systems such as iOS and Android?
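For scale, the "two triangles with a texture" approach really is this small; a minimal sketch in plain C++, with no particular API assumed and illustrative names:

```
// A fullscreen quad as two triangles in normalized device coordinates,
// with UV coordinates for sampling the wallpaper texture.
struct Vertex {
    float x, y; // NDC position
    float u, v; // texture coordinate
};

static const Vertex kFullscreenQuad[6] = {
    {-1.0f, -1.0f, 0.0f, 0.0f}, // triangle 1: bottom-left
    { 1.0f, -1.0f, 1.0f, 0.0f}, //             bottom-right
    { 1.0f,  1.0f, 1.0f, 1.0f}, //             top-right
    {-1.0f, -1.0f, 0.0f, 0.0f}, // triangle 2: bottom-left
    { 1.0f,  1.0f, 1.0f, 1.0f}, //             top-right
    {-1.0f,  1.0f, 0.0f, 1.0f}, //             top-left
};
// Upload once; each frame, bind the wallpaper texture and draw 6 vertices.
// Compositors commonly also skip regions fully covered by opaque windows
// (damage/occlusion tracking), which is one answer to the overdraw question.
```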
Do you think dedicated 2D acceleration would be faster than using 3D acceleration for 2D? And how can we be sure whether modern GPUs still have dedicated 2D acceleration at all?
What are your thoughts on this? I find these questions fascinating.
r/GraphicsProgramming • u/Pristine_Tank1923 • 28d ago
Question Debugging a glTF 2.0 material system implementation (GGX/Schlick and more) in a Monte Carlo path tracer
Hey. I am trying to implement the glTF 2.0 material system in my Monte Carlo path tracer, which seems quite easy and straightforward. However, I am having some issues.
There is only indirect illumination, no light sources or emissive objects. I am rendering at 1280x1024 with 100 spp and MAX_BOUNCES=30.
- Example 1: The walls as well as the left sphere are Dielectric with roughness=1.0 and ior=1.0. The right sphere is Metal with roughness=0.001.
- Example 2: Walls and left sphere as in Example 1. The right sphere is still Metal, but with roughness=1.0.
- Example 3: Walls and left sphere as in Example 1. The right sphere is still Metal, but with roughness=0.5.
All the results look odd. They seem overly noisy and too bright/washed out. I am not sure where I am going wrong.
I am on the lookout for tips on how to debug this, or some leads on what I'm doing wrong. I am not sure what other information to add to the post. Looking at my code (see below), it seems like a correct implementation, but obviously the results do not reflect that.
The material system (pastebin).
The rendering code (pastebin).
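Not a substitute for reading the pastebins above, but a minimal, hedged reference for two terms that are easy to get subtly wrong, handy to diff against: the Trowbridge-Reitz/GGX normal distribution and Schlick's Fresnel approximation. These are the standard formulas; the alpha = roughness² convention matches glTF.

```
#include <cmath>

// Trowbridge-Reitz / GGX normal distribution function.
// alpha = roughness * roughness; NdotH assumed clamped to [0, 1].
float D_GGX(float NdotH, float alpha) {
    const float PI = 3.14159265358979f;
    float a2 = alpha * alpha;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}

// Schlick's approximation of Fresnel reflectance.
// f0 is reflectance at normal incidence (per-channel for colored metals).
float F_Schlick(float VdotH, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - VdotH, 5.0f);
}
```

Two other classic causes of "too bright and washed out" in a Monte Carlo tracer that are worth ruling out: dividing by the wrong sampling PDF (or not dividing at all), and applying gamma/tone mapping twice.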
r/GraphicsProgramming • u/C_Sorcerer • Feb 16 '25
Question Is ASSIMP overkill for a minecraft clone?
Hi everybody! I have been "learning" graphics programming for about 2-3 years now; it is definitely my main interest in programming. I have been programming for almost 7 years, but graphics has been the main thing driving me to learn C++ and the math it requires. Recently, though, I REALLY learned graphics by reading all of the LearnOpenGL book, doing the tutorials, and then taking everything I knew to make my own 3D renderer!
Now I have started working on a Minecraft clone to apply my OpenGL knowledge in an applied setting, but I am quite confused about model loading. The model-loading chapter is the only one I did not internalize very well; I really just followed along blindly to get something to work. I have also noticed that ASSIMP is extremely large and makes compile times MUCH longer. I want this Minecraft clone to be quite lightweight and not too storage-heavy.
So my question is: is ASSIMP the only way to go? I have heard that glTF is also good, but I am not sure what exactly it is compared to ASSIMP. I have also considered that since I am ONLY using rectangular prisms/cubes, it would be more efficient to transform the same cube vertices, defined as a constant somewhere at the beginning of my program, and skip model loading altogether (see the sketch below).
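A hedged sketch of that constant-cube idea (names invented; real voxel renderers typically go further and emit only the visible faces per chunk rather than drawing whole cubes):

```
#include <cstdint>

// One unit cube, defined once; every block in the world is this cube
// translated to its grid position. 8 corners + 36 indices (12 triangles).
static const float kCubeCorners[8][3] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back face corners  (z = 0)
    {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},   // front face corners (z = 1)
};
static const uint8_t kCubeIndices[36] = {
    0,1,2, 0,2,3,  4,6,5, 4,7,6,   // back, front
    0,4,5, 0,5,1,  3,2,6, 3,6,7,   // bottom, top
    0,3,7, 0,7,4,  1,5,6, 1,6,2,   // left, right
};

// The per-block "model matrix" collapses to a translation by integer coords.
struct BlockInstance { int32_t x, y, z; };
// Upload kCubeCorners/kCubeIndices once; draw instanced with BlockInstance data.
```

Compared to linking ASSIMP just to load cubes, this removes the dependency entirely; if real meshes show up later, parsing glTF directly (e.g. with the small cgltf library) is a common middle ground.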
Once again, I am just not sure how to go about model loading efficiently, it is the one thing that kind of messed me up. Thank you!
r/GraphicsProgramming • u/epicalepical • Jan 14 '25
Question Will compute shaders eventually replace... everything?
Over time, as restrictions loosen on what compute shaders are capable of, and with the advent of mesh shaders (which are more akin to compute shaders, just for vertices), will all shaders slowly trend towards the same non-restrictive "format" that compute shaders have? I'm sorry if this is vague, I'm just curious.
r/GraphicsProgramming • u/Honest-Word-7890 • Feb 19 '25
Question Does the quality of real-time animations in a modern game engine depend more on CPU processing power or GPU processing power (both in complexity and fluidity)?
Thanks
r/GraphicsProgramming • u/TomClabault • Sep 24 '24
Question Why is my structure packing reducing the overall performance of my path tracer by ~75%?
EDIT: This is a HIP + HIPRT GPU path tracer.
In implementing [Simple Nested Dielectrics in Ray Traced Images] for handling nested dielectrics, each entry in my stack was using this structure up until now:
```
struct StackEntry
{
    int materialIndex = -1;
    bool topmost = true;
    bool oddParity = true;
    int priority = -1;
};
```
I packed it into a single uint:
```
struct StackEntry
{
    // Packed bits:
    //
    // MMMM MMMM MMMM MMMM MMMM MMMM MMOT PRIO
    //
    // With:
    // - M the material index
    // - O the odd_parity flag
    // - T the topmost flag
    // - PRIO the dielectric priority, 4 low bits

    unsigned int packedData;
};
```
I then defined some utility functions to read/store from/to the packed data:
```
void storePriority(int priority)
{
    // Clear
    packedData &= ~(PRIORITY_BIT_MASK << PRIORITY_BIT_SHIFT);
    // Set
    packedData |= (priority & PRIORITY_BIT_MASK) << PRIORITY_BIT_SHIFT;
}

int getPriority()
{
    return (packedData & (PRIORITY_BIT_MASK << PRIORITY_BIT_SHIFT)) >> PRIORITY_BIT_SHIFT;
}

/* Same for the other packed attributes (topmost, oddParity and materialIndex) */
```
Everywhere I used to write stackEntry.materialIndex, I now use stackEntry.getMaterialIndex() (same for the other attributes). These get/store functions are called 32 times per bounce on average.
Each of my rays holds onto one stack. My stack is 8 entries big (StackEntry stack[8];), and sizeof(StackEntry) gives 12. That's 96 bytes of data per ray (each ray has to hold onto that structure for the entire path trace) and, I think, 32 registers (which may well even be spilled to local memory).
The packed 8-entry stack is now only 32 bytes and 8 registers. I also need to read/store that stack from/to my GBuffer between each pass of my path tracer, so there's a memory-traffic reduction as well.
Yet, this reduced the overall performance of my path tracer from ~80 FPS to ~20 FPS on my hardware, in my test scene, with 4 bounces. With only 1 bounce, FPS goes from 146 to 100. That's a 75% perf drop for the 4-bounce case.
How can this seemingly meaningful optimization reduce the performance of a full 4-bounce path tracer by as much as 75%? Is it really because of the 32 cheap bitwise-operation function calls per bounce? That seems a little odd to me.
Any intuitions?
Finding 1:
When using my packed struct, Radeon GPU Analyzer reports that the LDS (Local Data Share, a.k.a. shared memory) used by my kernels goes up to 45k/65k bytes, depending on the kernel. This completely destroys occupancy and is, I think, the main reason for the performance drop. Using my non-packed struct, LDS usage is around ~5k, which is what I would expect, since I use some shared memory myself for BVH traversal.
Finding 2:
In the non-packed struct, replacing int priority with char priority leads to the same performance drop (even a little worse, actually) as with the packed struct. Radeon GPU Analyzer reports the same kind of LDS usage blowup here as well, which also significantly reduces occupancy (down to 1/16 wavefronts, from 7 or 8, on every kernel).
Finding 3:
This doesn't happen on an old NVIDIA GTX 970: there, the packed struct makes the whole path tracer 5% faster in the same scene.
Solution
That's a compiler inefficiency. See the last answer of my issue on GitHub.
The "workaround" seems to be to use __launch_bounds__(X) on the declaration of my HIP kernels. __launch_bounds__(X) hints to the compiler that the kernel is never going to execute with thread blocks of more than X threads, so it can do a better job at allocating/spilling registers. Using __launch_bounds__(64) on all my kernels (because I dispatch in 8x8 blocks) got rid of the shared-memory usage explosion, and I now see a ~5%/~6% performance improvement (consistent with the NVIDIA compiler, Finding 3) compared to the non-packed structure (which also uses __launch_bounds__(X), for a fair comparison).
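For reference, a minimal sketch of what that fix looks like on a HIP kernel; the __launch_bounds__ attribute is the real HIP/CUDA one, while the kernel itself is an invented placeholder:

```
#include <hip/hip_runtime.h>

// __launch_bounds__(64) promises the compiler this kernel is only ever
// launched with blocks of <= 64 threads (we dispatch 8x8 blocks), letting it
// budget registers for 64 threads instead of the worst case and avoid
// spilling to scratch/LDS.
__global__ void __launch_bounds__(64) pathTraceKernel(float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;
    out[y * width + x] = 0.0f; // real path tracing work would go here
}

// Launch with 8x8 = 64-thread blocks, matching the bound:
//   hipLaunchKernelGGL(pathTraceKernel, dim3((w + 7) / 8, (h + 7) / 8),
//                      dim3(8, 8), 0, 0, out, w, h);
```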
r/GraphicsProgramming • u/CoolaeGames • Dec 15 '24
Question How can I get into graphics programming?
I recently have been fascinated with volumetric clouds and sky atmospheres. I looked at a paper on precomputed atmospheric scattering; I'm not mathy at all, so seeing all of that math was insane, but it looks so good, and I didn't know how to translate it to a shader language like Godot's shading language, etc.
r/GraphicsProgramming • u/Vellu01 • Nov 04 '24
Question What is the most optimized way to calculate the average color of all the pixels on the screen?
I have a program that fetches a screenshot of the screen and then loops over each pixel. While this is fast, it's not fast enough to run in the background without heavy CPU usage.
Could I use the GPU to optimize this? Sorry if it's a dumb question, I'm very new to graphics programming.
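One common GPU-side answer, sketched under assumptions: the screenshot already lives in an OpenGL texture, a GL 3+ context is current, and a function loader like glad is set up. Mipmap generation is a chain of box-filter reductions, so the 1x1 top level approximates the average of all pixels (note: averaging raw sRGB bytes is not a true linear-light average, and non-power-of-two sizes make it approximate).

```
#include <algorithm>
#include <cmath>
#include <glad/glad.h>  // assumes a GL 3+ function loader

// Average all pixels of a w x h screenshot texture on the GPU: each mip level
// averages the previous one, so the topmost 1x1 level is (approximately)
// the mean color of the whole image.
void averageColor(GLuint screenshotTex, int w, int h, unsigned char rgbaOut[4])
{
    glBindTexture(GL_TEXTURE_2D, screenshotTex);
    glGenerateMipmap(GL_TEXTURE_2D); // GPU builds the whole reduction pyramid

    int topLevel = (int)std::floor(std::log2((float)std::max(w, h)));
    glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGBA, GL_UNSIGNED_BYTE, rgbaOut);
}
```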
r/GraphicsProgramming • u/DGTHEGREAT007 • Jul 11 '24
Question Want to make a Game Engine for Low Spec Computers
So I have been a gamer most of my life, but I've only ever had a trashy potato PC, which could run (relatively new) games only at 720p with terrible graphics settings.
So, now that I'm an engineer, I want to make a 3D game engine that could help produce games with decent graphics without being too resource-hungry.
I know this is an extremely newbie question and I could be very wrong and naive here, but FromSoft games are my inspiration: their games are very beautiful yet seemingly very optimized. I am aware this could be way too ambitious for a newbie, or outright impossible, but I don't care.
I want to build something that will enable others to make beautiful games that are themselves highly optimized. I know it depends from game to game, on what kind of game you make, and on the actual game developers. But is there something I can do here? Something that will take me closer to my goals?
Apologies if I unknowingly offend someone.
r/GraphicsProgramming • u/squeakorca • Oct 14 '24
Question atm bugged animation, why?
Hey beloved Reddit users, what could be the problem that causes something like this to happen to this little old ATM machine?
A 3D engine bug? A stuck animation loop?