Solved: How is Lighting determined?
-
Using a very primitive style of lighting, such as having one "light position" with a bright red color value, how is lighting actually determined?
For instance, vertex information is uploaded to the GPU, that vertex information is activated, then a draw call is issued, then a release call, and then we move on to the next activation and draw call. Here I wonder: how does lighting get computed? Is the GPU so fast that it stores the rendered result of the first draw call, and then when the second draw call's result is displayed it recalculates the shadow and brightness on the objects per draw call? Or, when it comes to lighting, do all objects run a "bind" command so the GPU has a sense of "complete" knowledge of every draw call about to happen, then the two draw calls occur, the "interaction" calculations happen in one swoop, the results are stored on screen, and the GPU is no longer at the "ready" once the two release commands occur?
I have read a lot about the OpenGL pipeline, tessellation and the fragment shader stage. I still do not have enough information, however, to be able to write the "most basic" lighting/shading example.
I do know how I could write one shader that would extract a model (vertices) and create basic lighting, because in my mind all the interaction data (when to create brightness and when to create shadows) is within one shader.
I am knowledgeable yet so ignorant...
-
It's hard to answer because there's enough material here to cover several thick books. There's also not one single answer. There are dozens upon dozens of lighting techniques out there that utilize different capabilities of the hardware the app is targeting. Some of them are neatly described in the GPU Gems books by NVidia.
But to give you a couple of keywords that you can follow up on, there are three basic groups of techniques commonly used: forward rendering, deferred rendering and raytracing.
Forward rendering is the simplest: it's basically a loop over all objects and all lights, and for each pair the color of a pixel is determined from the light position, pixel position and material properties. The downside is that the number of calculations grows fast, because it's the number of objects times the number of lights, so it's suitable for cases with just a few lights or as a fallback when other techniques can't be used for some reason.
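In pseudocode the forward loop looks roughly like this (a minimal sketch; the Light, Object and drawWithLight names are made up just to show the shape of the loop, not a real API):

```cpp
#include <vector>

// Hypothetical types, just to show the shape of the loop - not a real API.
struct Light  { float position[3]; float color[3]; };
struct Object { /* mesh, material, transform... */ };

// Stand-in for "bind this object and this light, then issue the draw call".
void drawWithLight(const Object& obj, const Light& light) { /* issue draw call */ }

void forwardPass(const std::vector<Object>& objects, const std::vector<Light>& lights)
{
    // Cost grows as objects * lights, which is why this only scales to a few lights.
    for (const Object& obj : objects)
        for (const Light& light : lights)
            drawWithLight(obj, light);   // results are typically blended additively
}
```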
Deferred rendering works a little differently. Objects are rendered to a so-called g-buffer: a collection of buffers that encode various aspects of the objects, like z-depth, diffuse/albedo and specular colors, surface normal, roughness factor etc. Then a second pass takes the g-buffer as input and for each light calculates the color of a pixel. This technique is suitable for a large number of small lights, because the complexity is now the number of objects plus the number of lights. The downside is that you need a lot more memory and bandwidth to create and use the g-buffer.
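The same kind of sketch for a deferred renderer, again with made-up names, just to show the two-pass shape and why the cost becomes objects plus lights:

```cpp
#include <vector>

// Hypothetical names, just to show the two-pass shape of a deferred renderer.
struct GBuffer { /* textures: depth, albedo, normals, roughness... */ };
struct Object  { /* mesh + material */ };
struct Light   { float position[3]; float color[3]; };

void writeGeometryToGBuffer(const Object& obj, GBuffer& gbuf) { /* geometry pass */ }
void shadeWithLight(const GBuffer& gbuf, const Light& light)  { /* lighting pass  */ }

void deferredFrame(const std::vector<Object>& objects,
                   const std::vector<Light>& lights, GBuffer& gbuf)
{
    // Pass 1, cost ~ number of objects: rasterize material data into the g-buffer.
    for (const Object& obj : objects)
        writeGeometryToGBuffer(obj, gbuf);

    // Pass 2, cost ~ number of lights: each light reads the g-buffer and adds its color.
    for (const Light& light : lights)
        shadeWithLight(gbuf, light);
}
```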
Raytracing is another technique. It first creates a 3D representation of the scene; a commonly used structure for that is a BVH (Bounding Volume Hierarchy). For each pixel on the screen a ray (or a couple) is then shot into that structure and, based on the material properties of the objects it hits, the ray is bounced around to determine whether it reaches any light source. There are many variations of this: you can shoot rays from the lights or from the objects themselves, but that's the basics.
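Here is a deliberately tiny, self-contained sketch of just the shadow-ray part of that idea. It skips the BVH entirely and simply tests a list of spheres; every name in it is illustrative, not a real API:

```cpp
#include <cmath>
#include <vector>

// Minimal sketch: spheres stand in for the scene, and a shadow ray decides
// whether a point can "see" the light.
struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Returns true if the ray origin + t*dir (t in (0, maxT)) hits the sphere.
static bool hitSphere(const Sphere& s, Vec3 origin, Vec3 dir, float maxT)
{
    Vec3 oc = sub(origin, s.center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    return t > 1e-4f && t < maxT;
}

// A point is lit if the ray from it toward the light hits nothing on the way.
static bool isLit(Vec3 point, Vec3 lightPos, const std::vector<Sphere>& scene)
{
    Vec3 toLight = sub(lightPos, point);
    for (const Sphere& s : scene)
        if (hitSphere(s, point, toLight, 1.0f))  // t in (0,1) spans point -> light
            return false;                        // something blocks the light: shadow
    return true;
}
```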
The actual color a pixel gets is determined by what is called a BRDF (Bidirectional Reflectance Distribution Function), which, simplifying, is a mathematical equation that, given a set of parameters, produces the resulting color. There are a lot of different BRDFs out there. A sort of "hello world" BRDF is the Blinn-Phong shading model, and that's a good one to start with if you're learning.
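For reference, this is the core of the Blinn-Phong model written as plain CPU-side C++ so it's easy to read; in practice this math goes into your fragment shader. The small Vec3 type and the parameter names are only for illustration:

```cpp
#include <algorithm>
#include <cmath>

// Tiny vector type just for this example.
struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Blinn-Phong: a diffuse term N.L plus a specular term (N.H)^shininess.
Vec3 blinnPhong(Vec3 normal, Vec3 toLight, Vec3 toEye,
                Vec3 diffuseColor, Vec3 specularColor, float shininess)
{
    Vec3 n = normalize(normal);
    Vec3 l = normalize(toLight);
    Vec3 v = normalize(toEye);
    Vec3 h = normalize(add(l, v));                     // half vector between light and eye

    float ndotl = std::max(dot(n, l), 0.0f);           // diffuse factor
    float ndoth = std::max(dot(n, h), 0.0f);
    float spec  = (ndotl > 0.0f) ? std::pow(ndoth, shininess) : 0.0f;

    return add(scale(diffuseColor, ndotl), scale(specularColor, spec));
}
```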
Depending on the technique and BRDF you choose, the way the data is structured and moved to the GPU will be different. For example, in forward rendering you might want to sort and group together objects influenced by a single light, or you might group nearby lights so you can do the lighting calculation for a couple of them in one shader invocation.
It's tough to choose an approach when you're learning, because there are just so many of them and each has good and bad aspects. There's no single answer to your questions. Over the last couple of decades graphics people, especially in the movie and game industries, have come up with techniques not even the creators of the hardware they run on dreamed of, and it is still going. Every year a new idea shows up in some title and the rest of the industry feeds on it and mutates it into something else.
-
As for the basic flow of an app, there are a couple of stages:
First you have your model data sitting there on the hard drive. Depending on the lighting model you want to use, you need to decide on the format. What data will you need: vertices, indices, material information? What information will the materials use: colors, roughness, normal data, depth, reflectance etc.? All of that depends on the BRDF of your choosing.
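For example, a simple layout could look like this (assuming a Blinn-Phong style material; the field list would change with a different BRDF):

```cpp
#include <cstdint>
#include <vector>

// One possible in-memory layout for a simple Blinn-Phong style setup.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

struct Material {
    float diffuseColor[3];
    float specularColor[3];
    float shininess;
};

struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> indices;         // triangle list into `vertices`
    uint32_t              materialIndex;   // which Material this mesh uses
};
```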
Next is reading this into memory. For small scenes you can just put all your objects in an array and that's it, but for larger scenes you need something more advanced, like a BSP or a quadtree, so that you can quickly determine which objects are close to the camera and which fall out of view, are obscured or are too far away. The goal is always to determine the absolute minimal set of objects to render, because running those shaders is expensive. For even larger scenes, like open-world games, you simply can't afford to store everything in memory. You then need to partition your world into chunks and stream them in and out as you move around. There's another set of techniques around that topic.
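Just to make the idea concrete, here's a deliberately naive visibility pass that only rejects objects that are too far away or behind the camera; a real engine would use a proper spatial structure instead. All names are illustrative:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct SceneObject { Vec3 position; float boundingRadius; };

// Naive stand-in for a BSP/quadtree query: distance and "in front of camera" tests.
std::vector<const SceneObject*> selectVisible(const std::vector<SceneObject>& all,
                                              Vec3 cameraPos, Vec3 cameraForward,
                                              float maxDistance)
{
    std::vector<const SceneObject*> visible;
    for (const SceneObject& obj : all) {
        Vec3 d = { obj.position.x - cameraPos.x,
                   obj.position.y - cameraPos.y,
                   obj.position.z - cameraPos.z };
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist - obj.boundingRadius > maxDistance) continue;   // too far away
        float facing = d.x * cameraForward.x + d.y * cameraForward.y
                     + d.z * cameraForward.z;
        if (facing < -obj.boundingRadius) continue;              // entirely behind camera
        visible.push_back(&obj);
    }
    return visible;
}
```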
Next, you can store the set of objects in a multitude of ways. For example, you could store each object as an instance of some class, and that's fine for a small number of objects. But for large sets this is very inefficient when it comes to memory and cache access, so there are techniques around that. A common approach is a component-based system, where an object is just a set of indices into large, uniform buffers holding each piece of data, e.g. a buffer of positions, normals etc. This is also good for streaming, because you can read the data of multiple objects as just one big chunk from the disk.
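A bare-bones sketch of that layout (made-up names, just to show the idea of an object being an index into shared arrays):

```cpp
#include <cstdint>
#include <vector>

// "Structure of arrays": an object is an index into big uniform arrays instead
// of a class that owns all of its own data.
struct Scene {
    // One entry per object, stored contiguously - cache and streaming friendly.
    std::vector<float>    transforms;    // 16 floats (a 4x4 matrix) per object
    std::vector<uint32_t> meshIds;       // which mesh in the shared vertex/index buffers
    std::vector<uint32_t> materialIds;   // which entry in the material buffer
};

// An "object" is now nothing more than a slot number in every array above.
struct ObjectHandle { uint32_t index; };

// Reading object data is just indexing, and a whole chunk of objects can be
// loaded from disk as one contiguous read.
inline const float* transformOf(const Scene& s, ObjectHandle h)
{
    return &s.transforms[h.index * 16];
}
```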
When you have your data buffers ready and the objects that are to be rendered selected, you can, again, draw them in many ways. You could go one by one, but that's very inefficient. A common approach is indirect rendering, i.e. you bind common data buffers containing vertices, normals etc., prepare an input buffer that describes which parts of these buffers correspond to the objects you want to draw, and feed that to the indirect call. It will then invoke the appropriate shader stages for each object without rebinding anything in between. That's a much more efficient way to utilize the GPU.
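As a rough sketch of what that looks like with plain OpenGL 4.3+ (assuming a VAO with the shared vertex buffers is already bound; error handling is omitted):

```cpp
#include <GL/glew.h>   // or whichever loader you already use
#include <vector>

// Matches the layout glDrawArraysIndirect / glMultiDrawArraysIndirect expect.
struct DrawArraysIndirectCommand {
    GLuint count;           // vertices to draw for this object
    GLuint instanceCount;   // usually 1
    GLuint first;           // this object's offset inside the shared vertex buffer
    GLuint baseInstance;    // handy for indexing per-object data in the shader
};

void drawSceneIndirect(const std::vector<DrawArraysIndirectCommand>& commands)
{
    GLuint indirectBuffer = 0;
    glGenBuffers(1, &indirectBuffer);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER,
                 commands.size() * sizeof(DrawArraysIndirectCommand),
                 commands.data(), GL_STATIC_DRAW);

    // One call draws every object described in the buffer, no rebinding in between.
    glMultiDrawArraysIndirect(GL_TRIANGLES, nullptr,
                              static_cast<GLsizei>(commands.size()), 0);

    glDeleteBuffers(1, &indirectBuffer);
}
```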
In general you can imagine the GPU as being a big fat cargo train. It is really, really inefficient for it to do small things. It's a huge waste to carry just a couple of small packages, and a huge waste if it has to stop at every crossing. It takes a long time to pack up, start and stop, but if you pack it full, clear the tracks and get it going, it can carry massive cargo very far really fast.
That analogy works well with GPUs. It's really, really inefficient to use single draw calls like the old OpenGL glVertex-style API. It's like stopping a train every few meters to give it the next small package. There are larger calls like glDrawArrays, even better ones like glDrawArraysIndirect, and even faster OpenGL extensions like ARB_multi_draw_indirect, NV_command_list or the bindless approach.
The idea is to spend a little bit more time on the CPU - carefully pick and prepare data for rendering, structure it so that the GPU can consume it as efficiently as possible, bind large amounts of data at once - geometry, materials and shader information, bind indirect buffers which tell the GPU what to do with these buffers and then issue as few actual drawing commands as possible. Pack the GPU full and let it go like a cargo train.
-
I’m going to write some code to try some things out now that I have more information.
-
Okay so I’m brain frozen here: how do I select which rendering style?
Like if I used Phong GLSL type of shading, does that automatically use the BRDF rendering?
How would I use ray tracing for simple lighting (shadows are the precise shape of whatever blocks the light, and things just get brighter the closer they are to the light)? How could I "enable" deferred rendering (is that drawing to a back buffer and then doing a swap)? And what about the first one? It sounds like a good option for educating oneself, although it also sounds entirely slow.
-
Like if I used Phong GLSL type of shading, does that automatically use the BRDF rendering?
The method of rendering and the BRDF selection are two separate topics; all of the methods can use the same BRDF. To make it super simple, the BRDF is the equation you have in your vertex/pixel shader for calculating a color. The rendering method is about how you get to invoke those shaders.
The first method is slow for certain scenarios, like tons of lights, but is superb for others. Modern game engines usually mix two or all three of these methods for different types of objects. For example, you could have deferred lighting as the base method for most solid objects, a forward pass for translucent stuff like glass, and raytracing for global illumination (indirect lighting). Up until about a decade ago pretty much all popular engines used mostly forward renderers, but nowadays everything is a hybrid of multiple techniques.
But all that can be a bit overwhelming, so don't try to jump on all of these at once. Pick one and implement that for a start.
how do I select which rendering style?
It's like choosing bubble sort or quick sort. They are just different algorithms suitable for different tasks, so you just pick one that is best for your situation. You can use both if you want.
The forward renderer is the easier one. For each light you set its properties as shader constants and just draw each object with a shader that implements e.g. Phong's equation using those constants. You draw that directly to the framebuffer (assuming you don't want to do any postprocessing later, like blur, bloom, chromatic aberration etc.).
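A minimal host-side sketch of that, assuming a compiled GLSL program with uniforms named lightPos and lightColor and one VAO per object (those names are placeholders, and per-object transforms and materials are left out):

```cpp
#include <GL/glew.h>
#include <vector>

struct RenderObject { GLuint vao; GLsizei vertexCount; };

void forwardPass(GLuint program,
                 const float lightPos[3], const float lightColor[3],
                 const std::vector<RenderObject>& objects)
{
    glUseProgram(program);

    // The light's properties become shader constants for every draw that follows.
    glUniform3fv(glGetUniformLocation(program, "lightPos"),   1, lightPos);
    glUniform3fv(glGetUniformLocation(program, "lightColor"), 1, lightColor);

    // Every object is drawn with the same shader; the BRDF lives inside that shader.
    for (const RenderObject& obj : objects) {
        glBindVertexArray(obj.vao);
        glDrawArrays(GL_TRIANGLES, 0, obj.vertexCount);
    }
}
```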
The deferred approach, on the other hand, is a two-pass algorithm. In the first pass you set up a framebuffer with multiple attachments, one texture for each element of the g-buffer. You render each object and a shader writes information about the objects' materials into these buffers: normals, color, roughness etc. In the second pass you take these textures and set them up as multitexture input, bind another buffer containing light information like positions, color, falloff etc., and then do a draw call with that. In that pass you use the same BRDF as in forward rendering, but the difference is that the object information is encoded in the g-buffer textures and not passed as vertex attributes of individual draws. The second pass goes to a single output framebuffer (again assuming no postprocess passes).
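Here's roughly what the first-pass setup can look like in OpenGL. The choice of three attachments, their formats and what they store is only an example; pick whatever your BRDF actually needs:

```cpp
#include <GL/glew.h>

// Creates an FBO with three color attachments (the g-buffer) plus a depth buffer.
GLuint createGBuffer(int width, int height, GLuint outTextures[3])
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // 0: albedo, 1: world-space normal, 2: specular/roughness (just one possible split).
    glGenTextures(3, outTextures);
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, outTextures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, outTextures[i], 0);
    }

    // The geometry-pass fragment shader writes to all three outputs at once.
    const GLenum drawBuffers[3] = { GL_COLOR_ATTACHMENT0,
                                    GL_COLOR_ATTACHMENT1,
                                    GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, drawBuffers);

    // Depth is needed too; a renderbuffer is the simplest option.
    GLuint depthRbo = 0;
    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);

    // In real code, check glCheckFramebufferStatus(GL_FRAMEBUFFER) here.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```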
Here's a tutorial on that technique: Deferred Shading.
Shadows are yet another topic and I wouldn't try to do them until you have a grasp of basic lighting. In forward and deferred rendering they are an effect separate from lighting. Yes, I know, it sounds weird, but shadows are not part of lighting in those techniques. They are calculated as an entirely separate pass using techniques like shadow mapping, cascaded shadow maps, PCF, VSM, PCSS or others. For example, a good explanation of the VSM technique is given in one of the GPU Gems books I mentioned: Summed-Area Variance Shadow Maps. But again, I wouldn't worry about shadows just yet. Do basic direct lighting first and get comfortable with it before you move on.
In raytracing, shadows show up kind of naturally and automatically, because this technique is the closest to what real light does: shadows are just places the rays can't reach. Raytracing is a more advanced technique, though, and I wouldn't start with it. It is now hardware accelerated on many GPUs, but OpenGL has no direct API support for it, so Vulkan or DirectX 12 are better suited for that task.