It turns out there is another way to draw lines, specifically a chain of them. Previously we used two points for each line, but often you need to draw a chain of line segments, each segment starting where the last one ended. While you could store two points for every segment, almost half of that data is redundant and unnecessary as long as the GPU understands "we're making a chain, just send a list of the points in the chain and I'll draw the line segments between them". This format for drawing chains of line segments needs roughly half the memory for the same amount of geometry: instead of a chain of 10 line segments needing 20 points to draw, it only needs 11 points. Storing the same things in more compressed formats will be a repeating theme here. It may seem stupid for 10 or even 100 line segments, but GPUs may need to draw millions of segments for a single frame of a scene… and all those point coordinates add up.
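To make the savings concrete, here is a rough sketch in Python (not a real GPU API; the function names are just made up for illustration) comparing the two formats for the same chain:

```python
def line_list_points(chain):
    """Line list format: every segment carries both of its endpoints."""
    points = []
    for a, b in zip(chain, chain[1:]):
        points.extend([a, b])
    return points

def line_strip_points(chain):
    """Line strip format: just the chain itself; the GPU connects neighbors."""
    return list(chain)

# A chain of 10 segments is 11 points as a strip, but 20 as a list.
chain = [(i, i * i) for i in range(11)]
print(len(line_list_points(chain)))   # 20
print(len(line_strip_points(chain)))  # 11
```

The redundant copies in the list format are exactly the interior points of the chain, each stored twice.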
Now, for the colors: the lines now look like they blend between different colors. Each coordinate in the line has its own color, and the color of each drawn pixel is blended based on how far that pixel is from each coordinate. This style of blending is called linear interpolation. At point A it's all the color of A, at point B it's all the color of B, and a point perfectly between A and B is half A, half B; at 1/4 of the way from A to B, it's 3/4 A and 1/4 B. There are a few different styles of interpolation, each with its own purpose, but linear is the cheapest, so I'm going to use it in the examples for now.
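Linear interpolation is simple enough to show directly. This is a small sketch (the helper names are mine, not from any graphics library) blending two RGB colors by a blend factor `t` between 0 and 1:

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b, t=0.5 is halfway."""
    return a + (b - a) * t

def lerp_color(color_a, color_b, t):
    """Blend two RGB colors channel by channel."""
    return tuple(lerp(ca, cb, t) for ca, cb in zip(color_a, color_b))

red  = (1.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0)
print(lerp_color(red, blue, 0.5))   # halfway: (0.5, 0.0, 0.5)
print(lerp_color(red, blue, 0.25))  # 1/4 of the way: 3/4 red, 1/4 blue
```

Note that the blend happens per channel; red, green, and blue are each interpolated independently.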
For this style of interpolation to work, each coordinate needs a color associated with it. All of our examples so far have paired a color with each coordinate, but this is the first time the color has been blended between two coordinates. Even the earlier line segment example technically had two colors, but the colors assigned to each point of the segment were the same, and blending two identical colors together makes the same color. What we have been calling a coordinate in a chain of line segments is usually called a "vertex" (another graphics term you've probably heard), but even that is a bit misleading: in graphics, a vertex can contain significantly more information than just a position. In this case, each vertex also contains a color. If you have ever heard the term "vertex shader" in graphics, this is what it means by "vertex".
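A vertex, then, is just a bundle of per-point data. A minimal sketch of the idea (the `Vertex` class here is illustrative, not any real API's layout):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    # A vertex is a little box of per-point data: position and color here,
    # though real engines often pack in normals, texture coordinates, etc.
    position: tuple  # (x, y)
    color: tuple     # (r, g, b)

# The earlier single-color segment: two vertices with identical colors,
# so blending between them changes nothing visible.
segment = [
    Vertex(position=(0.0, 0.0), color=(1.0, 1.0, 1.0)),
    Vertex(position=(5.0, 3.0), color=(1.0, 1.0, 1.0)),
]
```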
A "shader" is a small program (sometimes called a kernel) that is executed on one piece of a set of data and, most of the time, generates a similar piece of data. In this case a vertex is consumed and a new vertex is produced for other shaders to use. Shaders of all kinds are responsible for massive parts of that crazy math I hinted at earlier, and vertex shaders are responsible for things like placing models in the world (I'll come back to this later). Deciding what to do in a vertex shader is tricky, since they can be used for a whole bunch of things, and people have long debates about whether it's good to do those things with vertex shaders or with some other similar technology or shader. However, to get data to those shaders, you normally use vertices (there are a few other ways as well, which I won't get into). Each vertex works like a little box, normally storing a position, a color, and whatever else you need to make the vertex shader work.
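The shape of a vertex shader can be sketched in plain Python: one vertex goes in, one vertex comes out, and the same little function runs independently for every vertex (this is a conceptual sketch, not shader-language code):

```python
def vertex_shader(vertex):
    """One vertex in, one vertex out."""
    position, color = vertex
    # A real vertex shader would typically transform the position here,
    # e.g. to place a model in the world; this one just forwards its input.
    return (position, color)

# The GPU effectively runs the shader once per vertex, independently.
vertices = [((0.0, 0.0), (1.0, 0.0, 0.0)), ((4.0, 0.0), (0.0, 0.0, 1.0))]
outputs = [vertex_shader(v) for v in vertices]
```

The key property is that each invocation sees only its own vertex, which is what lets the GPU run huge numbers of them in parallel.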
So, does a vertex shader do that fancy color blending interpolation?… nope, it's not dealing with that. The point of bringing up vertex shaders is that vertices store data like color, but a vertex shader is likely (again with the "people will argue this") to pass that data on to the next shader. This gets a bit more complicated since vertex shaders are normally the first shader in a series of shaders, but we're going to jump to the last shader: the pixel (or fragment) shader. Guess what? It's a tiny program run for each pixel that generates a color to put in that pixel. There are shaders between the vertex and pixel shaders, but they are optional and we're not going to go into them.

Each pixel shader invocation requires a point that was produced (in this case) by a vertex shader… Wait – didn't we generate a vertex earlier? The pixel shader's input format is the same as the vertex shader's output format, so a position and a color in this case. But between these steps, the rasterizer came in and picked the spots where the pixel shader needs to run. The rasterizer works out how far along the geometry (in this case a line) each pixel is, and each piece of data in the vertex shader output is interpolated and blended between the two ends of the line based on that distance. In this case, our pixel shader gets the interpolated coordinate of the pixel being drawn (which is the position of the pixel) and a color interpolated from the two endpoints of the line; from there it provides that color as the color of the pixel at that point.

This means our pixel shader is dumb as bricks right now: it takes a piece of data provided as its input and directly provides it as output. Current AAA game pixel shaders can have hundreds of things going on. These shaders tend to be the most complicated, as they handle things like lighting and reflections, and each one gets run at least once for every pixel of your monitor.
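The whole pipeline for one line can be sketched end to end. This is a toy model in Python, not how real hardware works: the rasterizer here just steps along the line at a fixed number of points, interpolates every piece of vertex data by how far along it is, and hands the result to a pass-through pixel shader:

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

def pixel_shader(position, color):
    # Dumb as bricks: return the interpolated color it was handed.
    return color

def rasterize_line(v0, v1, steps):
    """Toy rasterizer: walk along the line; for each covered pixel,
    blend every piece of vertex data by how far along the line the
    pixel is, then run the pixel shader on the blended values."""
    (p0, c0), (p1, c1) = v0, v1
    pixels = []
    for i in range(steps + 1):
        t = i / steps  # distance along the line, 0 at v0, 1 at v1
        position = tuple(lerp(a, b, t) for a, b in zip(p0, p1))
        color = tuple(lerp(a, b, t) for a, b in zip(c0, c1))
        pixels.append((position, pixel_shader(position, color)))
    return pixels

red_end  = ((0.0, 0.0), (1.0, 0.0, 0.0))  # vertex: position, color
blue_end = ((4.0, 0.0), (0.0, 0.0, 1.0))
for position, color in rasterize_line(red_end, blue_end, steps=4):
    print(position, color)
```

The pixel halfway along comes out half red, half blue, exactly the blending described above; the pixel shader itself never computes the blend, it only receives the already-interpolated values.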
I mentioned Z-culling before, but that technique isn't perfect and transparency is a thing, so many pixels get drawn multiple times. Most of the render time in a game is spent running these pixel shaders.