How Do You Draw A Ray
Michael Walczyk
Ray Marching
If you've ever visited Shadertoy, you've probably seen ray marching in action. It's an interesting technique that can be used to generate fully procedural environments entirely from a single fragment shader. Unfortunately, there aren't a ton of great resources for learning these techniques. So, I wanted to create a quick tutorial on setting up a basic ray marching shader. I will be using Derivative's TouchDesigner for rendering, but you should be able to port this to any other 3D environment fairly easily. In future blog posts, we will build upon the topics covered in this tutorial to create more interesting scenes!
If you are working with TouchDesigner, all you will need for this tutorial is a single GLSL TOP. In the "Common" tab, I changed the "Output Resolution" parameter from "Use Input" to "Custom Resolution," 720 x 720.
Signed Distance Functions
As I hinted at already, when we use ray marching, we aren't dealing with polygons anymore. So, a lot of the things that we're used to in a typical 3D toolkit - geometry, lights, cameras - are nonexistent. But if we don't have points, lines, triangles, and meshes, how do we render anything? The trick is to use signed distance functions or SDFs (I will use these two terms interchangeably throughout this post). These are mathematical functions that take a point in space and tell you how far that point is from a surface. This can be somewhat confusing to wrap your head around, so maybe an example will help.
Let's say we are at a point p in 3D space. In GLSL, we can represent p as a vec3, for some coordinates x, y, and z. Now, suppose we have a sphere centered at the origin with unit radius. We want to answer the question, "what is the distance from p to the closest point on the sphere?" It turns out that this distance corresponds to the length of the vector that points from the sphere's center to p, minus the sphere's radius. The following GLSL function implements this:
```glsl
// params:
// p: arbitrary point in 3D space
// c: the center of our sphere
// r: the radius of our sphere
float distance_from_sphere(in vec3 p, in vec3 c, float r)
{
    return length(p - c) - r;
}
```

And if you aren't convinced, check out the diagram below. Here, ||p - c|| denotes the length or norm of the vector p - c, which points from the center of the sphere c to p. Convince yourself that ||p - c|| - r does, in fact, give us the distance from p to the closest point on the sphere. The code snippet above is our first signed distance function. This function is "signed" because it returns a negative value, zero, or a positive value, depending on whether we are inside, on the surface of, or outside the sphere, respectively. We can write these three cases:
Case 1
||p - c|| < r, which means we are somewhere inside the sphere. This implies that ||p - c|| - r is negative.
Case 2
||p - c|| = r, which means we are somewhere on the surface of the sphere. This implies that ||p - c|| - r is zero.
Case 3
||p - c|| > r, which means we are somewhere outside the sphere. This implies that ||p - c|| - r is positive.
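These three cases are easy to check numerically. Here is a small Python sketch of the same SDF (a CPU-side translation of the GLSL above, purely for verification):

```python
import math

def distance_from_sphere(p, c, r):
    # Signed distance from point p to a sphere centered at c with radius r
    return math.dist(p, c) - r

center, radius = (0.0, 0.0, 0.0), 1.0

inside  = distance_from_sphere((0.5, 0.0, 0.0), center, radius)  # Case 1
surface = distance_from_sphere((1.0, 0.0, 0.0), center, radius)  # Case 2
outside = distance_from_sphere((3.0, 0.0, 0.0), center, radius)  # Case 3

print(inside, surface, outside)  # -0.5 0.0 2.0
```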
Ok, so we have a basic SDF, and we know how to evaluate it at any point in 3D space. How do we actually use this to render something?
Setting Up the Ray Marching Loop
In order to render our scene, we are going to use a technique called ray marching. At a high level, we will shoot out a bunch of imaginary rays from a virtual camera that is looking at our world. For each of these rays, we are going to "march" along the direction of the ray, and at each step, evaluate our SDF (the distance_from_sphere function that we wrote above). This will tell us: "from where I currently stand, how far am I from the closest point on the surface of our sphere?"
If you are familiar with ray tracing, you might wonder why we can't just directly compute the point of intersection between our ray and our sphere. For this simple scene, we actually could do this! However, as we will see towards the end of this tutorial, the real power of ray marching lies in its ability to render shapes where this exact point of intersection is not as obvious.
Now, our goal is to take steps along this ray until we are so close to an object that we can safely stop. But how should we move along the ray? One approach would be to take small, equally spaced steps. In GLSL, this might look like:
```glsl
const float step_size = 0.1;

for (int i = 0; i < NUMBER_OF_STEPS; ++i)
{
    // Assume that `ro` and `rd` are defined elsewhere
    // and represent the ray's origin and direction,
    // respectively
    vec3 current_position = ro + (i * step_size) * rd;

    // Some code to evaluate our SDF and determine whether or not
    // we've hit a surface based on our current position...
}
```

It turns out that a much better approach is to use distance-aided ray marching (sometimes called sphere tracing for reasons that we will see shortly). At each position along our ray, we evaluate our SDF, which, by definition, gives us the distance to the closest object in our scene. If we treat this distance as the radius of a sphere centered around our current position, we know that we can safely move forward along our ray by that amount without overshooting or missing any objects. These "bounding spheres" are represented by the four gray, stroked circles in the diagram below. Note that in practice, we might step many more times before terminating the ray.
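To make the stepping rule concrete, here is a CPU-side Python sketch of distance-aided marching against a single unit sphere at the origin (just an illustration; the full GLSL loop appears later in this tutorial):

```python
import math

def distance_from_sphere(p, c, r):
    return math.dist(p, c) - r

def march(ro, rd, number_of_steps=32, min_hit_distance=0.001):
    # Step along the ray by the SDF value: each step is guaranteed
    # not to overshoot the closest surface
    t = 0.0
    for _ in range(number_of_steps):
        p = tuple(o + t * d for o, d in zip(ro, rd))
        dist = distance_from_sphere(p, (0.0, 0.0, 0.0), 1.0)
        if dist < min_hit_distance:
            return t  # distance traveled to the hit point
        t += dist
    return None       # no hit within the step budget

# A camera at z = -5 looking straight down +Z: the near surface of the
# unit sphere is exactly 4 units away
print(march((0.0, 0.0, -5.0), (0.0, 0.0, 1.0)))  # 4.0
```

Notice how few iterations this takes compared to fixed-size steps: the first step alone covers 4 units, because nothing can possibly be closer than that.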
The diagram below shows this procedure in action. Our first scene will be much simpler than this, but for illustration purposes, it's helpful to visualize a more complex shape. Here, we imagine that we have a top-down view of our world (we are viewing the XZ-plane). The dark gray box towards the bottom of the image represents our virtual camera, and the red line emanating from its lens represents a single ray. The green, mountain-like structure surrounding the camera represents some complex object (we assume that we have an SDF that represents this shape) that we are trying to render. Each position where we evaluate our SDF is represented by a small, blue circle. These points would be evaluated in the order drawn below, bottom to top. Each red X represents the point on the surface that is closest to the corresponding blue circle along the ray.
There are two big optimizations that we can add to this. The first was already mentioned: if our SDF returns a very small distance (on the order of 0.001, for example), we will consider this a "hit" and break out of the loop early. The second involves rays that never pass through any objects in our scene. In our ray marching routine, we can keep track of the total distance traveled thus far and break out of the loop if we hit a certain threshold (say, 1000.0 units). So, our complete ray marching function might look something like:
```glsl
vec3 ray_march(in vec3 ro, in vec3 rd)
{
    float total_distance_traveled = 0.0;
    const int NUMBER_OF_STEPS = 32;
    const float MINIMUM_HIT_DISTANCE = 0.001;
    const float MAXIMUM_TRACE_DISTANCE = 1000.0;

    for (int i = 0; i < NUMBER_OF_STEPS; ++i)
    {
        // Calculate our current position along the ray
        vec3 current_position = ro + total_distance_traveled * rd;

        // We wrote this function earlier in the tutorial -
        // assume that the sphere is centered at the origin
        // and has unit radius
        float distance_to_closest = distance_from_sphere(current_position, vec3(0.0), 1.0);

        if (distance_to_closest < MINIMUM_HIT_DISTANCE) // hit
        {
            // We hit something! Return red for now
            return vec3(1.0, 0.0, 0.0);
        }

        if (total_distance_traveled > MAXIMUM_TRACE_DISTANCE) // miss
        {
            break;
        }

        // Accumulate the distance traveled thus far
        total_distance_traveled += distance_to_closest;
    }

    // If we get here, we didn't hit anything, so just
    // return a background color (black)
    return vec3(0.0);
}
```

So, we have a function that performs ray marching along a given ray. The last thing we need to figure out is, how do we generate our rays?
Generating Rays
We know that a ray has two components: an origin and a direction. We are going to imagine that each ray starts at the camera and passes through an imaginary "image plane" that sits somewhere in front of our camera. Recall that all of our code will execute inside of a single fragment shader, so there is a bit of a jump, where we somehow have to describe a 3D world from the 2D plane that our fragment shader executes over. A fragment shader executes once for each pixel that makes up our final, rendered image. At each pixel location, we can derive a UV-coordinate in the range [0.0, 1.0]. In TouchDesigner, this is passed from the vertex shader to the fragment shader as the variable vUV.st.
If you are following along with this tutorial and not using TouchDesigner, the specifics of where your UV-coordinates come from will be implementation defined. Regardless of what framework / toolkit you're using, you just need to be able to draw a full-screen quad and calculate UV-coordinates, either by normalizing the built-in variable gl_FragCoord.xy, or otherwise. Also, if you are rendering an image that is not square, you will need to take into account your aspect ratio when deriving your UV-coordinates. For now, I'll leave this as an exercise…
Next, let's remap our UV-coordinates from the range [0.0, 1.0] to [-1.0, 1.0]. We do this largely for convenience, since it places the pixel at the center of our image at (0.0, 0.0). Now, in our imaginary 3D space, let's say that the camera is 5 units away from the origin, in the negative Z direction. To keep things simple, we aren't going to deal with things like FOV, camera rotation, etc., but we will likely cover some of these topics in the next tutorial.
We already know that each ray originates from the camera. Its direction vector can be thought of as tracing a line from the camera's position through a point on the image plane. The following GLSL code snippet shows this process in action:
```glsl
// TouchDesigner provides this variable for us
vec2 uv = vUV.st * 2.0 - 1.0;

vec3 camera_position = vec3(0.0, 0.0, -5.0);
vec3 ro = camera_position;
vec3 rd = vec3(uv, 1.0);
```

The Z-coordinate of rd acts sort of like the camera's FOV, pushing the image plane closer to or further away from the camera (you can try adjusting this later on to see the effect). For now, we will leave this at 1.0.
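Here is the same remapping and ray setup traced through by hand in Python (a CPU-side sketch, purely illustrative):

```python
def generate_ray(u, v):
    # u and v play the role of vUV.st, in the range [0.0, 1.0]
    uv = (u * 2.0 - 1.0, v * 2.0 - 1.0)  # remap to [-1.0, 1.0]
    ro = (0.0, 0.0, -5.0)                # camera position
    rd = (uv[0], uv[1], 1.0)             # z = 1.0 acts like the FOV
    return ro, rd

# The center pixel's ray points straight down the +Z axis...
print(generate_ray(0.5, 0.5)[1])  # (0.0, 0.0, 1.0)

# ...while the bottom-left pixel's ray fans out toward the corner
print(generate_ray(0.0, 0.0)[1])  # (-1.0, -1.0, 1.0)
```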
So, since each fragment has unique UV-coordinates, each execution of our fragment shader will generate a unique ray. Keep in mind that because of the way shaders execute on your graphics card, all of these calculations will be happening in parallel!
Now that we have a way to generate a unique ray at each pixel, we are (finally) ready to ray march!
Putting It All Together
We now have all of the pieces we need to write our first, complete ray marching shader! At each pixel location, we will generate a ray, which we will use inside of our ray marching routine to map out our 3D environment. If you've been following along, the complete shader should look something like:
```glsl
out vec4 o_color;

float distance_from_sphere(in vec3 p, in vec3 c, float r)
{
    return length(p - c) - r;
}

vec3 ray_march(in vec3 ro, in vec3 rd)
{
    float total_distance_traveled = 0.0;
    const int NUMBER_OF_STEPS = 32;
    const float MINIMUM_HIT_DISTANCE = 0.001;
    const float MAXIMUM_TRACE_DISTANCE = 1000.0;

    for (int i = 0; i < NUMBER_OF_STEPS; ++i)
    {
        vec3 current_position = ro + total_distance_traveled * rd;

        float distance_to_closest = distance_from_sphere(current_position, vec3(0.0), 1.0);

        if (distance_to_closest < MINIMUM_HIT_DISTANCE)
        {
            return vec3(1.0, 0.0, 0.0);
        }

        if (total_distance_traveled > MAXIMUM_TRACE_DISTANCE)
        {
            break;
        }

        total_distance_traveled += distance_to_closest;
    }

    return vec3(0.0);
}

void main()
{
    vec2 uv = vUV.st * 2.0 - 1.0;

    vec3 camera_position = vec3(0.0, 0.0, -5.0);
    vec3 ro = camera_position;
    vec3 rd = vec3(uv, 1.0);

    vec3 shaded_color = ray_march(ro, rd);

    o_color = vec4(shaded_color, 1.0);
}
```

Shading
Now that we have our sphere, let's try to calculate some basic shading so that we can confirm that it is, indeed, a 3D surface! If you are familiar with diffuse / specular lighting, you probably know that we need normal vectors to calculate shading. For a sphere, the normal at any point on the surface can be calculated by simply normalizing the vector p - c, where as before, c is the sphere's center, and p is a point on the surface of the sphere. However, this method is limiting (it certainly doesn't extend to other shapes), and as we will see in the next section, when we deform our SDF, we need a more dynamic way of calculating normals.
The idea is, we can "nudge" our point p slightly in the positive and negative direction along each of the X/Y/Z axes, recalculate our SDF, and see how the values change. If you are familiar with vector calculus, we are essentially calculating the gradient of the distance field at p. In 2D, you might be familiar with the derivative, which gives the rate of change of a function with respect to its input. You might also have seen this visualized as the slope of the line that lies tangent to the function at some point. The gradient is just the extension of this to functions of multiple dimensions (our SDF has three dimensions, X/Y/Z). Normals should generally be unit vectors, so we'll normalize it as well. This method lets us calculate normals for arbitrarily complex objects, provided we have the appropriate SDF to represent the surface.
I realize this explanation is a bit "hand-wavy" at the moment! It took me a while to understand how and why this way of computing normals works. I recommend sitting with the code for a bit - after some reflection, it should start to make sense. I am working on a better way to explain / illustrate this and will hopefully update this post in the future with a more concrete explanation.
Before we calculate normals, let's quickly write a function that will allow us to accommodate more than one shape in preparation for future posts:
```glsl
float map_the_world(in vec3 p)
{
    float sphere_0 = distance_from_sphere(p, vec3(0.0), 1.0);

    // Later we might have sphere_1, sphere_2, cube_3, etc...

    return sphere_0;
}
```

And modify the ray marching loop to call this function instead of the distance_from_sphere function directly:
```glsl
...
float distance_to_closest = map_the_world(current_position);
...
```

Now, we will write our function to calculate normals, which will call map_the_world 6 times:
```glsl
vec3 calculate_normal(in vec3 p)
{
    const vec3 small_step = vec3(0.001, 0.0, 0.0);

    float gradient_x = map_the_world(p + small_step.xyy) - map_the_world(p - small_step.xyy);
    float gradient_y = map_the_world(p + small_step.yxy) - map_the_world(p - small_step.yxy);
    float gradient_z = map_the_world(p + small_step.yyx) - map_the_world(p - small_step.yyx);

    vec3 normal = vec3(gradient_x, gradient_y, gradient_z);

    return normalize(normal);
}
```

If you are unfamiliar with swizzling in GLSL, we are basically using some syntactic "sugar" to add and subtract the X-coordinate of the variable small_step (which is 0.001) to each of the X/Y/Z coordinates of our original point p in succession. So the value of gradient_y, for example, is calculated by adding and subtracting 0.001 from just the Y-coordinate of p, then calling map_the_world at these two new points. Now, back inside of our ray marching loop, if we hit an object, we can calculate the normal at that point. Let's visualize our normals as RGB colors to verify that the code is working as expected:
```glsl
...
if (distance_to_closest < MINIMUM_HIT_DISTANCE)
{
    vec3 normal = calculate_normal(current_position);

    // Remember, each component of the normal will be in
    // the range -1..1, so for the purposes of visualizing
    // it as an RGB color, let's remap it to the range
    // 0..1
    return normal * 0.5 + 0.5;
}
...
```
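As a sanity check, for the unit sphere the central-difference normal should agree with the analytic normal p / ||p||. Here is a CPU-side Python sketch of the same scheme (unit sphere at the origin assumed):

```python
import math

def map_the_world(p):
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0  # unit sphere at the origin

def calculate_normal(p, h=0.001):
    # Central differences along each axis approximate the gradient of the SDF
    grad = []
    for axis in range(3):
        step = [0.0, 0.0, 0.0]
        step[axis] = h
        plus  = map_the_world([a + s for a, s in zip(p, step)])
        minus = map_the_world([a - s for a, s in zip(p, step)])
        grad.append(plus - minus)
    length = math.sqrt(sum(g * g for g in grad))
    return [g / length for g in grad]

normal = calculate_normal((0.6, 0.8, 0.0))  # a point on the unit sphere
print([round(n, 3) for n in normal])  # [0.6, 0.8, 0.0]
```

The numeric result matches the analytic normal to well within the rounding shown, which is exactly the property that makes this trick trustworthy for shapes where no analytic normal exists.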
We can take this one step further and calculate some simple diffuse lighting:
```glsl
...
if (distance_to_closest < MINIMUM_HIT_DISTANCE)
{
    vec3 normal = calculate_normal(current_position);

    // For now, hard-code the light's position in our scene
    vec3 light_position = vec3(2.0, -5.0, 3.0);

    // Calculate the unit direction vector that points from
    // the point of intersection to the light source
    vec3 direction_to_light = normalize(light_position - current_position);

    float diffuse_intensity = max(0.0, dot(normal, direction_to_light));

    return vec3(1.0, 0.0, 0.0) * diffuse_intensity;
}
...
```

So, your complete shader should look something like:
```glsl
out vec4 o_color;

float distance_from_sphere(in vec3 p, in vec3 c, float r)
{
    return length(p - c) - r;
}

float map_the_world(in vec3 p)
{
    float sphere_0 = distance_from_sphere(p, vec3(0.0), 1.0);

    return sphere_0;
}

vec3 calculate_normal(in vec3 p)
{
    const vec3 small_step = vec3(0.001, 0.0, 0.0);

    float gradient_x = map_the_world(p + small_step.xyy) - map_the_world(p - small_step.xyy);
    float gradient_y = map_the_world(p + small_step.yxy) - map_the_world(p - small_step.yxy);
    float gradient_z = map_the_world(p + small_step.yyx) - map_the_world(p - small_step.yyx);

    vec3 normal = vec3(gradient_x, gradient_y, gradient_z);

    return normalize(normal);
}

vec3 ray_march(in vec3 ro, in vec3 rd)
{
    float total_distance_traveled = 0.0;
    const int NUMBER_OF_STEPS = 32;
    const float MINIMUM_HIT_DISTANCE = 0.001;
    const float MAXIMUM_TRACE_DISTANCE = 1000.0;

    for (int i = 0; i < NUMBER_OF_STEPS; ++i)
    {
        vec3 current_position = ro + total_distance_traveled * rd;

        float distance_to_closest = map_the_world(current_position);

        if (distance_to_closest < MINIMUM_HIT_DISTANCE)
        {
            vec3 normal = calculate_normal(current_position);

            vec3 light_position = vec3(2.0, -5.0, 3.0);

            vec3 direction_to_light = normalize(light_position - current_position);

            float diffuse_intensity = max(0.0, dot(normal, direction_to_light));

            return vec3(1.0, 0.0, 0.0) * diffuse_intensity;
        }

        if (total_distance_traveled > MAXIMUM_TRACE_DISTANCE)
        {
            break;
        }

        total_distance_traveled += distance_to_closest;
    }

    return vec3(0.0);
}

void main()
{
    vec2 uv = vUV.st * 2.0 - 1.0;

    vec3 camera_position = vec3(0.0, 0.0, -5.0);
    vec3 ro = camera_position;
    vec3 rd = vec3(uv, 1.0);

    vec3 shaded_color = ray_march(ro, rd);

    o_color = vec4(shaded_color, 1.0);
}
```
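The heart of this shading is a single clamped dot product. Here is a Python sketch of just the diffuse term, with made-up test values (CPU-side, purely illustrative):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse_intensity(normal, hit_point, light_position):
    # Lambertian shading: the cosine of the angle between the surface
    # normal and the direction toward the light, clamped to [0, 1]
    direction_to_light = normalize(tuple(l - p for l, p in zip(light_position, hit_point)))
    return max(0.0, sum(n * d for n, d in zip(normal, direction_to_light)))

# A surface point facing the camera, with the light directly in front of it
print(diffuse_intensity((0.0, 0.0, -1.0), (0.0, 0.0, -1.0), (0.0, 0.0, -6.0)))  # 1.0

# The same point with the light directly behind it receives no diffuse light
print(diffuse_intensity((0.0, 0.0, -1.0), (0.0, 0.0, -1.0), (0.0, 0.0, 4.0)))   # 0.0
```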
Distorting the Distance Function
Now, for the cool part! Once we have a basic ray marching setup, we can "nudge" or perturb our distance functions to create more interesting shapes. For example, we can add a sinusoidal distortion to our sphere's SDF by modifying the map_the_world function like so:
```glsl
float map_the_world(in vec3 p)
{
    float displacement = sin(5.0 * p.x) * sin(5.0 * p.y) * sin(5.0 * p.z) * 0.25;
    float sphere_0 = distance_from_sphere(p, vec3(0.0), 1.0);

    return sphere_0 + displacement;
}
```
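Numerically, the sin product stays within [-0.25, 0.25], so the surface gets pushed in or out by at most a quarter unit. A quick Python check of the displaced function (CPU-side sketch, sample points chosen arbitrarily):

```python
import math

def map_the_world(p):
    x, y, z = p
    displacement = math.sin(5.0 * x) * math.sin(5.0 * y) * math.sin(5.0 * z) * 0.25
    sphere_0 = math.dist(p, (0.0, 0.0, 0.0)) - 1.0
    return sphere_0 + displacement

# Sample near the original surface: the displaced value never strays
# more than 0.25 from the undistorted sphere's distance
for x in (0.8, 1.0, 1.2):
    p = (x, 0.4, 0.3)
    plain = math.dist(p, (0.0, 0.0, 0.0)) - 1.0
    assert abs(map_the_world(p) - plain) <= 0.25
```

Note that after a displacement like this, map_the_world is no longer a true signed distance function; in practice the marching loop still converges as long as the distortion stays small.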
You can play around with this effect: try using other combinations of sin and cos based on the coordinates of the position vector p to generate a displacement value. If you are working with a toolkit that implements GLSL noise functions (or you want to implement them yourself), you can try perturbing p with noise as well.
Since we calculate our normals dynamically, the lighting should still be correct, regardless of how we deform our shape!
Conclusion
Anyways, I hope this helps you get started with ray marching! I realize that the end results aren't quite as stunning as you might've hoped, but now that we have a solid framework to build upon, it's not too much more work to add things like:
- Multiple shapes
- CSG operations (unions, intersections, subtractions, etc.)
- Animated cameras
- Shadows and/or ambient occlusion
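As a small taste of the first two items, combining shapes amounts to extending map_the_world, and the union of several SDFs is just the min of their distances. A Python sketch (the sphere positions here are made up for illustration):

```python
import math

def distance_from_sphere(p, c, r):
    return math.dist(p, c) - r

def map_the_world(p):
    # The distance to the closest of several objects is the minimum
    # of their individual SDFs - a CSG union
    sphere_0 = distance_from_sphere(p, (0.0, 0.0, 0.0), 1.0)
    sphere_1 = distance_from_sphere(p, (3.0, 0.0, 0.0), 1.0)
    return min(sphere_0, sphere_1)

print(map_the_world((1.5, 0.0, 0.0)))  # 0.5 - halfway between the spheres
print(map_the_world((3.0, 0.0, 0.0)))  # -1.0 - at the center of sphere_1
```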
These are all things that I hope to cover in future posts! If you want to start exploring on your own, I recommend taking a look at Inigo Quilez's website for inspiration.
Source: https://michaelwalczyk.com/blog-ray-marching.html