Laser beams in darkness. Photo by Clyde He on Unsplash

Ray Tracer on Web

First things first: a demonstration. Try clicking the Run button below. It's going to take a little while until the result pops up, so please be patient :)

(NOTE: Web Worker must be supported by your browser.)


It didn't load the image but generated it on your device using the power of WebAssembly. Because it involves some randomness, it produces a slightly different image every time you run it.

It's cool, isn't it?

I'm first going to touch on what a ray tracer / ray tracing is for those who are not familiar with it, then look at the details of how I ran it in the browser.

The source code is available on GitHub, and it's also published as an npm package.

Contents

- What's ray tracing?
- Ray tracer + WebAssembly
- Why ray tracer?
- References

What's ray tracing?

Ray tracing is a technique used to render a scene. Pay close attention to the shadows and the reflections on water, walls, floors, and glass in this video.

You may find the scene rendered by ray tracing more natural.

While ray tracing is computationally more expensive than rasterisation, the widely used alternative, it's been increasing its presence in movie and game rendering with the help of the latest advances in GPU technology.

Programmatically, it calculates the colour of each pixel in the image, from the top left to the bottom right, by tracing "rays" from a "camera" into the "scene".
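In Rust, the outer loop looks roughly like this. This is a minimal sketch, not my actual implementation; trace here is just a stub that produces a gradient, where the real version would cast a ray into the scene:

struct Colour { r: u8, g: u8, b: u8 }

// Stub for illustration: the real version traces a ray per pixel.
fn trace(x: u32, y: u32, width: u32, height: u32) -> Colour {
    Colour { r: (255 * x / width) as u8, g: (255 * y / height) as u8, b: 64 }
}

fn render(width: u32, height: u32) -> Vec<u8> {
    let mut pixels = Vec::with_capacity((width * height * 4) as usize);
    for y in 0..height {        // top to bottom
        for x in 0..width {     // left to right
            let colour = trace(x, y, width, height);
            // RGBA order, one byte per channel
            pixels.extend_from_slice(&[colour.r, colour.g, colour.b, 255]);
        }
    }
    pixels
}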

Scene

A scene is the world where all the objects, for example cars, lights, buildings, and so on, are placed. My ray tracer only has simple spheres as they're easy to model, but you can place literally any object at any position if you manage to model it.
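A sphere-only scene can be modelled with very little code. The following is a hypothetical sketch (my actual data structures may differ):

struct Vec3 { x: f64, y: f64, z: f64 }

struct Sphere {
    centre: Vec3,
    radius: f64,
}

// The whole scene is just a list of objects to test rays against.
type Scene = Vec<Sphere>;

fn main() {
    let scene: Scene = vec![
        // a small ball floating in front of the camera
        Sphere { centre: Vec3 { x: 0.0, y: 0.0, z: -1.0 }, radius: 0.5 },
        // a huge sphere acting as the "ground"
        Sphere { centre: Vec3 { x: 0.0, y: -100.5, z: -1.0 }, radius: 100.0 },
    ];
    println!("{} objects in the scene", scene.len());
}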

You can add other factors to objects to make them look more real. One such factor is the material the object is made of. Depending on the material, the object behaves differently when a ray hits it. The materials supported by my version of the ray tracer are:

Lambertian casts a ray in a random direction, Metal reflects, Dielectric refracts
Each material behaves differently when a ray hits it
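These three behaviours could be captured with an enum like the one below. This is a sketch for illustration, not my exact implementation:

enum Material {
    Lambertian, // diffuse: scatters the ray in a random direction
    Metal,      // reflects the ray around the surface normal
    Dielectric, // refracts the ray, like glass or water
}

fn behaviour(material: &Material) -> &'static str {
    match material {
        Material::Lambertian => "casts a ray in a random direction",
        Material::Metal => "reflects the incoming ray",
        Material::Dielectric => "refracts the incoming ray",
    }
}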

Once you define a scene, the next thing you need to do is to specify where you look at the scene from.

Camera

A camera is your eyes. All the rays start their journey from the camera. It's defined with two main parameters: where it's positioned and which direction it faces.

In addition to them, you also need to configure (1) a viewport, which has the same aspect ratio as the final image (think of it as a window you look at the scene through), and (2) the distance between the camera and the viewport, which is called the "focal length".
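Putting those together, a camera setup could look like the sketch below. The viewport height of 2.0 and focal length of 1.0 are the classic defaults from the Ray Tracing in One Weekend book, not necessarily the values my ray tracer uses:

struct Camera {
    origin: (f64, f64, f64), // where the camera sits in the scene
    viewport_width: f64,
    viewport_height: f64,
    focal_length: f64,       // distance between the camera and the viewport
}

fn camera_for(image_width: u32, image_height: u32) -> Camera {
    let aspect_ratio = image_width as f64 / image_height as f64;
    let viewport_height = 2.0;
    Camera {
        origin: (0.0, 0.0, 0.0),
        // the viewport keeps the same aspect ratio as the final image
        viewport_width: aspect_ratio * viewport_height,
        viewport_height,
        focal_length: 1.0,
    }
}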

Once the camera is positioned, it's ready to start casting rays.

Ray

A ray is cast from the camera in the direction of the pixel it's currently working on. The cast ray may or may not hit objects placed in the scene. If it hits something, it first records the colour of the object's surface. Then, from the hit point, it produces another ray whose direction depends on the material of the object, as explained in the Scene section.

The tracing of the ray stops either (1) when the ray goes too deep into the shadows (i.e. it has bounced too many times) or (2) when it doesn't hit anything anymore, which means it has reached the ambient light (or the sky, so to speak). The final colour of the pixel is calculated at that point.

A ray is cast from the camera into the scene through the viewport in between, calculating the final colour of the pixel
Tracing the journey of the ray
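The two stopping conditions map naturally to a recursive function. Below is a minimal, compilable sketch; the types, the hit_scene stub, and the sky colour are all placeholders, not my actual implementation:

#[derive(Clone, Copy)]
struct Colour(f64, f64, f64);

impl std::ops::Mul for Colour {
    type Output = Colour;
    fn mul(self, rhs: Colour) -> Colour {
        Colour(self.0 * rhs.0, self.1 * rhs.1, self.2 * rhs.2)
    }
}

struct Ray; // origin and direction omitted in this sketch

struct Hit { colour: Colour }

// Stand-in for the real intersection test: returns what was hit and
// the new ray scattered off the surface, if anything was hit at all.
fn hit_scene(_ray: &Ray) -> Option<(Hit, Ray)> { None }

const BLACK: Colour = Colour(0.0, 0.0, 0.0);
const SKY: Colour = Colour(0.5, 0.7, 1.0); // a blueish ambient light

fn ray_colour(ray: &Ray, depth: u32) -> Colour {
    // (1) too many bounces: the ray went deep into the shadows
    if depth == 0 {
        return BLACK;
    }
    match hit_scene(ray) {
        // record the surface colour, then follow the scattered ray
        Some((hit, scattered)) => hit.colour * ray_colour(&scattered, depth - 1),
        // (2) nothing was hit: the ray reached the ambient light (sky)
        None => SKY,
    }
}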

That's a quick explanation of what a ray tracer is. See the References section to continue reading about ray tracing if you're interested.

Now, let's talk about what I did to run it in a web browser.

Ray tracer + WebAssembly

The first thing I had to do was make a decision on how JavaScript and Rust should communicate with each other; more precisely, how to pass images from Rust to JavaScript.

Canvas API

To draw an image on the screen, I used the <canvas> HTML element. Its putImageData() API receives the input image as an ImageData object, whose pixel data is given as a Uint8ClampedArray, an array containing the pixel data in RGBA order.

A Uint8ClampedArray is used to build an ImageData, which is then passed to the canvas HTML element.

Long story short, the goal is to return a properly serialised Uint8ClampedArray from the Rust program.

wasm-bindgen

wasm-bindgen is a library that bridges the gap between JavaScript and WebAssembly modules written in Rust.

To make the render function callable from JavaScript and have it return a Uint8ClampedArray, all I needed to do was add the #[wasm_bindgen] attribute and modify the return type:

use js_sys::Uint8ClampedArray;
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn render(width: u16, height: u16) -> Uint8ClampedArray {

  // ...set up a scene and a camera

  let mut img = vec![];

  // ...build the image as RGBA bytes

  // convert from &[u8] to Uint8ClampedArray using the Into trait
  img[..].into()
}

This is basically everything I needed to run the ray tracer in the browser, but there was one issue when I tried it: the browser got stuck while the ray tracer was running. What I was missing was this advice:

The main thread in a browser cannot block. This means that if you run WebAssembly code on the main thread you can never block, meaning you can't do so much as acquire a mutex. This is an extremely difficult limitation to work with on the web, although one workaround is to run wasm exclusively in web workers and run JS on the main thread. It is possible to run the same wasm across all threads, but you need to be extremely vigilant about synchronization with the main thread. —— Parallel Raytracing#caveats

Because ray tracing takes some time and blocks the thread it runs on, running it on the main thread is a no-no; thus came my first encounter with Web Worker...!

Web Worker

Despite its unfriendly interface (Comlink can help with that), using Web Worker is not super complicated.

The main thread needs to (1) instantiate a worker specifying the path to the worker file, (2) invoke it by passing arguments via the postMessage API, and (3) listen for the result by registering an onmessage callback:

// (1) instantiate the worker
const w = new Worker("../path/to/the/worker/file");
// (2) pass the image size to the worker
w.postMessage([WIDTH, HEIGHT]);
// (3) receive the rendered pixels and draw them on the canvas
w.onmessage = ({ data: rendered }) => {
  const imageData = new ImageData(rendered, WIDTH);
  canvas.getContext("2d").putImageData(imageData, 0, 0);
};

The worker, in return, defines an onmessage handler, calls the wasm function, and passes back the result using the postMessage API:

onmessage = async function (e) {
  // lazily load the wasm module inside the worker
  const { render } = await import("wasm-raytracer");
  // e.data is the [WIDTH, HEIGHT] array sent from the main thread
  const rendered = render(e.data[0], e.data[1]);
  // send the Uint8ClampedArray back to the main thread
  postMessage(rendered);
};

This was all I needed to run the ray tracer on a dedicated thread.

Before closing this section, let me address the performance issue I tried to solve.

Multi-threading

Even though it's now separated from the main thread, running on a single thread still takes a bit of time. To improve performance, I looked at options to run it on multiple threads.

The calculation of each pixel can be done in isolation from the others, so fork-join parallelism, which forks to start new threads and joins to wait for them to finish, can be applied using libraries such as Rayon.
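As an illustration, here's roughly how the pixel loop could be parallelised with Rayon. This is a sketch under my own naming, not my actual code; trace is a placeholder for the real per-pixel work:

use rayon::prelude::*;

fn render_parallel(width: u32, height: u32) -> Vec<u8> {
    (0..height)
        .into_par_iter()          // fork: distribute rows across threads
        .flat_map_iter(|y| {
            (0..width).flat_map(move |x| {
                let (r, g, b) = trace(x, y);
                [r, g, b, 255]    // RGBA
            })
        })
        .collect()                // join: gather the rows back in order
}

fn trace(_x: u32, _y: u32) -> (u8, u8, u8) {
    (0, 0, 0) // stub; the real version casts a ray into the scene
}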

Even though it wasn't difficult to turn the program into a multi-threaded version, there were a couple of challenges to running it in the browser environment (threads in WebAssembly build on SharedArrayBuffer, which at the time required special browser support and server configuration).

For those reasons, I gave up on multi-threading for now, hoping I can introduce it in the near future.

Why ray tracer?

This is kind of a detour in my journey of the CPU experiment, aimed at understanding what a ray tracer is and how to implement it (not to mention learning Rust). The next challenge is to migrate the ray tracer program to the Hack/Jack system I built while reading the book I introduced in the last section of The Power of NAND post. Bye for now :)

References