
How to Create an Interactive Shader Editor

Using React Three Fiber, ThreeJS, and Codemirror


In this tutorial, we will create a fragment shader editor, similar to ShaderToy, with React Three Fiber. We will use the awesome codemirror package and a few other neat tools to make our lives easier. Along the way, we are also going to learn a bunch of things about shaders and how they work, so let's get started.

What is Shadertoy?

Shadertoy is a website where people can create and share shaders. It allows users to create fullscreen shaders and display them in a browser. It is a great place to learn about shaders and to see what is possible to do with them.


End Result

Here is what we are going to build: A shader editor complete with texture upload and everything, where you can write your own shaders and see them in action. You can even copy-paste in other people's shaders from ShaderToy, and for the most part, they should still work.

Code Editor

The first thing we need to do is create a code editor. We will use the awesome codemirror package for this. Because the whole project is going to use React, we can make our lives much easier with the @uiw/react-codemirror package, a wrapper around the codemirror project that makes it very easy to integrate into a React app. First, we have to install it.

npm i @uiw/react-codemirror

Then, we can create a simple editor component like this:

CodeEditor.tsx
import ReactCodeMirror from "@uiw/react-codemirror";
import { useCallback, useState } from "react";
 
export function CodeEditor({ code }: { code: string }) {
  const [value, setValue] = useState(code);
  const onChange = useCallback((val: string) => {
    setValue(val);
  }, []);
 
  return (
    <ReactCodeMirror
      value={value}
      onChange={onChange}
    />
  );
}

The component utilizes useState to keep track of the value of the text. It also makes use of useCallback to create a memoized version of the onChange function, so it doesn't get recreated on every re-render.

Next, we need to add some styles and syntax highlighting to make the editor look a bit nicer. The @uiw/react-codemirror package is very modular, which means it doesn't come with all the language support and themes out of the box. We have to install them ourselves and add them as extensions.

For the theme, we will use @uiw/codemirror-theme-vscode. If you want to use a different theme, you can find a list of available themes in the docs.

Because we will be writing shader code for WebGL, our editor should support GLSL, the OpenGL Shading Language. Unfortunately, there isn't a GLSL package available for codemirror, but we can use @codemirror/lang-cpp, which is close enough for our purposes because GLSL is a C-like language.

We install both the theme and the language support like this:

npm i @uiw/codemirror-theme-vscode @codemirror/lang-cpp

And then add them to the editor.

CodeEditor.tsx
import ReactCodeMirror from "@uiw/react-codemirror";
import { useCallback, useState } from "react";
 
// adding the new imports //
import { cppLanguage } from "@codemirror/lang-cpp"; 
import { vscodeDark } from "@uiw/codemirror-theme-vscode"; 
 
 
export function CodeEditor({ code }: { code: string }) {
  const [value, setValue] = useState(code);
  const onChange = useCallback((val: string) => {
    setValue(val);
  }, []);
 
  return (
    <ReactCodeMirror
      value={value}
      onChange={onChange}
 
      // add the new theme and language support //
      theme={vscodeDark}
      extensions={[cppLanguage]}
    />
  );
}

And that is already it. We now have a code editor that we can use to edit GLSL code, with proper syntax highlighting and a nice theme, and we can use the editor's state as the input for our shader program.

Fullscreen Shaders in React Three Fiber (r3f)

Now that we have a code editor where we can edit our shader code, we need some way of rendering it as a shader and displaying the results on the screen.

Writing shaders is a form of art, and there are lots and lots of things to learn about shaders.

For now, let's just say that a shader is a program that runs on the GPU and tells the GPU how to display "stuff" on the screen. There are many different types of shaders, but for this project, there are only two types that we care about: Vertex and Fragment Shaders.

Vertex Shaders

Vertex shaders are responsible for transforming the vertices of a mesh. They are run once for each vertex.

What is a Mesh and a vertex?

A mesh is a collection of points that define a three-dimensional structure. You can think of a mesh sort of like a "connect the dots" picture—but in three dimensions!

In graphics pipeline jargon, each "dot" is called a vertex (plural: vertices), and connecting these dots leads to a three-dimensional object made up of triangles.

It is quite common to have data attached to vertices as well, which helps give color or calculate the lighting for the object.

Vertex shaders take the vertex positions, which are 3D positions in "world space" and transform them into positions in "clip space". Clip space is a special coordinate system that is used to determine which vertices are visible on the screen and which are not.

There is usually a lot of matrix math involved, but for our Shadertoy editor we are in luck: we don't need any of it. We are going to use a single rectangle that covers the whole screen as our mesh, and its vertices are already exactly where we want them. This way, we can get away with a very simple vertex shader.

The only vertex shader we are going to use is the simplest imaginable:

shaders/vertexShader.glsl
void main() {
    gl_Position = vec4(position, 1.0);
}

This vertex shader just passes the coordinates of the vertices through. No transformations and no matrix math are necessary. We will see why we can get away with this in a moment. Note that gl_Position is a built-in GLSL output variable and the position attribute is supplied by Three.js; we didn't have to define either of them ourselves. The main function is special as well, because it is the entry point for shader programs in WebGL.

So something like this wouldn't work!

shaders/vertexShader.glsl
void myFunction() {
    myPosition = vec4(position, 1.0);
}
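For comparison, a typical Three.js vertex shader does involve the matrix math we just skipped. A sketch of what one usually looks like (projectionMatrix, modelViewMatrix, and position are all injected by Three.js automatically):

```glsl
// A typical Three.js vertex shader: transform the vertex from model
// space into clip space using the camera matrices.
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```

Our fullscreen quad gets to skip all of this because its vertex coordinates are already valid clip-space coordinates.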

Fragment Shaders

Most of the magic in a Shadertoy program comes from the fragment shader. Fragment shaders are responsible for coloring the pixels on the screen; the shader is run once for each pixel that is occupied by the mesh.

What about rasterization?

In the graphics rendering pipeline, there is an entire step between the vertex shader and the fragment shader, called rasterization, which turns three connected vertices into a triangle and then determines which pixels that triangle occupies on the screen. For now, we won't worry about the details.

The simplest fragment shader that we can write is one that colors all the pixels of the mesh the same. Say red. It looks like this:

shaders/fragmentShader.glsl
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

Notice how, again, we have a gl_* variable that is pre-defined by the WebGL environment. Whatever we store in that variable will be interpreted as the color for that pixel once the main function is done running.

If we had a mesh that covered the whole screen, we could use a fragment shader to color each pixel on the screen based on whatever logic we choose.

This is the whole idea and magic behind something like Shadertoy. Full-screen fragment shaders can be used to create any effect we want; the only limits are our imagination and our skill in writing shaders. It is not the most performant way of doing 3D graphics, but it is a creative playground and a good way to practice the shader-writing skills used in a more traditional 3D rendering pipeline.
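As a quick taste of what a fullscreen fragment shader can do, here is a minimal sketch that colors every pixel based on its position, using the built-in gl_FragCoord variable. The hard-coded 800.0 is just a placeholder screen size for illustration; we will replace it with a proper resolution uniform later.

```glsl
void main() {
    // gl_FragCoord holds the pixel's position in window coordinates.
    // Dividing by a rough screen size maps it into the 0..1 range.
    vec2 uv = gl_FragCoord.xy / 800.0;
    // Red increases left to right, green bottom to top.
    gl_FragColor = vec4(uv.x, uv.y, 0.0, 1.0);
}
```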

The React Three Fiber Setup

In the world of r3f, we need to create a mesh with a material and a geometry that we can display.

For the geometry, we go with the simplest thing possible that can cover the full screen: a single rectangle. In r3f, a rectangle can be created by using a <planeGeometry />.

For the material, we can use the r3f <shaderMaterial />, which passes its arguments to ThreeJS's ShaderMaterial class. This material takes a vertex shader and a fragment shader as arguments and runs them on the GPU for us. There is also a shaderMaterial helper from the drei library that we could use, but for now, we will stick with the simple r3f one.

The last trick here is to use an OrthographicCamera to make sure that the plane is covering the whole screen. In ThreeJS, there are different types of cameras, and the default is a PerspectiveCamera. As the name suggests, a PerspectiveCamera will apply perspective to the objects in our scene. For our purposes, we want the plane to be always the same size, no matter how far or close it is to the camera, so we use the OrthographicCamera instead.

For any of this to work, we need to install the @react-three/fiber and three packages first, though.

npm i @react-three/fiber three

When we create the Canvas, we also need to make sure that the camera's view exactly covers the plane. We can do this by setting the orthographic prop of the Canvas to true and then setting the left, right, top, and bottom frustum values to -1, 1, 1, and -1 respectively. This makes sure that the 2-by-2 plane we are creating covers the whole screen. We also position the camera 1 unit away so that the plane is not clipping into the camera.

Putting all of this together, we get something like this:

FullCanvasShader.tsx
import { Canvas } from "@react-three/fiber";
import fragmentShader from "./shaders/fragmentShader.glsl";
import vertexShader from "./shaders/vertexShader.glsl";
 
function FullCanvasShaderMesh() {
  return (
    <mesh>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
export function FullCanvasShader() {
  return (
    <Canvas
      orthographic={true}
      camera={{
        left: -1,
        right: 1,
        top: 1,
        bottom: -1,
        near: 0.1,
        far: 1000,
        position: [0, 0, 1],
      }}
    >
      <FullCanvasShaderMesh />
    </Canvas>
  );
}

Adding Uniforms

To make more interesting shaders possible, we have to supply some uniforms to our shader program so that we can use them later.

What the heck are uniforms?

Uniforms are a way to pass data from the CPU to the GPU. Often, you will see uniforms containing things like time, the screen resolution, or the mouse position.

For now, we will use the same naming convention for our uniforms as the glslCanvas editor. This way, we can copy shaders from there and from tutorials we find online and paste them into our editor without any modifications. glslCanvas specifies a few default uniforms:

  • u_time: a float containing the elapsed time in seconds
  • u_resolution: a vec2 containing the dimensions of the canvas
  • u_mouse: a vec2 containing the position of the mouse
  • u_tex0, u_tex1, …: sampler2D textures, one per loaded image

Adding these will help to show some of the concepts of how to add uniforms in general.

What is a vec2, vec3, vec4 and sampler2D?

These are data types in GLSL. vec2 is a vector with two components, vec3 has three, and so on. Vectors are used to store things like positions and colors in GLSL. sampler2D is a type that is used to represent a texture in GLSL.
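A small sketch of how these vector types behave in practice. Components can be accessed as .x/.y/.z/.w or, equivalently, .r/.g/.b/.a, and picked out in any order ("swizzling"):

```glsl
void main() {
    vec2 a = vec2(1.0, 2.0);
    vec3 color = vec3(a, 0.5);   // vectors can be built from smaller ones
    float red = color.r;         // the same component as color.x
    vec2 gr = color.gr;          // swizzling: pick components in any order
    gl_FragColor = vec4(color, 1.0);
}
```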

Shadertoy Uniform Support

Later on, we will add more uniforms to make our editor compatible with ShaderToy as well; some of them are similar to those of the glslCanvas editor but have different names. Others have different data formats as well, like iMouse, which is a vec4 for ShaderToy and a vec2 for glslCanvas.

In the end, setting up all of this should allow us to use shaders from both sources without any modifications and play around with them. This way, we can copy/paste from a lot of tutorials, like the Book of Shaders and from Shadertoy examples as well!

First, we will add the time as a uniform to our shader program. This will allow us to animate things within our shaders as time passes. Time is a simple float measuring the seconds since the program started, updated every frame. Luckily, r3f gives us a clock that we can use to get the elapsed time; we just have to pass it along to the uniforms.

In the glslCanvas editor, the time is passed as u_time, and we will use the same name for our uniform 😊

FullCanvasShader.tsx
// importing the useFrame hook from the @react-three/fiber package //
import { Canvas, useFrame } from "@react-three/fiber"; 
import { useRef } from "react";
import { ShaderMaterial } from "three";
import fragmentShader from "./shaders/fragmentShader.glsl";
import vertexShader from "./shaders/vertexShader.glsl";
 
function FullCanvasShaderMesh() {
  const shaderRef = useRef<ShaderMaterial>(null!);
 
  // using the useFrame hook to update the uniform every frame //
  useFrame(({ clock }) => { 
    if (shaderRef.current) { 
      // and updating the uniform value in the shader with the elapsed time //
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime(); 
    } 
  }); 
 
  return (
    <mesh>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{ 
          // adding time as a uniform value, a simple float  //
          u_time: { value: 0 }, 
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
// ... rest of code remains the same

Now that we have u_time, let's also add u_resolution to our program! Resolution is a vec2 that contains the width and height of the screen. We can use this to make sure that our shader is always the right size, no matter the screen resolution. It is a very common uniform to have in a shader program.

FullCanvasShader.tsx
// Import the useThree hook from the library //
import { Canvas, useFrame, useThree } from "@react-three/fiber"; 
import { useRef } from "react";
import { ShaderMaterial, Vector2 } from "three";
import fragmentShader from "./shaders/fragmentShader.glsl";
import vertexShader from "./shaders/vertexShader.glsl";
 
function FullCanvasShaderMesh() {
  const shaderRef = useRef<ShaderMaterial>(null!);
  // Get the size of the canvas to pass to the shader as the resolution //
  const { size } = useThree(); 
 
  useFrame(({ clock }) => {
    if (shaderRef.current) {
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime();
    }
  });
 
  return (
    <mesh>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{
          u_time: { value: 0 },
          // Add the new uniform for the resolution //
          u_resolution: { value: new Vector2(size.width, size.height) }, 
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
// ... rest of code remains the same
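With u_time and u_resolution in place, a fragment shader like this minimal sketch (which you could paste into the editor) draws a resolution-independent gradient that pulses over time:

```glsl
uniform float u_time;
uniform vec2 u_resolution;

void main() {
    // Normalize the pixel coordinates to the 0..1 range,
    // independent of the actual canvas size.
    vec2 uv = gl_FragCoord.xy / u_resolution;
    // Oscillate the blue channel between 0 and 1 as time passes.
    vec3 color = vec3(uv, 0.5 + 0.5 * sin(u_time));
    gl_FragColor = vec4(color, 1.0);
}
```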

Let's also add u_mouse! This is a vec2 that contains the position of the mouse. We can use this to make our shaders interactive. The mouse position we obtain from the pointer in r3f is centered and normalized, which means that it goes from -1 to 1 in both the x and y directions, where 0,0 is the center of the canvas.

FullCanvasShader.tsx
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useRef } from "react";
import { ShaderMaterial, Vector2 } from "three";
import fragmentShader from "./shaders/fragmentShader.glsl";
import vertexShader from "./shaders/vertexShader.glsl";
 
function FullCanvasShaderMesh() {
  const shaderRef = useRef<ShaderMaterial>(null!);
  // get the size of the canvas; the pointer comes from the frame state below //
  const { size } = useThree();
 
  useFrame(({ clock, pointer }) => {
    if (shaderRef.current) {
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime();
      // Update the mouse uniform with the pointer value //
      shaderRef.current.uniforms.u_mouse.value.copy(pointer); 
    }
  });
 
  return (
    <mesh>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{
          u_time: { value: 0 },
          u_resolution: { value: new Vector2(size.width, size.height) },
          // Add a new uniform for mouse position //
          u_mouse: { value: new Vector2(0, 0) }, 
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
// rest of code stays the same!
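To see u_mouse in action, here is a minimal sketch that brightens the pixels near the cursor. Because the pointer is in the -1..1 range and gl_FragCoord is in pixels, both are remapped into the same 0..1 space first (aspect ratio is ignored to keep the example short):

```glsl
uniform vec2 u_mouse;
uniform vec2 u_resolution;

void main() {
    // Remap both coordinates into the same 0..1 space.
    vec2 uv = gl_FragCoord.xy / u_resolution;
    vec2 mouse = u_mouse * 0.5 + 0.5;
    // Pixels close to the mouse are bright, fading out with distance.
    float d = distance(uv, mouse);
    gl_FragColor = vec4(vec3(1.0 - smoothstep(0.0, 0.4, d)), 1.0);
}
```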

And finally, let's implement u_tex. This is going to be the trickiest uniform to implement, because we need a way to upload images and load them into textures for the shader program. Luckily, r3f has some helpers for all of this.

Let's first update our FullCanvasShader file to allow for textures to be passed in.

FullCanvasShader.tsx
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useRef } from "react";
// adding the Texture import from Three for the type of the textures //
import { ShaderMaterial, Texture, Vector2 } from "three"; 
import fragmentShader from "./shaders/fragmentShader.glsl";
import vertexShader from "./shaders/vertexShader.glsl";
 
// Add a new "textures" prop to the FullCanvasShader //
export function FullCanvasShader({ textures }: { textures: Texture[] }) {  
  return (
    <Canvas
      orthographic
      camera={{
        left: -1,
        right: 1,
        top: 1,
        bottom: -1,
        near: 0.1,
        far: 1000,
        position: [0, 0, 1],
      }}
      className="w-full h-full"
    >
      {/* We use a random key to force re-mounting on changes to the textures */}
      {/* and pass along the textures prop to the FullCanvasShaderMesh */}
      <FullCanvasShaderMesh key={Math.random()} textures={textures} />
    </Canvas>
  );
}
 
// Add the new "textures" prop to the FullCanvasShaderMesh as well //
export function FullCanvasShaderMesh({ textures }: { textures: Texture[] }) {
  const shaderRef = useRef<ShaderMaterial>(null!);
  const { size } = useThree();
 
  useFrame(({ clock, pointer }) => {
    if (shaderRef.current) {
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime();
      shaderRef.current.uniforms.u_mouse.value.copy(pointer);
    }
  });
 
  // Build texture uniforms as an object with the right keys (u_tex0, u_tex1, etc.) //
  const textureUniforms = textures.reduce((acc, texture, index) => {  
    acc[`u_tex${index}`] = { value: texture };  
    return acc;  
  }, {} as { [key: string]: { value: Texture } });  
 
  return (
    <mesh>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{
          u_time: { value: 0 },
          u_resolution: { value: new Vector2(size.width, size.height) },
          // when working with textures, devicePixelRatio starts to play a role //
          // so we pass it along as a uniform as well //
          u_pixelRatio: { value: window.devicePixelRatio }, 
          u_mouse: { value: new Vector2(0, 0) },
          // spread the textureUniforms object to add each of them as a uniform  //
          ...textureUniforms, 
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
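Once a texture has been uploaded, sampling it in a shader is straightforward. Here is a minimal sketch that stretches the first uploaded texture (u_tex0) across the whole canvas:

```glsl
uniform vec2 u_resolution;
uniform sampler2D u_tex0;

void main() {
    // Use the normalized pixel position as the texture coordinate.
    vec2 uv = gl_FragCoord.xy / u_resolution;
    gl_FragColor = texture2D(u_tex0, uv);
}
```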

Then, we add a new file to handle the texture uploading and everything. We split this one into two components, one for the UI and one for the state management, so that we can combine them later with the shader program.

We'll be using the TextureLoader from ThreeJS to load the textures and then pass them along to the shader program.

ShaderWithTextureUpload.tsx
import React, { ChangeEvent, useCallback, useState } from "react";
import { Texture, TextureLoader } from "three";
import { FullCanvasShader } from "./FullCanvasShader";
import { TextureUploadMenu } from "./TextureUploadMenu";
 
export function ShaderWithTextureUpload() {
  const [textures, setTextures] = useState<Texture[]>([]);
  const [previewUrls, setPreviewUrls] = useState<string[]>([]);
 
  // Append new textures to the existing ones.
  const handleFileChange = useCallback(
    (event: ChangeEvent<HTMLInputElement>) => {
      const files = event.target.files;
      if (!files) return;
 
      const loader = new TextureLoader();
      const fileArray = Array.from(files);
 
      fileArray.forEach((file) => {
        const url = URL.createObjectURL(file);
        loader.load(url, (texture) => {
          setTextures((prev) => [...prev, texture]);
          setPreviewUrls((prev) => [...prev, url]);
        });
      });
    },
    []
  );
 
  // Delete texture at index.
  const handleDelete = useCallback((index: number) => {
    setTextures((prev) => prev.filter((_, i) => i !== index));
    setPreviewUrls((prev) => {
      URL.revokeObjectURL(prev[index]);
      return prev.filter((_, i) => i !== index);
    });
  }, []);
 
  // Update (change) an existing texture.
  const handleUpdate = useCallback((index: number, file: File) => {
    const loader = new TextureLoader();
    const url = URL.createObjectURL(file);
    loader.load(url, (texture) => {
      setTextures((prev) => {
        const newTextures = [...prev];
        newTextures[index] = texture;
        return newTextures;
      });
      setPreviewUrls((prev) => {
        const newUrls = [...prev];
        URL.revokeObjectURL(prev[index]);
        newUrls[index] = url;
        return newUrls;
      });
    });
  }, []);
 
  return (
    <div>
      <FullCanvasShader textures={textures} />
 
      <TextureUploadMenu
        previewUrls={previewUrls}
        onFileChange={handleFileChange}
        onDelete={handleDelete}
        onUpdate={handleUpdate}
      />
    </div>
  );
}

The inner workings of the TextureUploadMenu are not all that important for this tutorial; it's just a component with a UI that handles uploading, updating, and deleting textures so that we can use them in our shaders.

Still curious how it works?

Here's how.

The TextureUploadMenu UI uses the react-icons library for convenience. You have to install it with:

npm i react-icons

It also applies some styling with Tailwind CSS. For a guide on how to set up Tailwind, visit the Tailwind CSS website. The component supports uploading multiple textures and deleting and changing them as needed.

TextureUploadMenu.tsx
import { useState, ChangeEvent } from "react";
import { FaChevronDown, FaChevronUp } from "react-icons/fa";
import { PreviewUrl } from "./PreviewUrl";
 
type TextureUploadMenuProps = {
  previewUrls: string[];
  onFileChange: (e: ChangeEvent<HTMLInputElement>) => void;
  onDelete: (index: number) => void;
  onUpdate: (index: number, file: File) => void;
};
 
export function TextureUploadMenu({
  previewUrls,
  onFileChange,
  onDelete,
  onUpdate,
}: TextureUploadMenuProps) {
  const [menuOpen, setMenuOpen] = useState(true);
 
  const handleToggleMenu = () => {
    setMenuOpen(!menuOpen);
  };
 
  return (
    <div className="absolute top-2 right-2 bg-white bg-opacity-90 p-4 rounded shadow-lg max-h-[90vh] overflow-y-auto">
      <div className="flex items-center justify-between mb-4">
        <span className="font-bold">Uploaded Textures</span>
        <button onClick={handleToggleMenu} className="focus:outline-none">
          {menuOpen ? <FaChevronUp /> : <FaChevronDown />}
        </button>
      </div>
      {menuOpen && (
        <>
          <input
            type="file"
            multiple
            accept="image/*"
            onChange={onFileChange}
            className="block mb-4"
          />
          {previewUrls.length > 0 && (
            <ul className="space-y-2">
              {previewUrls.map((url, index) => (
                <PreviewUrl
                  key={index}
                  url={url}
                  onDelete={onDelete}
                  onUpdate={onUpdate}
                  index={index}
                />
              ))}
            </ul>
          )}
        </>
      )}
    </div>
  );
}

We also need a component to display the state of a single texture. It should be able to handle updates and deletes and display the texture name and a little preview of the texture. It looks like this:

PreviewUrl.tsx
import React, { ChangeEvent } from "react";
import { FaEdit, FaTrash } from "react-icons/fa";
 
type PreviewUrlProps = {
  url: string;
  onDelete: (index: number) => void;
  onUpdate: (index: number, file: File) => void;
  index: number;
};
 
export function PreviewUrl({ url, onDelete, onUpdate, index }: PreviewUrlProps) {
  const handleUpdateFileChange = (
    index: number,
    event: ChangeEvent<HTMLInputElement>
  ) => {
    const files = event.target.files;
    if (files && files[0]) {
      onUpdate(index, files[0]);
    }
  };
  return (
    <li className="flex items-center space-x-2">
      {/* eslint-disable-next-line @next/next/no-img-element */}
      <img
        src={url}
        alt={`Texture ${index}`}
        className="w-12 h-12 object-cover rounded"
      />
      <span>{`u_tex${index}`}</span>
      <div className="flex space-x-2 ml-auto">
        <button
          onClick={() => onDelete(index)}
          className="bg-red-500 text-white px-2 py-1 rounded hover:bg-red-600"
          title="Delete texture"
        >
          <FaTrash />
        </button>
        <label
          htmlFor={`update-file-${index}`}
          className="bg-blue-500 text-white px-2 py-1 rounded cursor-pointer hover:bg-blue-600"
          title="Change texture"
        >
          <FaEdit />
        </label>
        <input
          id={`update-file-${index}`}
          type="file"
          accept="image/*"
          onChange={(e) => handleUpdateFileChange(index, e)}
          className="hidden"
        />
      </div>
    </li>
  );
}

Now, this ShaderWithTextureUpload component is already very interesting. It allows us to put any fragment shader we like into a file called shaders/fragmentShader.glsl and then display whatever the shader produces over the whole canvas.

We can even upload textures, use the mouse position, and animate things over time, all independent of the screen resolution.

But we wanted to build an editor! So, let's combine this with our code editor component and use the code from the editor as the input for the fragment shader. This way, the user can control the output of the shader program by writing code in the editor.

Combining the Editor with the Shader Display

To combine the editor with the shader display, we need to pass the code from the editor to the shader program. Passing the code and the textures all the way down through props would lead to a lot of prop-drilling, so I think it is time to refactor things a little and extract the editor state into a context.

Extracting Context

To do this, we create a new file that provides the context for the editor state. This context will contain the code and the textures that the user has uploaded.

EditorContextProvider.tsx
import {
  createContext,
  Dispatch,
  PropsWithChildren,
  SetStateAction,
  useContext,
  useState,
} from "react";
import { Texture } from "three";
 
type EditorContextType = {
  code: string;
  setCode: Dispatch<SetStateAction<string>>;
  textures: Texture[];
  setTextures: Dispatch<SetStateAction<Texture[]>>;
};
 
const EditorContext = createContext({} as EditorContextType);
 
export const useEditorContext = () => {
  return useContext(EditorContext);
};
 
export const EditorContextProvider = ({
  children,
  initialCode,
}: PropsWithChildren<{ initialCode: string }>) => {
  const [code, setCode] = useState(initialCode);
  const [textures, setTextures] = useState<Texture[]>([]);
  return (
    <EditorContext.Provider value={{ code, textures, setCode, setTextures }}>
      {children}
    </EditorContext.Provider>
  );
};

Now, we can update the CodeEditor, the texture upload UI, and the FullCanvasShader to consume the context instead of managing their own state.

TextureManagementUI.tsx
import React, { useCallback, useState } from "react";
import { FaChevronDown, FaChevronUp } from "react-icons/fa";
import { TextureLoader } from "three";
import { PreviewUrl } from "./PreviewUrl";
import { useEditorContext } from "./EditorContextProvider"; 
 
export const TextureUploadUI = () => {
  const { setTextures } = useEditorContext();
  const [previewUrls, setPreviewUrls] = useState<string[]>([]);
 
  const handleFileChange = useCallback(
    (event: React.ChangeEvent<HTMLInputElement>) => {
      const files = event.target.files;
      if (!files) return;
 
      const loader = new TextureLoader();
      const fileArray = Array.from(files);
 
      fileArray.forEach((file) => {
        const url = URL.createObjectURL(file);
        loader.load(url, (texture) => {
          setTextures((prev) => [...prev, texture]);
          setPreviewUrls((prev) => [...prev, url]);
        });
      });
    },
    [setTextures]
  );
 
  const handleDelete = useCallback(
    (index: number) => {
      setTextures((prev) => prev.filter((_, i) => i !== index));
      setPreviewUrls((prev) => {
        URL.revokeObjectURL(prev[index]);
        return prev.filter((_, i) => i !== index);
      });
    },
    [setTextures]
  );
 
  const handleUpdate = useCallback(
    (index: number, file: File) => {
      const loader = new TextureLoader();
      const url = URL.createObjectURL(file);
      loader.load(url, (texture) => {
        setTextures((prev) => {
          const newTextures = [...prev];
          newTextures[index] = texture;
          return newTextures;
        });
        setPreviewUrls((prev) => {
          const newUrls = [...prev];
          URL.revokeObjectURL(prev[index]);
          newUrls[index] = url;
          return newUrls;
        });
      });
    },
    [setTextures]
  );
 
  const [menuOpen, setMenuOpen] = useState(true);
 
  const handleToggleMenu = () => {
    setMenuOpen(!menuOpen);
  };
 
  return (
    <div className="absolute top-2 right-2 bg-white bg-opacity-90 p-4 rounded shadow-lg max-h-[90vh] overflow-y-auto z-10">
      <div className="flex items-center justify-between mb-4">
        <span className="font-bold">Uploaded Textures</span>
        <button onClick={handleToggleMenu} className="focus:outline-none">
          {menuOpen ? <FaChevronUp /> : <FaChevronDown />}
        </button>
      </div>
      {menuOpen && (
        <>
          <input
            type="file"
            multiple
            accept="image/*"
            onChange={handleFileChange}
            className="block mb-4"
          />
          {previewUrls.length > 0 && (
            <ul className="space-y-2">
              {previewUrls.map((url, index) => (
                <PreviewUrl
                  key={index}
                  url={url}
                  onDelete={handleDelete}
                  onUpdate={handleUpdate}
                  index={index}
                />
              ))}
            </ul>
          )}
        </>
      )}
    </div>
  );
};
FullCanvasShader.tsx
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useMemo, useRef } from "react";
import { ShaderMaterial, Texture, Vector2 } from "three";
import { useEditorContext } from "./EditorContextProvider"; 
 
import vertexShader from "./shaders/vertexShader.glsl";
 
 
export function FullCanvasShaderMesh() {
  const { code: fragmentShader, textures } = useEditorContext(); 
 
  const shaderRef = useRef<ShaderMaterial>(null!);
  const { size } = useThree();
 
  useFrame(({ clock, pointer }) => {
    if (shaderRef.current) {
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime();
      shaderRef.current.uniforms.u_mouse.value.copy(pointer);
    }
  });
 
  const textureUniforms = useMemo(
    () =>
      textures.reduce((acc, texture, index) => {
        acc[`u_tex${index}`] = { value: texture };
        return acc;
      }, {} as { [key: string]: { value: Texture } }),
    [textures]
  );
 
  return (
    <mesh key={Math.random()}> {/* new key each render remounts the mesh so the shader recompiles */}
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{
          u_time: { value: 0 },
          u_resolution: { value: new Vector2(size.width, size.height) },
          u_pixelRatio: { value: window.devicePixelRatio },
          u_mouse: { value: new Vector2(0, 0) },
          ...textureUniforms,
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
export function FullCanvasShader() {
  return (
    <Canvas
      orthographic
      camera={{
        left: -1,
        right: 1,
        top: 1,
        bottom: -1,
        near: 0.1,
        far: 1000,
        position: [0, 0, 1],
      }}
    >
      <FullCanvasShaderMesh />
    </Canvas>
  );
}
CodeEditor.tsx
import { cppLanguage } from "@codemirror/lang-cpp";
import { vscodeDark } from "@uiw/codemirror-theme-vscode";
import ReactCodeMirror from "@uiw/react-codemirror";
import { useCallback } from "react";
import { useEditorContext } from "./EditorContextProvider"; 
 
export const CodeEditor = () => {
  const { code, setCode } = useEditorContext(); 
  const onChange = useCallback((val: string) => {
    setCode(val);
  }, [setCode]);
 
  return (
    <ReactCodeMirror
      value={code}
      onChange={onChange}
      height="100%"
      theme={vscodeDark}
      extensions={[cppLanguage]}
    />
  );
};

Finally, we add one more file with a component to combine everything.

ShaderEditor.tsx
import { CodeEditor } from "./CodeEditor";
import { EditorContextProvider } from "./EditorContextProvider";
import { FullCanvasShader } from "./FullCanvasShader";
import { TextureUploadUI } from "./TextureUploadUI";
 
import initialShader from "./shaders/fragmentShader.glsl";
 
export function CompleteShaderEditor() {
  return (
    <EditorContextProvider initialCode={initialShader}>
      <div className="grid grid-cols-2 h-screen">
        <CodeEditor />
        <FullCanvasShader />
        <TextureUploadUI />
      </div>
    </EditorContextProvider>
  );
}

And that's it. We now have a full-screen shader editor that allows us to write shaders in the editor and see the results on the screen. It even has proper uniform support for creating lots of interesting effects.

Making our Shader Editor ShaderToy Compatible

The uniform values we supply should be available by the same names as they are supplied in Shadertoy, too. This way, we can copy shaders from ShaderToy and paste them into our editor, and they should work... with some caveats. 1

Shadertoy has a couple more built-in uniforms that we have to add to our program as well.

ShaderToy Uniforms

  • iResolution (vec3): The viewport width, height, and pixel aspect ratio.
  • iTime (float): The elapsed time in seconds.
  • iTimeDelta (float): The time between the current and previous frame.
  • iFrame (int): The frame count.
  • iMouse (vec4): Mouse position and click information.
  • iDate (vec4): Date and time information (year, month, day, and seconds since midnight).
  • iSampleRate (float): The audio sample rate.
  • iChannel0, iChannel1, iChannel2, iChannel3 (sampler2D): Up to four texture channels.
  • iChannelTime (float[4]): Playback time for each texture channel.
  • iChannelResolution (vec3[4]): The resolution (and aspect ratio) of each texture channel.

So, let's implement them. For now, this is a barebones example that doesn't support everything ShaderToy can do. The audio, video, and shader input buffers are harder to support correctly, so for now we just supply default values for those uniforms.
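The list above maps directly onto a uniforms object. Here is a standalone sketch of the defaults we are about to supply, with plain numbers and arrays standing in for the three.js Vector2/Vector3/Vector4 wrappers the real component uses:

```typescript
// Default values for the ShaderToy-compatible uniforms.
// Plain arrays stand in for three.js Vector3/Vector4 instances.
type UniformValue = number | number[] | number[][];

function defaultShaderToyUniforms(
  width: number,
  height: number,
  pixelRatio: number
): Record<string, { value: UniformValue }> {
  return {
    iResolution: { value: [width, height, pixelRatio] },
    iTime: { value: 0 },
    iTimeDelta: { value: 0 },
    iFrame: { value: 0 },
    iMouse: { value: [0, 0, 0, 0] },
    iDate: { value: [0, 0, 0, 0] },
    iSampleRate: { value: 44100 },
    iChannelTime: { value: [0, 0, 0, 0] },
    iChannelResolution: {
      value: [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]],
    },
  };
}
```

The iChannel0 through iChannel3 samplers are added separately, since they only exist once a texture has been uploaded.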

FullCanvasShader.tsx
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useMemo, useRef } from "react";
import { ShaderMaterial, Texture, Vector2, Vector3, Vector4 } from "three"; 
import { useEditorContext } from "./EditorContextProvider";
 
import vertexShader from "./shaders/vertexShader.glsl";
 
export function FullCanvasShaderMesh() {
  const { code: fragmentShader, textures } = useEditorContext();
 
  const shaderRef = useRef<ShaderMaterial>(null!);
  const { size } = useThree();
 
  useFrame(({ clock, pointer }, delta) => {
    if (shaderRef.current) {
      shaderRef.current.uniforms.u_time.value = clock.getElapsedTime();
      shaderRef.current.uniforms.u_mouse.value.copy(pointer);
      shaderRef.current.uniforms.u_resolution.value.set(
        size.width,
        size.height
      );
      shaderRef.current.uniforms.u_pixelRatio.value = window.devicePixelRatio;
      shaderRef.current.uniforms.iResolution.value.set(
        size.width,
        size.height,
        window.devicePixelRatio
      );
      shaderRef.current.uniforms.iTime.value = clock.getElapsedTime();
      // React Three Fiber passes the time since the last frame as the
      // second argument of the useFrame callback
      shaderRef.current.uniforms.iTimeDelta.value = delta;
      const now = new Date();
      const year = now.getFullYear();
      const month = now.getMonth() + 1; // getMonth() is zero-based
      const day = now.getDate();
      const secondsSinceMidnight =
        now.getSeconds() + 60 * (now.getMinutes() + 60 * now.getHours());
      shaderRef.current.uniforms.iDate.value.set(
        year,
        month,
        day,
        secondsSinceMidnight
      );
      shaderRef.current.uniforms.iFrame.value += 1;
      shaderRef.current.uniforms.iMouse.value.set(pointer.x, pointer.y, 0, 0);
    }
  });
 
  const textureUniforms = useMemo(
    () =>
      textures.reduce((acc, texture, index) => {
        acc[`u_tex${index}`] = { value: texture };
        acc[`iChannel${index}`] = { value: texture }; 
 
        return acc;
      }, {} as { [key: string]: { value: Texture } }),
    [textures]
  );
 
  return (
    <mesh key={Math.random()}>
      <planeGeometry args={[2, 2]} />
      <shaderMaterial
        ref={shaderRef}
        uniforms={{
          u_time: { value: 0 },
          u_resolution: { value: new Vector2(size.width, size.height) },
          u_pixelRatio: { value: window.devicePixelRatio },
          u_mouse: { value: new Vector2(0, 0) },
          iResolution: { 
            value: new Vector3( 
              size.width, 
              size.height, 
              window.devicePixelRatio 
            ), 
          }, 
          iTime: { value: 0 }, 
          iTimeDelta: { value: 0 }, 
          iDate: { value: new Vector4() }, 
          iFrame: { value: 0 }, 
          iMouse: { value: new Vector4(0, 0, 0, 0) }, 
          iChannelTime: { value: [0, 0, 0, 0] }, 
          iSampleRate: { value: 44100 }, 
          iChannelResolution: { 
            value: [new Vector3(), new Vector3(), new Vector3(), new Vector3()], 
          },  
          ...textureUniforms,
        }}
        vertexShader={vertexShader}
        fragmentShader={fragmentShader}
      />
    </mesh>
  );
}
 
// rest of code stays the same
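The iDate packing inside useFrame can be factored out into a small pure helper. This is just a sketch (the component keeps the same math inline):

```typescript
// Packs a Date into ShaderToy's iDate layout:
// (year, month, day, seconds since midnight).
function toIDate(now: Date): [number, number, number, number] {
  const secondsSinceMidnight =
    now.getSeconds() + 60 * (now.getMinutes() + 60 * now.getHours());
  return [
    now.getFullYear(),
    now.getMonth() + 1, // getMonth() is zero-based
    now.getDate(),
    secondsSinceMidnight,
  ];
}
```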

We also need to update our shader to make use of these new uniforms "automatically", because pasting in shader programs from ShaderToy should work without any modifications. First, we define all the uniforms we need; then we include a main function that calls mainImage with the correct parameters. This is necessary because ShaderToy shaders only contain a mainImage function, not the main entry point that our editor normally expects.

shaders/shadertoyDefinitions.glsl
// This is the default shader code we have to include for copy-pasted shadertoy 
// examples to work, it is defining the uniforms and the main function that calls mainImage
uniform vec3      iResolution;
uniform float     iTime;
uniform float     iTimeDelta;
uniform float     iFrameRate;
uniform int       iFrame;
uniform float     iChannelTime[4];
uniform vec3      iChannelResolution[4];
uniform vec4      iMouse;
uniform sampler2D iChannel0;
uniform sampler2D iChannel1;
uniform sampler2D iChannel2;
uniform sampler2D iChannel3;
uniform vec4      iDate;
uniform float     iSampleRate;
 
void mainImage( out vec4 c, in vec2 f );
void main( void )
{
    vec4 color = vec4(0.0, 0.0, 0.0, 0.0);
    // Divide by the pixel ratio (stored in iResolution.z) so the fragment
    // coordinates match iResolution.xy, which is in CSS pixels
    vec2 fragCoord = gl_FragCoord.xy / iResolution.z;
    mainImage(color, fragCoord);
    gl_FragColor = color;
}

Lastly, we need to include this piece of fragment shader with the rest of our shader editor, but conditionally.

We want it to be used only if the user pasted in a shader from Shadertoy. So, let's create a toggle based on the text content of the editor.

If the editor content includes the code void mainImage, we will prepend the ShaderToy definitions as well.

FullCanvasShader.tsx
import { Canvas, useFrame, useThree } from "@react-three/fiber";
import { useMemo, useRef } from "react";
import { ShaderMaterial, Texture, Vector2, Vector3, Vector4 } from "three";
import { useEditorContext } from "./EditorContextProvider";
 
import vertexShader from "./shaders/vertexShader.glsl";
import shadertoyDefinitions from "./shaders/shadertoyDefinitions.glsl"; 
 
export function FullCanvasShaderMesh() {
  const { code, textures } = useEditorContext();
 
  const isShaderToy = useMemo(() => code.includes("void mainImage"), [code]); 
  const fragmentShader = useMemo( 
    () => (isShaderToy ? shadertoyDefinitions + "\n" + code : code), 
    [code, isShaderToy] 
  ); 
 
  const shaderRef = useRef<ShaderMaterial>(null!);
  const { size } = useThree();
 
// rest of code stays the same
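The toggle logic is easy to verify in isolation. In this standalone sketch, a string constant stands in for the imported shadertoyDefinitions file:

```typescript
// Prepend the ShaderToy uniform definitions only when the pasted
// code defines mainImage instead of a main entry point.
const shadertoyDefinitions = "uniform float iTime; // ...";

function buildFragmentShader(code: string): string {
  const isShaderToy = code.includes("void mainImage");
  return isShaderToy ? shadertoyDefinitions + "\n" + code : code;
}
```

A pasted ShaderToy snippet gets the definitions prepended, while a plain void main() shader is passed through untouched.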

And this is it. Let's test it by copy-pasting a shader from ShaderToy into our editor. Take this epic example and see the beauty of it working. You can play around with the example as well! If the screen goes blank, open the console to check for errors in your shader code.
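One gotcha when reading those console errors: because we prepend the definitions file, the line numbers in the GLSL info log no longer match the editor. A small helper can shift them back. This is an assumption-laden sketch: it relies on the common ANGLE-style `ERROR: <file>:<line>:` log format, which varies between GPU drivers, and headerLines is a hypothetical count of prepended lines:

```typescript
// Shift GLSL info-log line numbers so they match the editor content,
// compensating for a prepended header of `headerLines` lines.
// Assumes the ANGLE-style "ERROR: <file>:<line>:" prefix.
function adjustErrorLines(log: string, headerLines: number): string {
  return log.replace(
    /ERROR: (\d+):(\d+)/g,
    (_match: string, file: string, line: string) =>
      `ERROR: ${file}:${Number(line) - headerLines}`
  );
}
```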

And that's it for this tutorial. We have built a full-screen shader editor that lets us write shaders in the editor and see the results on the screen immediately. It even has proper uniform support for creating lots of interesting effects.

I hope to write more tutorials on how to create interesting effects with shaders in the future. If you have any questions or suggestions, feel free to reach out to me on Twitter.

Footnotes

  1. The main difference between ShaderToy and our editor is this: Our editor is based on ThreeJS, which is based on WebGL, and this means it is using the main function as an entry point to the shader program. WebGL also defines a gl_FragColor variable to be used as the output of the fragment shader and a gl_FragCoord variable as the pixel coordinate input. In Shadertoy, none of these is used. Instead, there is a mainImage function that provides two arguments for setting up those output and input variables...