HamletFernandez edited this page Sep 4, 2021 · 22 revisions

Three.js Documentation

Setting up directories

What I found has worked best so far is to have a dedicated directory for whichever project is using the Three.js library. So far, our modules are localized and our projects have not been deployed in any other environment, therefore isolating Three.js projects into their own directory makes things much easier.

An example:

```
three/
    -> Project 1/
        -> index.html
        -> main.js
        -> style.css
        -> etc.
```

The Vite App

For starting purposes, we use Vite to create our own local server to run our three.js applications. Vite also provides index.html, main.js, and style.css files we can work with. To install Vite, type the following command in the command prompt. It is recommended to run this command inside your project folder (e.g. inside Project 1).

npm init vite@latest

Following this command, it will prompt you for a project name, which can be the same as the name of your current working directory. Next, it may or may not prompt you for a package name, which we do not need right now; you can simply re-enter the project name. Lastly, it will ask you to select a framework and variant; for our research purposes we will be using vanilla JavaScript. Here is an example of what this process looks like:

```
Hamlets-Air:testProject hamletfernandez$ npm init vite@latest
npx: installed 6 in 3.116s
✔ Project name: … testProject
✔ Package name: … testProject
✔ Select a framework: › vanilla
✔ Select a variant: › vanilla
```

This process creates a new folder inside your working directory, named after your project; in this case, testProject. This folder holds all the relevant Vite files that make up your application, including a node_modules subdirectory and index.html, style.css, and main.js files, among other things.

The process will also present the following commands, which can be followed as directed:

```
cd testProject
npm install
npm run dev
```

"cd testProject" will take you inside the project folder created by Vite.

"npm install" will install the project's dependencies (listed in package.json) into the node_modules folder. This should only be done once per project on each machine, and again whenever dependencies change.

"npm run dev" will run your Vite application! This establishes a local server at an address like http://localhost:3000/. Typing this address into your browser will take you to your application. This command should be run whenever you would like to start up your application; that is, unlike "npm install", you will run this command many times over the course of the project's development.

The Vite environment

After creating the Vite application you will notice that you have your classic HTML, CSS, and JavaScript files alongside a package.json file, a package-lock.json file, a favicon.svg image, and a node_modules folder. We do not have to worry about most of these right now; the most important one is the package.json file. This file keeps track of packages we install, so we'll use it later when we install packages like dat.gui for user interfaces. For now, we'll focus on the main.js file. It might already contain some code, but we can delete whatever is there and start with a clean slate.

The Three Environment

Installing Three

Inside the main.js file, using three.js requires JavaScript's import keyword to import the library and its functions. Your first line inside the main.js file could look something like:

import * as THREE from 'https://cdn.skypack.dev/three@<VERSION>';

You will want to substitute <VERSION> with the latest three.js version. The above example uses what's called a content delivery network (CDN), in this case Skypack, to wrap three.js in an easy-to-import package. Using Skypack you can find the latest three.js version to import. You can find more on Skypack here and on using three.js with Skypack here.

Three Fundamentals: Scene, Camera, Renderer

The Scene

You can read more about the fundamentals of creating a scene here. Essentially, scenes are the 3d environments we work in and around. There is a tree-like concept called the scene graph, which essentially defines a scene as the parent of all objects to be rendered. That is, scenes are objects themselves, but every cube, light source, and texture (anything you'd like to have appear) can be traced back to the scene object. Thinking of it like a tree is helpful because eventually, with more complex projects, organizing different parts of the environment into multiple scenes can help keep objects organized.

The scene is usually one of the most important variables you declare in a three.js environment, and you will interact with it many times throughout your code. Additionally, in this case, we have multiple scenes, so I declared one variable called mainScene to hold the entirety of the project. I have it set as a global variable because I find myself accessing mainScene often to add or remove objects.

```javascript
let mainScene;

init();

function init() {
    // ...

    //Scene init
    mainScene = new THREE.Scene();

    // ...
}
```

_Notice how, unlike typical JavaScript camelCase (e.g. thisIsCamelCase), three.js class names like Scene capitalize the first letter too._

The Camera

You can read more about the fundamentals of using a camera in three.js here. Cameras give our environment perspective and allow us to see all or parts of our scene. Cameras are objects with positions in 3d space that view things in front of them in the shape of a frustum, which is depicted in the above link. Cameras are critical to viewing objects we create and manipulate in three.js, particularly because they affect how we view things. A THREE.PerspectiveCamera() behaves quite differently from a THREE.OrthographicCamera(): the former changes how big or small an object _appears_ to be depending on distance, while the latter keeps sizes constant. Perspective cameras are great for modeling realistic 3d environments, while orthographic cameras are great for capturing 2d objects like character sprites or using an image as the background.

We'll be focusing more on the PerspectiveCamera(), whose constructor takes 4 parameters:

```javascript
//Camera init
const fov = 75;
const aspectRatio = innerWidth / innerHeight;
const near = 0.1;
const far = 1000;

mainCamera = new THREE.PerspectiveCamera(fov, aspectRatio, near, far);
```

For each parameter, it can be helpful to understand:

fov describes the camera's vertical field of view: the angle, in degrees, from the top to the bottom of what the camera can see. For example, a large fov combined with a small near distance and a large far distance will let the camera capture everything between the near and far planes, but objects will appear "stretched" or distorted toward the edges of the view.

aspectRatio should match the area you render into; using innerWidth/innerHeight keeps the image from looking squashed or stretched when the window isn't square

near describes how close the camera should start capturing objects

far describes how far away the camera should keep capturing objects
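To build intuition for how fov and distance interact, here is a small standalone calculation (plain JavaScript, no three.js required; visibleHeight is a hypothetical helper written for this page, not part of the library): the height of the region a perspective camera can see at a given distance.

```javascript
// Visible height at a given distance from a perspective camera:
//   height = 2 * distance * tan(fov / 2), with fov converted to radians.
// Doubling the distance doubles the visible height, which is why a
// far-away object appears smaller through a perspective camera.
function visibleHeight(fovDegrees, distance) {
  const fovRadians = (fovDegrees * Math.PI) / 180;
  return 2 * distance * Math.tan(fovRadians / 2);
}

console.log(visibleHeight(90, 1));  // ≈ 2 units tall at distance 1
console.log(visibleHeight(75, 10)); // ≈ 15.35 units tall at distance 10
```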

Lastly, you can also set the camera's position. By default it starts at (0, 0, 0), which for three.js is the center of the scene along the x, y, and z axes, looking down the negative z axis. However, when you create objects they will also be created at a default location of x = 0, y = 0, z = 0, which the camera will not be able to view without moving the camera or the objects.

The easiest way to move the camera so it can see the origin is:

mainCamera.position.set(0, 0, 5);

This sets the camera's position 5 units along the positive z axis, looking back toward the center of the scene. This saves us from slight panic when we don't see the objects we create at the default position (0, 0, 0).

Renderer

The renderer is perhaps one of the most important objects in any three.js environment, given that it allows us to see what we have created. Without the renderer, we would not be able to see what the camera is pointing at or what objects we have created in our scene. Aside from the scene and camera, the renderer is the third object required in order to set up an environment in three.js.

Creating a renderer can be done like:

const renderer = new THREE.WebGLRenderer();

The above creates a canvas element which can be added to our HTML document. A canvas and a scene are similar in that both can hold whatever we create in our 3d environment. However, the canvas is part of the DOM, like any text, buttons, or links on the website, while a scene is part of the JavaScript code housed within the canvas. In other words, if we change the size of the canvas it takes up more space on our website, but if we change the sizes of all the objects in our scene, the canvas just cuts off whatever is too big to be rendered. Retrieving the canvas element within our JavaScript can be done with

const canvas = renderer.domElement;

This allows us to append the canvas to our document by calling

document.body.appendChild(canvas);

This should create a black box on our webpage hosted locally. This black box is the space in which our renderer can begin to render and that can be done by calling this line:

renderer.render(scene, camera);

The renderer also has many helpful methods and properties that can be, and usually are, called prior to calling .render(). Below I'll briefly list out and explain the ones I use most often.

renderer.setSize(innerWidth, innerHeight);

This line of code sets the size for the renderer which can be different than the size of the canvas. However, as the canvas is an element of the webpage it can easily be edited using css, so simply matching the size of the renderer to the canvas will make developing the document simpler.

renderer.autoClearColor = false;

The above line changes the renderer's autoClearColor property to false which stops the renderer from redrawing the background every frame. This makes it such that any moving object leaves behind a trail which is a fun and powerful effect for drawing with three.js!

renderer.setPixelRatio(devicePixelRatio);

The setPixelRatio method is an example of responsive rendering: it smooths edges, corners, and lines by matching the renderer's resolution to the device's pixel ratio. In other words, the renderer responds to the device's settings to make the scene appear cleaner.
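One related practice worth noting (an assumption on my part, not from the original setup): very high-DPI phones can report pixel ratios of 3 or more, which makes the renderer draw enormous framebuffers. A common pattern is to cap the ratio before passing it to setPixelRatio; cappedPixelRatio below is an illustrative helper, not a three.js API.

```javascript
// Cap the device pixel ratio so high-DPI screens don't force the renderer
// to draw an oversized framebuffer. Usage would look something like:
//   renderer.setPixelRatio(cappedPixelRatio(window.devicePixelRatio));
function cappedPixelRatio(devicePixelRatio, max = 2) {
  return Math.min(devicePixelRatio, max);
}
```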

When creating the renderer you can also pass options:

```javascript
const renderer = new THREE.WebGLRenderer({
    preserveDrawingBuffer: true,
    alpha: true,
});
```

Setting the alpha property to true will make the renderer's background transparent. The following is an example of how I created the renderer in my project using the above methods:

```javascript
//Renderer init
renderer = new THREE.WebGLRenderer({
    preserveDrawingBuffer: true
});
//renderer.autoClearColor = false;

canvas = renderer.domElement;
renderer.setSize(innerWidth, innerHeight);
renderer.setPixelRatio(devicePixelRatio);
document.body.appendChild(canvas);
```

Resizing the renderer

Oftentimes the renderer will need to change the size of the canvas depending on the size of the webpage, like when expanding or minimizing the page or even inspecting the page. Therefore, I use the method window.addEventListener() to call the function onWindowResize whenever the page resizes. I found this code here.

```javascript
window.addEventListener( 'resize', onWindowResize );

function onWindowResize() {
    mainCamera.aspect = window.innerWidth / window.innerHeight;
    mainCamera.updateProjectionMatrix();
    renderer.setSize( window.innerWidth, window.innerHeight );
}
```

Animations

Animations in three.js are not unlike creating animations with other JavaScript libraries. While three.js has an Animation System, creating animations by rotating and changing the position of objects can be done with a simple loop that calls a function at each frame. With JavaScript, we can achieve this loop using requestAnimationFrame(function).
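The loop pattern itself can be sketched without three.js at all. In the sketch below, fakeRequestAnimationFrame is a stand-in scheduler invented for illustration: the real requestAnimationFrame asks the browser to run the callback before the next repaint, while this one runs it immediately, a fixed number of times, so the recursion terminates and can be inspected.

```javascript
// Stand-in for the browser's requestAnimationFrame: runs the callback
// immediately, at most three times, so we can watch the recursion unwind.
let framesLeft = 3;
function fakeRequestAnimationFrame(callback) {
  if (framesLeft > 0) {
    framesLeft -= 1;
    callback();
  }
}

let rendered = 0;
function animate() {
  fakeRequestAnimationFrame(animate); // schedule the "next frame"
  rendered += 1;                      // stand-in for rendering this frame
}

animate();
// animate runs once directly plus three scheduled times, so rendered is 4
```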

First, I'll go over a simple example of animations, then use an example from my scene. Say we have our scene, camera, and renderer set up with a cube (a "box" in three.js) in the middle of our scene. We have our init() function that creates our scene, camera, and renderer, and adds a green cube to the scene at a position of (0, 0, 0):

```javascript
let scene, camera, renderer, canvas;
let box;

//call init
init();

function init() {
    //Scene init
    scene = new THREE.Scene();

    //Camera init
    const fov = 75;
    const aspectRatio = innerWidth / innerHeight;
    const near = 0.1;
    const far = 1000;
    camera = new THREE.PerspectiveCamera(fov, aspectRatio, near, far);
    camera.position.set(0, 0, 5); // pull back so the box at the origin is visible

    //Renderer init
    renderer = new THREE.WebGLRenderer();
    canvas = renderer.domElement;
    renderer.setSize(innerWidth, innerHeight);
    renderer.setPixelRatio(devicePixelRatio);
    document.body.appendChild(canvas);

    const geometry = new THREE.BoxGeometry( 1, 1, 1 );
    const material = new THREE.MeshBasicMaterial( {color: 0x00ff00} );
    box = new THREE.Mesh( geometry, material );
    scene.add( box );
}

function render() {
    renderer.render(scene, camera);
}
```

The above code will create a static box in our three.js scene. To animate it, we can create an animate() function which we call at each frame, changing the box's rotation slightly between frames.

```javascript
animate();

function animate() {
    requestAnimationFrame(animate);
    render();
    box.rotation.x += 0.01;
    box.rotation.y += 0.01;
    box.rotation.z += 0.01;
}
```

An important thing to note with animation is that the renderer must be called to render the scene at each frame, so it is usually good practice to place a render() function, like the one above, in the same scope you call requestAnimationFrame().
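One caveat worth flagging (my addition, not from the original): adding a fixed 0.01 per frame ties the animation speed to the frame rate, so the box spins twice as fast on a 120 Hz display as on a 60 Hz one. A common alternative is to scale the increment by the time elapsed since the last frame, which three.js exposes via THREE.Clock's getDelta() method. The arithmetic itself is plain JavaScript:

```javascript
// Frame-rate-independent rotation: advance by angular speed times the
// elapsed time, instead of a fixed amount per frame.
function advanceRotation(rotation, radiansPerSecond, deltaSeconds) {
  return rotation + radiansPerSecond * deltaSeconds;
}

// In an animate loop this would look something like (sketch):
//   const delta = clock.getDelta();          // clock = new THREE.Clock()
//   box.rotation.y = advanceRotation(box.rotation.y, Math.PI, delta);
```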

The Van Gogh Box

As the above example attempts to convey, when creating animations it is important to declare the object to be animated in a scope where it can be initialized in the scene and accessed in animate(). Usually this is done by making the object a global variable or by defining the animate function inside init().

In my scene, I declared a box that maps a renderTarget as its texture, with an animating scene inside. While I will dive deeper into this technique in the RenderTarget section, simply put: you can see rotating boxes along the sides of a static box. The texture of the box showcases a scene, and this scene contains a box with its own texture, an image of Van Gogh.

This is a globally scoped variable: `let goghBox`

The following code is inside my init() function, so it is called once before my animate call:

```javascript
// this line creates "goghBox" using the class "Art", which creates a new
// scene, camera, and EffectComposer, but as a mappable texture.
// Refer to the Art section for more details.
goghBox = new Art();

const ambientLight = new THREE.AmbientLight( 0xffffff, 1 );

goghBox.loadTexture('./images/gogh.jpg');

// the following lines affect the texture wrapping and how the render
// targets map the cube in the mainScene and the cube inside the art piece
goghBox.texture.anisotropy = renderer.capabilities.getMaxAnisotropy();
goghBox.texture.matrixAutoUpdate = false;
goghBox.texture.wrapS = goghBox.texture.wrapT = THREE.RepeatWrapping;
goghBox.texture.matrix.scale(4, 4);

goghBox.composer.renderTarget2.texture.anisotropy = renderer.capabilities.getMaxAnisotropy();
goghBox.composer.renderTarget2.texture.matrixAutoUpdate = false;
goghBox.composer.renderTarget2.texture.wrapS = goghBox.composer.renderTarget2.texture.wrapT = THREE.RepeatWrapping;
goghBox.composer.renderTarget2.texture.matrix.scale(4, 4);
// Refer to the Render Targets section for more details on textures

const geometry = new THREE.BoxGeometry(10, 10, 10);
const material = new THREE.MeshPhongMaterial({
  map: goghBox.texture,
  side: THREE.DoubleSide
});
const innerBox = new THREE.Mesh(geometry, material);

// these next two lines add objects to the inner scene, the scene that is
// rendered onto the texture. The ambient light makes the inner box visible,
// and innerBox holds the Van Gogh texture and will be called on to rotate.
goghBox.scene.add(ambientLight);
goghBox.scene.add(innerBox);

const boxGeometry = new THREE.BoxGeometry(5, 5, 5);
const boxMaterial = new THREE.MeshPhongMaterial({
  map: goghBox.composer.renderTarget2.texture,
  side: THREE.DoubleSide,
});
const box = new THREE.Mesh(boxGeometry, boxMaterial);
box.position.set(0, 5, 15);

mainScene.add(box);
```

Then, in our animation function, I call

goghBox.scene.children[1].rotation.y += 0.002;

Breaking down this line: I access goghBox, which was declared as a global variable and initialized using the Art() class. That class created a scene property, which I access with .scene. All three.js scenes have a children property, an array of every object added to the scene. Here the ambient light was added first (index 0) and innerBox second (index 1), so children[1] is the inner box, and this line nudges its y rotation slightly each frame.
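Indexing children[1] works but is fragile: adding the light and the box in a different order would silently rotate the wrong object. three.js objects have a name property, and scenes offer getObjectByName() for lookups. The stand-in below is plain JavaScript with a miniature scene shaped like the Van Gogh setup; it shows the idea without the library, and the helper mirrors (but is not) the real three.js method.

```javascript
// Recursive lookup by name, mirroring what three.js's
// Object3D.getObjectByName() does for real scene graphs.
function getObjectByName(root, name) {
  if (root.name === name) return root;
  for (const child of root.children || []) {
    const found = getObjectByName(child, name);
    if (found) return found;
  }
  return null;
}

// Miniature "scene graph" shaped like the Van Gogh setup:
const goghScene = {
  name: 'goghScene',
  children: [
    { name: 'ambientLight', children: [] },
    { name: 'innerBox', children: [], rotation: { y: 0 } },
  ],
};

// Same update as goghScene.children[1], but robust to reordering:
getObjectByName(goghScene, 'innerBox').rotation.y += 0.002;
```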

dat.gui

dat.gui is a library that easily adds a graphical user interface (GUI) to our three.js application. I have it installed as a dependency for the project, which means that instead of importing the library through a source URL (like I did with three.js and other modules), I installed it using npm and it is recorded in the project's package.json metadata.

To install, call the following line in your terminal while inside your project folder. In the same folder you would run npm run dev, you would use npm to install dat.gui (and any other dependencies you may add).

npm install dat.gui

This will install the dat.gui library for your project and record it as a dependency in your package.json file. The package.json file was created when you scaffolded the Vite app, and it lives in the same directory as your index.html and main.js files. Here's what mine looks like:

```json
{
  "name": "room1",
  "version": "0.0.0",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "serve": "vite preview"
  },
  "devDependencies": {
    "vite": "^2.4.3"
  },
  "dependencies": {
    "dat.gui": "^0.7.7"
  }
}
```

Creating the GUI

Points to document: GUI creation, how to include dat.gui as a dependency, adding to the GUI, adding folders to the GUI, adding to folders, allowing for color, and accessing parameters.

OrbitControls

OrbitControls is a module of the three.js library which we can import to add camera movement, rotation, panning, and zooming within our scene. With just the default OrbitControls we can gain a lot of functionality but this document will also go through some other interesting methods that allow us to refine our OrbitControls.

The Setup

To import Orbit Controls, you can use the keyword import, just like when importing the THREE library, while linking the url to the module.

import {OrbitControls} from 'https://threejsfundamentals.org/threejs/resources/threejs/r127/examples/jsm/controls/OrbitControls.js';

Once imported, you can begin to use the OrbitControls. To do so, declare a variable, controls, as a new OrbitControls(camera, canvas). The camera will be the camera you'd like to apply the effects to, and the canvas is your rendering canvas. Remember: your canvas can either be created in your HTML file, or retrieved from your renderer by calling const canvas = renderer.domElement. Since OrbitControls takes a camera as input, you can change which camera is being affected by the OrbitControls at any time, and this can lead to very interesting effects. I will provide an example later on in this section.

Part of the setup for OrbitControls includes the frequently used controls.update() method, which makes the camera respond to input when rotating, zooming, panning, etc. Without controls.update(), OrbitControls would not move the camera at all, so it is important to call controls.update() in your animate function. This enables the camera to smoothly respond to input from the user.

Helpful Properties and Methods

More documentation on this can be found here.

controls.enabled This property enables or disables the controls. It takes a boolean true or false value.

controls.target.set(x, y, z) The target property lets you focus the camera on a certain point such that zooming, panning, and rotating are relative to this target. The default is (0, 0, 0).

controls.enableRotate This boolean property enables or disables user input affecting the camera's rotation. By default, this property is set to true, but setting it to false can be useful. For example, preventing a user from rotating around a 2d texture can help immersion, since the user can only view the 2d texture from the front, much as they would view art pieces in a real gallery.

controls.minDistance and controls.maxDistance. These two properties affect how far the camera can zoom. Setting a min distance greater than 0 can stop users from zooming past objects at a position of (x = 0, y = 0, z = 0) and setting a max distance can prevent users from losing the object altogether.

controls.autoRotate This boolean property enables or disables autoRotate for the camera. If set to true, the camera will automatically rotate around the target position with a speed of controls.autoRotateSpeed.

controls.autoRotateSpeed This changes the camera's auto-rotate speed. The default is 2.0, and it can be increased or decreased to make the camera auto-rotate faster or slower, respectively.

controls.update() This method is required when making changes to the OrbitControls. Usually it is called once inside an animate function so that user input (touchpad, arrow keys, or mouse moving and dragging) affects the camera. However, this method should also be called after changing the camera's target or position during initialization. This is important because the camera and the OrbitControls each have their own defaults, and changing the starting values requires a controls.update() in order to begin with the desired values before the animate function runs.
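Conceptually, minDistance and maxDistance behave like a clamp on the camera-to-target distance that OrbitControls maintains while the user zooms. The plain-JavaScript sketch below illustrates that clamp; clampDistance is a helper written for this page, not part of the library.

```javascript
// Clamp a proposed zoom distance into [minDistance, maxDistance],
// mirroring how OrbitControls bounds how far the camera may travel.
function clampDistance(distance, minDistance, maxDistance) {
  return Math.min(Math.max(distance, minDistance), maxDistance);
}
```

So with controls.minDistance = 1 and controls.maxDistance = 100, a zoom that would bring the camera to 0.5 units snaps to 1, and one past 500 snaps to 100.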

Render Targets

Render Targets are textures that the renderer can render into, instead of rendering to the entire canvas. These textures can host anything that can be rendered, such as scenes, cameras, objects, and animations, and they can then be mapped onto objects. Some examples of render targets can be found here.

The way I use render targets is to let the EffectComposer render a post-processed scene into a texture, which can then be mapped onto objects in the main scene.

Textures

SkyBox

What did not work

BackLog (To Do's)

  • Adding Spotlight Helpers so that users can change spotlight parameters and see what they are changing. Spotlight helpers would be toggleable.

  • Adding Camera Helpers so that users can change the parameters of the camera and see what they are changing. The camera helper would be toggleable.

  • LUTshaders

  • Fix bug where 2d images do not render to proper size

  • Fix bug/quality of life issue where the last pass in the effect composer must be enabled so that the other passes render to screen. This is a matter of the last pass always requiring the property renderToScreen = true.

Resources

Three.Js starting documentation:

https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene

Example tutorial: Three.js tutorial to get started. The following video will visually explain many of the things stated above, and will grant the user experience with gsap, dat.gui, and the basics of the three.js library.

https://youtu.be/YK1Sw_hnm58

Vite app

https://vitejs.dev/guide/#scaffolding-your-first-vite-project

First Person Shooter Example

https://threejs.org/examples/games_fps.html

Drawing Canvas Example

https://threejs.org/examples/?q=canva#webgl_materials_texture_canvas

Color Mapping using 3dLUTs

https://threejsfundamentals.org/threejs/lessons/threejs-post-processing-3dlut.html

PostProcessing (this helps understand EffectComposer and Passes)

https://threejsfundamentals.org/threejs/lessons/threejs-post-processing.html

List of three.js built in shaders

https://github.com/mrdoob/three.js/tree/master/examples/js/shaders

WebGLRenderer

https://threejs.org/docs/#api/en/renderers/WebGLRenderer