The Canvas: Design Phase



Traditionally, canvas-based tools use canvas technologies like Canvas2D and WebGL to render their in-browser visualizations. Although these technologies are great for applications that require high-performance graphic rendering, such as video games, scientific visualizations, or complex animations, they aren’t optimized to integrate seamlessly with React’s declarative and component-based architecture.

Framer is known for its true WYSIWYG experience: what you see on the canvas during the design phase is a 1:1 representation of the deployed application. This WYSIWYG experience, combined with support for interactive multimedia elements and a seamless integration with React, goes beyond the capabilities of traditional canvas technologies.

Luckily, Framer has an intelligent solution to this: a custom DOM-based canvas. This approach facilitates a seamless integration with React and ensures a true 1:1 WYSIWYG experience, enabling users to easily create high-performance, production-ready websites right within their browser.

In this article, we’ll explore how Framer’s DOM-based approach works and how this benefits users. But first, let’s see how traditional canvas technologies work, and explore the reason behind Framer’s need for a custom approach.

Traditional Canvas Technologies

The concept of a digital canvas goes back to 1984 with the introduction of MacPaint, developed by Bill Atkinson for Apple’s Macintosh. MacPaint was revolutionary for its time, offering a pixel-based canvas where users could draw shapes, brush strokes, and colors, all with the mouse.

Today, the web offers multiple canvas technologies, including HTML5 Canvas2D and WebGL. Each serves a different need in graphics rendering; Canvas2D is a great choice for 2D graphics and animations, while WebGL focuses on complex 3D graphics.

Canvas2D and WebGL are both raster-based technologies, meaning they work by manipulating individual pixels on a grid to create an image. Raster graphics are made up of pixels, where each pixel is assigned a specific color value. When combined, these pixels form the overall image or graphic.


Canvas2D provides a set of JavaScript APIs for drawing 2D shapes, images, and text onto this pixel-based canvas. This includes methods for creating paths, applying transformations, gradients, and patterns. These methods are vector-based, meaning they create graphics using math formulas to define shapes and lines, rather than coloring in individual pixels.

Developers use commands like moveTo and lineTo to draw paths, which are then turned into complete images with stroke and fill commands that outline and color the shapes.

// Grab the canvas element and its 2D drawing context.
const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');

// Draw a filled 50x50 square.
ctx.fillStyle = 'rgb(0, 0, 0)';
ctx.fillRect(10, 10, 50, 50);

// Draw a horizontal line as a stroked path.
ctx.beginPath();
ctx.moveTo(70, 50);
ctx.lineTo(200, 50);
ctx.strokeStyle = 'rgb(0, 0, 0)';
ctx.stroke();

// Draw some text.
ctx.fillStyle = 'rgb(0, 0, 0)';
ctx.font = '20px Helvetica';
ctx.fillText('Hello, Framer!', 70, 40);

While the methods are vector-based, the ultimate rendering on the screen is in raster format. This process involves converting the vector descriptions into a grid of pixels, each colored to represent the visual end result.

Canvas2D works well for static shapes and text, but it becomes limiting for dynamic visuals and effects that demand more computing power, such as 3D graphics.

WebGL

WebGL enables hardware-accelerated 3D rendering within a canvas, similar to what is used to render video games. This allows developers to efficiently create detailed and complex visuals directly within their apps, leveraging the same type of graphics processing found in game development. Its low-level API gives developers direct access to the graphics rendering pipeline.

const canvas = document.getElementById('myCanvas');
const gl = canvas.getContext('webgl');

// Cube geometry: 24 vertices (4 per face, 6 faces), as x/y/z triples.
const vertices = [
  -1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1, 1, 1, 1, 1, -1,
  1, 1, -1, -1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, 1, 1,
  1, 1, -1, 1, -1, -1, -1, -1, -1, 1, 1, -1, 1, 1, -1, -1, -1, 1, -1, -1, 1, 1,
  1, 1, 1, 1, 1, -1,
];

// Per-vertex color values, three per vertex (interpolated in the fragment shader).
const colors = [
  5, 3, 7, 5, 3, 7, 5, 3, 7, 5, 3, 7, 1, 1, 3, 1, 1, 3, 1, 1, 3, 1, 1, 3, 0, 0,
  1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1,
  1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0,
];

// Index buffer: two triangles (six indices) per cube face.
const indices = [
  0, 1, 2, 0, 2, 3, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 13, 14, 12, 14,
  15, 16, 17, 18, 16, 18, 19, 20, 21, 22, 20, 22, 23,
];

// Upload the positions, colors, and indices into GPU buffers.
const vertex_buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);

const color_buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);

const index_buffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, index_buffer);
gl.bufferData(
  gl.ELEMENT_ARRAY_BUFFER,
  new Uint16Array(indices),
  gl.STATIC_DRAW
);

// Vertex shader: transform each vertex by the projection, view, and model matrices.
const vertCode =
  'attribute vec3 position;' +
  'uniform mat4 Pmatrix;' +
  'uniform mat4 Vmatrix;' +
  'uniform mat4 Mmatrix;' +
  'attribute vec3 color;' +
  'varying vec3 vColor;' +
  'void main(void) { ' +
  'gl_Position = Pmatrix*Vmatrix*Mmatrix*vec4(position, 1.);' +
  'vColor = color;' +
  '}';

// Fragment shader: output the interpolated per-vertex color.
const fragCode =
  'precision mediump float;' +
  'varying vec3 vColor;' +
  'void main(void) {' +
  'gl_FragColor = vec4(vColor, 1.);' +
  '}';

// Compile both shaders and link them into a shader program.
const vertShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertShader, vertCode);
gl.compileShader(vertShader);

const fragShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragShader, fragCode);
gl.compileShader(fragShader);

const shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertShader);
gl.attachShader(shaderProgram, fragShader);
gl.linkProgram(shaderProgram);

// Look up the uniform locations and wire the buffers to the shader attributes.
const Pmatrix = gl.getUniformLocation(shaderProgram, 'Pmatrix');
const Vmatrix = gl.getUniformLocation(shaderProgram, 'Vmatrix');
const Mmatrix = gl.getUniformLocation(shaderProgram, 'Mmatrix');

gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
const position = gl.getAttribLocation(shaderProgram, 'position');
gl.vertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0);

gl.enableVertexAttribArray(position);
gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);
const color = gl.getAttribLocation(shaderProgram, 'color');
gl.vertexAttribPointer(color, 3, gl.FLOAT, false, 0, 0);

gl.enableVertexAttribArray(color);
gl.useProgram(shaderProgram);

// Build a 4x4 perspective projection matrix (column-major).
function get_projection(angle, a, zMin, zMax) {
  const ang = Math.tan((angle * 0.5 * Math.PI) / 180); //angle*.5
  return [
    0.5 / ang,
    0,
    0,
    0,
    0,
    (0.5 * a) / ang,
    0,
    0,
    0,
    0,
    -(zMax + zMin) / (zMax - zMin),
    -1,
    0,
    0,
    (-2 * zMax * zMin) / (zMax - zMin),
    0,
  ];
}

const proj_matrix = get_projection(40, canvas.width / canvas.height, 1, 100);

// Model and view matrices start out as 4x4 identity matrices.
const mov_matrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
const view_matrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

view_matrix[14] = view_matrix[14] - 6; //zoom

// In-place rotation helpers for 4x4 column-major matrices.
function rotateZ(m, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  const mv0 = m[0],
    mv4 = m[4],
    mv8 = m[8];

  m[0] = c * m[0] - s * m[1];
  m[4] = c * m[4] - s * m[5];
  m[8] = c * m[8] - s * m[9];

  m[1] = c * m[1] + s * mv0;
  m[5] = c * m[5] + s * mv4;
  m[9] = c * m[9] + s * mv8;
}

function rotateX(m, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  const mv1 = m[1],
    mv5 = m[5],
    mv9 = m[9];

  m[1] = m[1] * c - m[2] * s;
  m[5] = m[5] * c - m[6] * s;
  m[9] = m[9] * c - m[10] * s;

  m[2] = m[2] * c + mv1 * s;
  m[6] = m[6] * c + mv5 * s;
  m[10] = m[10] * c + mv9 * s;
}

function rotateY(m, angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  const mv0 = m[0],
    mv4 = m[4],
    mv8 = m[8];

  m[0] = c * m[0] + s * m[2];
  m[4] = c * m[4] + s * m[6];
  m[8] = c * m[8] + s * m[10];

  m[2] = c * m[2] - s * mv0;
  m[6] = c * m[6] - s * mv4;
  m[10] = c * m[10] - s * mv8;
}

/*================= Drawing ===========================*/
let time_old = 0;

const animate = function (time) {
  const dt = time - time_old;
  rotateZ(mov_matrix, dt * 0.0005);
  rotateY(mov_matrix, dt * 0.0002);
  rotateX(mov_matrix, dt * 0.0003);
  time_old = time;

  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LEQUAL);
  gl.clearColor(0.5, 0.5, 0.5, 0.9);
  gl.clearDepth(1.0);

  gl.viewport(0.0, 0.0, canvas.width, canvas.height);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.uniformMatrix4fv(Pmatrix, false, proj_matrix);
  gl.uniformMatrix4fv(Vmatrix, false, view_matrix);
  gl.uniformMatrix4fv(Mmatrix, false, mov_matrix);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, index_buffer);
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);

  window.requestAnimationFrame(animate);
};

animate(0);

However, WebGL can be quite challenging because it operates at a very low level. For instance, WebGL doesn't even know how to render basic shapes like squares or circles; it only renders triangles. This is because GPUs are heavily optimized for processing triangles, which is what makes them so fast.

One of the biggest challenges developers face with WebGL is handling fonts. WebGL doesn't provide built-in support for text rendering, so you have to write your own font renderer. This involves a significant amount of work, from loading font data to rendering each character as a series of triangles, making text rendering in WebGL a complex task.


Challenges with Traditional Canvas Technologies

While Canvas2D and WebGL are powerful tools, they demand detailed, imperative control over rendering. Rendering React components, as Framer does, would require manually handling all aspects of rendering and state management, translating between React’s declarative approach and the imperative APIs of Canvas2D and WebGL.

For instance, React components re-render based on state changes, but Canvas2D and WebGL require explicit commands to draw and update visual content. Framer is true React, which means an extra layer of complexity would be needed to bridge this gap: manually managing canvas redraws and keeping them in sync with React’s virtual DOM.
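To make this concrete, here is a minimal sketch of the kind of glue code such a bridge would require: a React component that has to re-issue imperative Canvas2D commands by hand whenever its declarative props change. The component and prop names are illustrative, not Framer’s.

import { useEffect, useRef } from 'react';

// Illustrative sketch: bridging declarative React props to imperative
// Canvas2D drawing commands. Component and prop names are hypothetical.
function Box({ x, y, size, color }) {
  const canvasRef = useRef(null);

  useEffect(() => {
    const canvas = canvasRef.current;
    const ctx = canvas.getContext('2d');
    // The canvas has no notion of "this box moved": we must clear and
    // redraw the whole frame ourselves on every prop or state change.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = color;
    ctx.fillRect(x, y, size, size);
  }, [x, y, size, color]);

  return <canvas ref={canvasRef} width={400} height={300} />;
}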

Framer’s DOM-Based Canvas

Instead, Framer uses a custom canvas approach that enables a seamless integration with React. Rather than using HTML5’s <canvas> element, Framer uses a DOM-based solution. Elements created in Framer correspond to actual DOM elements, managed by React.

Framer’s DOM-based approach makes it possible for the component instances on the canvas to be a true 1:1 representation of the published website, which would have been near-impossible with a regular canvas approach.
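In simplified terms, a node on a DOM-based canvas can be thought of as an ordinary React component that renders an absolutely positioned DOM element. The sketch below is illustrative; the component and prop names are made up and do not reflect Framer’s internal API.

// Illustrative only: a canvas node rendered as a real DOM element,
// created and updated by React like any other component.
function CanvasNode({ left, top, width, height, children }) {
  return (
    <div
      style={{
        position: 'absolute',
        // A transform keeps positioning on the compositor, avoiding layout work.
        transform: `translate(${left}px, ${top}px)`,
        width,
        height,
      }}
    >
      {children}
    </div>
  );
}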

For a true WYSIWYG website builder experience, pixel-perfect output accuracy is essential. Framer’s DOM-based approach guarantees that the design rendered to the canvas is an exact representation of the final output. This is unlike techniques that try to translate WebGL renderings to the DOM, which can struggle to accurately reproduce the intended design due to differing rendering models, often resulting in visual differences.

ES Modules and Import Maps

Some components on the canvas are simple frames; others are complex nested components, such as frames with image backgrounds that in turn render more specialized components managing image state to handle decoding efficiently.

Each component on the canvas points to an ES module. ES Modules provide a way to structure and load JavaScript in separate files. Rather than keeping all of the code in a single JavaScript file, we can split it into smaller, manageable pieces, each contained in its own file.

Each file defines a module, which can export specific pieces of code like functions, objects, or classes, and import them wherever needed. Framer uses this to export each component as its own module.

In addition to ES Modules, Framer uses import maps to map imports to specific files. Import maps allow developers to specify the URLs where the browser should locate the imported modules.

Whenever a component updates, Framer automatically updates this URL (the module reference) to ensure that the most current version of the module is used. This keeps the canvas in sync with the latest changes, so the updated component instances are always rendered.
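Conceptually, the mapping might look like the sketch below: an import map resolves a stable module specifier to a versioned URL, and updating a component simply means pointing that specifier at a newer file. The specifiers and URLs here are purely illustrative, not Framer’s actual module URLs.

<!-- Illustrative only: map a stable specifier to a versioned module URL. -->
<script type="importmap">
{
  "imports": {
    "canvas-components/HeroSection": "https://modules.example.com/HeroSection-3f2a9c1.mjs"
  }
}
</script>

<script type="module">
  // The canvas keeps importing the same specifier; only the mapped URL
  // changes when a newer version of the component module is published.
  import HeroSection from 'canvas-components/HeroSection';
</script>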

This architecture also makes it possible to pass props to components in these ES Modules, just like developers would do when manually developing a website.
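As a hypothetical example, such a module could export a React component that accepts props like any hand-written one; the file name, component, and props below are invented for illustration.

// HeroSection.mjs — illustrative only, not an actual Framer-generated module.
import React from 'react';

// The module's default export is a plain React component; the canvas (and
// the published site) pass props to it exactly as hand-written code would.
export default function HeroSection({ title, backgroundImage }) {
  return (
    <section style={{ backgroundImage: `url(${backgroundImage})` }}>
      <h1>{title}</h1>
    </section>
  );
}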

Canvas Optimizations

However, the custom approach also introduces some performance challenges, especially around GPU usage and memory management. Framer uses several strategies to optimize performance and achieve a smooth 60fps experience.

Resolution Scaling and Freeform Scrolling

Resolution scaling and freeform scrolling can significantly impact GPU utilization. Framer uses a technique similar to “visibility culling” used in 3D games, where only nodes visible within the current viewport are rendered. This selective rendering (culling) minimizes GPU load by reducing the number of elements rendered at any given time.
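A simplified sketch of this kind of viewport culling is shown below; the node and viewport shapes are hypothetical, but the idea is the same: only nodes whose bounding boxes intersect the visible viewport are rendered.

// Illustrative viewport culling: skip any node that falls completely
// outside the visible viewport. Rect fields are hypothetical.
function intersects(a, b) {
  return (
    a.left < b.left + b.width &&
    a.left + a.width > b.left &&
    a.top < b.top + b.height &&
    a.top + a.height > b.top
  );
}

function visibleNodes(nodes, viewport) {
  // Only the nodes returned here are actually mounted and rendered.
  return nodes.filter((node) => intersects(node.rect, viewport));
}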

Optimizing React Renders

React renders are computationally expensive operations that can slow down performance if not managed properly.

To address this, Framer avoids React renders unless absolutely necessary. The framework tracks changes in the descendants of each node and only re-renders a node when one of its descendants has actually changed, similar to React’s own rendering pipeline. This selective rendering approach ensures that only the necessary parts of the canvas are updated, improving the overall performance and responsiveness of the canvas.
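One way to picture this is with React’s own memoization primitives: the sketch below re-renders a node only when a hypothetical revision counter, bumped whenever the node or one of its descendants changes, actually moves. This is an illustration of the idea, not Framer’s implementation.

import { memo } from 'react';

// Illustrative only: skip re-rendering a node unless something in its
// subtree has changed. The `revision` field is a hypothetical counter.
const CanvasNodeView = memo(
  function CanvasNodeView({ node }) {
    return <div style={node.style}>{node.children}</div>;
  },
  // Returning true tells React the props are "equal" and no render is needed.
  (prev, next) => prev.node.revision === next.node.revision
);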

Handling Canvas Zooming and Panning

During canvas zooming and panning operations, Framer uses CSS transforms accelerated by the GPU, ensuring smooth and responsive interactions. This method is particularly effective as it circumvents the need for virtual tiling, which is a technique that’s often used in rendering engines to manage large canvas areas.

Virtual tiling, or tiled rendering, breaks up the canvas into smaller, manageable tiles. This allows for more efficient rendering and memory management, especially for large or complex scenes.

By using CSS transforms, Framer’s approach ensures a smooth and responsive zooming and panning experience without the overhead of layout recalculations or React renders.
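The general idea can be sketched as a single transform applied to the canvas root; the element id and camera object below are hypothetical.

// Illustrative only: pan and zoom the entire canvas by updating one
// GPU-accelerated CSS transform, with no layout work and no React render.
function applyCamera(canvasRoot, { x, y, zoom }) {
  canvasRoot.style.transformOrigin = '0 0';
  canvasRoot.style.transform = `translate(${x}px, ${y}px) scale(${zoom})`;
}

// e.g. in response to wheel or drag events:
// applyCamera(document.getElementById('canvas-root'), { x: -120, y: 40, zoom: 0.5 });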

Optimizing Rendering Sizes

Framer calculates the optimal rendering size for each layer to maximize GPU performance. It snaps layer dimensions to power-of-two sizes, such as 64, 128, or 256 pixels, which GPUs are designed to handle more efficiently. This lets the GPU smoothly process and display complex graphics, visuals, and animations.

By aligning the rendering sizes to these optimal dimensions, Framer ensures that the GPU can handle the rendering tasks more effectively, resulting in smoother animations and interactions.
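A minimal sketch of that alignment, assuming a simple round-up-to-the-next-power-of-two rule:

// Illustrative only: snap a layer's rendering dimension up to the next
// power of two (64, 128, 256, ...), a size GPUs handle efficiently.
function nextPowerOfTwo(size) {
  return 2 ** Math.ceil(Math.log2(Math.max(size, 1)));
}

nextPowerOfTwo(200); // 256
nextPowerOfTwo(64); // 64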

Security Challenges

Besides rendering performance, there are also security challenges related to handling user-generated code. Framer allows users to write and incorporate their own custom code, but running untrusted code can pose security risks.

Framer uses a technique called "sandboxing" where the user's code is isolated and run separately from the main Framer application. This isolation is achieved by running the user's code inside separate iframes, which act like secure containers that prevent the code from having access to the main application. This helps protect against potential attacks like cross-site scripting (XSS) and unauthorized access to user data stored in cookies.

However, since the user's code within the iframe runs separately from the main application, there needs to be a secure way for these two parts to communicate. This is achieved through the postMessage API, which allows messages to be sent between the iframe and the main application. Both sides set up event listeners that verify the sender's origin, ensuring that only messages from trusted sources are processed. This way, they can safely send and receive data without compromising security.

To keep the user's code synchronized with the canvas, Framer continuously sends information about the canvas's current position and zoom level to the iframe on every frame. This way, the user's code can accurately render and interact with the canvas based on the user's actions in real-time, while still being safely isolated from the rest of the application.
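The sketch below shows the general shape of this communication, with hypothetical origins, message types, and handler names; it is not Framer's actual protocol.

// In the main application: push the current camera state to the sandbox.
// The iframe element id, origins, and message shape are hypothetical.
const sandbox = document.getElementById('user-code-frame');

function syncSandbox(camera) {
  sandbox.contentWindow.postMessage(
    { type: 'canvas/camera', x: camera.x, y: camera.y, zoom: camera.zoom },
    'https://sandbox.example.com' // only deliver to the expected origin
  );
}

// Inside the iframe: accept messages only from the trusted parent origin.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://app.example.com') return;
  if (event.data && event.data.type === 'canvas/camera') {
    renderUserComponent(event.data); // hypothetical render hook
  }
});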

Although this sandboxing approach adds some overhead due to the constant communication between contexts, it creates an important security layer that protects the application from vulnerabilities that could arise from running untrusted, user-generated code.

Performance Considerations

Performance issues on the canvas don't always stem from the content itself. One example is rendering selection boxes, which are the visual indicators that appear on the canvas when a user selects a component.

If Framer were to render a large selection box as a single <div> with a solid border around a layer that's 10,000px x 10,000px, it would be creating a layer for the browser to paint with an area of 100,000,000 pixels. Simply doing this can significantly degrade the canvas's responsiveness when selecting a layer, since it takes a substantial amount of time for the browser to paint such a large filled area for the new selection box div.

Framer takes a more efficient approach: instead of rendering a single large <div>, it draws four separate 1-pixel-wide <div> elements and positions them along the edges of the selected layer using CSS transforms.

This technique minimizes the area that the browser needs to paint, as it only has to render the outline of the selection box rather than the entire filled area. By reducing the memory usage and painting overhead, Framer ensures a snappy and responsive experience when selecting layers, even for very large elements or groups.
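A minimal sketch of this technique, with hypothetical rect fields and styling:

// Illustrative only: outline a selected layer with four thin divs instead
// of one huge bordered div, so the browser only paints the edges.
function renderSelectionOutline(rect) {
  const edges = [
    { x: rect.left, y: rect.top, w: rect.width, h: 1 }, // top
    { x: rect.left, y: rect.top + rect.height - 1, w: rect.width, h: 1 }, // bottom
    { x: rect.left, y: rect.top, w: 1, h: rect.height }, // left
    { x: rect.left + rect.width - 1, y: rect.top, w: 1, h: rect.height }, // right
  ];

  return edges.map(({ x, y, w, h }) => {
    const edge = document.createElement('div');
    edge.style.cssText =
      `position: absolute; width: ${w}px; height: ${h}px; ` +
      `background: #09f; transform: translate(${x}px, ${y}px);`;
    return edge;
  });
}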


Conclusion

By architecting the entire canvas as an extension of React and the DOM, Framer bridges the gap between declarative UI frameworks and conventional canvas technologies, ensuring that what you see on the canvas reflects the final published output 1:1.

Explore what’s possible with Framer and see how it can transform your web development process, making it easier to create pixel-perfect, dynamic, and interactive web experiences. Start with Framer today and enjoy the power of a truly integrated design and development tool.
