
Browser Fingerprinting Techniques: How Each Signal Works (With Code)

Learn how canvas, WebGL, audio, and TLS browser fingerprinting techniques work — with example JavaScript code

Robin
fingerprinting · privacy · digital identity

Browser fingerprinting techniques identify visitors by reading browser and hardware characteristics that persist across sessions — no cookies required. Unlike session cookies, which break the moment someone clears their history or opens an incognito tab, device fingerprinting keeps working. This guide walks through the major techniques, how each is implemented in JavaScript, and which signals do the most work in a real fingerprint.

Browser fingerprinting techniques fall into two categories — active (canvas, WebGL, audio) and passive (user agent, screen, timezone). The most commonly used are:

  1. Canvas fingerprinting

  2. WebGL fingerprinting

  3. Audio fingerprinting

  4. Font detection

  5. TLS fingerprinting (server-side)


Active vs. Passive Fingerprinting

Passive techniques read attributes the browser already exposes by default — no prompting needed. Things like the user agent string, screen resolution, timezone, and language preference are available to any JavaScript running on the page.

Active techniques go a step further: they run code that probes how the device actually renders graphics or processes audio. The key insight is that the same code produces subtly different output on different hardware — different GPU (graphics processing unit), different audio chip, different font rendering engine. Those differences are what get turned into a fingerprint component.

TLS (Transport Layer Security) fingerprinting is a different animal entirely. It happens at the network layer before any JavaScript runs, by inspecting the metadata your browser sends when opening an HTTPS connection. That makes it a server-side technique — nothing runs in the browser at all.


Canvas Fingerprinting

Of all the client-side signals, canvas fingerprinting tends to produce the most distinctive result. The idea is straightforward: render some text and shapes onto a hidden canvas element, then read back the pixel data and hash it.

What makes this useful is that the rendering isn't pixel-perfect across devices. Your GPU model, graphics driver, OS font rendering, and anti-aliasing settings all influence the exact output. Two machines running the same browser and the same JavaScript will produce images that look identical to the human eye but differ at the pixel level — enough to hash reliably.

Here's what a basic implementation looks like. The key is drawing text with mixed styles and colours, which maximises the rendering variation across devices:

function getCanvasFingerprint() {
  const canvas = document.createElement('canvas');
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext('2d');

  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillStyle = '#f60';
  ctx.fillRect(125, 1, 62, 20);
  ctx.fillStyle = '#069';
  ctx.fillText('ThumbmarkJS fingerprint 🌍', 2, 15);
  ctx.fillStyle = 'rgba(102, 204, 0, 0.7)';
  ctx.fillText('ThumbmarkJS fingerprint 🌍', 4, 17);

  return canvas.toDataURL(); // hash this string for a compact fingerprint component
}

const canvasData = getCanvasFingerprint();

For a production-ready implementation, see the canvas component in the ThumbmarkJS open-source library.

One thing to be aware of: Firefox with privacy.resistFingerprinting and Brave with shields enabled both deliberately add noise to canvas output. Safari also limits precision in some configurations. For users on those browsers, canvas alone isn't reliable — which is why combining it with other signals matters.
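
A cheap way to spot this noise is to collect the same signal twice in one session: a deterministic signal returns identical output, while a farbled one usually doesn't. The isSignalStable helper below is a hypothetical sketch, not part of any library:

```javascript
// Sketch: detect noise injection by collecting the same signal twice.
// `collect` is any zero-argument function returning a signal value
// (for example, a canvas fingerprint function like the one above).
function isSignalStable(collect) {
  return collect() === collect();
}

// A deterministic signal compares equal across runs...
isSignalStable(() => 'data:image/png;base64,AAAA'); // → true
// ...while a noised signal (simulated here with Math.random) usually doesn't.
isSignalStable(() => 'pixels-' + Math.random());
```

In practice you'd run the check once per session and, when the signal is unstable, drop that component from the hash rather than let per-session noise break the whole fingerprint.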


WebGL Fingerprinting

WebGL gives you two useful signals. The first is the GPU model itself — there's an extension called WEBGL_debug_renderer_info that returns the unmasked renderer and vendor strings your graphics card reports. The second is the pixel output of a rendered 3D scene, which varies across hardware in the same way canvas does.

The GPU string is particularly valuable because it's stable. Hardware doesn't change often, and the renderer string — something like ANGLE (NVIDIA GeForce RTX 3070 Direct3D11 vs_5_0 ps_5_0) — stays consistent across browser updates.

Reading the renderer string is a few lines of code. The getExtension call is the key part — without it, browsers return a generic masked string that's the same across devices:

function getWebGLFingerprint() {
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return null;

  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) return null;

  return {
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
  };
}

const webgl = getWebGLFingerprint();
// Example output: { renderer: "ANGLE (Apple M2, APPLE M2, OpenGL 4.1)", vendor: "Google Inc. (Apple)" }

For a production-ready implementation, see the webgl component in the ThumbmarkJS open-source library.

Combining the renderer string with a rendered scene hash gives you a signal that's both distinctive and stable — one of the stronger components in a production fingerprint.


Audio Fingerprinting

The Web Audio API lets browsers process audio in JavaScript. A side effect of this is that the floating-point output varies slightly depending on the device's audio hardware and how the OS handles audio processing — and that variation is consistent enough to use as a fingerprint signal.

The technique is to create an inaudible audio signal, render it through an OfflineAudioContext (which processes audio without playing it), then read back the output buffer. No sound is played. You're just reading numbers that happen to differ device-to-device.

The snippet below creates a triangle wave oscillator, runs it through a compressor, and sums the output buffer into a single fingerprint value:

async function getAudioFingerprint() {
  const ctx = new OfflineAudioContext(1, 44100, 44100);
  const oscillator = ctx.createOscillator();
  const compressor = ctx.createDynamicsCompressor();

  oscillator.type = 'triangle';
  oscillator.frequency.value = 10000;
  oscillator.connect(compressor);
  compressor.connect(ctx.destination);
  oscillator.start(0);

  const buffer = await ctx.startRendering();
  const data = buffer.getChannelData(0);

  return data.slice(4500, 5000).reduce((acc, val) => acc + Math.abs(val), 0);
}

const audioHash = await getAudioFingerprint();

For a production-ready implementation, see the audio component in the ThumbmarkJS open-source library.

Firefox blocks OfflineAudioContext entirely in private browsing mode. Safari limits the precision of the output. Like canvas, audio fingerprinting is most useful as one component among several, not as a standalone identifier.


Font Detection

Font detection works by checking which fonts are actually installed on a device. The approach is indirect: render a string in a target font alongside a known fallback font, then compare the widths. If the widths differ, the target font is installed.

On desktop systems — where users often have design tools, office software, or developer utilities installed — the combination of installed fonts can be fairly distinctive. Modern browsers have increasingly restricted font enumeration, so this signal is less reliable than it used to be, but it still contributes to the overall picture.

The function below takes a list of fonts to check and returns only the ones that are installed, using canvas text measurement to detect each:

function detectFonts(fontList) {
  const baseFonts = ['monospace', 'sans-serif', 'serif'];
  const testString = 'mmmmmmmmmmlli';
  const testSize = '72px';
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');

  const baselines = {};
  for (const base of baseFonts) {
    ctx.font = `${testSize} ${base}`;
    baselines[base] = ctx.measureText(testString).width;
  }

  return fontList.filter(font => {
    return baseFonts.some(base => {
      ctx.font = `${testSize} '${font}', ${base}`;
      return ctx.measureText(testString).width !== baselines[base];
    });
  });
}

For a production-ready implementation, see the fonts component in the ThumbmarkJS open-source library.

Font detection is more common in older fingerprinting implementations. It's worth including as a supporting signal, but it's not doing the heavy lifting in a modern stack.


Passive Signals: User Agent, Timezone, Language, Screen

Passive signals don't require any rendering or computation — JavaScript reads them directly from the browser. On their own, none of them are particularly identifying. Timezone covers entire continents. Screen resolution is shared by millions of devices. But they're stable and free to collect, and in combination they meaningfully narrow things down.

The standard set covers the user agent string, timezone, language, screen resolution and colour depth, device pixel ratio, CPU core count (navigator.hardwareConcurrency), and approximate device memory (navigator.deviceMemory).

Stacked together with active signals, these passive attributes contribute real narrowing power. A fingerprint that matches on user agent, timezone, screen resolution, pixel ratio, and core count — alongside canvas and WebGL — is far more distinctive than any one of those alone.


TLS Fingerprinting

When your browser opens an HTTPS connection, it sends a message called a ClientHello that advertises which encryption methods it supports — cipher suites, extensions, elliptic curves — and in what order. That order is determined by the browser implementation, not the user, which means Chrome, Firefox, Safari, and Edge each produce a characteristic pattern.

This is available on the server before any page content is served. Fingerprinting methods like JA3 and JA4 hash these ClientHello attributes into a short fingerprint that reliably identifies the browser and TLS stack. Because it happens at the transport layer, it's unaffected by JavaScript-based evasion.

It's particularly good at catching spoofing. A bot that sets its User-Agent header to look like Chrome but produces a non-Chrome TLS fingerprint is immediately inconsistent. The ThumbmarkJS API uses server-side TLS collection to add this layer on top of browser fingerprints generated client-side.


Combining Signals: Building a Stable Fingerprint

Each technique above produces a piece of data. Combining them into a single hash is where you get a fingerprint you can actually act on — but the combination strategy matters.

The straightforward approach is to collect all signals in parallel (most are async anyway), then hash the combined result. Running them concurrently with Promise.all keeps the total collection time close to the slowest single signal rather than the sum of all of them:

async function buildFingerprint() {
  const [canvas, audio, webgl, fonts] = await Promise.all([
    getCanvasFingerprint(),
    getAudioFingerprint(),
    getWebGLFingerprint(),
    detectFonts(fontList), // fontList: an array of candidate font names to probe
  ]);

  const passive = {
    userAgent: navigator.userAgent,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    language: navigator.language,
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    hardwareConcurrency: navigator.hardwareConcurrency,
    deviceMemory: navigator.deviceMemory,
    devicePixelRatio: window.devicePixelRatio,
  };

  const components = JSON.stringify({ canvas, audio, webgl, fonts, ...passive });
  return hashString(components); // murmur3, SHA-256, etc.
}
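
The hashString above is left abstract; any stable string hash works. As a dependency-free stand-in (an illustrative choice, not what any particular library ships), here's 32-bit FNV-1a:

```javascript
// 32-bit FNV-1a string hash: XOR each character code into the hash,
// then multiply by the FNV prime (16777619), staying in 32-bit range.
function hashString(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    // hash *= 16777619, expressed as shifts so arithmetic stays exact
    hash = (hash + ((hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24))) >>> 0;
  }
  return hash.toString(16).padStart(8, '0');
}

hashString(''); // → '811c9dc5' (the offset basis, since no input was mixed in)
```

FNV-1a is fast and fine for bucketing, but it's a 32-bit non-cryptographic hash; for production fingerprints, murmur3 or SHA-256 (as the comment above suggests) gives better collision behaviour.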

Before deciding how to weight each signal, it's worth understanding their stability over time. Canvas and WebGL renderer strings are close to permanent — they only change when hardware changes. Audio output is similarly stable. The user agent string, on the other hand, changes every browser update, which can cause a fingerprint to shift even when the device hasn't. Screen resolution is stable until someone connects a new monitor.

A production implementation has to account for fingerprint drift, for example by accepting partial matches on the stable components. It's a balancing act: match too loosely and two distinct devices collapse into one identity; match too strictly and the same device looks new after every browser update.
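
One way to tolerate drift is to store component values separately and score partial matches, weighting stable signals more heavily than volatile ones. The weights and component values below are illustrative assumptions, not any library's actual scheme:

```javascript
// Illustrative stability weights: WebGL and canvas are near-permanent,
// the user agent changes every browser update.
const WEIGHTS = { webgl: 3, canvas: 3, audio: 2, fonts: 1, userAgent: 1, screen: 1 };

// Weighted fraction of components that match between a stored
// fingerprint and a candidate: 1.0 = identical, 0.0 = nothing in common.
function matchScore(stored, candidate) {
  let matched = 0;
  let total = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    total += weight;
    if (stored[key] === candidate[key]) matched += weight;
  }
  return matched / total;
}

const stored = { webgl: 'ANGLE (Apple M2)', canvas: 'c1', audio: 'a1', fonts: 'f1', userAgent: 'Chrome/124', screen: '2560x1440x30' };
const updated = { ...stored, userAgent: 'Chrome/125' }; // browser updated, device unchanged
matchScore(stored, updated); // → ~0.91 (10/11): only the volatile signal moved
```

Accepting scores above some threshold (say 0.85) as "same device" is what lets the identifier survive routine browser updates without merging genuinely different devices.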

If you'd rather not build this from scratch, the ThumbmarkJS open-source library handles signal collection, hashing, and API-side enrichment out of the box. It runs in production at over 1 billion fingerprints per month and handles the edge cases — browser resistance, signal degradation, async timing — that tend to surface only once you've shipped something.


Which Signals Contribute the Most

Not all signals are equally useful. Research from the EFF's (Electronic Frontier Foundation) Panopticlick project and ThumbmarkJS's own production data both point the same way: the active signals (canvas, WebGL, audio) contribute the most entropy, while passive attributes add breadth rather than distinctiveness.

ThumbmarkJS's observed data puts client-side-only fingerprinting at around 80% uniqueness — roughly 1 in 5 visitors shares a fingerprint with at least one other visitor in the dataset. Adding TLS fingerprinting and other server-side signals improves this. For fraud prevention, combining fingerprinting signals with behavioural data is standard.
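
Uniqueness in this sense is straightforward to measure on a dataset: count the visitors whose fingerprint occurs exactly once. A toy sketch (the data here is illustrative, not real measurements):

```javascript
// Fraction of visitors whose fingerprint is seen on no other visitor.
// "80% uniqueness" means this function returns ~0.8 on the dataset.
function uniquenessRate(fingerprints) {
  const counts = new Map();
  for (const fp of fingerprints) {
    counts.set(fp, (counts.get(fp) || 0) + 1);
  }
  // a visitor is "unique" if their fingerprint occurs exactly once
  const unique = fingerprints.filter(fp => counts.get(fp) === 1).length;
  return unique / fingerprints.length;
}

uniquenessRate(['a', 'b', 'c', 'd', 'e', 'e', 'f', 'g', 'h', 'h']); // → 0.6
```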

Browser support for each technique varies meaningfully — some signals are masked or randomised, others blocked, depending on the browser and privacy mode. The table below shows how each signal behaves across the four major desktop browsers:

| Signal | Chrome | Firefox | Brave | Safari |
| --- | --- | --- | --- | --- |
| Canvas | Full access | Noise (Strict/private mode) | Farbled per session (shields on) | Noise injected |
| WebGL renderer | Full access | Masked (Strict/private mode) | Farbled per session (shields on) | Noise injected |
| Audio | Full access | Noise (Strict/private mode) | Farbled per session (shields on) | Noise (private mode; all modes in Safari 26+) |
| Font detection | Restricted | Restricted | Randomised (shields on) | Restricted |
| TLS fingerprint | Stable | Stable | Stable | Stable |

Sources: Firefox fingerprinting protection · Brave fingerprinting protections · Brave: fingerprint randomization · WebKit: Private Browsing 2.0


Conclusion

Canvas, WebGL, and audio fingerprinting carry the most weight in a stable, distinctive device identifier. Passive signals add breadth but aren't enough on their own. TLS fingerprinting adds a server-side layer that JavaScript evasion can't touch.

For implementers: collect active and passive signals in parallel, store component values separately to handle signal drift over time, and don't rely on the user agent string for long-term stability. If you want a production-ready starting point rather than building from scratch, the ThumbmarkJS open-source library covers all the signals above under an MIT licence. The ThumbmarkJS API adds connection-level signals the browser can't access on its own and includes mechanisms to handle fingerprint drift for added stability.