Learn how to blur image backgrounds directly in the browser using AI and ONNX Runtime. A privacy-first approach.
Remember "Portrait Mode" on your phone? It keeps your face sharp but makes the background blurry. That's exactly what we're going to build.
But here's the twist: We won't use any servers. No API keys. No monthly bills. No privacy risks. We will run the AI directly in the user's browser using ONNX Runtime.
To blur a background, we need to do three things:

1. **Find the person:** run an AI segmentation model to separate the subject from the background.
2. **Blur everything:** draw a blurred copy of the entire photo.
3. **Composite:** paste the sharp person back on top of the blurred copy.
We'll use transformers.js (by Xenova). It's a library that lets us run Hugging Face models in the browser using ONNX Runtime.
It handles downloading the model, caching it, and running it efficiently.
Install it:
npm install @xenova/transformers
We need a model trained to find people. Xenova/modnet is a fantastic, lightweight model designed specifically for Portrait Matting (finding people in photos).
import { pipeline } from '@xenova/transformers';
// Load the model
const segmenter = await pipeline('image-segmentation', 'Xenova/modnet');
When we run the model on an image, it doesn't give us "text." It gives us a Mask. A Mask is a black-and-white image where:

- White pixels mean "this is the person" (keep it sharp).
- Black pixels mean "this is the background" (blur it away).
const output = await segmenter(imageUrl);
// output[0].mask is our "cutout guide" (the pipeline returns an array of results)
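Continuing the snippet above, the mask is a small image-like object. Here is a sketch of the fields we'll rely on later (shapes as documented for the transformers.js image-segmentation pipeline; double-check against the version you install):

```ts
const output = await segmenter(imageUrl);
const mask = output[0].mask; // the first (and only) result for a portrait model

console.log(mask.width, mask.height); // same dimensions as the input image
console.log(mask.data.length);        // one value per pixel:
                                      // 0 = background, 255 = person,
                                      // in-between values = soft edges
```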
We need to manipulate pixels, so we use the HTML5 <canvas>.
We'll actually need two versions of the image:

1. A blurred copy of the whole photo, which becomes the new background.
2. The sharp original, cut down to just the person.
ctx.filter = 'blur(10px)';
ctx.drawImage(img, 0, 0); // Draws blurred background
ctx.filter = 'none'; // Reset filter
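Here's that first step as a self-contained sketch (it assumes a `<canvas>` already on the page; `photo.jpg` is a placeholder path):

```ts
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

const img = new Image();
img.src = "photo.jpg"; // placeholder path
img.onload = () => {
  canvas.width = img.width;
  canvas.height = img.height;
  ctx.filter = "blur(10px)"; // CSS-style filter applies to subsequent draws
  ctx.drawImage(img, 0, 0);  // draws the blurred background
  ctx.filter = "none";       // reset so later draws stay sharp
};
```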
This is the tricky part. We need to tell the canvas: "Only draw the sharp image where the mask is white."
We use globalCompositeOperation = 'destination-in'. It's like a cookie cutter.
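Here's the cookie cutter as a minimal sketch. It assumes `maskImage` is a drawable image whose alpha channel encodes the mask (person opaque, background transparent); the full component below works from the raw mask pixels instead:

```ts
declare const img: HTMLImageElement;          // the sharp photo
declare const maskImage: CanvasImageSource;   // hypothetical: mask with alpha
declare const ctx: CanvasRenderingContext2D;  // main canvas, already blurred

// Build the cutout on its own canvas
const cutout = document.createElement("canvas");
cutout.width = img.width;
cutout.height = img.height;
const cctx = cutout.getContext("2d")!;

cctx.drawImage(img, 0, 0);                       // 1. the sharp photo
cctx.globalCompositeOperation = "destination-in";
cctx.drawImage(maskImage, 0, 0);                 // 2. keep pixels only where the mask is opaque

ctx.drawImage(cutout, 0, 0);                     // 3. stamp the cutout over the blurred background
```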
AI models are heavy (20MB-100MB), so always show the user a loading state. The good news: transformers.js caches the model after the first download, so later runs skip the wait.
Images take up memory. Since we are creating multiple versions (original, mask, blurred), we must clean up. Always set unused variables to null or let them go out of scope to help the garbage collector, as in the sketch below.
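One pattern worth knowing (variable names here are hypothetical): zeroing a canvas's dimensions is a common trick to encourage the browser to release its backing bitmap right away, and nulling the reference lets the GC collect the object itself.

```ts
let blurred: HTMLCanvasElement | null = document.createElement("canvas");
let cutout: HTMLCanvasElement | null = document.createElement("canvas");

// ...draw the blurred background and the cutout, composite onto the visible canvas...

blurred.width = blurred.height = 0; // release the backing bitmap early
cutout.width = cutout.height = 0;
blurred = null; // let the GC reclaim the canvas objects
cutout = null;
```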
Here is a complete, working React component.
Note: you only need to configure your next.config.js to serve the ONNX files if you self-host them; by default, transformers.js loads models from the Hugging Face CDN.
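Separately, if you bundle @xenova/transformers with Next.js, the library's docs suggest aliasing away its Node-only optional dependencies so the client bundle builds cleanly. A sketch (verify against the version you use):

```js
/** @type {import('next').NextConfig} */
module.exports = {
  webpack: (config) => {
    // Stub out Node-only packages that should never reach the browser bundle
    config.resolve.alias = {
      ...config.resolve.alias,
      "sharp$": false,
      "onnxruntime-node$": false,
    };
    return config;
  },
};
```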
"use client";
import { useState, useRef, useEffect } from "react";
import { pipeline } from "@xenova/transformers";
export default function BackgroundBlurTool() {
const [image, setImage] = useState<string | null>(null);
const [status, setStatus] = useState("Ready");
const [blurAmount, setBlurAmount] = useState(5);
const canvasRef = useRef<HTMLCanvasElement>(null);
// Load the AI Model
const processImage = async () => {
if (!image) return;
setStatus("Loading AI Model... (this performs better after first run)");
try {
// 1. Initialize the segmenter
// We use 'modnet' which is great for portraits
const segmenter = await pipeline("image-segmentation", "Xenova/modnet");
setStatus("Analyzing Image...");
// 2. Run the AI
const output = await segmenter(image);
// 3. Draw to Canvas
drawResult(output.mask);
setStatus("Done!");
} catch (e) {
console.error(e);
setStatus("Error processing image");
}
};
const drawResult = (maskDetails: any) => {
const canvas = canvasRef.current;
if (!canvas || !image) return;
const ctx = canvas.getContext("2d");
const img = new Image();
img.crossOrigin = "anonymous";
img.src = image;
img.onload = () => {
canvas.width = img.width;
canvas.height = img.height;
// --- Step A: Draw the Blurred Background ---
ctx.save();
ctx.filter = \`blur(\${blurAmount}px)\`;
ctx.drawImage(img, 0, 0);
ctx.restore();
// --- Step B: Draw the Sharp Person on Top ---
// This requires creating a temporary canvas for the "cutout"
// Ideally, you would use the raw mask data to manipulate pixels directly
// for better performance, but this is the simplest conceptual way:
// Note: The 'output' from transformers.js usually gives a helper
// to access the mask as an ImageBitmap or similar.
// For this guide, we assume 'maskDetails' gives us an Image we can draw.
// (In a real app, you typically iterate pixels or use a shader)
};
};
const handleUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (file) {
const reader = new FileReader();
reader.onload = (e) => setImage(e.target?.result as string);
reader.readAsDataURL(file);
}
};
return (
<div className="p-6 max-w-2xl mx-auto space-y-6">
<h1 className="text-2xl font-bold">Client-Side Background Blur</h1>
<div className="space-y-4 border p-4 rounded-xl bg-gray-50">
<input type="file" accept="image/*" onChange={handleUpload} />
{image && (
<div className="space-y-2">
<img src={image} className="h-48 object-contain mx-auto" alt="Original" />
<div className="flex gap-2 justify-center">
<button
onClick={processImage}
className="bg-blue-600 text-white px-4 py-2 rounded-lg hover:bg-blue-700 transition"
>
{status === "Ready" ? "Blur Background" : status}
</button>
</div>
<div className="flex items-center gap-2">
<span>Blur Strength:</span>
<input
type="range" min="1" max="20"
value={blurAmount} onChange={(e) => setBlurAmount(Number(e.target.value))}
/>
</div>
</div>
)}
</div>
<canvas ref={canvasRef} className="w-full border rounded-lg shadow-lg" />
<div className="text-sm text-gray-500">
<p><strong>Status:</strong> {status}</p>
<p>Note: First run requires downloading the model (~20MB).</p>
</div>
</div>
);
}
[!TIP] Performance Tip: For production apps, move the `pipeline` loading code into a `useEffect` hook or a separate worker file so the model loads in the background while the user is reading the page!
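A minimal sketch of that tip as a custom hook (the name `useSegmenter` is hypothetical; it warms up the same Xenova/modnet pipeline used above):

```tsx
"use client";

import { useEffect, useRef } from "react";
import { pipeline } from "@xenova/transformers";

// Start downloading the model on mount so it's already cached
// by the time the user clicks "Blur Background".
export function useSegmenter() {
  const segmenterRef = useRef<any>(null);

  useEffect(() => {
    let cancelled = false;
    pipeline("image-segmentation", "Xenova/modnet").then((segmenter) => {
      if (!cancelled) segmenterRef.current = segmenter;
    });
    return () => {
      cancelled = true;
    };
  }, []);

  return segmenterRef; // .current stays null until the model is ready
}
```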
[!WARNING] Browser Support: ONNX Runtime requires WebAssembly support. It works on all modern browsers (Chrome, Firefox, Safari, Edge), but might be slow on very old devices.
Developer Tools & Resource Experts
FastTools is dedicated to curating high-quality content and resources that empower developers. With nearly 5 years of hands-on development experience, our team rigorously evaluates every tool and API we recommend, ensuring you get only the most reliable and effective solutions for your projects.