Cody Bontecou

Client-side AI with Nuxt Workers + Transformers.js

March 24, 2025 · 3 minute read · ai, nuxt, transformers, transformers.js

YouTube video for those who prefer video content.

This post walks you through running NLLB-200, Facebook's text-to-text translation model, directly in the browser.

Core Tools

  • Nuxt Workers: Offload AI tasks to Web Workers to prevent main-thread blocking.
  • Transformers.js: Run pre-trained models (translation, sentiment analysis) directly in the browser.

Project Setup

Let's start by creating a new project with the necessary dependencies.

  • Create the project:
npm create nuxt <project-name>
  • Change into project directory:
cd <project-name>

Dependencies

  • Install runtime dependencies:
npm install @xenova/transformers
  • Add the nuxt-workers module (registered in nuxt.config.ts, as shown below):
npx nuxi@latest module add nuxt-workers
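
If you prefer to add it by hand, or just want to verify the install, nuxt.config.ts only needs the module listed. A minimal sketch (your other options stay as they are):

// nuxt.config.ts

export default defineNuxtConfig({
    modules: ['nuxt-workers'],
})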

Web Worker

Nuxt Workers looks for your web workers in the ~/workers/ directory.

Create translate.ts:

// workers/translate.ts

import { pipeline } from '@xenova/transformers'

const task = 'translation'
const model = 'Xenova/nllb-200-distilled-600M'

// Download the model and build the translation pipeline.
const translator = await pipeline(task, model)

// Functions exported from ~/workers/ are auto-imported on the client by nuxt-workers.
export async function translate(input: string) {
    const translation = await translator(input, {
        tgt_lang: 'spa_Latn', // target: Spanish (Latin script)
        src_lang: 'eng_Latn', // source: English (Latin script)
    })

    return translation
}

Transformers.js downloads the model from the Hugging Face Hub and caches it in your browser session. Now you can interact with it via the pipeline function.

You can view the available task options on Hugging Face.

NLLB-200 relies on FLORES-200 language codes. See here for the full list of languages and their corresponding codes.
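
Any pair of FLORES-200 codes can be plugged into src_lang and tgt_lang. As a hypothetical variation (translateTo is my own helper, not part of the original worker), you could expose the codes as parameters and reuse the translator defined above:

// workers/translate.ts (hypothetical variation)

export async function translateTo(
    input: string,
    tgtLang = 'fra_Latn', // French
    srcLang = 'eng_Latn'  // English
) {
    return await translator(input, {
        tgt_lang: tgtLang,
        src_lang: srcLang,
    })
}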

The Performance Challenge

This implementation looks straightforward. However, there's a significant performance bottleneck hidden in this code. Let's break down what happens when you call the translate function:

  1. Model Loading: The pipeline function downloads and initializes a large machine learning model.
  2. Resource Intensive: This process involves:
    • Downloading the model weights (potentially hundreds of megabytes of data)
    • Parsing and initializing the model in the browser
    • Setting up the computational graph
    • Allocating memory for the model

In the current implementation, if you call translate() multiple times, the model will be downloaded and initialized every single time. You can check this yourself by timing the calls, as sketched below.
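
A rough sketch of how to measure it (numbers depend entirely on your network and hardware; run it from any async context in the app):

// e.g. inside an async function in a component

console.time('first translate')
await translate('Hello')             // pays the download + initialization cost
console.timeEnd('first translate')

console.time('second translate')
await translate('Hello again')       // compare this against the first call
console.timeEnd('second translate')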

Introducing the Singleton Pattern

The Singleton Pattern provides an elegant solution to prevent repeated model initialization:

// workers/translate.ts

import { pipeline, TranslationPipeline } from '@xenova/transformers'

class TranslationSingleton {
    private static instance: TranslationSingleton;
    private pipelinePromise: Promise<TranslationPipeline> | null = null;

    private constructor() {}

    public static getInstance(): TranslationSingleton {
        if (!TranslationSingleton.instance) {
            TranslationSingleton.instance = new TranslationSingleton();
        }
        return TranslationSingleton.instance;
    }

    public async getTranslator(): Promise<TranslationPipeline> {
        if (!this.pipelinePromise) {
            // Start the model download/initialization only on first use and cache
            // the promise so concurrent callers share the same load.
            this.pipelinePromise = pipeline('translation', 'Xenova/nllb-200-distilled-600M') as Promise<TranslationPipeline>;
        }
        return this.pipelinePromise;
    }
}

export async function translate(input: string) {
    const singleton = TranslationSingleton.getInstance();
    const translator = await singleton.getTranslator();

    const translation = await translator(input, {
        tgt_lang: 'spa_Latn',
        src_lang: 'eng_Latn',
    });

    return translation;
}

How the Singleton Solves Our Problem

  1. One-Time Initialization: The model is loaded only once, the first time getTranslator() is called.
  2. Cached Pipeline: Subsequent calls return the same pipeline instance (illustrated below).
  3. Lazy Loading: The model is only loaded when first needed.
  4. Performance Optimization: Reduces unnecessary network requests and computational overhead.
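
In practice this means you can call translate() from anywhere in the app without worrying about reloading the model. A small illustration (not from the original post):

// e.g. inside an async function in a component

// First call: triggers the one-time model download and initialization.
const first = await translate('Hello')

// Later calls: reuse the cached pipeline and only run inference.
const second = await translate('See you tomorrow')

console.log(first, second)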

UI Integration

To use the pages directory, replace app.vue with the following:

// app.vue

<template>
    <NuxtPage />
</template>

Then create pages/index.vue, hooking into our translate function:

// pages/index.vue

<script setup lang="ts">
// translate is auto-imported from workers/translate.ts by nuxt-workers
// and runs inside the web worker.
const input = ref('Hello')
const message = ref()

async function runTranslate() {
    message.value = await translate(input.value)
}
</script>

<template>
    {{ message }}
    <input type="text" v-model="input" />
    <button @click="runTranslate">Translate</button>
</template>

Showcase

Simple and effective

Beyond Translation

Swap the task and model strings to run other Transformers.js pipelines, such as sentiment analysis, entirely in the browser.
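
For example, a sentiment-analysis worker might look something like this. This is a sketch, not part of the original repo; Xenova/distilbert-base-uncased-finetuned-sst-2-english is a commonly used Transformers.js model for this task, and the same singleton treatment from above applies here too.

// workers/sentiment.ts (hypothetical)

import { pipeline } from '@xenova/transformers'

const classifier = await pipeline(
    'sentiment-analysis',
    'Xenova/distilbert-base-uncased-finetuned-sst-2-english'
)

// Returns something like [{ label: 'POSITIVE', score: 0.99 }]
export async function classify(input: string) {
    return await classifier(input)
}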

Repo: nuxt-workers-transformersjs
