
Node.js

TalaDB's Node.js integration uses a prebuilt native .node module produced by napi-rs. The Rust engine runs natively — no WASM, no subprocess — so performance is identical to embedding the Rust library directly.

How it works

Your Node.js process
        │  N-API (native ABI)
        ▼
@taladb/node (.node native module)
        │
        ▼
taladb-core (Rust) + redb (file on disk)

Because the native module links directly into the Node.js process, reads and writes are synchronous at the Rust level. The JavaScript API wraps them in Promises for consistency with the browser adapter, but there is no async overhead beyond V8's microtask scheduling.
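The microtask behavior described above can be sketched without TalaDB at all. In this self-contained example, a Promise that resolves synchronously (as the wrapper's Promises effectively do) still defers its callbacks to the microtask queue — `microtaskOrder` is an illustrative name, not part of any API:

```ts
// A Promise resolved synchronously — analogous to the wrapper around a sync
// native call — still runs its .then callbacks on the microtask queue.
async function microtaskOrder(): Promise<string[]> {
  const order: string[] = []
  const p = Promise.resolve(42) // stands in for a wrapped native call
  p.then(() => order.push('then callback'))
  order.push('sync code after the call')
  await p // yields to the microtask queue; the .then above runs first
  order.push('after await')
  return order
}
```

So the cost of the Promise wrapper is one trip through the microtask queue, nothing more.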

Prerequisites

  • Node.js 18 or later
  • A supported OS / architecture:
    • linux-x64-gnu
    • linux-arm64-gnu
    • darwin-x64
    • darwin-arm64 (Apple Silicon)
    • win32-x64-msvc
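If you want to fail fast on an unsupported platform, a small preflight check can compare `process.platform`/`process.arch` against the list above. `SUPPORTED`, `platformTriple`, and `isSupportedPlatform` are hypothetical helper names, not part of the taladb API:

```ts
// Hypothetical preflight check: does the current process match a supported
// platform-arch pair? (The ABI suffixes like -gnu/-msvc reduce to platform-arch.)
const SUPPORTED = new Set([
  'linux-x64',
  'linux-arm64',
  'darwin-x64',
  'darwin-arm64',
  'win32-x64',
])

function platformTriple(): string {
  return `${process.platform}-${process.arch}`
}

function isSupportedPlatform(): boolean {
  return SUPPORTED.has(platformTriple())
}
```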

Installation

Option A — Via the taladb wrapper (portable)

bash
pnpm add taladb @taladb/node

Use openDB from taladb. The API is async and identical to the browser version — code can be shared between Node.js and browser projects:

ts
import { openDB } from 'taladb'
const db = await openDB('./myapp.db')

Option B — Standalone (Node.js only)

bash
pnpm add @taladb/node

Import TalaDBNode directly for a synchronous API with one fewer dependency:

ts
import { TalaDBNode } from '@taladb/node'
const db = TalaDBNode.open('./myapp.db')
const id = db.collection('users').insert({ name: 'Alice' })  // no await

The @taladb/node package ships platform-specific prebuilt binaries; the loader generated by napi-rs selects the correct .node file for your platform automatically.

Opening a database

ts
import { openDB } from 'taladb'

// Opens (or creates) a redb database file at the given path
const db = await openDB('./data/myapp.db')

The database file is created if it does not exist. Parent directories must exist. The .db extension is conventional but not required.
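Since openDB will not create missing parent directories, you can create them first. `ensureParentDir` below is a hypothetical helper built on Node's standard library, not part of the taladb API:

```ts
import fs from 'node:fs/promises'
import path from 'node:path'

// Create the parent directory chain for a database path before opening it.
async function ensureParentDir(dbPath: string): Promise<void> {
  await fs.mkdir(path.dirname(dbPath), { recursive: true })
}
```

Call it before `openDB('./data/myapp.db')` so `./data` is guaranteed to exist.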

In-memory database

For testing or ephemeral use:

ts
import { TalaDBNode } from '@taladb/node'

const db = TalaDBNode.openInMemory()

An in-memory database is not persisted to disk and is discarded when the process exits.

TypeScript setup

ts
// tsconfig.json — recommended settings
{
  "compilerOptions": {
    "module": "Node16",
    "moduleResolution": "Node16",
    "target": "ES2022",
    "strict": true
  }
}

Basic CRUD

ts
import { openDB } from 'taladb'

interface Task {
  _id?: string
  title: string
  done: boolean
  priority: 1 | 2 | 3
  createdAt: number
}

const db = await openDB('./tasks.db')
const tasks = db.collection<Task>('tasks')

// Indexes — create once at startup (idempotent)
await tasks.createIndex('done')
await tasks.createIndex('priority')

// Insert
const id = await tasks.insert({
  title: 'Write documentation',
  done: false,
  priority: 1,
  createdAt: Date.now(),
})

// Find all undone tasks with priority 1
const urgent = await tasks.find({
  $and: [{ done: false }, { priority: 1 }],
})

// Find one by ID
const task = await tasks.findOne({ _id: id })

// Mark done
await tasks.updateOne({ _id: id }, { $set: { done: true } })

// Delete completed tasks
const removed = await tasks.deleteMany({ done: true })

Using the low-level native API directly

If you need synchronous access or want to avoid the taladb wrapper, import @taladb/node directly:

ts
import { TalaDBNode } from '@taladb/node'

const db = TalaDBNode.open('./myapp.db')
const col = db.collection('users')

// Fully synchronous — these calls return values directly, no Promises, no await
col.createIndex('email')
const id = col.insert({ name: 'Alice', email: 'alice@example.com' })
const alice = col.findOne({ email: 'alice@example.com' })

db.close()

Server usage example

ts
// server.ts — Express + TalaDB
import express from 'express'
import { openDB } from 'taladb'

interface Event {
  _id?: string
  type: string
  payload: Record<string, unknown>
  ts: number
}

const app = express()
app.use(express.json())

const db = await openDB('./events.db')
const events = db.collection<Event>('events')
await events.createIndex('type')
await events.createIndex('ts')

app.post('/events', async (req, res) => {
  const id = await events.insert({
    type: req.body.type,
    payload: req.body.payload ?? {},
    ts: Date.now(),
  })
  res.json({ id })
})

app.get('/events', async (req, res) => {
  const { type, since } = req.query
  const filter: Record<string, unknown> = {}
  if (type) filter.type = type
  if (since) filter.ts = { $gte: Number(since) }
  const docs = await events.find(filter)
  res.json(docs)
})

app.listen(3000)

Vector search

TalaDB's vector index works identically on Node.js. Pair it with any embedding library that runs in Node — the OpenAI SDK, a local ONNX model, or Hugging Face Transformers.

Setup — embedding function

ts
// Option A: OpenAI (remote, requires API key) — use one option; each defines embed()
import OpenAI from 'openai'
const openai = new OpenAI()
async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: text,
  })
  return res.data[0].embedding  // 1536 dimensions
}

// Option B: local model with @xenova/transformers (no API key, runs in-process)
import { pipeline } from '@xenova/transformers'
const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2')
async function embed(text: string): Promise<number[]> {
  const out = await embedder(text, { pooling: 'mean', normalize: true })
  return Array.from(out.data) as number[]
}

Create a vector index

ts
interface Doc {
  _id?: string
  content: string
  source: string
  embedding: number[]
}

const docs = db.collection<Doc>('docs')

// Call once at startup — idempotent
await docs.createVectorIndex('embedding', { dimensions: 1536 }) // OpenAI
// or
await docs.createVectorIndex('embedding', { dimensions: 384 })  // MiniLM

Insert with embedding

ts
const content = 'TalaDB stores documents and vectors on-device.'
await docs.insert({
  content,
  source: 'readme',
  embedding: await embed(content),
})
Search

Embed the query text, then ask for the k nearest documents:

ts
const query = await embed('embedded local database')
const results = await docs.findNearest('embedding', query, 5)

for (const { document, score } of results) {
  console.log(score.toFixed(3), document.content)
}
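The docs above don't pin down how score is computed; a common convention for normalized embeddings (note `normalize: true` in the MiniLM option) is cosine similarity. This self-contained sketch illustrates the metric — it is not TalaDB's internal code:

```ts
// Cosine similarity between two equal-length vectors (illustration only).
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0)
}

function cosineSimilarity(a: number[], b: number[]): number {
  const norm = (v: number[]) => Math.sqrt(dot(v, v))
  return dot(a, b) / (norm(a) * norm(b))
}
```

For unit-length vectors the denominator is 1, so cosine similarity reduces to a plain dot product.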
Filtered search

An optional filter restricts which documents are considered:

ts
// Find the 3 most relevant docs from 'readme' source only
const results = await docs.findNearest('embedding', query, 3, {
  source: 'readme',
})

Ingestion script example

ts
import { openDB } from 'taladb'
import fs from 'node:fs/promises'

const db = await openDB('./knowledge.db')
const col = db.collection<Doc>('docs')
await col.createVectorIndex('embedding', { dimensions: 1536 })

const files = await fs.readdir('./content')
for (const file of files) {
  const content = await fs.readFile(`./content/${file}`, 'utf8')
  await col.insert({
    content,
    source: file,
    embedding: await embed(content),
  })
}

console.log(`Indexed ${files.length} documents`)
await db.close()

Migrations

ts
const db = await openDB('./myapp.db', {
  migrations: [
    {
      version: 1,
      description: 'Index users by email',
      up: async (db) => {
        await db.collection('users').createIndex('email')
      },
    },
  ],
})

Migrations run at open time, in version order, inside a single atomic transaction.
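The version-ordering contract can be sketched in plain TypeScript. `runPending` and its signature are illustrative, not the taladb internals:

```ts
// Illustrative sketch: run only migrations newer than the stored version, in order.
interface Migration {
  version: number
  description: string
  up: () => Promise<void>
}

async function runPending(
  migrations: Migration[],
  currentVersion: number,
): Promise<number> {
  const pending = migrations
    .filter((m) => m.version > currentVersion)
    .sort((a, b) => a.version - b.version)
  for (const m of pending) {
    await m.up() // TalaDB applies all of these inside one atomic transaction
  }
  // Return the new schema version
  return pending.length > 0 ? pending[pending.length - 1].version : currentVersion
}
```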

Snapshot export / import

ts
import fs from 'node:fs/promises'
import { Database } from 'taladb' // class providing restoreFromSnapshot

// Export
const bytes = await db.exportSnapshot()
await fs.writeFile('backup.taladb', bytes)

// Restore
const data = await fs.readFile('backup.taladb')
const restored = await Database.restoreFromSnapshot(data)

Closing

ts
await db.close()

Always close the database before the process exits to flush any pending writes and release the file lock.
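One way to guarantee that is a once-only cleanup hook wired to the usual termination signals. `registerShutdown` is a hypothetical helper, not part of taladb:

```ts
// Run cleanup exactly once, whether triggered by a signal or a normal exit path.
function registerShutdown(cleanup: () => void | Promise<void>): () => Promise<void> {
  let done = false
  const runOnce = async (): Promise<void> => {
    if (done) return
    done = true
    await cleanup()
  }
  for (const sig of ['SIGINT', 'SIGTERM'] as const) {
    process.once(sig, () => {
      void runOnce().finally(() => process.exit(0))
    })
  }
  return runOnce // call this yourself on normal shutdown
}
```

Typical usage would be `registerShutdown(() => db.close())` right after opening the database.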

Testing with an in-memory database

ts
// vitest / jest
import { TalaDBNode } from '@taladb/node'

let db: TalaDBNode

beforeEach(() => {
  db = TalaDBNode.openInMemory()
})

afterEach(() => {
  db.close()
})

Using an in-memory database in tests means no file system cleanup and no interference between test runs.

CLI

The taladb-cli binary can inspect any redb database file produced by TalaDB. Download the pre-built binary for your platform from the GitHub Releases page, then:

bash
taladb inspect ./myapp.db
taladb export  ./myapp.db
taladb count   ./myapp.db users
taladb drop    ./myapp.db sessions

Released under the MIT License.