Serverless driver
Connect to Prisma Postgres from serverless and edge environments
The Prisma Postgres serverless driver (@prisma/ppg) is a lightweight client for connecting to Prisma Postgres using raw SQL. It uses HTTP and WebSocket protocols instead of traditional TCP connections, enabling database access in constrained environments where native PostgreSQL drivers cannot run.
The Prisma Postgres serverless driver is currently in Early Access and not yet recommended for production scenarios.
Key features
Because it communicates over HTTP and WebSocket rather than TCP, the serverless driver runs where traditional PostgreSQL drivers cannot:
- Compatible with Cloudflare Workers, Vercel Edge Functions, Deno Deploy, AWS Lambda, Bun, and browsers
- Stream results row-by-row to handle large datasets with constant memory usage
- Pipeline multiple queries over a single connection, reducing latency by up to 3x
- SQL template literals with automatic parameterization and full TypeScript support
- Built-in transactions, batch operations, and extensible type system
- Automatic connection pooling across all available Prisma Postgres regions for optimal performance
Use this driver for edge/serverless environments without full Node.js support, or when working with large result sets that benefit from streaming.
For standard Node.js environments, use the node-postgres driver for lower latency with direct TCP connections.
Prerequisite: Get your connection string
The serverless driver requires a Prisma Postgres Direct TCP connection URL:
```
postgres://identifier:key@db.prisma.io:5432/postgres?sslmode=require
```

Find this in the API Keys section of your Prisma Postgres dashboard. The connection string is used only to extract authentication credentials; no direct TCP connection is made from the client.
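The connection string's components map onto standard URL fields. As an illustration only (the driver's actual parsing is internal to @prisma/ppg and may differ), the WHATWG URL API shows which credentials such a URL carries:

```typescript
// Illustrative sketch: the fields any standard URL parser would extract
// from a Direct TCP connection string. Not the driver's real parsing logic.
const url = new URL("postgres://identifier:key@db.prisma.io:5432/postgres?sslmode=require");

const credentials = {
  user: url.username,                       // "identifier"
  key: url.password,                        // "key"
  host: url.hostname,                       // "db.prisma.io"
  database: url.pathname.slice(1),          // "postgres"
  sslmode: url.searchParams.get("sslmode"), // "require"
};

console.log(credentials);
```

The `identifier` and `key` placeholders correspond to the values shown in your dashboard.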
If you don't have a Prisma Postgres database, create one using the create-db CLI tool:
```
npx prisma create-db
```

Installation
Install the driver package:

```
npm install @prisma/ppg
```

To use the driver with Prisma ORM (see below), also install the `@prisma/adapter-ppg` adapter package.

Usage
Query with SQL template literals
Use the prismaPostgres() high-level API with SQL template literals and automatic parameterization:
```typescript
import { prismaPostgres, defaultClientConfig } from "@prisma/ppg";

const ppg = prismaPostgres(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!));

type User = { id: number; name: string; email: string };

// SQL template literals with automatic parameterization
const users = await ppg.sql<User>`
  SELECT * FROM users WHERE email = ${"user@example.com"}
`.collect();

console.log(users[0].name);
```

Use with Prisma ORM
Use the PrismaPostgresAdapter to connect Prisma Client via the serverless driver:
```typescript
import { PrismaClient } from "../generated/prisma/client";
import { PrismaPostgresAdapter } from "@prisma/adapter-ppg";

const prisma = new PrismaClient({
  adapter: new PrismaPostgresAdapter({
    connectionString: process.env.PRISMA_DIRECT_TCP_URL,
  }),
});

const users = await prisma.user.findMany();
```

Stream results
Results are returned as CollectableIterator<T>. Stream rows one at a time for constant memory usage, or collect all rows into an array:
```typescript
type User = { id: number; name: string; email: string };

// Stream rows one at a time (constant memory usage)
for await (const user of ppg.sql<User>`SELECT * FROM users`) {
  console.log(user.name);
}

// Or collect all rows into an array
const allUsers = await ppg.sql<User>`SELECT * FROM users`.collect();
```

Pipeline queries
Send multiple queries over a single WebSocket connection without waiting for responses. Queries are sent immediately and results arrive in FIFO order:
```typescript
import { client, defaultClientConfig } from "@prisma/ppg";

const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!));
const session = await cl.newSession();

// Send all queries immediately (pipelined)
const [usersResult, ordersResult, productsResult] = await Promise.all([
  session.query("SELECT * FROM users"),
  session.query("SELECT * FROM orders"),
  session.query("SELECT * FROM products"),
]);

session.close();
```

With 100ms network latency, 3 sequential queries take 300ms (3 x RTT), but pipelined queries take only 100ms (1 x RTT).
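The latency arithmetic can be made explicit with a toy model (numbers taken from the sentence above; no real network is involved):

```typescript
// Toy model of pipelining latency; no real network is involved.
const RTT_MS = 100; // round-trip time per network exchange
const QUERIES = 3;

// Sequential: each query waits for the previous response, so one RTT each.
const sequentialMs = QUERIES * RTT_MS;

// Pipelined: all queries are in flight at once and share a single round trip
// (ignoring server processing time and bandwidth limits).
const pipelinedMs = 1 * RTT_MS;

console.log(sequentialMs, pipelinedMs); // 300 100
```

Real workloads also pay server execution time per query, so the observed speedup is bounded by, not equal to, the number of pipelined queries.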
Parameter streaming
Parameters over 1KB are automatically streamed rather than buffered in memory. For large binary parameters, use boundedByteStreamParameter(), which creates a BoundedByteStreamParameter carrying the total byte size that the PostgreSQL protocol requires up front:
```typescript
import { client, defaultClientConfig, boundedByteStreamParameter, BINARY } from "@prisma/ppg";

const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!));

// Large binary data (e.g., file content)
const stream = getReadableStream(); // Your ReadableStream source
const totalSize = 1024 * 1024; // Total size must be known in advance

// Create a bounded byte stream parameter
const streamParam = boundedByteStreamParameter(stream, BINARY, totalSize);

// Automatically streamed - constant memory usage
await cl.query("INSERT INTO files (data) VALUES ($1)", streamParam);
```

For Uint8Array data, use byteArrayParameter():
```typescript
import { client, defaultClientConfig, byteArrayParameter, BINARY } from "@prisma/ppg";

const cl = client(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!));

const bytes = new Uint8Array([1, 2, 3, 4]);
const param = byteArrayParameter(bytes, BINARY);
await cl.query("INSERT INTO files (data) VALUES ($1)", param);
```
Transactions and batch operations
Transactions automatically handle BEGIN, COMMIT, and ROLLBACK:
```typescript
const result = await ppg.transaction(async (tx) => {
  await tx.sql.exec`INSERT INTO users (name) VALUES ('Alice')`;
  const users = await tx.sql<User>`SELECT * FROM users WHERE name = 'Alice'`.collect();
  return users[0].name;
});
```

Batch operations execute multiple statements in a single round-trip within an automatic transaction:
```typescript
const [users, affected] = await ppg.batch<[User[], number]>(
  { query: "SELECT * FROM users WHERE id < $1", parameters: [5] },
  { exec: "INSERT INTO users (name) VALUES ($1)", parameters: ["Charlie"] },
);
```

Type handling
When using defaultClientConfig(), common PostgreSQL types are automatically parsed (boolean, int2, int4, int8, float4, float8, text, varchar, json, jsonb, date, timestamp, timestamptz):
```typescript
import { prismaPostgres, defaultClientConfig } from "@prisma/ppg";

const ppg = prismaPostgres(defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!));

// JSON/JSONB automatically parsed
const rows = await ppg.sql<{ data: { key: string } }>`
  SELECT '{"key": "value"}'::jsonb as data
`.collect();
console.log(rows[0].data.key); // "value"

// int8 parsed to JavaScript BigInt
const bigints = await ppg.sql<{
  big: bigint;
}>`SELECT 9007199254740991::int8 as big`.collect();

// Dates parsed to Date objects
const dates = await ppg.sql<{
  created: Date;
}>`SELECT NOW() as created`.collect();
```

Custom parsers and serializers
Extend or override the type system with custom parsers (by PostgreSQL OID) and serializers (by type guard):
```typescript
import { client, defaultClientConfig } from "@prisma/ppg";
import type { ValueParser } from "@prisma/ppg";

// Custom parser for UUID type
const uuidParser: ValueParser<string> = {
  oid: 2950,
  parse: (value) => (value ? value.toUpperCase() : null),
};

const config = defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!);
const cl = client({
  ...config,
  parsers: [...config.parsers, uuidParser], // Append to defaults
});
```

For custom serializers, place them before the defaults so they take precedence:
```typescript
import { client, defaultClientConfig } from "@prisma/ppg";
import type { ValueSerializer } from "@prisma/ppg";

class Point {
  constructor(
    public x: number,
    public y: number,
  ) {}
}

const pointSerializer: ValueSerializer<Point> = {
  supports: (value: unknown): value is Point => value instanceof Point,
  serialize: (value: Point) => `(${value.x},${value.y})`,
};

const config = defaultClientConfig(process.env.PRISMA_DIRECT_TCP_URL!);
const cl = client({
  ...config,
  serializers: [pointSerializer, ...config.serializers], // Your serializer first
});

await cl.query("INSERT INTO locations (point) VALUES ($1)", new Point(10, 20));
```

See the npm package documentation for more details.
Platform compatibility
The driver works in any environment with fetch and WebSocket APIs:
| Platform | HTTP Transport | WebSocket Transport |
|---|---|---|
| Cloudflare Workers | ✅ | ✅ |
| Vercel Edge Functions | ✅ | ✅ |
| AWS Lambda | ✅ | ✅ |
| Deno Deploy | ✅ | ✅ |
| Bun | ✅ | ✅ |
| Node.js 18+ | ✅ | ✅ |
| Browsers | ✅ | ✅ (with CORS) |
Transport modes
- HTTP transport (stateless): Each query is an independent HTTP request. Best for simple queries and edge functions.
- WebSocket transport (stateful): Persistent connection for multiplexed queries. Best for transactions, pipelining, and multiple queries. Create a session with client().newSession().
API overview
prismaPostgres(config)
High-level API with SQL template literals, transactions, and batch operations. Recommended for most use cases.
client(config)
Low-level API with explicit parameter passing and session management. Use when you need fine-grained control.
See the npm package for complete API documentation.
Error handling
Structured error types are provided: DatabaseError, HttpResponseError, WebSocketError, ValidationError.
```typescript
import { DatabaseError } from "@prisma/ppg";

try {
  await ppg.sql`SELECT * FROM invalid_table`.collect();
} catch (error) {
  if (error instanceof DatabaseError) {
    console.log(error.code);
  }
}
```

Connection pooling enabled by default
The serverless driver pools connections across all available Prisma Postgres regions by default, with no additional configuration. This reduces connection overhead and improves query performance regardless of your deployment region.
Limitations
- Requires a Prisma Postgres instance and does not work with local development databases
- Currently in Early Access and not yet recommended for production