orange-orm

alfateam · 66.6k downloads · ISC · 4.7.12

Object Relational Mapper

Keywords: typescript orm, orm, sql, mysql

README

Orange ORM Logo

The ultimate Object Relational Mapper for Node.js, Bun and Deno, offering seamless integration with a variety of popular databases. Orange ORM supports both TypeScript and JavaScript, including both CommonJS and ECMAScript.

Key Features

  • Rich Querying Model: Orange provides a powerful and intuitive querying model, making it easy to retrieve, filter, and manipulate data from your databases.
  • Active Record: With a concise and expressive syntax, Orange enables you to interact with your database using the Active Record Pattern.
  • No Code Generation Required: Enjoy full IntelliSense, even in table mappings, without the need for cumbersome code generation.
  • TypeScript and JavaScript Support: Orange fully supports both TypeScript and JavaScript, allowing you to leverage the benefits of static typing and modern ECMAScript features.
  • Works in the Browser: You can securely use Orange in the browser by utilizing the Express.js plugin, which serves to safeguard sensitive database credentials from exposure at the client level and protect against SQL injection. This method mirrors a traditional REST API, augmented with advanced TypeScript tooling for enhanced functionality.

Supported Databases and Runtimes

Runtimes: Node, Deno, Bun, Cloudflare, Web

Databases:

  • Postgres
  • PGlite
  • MS SQL
  • MySQL
  • Oracle
  • SAP ASE
  • SQLite
  • Cloudflare D1

This is the modern TypeScript documentation. Are you looking for the Classic Documentation?

Sponsorship

If you value the hard work behind Orange and wish to see it evolve further, consider sponsoring. Your support fuels the journey of refining and expanding this tool for our developer community.

Installation

```bash
npm install orange-orm
```

Example

Watch the tutorial video on YouTube

Relations diagram

📄 map.ts

```javascript
import orange from 'orange-orm';

const map = orange.map(x => ({
  customer: x.table('customer').map(({ column }) => ({
    id: column('id').numeric().primary().notNullExceptInsert(),
    name: column('name').string(),
    balance: column('balance').numeric(),
    isActive: column('isActive').boolean(),
  })),

  order: x.table('_order').map(({ column }) => ({
    id: column('id').numeric().primary().notNullExceptInsert(),
    orderDate: column('orderDate').date().notNull(),
    customerId: column('customerId').numeric().notNullExceptInsert(),
  })),

  orderLine: x.table('orderLine').map(({ column }) => ({
    id: column('id').numeric().primary(),
    orderId: column('orderId').numeric(),
    product: column('product').string(),
    amount: column('amount').numeric(),
  })),

  package: x.table('package').map(({ column }) => ({
    id: column('packageId').numeric().primary().notNullExceptInsert(),
    lineId: column('lineId').numeric().notNullExceptInsert(),
    sscc: column('sscc').string() // the barcode
  })),

  deliveryAddress: x.table('deliveryAddress').map(({ column }) => ({
    id: column('id').numeric().primary(),
    orderId: column('orderId').numeric(),
    name: column('name').string(),
    street: column('street').string(),
    postalCode: column('postalCode').string(),
    postalPlace: column('postalPlace').string(),
    countryCode: column('countryCode').string(),
  }))

})).map(x => ({
  orderLine: x.orderLine.map(({ hasMany }) => ({
    packages: hasMany(x.package).by('lineId')
  })),
  order: x.order.map(v => ({
    customer: v.references(x.customer).by('customerId'),
    lines: v.hasMany(x.orderLine).by('orderId'),
    deliveryAddress: v.hasOne(x.deliveryAddress).by('orderId'),
  }))
}));

export default map;
```

📄 update.ts

```javascript
import map from './map';
const db = map.sqlite('demo.db');

updateRow();

async function updateRow() {
  const order = await db.order.getById(2, {
    lines: true
  });
  order.lines.push({
    product: 'broomstick',
    amount: 300
  });

  await order.saveChanges();
}
```

📄 filter.ts

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    where: x => x.lines.any(line => line.product.contains('broomstick'))
      .and(x.customer.name.startsWith('Harry')),
    lines: {
      packages: true
    },
    deliveryAddress: true,
    customer: true
  });
}
```

API

Mapping tables

To define a mapping, you employ the map() method, linking your tables and columns to corresponding object properties. You provide a callback function whose parameter represents the database. Each column within a database table is designated with the column() method, in which you specify its name. This returns a reference to a column object that lets you express further column properties, such as its data type or whether it serves as a primary key.

Relationships between tables can also be outlined. Using the methods hasOne, hasMany, and references, you can establish connections that reflect the relationships in your data schema. In the example below, an 'order' is linked to a 'customer' reference, a 'deliveryAddress', and multiple 'lines'. The hasMany and hasOne relations represent ownership: the tables 'deliveryAddress' and 'orderLine' are owned by the 'order' table, and therefore they contain the 'orderId' column referring to their parent table, 'order'. A similar relationship exists between orderLine and package, hence the packages are owned by the orderLine. Conversely, the customer table is independent and can exist without any knowledge of the 'order' table. Therefore, we say that the order table references the customer table, which necessitates a 'customerId' column in the 'order' table.

📄 map.ts

```javascript
import orange from 'orange-orm';

const map = orange.map(x => ({
  customer: x.table('customer').map(({ column }) => ({
    id: column('id').numeric().primary().notNullExceptInsert(),
    name: column('name').string(),
    balance: column('balance').numeric(),
    isActive: column('isActive').boolean(),
  })),

  order: x.table('_order').map(({ column }) => ({
    id: column('id').numeric().primary().notNullExceptInsert(),
    orderDate: column('orderDate').date().notNull(),
    customerId: column('customerId').numeric().notNullExceptInsert(),
  })),

  orderLine: x.table('orderLine').map(({ column }) => ({
    id: column('id').numeric().primary(),
    orderId: column('orderId').numeric(),
    product: column('product').string(),
    amount: column('amount').numeric(),
  })),

  package: x.table('package').map(({ column }) => ({
    id: column('packageId').numeric().primary().notNullExceptInsert(),
    lineId: column('lineId').numeric().notNullExceptInsert(),
    sscc: column('sscc').string() // the barcode
  })),

  deliveryAddress: x.table('deliveryAddress').map(({ column }) => ({
    id: column('id').numeric().primary(),
    orderId: column('orderId').numeric(),
    name: column('name').string(),
    street: column('street').string(),
    postalCode: column('postalCode').string(),
    postalPlace: column('postalPlace').string(),
    countryCode: column('countryCode').string(),
  }))
})).map(x => ({
  orderLine: x.orderLine.map(({ hasMany }) => ({
    packages: hasMany(x.package).by('lineId')
  })),
  order: x.order.map(({ hasOne, hasMany, references }) => ({
    customer: references(x.customer).by('customerId'),
    deliveryAddress: hasOne(x.deliveryAddress).by('orderId'),
    lines: hasMany(x.orderLine).by('orderId')
  }))
}));

export default map;
```

The init.ts script resets our SQLite database. It's worth noting that SQLite databases are represented as single files, which makes them wonderfully straightforward to manage. At the start of the script, we import our database mapping from the map.ts file. This gives us access to the db object, which we'll use to interact with our SQLite database.

Then, we define a SQL string. This string outlines the structure of our SQLite database. It first drops any existing tables named 'deliveryAddress', 'package', 'orderLine', '_order', and 'customer', ensuring a clean slate. Then, it creates these tables anew with the necessary columns and constraints. Because of a peculiarity in SQLite, which only allows one statement execution at a time, we split this SQL string into separate statements using the split() method, which breaks up the string at every semicolon.

📄 init.ts

```javascript
import map from './map';
const db = map.sqlite('demo.db');

const sql = `DROP TABLE IF EXISTS deliveryAddress;
DROP TABLE IF EXISTS package;
DROP TABLE IF EXISTS orderLine;
DROP TABLE IF EXISTS _order;
DROP TABLE IF EXISTS customer;
CREATE TABLE customer (
  id INTEGER PRIMARY KEY,
  name TEXT,
  balance NUMERIC,
  isActive INTEGER
);
CREATE TABLE _order (
  id INTEGER PRIMARY KEY,
  orderDate TEXT,
  customerId INTEGER REFERENCES customer
);
CREATE TABLE orderLine (
  id INTEGER PRIMARY KEY,
  orderId INTEGER REFERENCES _order,
  product TEXT,
  amount NUMERIC(10,2)
);
CREATE TABLE package (
  packageId INTEGER PRIMARY KEY,
  lineId INTEGER REFERENCES orderLine,
  sscc TEXT
);
CREATE TABLE deliveryAddress (
  id INTEGER PRIMARY KEY,
  orderId INTEGER REFERENCES _order,
  name TEXT,
  street TEXT,
  postalCode TEXT,
  postalPlace TEXT,
  countryCode TEXT
)`;

async function init() {
  const statements = sql.split(';');
  for (let i = 0; i < statements.length; i++) {
    await db.query(statements[i]);
  }
}

export default init;
```

In SQLite, columns with the INTEGER PRIMARY KEY attribute autoincrement by default. Each time a new record is inserted into the table, SQLite automatically produces a numeric key for the id column that is one greater than the largest existing key. This mechanism is particularly handy when you want unique identifiers for your table rows without manually entering each id.
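The statement-splitting step can be tried in isolation, without any database. This standalone sketch mirrors what init.ts does; the extra filter for whitespace-only fragments is an assumption added here to guard against a trailing semicolon, not something init.ts itself needs:

```javascript
// A multi-statement SQL string, split on semicolons as in init.ts.
const sql = `DROP TABLE IF EXISTS customer;
CREATE TABLE customer ( id INTEGER PRIMARY KEY, name TEXT );
CREATE TABLE _order ( id INTEGER PRIMARY KEY, customerId INTEGER REFERENCES customer )`;

// Filter out whitespace-only fragments so a trailing semicolon
// cannot produce an empty statement.
const statements = sql.split(';').filter(s => s.trim().length > 0);
console.log(statements.length); // 3
```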
Connecting

SQLite

On Node.js 21 and earlier, you need to install the sqlite3 dependency. On Node.js 22 and later, Bun, and Deno, you don't need it, as SQLite support is built in.

```bash
npm install sqlite3
```

```javascript
import map from './map';

const db = map.sqlite('demo.db');

// … use the database …

// IMPORTANT for serverless functions:
await db.close(); // closes the client connection
```

With connection pool

```bash
npm install sqlite3
```

```javascript
import map from './map';

const db = map.sqlite('demo.db', { size: 10 });

// … use the pool …

// IMPORTANT for serverless functions:
await db.close(); // closes all pooled connections
```

Why close?

In serverless environments (e.g. AWS Lambda, Vercel, Cloudflare Workers), execution contexts are frequently frozen and resumed. Explicitly closing the client or pool ensures that file handles are released promptly and prevents "database locked" errors between invocations.
From the browser

You can securely use Orange from the browser by utilizing the Express plugin, which safeguards sensitive database credentials from exposure at the client level. This technique bypasses the need to transmit raw SQL queries directly from the client to the server. Instead, it records method calls initiated by the client, which are later replayed and authenticated on the server. This not only reinforces security by preventing the disclosure of raw SQL queries on the client side but also facilitates a smoother operation. Essentially, this method mirrors a traditional REST API, augmented with advanced TypeScript tooling for enhanced functionality. You can read more about it in the section called In the browser.

📄 server.ts

```javascript
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';

const db = map.sqlite('demo.db');

express().disable('x-powered-by')
  .use(json({ limit: '100mb' }))
  .use(cors())
  // for demonstrational purposes, authentication middleware is not shown here
  .use('/orange', db.express())
  .listen(3000, () => console.log('Example app listening on port 3000!'));
```

📄 browser.ts

```javascript
import map from './map';

const db = map.http('http://localhost:3000/orange');
```

MySQL

```bash
npm install mysql2
```
```javascript
import map from './map';

const db = map.mysql('mysql://test:test@mysql/test');
```

MS SQL

```bash
npm install tedious
```
```javascript
import map from './map';

const db = map.mssql({
  server: 'mssql',
  options: {
    encrypt: false,
    database: 'test'
  },
  authentication: {
    type: 'default',
    options: {
      userName: 'sa',
      password: 'P@assword123',
    }
  }
});
```

PostgreSQL
With Bun, you don't need to install the pg package, as PostgreSQL support is built in.

```bash
npm install pg
```

```javascript
import map from './map';

const db = map.postgres('postgres://postgres:postgres@postgres/postgres');
```

With schema

```javascript
import map from './map';

const db = map.postgres('postgres://postgres:postgres@postgres/postgres?search_path=custom');
```

PGlite
```bash
npm install @electric-sql/pglite
```

In this example we use the in-memory Postgres. Read more about PGlite connection configs.

```javascript
import map from './map';

const db = map.pglite(/* config?: PGliteOptions */);
```

Cloudflare D1
📄 wrangler.toml

```toml
name = "d1-tutorial"
main = "src/index.ts"
compatibility_date = "2025-02-04"

# Bind a D1 database. D1 is Cloudflare's native serverless SQL database.
# Docs: https://developers.cloudflare.com/workers/wrangler/configuration/#d1-databases
[[d1_databases]]
binding = "DB"
database_name = "<your-name-for-the-database>"
database_id = "<your-guid-for-the-database>"
```

📄 src/index.ts

```javascript
import map from './map';

export interface Env {
  // Must match the binding name in wrangler.toml
  DB: D1Database;
}

export default {
  async fetch(request, env): Promise<Response> {
    const db = map.d1(env.DB);
    const customers = await db.customer.getAll();
    return Response.json(customers);
  },
} satisfies ExportedHandler<Env>;
```

Oracle

```bash
npm install oracledb
```
```javascript
import map from './map';

const db = map.oracle({
  user: 'sys',
  password: 'P@assword123',
  connectString: 'oracle/XE',
  privilege: 2
});
```

SAP Adaptive Server

Even though msnodesqlv8 was developed for MS SQL, it also works for SAP ASE because it is ODBC compliant.

```bash
npm install msnodesqlv8
```

```javascript
import { fileURLToPath } from 'url';
import { dirname } from 'path';
import map from './map';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// download the ODBC driver from the SAP web pages
const db = map.sap(`Driver=${__dirname}/libsybdrvodb.so;SERVER=sapase;Port=5000;UID=sa;PWD=sybase;DATABASE=test`);
```
Inserting rows

In the code below, we first import the table mapping "map.ts" and the setup script "init.ts", both defined in the preceding steps. The setup script executes a raw query that creates the necessary tables.

Next, we insert two customers, named "George" and "Harry", into the customer table by calling "db.customer.insert". We then insert an array of two orders into the order table. Each order contains an orderDate, customer information, a deliveryAddress, and lines for the order items. We use the customer constants "george" and "harry" from the previous inserts. Observe that we don't pass in any primary keys; all tables here have auto-incrementing keys.

The second argument to "db.order.insert" specifies a fetching strategy. The fetching strategy determines which associated data is retrieved from the database after insertion and included in the resulting orders object. In this case, it instructs the database to retrieve the customer, deliveryAddress, and lines for each order. Without a fetching strategy, "db.order.insert" would return only the root level of each order: the id, orderDate, and customerId.

```javascript
import map from './map';
import init from './init';

const db = map.sqlite('demo.db');

insertRows();

async function insertRows() {
  await init();

  const george = await db.customer.insert({
    name: 'George',
    balance: 177,
    isActive: true
  });

  const harry = await db.customer.insert({
    name: 'Harry',
    balance: 200,
    isActive: true
  });

  const orders = await db.order.insert([
    {
      orderDate: new Date(2022, 0, 11, 9, 24, 47),
      customer: george,
      deliveryAddress: {
        name: 'George',
        street: 'Node street 1',
        postalCode: '7059',
        postalPlace: 'Jakobsli',
        countryCode: 'NO'
      },
      lines: [
        { product: 'Bicycle', amount: 250 },
        { product: 'Small guitar', amount: 150 }
      ]
    },
    {
      customer: harry,
      orderDate: new Date(2021, 0, 11, 12, 22, 45),
      deliveryAddress: {
        name: 'Harry Potter',
        street: '4 Privet Drive, Little Whinging',
        postalCode: 'GU4',
        postalPlace: 'Surrey',
        countryCode: 'UK'
      },
      lines: [
        { product: 'Magic wand', amount: 300 }
      ]
    }
  ], { customer: true, deliveryAddress: true, lines: true }); // fetching strategy
}
```

Conflict resolution

By default, the strategy for inserting rows is optimistic: if a row is inserted with an already existing primary key, the database raises an exception. Currently, there are three concurrency strategies:

  • optimistic - raises an exception if another row was already inserted on that primary key.
  • overwrite - overwrites the property, regardless of changes by others.
  • skipOnConflict - silently avoids updating the property if another user has modified it in the interim.

The concurrency option can be set either for the whole table or individually for each column. In the example below, the concurrency strategy on the vendor table is set to overwrite, except for the balance column, which uses skipOnConflict. Since a row with id: 1 already exists, the name and isActive fields will be overwritten, but the balance will remain the same as in the original record, demonstrating the effect of combining multiple concurrency strategies.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

insertRows();

async function insertRows() {
  const db2 = db({
    vendor: {
      balance: { concurrency: 'skipOnConflict' },
      concurrency: 'overwrite'
    }
  });

  await db2.vendor.insert({ id: 1, name: 'John', balance: 100, isActive: true });

  // this will overwrite all fields but balance
  const george = await db2.vendor.insert({ id: 1, name: 'George', balance: 177, isActive: false });
  console.dir(george, { depth: Infinity });
  // {
  //   id: 1,
  //   name: 'George',
  //   balance: 100,
  //   isActive: false
  // }
}
```
Fetching rows

Orange has a rich querying model. As you navigate through, you'll learn about the various methods available to retrieve data from your tables, whether you want to fetch all rows, many rows with specific criteria, or a single row based on a primary key.
The fetching strategy in Orange is optional, and its use is influenced by your specific needs. You can define the fetching strategy either on the table level or the column level. This granularity gives you the freedom to decide how much related data you want to pull along with your primary request.

All rows

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    customer: true,
    deliveryAddress: true,
    lines: {
      packages: true
    }
  });
}
```

Limit, offset and order by
This script demonstrates how to fetch orders with customer, lines, packages and deliveryAddress, limiting the results to 10, skipping the first row, and sorting the data based on the orderDate in descending order followed by id. The lines are sorted by product.
```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    offset: 1,
    orderBy: ['orderDate desc', 'id'],
    limit: 10,
    customer: true,
    deliveryAddress: true,
    lines: {
      packages: true,
      orderBy: 'product'
    },
  });
}
```

With aggregated results
You can count records and aggregate numerical columns. The following operators are supported:

  • count
  • sum
  • min
  • max
  • avg

You can also elevate associated data to a parent level for easier access. In the example below, the customer's balance is elevated to the root level.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    numberOfLines: x => x.count(x => x.lines.id),
    totalAmount: x => x.sum(x => x.lines.amount),
    balance: x => x.customer.balance
  });
}
```

Many rows filtered

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    where: x => x.lines.any(line => line.product.contains('i'))
      .and(x.customer.balance.greaterThan(180)),
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```

You can also use the alternative syntax for the where-filter. This way, the filter can be constructed independently of the fetching strategy. Keep in mind that you must use the getMany method instead of the getAll method.
It is also possible to combine where-filter with the independent filter when using the getMany method.
```javascript
async function getRows() {
  const filter = db.order.lines.any(line => line.product.contains('i'))
    .and(db.order.customer.balance.greaterThan(180));

  const orders = await db.order.getMany(filter, {
    // where: x => ... can be combined as well
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```

Single row filtered

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const order = await db.order.getOne(undefined /* optional filter */, {
    where: x => x.customer(customer => customer.isActive.eq(true)
      .and(customer.name.startsWith('Harr'))),
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```

You can also use the alternative syntax for the where-filter. This way, the filter can be constructed independently of the fetching strategy.
It is also possible to combine where-filter with the independent filter when using the getOne method.
```javascript
async function getRows() {
  const filter = db.order.customer(customer => customer.isActive.eq(true)
    .and(customer.name.startsWith('Harr')));
  // equivalent, but creates slightly different sql:
  // const filter = db.order.customer.isActive.eq(true).and(db.order.customer.name.startsWith('Harr'));

  const order = await db.order.getOne(filter, {
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```

Single row by primary key

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const order = await db.order.getById(1, {
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```

Many rows by primary key

```javascript
import map from './map';
const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getMany([
    { id: 1 },
    { id: 2 }
  ], {
    customer: true,
    deliveryAddress: true,
    lines: true
  });
}
```
Updating rows

To update rows, modify the property values and invoke the method saveChanges(). The function updates only the modified columns, not the entire row. Rows in child relations can also be updated, as long as the parent order owns the child tables. In our illustration, the order table owns both the deliveryAddress and the lines tables because they're part of a hasOne/hasMany relationship. By contrast, the customer is part of a reference relationship and thus can't be updated here. But you can detach the reference to the customer by assigning it to null or undefined. (Setting order.customerId to null or undefined achieves the same result.)

Updating a single row

```javascript
import map from './map';
const db = map.sqlite('demo.db');

update();

async function update() {
  const order = await db.order.getById(1, {
    customer: true,
    deliveryAddress: true,
    lines: true
  });

  order.orderDate = new Date();
  order.deliveryAddress = null;
  order.lines.push({ product: 'Cloak of invisibility', amount: 600 });

  await order.saveChanges();
}
```

Updating many rows

```javascript
import map from './map';
const db = map.sqlite('demo.db');

update();

async function update() {
  let orders = await db.order.getAll({
    orderBy: 'id',
    lines: true,
    deliveryAddress: true,
    customer: true
  });

  orders[0].orderDate = new Date();
  orders[0].deliveryAddress.street = 'Node street 2';
  orders[0].lines[1].product = 'Big guitar';

  orders[1].orderDate = '2023-07-14T12:00:00'; // iso-string is allowed
  orders[1].deliveryAddress = null;
  orders[1].customer = null;
  orders[1].lines.push({ product: 'Cloak of invisibility', amount: 600 });

  await orders.saveChanges();
}
```

Selective updates
The update method is ideal for updating specific columns and relationships across one or multiple rows. You must provide a where filter to specify which rows to target. If you include a fetching strategy, the affected rows and their related data will be returned; otherwise, no data is returned.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

update();

async function update() {
  const propsToBeModified = {
    orderDate: new Date(),
    customerId: 2,
    lines: [
      { id: 1, product: 'Bicycle', amount: 250 },      // already existing line
      { id: 2, product: 'Small guitar', amount: 150 }, // already existing line
      { product: 'Piano', amount: 800 }                // the new line to be inserted
    ]
  };

  const strategy = { customer: true, deliveryAddress: true, lines: true };
  const orders = await db.order.update(propsToBeModified, { where: x => x.id.eq(1) }, strategy);
}
```

Replacing a row from JSON
The replace method is suitable when a complete overwrite is required from a JSON object, typically in a REST API. However, it's important to consider that this method replaces the entire row and its children, which might not always be desirable in a multi-user environment.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

replace();

async function replace() {
  const modified = {
    id: 1,
    orderDate: '2023-07-14T12:00:00',
    customer: { id: 2 },
    deliveryAddress: {
      name: 'Roger', // modified name
      street: 'Node street 1',
      postalCode: '7059',
      postalPlace: 'Jakobsli',
      countryCode: 'NO'
    },
    lines: [
      { id: 1, product: 'Bicycle', amount: 250 },
      { id: 2, product: 'Small guitar', amount: 150 },
      { product: 'Piano', amount: 800 } // the new line to be inserted
    ]
  };

  const order = await db.order.replace(modified, { customer: true, deliveryAddress: true, lines: true });
}
```

Partially updating from JSON
The updateChanges method applies a partial update based on the difference between an original and a modified row. It is often preferable because it minimizes the risk of unintentionally overwriting data that may have been altered by other users in the meantime. To use it, you also pass in the original row object as it was before modification.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

update();

async function update() {
  const original = {
    id: 1,
    orderDate: '2023-07-14T12:00:00',
    customer: { id: 2 },
    deliveryAddress: {
      id: 1,
      name: 'George',
      street: 'Node street 1',
      postalCode: '7059',
      postalPlace: 'Jakobsli',
      countryCode: 'NO'
    },
    lines: [
      { id: 1, product: 'Bicycle', amount: 250 },
      { id: 2, product: 'Small guitar', amount: 150 }
    ]
  };

  const modified = JSON.parse(JSON.stringify(original));
  modified.deliveryAddress.name = 'Roger';
  modified.lines.push({ product: 'Piano', amount: 800 });

  const order = await db.order.updateChanges(modified, original, { customer: true, deliveryAddress: true, lines: true });
}
```

Conflict resolution
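The idea behind diff-based updates can be illustrated with a small standalone sketch. This is not Orange's internal implementation; the shallowDiff helper is a hypothetical name introduced here only to show why comparing the original and modified snapshots yields a minimal set of columns to write:

```javascript
// Conceptual sketch: keep only the properties whose values differ
// between the original and modified snapshots of a row.
function shallowDiff(original, modified) {
  const changes = {};
  for (const key of Object.keys(modified)) {
    // JSON.stringify gives a cheap structural comparison for demo purposes
    if (JSON.stringify(modified[key]) !== JSON.stringify(original[key])) {
      changes[key] = modified[key];
    }
  }
  return changes;
}

const original = { id: 1, orderDate: '2023-07-14T12:00:00', customerId: 2 };
const modified = { ...original, orderDate: '2023-08-01T09:00:00' };

console.log(shallowDiff(original, modified)); // { orderDate: '2023-08-01T09:00:00' }
```

Only the changed column would reach the UPDATE statement; untouched columns stay out of the write and thus cannot clobber other users' edits.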
Rows are updated using an optimistic concurrency approach by default. This means that if a property being edited was meanwhile altered, an exception is raised, indicating the row was modified by a different user. You can change the concurrency strategy either at the table or the column level. Currently, there are three concurrency strategies:

  • optimistic - raises an exception if another user changes the property during an update.
  • overwrite - overwrites the property, regardless of changes by others.
  • skipOnConflict - silently avoids updating the property if another user has modified it in the interim.

In the example below, the concurrency strategy for orderDate is set to overwrite. This implies that if other users modify orderDate while you're making changes, their updates will be overwritten.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

update();

async function update() {
  const order = await db.order.getById(1, {
    customer: true,
    deliveryAddress: true,
    lines: true
  });

  order.orderDate = new Date();
  order.deliveryAddress = null;
  order.lines.push({ product: 'Cloak of invisibility', amount: 600 });

  await order.saveChanges({ orderDate: { concurrency: 'overwrite' } });
}
```
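How the three strategies resolve a conflicting write can be sketched in plain JavaScript. This is a conceptual model only, not Orange's internal code; the resolve function and its parameters are hypothetical names for illustration:

```javascript
// Conceptual model of the three concurrency strategies for one column.
// committedValue: what is in the database now
// myValue:        what this client wants to write
// originalValue:  what this client originally read
function resolve(strategy, committedValue, myValue, originalValue) {
  const conflict = committedValue !== originalValue; // someone else changed it
  if (!conflict) return myValue;
  switch (strategy) {
    case 'optimistic':
      throw new Error('Conflict: row was modified by another user');
    case 'overwrite':
      return myValue; // my write wins
    case 'skipOnConflict':
      return committedValue; // keep the other user's value
  }
}

console.log(resolve('overwrite', 'X', 'Y', 'A'));      // 'Y'
console.log(resolve('skipOnConflict', 'X', 'Y', 'A')); // 'X'
```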
Upserting rows

It is possible to perform upserts by taking advantage of the overwrite strategy. Currently, there are three concurrency strategies:

  • optimistic - raises an exception if another row was already inserted on that primary key.
  • overwrite - overwrites the property, regardless of changes by others.
  • skipOnConflict - silently avoids updating the property if another user has modified it in the interim.

The concurrency option can be set either for the whole table or individually for each column. In the example below, the concurrency strategy on the vendor table is set to overwrite, except for the balance column, which uses skipOnConflict. Since a row with id: 1 already exists, the name and isActive fields will be overwritten, but the balance will remain the same as in the original record, demonstrating the effect of combining multiple concurrency strategies.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

insertRows();

async function insertRows() {
  const db2 = db({
    vendor: {
      balance: { concurrency: 'skipOnConflict' },
      concurrency: 'overwrite'
    }
  });

  await db2.vendor.insert({ id: 1, name: 'John', balance: 100, isActive: true });

  // this will overwrite all fields but balance
  const george = await db2.vendor.insert({ id: 1, name: 'George', balance: 177, isActive: false });
  console.dir(george, { depth: Infinity });
  // {
  //   id: 1,
  //   name: 'George',
  //   balance: 100,
  //   isActive: false
  // }
}
```
Deleting rows

Rows in owner tables cascade deletes to their child tables. In essence, if a table has ownership over other tables through hasOne and hasMany relationships, removing a record from the parent table also removes its corresponding records in the child tables. This safeguards against leaving orphaned records and upholds data integrity. By contrast, tables that are merely referenced, through reference relationships, remain unaffected by deletions. For a deeper dive into these relationships and behaviors, refer to the section on Mapping tables.
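The ownership semantics can be modeled with a small in-memory sketch, independent of Orange itself. The tables object and deleteOrder function below are hypothetical illustrations of the rule, not the library's implementation:

```javascript
// Conceptual model: deleting an owner row removes owned children
// (hasOne/hasMany) but leaves merely referenced rows untouched.
const tables = {
  customer: [{ id: 1, name: 'George' }],                      // referenced, not owned
  order: [{ id: 1, customerId: 1 }],
  orderLine: [{ id: 1, orderId: 1 }, { id: 2, orderId: 1 }],  // owned by order
  deliveryAddress: [{ id: 1, orderId: 1 }],                   // owned by order
};

function deleteOrder(id) {
  // cascade to owned child tables
  tables.orderLine = tables.orderLine.filter(l => l.orderId !== id);
  tables.deliveryAddress = tables.deliveryAddress.filter(a => a.orderId !== id);
  tables.order = tables.order.filter(o => o.id !== id);
  // tables.customer is deliberately left alone: it is only referenced
}

deleteOrder(1);
console.log(tables.orderLine.length, tables.customer.length); // 0 1
```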

Deleting a single row

```javascript
import map from './map';
const db = map.sqlite('demo.db');

deleteRow();

async function deleteRow() {
  const order = await db.order.getById(1);
  await order.delete(); // will also delete deliveryAddress and lines,
                        // but not customer
}
```

Deleting a row in an array
A common workflow involves retrieving multiple rows and then deleting a specific row from the array. This is straightforward with Orange, which allows updating, inserting, and deleting multiple rows in a single transaction. To modify the array, simply add, update, or remove elements, then invoke the saveChanges() method on the array to persist the changes.

```javascript
import map from './map';
const db = map.sqlite('demo.db');

updateInsertDelete();

async function updateInsertDelete() {
  const orders = await db.order.getAll({
    customer: true,
    deliveryAddress: true,
    lines: true
  });

  // will add a line to the first order
  orders[0].lines.push({ product: 'secret weapon', amount: 355 });

  // will delete the second row
  orders.splice(1, 1);

  // will insert a new order with lines and deliveryAddress, and set customerId
  orders.push({
    orderDate: new Date(2022, 0, 11, 9, 24, 47),
    customer: { id: 1 },
    deliveryAddress: {
      name: 'George',
      street: 'Node street 1',
      postalCode: '7059',
      postalPlace: 'Jakobsli',
      countryCode: 'NO'
    },
    lines: [
      { product: 'Magic tent', amount: 349 }
    ]
  });

  await orders.saveChanges();
}
```

Deleting many rows

```javascript
import map from './map';
const db = map.sqlite('demo.db');

deleteRows();

async function deleteRows() {
  let orders = await db.order.getAll({
    where: x => x.customer.name.eq('George')
  });
  await orders.delete();
}
```

Deleting with concurrency
Concurrent operations can lead to conflicts. When you still want to proceed with the deletion regardless of potential interim changes, the 'overwrite' concurrency strategy can be used. This example demonstrates deleting rows even if the "delivery address" has been modified in the meantime. You can read more about concurrency strategies in Updating rows.
```javascript
import map from './map';

const db = map.sqlite('demo.db');

deleteRows();

async function deleteRows() {
  let orders = await db.order.getAll({
    where: x => x.deliveryAddress.name.eq('George'),
    customer: true,
    deliveryAddress: true,
    lines: true
  });
  await orders.delete({
    deliveryAddress: {
      concurrency: 'overwrite'
    }
  });
}
```

**Batch delete**

When removing a large number of records based on a certain condition, batch deletion can be efficient.
However, it's worth noting that batch deletes don't follow the cascade delete behavior by default. To achieve cascading in batch deletes, you must explicitly call the deleteCascade method.
```javascript
import map from './map';

const db = map.sqlite('demo.db');

deleteRows();

async function deleteRows() {
  const filter = db.order.deliveryAddress.name.eq('George');
  await db.order.delete(filter);
}
```

**Batch delete cascade**

When deleting records, sometimes associated data in related tables also needs to be removed. This cascade delete helps maintain database integrity.
```javascript
import map from './map';

const db = map.sqlite('demo.db');

deleteRows();

async function deleteRows() {
  const filter = db.order.deliveryAddress.name.eq('George');
  await db.order.deleteCascade(filter);
}
```

**Batch delete by primary key**

For efficiency, you can also delete records directly if you know their primary keys.
```javascript
import map from './map';

const db = map.sqlite('demo.db');

deleteRows();

async function deleteRows() {
  await db.customer.delete([{ id: 1 }, { id: 2 }]);
}
```
<summary>In the browser</summary>

You can use Orange in the browser by using the adapter for Express. Instead of sending raw SQL queries from the client to the server, this approach records the method calls in the client. These method calls are then replayed at the server, ensuring a higher level of security by not exposing raw SQL on the client side.
Raw SQL queries, raw SQL filters and transactions are disabled at the HTTP client for security reasons. If you would like Orange to support other web frameworks, like NestJS, Fastify, etc., please let me know.

📄 server.ts

```javascript
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';

const db = map.sqlite('demo.db');

express().disable('x-powered-by')
  .use(json({ limit: '100mb' }))
  .use(cors())
  //for demonstrational purposes, authentication middleware is not shown here.
  .use('/orange', db.express())
  .listen(3000, () => console.log('Example app listening on port 3000!'));
```

📄 browser.ts

```javascript
import map from './map';

const db = map.http('http://localhost:3000/orange');

updateRows();

async function updateRows() {
  const order = await db.order.getOne(undefined, {
    where: x => x.lines.any(line => line.product.startsWith('Magic wand'))
      .and(x.customer.name.startsWith('Harry')),
    lines: true
  });
  order.lines.push({
    product: 'broomstick',
    amount: 300,
  });
  await order.saveChanges();
}
```

**Interceptors and base filter**

In the next setup, axios interceptors are employed on the client side to add an Authorization header to requests. Meanwhile, on the server side, an Express middleware (validateToken) is utilized to ensure the presence of the Authorization header, while a base filter is applied on the order table to filter incoming requests based on the customerId extracted from this header. This combined approach enhances security by ensuring that users can only access data relevant to their authorization level and that every request is accompanied by a token. In real-world applications, it's advisable to use a more comprehensive token system and expand error handling to manage a wider range of potential issues.
One notable side effect compared to the previous example, is that only the order table is exposed for interaction, while all other potential tables in the database remain shielded from direct client access (except for related tables). If you want to expose a table without a baseFilter, just set the tableName to an empty object.
📄 server.ts

```javascript
import map from './map';
import { json } from 'body-parser';
import express from 'express';
import cors from 'cors';

const db = map.sqlite('demo.db');

express().disable('x-powered-by')
  .use(json({ limit: '100mb' }))
  .use(cors())
  .use('/orange', validateToken)
  .use('/orange', db.express({
    order: {
      baseFilter: (db, req, _res) => {
        const customerId = Number.parseInt(req.headers.authorization.split(' ')[1]); //Bearer 2
        return db.order.customerId.eq(customerId);
      }
    }
  }))
  .listen(3000, () => console.log('Example app listening on port 3000!'));

function validateToken(req, res, next) {
  // For demo purposes, we're just checking against existence of authorization header
  // In a real-world scenario, this would be a dangerous approach because it bypasses signature validation
  const authHeader = req.headers.authorization;
  if (authHeader)
    return next();
  else
    return res.status(401).json({ error: 'Authorization header missing' });
}
```

📄 browser.ts

```javascript
import map from './map';

const db = map.http('http://localhost:3000/orange');

updateRows();

async function updateRows() {
  db.interceptors.request.use((config) => {
    // For demo purposes, we're just adding a hardcoded token
    // In a real-world scenario, use a proper JSON web token
    config.headers.Authorization = 'Bearer 2'; //customerId
    return config;
  });

  db.interceptors.response.use(
    response => response,
    (error) => {
      if (error.response && error.response.status === 401) {
        console.dir('Unauthorized, dispatch a login action');
        //redirectToLogin();
      }
      return Promise.reject(error);
    }
  );

  const order = await db.order.getOne(undefined, {
    where: x => x.lines.any(line => line.product.startsWith('Magic wand'))
      .and(db.order.customer.name.startsWith('Harry')),
    lines: true
  });
  order.lines.push({
    product: 'broomstick',
    amount: 300
  });
  await order.saveChanges();
}
```
<summary>Fetching strategies</summary>

Efficient data retrieval is crucial for the performance and scalability of applications. The fetching strategy gives you the freedom to decide how much related data you want to pull along with your primary request. Below are examples of common fetching strategies, including fetching entire relations and subsets of columns. When no fetching strategy is provided, all columns are fetched, but no relations.

**Including a relation**

This example fetches orders and their corresponding delivery addresses, including all columns from both entities.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    deliveryAddress: true
  });
}
```

**Including a subset of columns**

In scenarios where only specific fields are required, you can specify a subset of columns to include. In the example below, orderDate is explicitly excluded, so all other columns in the order table are included by default. For the deliveryAddress relation, only countryCode and name are included, excluding all other columns. If you have a mix of explicitly included and excluded columns, all other columns will be excluded from that table.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    orderDate: false,
    deliveryAddress: {
      countryCode: true,
      name: true
    }
  });
}
```

<summary>Basic filters</summary>

Filters are a versatile tool for both data retrieval and bulk deletions. They allow for precise targeting of records based on specific criteria and can be combined with operators like any and exists and even raw sql for more nuanced control. Filters can also be nested to any depth, enabling complex queries that can efficiently manage and manipulate large datasets. This dual functionality enhances database management by ensuring data relevance and optimizing performance.

**Equal**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.customer.getAll({
    where: x => x.name.equal('Harry')
  });
}
```

**Not equal**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.customer.getAll({
    where: x => x.name.notEqual('Harry')
  });
}
```

**Contains**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.customer.getAll({
    where: x => x.name.contains('arr')
  });
}
```

**Starts with**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.customer.getAll({
    where: x => x.name.startsWith('Harr')
  });
}
```

**Ends with**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.customer.getAll({
    where: x => x.name.endsWith('arry')
  });
}
```

**Greater than**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.orderDate.greaterThan('2023-07-14T12:00:00')
  });
}
```

**Greater than or equal**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.orderDate.greaterThanOrEqual('2023-07-14T12:00:00')
  });
}
```

**Less than**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.orderDate.lessThan('2023-07-14T12:00:00')
  });
}
```

**Less than or equal**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.orderDate.lessThanOrEqual('2023-07-14T12:00:00')
  });
}
```

**Between**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.orderDate.between('2023-07-14T12:00:00', '2024-07-14T12:00:00')
  });
}
```

**In**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.customer.name.in('George', 'Harry')
  });
}
```

**Raw sql filter**

You can use the raw SQL filter alone or in combination with a regular filter. Here the raw filter queries for customers whose names end with "arry". The composite filter combines the raw SQL filter and a regular filter that checks for a customer balance greater than 100. It is important to note that due to security precautions aimed at preventing SQL injection attacks, using raw SQL filters directly via browser inputs is not allowed. Attempting to do so will result in an HTTP status 403 (Forbidden) being returned.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rawFilter = {
    sql: 'name like ?',
    parameters: ['%arry']
  };
  const rowsWithRaw = await db.customer.getAll({
    where: () => rawFilter
  });
  const rowsWithCombined = await db.customer.getAll({
    where: x => x.balance.greaterThan(100).and(rawFilter)
  });
}
```
<summary>Relation filters</summary>

Relation filters offer a dynamic approach to selectively include or exclude related data based on specific criteria. In the provided example, all orders are retrieved, yet it filters the order lines to only include those that feature products with "broomstick" in their description. By setting deliveryAddress and customer to true, we also ensure the inclusion of these related entities in our result set.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const orders = await db.order.getAll({
    lines: {
      where: x => x.product.contains('broomstick')
    },
    deliveryAddress: true,
    customer: true
  });
}
```
<summary>And, or, not, exists</summary>

These operators serve as the backbone for constructing complex queries that allow for more granular control over the data fetched from the database. The examples provided below are self-explanatory for anyone familiar with basic programming concepts and database operations. The design philosophy underscores the importance of clear, readable code that doesn't sacrifice power for simplicity.

**And**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.customer.name.equal('Harry')
      .and(x.orderDate.greaterThan('2023-07-14T12:00:00'))
  });
}
```

**Or**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: y => y.customer(x => x.name.equal('George')
      .or(x.name.equal('Harry')))
  });
}
```

**Not**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  //Neither George nor Harry
  const rows = await db.order.getAll({
    where: y => y.customer(x => x.name.equal('George')
      .or(x.name.equal('Harry')))
      .not()
  });
}
```

**Exists**

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: x => x.deliveryAddress.exists()
  });
}
```
<summary>Any, all, none</summary>

These operators are used in scenarios involving relationships within database records.

**Any**

The any operator is employed when the objective is to find records where at least one item in a collection meets the specified criteria.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: y => y.lines.any(x => x.product.contains('guitar'))
    //equivalent syntax:
    //where: x => x.lines.product.contains('guitar')
  });
}
```

**All**

Conversely, the all operator ensures that every item in a collection adheres to the defined condition.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: y => y.lines.all(x => x.product.contains('a'))
  });
}
```

**None**

The none operator, as the name suggests, is used to select records where not a single item in a collection meets the condition.

```javascript
import map from './map';

const db = map.sqlite('demo.db');

getRows();

async function getRows() {
  const rows = await db.order.getAll({
    where: y => y.lines.none(x => x.product.equal('Magic wand'))
  });
}
```
<summary>Transactions</summary>

We initiate a database transaction using db.transaction. Within the transaction, a customer is retrieved and its balance updated using the tx object to ensure operations are transactional. An error is deliberately thrown to demonstrate a rollback, ensuring all previous changes within the transaction are reverted. Always use the provided tx object for operations within the transaction to maintain data integrity.

(NOTE: Transactions are not supported for Cloudflare D1)

```javascript
import map from './map';

const db = map.sqlite('demo.db');

execute();

async function execute() {
  await db.transaction(async tx => {
    const customer = await tx.customer.getById(1);
    customer.balance = 100;
    await customer.saveChanges();
    throw new Error('This will rollback');
  });
}
```
<summary>Data types</summary>

Orange is database agnostic - meaning it can work with multiple database systems without being specifically tied to any one of them. When the ORM behaves consistently across various databases, developers don't need to remember specific quirks or differences when switching between databases. They can rely on the ORM to provide the same mapping behavior, which reduces the cognitive load and potential for errors. There are currently nine column types in Orange:

- **string** maps to VARCHAR or TEXT in sql.
- **numeric** maps to INTEGER, DECIMAL, NUMERIC, TINYINT, FLOAT/REAL or DOUBLE in sql.
- **bigint** maps to INTEGER or BIGINT in sql.
- **boolean** maps to BIT, TINYINT(1) or INTEGER in sql.
- **uuid** is represented as a string in javascript and maps to UUID, GUID or VARCHAR in sql.
- **date** is represented as an ISO 8601 string in javascript and maps to DATE, DATETIME or TIMESTAMP in sql. Representing datetime values as ISO 8601 strings, rather than relying on JavaScript's native Date object, has multiple advantages, especially when dealing with databases and servers in different time zones. The datetime values are inherently accompanied by their respective time zones. This ensures that the datetime value remains consistent regardless of where it's being viewed or interpreted. On the other hand, JavaScript's Date object is typically tied to the time zone of the environment in which it's executed, which could lead to inconsistencies between the client and the database server.
- **dateWithTimeZone** is represented as an ISO 8601 string in javascript and maps to TIMESTAMP WITH TIME ZONE in postgres and DATETIMEOFFSET in ms sql. Contrary to what its name might imply, timestamptz (TIMESTAMP WITH TIME ZONE) in postgres doesn't store the time zone data. Instead, it adjusts the provided time value to UTC (Coordinated Universal Time) before storing it. When a timestamptz value is retrieved, PostgreSQL will automatically adjust the date-time to the time zone setting of the PostgreSQL session (often the server's timezone, unless changed by the user). The primary benefit of DATETIMEOFFSET in ms sql is its ability to keep track of the time zone context. If you're dealing with global applications where understanding the original time zone context is critical (like for coordinating meetings across time zones or logging events), DATETIMEOFFSET is incredibly valuable.
- **binary** is represented as a base64 string in javascript and maps to BLOB, BYTEA or VARBINARY(max) in sql.
- **json** and **jsonOf\<T\>** are represented as an object or array in javascript and map to JSON, JSONB, NVARCHAR(max) or TEXT (sqlite) in sql.

📄 map.ts

```typescript
import orange from 'orange-orm';

interface Pet {
  name: string;
  kind: string;
}

const map = orange.map(x => ({
  demo: x.table('demo').map(x => ({
    id: x.column('id').uuid().primary().notNull(),
    name: x.column('name').string(),
    balance: x.column('balance').numeric(),
    discordId: x.column('discordId').bigint(),
    regularDate: x.column('regularDate').date(),
    tzDate: x.column('tzDate').dateWithTimeZone(),
    picture: x.column('picture').binary(),
    pet: x.column('pet').jsonOf<Pet>(), //generic
    pet2: x.column('pet2').json(), //non-generic
  }))
}));
```

📄 map.js

```javascript
import orange from 'orange-orm';

/**
 * @typedef {Object} Pet
 * @property {string} name - The name of the pet.
 * @property {string} kind - The kind of pet
 */

/** @type {Pet} */
let pet;

const map = orange.map(x => ({
  demo: x.table('demo').map(x => ({
    id: x.column('id').uuid().primary().notNull(),
    name: x.column('name').string(),
    balance: x.column('balance').numeric(),
    regularDate: x.column('regularDate').date(),
    tzDate: x.column('tzDate').dateWithTimeZone(),
    picture: x.column('picture').binary(),
    pet: x.column('pet').jsonOf(pet), //generic
    pet2: x.column('pet2').json(), //non-generic
  }))
}));
```
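To see why ISO 8601 strings with explicit offsets are unambiguous, here is a small standalone Node.js sketch (plain JavaScript, independent of Orange):

```javascript
// An ISO 8601 string with an explicit offset pins an exact instant in time,
// regardless of which time zone the code runs in.
const stamp = '2023-07-14T12:00:00+02:00';

// Date normalizes the offset away; the instant is two hours earlier in UTC.
console.log(new Date(stamp).toISOString()); // 2023-07-14T10:00:00.000Z

// Without an offset, the same wall-clock string is interpreted in the
// runtime's local time zone, so the resulting instant varies per machine.
const localInstant = new Date('2023-07-14T12:00:00').getTime();
```

This is why storing the string, offset and all, keeps clients and database servers in different time zones consistent.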
<summary>Default values</summary>

Utilizing default values can be especially useful for automatically populating columns when the underlying database doesn't offer native support for default value generation.
In the provided code, the id column's default value is set to a UUID generated by crypto.randomUUID(), and the isActive column's default is set to true.

```javascript
import orange from 'orange-orm';
import crypto from 'crypto';

const map = orange.map(x => ({
  myTable: x.table('myTable').map(({ column }) => ({
    id: column('id').uuid().primary().default(() => crypto.randomUUID()),
    name: column('name').string(),
    balance: column('balance').numeric(),
    isActive: column('isActive').boolean().default(true),
  }))
}));

export default map;
```
<summary>Validation</summary>

In the previous sections you have already seen the notNull() validator being used on some columns. This will not only generate the correct typescript mapping, but also throw an error if the value is set to null or undefined. However, sometimes we do not want the notNull validator to run on inserts. Typically, when we have an autoincremental key or server-generated uuid, it does not make sense to check for null on insert. This is where notNullExceptInsert() comes to the rescue. You can also create your own custom validator as shown below. The last kind of validator is the ajv JSON schema validator. This can be used on json columns as well as any other column type.

📄 map.ts

```typescript
import orange from 'orange-orm';

interface Pet {
  name: string;
  kind: string;
}

let petSchema = {
  "properties": {
    "name": { "type": "string" },
    "kind": { "type": "string" }
  }
};

function validateName(value?: string) {
  if (value && value.length > 10)
    throw new Error('Length cannot exceed 10 characters');
}

const map = orange.map(x => ({
  demo: x.table('demo').map(x => ({
    id: x.column('id').uuid().primary().notNullExceptInsert(),
    name: x.column('name').string().validate(validateName),
    pet: x.column('pet').jsonOf<Pet>().JSONSchema(petSchema)
  }))
}));

export default map;
```

📄 map.js

```javascript
import orange from 'orange-orm';

/**
 * @typedef {Object} Pet
 * @property {string} name - The name of the pet.
 * @property {string} kind - The kind of pet
 */

/** @type {Pet} */
let pet;

let petSchema = {
  "properties": {
    "name": { "type": "string" },
    "kind": { "type": "string" }
  }
};

function validateName(value) {
  if (value && value.length > 10)
    throw new Error('Length cannot exceed 10 characters');
}

const map = orange.map(x => ({
  demo: x.table('demo').map(x => ({
    id: x.column('id').uuid().primary().notNullExceptInsert(),
    name: x.column('name').string().validate(validateName),
    pet: x.column('pet').jsonOf(pet).JSONSchema(petSchema)
  }))
}));

export default map;
```
<summary>Composite keys</summary>

A composite key is defined by marking multiple columns as primary keys. This is done using the .primary() method on each column that is part of the composite key. Consider a scenario where we have orders and order lines, and each order line is uniquely identified by combining the order type, order number, and line number.

```javascript
import orange from 'orange-orm';

const map = orange.map(x => ({
  order: x.table('_order').map(({ column }) => ({
    orderType: column('orderType').string().primary().notNull(),
    orderNo: column('orderNo').numeric().primary().notNull(),
    orderDate: column('orderDate').date().notNull(),
  })),
  orderLine: x.table('orderLine').map(({ column }) => ({
    orderType: column('orderType').string().primary().notNull(),
    orderNo: column('orderNo').numeric().primary().notNull(),
    lineNo: column('lineNo').numeric().primary().notNull(),
    product: column('product').string(),
  }))
})).map(x => ({
  order: x.order.map(v => ({
    lines: v.hasMany(x.orderLine).by('orderType', 'orderNo'),
  }))
}));

export default map;
```
<summary>Column discriminators</summary>

Column discriminators are used to distinguish between different types of data in the same table. Think of them as labels that identify whether a record is one category or another. In the example, the client_type column serves as the discriminator that labels records as customer or vendor in the 'client' table. On inserts, the column will automatically be given the correct discriminator value. Similarly, when fetching and deleting, the discriminator will be added to the WHERE clause.

```javascript
import orange from 'orange-orm';

const map = orange.map(x => ({
  customer: x.table('client').map(({ column }) => ({
    id: column('id').numeric().primary(),
    name: column('name').string()
  })).columnDiscriminators(`client_type='customer'`),
  vendor: x.table('client').map(({ column }) => ({
    id: column('id').numeric().primary(),
    name: column('name').string()
  })).columnDiscriminators(`client_type='vendor'`),
}));

export default map;
```

<summary>Formula discriminators</summary>

Formula discriminators are used to distinguish between different types of data in the same table. They differ from column discriminators by using a logical expression rather than a static value in a column.

In the example below, the formula discriminator categorizes bookings into customerBooking and internalBooking within the same booking table. The categorization is based on the value of the booking_no column. For customerBooking, records are identified where the booking number falls within the range of 10000 to 99999. For internalBooking, the range is from 1000 to 9999. These conditions are utilized during fetch and delete operations to ensure that the program interacts with the appropriate subset of records according to their booking number. Unlike column discriminators, formula discriminators are not used during insert operations since they rely on existing data to evaluate the condition.

The '@this' acts as a placeholder within the formula. When Orange constructs a query, it replaces '@this' with the alias of the table being queried.
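A mapping along these lines illustrates the idea. Note that the formulaDiscriminators method name and the exact '@this' expression syntax below are inferred from the description above, so treat this as an illustrative sketch rather than authoritative API usage:

```javascript
import orange from 'orange-orm';

const map = orange.map(x => ({
  //customer bookings: booking_no between 10000 and 99999
  customerBooking: x.table('booking').map(({ column }) => ({
    id: column('id').numeric().primary(),
    bookingNo: column('booking_no').numeric(),
  })).formulaDiscriminators('@this.booking_no >= 10000 and @this.booking_no <= 99999'),
  //internal bookings: booking_no between 1000 and 9999
  internalBooking: x.table('booking').map(({ column }) => ({
    id: column('id').numeric().primary(),
    bookingNo: column('booking_no').numeric(),
  })).formulaDiscriminators('@this.booking_no >= 1000 and @this.booking_no <= 9999'),
}));

export default map;
```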


Changelog

4.7.12
Bugfix: Bun Postgres: Race Condition After Transaction Commit #133
4.7.11
Support for bigint #102
4.7.10
Enabled cyclic dependencies in type mapping to support true bidirectional relationships. See #132
4.7.9
Bugfix: MsSql: Validation failed for parameter '0'. Value must be between -2147483648 and 2147483647, inclusive. See #131
4.7.8
Bugfix: Support for multiple result sets from stored procedures in MsSql. See #130
4.7.7
Always do logging with question mark as placeholder instead of dialect specific placeholder.
4.7.6
Changed logging for SAP ASE.
4.7.5
Implemented automatic hex conversion for non-ASCII UTF-8 characters in database parameters to resolve SAP ASE encoding issues.
4.7.4
Bugfix: SAP ASE: Do not throw errors on warnings. See #129
4.7.3
Bugfix: lessThanOrEqual throws incorrect syntax. See #128
4.7.2
Updated dependencies
4.7.1
Bugfix: Error when using bun and sqlite: Cannot find package 'Database'. See #127
4.7.0
Support for PGLite. See #124
4.6.3
Bugfix: Chained Reference relation throws if first one is null. See #126
4.6.2
Bugfix: crashed when combining endsWith filter with other filter: e.g. endsWith('foo').or(...)
4.6.1
Bugfix: No intellisense when running in browser mode. See #125
4.6.0
Support for Deno and Bun.
Using builtin sqlite for Node 22 and above.
4.5.5
Bugfix: Parameterized queries are case sensitive on SAP ASE. See #123
4.5.4
Bugfix: hasMany relation through references relation gives empty array. See #122
4.5.3
Minor improvements of types in aggregate functions.
4.5.2
Bugfix: Aggregate functions are not allowed on root tables . See #121
4.5.1
Bugfix: "Changed by Other User" Error Triggered by Precision Mismatch in Numeric Column. See #120
4.5.0
Support for Cloudflare D1.
4.4.2
Support for schema in connection string. Postgres only. #116
4.4.1
Support for date-only ISO strings. #116
4.4.0
More performance gains.
4.3.0
Minor performance improvements.
4.2.0
Improved performance.
4.1.4
Better error message when running over http.
4.1.3
Do not use transaction in readonly operations. #109
4.1.2
Bugfix with composite primary key with hasMany relation. #106
4.1.1
Some fixes regarding NotNull mappings. #104
4.1.0
Escape table and column names. #91 #92
NotNullable relations #104
4.0.1
Inhouse definitions of ajv to avoid trouble with nestjs.
4.0.0
Changed the behaviour of update to accept a where filter and only update passed in columns and relations. The previous behaviour of update has moved to replace method.
3.10.4
Bugfix: "Changed by Other User" Error Triggered by Precision Mismatch in Numeric Column. See #120
3.10.3
Fix duplicate method signatures for those still using code generation
3.10.2
Orange ORM was renamed from rdb. New installation URL: npmjs.org/package/orange-orm. Old URL was npmjs.org/package/rdb
3.10.1
Bugfix: Adding hasOne row to existing parent throws. See #86
3.10.0
Aggregate functions
3.9.1
Bugfix: Crashing on many relations if foreign key column is omitted in strategy. See #83
3.9.0
Possible to elevate associated column on a related table to a parent table when fetching. See https://github.com/alfateam/orange-orm/#user-content-aggregate-results
3.8.0
Aggregate operators: count, max, min, avg and sum.
3.7.0
Upgraded to mysql2@3.9.4 and tedious@18.1.0
3.6.2
Fixed bug with orderBy array that appeared in v3.6.0. See #78
3.6.1
Fixed bug in filtered relations regarding incorrect alias. See #76
3.6.0
Filters on relations.
3.5.2
MssqlNative and SAP ASE: PRINT statements no longer yields error.
3.5.1
Mssql: fix precision date types.
3.5.0
Support for Oracle.
3.4.0
Allow multiple resultset for SAP and mssql. See #72.
3.3.0
Possible to apply updates from JSON object. See #70.
3.2.9
Transaction was ignored. See #68.
Proxify method had incorrect type signature. See #69.
3.2.8
Signature for fetchingStrategy should accept boolean as well as object (code generation only). See #67.
3.2.7
Bugfix Inserting with skipOnConflict on whole table yields incorrect sql. See #63.
3.2.6
Hide metadata types from intellisense on insert().
3.2.5
Fix typescript signature for insert overload. See #62.
3.2.4
Bugfix: array of filters. See #56 and #57.
3.2.3
Conflict resolution on insert. See #42 and Inserting rows.
3.2.2
Bugfix for Sql Server: OFFSET was ignored. See #46.
3.2.1
Bugfix for those using code generation: count had wrong signature.
3.2.0
Support for aggregate count() function.
3.1.47
Bugfix: Raw queries ran outside transaction. See #46.
Documented logging.
3.1.46
Bugfix code generation, only related for those still using legacy / code generation.
3.1.45
Memoize pool when resolving through func. See #45.
Improved ts types for and/or-operators: now accepting arrays of RawFilter.
3.1.44
Bugfix with globbing, only related for those still using legacy / code generation
3.1.43
Updated dependency glob@^10.3.4.
3.1.42
Bugfixes: errors when inserting json and binary in postgres, mysql and sqlite.
3.1.41
Bugfix: false positive 403.
3.1.40
Documented raw sql queries and filters.
3.1.39
Small typos in README.
3.1.38
Unknown data type in mssql when .eq() on boolean column.
3.1.37
Documented in the browser.
3.1.36
Fixed typescript types in definition file.
3.1.35
Fix CI badge
3.1.34
Badges for npm, ci and coverage
3.1.33
Bugfix for all, any and none. It did not handle nested any properly
3.1.32
Updated typescript types for filtering on many relations
3.1.31
Documented updating rows in Modern Typescript Documentation.
3.1.30
Removed blockquotes in docs.
3.1.29
Small improvements in docs.
3.1.28
Started with Modern Typescript Documentation.
3.1.27
Types for express hosting.
3.1.26
Read dates directly from database without parsing them through the js Date object.
3.1.25
Lazily invoke db connectors.
3.1.24
Expand() to deeply expand all properties on rows.
3.1.23
No need for code generation
3.1.22
Use local time instead of UTC on tedious
3.1.21
Metadata with concurrency and readonly
3.1.20
Bugfix patching ms sql date
3.1.19
Make db optional in RdbConfig
3.1.18
Bugfix patching without metadata
3.1.17
Another Buffer workaround in browser
3.1.16
Buffer workaround in browser
3.1.15
Fixed concurrency misspelling in type generation
3.1.14
Bugfix patch in browser
3.1.13
Readonly and concurrency
3.1.12
Tedious: return first error when aggregate error
3.1.11
Do not set default port for mssql
3.1.10
Bugfix when parsing port number in connection string for tedious.
3.1.9
Improved typescript support for notNullable props on row.
3.1.8
Improved typescript definitions for notNull() and validate().
3.1.7
Chainable column.validate() and column.JSONSchema().
column.notNull().
3.1.6
Tedious: return first error if AggregateError
3.1.5
Improved performance for ms sql
3.1.4
import type to satisfy Type-Only imports/exports in compilers
3.1.3
Added eslint-disable no-explicit-any in generated typescript
3.1.2
Tedious as default driver for msSql.
3.1.1
Typescript as dev dependency
3.1.0
Date is mapped to ISO string. Any date input will map 'casted' to ISO string on save.
3.0.33
ts ignore.
3.0.32
More compact typescript generation.
3.0.31
Bugfix json patching.
3.0.30
Removed dependency rfc6902.
3.0.29
Overload in typescript for getById.
3.0.28
Removed dependency rdb-client.
3.0.27
Query without transaction did not release connection on error.
3.0.26
Improve typescript code generation.
3.0.25
Improve typescript code generation.
3.0.24
Bugfix related to watching JSON column when value is null.
3.0.23
Further bugfix related to: Converting date to ISO sometimes gives incorrect milliseconds part.
3.0.22
Bugfix: Converting date to ISO sometimes gives incorrect milliseconds part.
3.0.21
Fix typescript generation for Express.
3.0.20
Throw if empty connection string. Lazy load dependency node-cls to improve performance and memory usage.
3.0.19
Bugfix when patching many-relationship from rdb-client
3.0.17
Upgraded dependency uuid
3.0.13
Upgraded dependency uuid
3.0.12
Avoid page locking in ms sql in extreme concurrency situations when using patchTable
3.0.11
Null parameters are replaced with sql 'null' value. This is practical when calling stored procedures with raw sql.
3.0.10
PeerDependencies as optional with npm 7 and above - using peerDependenciesMeta.
3.0.9
Support for query without transaction.
3.0.8
Support for all, any and none filters.
3.0.7
Hosting in express with typescript support.
3.0.6
insertAndForget() for situations where you have INSERT privileges, but no SELECT privileges.
3.0.5
Fixed typescript methods: insert-methods should be async and return Promise<..>
3.0.4
Support for offset
3.0.3
Exposed typescript methods for rdb-client: query, insert and bulk deletes.
3.0.2
Binary as base64.
3.0.1
Small fixes.
3.0.0
Support for Sybase SAP and MsSql.
2.6.28
Support for rdb in the browser. Docs will come later.
2.6.27
Bugfix: JSON patching could result in duplicate DTO when run in same transaction.
2.6.26
Bugfix: changing a dto on nested property would also change the jsonb column.
Default value accepts function
2.6.25
Possible to indirectly update JSONB column by setting a nested property
2.6.24
Temporary lock with func as an alternative to transaction locks (pg only)
2.6.23
Bugfix when JSON patching nested relations.
2.6.22
Bugfix JSON patch.
2.6.21
Added JSON patch strategy 'skipOnConflict' to skip update if conflict.
2.6.20
Upgraded to pg@8.
2.6.19
Wrong location of module.
2.6.18
Using domains instead of async_hooks for continuation local context because async_hooks is unstable and experimental.
2.6.17
More bugfixes with array patching.
2.6.16
Bugfix. Did not properly apply json patch with new arrays.
2.6.15
Downgraded to pg@7 because pg@8 messes up async hooks.
2.6.14
Possible to send in types for custom parsing
2.6.13
Upgraded to pg@8.
2.6.12
Bugfix. Delete many relations in JSON patch did delete all.
2.6.11
Bugfix. Delete related row in JSON patch.
2.6.10
Bugfix. Did not apply patch properly on arrays inside json.
2.6.9
Bugfix. Inserts inside update hooks did not execute when last command in transaction.
2.6.8
Bugfix. Update hooks were sometimes incomplete.
2.6.7
Hooks on insert, update and delete.
2.6.6
Return status 204 instead of 200 when json patching.
2.6.5
Make this version the latest in npm.
2.6.4
Patching on row no longer modifies patch, but uses a clone instead.
2.6.3
Bugfix in JSON patching when null on object.
2.6.2
Bugfix in JSON patching with many relations.
2.6.1
Experimental JSON patching: patch returns dto of last operation.
2.6.0
Experimental JSON patching with metadata and single row patching.
2.5.1
Experimental JSON patching with concurrency strategy.
2.5.0
Experimental JSON patching and express adapter.
2.4.0
Supporting native bindings for postgres. The package pg-native must be installed as a peer dependency
2.3.0
Added column validation with JSON Schema or plain validator.
2.2.0
Throw meaningful error when getById gives no match.
2.1.1
Bugfix with insert and hasOne relation.
2.1.0
Pooling is disabled when there is no pool option object sent in. Previously, a pool with default size 10 was always created.
2.0.1
Ignoring tests when packing for npm
2.0.0
Domains are replaced by node-cls which is based upon async_hooks. Beware that async_hooks are still experimental. Requires node >= 8.
1.9.0
Throw meaningful error when getById gives no match.
1.8.1
Bugfix with insert and hasOne relation.
1.8.0
Pooling is disabled when there is no pool option object sent in. Previously, a pool with default size 10 was always created.
1.7.7
Commit/rollback are implicit if sending a callback to the transaction function. See example
The callback must return a promise or be an async function.
1.7.6
Order By now supports jsonb-fields.
1.7.5
GetManyDto bugfix: crashing on empty result.
1.7.4
GetManyDto strips nulls.
1.7.3
GetManyDto. For read-only scenarios. A lot faster than getMany.
1.7.2
Upgraded to pg@7.4.3 and pg-query-stream@1.1.1
1.7.1
Support for schemas (postgres only).
1.7.0
sqlite3 is now a peer dependency. Add it to your own package.json if you intend to use it.
1.6.9
Bugfix: one-to-many relation returns empty if strategy is included.
1.6.8
Bugfix: one-to-many relation returns empty if insert/update is done earlier in transaction.
1.6.7
Bugfix in relations.
1.6.6
Bugfix.
1.6.5
Improved performance on relations.
1.6.4
Bugfix.
1.6.3
Bugfix: potential incorrect timeZoneOffset when serializing date to JSON. Got timeZoneOffset from now() instead of on actual date.
1.6.2
Removed es6 syntax to ensure backwards compatibility.
Fixed global var leak.
1.6.1
Now supporting sqlite.
1.6.0
Bugfix: potential ambiguous column error when using limit and relating to other tables.
1.5.9
Bugfix: using multipleStatements in mySql could sometimes cause an error when updates are run right before a select.
Improved performance on limit when relating to other tables.
Using uuid instead of node-uuid
Updated all dependencies but generic-pool to latest. (Generic-pool has some breaking changes in latest. I will update it in next release.)
1.5.8
Cleanup line breaks in documentation.
1.5.7
Bugfix: getById.exclusive and tryGetById.exclusive did not lock if row was cached.
Improved performance on tryGetFirst.
1.5.6
Raw sql filters can accept sql both as string and as function. E.g. var filter = {sql: function() {return 'foo > 1';}}.
1.5.5
Optional locks for getMany, tryGetFirst and tryGetById. Instead of calling getMany(params) just call getMany.exclusive(params). Same syntax goes for tryGetFirst and tryGetById. This will result in SELECT FOR UPDATE.
Bugfix: bulk deletes now accepts raw sql filters too.
1.5.4
Transaction locks. Postgres only.
1.5.3
Upgraded to pg 6.0.3
1.5.2
Improved performance and reduced memory footprint.
1.5.1
Documented JSON column type. Bug fix: Insert and foreign key violation.
1.5.0
JSON column type. Postgres json type does not support rdb filters.
1.4.1
Bugfix: an empty filter could sometimes produce an invalid filter.
1.4.0
Raw SQL query.
1.3.0
getMany() now supports limit and orderBy - same syntax as in streaming.
1.2.3
Bugfix: iEqual gave incorrect sql when parameterized.
1.2.2
Exclusive no longer returns a clone of the table; it changes the current table to exclusive locking.
1.2.1
Bugfix: Exclusive row locks
1.2.0
Exclusive row locks
1.1.0
Now supporting streaming. Requires postgres or MySQL >=5.7.7
1.0.8
README fixup.
1.0.7
Better performance on insert and update.
1.0.6
Bugfix: Transaction domain should not forward rdb singleton from old domain.
1.0.5
Documentation cleanup.
1.0.4
orderBy in toDto().
1.0.3
toDto() using next tick on every thousandth row to avoid maximum call stack size exceeded.
1.0.2
Reduced number of simultaneous promises in order to avoid maximum call stack size exceeded.
1.0.1
Bugfix: Incorrect insert/updates on timestamp without timezone. The time was converted to UTC instead of stripping the timezone.
1.0.0
Transaction domain forwards properties from old domain.
Semantic versioning from now on.
0.5.1
Improved performance
0.5.0
Logging: rdb.log(someFunc) logs sql and parameters.
Raw sql filters.
0.4.9
New method: tryGetById.
New filter: iEqual, postgres only.
Bugfix: rows.toJSON() without strategy did not include any children.
0.4.8
Explicit pooling with size and end().
Bugfix: mySql did not release client to pool.
0.4.7
Upgraded to pg 4.3.0
Upgraded to mysql 2.5.5
0.4.6
Upgraded to pg 4.2.0.
0.4.5
Oops. Forgot to use pg.js instead of pg.
0.4.4
Upgraded all dependencies to latest. Using pg.js instead of pg.
0.4.3
Can ignore columns when serializing to dto.
0.4.2
Bugfix: update on a row crashes when a delete occurs earlier in same transaction.
0.4.1
Bugfix: more global leaks.
0.4.0
Bugfix: global leak.
0.3.9
Bugfix: eager loading joins/hasOne with non-unique column names was not handled correctly.
0.3.8
Supports mySql.
Bulk deletes.
0.3.7
Bugfix: eager loading manyRelation on a join/hasOne returned empty array #11
0.3.6
Fixed sql injection vulnerability.
0.3.5
Built-in fetching strategies for lazy loading. Works best in readonly scenarios.
0.3.4
Docs and examples moved to a separate file.
0.3.3
Fixed documentation layout again.
0.3.2
Fixed documentation layout.
0.3.1
Case insensitive filters: iStartsWith, iEndsWith and iContains.
0.3.0
Fix broken links in docs.
0.2.9
Support for row.delete().
Rollback only throws when error is present.
0.2.8
Guid accepts uppercase letters.
Bugfix: null inserts on guid columns yielded wrong sql.
0.2.7
New method, toDto(), converts row to data transfer object.
Bugfix: toJSON returned incorrect string on hasMany relations.
0.2.6
Fixed incorrect links in README.
0.2.5
Bugfix: caching on composite keys could give a crash #7. Improved sql compression on insert/update.
0.2.4
Bugfix: getMany with many-strategy and shallowFilter yields incorrect query #6.
0.2.3
Reformatted documentation. No code changes.