Package details

dat-node

Published by datproject · MIT license · version 4.0.1 (deprecated)

Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.

Build node applications with Dat archives on the file system.

data, dat, hyperdrive, p2p

README

deprecated

More info on active projects and modules at dat-ecosystem.org


dat-node

dat-node is a high-level module for building Dat applications on the file system.


For a lower-level API for building your own applications, use the Dat SDK which works in Node and the Web

Compatibility

Note: Version 4 of dat-node is not compatible with earlier versions (3.5.15 and below).

Dat Project Documentation & Resources

Features

  • High-level glue for common dat:// and hyperdrive modules.
  • Sane defaults and consistent management of storage & secret keys across applications, using dat-storage.
  • Easily connect to the dat:// network with holepunching, using hyperswarm.
  • Import files from the file system, using mirror-folder.
  • Serve dats over http with hyperdrive-http.
  • Access APIs to lower level modules with a single require!

Browser Support

Many of our dependencies work in the browser, but dat-node is tailored for file system applications. See dat-sdk if you want to build browser-friendly applications.

Example

To send files via dat:

  1. Tell dat-node where the files are.
  2. Import the files.
  3. Share the files on the dat network! (And share the link)
var Dat = require('dat-node')

// 1. My files are in /joe/cat-pic-analysis
Dat('/joe/cat-pic-analysis', function (err, dat) {
  if (err) throw err

  // 2. Import the files
  dat.importFiles()

  // 3. Share the files on the network!
  dat.joinNetwork()
  // (And share the link)
  console.log('My Dat link is: dat://' + dat.key.toString('hex'))
})

These files are now available to share over the dat network via the key printed in the console.

To download the files, you can make another dat-node instance in a different folder. This time we also have three steps:

  1. Tell dat where I want to download the files.
  2. Tell dat what the link is.
  3. Join the network and download!
var Dat = require('dat-node')

// 1. Tell Dat where to download the files
Dat('/download/cat-analysis', {
  // 2. Tell Dat what link I want
  key: '<dat-key>' // (a 64 character hash from above)
}, function (err, dat) {
  if (err) throw err

  // 3. Join the network & download (files are automatically downloaded)
  dat.joinNetwork()
})

That's it! By default, all files are automatically downloaded when you connect to the other users.
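The key in that link is a 64-character hex string (32 bytes). Before handing a user-supplied key to Dat, it can be worth checking its shape; here is a minimal sketch (the `isValidDatKey` helper is hypothetical, not part of dat-node):

```javascript
// Hypothetical helper (not a dat-node API): check that a string looks
// like a dat key -- exactly 64 lowercase hex characters.
function isValidDatKey (key) {
  return typeof key === 'string' && /^[0-9a-f]{64}$/.test(key)
}

console.log(isValidDatKey('a'.repeat(64))) // true
console.log(isValidDatKey('not-a-key')) // false
```

Note that links pasted by users may carry a `dat://` prefix, which would need stripping before a check like this.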

Dig into more use cases below and please let us know if you have questions! You can open a new issue or talk to nice humans in our chat room.

Example Applications

  • CLI: We use dat-node in the dat CLI.
  • Desktop: The Dat Desktop application manages multiple dat-node instances via dat-worker.
  • See the examples folder for a minimal share + download usage.
  • And more! Let us know if you have a neat dat-node application to add here.

Usage

All dat-node applications have a similar structure around three main elements:

  1. Storage - where the files and metadata are stored.
  2. Network - connecting to other users to upload or download data.
  3. Adding Files - adding files from the file system to the hyperdrive archive.

We'll go through what these are for and a few of the common usages of each element.

Storage

Every dat archive has storage; it is the required first argument for dat-node. By default, we use dat-storage which stores the secret key in ~/.dat/ and the rest of the data in dir/.dat. Other common options are:

  • Persistent storage: Store files in /my-dir and metadata in my-dir/.dat by passing /my-dir as the first argument.
  • Temporary Storage: Use the temp: true option to keep metadata stored in memory.
// Permanent Storage
Dat('/my-dir', function (err, dat) {
  // Do Dat Stuff
})

// Temporary Storage
Dat('/my-dir', {temp: true}, function (err, dat) {
  // Do Dat Stuff
})

Both of these will import files from /my-dir when doing dat.importFiles() but only the first will make a .dat folder and keep the metadata on disk.

The storage argument can also be passed through to hyperdrive for more advanced storage use cases.

Network

Dat is all about the network! You'll almost always want to join the network right after you create your Dat:

Dat('/my-dir', function (err, dat) {
  dat.joinNetwork()
  dat.network.on('connection', function () {
    console.log('I connected to someone!')
  })
})

Downloading Files

Remember: if you are downloading, metadata and file downloads will happen automatically once you join the network!

dat runs on a peer-to-peer network, so sometimes no one may be online for a particular key. You can make your application more user friendly by using the callback in joinNetwork:

// Downloading <key> with joinNetwork callback
Dat('/my-dir', {key: '<key>'}, function (err, dat) {
  dat.joinNetwork(function (err) {
    if (err) throw err

    // After the first round of network checks, the callback is called
    // If no one is online, you can exit and let the user know.
    if (!dat.network.connected && !dat.network.connecting) {
      console.error('No users currently online for that key.')
      process.exit(1)
    }
  })
})
Download on Demand

If you want to control what files and metadata are downloaded, you can use the sparse option:

// Downloading <key> with sparse option
Dat('/my-dir', {key: '<key>', sparse: true}, function (err, dat) {
  dat.joinNetwork()

  // Manually download files via the hyperdrive API:
  dat.archive.readFile('/cat-locations.txt', function (err, content) {
    console.log(content) // prints cat-locations.txt file!
  })
})

Dat will only download metadata and content for the parts you request with sparse mode!

Importing Files

There are many ways to get files imported into an archive! dat-node provides a few basic methods. If you need more advanced imports, you can use the archive.createWriteStream() method directly.

By default, just call dat.importFiles() to import from the directory you initialized with. You can watch that folder for changes by setting the watch option:

Dat('/my-data', function (err, dat) {
  if (err) throw err

  var progress = dat.importFiles({watch: true}) // with watch: true, there is no callback
  progress.on('put', function (src, dest) {
    console.log('Importing ', src.name, ' into archive')
  })
})

You can also import from another directory:

Dat('/my-data', function (err, dat) {
  if (err) throw err

  dat.importFiles('/another-dir', function (err) {
    console.log('done importing another-dir')
  })
})

That covers some of the common use cases, let us know if there are more to add! Keep reading for the full API docs.

API

Dat(dir|storage, [opts], callback(err, dat))

Initialize a Dat Archive in dir. If there is an existing Dat Archive, the archive will be resumed.

Storage

  • dir (Default) - Use dat-storage inside dir. This stores files as files, sleep files inside .dat, and the secret key in the user's home directory.
  • dir with opts.latest: false - Store as SLEEP files, including storing the content as a content.data file. This is useful for storing all history in a single flat file.
  • dir with opts.temp: true - Store everything in memory (including files).
  • storage function - pass a custom storage function along to hyperdrive, see dat-storage for an example.

Most options are passed directly to the module you're using (e.g. dat.importFiles(opts)). However, the initial opts can also include:

opts = {
  key: '<dat-key>', // existing key to create archive with or resume
  temp: false, // Use random-access-memory as the storage.

  // Hyperdrive options
  sparse: false // download only files you request
}

The callback, cb(err, dat), includes a dat object that has the following properties:

  • dat.key: key of the dat (this will be set later for non-live archives)
  • dat.archive: Hyperdrive archive instance.
  • dat.path: Path of the Dat Archive
  • dat.live: archive.live
  • dat.writable: Is the archive writable?
  • dat.resumed: true if the archive was resumed from an existing database
  • dat.options: All options passed to Dat and the other submodules
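Since dat.key is a Buffer, it is usually rendered for display the way the earlier examples do, by hex-encoding it into a dat:// link. A tiny formatter might look like this (the toDatLink name is an assumption, not a dat-node export):

```javascript
// Hypothetical helper: render a key Buffer as a dat:// link string.
function toDatLink (key) {
  return 'dat://' + key.toString('hex')
}

// Example with a made-up 32-byte key:
var key = Buffer.from('ab'.repeat(32), 'hex')
console.log(toDatLink(key)) // dat:// followed by 64 hex characters
```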

Module Interfaces

dat-node provides an easy interface to common Dat modules for the created Dat Archive on the dat object provided in the callback:

var network = dat.joinNetwork([opts], [cb])

Join the network to start transferring data for dat.key, using discovery-swarm. You can also use dat.join([opts], [cb]).

If you specify cb, it will be called when the first round of discovery has completed. This is helpful for checking immediately whether peers are available and, if not, failing gracefully, making the behavior more like an http request.

Returns a network object with properties:

  • network.connected - number of peers connected
  • network.on('listening') - emitted when the network is listening
  • network.on('connection', connection, info) - emitted when you connect to another peer; info is an object with details about the connection
Network Options

opts are passed to discovery-swarm, which can include:

opts = {
  upload: true, // announce and upload data to other peers
  download: true, // download data from other peers
  port: 3282, // port for discovery swarm
  utp: true, // use utp in discovery swarm
  tcp: true // use tcp in discovery swarm
}

// Defaults from datland-swarm-defaults can also be overwritten:

opts = {
  dns: {
    server: // DNS server
    domain: // DNS domain
  },
  dht: {
    bootstrap: // distributed hash table bootstrapping nodes
  }
}

Returns a discovery-swarm instance.

dat.leaveNetwork() or dat.leave()

Leaves the network for the archive.

var importer = dat.importFiles([src], [opts], [cb])

Archive must be writable to import.

Import files to your Dat Archive from the directory using mirror-folder.

  • src - By default, files will be imported from the folder where the archive was initiated. Import files from another directory by specifying src.
  • opts - options passed to mirror-folder (see below).
  • cb - called when import is finished.

Returns an importer object with properties:

  • importer.on('error', err)
  • importer.on('put', src, dest) - file put started. src.live is true if file was added by file watch event.
  • importer.on('put-data', chunk) - chunk of file added
  • importer.on('put-end', src, dest) - end of file write stream
  • importer.on('del', dest) - file deleted from dest
  • importer.on('end') - Emits when mirror is done (not emitted in watch mode)
  • If opts.count is true:
    • importer.on('count', {files, bytes}) - Emitted after initial scan of src directory. See import progress section for details.
    • importer.count will be {files, bytes} to import after initial scan.
    • importer.putDone will track {files, bytes} for imported files.
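Since importer.count and importer.putDone are both {files, bytes} objects, a byte-based progress percentage can be derived from them; a sketch with a made-up helper name:

```javascript
// Hypothetical helper: percent of bytes imported so far, given the
// importer.putDone and importer.count objects ({files, bytes} each).
function importProgress (putDone, count) {
  if (!count || !count.bytes) return 0
  return Math.min(100, Math.round(putDone.bytes / count.bytes * 100))
}

console.log(importProgress({files: 3, bytes: 512}, {files: 6, bytes: 2048})) // 25
```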
Importer Options

Options include:

var opts = {
  count: true, // do an initial dry run import for rendering progress
  ignoreHidden: true, // ignore hidden files  (if false, .dat will still be ignored)
  ignoreDirs: true, // do not import directories (hyperdrive does not need them and it pollutes metadata)
  useDatIgnore: true, // ignore entries in the `.datignore` file from the import directory.
  ignore: // (see below for default info) anymatch expression to ignore files
  watch: false, // watch files for changes & import on change (archive must be live)
}
Ignoring Files

You can use a .datignore file in the imported directory, src, to ignore any files the user specifies. This is done by default.

dat-node uses dat-ignore to provide a default ignore option, ignoring the .dat folder and all hidden files or directories. Use opts.ignoreHidden = false to import hidden files or folders, except the .dat directory.

It's important that the .dat folder is not imported because it contains a private key that allows the owner to write to the archive.
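The default rule can be approximated with a simplified matcher; this is only an illustration of the behavior described above, the real logic lives in dat-ignore:

```javascript
// Simplified sketch of the default ignore behavior: always skip the
// .dat folder, and skip other hidden entries unless ignoreHidden is false.
function shouldIgnore (name, ignoreHidden) {
  if (name === '.dat' || name.indexOf('.dat/') === 0) return true
  if (ignoreHidden !== false && name.charAt(0) === '.') return true
  return false
}

console.log(shouldIgnore('.dat', true)) // true -- the secret key must never be shared
console.log(shouldIgnore('.gitignore', false)) // false -- hidden files allowed
console.log(shouldIgnore('cat.jpg', true)) // false
```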

var stats = dat.trackStats()

stats.on('update')

Emitted when archive stats are updated. Get new stats with stats.get().

var st = stats.get()

dat.trackStats() adds a stats object to dat. Get general archive stats for the latest version:

{
  files: 12,
  byteLength: 1234,
  length: 4, // number of blocks for latest files
  version: 6, // archive.version for these stats
  downloaded: 4 // number of downloaded blocks for latest
}
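The downloaded and length fields both count blocks for the latest version, so dividing them gives a completion percentage, e.g. for a progress bar. A sketch (the helper name is made up):

```javascript
// Hypothetical helper: percent of latest-version blocks downloaded,
// based on the length and downloaded fields of stats.get().
function downloadPercent (st) {
  if (!st.length) return 0
  return Math.round(st.downloaded / st.length * 100)
}

console.log(downloadPercent({length: 4, downloaded: 4})) // 100
console.log(downloadPercent({length: 4, downloaded: 1})) // 25
```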
stats.network

Get upload and download speeds: stats.network.uploadSpeed or stats.network.downloadSpeed. Transfer speeds are tracked using hyperdrive-network-speed.
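Both speeds are plain numbers in bytes per second, so a small formatter is handy for display; this sketch (formatSpeed is not part of dat-node) assumes binary units:

```javascript
// Hypothetical helper: render a bytes-per-second number for display.
function formatSpeed (bytesPerSecond) {
  var units = ['B/s', 'KB/s', 'MB/s', 'GB/s']
  var i = 0
  while (bytesPerSecond >= 1024 && i < units.length - 1) {
    bytesPerSecond /= 1024
    i++
  }
  return bytesPerSecond.toFixed(1) + ' ' + units[i]
}

console.log(formatSpeed(512)) // 512.0 B/s
console.log(formatSpeed(1536)) // 1.5 KB/s
```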

var peers = stats.peers
  • peers.total - total number of connected peers
  • peers.complete - connected peers with all the content data

var server = dat.serveHttp(opts)

Serve files over http via hyperdrive-http. Returns a node http server instance.

opts = {
  port: 8080, // http port
  live: true, // live update directory index listing
  footer: 'Served via Dat.', // Set a footer for the index listing
  exposeHeaders: false // expose dat key in headers
}

dat.pause()

Pause all upload & downloads. Currently, this is the same as dat.leaveNetwork(), which leaves the network and destroys the swarm. Discovery will happen again on resume().

dat.resume()

Resume network activity. Currently, this is the same as dat.joinNetwork().

dat.close(cb)

Stops replication and closes all the things opened for dat-node, including:

  • dat.archive.close(cb)
  • dat.network.close(cb)
  • dat.importer.destroy() (file watcher)

License

MIT


Change Log

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog and this project adheres to Semantic Versioning.

[Unreleased]

Note: unreleased changes are added here.

<!-- Change types: Added, Changed, Fixed, Removed, Deprecated -->

3.5.5 - 2017-06-19

Fixed

3.5.0 - 2017-06-19

Added

3.4.0 - 2017-06-19

Added

3.3.2 - 2017-05-29

Fixed

3.3.1 - 2017-05-16

Fixed

  • Replication Stream should be true for writable archives

Added

  • Use regular sleep storage if opts.latest is false

3.3.0 - 2017-05-10

Added

  • Importing - Ignore directories option, true by default

3.0.0 - 2017-04-28

Fixed

  • Upgrade hyperdrive with breaking change.

2.0.0 - 2017-04-13

Big new release! Hyperdrive version 8 upgrades to our SLEEP file format. The hyperdrive release improves import, transfer speed, and metadata access. It includes a new API much like the node fs API. Lots of cool things!

We've tried to keep the dat-node API changes to a minimum. But we've moved away from using leveldb for storing the metadata, using a flat file format instead. This means the 2.0 release will be incompatible with existing dat archives.

If you have any old archives, we definitely recommend you upgrade. Any upgrade time will be made up for with more speed!

The major API differences are listed below; we probably forgot some minor changes, as several of the underlying modules have changed options or APIs.

Changed

  • Using mirror-folder for importing files - this comes with a new event emitter for importing and different options.
  • Storage is a lot different! You can specify a directory or a storage function (e.g. Dat(ram, cb)) instead of the database.

Removed

  • opts.db option - no more database! You can specify a variety of storage formats as the first argument instead.
  • dat.owner - this is now dat.writable.
  • stats events - we are still in the process of upgrading hyperdrive-stats. Hypercore will also support more stats internally now and we will be transitioning to those soon.
  • Import options - mirror-folder has fewer options on import.

1.4.1 - 2017-03-17

Fixed

  • Pass network opts through to discovery-swarm.

1.4.0 - 2017-03-08

Added

  • .datignore support for ignoring files
  • Callback on joinNetwork after first round of discovery
  • Initial pause and resume API aliased to join and leave
  • stats.peers API with new peer counts

Fixed

  • Better leave network, also closes swarm.
  • Clone options passed to initArchive
  • Set opts.file for archive owner without length
  • createIfMissing passed to level options
  • dat.close() twice sync errors
  • Fix import without options
  • (hyperdrive fix) sparse content archives

Changed

  • Remove automatic finalize for snapshot imports

1.3.8 - 2017-02-20

Fixed

  • Close archive after bad key on init.

1.3.7 - 2017-02-15

Changed

  • Rollback temporary changes from 1.3.6
  • Set length on file option
  • Remove hyperdrive version pin

1.3.6 - 2017-02-13

Changed

  • Temporary changes for critical replication bugs
  • Pin hyperdrive to 7.13.2
  • Remove length option in raf
  • Do not allow owner to download
  • Do not set file option for owner

1.3.5 - 2017-02-04

Fixed

  • Key regression on resume

1.3.4 - 2017-02-03

Fixed

  • Call back with error if opts.key mismatches keys in database
  • Fix options casting and improve errors
  • Improve key handling for archive databases + debug info

1.3.3 - 2017-02-01

Fixed

  • Call back with error object on init archive

1.3.2 - 2017-02-01

Fixed

  • Do not mutate input args
  • Call unreplicate on close to make sure data replication stops
  • Throw error if close is called more than once

1.3.1 - 2017-01-25

Fixed

  • Real error message for createIfMissing

1.3.0 - 2017-01-25

Added

  • createIfMissing and errorIfExists options

Deprecated

  • resume option

1.2.4 - 2017-01-24

Fixed

  • fix regression in resuming archives without content by opening them first.

1.2.3 - 2017-01-24

Fixed

  • Learning things about npm versions!

1.2.2 - 2017-01-23

Fixed

  • Downloaded files could have old bytes that weren't removed with updates. Issue #79.

1.2.1 - 2017-01-23

Fixed

  • Bug where opening archive on bad key returned without callback. Added timeout on open archive to make sure other key is tried before exiting.

1.2.0 - 2017-01-23

Changed

  • Read existing keys directly from hyperdrive instead of using the db. Allows for better resuming in any application.
  • Count files much faster on import
  • Add opts.indexing and default to true for when source = dest.

Added

  • Support for drive as first argument and multidrive support
  • dat.leaveNetwork - leave the network for this archive key.
  • Added dir option to importer.
  • Made it easier to require Dat as a module, without creating archive.

Fixed

  • Close archive after other things are closed
  • Use discoveryKey for stats database (security)

Deprecated

  • Expose discovery swarm instance on dat.network instead of dat.network.swarm.

1.1.1 - 2017-01-06

Fixed

  • Resolve the path and untildify before creating archive

1.1.0 - 2017-01-03

Added

  • Use opts.indexing for importing.

1.0.0 - 2016-12-21

0.1.1 - 2016-11-29

Fixed

  • Populate dat.key after archive opened (#43)

Changed

  • Use hyperdiscovery instead of hyperdrive-archive-swarm (#45)

0.1.0 - 2016-11-17

Released dat-js 4.0 as dat-node 0.1.

Moved to dat-node.

dat-node 0.1.0 === dat-js 4.0.0

4.0.0 - 2016-11-16

This will be the last major version of dat-js. This library will be moving to dat-fs, with a similar API.

Removed

  • webrtc support (opts.webrtc, opts.signalhub)
  • opts.upload changed to opts.discovery.upload (deprecated in 3.4.0)

Fixed

  • Error message for trying to download a dat to folder with existing dat.

3.8.2 - 2016-11-15

Fixed

  • Check type of keys on db resume

3.8.1 - 2016-11-15

Fixed

  • Progress incorrectly showing 100% with 0 bytes

3.8.0 - 2016-11-07

Added

  • Expose dat.owner, dat.key, dat.peers
  • Support buffer keys
  • Forward db.open errors

Fixed

  • Guard archive.close on dat.close

3.7.1 - 2016-10-29

Fixed

  • Create entryDone function once for downloads

3.7.0 - 2016-10-29

Fixed

  • Download file count for duplicate files

Removed

  • stats.bytesProgress on downloads

Changed

  • Upgrade to hyperdrive 7.5.0
  • Use archive.blocks for stats on download with new hyperdrive functions.

3.6.0 - 2016-09-27

Added

  • signalhub option.

3.5.0 - 2016-09-22

Added

  • opts.ignoreHidden ignores hidden directories by default.

3.4.0 - 2016-09-14

Changed

  • Accept object for discovery: {upload: true, download: true}.

Deprecated

  • upload option (moved to discovery.upload). Will be removed in 4.0.0.

3.3.1 - 2016-09-06

Fixed

  • Emit files-counted event on Dat instance
  • Include stats object on file-counted event

3.3.0 - 2016-09-06

Added

  • Add webrtc option

3.2.0 - 2016-09-01

Added

  • Upload option. upload=false will not upload data (allows download only option)

3.1.0 - 2016-09-01

Added

  • User opts.ignore extends default opts.

Fixed

  • Default ignore does not ignore files with .dat in them.

3.0.2 - 2016-08-26

Fixed

  • Default ignore config to ignore only .dat folder and files inside.

3.0.1 - 2016-08-18

Fixed

  • Fix hyperdrive-import-files bug on sharing directories

3.0.0 - 2016-08-18

Added

  • dat.open() function to initialize the .dat folder

Removed

  • dat.on('ready') event for initialization

2.x.x and earlier

  • Port lib/dat.js file from the Dat CLI library.

Changed

  • Use hyperdrive-import-files to import files on share