Working with file descriptors in Node.js
Before you’re able to interact with a file that sits in your filesystem, you must get a file descriptor.
A file descriptor is a reference to an open file, a number (id) returned by opening the file using the open() method offered by the fs module. This number (fd) uniquely identifies an open file in the operating system:
const fs = require('fs')
fs.open('/Users/joe/test.txt', 'r', (err, fd) => {
//fd is our file descriptor
})
Notice the r we used as the second parameter to the fs.open() call.
That flag means we open the file for reading.
Other flags you’ll commonly use are:
- r+: open the file for reading and writing; if the file does not exist, it won't be created
- w+: open the file for reading and writing, positioning the stream at the beginning of the file. The file is created if it does not exist
- a: open the file for writing, positioning the stream at the end of the file. The file is created if it does not exist
- a+: open the file for reading and writing, positioning the stream at the end of the file. The file is created if it does not exist
You can also open the file by using the fs.openSync method, which returns the file descriptor, instead of providing it in a callback:
const fs = require('fs')
try {
const fd = fs.openSync('/Users/joe/test.txt', 'r')
} catch (err) {
console.error(err)
}
Once you get the file descriptor, in whatever way you choose, you can perform all the operations that require it, like calling fs.close() and many other operations that interact with the filesystem.
Node.js file stats
Every file comes with a set of details that we can inspect using Node.js. In particular, using the stat() method provided by the fs module.
You call it passing a file path, and once Node.js gets the file details it will call the callback function you pass, with 2 parameters: an error object, and the file stats:
const fs = require('fs')
fs.stat('/Users/joe/test.txt', (err, stats) => {
if (err) {
console.error(err)
return
}
//we have access to the file stats in `stats`
})
Node.js also provides a sync method, which blocks the thread until the file stats are ready:
const fs = require('fs')
try {
const stats = fs.statSync('/Users/joe/test.txt')
} catch (err) {
console.error(err)
}
The file information is included in the stats variable. What kind of information can we extract using the stats?
A lot, including:
- if the file is a directory or a file, using stats.isFile() and stats.isDirectory()
- if the file is a symbolic link, using stats.isSymbolicLink()
- the file size in bytes, using stats.size
There are other advanced methods, but the bulk of what you’ll use in your day-to-day programming is this.
const fs = require('fs')
fs.stat('./hello.js', (err, stats) => {
if (err) {
console.error(err)
return
}
console.log(stats.isFile());
console.log(stats.isDirectory());
console.log(stats.isSymbolicLink());
console.log(stats.size);
})
Node.js File Paths
Every file in the system has a path.
On Linux and macOS, a path might look like:
/users/joe/file.txt
while Windows computers are different, and have a structure such as:
C:\users\joe\file.txt
You need to pay attention when using paths in your applications, as this difference must be taken into account. Node.js provides the path module to work with paths. You include it in your files using
const path = require('path')
and you can start using its methods.
Getting information out of a path
Given a path, you can extract information out of it using these methods:
- dirname: get the parent folder of a file
- basename: get the filename part
- extname: get the file extension
Example:
const path = require('path')
const notes = '/users/joe/notes.txt'
console.log(path.dirname(notes)); // /users/joe
console.log(path.basename(notes)); // notes.txt
console.log(path.extname(notes)); // .txt
console.log(path.basename(notes, path.extname(notes))); // notes
Working with paths
You can join two or more parts of a path by using path.join():
const path = require('path')
const name = 'joe'
// \users\joe\notes.txt
console.log(path.join('/', 'users', name, 'notes.txt'));
You can get the absolute path calculation of a relative path using path.resolve():
const path = require('path')
// C:\Users\admin\Desktop\front_project\node_code\joe.txt
console.log(path.resolve('joe.txt'));
In this case Node.js will simply append /joe.txt to the current working directory.
If you specify a second parameter folder, resolve will use the first as a base for the second:
const path = require('path')
// C:\Users\admin\Desktop\front_project\node_code\tmp\joe.txt
console.log(path.resolve('tmp', 'joe.txt'));
If the first parameter starts with a slash, that means it’s an absolute path:
const path = require('path')
// C:\etc\joe.txt
console.log(path.resolve('/etc', 'joe.txt'));
path.normalize() is another useful function that will try and calculate the actual path when it contains relative specifiers like . or .., or double slashes:
const path = require('path')
// \users\test.txt
console.log(path.normalize('/users/joe/../test.txt') );
Neither resolve nor normalize will check if the path exists. They just calculate a path based on the information they were given.
Reading files with Node.js
The simplest way to read a file in Node.js is to use the fs.readFile() method, passing it the file path, encoding, and a callback function that will be called with the file data (and the error):
const fs = require('fs')
fs.readFile('./hello.txt', 'utf8' , (err, data) => {
if (err) {
console.error(err)
return
}
console.log(data)
})
Alternatively, you can use the synchronous version fs.readFileSync():
const fs = require('fs')
try {
const data = fs.readFileSync('./hello.txt', 'utf8')
console.log(data)
} catch (err) {
console.error(err)
}
Both fs.readFile() and fs.readFileSync() read the full content of the file in memory before returning the data.
This means that big files are going to have a major impact on your memory consumption and on the speed of execution of the program.
In this case, a better option is to read the file content using streams.
Writing files with Node.js
The easiest way to write to files in Node.js is to use the fs.writeFile()
API.
const fs = require('fs')
const content = 'Some content!'
fs.writeFile('./hello.txt', content, err => {
if (err) {
console.error(err)
return
}
//file written successfully
})
Alternatively, you can use the synchronous version fs.writeFileSync():
const fs = require('fs')
const content = 'Some content use writeFileSync!'
try {
const data = fs.writeFileSync('./hello.txt', content)
//file written successfully
} catch (err) {
console.error(err)
}
By default, this API will replace the contents of the file if it already exists.
You can modify the default by specifying a flag:
const fs = require('fs')
const content = '\nadd'
try {
  fs.writeFileSync('./hello.txt', content, { flag: 'a+' })
  //file written successfully
} catch (err) {
  console.error(err)
}
The flags you’ll likely use are:
- r+: open the file for reading and writing; if the file does not exist, it won't be created
- w+: open the file for reading and writing, positioning the stream at the beginning of the file. The file is created if it does not exist
- a: open the file for writing, positioning the stream at the end of the file. The file is created if it does not exist
- a+: open the file for reading and writing, positioning the stream at the end of the file. The file is created if it does not exist
(you can find more flags at https://nodejs.org/api/fs.html#fs_file_system_flags)
Append to a file
A handy method to append content to the end of a file is fs.appendFile()
(and its fs.appendFileSync()
counterpart):
const fs = require('fs')
const content = '\nSome content!'
fs.appendFile('./hello.txt', content, err => {
if (err) {
console.error(err)
return
}
//done!
})
Using streams
All those methods write the full content to the file before returning the control back to your program (in the async version, this means executing the callback).
In this case, a better option is to write the file content using streams.
Working with folders in Node.js
The Node.js fs
core module provides many handy methods
you can use to work with folders.
Check if a folder exists
Use fs.access() to check if the folder exists and Node.js can access it with its permissions.
Create a new folder
Use fs.mkdir()
or fs.mkdirSync()
to create a new folder.
const fs = require('fs')
const folderName = 'haha.txt'
try {
if (!fs.existsSync(folderName)) {
fs.mkdirSync(folderName)
}
} catch (err) {
console.error(err)
}
Read the content of a directory
Use fs.readdir()
or fs.readdirSync()
to read the contents of a directory.
This piece of code reads the content of a folder, both files and subfolders, and returns their names:
const fs = require('fs')
const folderPath = './'
console.log(fs.readdirSync(folderPath)); // return Array
You can get the full path:
const fs = require('fs')
const path = require('path')
const folderPath = './node_modules'
fs.readdirSync(folderPath).map(fileName => {
console.log(path.join(folderPath, fileName));
})
You can also filter the results to only return the files, and exclude the folders:
const fs = require('fs')
const path = require("path")
const folderPath = './node_modules'
console.log(typeof fs.lstatSync("haha.txt")); // object
console.log(fs.lstatSync("haha.txt").isFile()); // false
const jungleIsFile = fileName => {
try {
return fs.lstatSync(fileName).isFile()
// return fs.statSync(fileName).isFile()
} catch (err) {
console.error(err)
}
}
fs.readdirSync(folderPath).map(fileName => {
let aFileName = path.join(folderPath, fileName)
if (fs.existsSync(aFileName)) {
console.log(jungleIsFile(aFileName));
}
})
Note: on Windows you can usually write paths with "/" or "\\"; path.join() will turn them into "\".
Rename a folder
Use fs.rename() or fs.renameSync() to rename a folder. The first parameter is the current path, the second the new path:
const fs = require('fs')
fs.rename('a', 'b', err => {
if (err) {
console.error(err)
return
}
//done
})
fs.renameSync() is the synchronous version:
const fs = require('fs')
try {
fs.renameSync('b', 'c')
} catch (err) {
console.error(err)
}
Remove a folder
Use fs.rmdir()
or fs.rmdirSync()
to remove a folder.
Removing a folder that has content can be more complicated than you need. You can pass the option { recursive: true }
to recursively remove the contents.
const fs = require('fs')
const dir = "c"
fs.rmdir(dir, { recursive: true }, (err) => {
if (err) {
throw err;
}
console.log(`${dir} is deleted!`);
});
NOTE: since Node.js v16.x the recursive option of the callback-based fs.rmdir is deprecated; use fs.rm instead to delete folders that have content in them:
const fs = require('fs')
const dir = "c"
fs.rm(dir, { recursive: true, force: true }, (err) => {
if (err) {
throw err;
}
console.log(`${dir} is deleted!`)
});
Or you can install and make use of the fs-extra module, which is very popular and well maintained.
It’s a drop-in replacement of the fs module
, which provides more features on top of it.
In this case the remove()
method is what you want.
Install it using
npm install fs-extra
and use it like this:
const fs = require('fs-extra')
const folder = '/Users/joe'
fs.remove(folder, err => {
  if (err) {
    console.error(err)
    return
  }
  //done
})
It can also be used with promises:
fs.remove(folder)
.then(() => {
//done
})
.catch(err => {
console.error(err)
})
or with async/await:
async function removeFolder(folder) {
try {
await fs.remove(folder)
//done
} catch (err) {
console.error(err)
}
}
const folder = '/Users/joe'
removeFolder(folder)
The Node.js fs module
The fs module provides a lot of very useful functionality to access and interact with the file system.
There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:
const fs = require('fs')
Once you do so, you have access to all its methods, which include:
- fs.access(): check if the file exists and Node.js can access it with its permissions
- fs.appendFile(): append data to a file. If the file does not exist, it's created
- fs.chmod(): change the permissions of a file specified by the filename passed. Related: fs.lchmod(), fs.fchmod()
- fs.chown(): change the owner and group of a file specified by the filename passed. Related: fs.fchown(), fs.lchown()
- fs.close(): close a file descriptor
- fs.copyFile(): copies a file
- fs.createReadStream(): create a readable file stream
- fs.createWriteStream(): create a writable file stream
- fs.link(): create a new hard link to a file
- fs.mkdir(): create a new folder
- fs.mkdtemp(): create a temporary directory
- fs.open(): open a file, returning a file descriptor
- fs.readdir(): read the contents of a directory
- fs.readFile(): read the content of a file. Related: fs.read()
- fs.readlink(): read the value of a symbolic link
- fs.realpath(): resolve relative file path pointers (., ..) to the full path
- fs.rename(): rename a file or folder
- fs.rmdir(): remove a folder
- fs.stat(): returns the status of the file identified by the filename passed. Related: fs.fstat(), fs.lstat()
- fs.symlink(): create a new symbolic link to a file
- fs.truncate(): truncate to the specified length the file identified by the filename passed. Related: fs.ftruncate()
- fs.unlink(): remove a file or a symbolic link
- fs.unwatchFile(): stop watching for changes on a file
- fs.utimes(): change the timestamp of the file identified by the filename passed. Related: fs.futimes()
- fs.watchFile(): start watching for changes on a file. Related: fs.watch()
- fs.writeFile(): write data to a file. Related: fs.write()
One peculiar thing about the fs module is that all the methods are asynchronous by default, but they can also work synchronously by appending Sync.
For example:
fs.rename()
fs.renameSync()
fs.write()
fs.writeSync()
This makes a huge difference in your application flow.
Node.js 10 includes experimental support for a promise based API.
For example let’s examine the fs.rename()
method. The asynchronous API is used with a callback:
const fs = require('fs')
fs.rename('before.json', 'after.json', err => {
if (err) {
return console.error(err)
}
//done
})
A synchronous API can be used like this, with a try/catch block to handle errors:
const fs = require('fs')
try {
fs.renameSync('before.json', 'after.json')
//done
} catch (err) {
console.error(err)
}
The key difference here is that the execution of your script will block
in the second example, until the file operation succeeded.
The Node.js path module
The path module provides a lot of very useful functionality to work with file paths.
There is no need to install it. Being part of the Node.js core, it can be used by simply requiring it:
const path = require('path')
This module provides path.sep, which gives the path segment separator (\ on Windows, / on Linux / macOS), and path.delimiter, which gives the path delimiter (; on Windows, : on Linux / macOS).
These are the path
methods:
path.basename()
Return the last portion of a path. A second parameter can filter out the file extension:
require('path').basename('/test/something') //something
require('path').basename('/test/something.txt') //something.txt
require('path').basename('/test/something.txt', '.txt') //something
path.dirname()
Return the directory part of a path:
require('path').dirname('/test/something') // /test
require('path').dirname('/test/something/file.txt') // /test/something
path.extname()
Return the extension part of a path
require('path').extname('/test/something') // ''
require('path').extname('/test/something/file.txt') // '.txt'
path.format()
Returns a path string from an object. This is the opposite of path.parse().
path.format() accepts an object as argument with the following keys:
- root: the root
- dir: the folder path starting from the root
- base: the file name + extension
- name: the file name
- ext: the file extension
root is ignored if dir is provided; ext and name are ignored if base exists.
// POSIX
require('path').format({ dir: '/Users/joe', base: 'test.txt' }) // '/Users/joe/test.txt'
require('path').format({ root: '/', name: 'test', ext: '.txt' }) // '/test.txt'
// WINDOWS
require('path').format({ dir: 'C:\\Users\\joe', base: 'test.txt' }) // 'C:\\Users\\joe\\test.txt'
path.isAbsolute()
Returns true if it’s an absolute path
require('path').isAbsolute('/test/something') // true
require('path').isAbsolute('./test/something') // false
path.join()
Joins two or more parts of a path:
const name = 'joe'
require('path').join('/', 'users', name, 'notes.txt') //'/users/joe/notes.txt'
path.normalize()
Tries to calculate the actual path when it contains relative specifiers like . or .., or double slashes:
require('path').normalize('/users/joe/..//test.txt') //'/users/test.txt'
path.parse()
Parses a path to an object with the segments that compose it:
- root: the root
- dir: the folder path starting from the root
- base: the file name + extension
- name: the file name
- ext: the file extension
Example:
require('path').parse('/users/test.txt')
results in
{
root: '/',
dir: '/users',
base: 'test.txt',
ext: '.txt',
name: 'test'
}
path.relative()
Accepts 2 paths as arguments. Returns the relative path
from the first path
to the second
, based on the current working directory.
Example:
require('path').relative('/Users/joe', '/Users/joe/test.txt') //'test.txt'
require('path').relative('/Users/joe', '/Users/joe/something/test.txt') //'something/test.txt'
path.resolve()
You can get the absolute path calculation of a relative path using path.resolve():
path.resolve('joe.txt') //'/Users/joe/joe.txt' if run from my home folder
By specifying a second parameter, resolve will use the first as a base for the second:
path.resolve('tmp', 'joe.txt') //'/Users/joe/tmp/joe.txt' if run from my home folder
If the first parameter starts with a slash, that means it’s an absolute path:
path.resolve('/etc', 'joe.txt') //'/etc/joe.txt'
The Node.js os module
This module provides many functions that you can use to retrieve information
from the underlying operating system and the computer the program runs on, and interact with it.
const os = require('os')
There are a few useful properties that tell us some key things related to handling files:
os.EOL gives the line delimiter sequence. It's \n on Linux and macOS, and \r\n on Windows.
Let’s now see the main methods that os
provides:
os.cpus()
Return information on the CPUs available on your system.
os.freemem()
Return the number of bytes that represent the free memory in the system.
console.log(os.freemem() / 1024 / 1024 / 1024); // GB
os.homedir()
Return the path to the home directory of the current user.
C:\Users\admin
os.networkInterfaces()
Returns the details of the network interfaces available on your system.
os.tmpdir()
Returns the path to the assigned temp folder.
C:\Users\admin\AppData\Local\Temp
os.totalmem()
Returns the number of bytes that represent the total memory available in the system.
console.log(os.totalmem() / 1024 / 1024 / 1024);
os.type()
Identifies the operating system: Linux on Linux, Darwin on macOS, Windows_NT on Windows.
os.uptime()
Returns the number of seconds the computer has been running since it was last rebooted.
os.userInfo()
Returns an object that contains the current username, uid, gid, shell, and homedir:
{
uid: -1,
gid: -1,
username: 'admin',
homedir: 'C:\\Users\\admin',
shell: null
}
The Node.js events module
The events
module provides us the EventEmitter
class, which is key to working with events in Node.js.
const EventEmitter = require('events')
const door = new EventEmitter()
The event listener has these built-in events:
- newListener: when a listener is added
- removeListener: when a listener is removed
The most useful methods are on(), which adds a callback function that's executed when an event is emitted, and emit(), which triggers an event.
Node.js Buffers
What is a buffer?
A buffer is an area of memory. Most JavaScript developers are much less familiar with this concept than programmers using system programming languages (like C, C++, or Go), which interact directly with memory every day.
It represents a fixed-size chunk of memory (it can't be resized) allocated outside of the V8 JavaScript engine.
You can think of a buffer like an array of integers, which each represent a byte of data.
It is implemented by the Node.js Buffer class.
Why do we need a buffer?
Buffers were introduced to help developers ⭐️deal with binary data, in an ecosystem that traditionally only dealt with strings rather than binaries.
Buffers in Node.js are not related to the concept of buffering data. That is what happens when a stream processor receives data faster than it can digest.
How to create a buffer
A buffer is created using the Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe() methods.
const buf = Buffer.from('Hey!')
- Buffer.from(array)
- Buffer.from(arrayBuffer[, byteOffset[, length]])
- Buffer.from(buffer)
- Buffer.from(string[, encoding])
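The variants above can be sketched as follows; the byte values are the UTF-8 codes for 'Hey!':

```javascript
const fromString = Buffer.from('Hey!')            // from a string (UTF-8 by default)
const fromArray = Buffer.from([72, 101, 121, 33]) // from an array of bytes
const fromBuffer = Buffer.from(fromString)        // a copy of another buffer

console.log(fromArray.toString())  // Hey!
console.log(fromBuffer.toString()) // Hey!

// the copy is independent: mutating it leaves the original intact
fromBuffer[0] = 74 // 'J'
console.log(fromString.toString()) // Hey!
```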
You can also just initialize the buffer passing the size. This creates a 1KB buffer:
const buf = Buffer.alloc(1024)
//or
const buf = Buffer.allocUnsafe(1024)
While both alloc and allocUnsafe allocate a Buffer of the specified size in bytes, the Buffer created by alloc will be initialized with zeroes and the one created by allocUnsafe will be uninitialized.
This means that while allocUnsafe would be quite fast in comparison to alloc, the allocated segment of memory may contain old data which could potentially be 🚫sensitive.
Older data, if present in the memory, can be accessed or leaked when the Buffer memory is read. This is what really makes allocUnsafe unsafe, and extra care must be taken while using it.
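You can see the difference safely by inspecting an alloc()ed buffer, and make allocUnsafe() deterministic by overwriting it before reading:

```javascript
// alloc() zero-fills the memory it hands back
const safe = Buffer.alloc(4)
console.log(safe) // <Buffer 00 00 00 00>

// allocUnsafe() does not; fill it before reading to avoid
// exposing whatever bytes were left over in memory
const fast = Buffer.allocUnsafe(4)
fast.fill(0)
console.log(fast.equals(safe)) // true
```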
Using a buffer
Access the content of a buffer
A buffer, being an array of bytes, can be accessed like an array:
const buf = Buffer.from('Hey!')
console.log(buf[0]) //72
console.log(buf[1]) //101
console.log(buf[2]) //121
Those numbers are the UTF-8 bytes that identify the characters in the buffer (H → 72, e → 101, y → 121). This happens because Buffer.from() uses UTF-8 by default. Keep in mind that some characters may occupy more than one byte in the buffer (é → 195 169).
You can print the full content of the buffer using the toString()
method:
console.log(buf.toString())
buf.toString()
also uses UTF-8 by default.
Notice that if you initialize a buffer with Buffer.allocUnsafe(), the allocated memory is not cleared and may contain old data, not an empty buffer!
Get the length of a buffer
Use the length
property:
const buf = Buffer.from('Hey!')
console.log(buf.length)
Iterate over the contents of a buffer
const buf = Buffer.from('Hey!')
for (const item of buf) {
console.log(item) //72 101 121 33
}
Changing the content of a buffer
You can write to a buffer a whole string of data by using the write()
method:
const buf = Buffer.alloc(4)
buf.write('Hey!')
Just like you can access a buffer with an array syntax, you can also set the contents of the buffer in the same way:
const buf = Buffer.from('Hey!')
buf[1] = 111 //o in UTF-8
console.log(buf.toString()) //Hoy!
Slice a buffer
If you want to create a partial visualization of a buffer, you can create a slice. A slice is not a copy: the original buffer is still the source of truth. If that changes, your slice changes.
Use the subarray()
method to create it. The first parameter is the starting position, and you can specify an optional second parameter with the end position:
const buf = Buffer.from('Hey!')
buf.subarray(0).toString() //Hey!
const slice = buf.subarray(0, 2)
console.log(slice.toString()) //He
buf[1] = 111 //o
console.log(slice.toString()) //Ho
Copy a buffer
Copying a buffer is possible using the set()
method:
const buf = Buffer.from('Hey!')
let bufcopy = Buffer.alloc(4) //allocate 4 bytes
bufcopy.set(buf)
By default you copy the whole buffer. If you only want to copy a part of the buffer, you can use .subarray() and the offset argument that specifies an offset to write to:
const buf = Buffer.from('Hey?')
let bufcopy = Buffer.from('Moo!')
bufcopy.set(buf.subarray(1, 3), 1) // ey
bufcopy.toString() //'Mey!'
Node.js Streams
What are streams
Streams
are one of the fundamental concepts that power Node.js applications.
They are a way to handle reading/writing files,
network communications
, or any kind of end-to-end information exchange
in an efficient way.
Streams
are not a concept unique to Node.js. They were introduced in the Unix operating system decades ago, and programs can interact with each other passing streams through the pipe operator (|
).
For example, in the traditional way, when you tell the program to read a file, the file is read into memory, from start to finish, and then you process it.
Using streams you read it ⭐️ piece by piece, processing its content without keeping it all in memory.
The Node.js stream module provides the foundation upon which all streaming APIs are built. All streams are instances of EventEmitter.
Why streams
Streams basically provide two major advantages over using other data handling methods:
- Memory efficiency: you don’t need to load large amounts of data in memory before you are able to process it
- Time efficiency: it takes way less time to start processing data, since you can start processing as soon as you have it, rather than waiting till the whole data payload is available
An example of a stream
A typical example is reading files from a disk.
Using the Node.js fs
module, you can read a file, and serve it over HTTP when a new connection is established to your HTTP server:
const http = require('http')
const fs = require('fs')
const server = http.createServer(function (req, res) {
res.statusCode = 200
res.setHeader('Content-Type', 'text/html;charset=utf8')
fs.readFile(__dirname + '/haha.txt', 'utf8' , (err, data) => {
res.end(data)
})
})
server.listen(3000)
readFile() reads the full contents of the file, and invokes the callback function when it's done.
res.end(data) in the callback will return the file contents to the HTTP client.
If the file is big, the operation will take quite a bit of time. Here is the same thing written using streams:
const http = require('http')
const fs = require('fs')
const server = http.createServer((req, res) => {
res.statusCode = 200
res.setHeader('Content-Type', 'text/html;charset=utf8')
const stream = fs.createReadStream(__dirname + '/haha.txt')
stream.pipe(res)
})
server.listen(3000)
Instead of waiting until the file is fully read, we start streaming it to the HTTP client as soon as we have a chunk of data ready to be sent.
pipe()
The above example uses the line stream.pipe(res): the pipe() method is called on the file stream.
What does this code do? It takes the 🉐source, and pipes it into a destination.
You call it on the source stream, so in this case, the file stream is piped to the HTTP response.
The return value of the pipe() method is the 🎢destination stream, which is a very convenient thing that lets us chain multiple pipe() calls, like this:
src.pipe(dest1).pipe(dest2)
This construct is the same as doing
src.pipe(dest1)
dest1.pipe(dest2)
Streams-powered Node.js APIs
Due to their advantages, many Node.js core modules provide native stream handling capabilities, most notably:
- process.stdin returns a stream connected to stdin
- process.stdout returns a stream connected to stdout
- process.stderr returns a stream connected to stderr
- fs.createReadStream() creates a readable stream to a file
- fs.createWriteStream() creates a writable stream to a file
- net.connect() initiates a stream-based connection
- http.request() returns an instance of the http.ClientRequest class, which is a writable stream
- zlib.createGzip() compress data using gzip (a compression algorithm) into a stream
- zlib.createGunzip() decompress a gzip stream
- zlib.createDeflate() compress data using deflate (a compression algorithm) into a stream
- zlib.createInflate() decompress a deflate stream
Different types of streams
There are four classes of streams:
- Readable: a stream you can pipe from, but not pipe into (you can receive data, but not send data to it). When you push data into a readable stream, it is buffered, until a consumer starts to read the data.
- Writable: a stream you can pipe into, but not pipe from (you can send data, but not receive from it)
- Duplex: a stream you can both pipe into and pipe from, basically a combination of a Readable and Writable stream
- Transform: a Transform stream is similar to a Duplex, but the output is a transform of its input
Reading data from a stream
var fs = require("fs");
var data = '';
// create a readable stream
var readerStream = fs.createReadStream('haha.txt');
// set the encoding to utf8
readerStream.setEncoding('UTF8');
// handle the stream events --> data, end, and error
readerStream.on('data', function(chunk) {
  data += chunk;
});
readerStream.on('end', function(){
  console.log(data);
});
readerStream.on('error', function(err){
  console.log(err.stack);
});
console.log("Program finished");
Writing to a stream
var fs = require("fs");
var data = '!!sadwdsadfauiefbkuaf';
// create a writable stream, writing into the file haha.txt
var writerStream = fs.createWriteStream('haha.txt');
// write the data with utf8 encoding
writerStream.write(data, 'UTF8');
// mark the end of the file
writerStream.end();
// handle the stream events --> finish, error
writerStream.on('finish', function() {
  console.log("Write completed.");
});
writerStream.on('error', function(err){
  console.log(err.stack);
});
console.log("Program finished");
Piping streams
A pipe provides a mechanism to channel an output stream into an input stream. It's typically used to take data from one stream and pass it to another.
var fs = require("fs");
// create a readable stream
var readerStream = fs.createReadStream('haha.txt');
// create a writable stream
var writerStream = fs.createWriteStream('hoho.txt');
// pipe the read and write operations:
// read the content of haha.txt and write it into hoho.txt (readerStream => writerStream)
readerStream.pipe(writerStream);
console.log("Program finished");
Chaining streams
Chaining is a mechanism for connecting the output of one stream to another stream, creating a chain of multiple stream operations. It's typically used with pipe operations.
Compressing a file
var fs = require("fs");
var zlib = require('zlib');
// compress haha.txt into haha.txt.gz
fs.createReadStream('haha.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('haha.txt.gz'));
console.log("File compression completed.");
Decompressing a file
var fs = require("fs");
var zlib = require('zlib');
// decompress haha.txt.gz into output.txt
fs.createReadStream('haha.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('output.txt'));
console.log("File decompression completed.");
Node.js, the difference between development and production
You can have different configurations for production
and development
environments.
Node.js assumes it’s always running in a development
environment.
You can signal Node.js that you are running in production by setting the NODE_ENV=production
environment variable.
This is usually done by executing the command
export NODE_ENV=production
(the export command is valid only for Unix shells; on Windows, use set instead of export)
in the shell, but it’s better to put it in your shell configuration file
(e.g. .bash_profile
with the Bash shell) because otherwise the setting does not persist in case of a system restart.
You can also apply the environment variable by prepending it to your application initialization command:
NODE_ENV=production node app.js
This environment variable is a convention that is widely used in external libraries as well.
Setting the environment to production
generally ensures that
- logging is kept to a minimum, essential level
- more caching levels take place to optimize performance
For example Pug, the templating library used by Express, compiles in debug mode if NODE_ENV is not set to production. Express views are compiled in every request in development mode, while in production they are cached.
You can use conditional statements to execute code in different environments:
if (process.env.NODE_ENV === "development") {
//...
}
if (process.env.NODE_ENV === "production") {
//...
}
if (['production', 'staging'].indexOf(process.env.NODE_ENV) >= 0) {
  //...
}
For example, in an Express app, you can use this to set different error handlers per environment:
if (process.env.NODE_ENV === "development") {
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }))
}
if (process.env.NODE_ENV === "production") {
  app.use(express.errorHandler())
}