
s3-upload-stream

A pipeable write stream which uploads to Amazon S3 using the multipart file upload API.


Changelog

1.0.6 (2014-10-20)

Removing global state, and adding pause and resume functionality.

Historical Changelogs

Why use this stream?

  • This upload stream does not require you to know the length of your content prior to beginning uploading. Many other popular S3 wrappers such as Knox also allow you to upload streams to S3, but they require you to specify the content length. This is not always feasible.
  • By piping content to S3 via the multipart file upload API you can keep memory usage low even when operating on a stream that is GB in size. Many other libraries actually store the entire stream in memory and then upload it in one piece. This stream avoids high memory usage by flushing the stream to S3 in 5 MB parts, so that it should only ever store 5 MB of the stream data at a time.
  • This package is designed to use the official Amazon SDK for Node.js, helping keep it small and efficient. For maximum flexibility you pass in the aws-sdk client yourself, allowing you to use a uniform version of the AWS SDK throughout your code base.
  • You can provide options for the upload call directly to do things like set server side encryption, reduced redundancy storage, or access level on the object, which other similar streams are lacking.
  • Emits "part" events which expose the amount of incoming data received by the writable stream versus the amount of data that has been uploaded via the multipart API so far, allowing you to create a progress bar if that is a requirement (see the sketch after this list).
  • Support for pausing and later resuming in-progress multipart uploads.
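
For example, given an upload stream like the one created in the full example below, a rough progress log could look like the following sketch. The receivedSize and uploadedSize fields on the details object are as reported by this module's "part" events, but treat the exact payload shape as an assumption and verify it against your version:

// Rough progress-logging sketch; assumes 'details' exposes receivedSize and
// uploadedSize byte counts for the stream so far.
upload.on('part', function (details) {
  var uploadedMB = (details.uploadedSize / 1048576).toFixed(1);
  var receivedMB = (details.receivedSize / 1048576).toFixed(1);
  console.log('Uploaded ' + uploadedMB + ' MB of ' + receivedMB + ' MB received');
});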

Limits

  • The multipart upload API does not accept parts less than 5 MB in size. So although this stream emits "part" events which can be used to show progress, the progress is not very granular, as the events are only per part. By default this means that you will receive an event for each 5 MB.
  • The Amazon SDK has a limit of 10,000 parts when doing a multipart upload. Since the part size is currently set to 5 MB this means that your stream will fail to upload if it contains more than 50 GB of data. This can be solved by using the 'stream.maxPartSize()' method of the writable stream to set the maximum size of an upload part, as documented below. By increasing this value you should be able to save streams that are many TB in size; the sketch after this list walks through the arithmetic.
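
To make the part-size arithmetic concrete, here is a small standalone sketch (the helper name is purely illustrative) that picks a part size large enough to keep a stream of a given total size under the 10,000-part cap:

// Illustrative sizing helper: choose a part size that keeps the upload
// under the 10,000-part multipart limit while respecting the 5 MB minimum.
var MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB, the S3 minimum
var MAX_PARTS = 10000;

function partSizeFor(totalBytes) {
  return Math.max(MIN_PART_SIZE, Math.ceil(totalBytes / MAX_PARTS));
}

// 5 MB * 10,000 parts = ~50 GB at the default part size.
// A 2 TB stream needs parts of roughly 210 MB:
console.log(partSizeFor(2 * 1024 * 1024 * 1024 * 1024)); // 219902326 bytes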

Example

                

var AWS  = require('aws-sdk'),
    zlib = require('zlib'),
    fs   = require('fs');

// Set the client to be used for the upload.
AWS.config.loadFromPath('./config.json');
var s3Stream = require('s3-upload-stream')(new AWS.S3());

// Create the streams.
var read = fs.createReadStream('/path/to/a/file');
var compress = zlib.createGzip();
var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Optional configuration.
upload.maxPartSize(20971520); // 20 MB parts
upload.concurrentParts(5);

// Handle errors.
upload.on('error', function (error) {
  console.log(error);
});

// Handle progress; a "part" event fires for each part uploaded.
upload.on('part', function (details) {
  console.log(details);
});

// Handle the completed upload.
upload.on('uploaded', function (details) {
  console.log(details);
});

// Pipe the incoming file stream through compression and up to S3.
read.pipe(compress).pipe(upload);

Usage

Before uploading you must configure the S3 client for s3-upload-stream to use. Please note that this module has only been tested with AWS SDK 2.0 and greater.

This module does not include the AWS SDK itself. Rather you must require the AWS SDK in your own application code, instantiate an S3 client and then supply it to s3-upload-stream.

The primary advantage of this is that rather than being stuck with the version of the AWS SDK that ships with s3-upload-stream, you can ensure that s3-upload-stream uses whichever version of the SDK you want.

When setting up the S3 client the recommended approach for credential management is to set your AWS API keys using environment variables or IAM roles.

If you are following this approach then you can configure the S3 client very simply:

                

var AWS = require('aws-sdk'),
    s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

However, some environments may require you to keep your credentials in a file, or hardcoded. In that case you can use the following form:

                

var AWS = require('aws-sdk');

// Load credentials from a JSON file on disk.
AWS.config.loadFromPath('./config.json');

// Or hardcode them directly.
AWS.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});

var s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

client.upload(destination)

Create an upload stream that will upload to the specified destination. The upload stream is returned immediately.

The destination details is an object in which you can specify many different destination properties enumerated in the AWS S3 documentation.

Example:

                

var AWS = require('aws-sdk'),
    fs = require('fs'),
    s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

var read = fs.createReadStream('/path/to/a/file');

var upload = s3Stream.upload({
  Bucket: "bucket-name",
  Key: "key-name",
  ACL: "public-read",
  StorageClass: "REDUCED_REDUNDANCY",
  ContentType: "binary/octet-stream"
});

read.pipe(upload);

client.upload(destination, [session])

Resume an incomplete multipart upload from a previous session by providing a session object with an upload ID, and an ETag and part number for each part. The destination details are the same as above.

Example:

                

var AWS = require('aws-sdk'),
    fs = require('fs'),
    s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

var read = fs.createReadStream('/path/to/a/file');

var upload = s3Stream.upload(
  {
    Bucket: "bucket-name",
    Key: "key-name",
    ACL: "public-read",
    StorageClass: "REDUCED_REDUNDANCY",
    ContentType: "binary/octet-stream"
  },
  {
    UploadId: "f1j2b47238f12984f71b2o8347f12",
    Parts: [
      {
        ETag: "3k2j3h45t9v8aydgajsda",
        PartNumber: 1
      },
      {
        ETag: "kjgsdfg876sd8fgk3j44t",
        PartNumber: 2
      }
    ]
  }
);

read.pipe(upload);

Stream Methods

The following methods can be called on the stream returned by client.upload()

stream.pause()

Pause an active multipart upload stream.

Calling pause() will immediately:

  • stop accepting data from an input stream,
  • stop submitting new parts for upload, and
  • emit a pausing event with the number of parts that are still mid-upload.

When the mid-upload parts are finished, a paused event will fire, including an object with UploadId and Parts data that can be used to resume the upload in a later session.

stream.resume()

Resume a paused multipart upload stream.

Calling resume() will immediately:

  • resume accepting data from an input stream,
  • resume submitting new parts for upload, and
  • emit a resume event back to any listeners.

It is safe to call resume() at any time after pause(). If the stream is between pausing and paused, then resume() will resume data flow and the paused event will not be fired.
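
As a rough sketch of that flow, assuming the paused event hands its listener the UploadId/Parts object described above and that fs has been required as in the earlier examples (persisting the session to a local JSON file is just a placeholder choice):

// Capture the session details once the in-flight parts have finished.
upload.on('paused', function (session) {
  // Assumed shape: an object containing UploadId and Parts, as described above.
  fs.writeFileSync('./upload-session.json', JSON.stringify(session));
});

upload.pause();

// Later in the same process you can simply resume the stream:
upload.resume();

// Or, in a later session, read the saved object back and pass it as the second
// argument to client.upload(), as shown in the resume example above.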

stream.maxPartSize(sizeInBytes)

Used to adjust the maximum amount of stream data that will be buffered in memory prior to flushing. The lowest possible value, and the default value, is 5 MB. It is not possible to set this value any lower than 5 MB due to Amazon S3 restrictions, but there is no hard upper limit. The higher the value you choose, the more stream data will be buffered in memory before flushing to S3.

The main reason for setting this to a higher value instead of using the default is if you have a stream with more than 50 GB of data, and therefore need larger part sizes in order to flush the entire stream while also staying within Amazon's upper limit of 10,000 parts for the multipart upload API.

                

var AWS = require('aws-sdk'),
    fs = require('fs'),
    s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

var read = fs.createReadStream('/path/to/a/file');

var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Buffer up to 20 MB per part before flushing to S3.
upload.maxPartSize(20971520);

read.pipe(upload);

stream.concurrentParts(numberOfParts)

Used to adjust the number of parts that are concurrently uploaded to S3. By default this is just one at a time, to keep memory usage low and allow the upstream to deal with backpressure. However, in some cases you may wish to drain the stream that you are piping in quickly, and then issue concurrent upload requests to upload multiple parts.

Keep in mind that total memory usage will be at least maxPartSize * concurrentParts, as each concurrent part will be maxPartSize large, so it is not recommended that you set both maxPartSize and concurrentParts to high values, or your process will be buffering large amounts of data in its memory.

                

var AWS = require('aws-sdk'),
    fs = require('fs'),
    s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());

var read = fs.createReadStream('/path/to/a/file');

var upload = s3Stream.upload({
  "Bucket": "bucket-name",
  "Key": "key-name"
});

// Upload up to 5 parts concurrently.
upload.concurrentParts(5);

read.pipe(upload);

Tuning configuration of the AWS SDK

The following configuration tuning can help prevent errors when using less reliable internet connections (such as 3G data if you are using Node.js on the Tessel) by causing the AWS SDK to detect upload timeouts and retry.

                

var AWS = require('aws-sdk');

AWS.config.httpOptions = {timeout: 5000};
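
On especially unreliable connections it can also help to raise the SDK's retry count. maxRetries is a standard aws-sdk v2 configuration option; the value below is only an illustrative choice, not a recommendation from this module:

var AWS = require('aws-sdk');

// Ask the SDK to retry failed requests more times than its default.
AWS.config.update({maxRetries: 10});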

Installation

npm install s3-upload-stream

Running Tests

npm test

License

(The MIT License)

Copyright (c) 2014 Nathan Peck nathan@storydesk.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Source: https://www.npmjs.com/package/s3-upload-stream
