Write File to S3 using Lambda

S3 can be used as a content repository for objects, and files in a bucket often need to be read, processed, and written back. A common example in a serverless architecture is to hold incoming files in one bucket, process them with a Lambda function, and write the processed files to another bucket. The processing may be a simple file conversion, for example from XML to JSON.

AWS S3 Online Course

Check out the AWS S3 online course. It covers beginner and advanced topics on S3, including lifecycle policies, event notifications, replication, security, logging and monitoring, as well as accessing data in S3, performance optimisation and website hosting.

To write files to S3, the Lambda function needs to be set up with an execution role that is allowed to write objects to S3. The function also needs to know the bucket name, and the key name (which includes the file name) must either be known in advance or constructed at runtime.
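The execution role needs a policy that grants write access to the target bucket. A minimal policy might look like the following (the bucket name `my-bucket` is a placeholder for your own bucket):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
```

Scoping the `Resource` to the bucket's objects (the `/*` suffix) rather than `*` keeps the role limited to writing into that one bucket.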

        var AWS = require('aws-sdk');
        var s3 = new AWS.S3();

        exports.handler = (event, context, callback) => {
            // Bucket and folder names are read from environment variables;
            // the file name is fixed here for illustration
            var bucketName = process.env.bucketName;
            var folder = process.env.folder;
            var filename = 'sample.txt';
            var keyName = getKeyName(folder, filename);
            var content = 'This is a sample text file';
            var params = { Bucket: bucketName, Key: keyName, Body: content };
            s3.putObject(params, function (err, data) {
                if (err) {
                    console.log('Error writing object: ' + err);
                    callback(err);
                } else {
                    console.log('Successfully saved object to ' + bucketName + '/' + keyName);
                    callback(null, data);
                }
            });
        };

        function getKeyName(folder, filename) {
            return folder + '/' + filename;
        }
Once the role has been set up, create the Lambda function and deploy the code. Then set the environment variables for the bucket and folder names, and the function is ready to use.
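These deployment steps can be sketched with the AWS CLI. The function name, role ARN, runtime, and zip file below are illustrative placeholders; substitute your own values:

```shell
# Create the function from a zip of the code (index.js exporting "handler")
aws lambda create-function \
    --function-name write-to-s3 \
    --runtime nodejs18.x \
    --role arn:aws:iam::123456789012:role/lambda-s3-write-role \
    --handler index.handler \
    --zip-file fileb://function.zip

# Set the environment variables the handler reads
aws lambda update-function-configuration \
    --function-name write-to-s3 \
    --environment "Variables={bucketName=my-bucket,folder=processed}"
```

The same configuration can of course be done from the Lambda console under the function's Configuration tab.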

From the above code, the important call to look into is s3.putObject, which writes the file to S3 using the Bucket, Key and Body supplied in the params object.
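The params object passed to s3.putObject can be built and unit-tested without touching AWS at all. A small sketch, assuming the same getKeyName helper and a hypothetical buildPutParams wrapper (the bucket and file names are illustrative):

```javascript
// Mirrors the key construction used in the handler
function getKeyName(folder, filename) {
    return folder + '/' + filename;
}

// Pure helper that assembles the putObject params
function buildPutParams(bucketName, folder, filename, content) {
    return { Bucket: bucketName, Key: getKeyName(folder, filename), Body: content };
}

var params = buildPutParams('processed-bucket', 'output', 'sample.json', '{}');
console.log(params.Key); // output/sample.json
```

Keeping this logic in pure functions makes the handler itself a thin wrapper, which is easier to test locally before deploying.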