Failing to write to DynamoDB using Lambda function with an S3 trigger



I'm trying to write a Lambda function that is triggered whenever a new image is written to an S3 bucket. The trigger is already set up on the correct S3 bucket, so I know that's not the issue.



The Lambda function's role has s3:GetObject and dynamodb:* permissions (which should be full access for DynamoDB writes).



The goal here is simply to write to a table I've already created named 'art', inserting a primary key value (imageTitle) that I'm trying to obtain in var imageName. Then I want to attach to that key an attribute holding the URL of the image, which I'm storing in var url.



This is just a simple exercise I'm trying to get down so that I can move on to more complex DB writes. But as of right now, nothing is being written to the art table, even though I am adding new objects to the S3 bucket, which sets off the trigger. Is it possible that the Lambda function isn't deployed? I wrote it directly in the inline editor of the Lambda console and saved it.



Here's the code:



const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });
const s3 = new AWS.S3();

exports.handler = async (event, context, callback) => {
    //var sourceBucket = event.Records[0].s3.bucket.name;
    var sourceKey = event.Records[0].s3.object.key;
    var imageName = sourceKey.stringify;

    //generate imageURL
    var url = "https://s3.amazonaws.com/myapp-20181030214040-deployment/public/" + imageName;

    var params = {
        TableName: 'art',
        Item: {
            imageTitle: imageName,
            imageURL: url
        }
    };
    docClient.put(params, function(err, data) {
        if (err) console.log(err);
        else console.log(data);
    });
};
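
For reference, the slice of the S3 put event that the handler reads looks roughly like this (a trimmed sketch; bucket and key values are illustrative, and the key arrives URL-encoded):

const exampleEvent = {
    Records: [
        {
            eventName: "ObjectCreated:Put",
            s3: {
                bucket: { name: "myapp-20181030214040-deployment" }, // event.Records[0].s3.bucket.name
                object: { key: "public/example-image.jpg" }          // event.Records[0].s3.object.key
            }
        }
    ]
};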









amazon-web-services aws-lambda amazon-dynamodb aws-sdk






asked Nov 15 '18 at 7:15









KSamra

  • Can you share the CloudWatch logs? By the way, saving your Lambda function is enough to have it ready to run: functions are not deployed in the classical sense, and a new instance is created every time the function is invoked.

    – vahdet
    Nov 15 '18 at 7:19











  • Click on your Lambda, open the Monitoring tab, then click the 'View logs in CloudWatch' button. Find and open the most recent log stream; this is where your logs are written. Add some more console.log statements to your function if needed to identify which line of code is failing.

    – Stu
    Nov 15 '18 at 8:45











  • Is there anything specific I should be seeing in the CloudWatch logs? I have a Start, End, and Report log, each with the same requestId. Does my approach to placing an item in the table look correct to you, though? I've had a hard time understanding exactly how to specify which attribute should be the primary key and which should be attributes of that key, so I tried placing the primary key first. I'll add some console.log statements tomorrow morning and see what I can find.

    – KSamra
    Nov 15 '18 at 9:11

















1 Answer

The problem here is that you're using an async Lambda handler but returning nothing that is awaitable. This means your Lambda terminates before the docClient.put operation is ever sent.



With an async handler you need to await and return; for example, you could change this snippet to:



const data = await docClient.put(params).promise();
return data;


Or, instead, you could use the callback approach (note that the handler signature no longer contains async):



exports.handler = (event, context, callback) => {
    // ... the rest of the code as was ...
    docClient.put(params, function(err, data) {
        if (err) {
            console.log(err);
            callback(err); // return 'lambda invoke failed because of the error' - will cause S3 to retry three times
        } else {
            console.log(data);
            callback(null, data); // return 'nothing failed'; n.b. the S3 trigger ignores whatever you return
        }
    });
};
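
Putting the await approach into the original handler, a full version might look like the following sketch (assumptions: the art table's partition key is imageTitle, and the image name should come from the object key itself, since sourceKey.stringify in the question evaluates to undefined):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = async (event) => {
    const sourceKey = event.Records[0].s3.object.key;
    // S3 delivers the key URL-encoded, so decode it before using it as a title
    const imageName = decodeURIComponent(sourceKey.replace(/\+/g, ' '));

    // URL construction kept as in the question
    const url = "https://s3.amazonaws.com/myapp-20181030214040-deployment/public/" + imageName;

    const params = {
        TableName: 'art',
        Item: {
            imageTitle: imageName, // the partition key attribute must be present and defined
            imageURL: url
        }
    };

    // await the write so the handler doesn't finish before the request is sent
    return docClient.put(params).promise();
};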





answered Nov 15 '18 at 9:18
thomasmichaelwallace

  • Thank you. Now that I've gotten rid of the async handler, the function actually runs according to the CloudWatch logs. However, it fails with the exception: ValidationException: One or more parameter values were invalid: Missing the key imageTitle in the item. But I believe I am supplying a value for the key when I assign imageTitle: imageName? Is this not how you assign a value to a primary key?

    – KSamra
    Nov 15 '18 at 22:46












  • No - you need to use Key: { imageTitle: imageName }, Item: { imageURL: url }, TableName: 'art'

    – thomasmichaelwallace
    Nov 15 '18 at 22:59
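
For reference, in the v2 DocumentClient the two call shapes differ: put expects the table's key attributes inside Item, while update takes them separately under Key. A sketch of both, using the question's table and attribute names (note that the ValidationException above also occurs when the key attribute is present but its value is undefined, which is what sourceKey.stringify produces):

// put: the partition key attribute (imageTitle) lives inside Item
docClient.put({
    TableName: 'art',
    Item: { imageTitle: imageName, imageURL: url }
}, callback);

// update: the key goes under Key; other attributes are set via an update expression
docClient.update({
    TableName: 'art',
    Key: { imageTitle: imageName },
    UpdateExpression: 'SET imageURL = :u',
    ExpressionAttributeValues: { ':u': url }
}, callback);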










