This is Part Three of a three-part series on Migrating an Amplify Backend to Serverless Framework. You can find Part One here and Part Two here.
This is the third and final part of our series where we're showing the steps needed to migrate an Amplify backend to Serverless Framework. After scaffolding the project in the first part, and setting up the GraphQL API in the second part, what now remains is setting up final touches like DynamoDB triggers and S3 buckets. Let's get to it.
DynamoDB Triggers
A DynamoDB trigger allows you to invoke a lambda every time a DynamoDB table is updated; the lambda receives the modified row in its input event. In our application, we will be using this to add a new notification to the NotificationQueue table every time an Item row is created that has remindAt set. For this purpose, let's create that lambda, which will be just a placeholder since we're focusing mainly on the Serverless configuration.
Copy the contents of handlers/process-queue/index.js to handlers/add-to-queue/index.js. This lambda has the following content:
'use strict';

module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v3.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};
Now we need to make a slight modification to our Item table resource to add a stream configuration. Having a stream configured on the DynamoDB table is a prerequisite for a trigger to be invoked on row modification. The stream is configured by adding a StreamSpecification property, containing a StreamViewType, to the table configuration.
The table configuration for the Item resource now becomes:
Resources:
  # ...other resources
  ItemTableResource:
    Type: AWS::DynamoDB::Table
    Properties:
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
        - AttributeName: listId
          AttributeType: S
      GlobalSecondaryIndexes:
        - IndexName: byList
          KeySchema:
            - AttributeName: listId
              KeyType: HASH
          Projection:
            ProjectionType: ALL
      BillingMode: PAY_PER_REQUEST
      TableName: ${self:provider.environment.ITEM_TABLE_NAME}
The only remaining part is to connect the lambda and the stream configuration together. This is done in the functions property of the Serverless configuration:
functions:
  # ... other functions
  addToQueue:
    handler: handlers/add-to-queue/index.handler
    events:
      - stream:
          type: dynamodb
          arn: !GetAtt ItemTableResource.StreamArn
We have the standard lambda function definition, as well as an events property that hooks the lambda up to the stream of the Item table. Again, we use an intrinsic function, in this case !GetAtt, to fetch the ARN (Amazon Resource Name) of the stream. With this in place, the lambda is now hooked to the Item data stream and will begin listening to modification events.
One such event might look like this:
{
  "Records": [
    {
      "awsRegion": "us-east-1",
      "dynamodb": {
        "ApproximateCreationDateTime": 1632502576,
        "Keys": {
          "id": {
            "S": "..."
          }
        },
        "NewImage": {
          "__typename": {
            "S": "Item"
          },
          "id": {
            "S": "..."
          },
          "remindAt": {
            "S": "2021-09-24T16:56:15.182Z"
          },
          "cognitoUserId": {
            "S": "..."
          },
          "listId": {
            "S": "..."
          },
          "title": {
            "S": "Some title"
          },
          "notes": {
            "S": "Item notes"
          }
        },
        "SequenceNumber": "853010500000000020163575159",
        "SizeBytes": 356,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventID": "...",
      "eventName": "INSERT",
      "eventSource": "aws:dynamodb",
      "eventSourceARN": "arn:aws:dynamodb:us-east-1:...",
      "eventVersion": "1.1"
    }
  ]
}
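To make the placeholder more concrete, here is a sketch of the filtering logic the real addToQueue lambda would eventually need. The helper name selectReminders is hypothetical and not part of the project's code; it only assumes the field names visible in the sample event above (eventName, NewImage, id, remindAt, cognitoUserId) and picks out newly inserted Item rows that have remindAt set:

```javascript
'use strict';

// Hypothetical helper: extract the rows that should become notifications.
// DynamoDB stream records wrap every attribute in a type descriptor
// (e.g. { "S": "..." }), so we unwrap only the fields we need.
const selectReminders = (event) =>
  (event.Records || [])
    .filter(
      (record) =>
        record.eventName === 'INSERT' &&
        record.dynamodb.NewImage &&
        record.dynamodb.NewImage.remindAt
    )
    .map((record) => ({
      itemId: record.dynamodb.NewImage.id.S,
      remindAt: record.dynamodb.NewImage.remindAt.S,
      cognitoUserId: record.dynamodb.NewImage.cognitoUserId.S,
    }));

module.exports = { selectReminders };
```

The handler would then write each returned entry to the NotificationQueue table; MODIFY and REMOVE events, and inserts without remindAt, are ignored.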
Setting up S3 Buckets
In case a user of our app would like to upload an image as part of the todo item's note, we could upload that image to an S3 bucket and then serve it from there when displaying the note in the UI. For this to work, we would need to provision an S3 bucket through our Serverless configuration.
An S3 bucket is a resource, just like the DynamoDB tables in the configuration. We need to give it a name, so let's configure that in the environment first:
provider:
  # ...
  environment:
    # ... other environment variables...
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images
The S3 bucket name is composed of the service name and stage, suffixed with the string "-images". In our case, for the dev environment, the bucket would be named amplified-todo-api-dev-images.
Now we need to configure the resources for this S3 bucket. We can append the following configuration to the end of the Resources section:
Resources:
  # ...other resources
  ImageBucketResource:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:provider.environment.S3_BUCKET_NAME}
  ImageBucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: PublicRead
            Effect: Allow
            Principal: '*'
            Action:
              - 's3:GetObject'
            Resource: !Join ['', ['arn:aws:s3:::', !Ref ImageBucketResource, /*]]
      Bucket:
        Ref: ImageBucketResource
In the above configuration, we create a resource for the bucket, and a policy specifying public read permissions for that resource. Note how ImageBucketPolicy references ImageBucketResource. We're using intrinsic functions again to avoid hardcoding the image bucket name.
If we wanted to have a lambda that would upload to this S3 bucket, then we would need to add the permissions for it:
provider:
  # ...
  environment:
    # ...other environment variables
    S3_BUCKET_NAME: ${self:service}-${opt:stage, self:provider.stage}-images
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:PutObject
            - s3:GetObject
          Resource: 'arn:aws:s3:::${self:provider.environment.S3_BUCKET_NAME}/*'
Our S3 bucket is now set up.
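As a sketch of how an upload lambda might use this bucket, here is a hypothetical helper (not part of the article's code) that builds the parameters for an AWS SDK putObject call and the public URL the frontend could read the image from afterwards. It assumes the S3_BUCKET_NAME environment variable configured above and the virtual-hosted–style URL scheme:

```javascript
'use strict';

// Hypothetical helper for an image-upload lambda. It only prepares data;
// the actual upload would be done by passing `params` to an S3 client.
const buildImageUpload = (key, body, contentType) => {
  const bucket = process.env.S3_BUCKET_NAME;
  return {
    // Parameters in the shape expected by S3's putObject operation.
    params: {
      Bucket: bucket,
      Key: key,
      Body: body,
      ContentType: contentType,
    },
    // Objects are publicly readable thanks to the PublicRead bucket policy.
    publicUrl: `https://${bucket}.s3.amazonaws.com/${key}`,
  };
};

module.exports = { buildImageUpload };
```

A handler would pass params to an S3 client's putObject call; the s3:PutObject permission in the IAM statement above is what allows that call to succeed.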
Bonus: Lambda Bundling
The project in this state is relatively simple and should take less than a couple of minutes to deploy. With time, however, it will probably grow larger, and the lambdas may start to require some external dependencies. The deployment will become slower, and lambda deployments will contain more files than are really necessary. At that point, it will be a good idea to introduce lambda bundling.
serverless-esbuild is a plugin that utilizes esbuild to bundle and minify your lambda code. It's almost zero-config and works out of the box, with support for both TypeScript and JavaScript code.
To start using it, install it first:
npm install --save-dev serverless-esbuild
Now add it to the plugins array:
plugins:
  - serverless-esbuild
  - serverless-appsync-plugin
Finally, configure it to both bundle and minify your lambdas:
custom:
  esbuild:
    bundle: true
    minify: true
  appSync:
    # ... appSync config
That's it. Your lambdas will now be bundled and minified on every deploy.
Conclusion
This is the end of our three-part series on migrating an Amplify backend to Serverless Framework. We hope you enjoyed the journey! Even if you're not migrating from Amplify, these guides should help you configure various services such as AppSync and DynamoDB in Serverless Framework. Don't forget that the entire source code for this project is up on GitHub. Should you need any help, though, with either Amplify or Serverless Framework, please do not hesitate to drop us a line!