AWS Fargate vs. Google Cloud Run
One trend in hosting web and back-end applications is to use containers (e.g. Docker) and to abstract away from the physical or even virtual machines. Usually one deploys a containerised application on some sort of container management platform. This container management/orchestration platform aims to reduce the burden of managing machines and infrastructure, so that engineers can achieve more with less effort and focus on functionality. The application engineers no longer deal with the servers on which their applications run, hence the buzzword serverless.
AWS Fargate and Google Cloud Run are two such platforms for deploying containerised applications. We deployed a Python Flask app to both by following the official tutorials ([1], [2]), and in this blog post we share our comparison and opinions.
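For reference, the app we deployed was essentially a hello-world Flask service along the lines of the sketch below; the route and the port handling are our own simplification, not a verbatim copy of the tutorials.

```python
# Minimal Flask hello-world, close to what we deployed (simplified sketch).
import os

from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello from a container!"


if __name__ == "__main__":
    # Cloud Run injects the listening port via the PORT environment variable;
    # elsewhere (e.g. locally or on Fargate) we fall back to 8080.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```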
Note that Perelik Soft is not affiliated in any way with either of the companies; the opinions here are our own. Also, since the technologies develop constantly, there is no guarantee that the points presented here will still be valid in the future.
Google Cloud Run
Deploying the hello-world containerised Flask app with Google Cloud Run was indeed very quick. Deploying the Docker image/container through the gcloud CLI was straightforward, and once the container is deployed you directly get a URL for invoking the service.
Load balancing and HTTPS come out of the box. Auto scaling is also configured by default: the service scales from 0 to 1000 container instances, where an instance may use up to 4 GB of memory and up to 4 vCPUs [5]. You cannot cap the resources spent on the project globally, so it is probably a good idea to limit the maximum number of container instances.
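As an illustration, here is a hedged sketch of capping the instance count with the Cloud Run Python client library (google-cloud-run); the project, region and service names are placeholders, and the same setting can of course be changed from the web UI or the gcloud CLI.

```python
# Hedged sketch: limit the maximum number of Cloud Run instances for a service.
# The project, region and service names below are placeholders.
from google.cloud import run_v2

client = run_v2.ServicesClient()
name = "projects/my-project/locations/europe-west1/services/hello-flask"

service = client.get_service(name=name)
# Cap the instance count so a traffic spike cannot scale the service
# (and the bill) without bound.
service.template.scaling.max_instance_count = 10

operation = client.update_service(service=service)
operation.result()  # wait for the new revision to roll out
```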
Regarding documentation, we found the developer guides good, though sometimes slightly more implicit than we would have liked.
You can interact with the platform through the web UI or with shell commands using the gcloud command line interface. There is also support for infrastructure-as-code tools (e.g. Terraform [11]).
We found that the main menu panel in the web UI gives a good overview of, and access to, the different services.
In order to create new resources you need to create a project in which the resources will reside. You can easily switch between projects. We liked this feature because it gives explicit and more granular control over the resources. This comes in handy when hosting multiple projects, for example for budgeting and billing: one gets a clear overview of the costs of a specific project. Deleting all resources for a given project also becomes a trivial task.
We found the region management somewhat more implicit than in AWS, although you can set a default region through the gcloud CLI or through the web UI [6].
AWS Fargate
With Fargate it was clear from the start that one needs more architecture to put the code into production. For example, there is no out-of-the-box load balancer, so you need to provision and configure one yourself. Furthermore, there is no automatic SSL termination, so if you wish to use HTTPS you need to manually include a component for this in the architecture, such as an API Gateway, and configure it appropriately.
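Just to illustrate the kind of components you end up provisioning yourself, here is a hedged boto3 sketch of one possible setup (not the one from the tutorial): an application load balancer terminating TLS in front of the tasks. All IDs, names and the certificate ARN are placeholders.

```python
# Hedged sketch: an ALB + HTTPS listener in front of Fargate tasks.
# Subnets, security group, VPC and certificate ARN are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

alb = elbv2.create_load_balancer(
    Name="flask-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # two public subnets
    SecurityGroups=["sg-0aaaabbbbccccdddd"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

target_group = elbv2.create_target_group(
    Name="flask-tasks",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",  # Fargate tasks register by IP address
    HealthCheckPath="/",
)["TargetGroups"][0]

# TLS terminates here at the listener, using a certificate from ACM.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:123456789012:certificate/example"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroupArn"]}],
)
```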
There is also more manual effort required for setting up networking, e.g. NAT gateways, public and private subnets, and redundancy across availability zones.
Because you need to manually configure more components, there is also more effort in setting up the correct access rules through security groups and IAM roles.
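To give a flavour of those access rules, here is a hedged boto3 sketch of a security group that only lets the load balancer reach the tasks; the VPC ID and the load balancer's security group ID are placeholders.

```python
# Hedged sketch: a security group for the Fargate tasks that accepts
# traffic only from the load balancer's security group (IDs are placeholders).
import boto3

ec2 = boto3.client("ec2")

task_sg = ec2.create_security_group(
    GroupName="flask-task-sg",
    Description="Allow traffic from the load balancer only",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=task_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        # Reference the load balancer's security group instead of an IP range.
        "UserIdGroupPairs": [{"GroupId": "sg-0aaaabbbbccccdddd"}],
    }],
)
```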
Having said all that, we found the AWS documentation really well written, unambiguous and helpful. There is more manual effort in configuring and plugging together the different components; however, from an engineering perspective we liked that this made it clearer what happens behind the scenes. Overall, we did not find the additional manual steps too complicated. We also liked that there are white papers and labs on architectural best practices [3].
Similar to GCP, you can interact with the platform through the web UI, with shell commands through the aws command line interface, or with an infrastructure-as-code tool. Amazon provides its own CloudFormation to manage infrastructure as code; however, there is support for other tools as well, e.g. Terraform [12].
You expose your containerised app as a service in a cluster. A service consists of task(s), which in turn consist of container(s). You can have multiple tasks per service [10], and you define the hardware resources at the task level: at the time of writing, up to 30 GB of memory and up to 4 vCPUs [8]. A task is a solid boundary, and the containers running in it cannot exceed the resources the task was created with [10]. By default no auto scaling is configured; however, you can easily set it up with minimum and maximum limits for the number of tasks, and you can also define an auto scaling policy based on CPU and memory utilisation metrics.
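To make this concrete, here is a hedged boto3 sketch of a Fargate task definition with task-level resources and a target-tracking scaling policy on the service's task count; the names, ARNs, image URI and the chosen values are placeholders/examples.

```python
# Hedged sketch: a Fargate task definition plus service auto scaling.
# Cluster/service names, the execution role ARN and the image URI are placeholders.
import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("application-autoscaling")

# CPU and memory are declared for the whole task; the containers inside
# cannot exceed this boundary.
ecs.register_task_definition(
    family="flask-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",      # 0.5 vCPU
    memory="1024",  # 1 GB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "flask",
        "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/flask-app:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Scale the service between 1 and 4 tasks, targeting 70% average CPU utilisation.
resource_id = "service/flask-cluster/flask-service"
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)
autoscaling.put_scaling_policy(
    PolicyName="flask-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```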
A minus for us was that it felt difficult to gain an overview of which resources are currently running for a specific project. You can gain some visibility with AWS Resource Groups (Tag Editor), but the project chooser in GCP felt more explicit and clear, and it was easier there to delete all resources for the example project.
If you use CloudFormation (infrastructure as code), you can have a stack for all resources and simply delete the stack; however, dependencies from objects outside the stack on objects inside the stack will hinder an easy deletion.
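For example, tearing down such a stack is a single call; a hedged boto3 sketch with a placeholder stack name:

```python
# Hedged sketch: delete a CloudFormation stack and wait for the teardown.
# Deletion can fail if resources outside the stack still depend on resources inside it.
import boto3

cloudformation = boto3.client("cloudformation")
cloudformation.delete_stack(StackName="flask-fargate-stack")
cloudformation.get_waiter("stack_delete_complete").wait(StackName="flask-fargate-stack")
```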
We liked the global, easy-to-see region chooser. It makes it more transparent where the resources are geographically located.
Conclusion:
Making the simple Flask app respond in “prod” on AWS Fargate felt closer to old-school hosting, where you need to provision components for the different hosting concerns. Google Cloud Run felt more like a platform where much of the provisioning happens behind the scenes for you.
Although the philosophies differ, we think both platforms offer a similar development experience in many areas. There are analogous tools on both sides; for example, both offer a command line interface (AWS CLI, gcloud CLI) and shell interaction through a browser (Cloud Shell, Cloud9).
Different projects and teams might lean more towards one or the other platform.
References:
- [1] https://cloud.google.com/run/docs/quickstarts/build-and-deploy#python
- [2] https://aws.amazon.com/getting-started/hands-on/build-modern-app-fargate-lambda-dynamodb-python/
- [3] https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&wa-lens-whitepapers.sort-order=desc
- [4] https://codelabs.developers.google.com/codelabs/cloud-run-django#0
- [5] https://cloud.google.com/run/quotas
- [6] https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region#:~:text=To%20change%20your%20default%20region,go%20to%20the%20Settings%20page.&text=From%20the%20Region%20drop%2Ddown,menu%2C%20select%20a%20default%20zone.
- [7] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html
- [8] https://docs.aws.amazon.com/AmazonECS/latest/userguide/service-quotas.html
- [9] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html
- [10] https://aws.amazon.com/blogs/containers/how-amazon-ecs-manages-cpu-and-memory-resources/
- [11] https://www.sethvargo.com/configuring-cloud-run-with-terraform/
- [12] https://learn.hashicorp.com/tutorials/terraform/aws-build
If you have any questions about the article or want to discuss the topic, do not hesitate to contact us at
— — —
We put a lot of effort into the content we create for our blog. We use multiple information sources, do our own analysis and always double-check what we have written. However, it is still possible that factual or other mistakes occur. If you choose to use what is written on our blog in your own business or personal activities, you do so at your own risk. Be aware that Perelik Soft Ltd. is not liable for any direct or indirect damages you may suffer in connection with the use of the content of our blog.
Author: Luben Alexandrov