AWS Categories: Serverless and Containers
Red Hat OpenShift Service on AWS
What is it?
Red Hat OpenShift Service on AWS provides an integrated experience to use OpenShift. If you are already familiar with OpenShift, you can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. With Red Hat OpenShift Service on AWS, you can use the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to build secure and scalable applications faster. Red Hat OpenShift Service on AWS comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat.
Red Hat OpenShift Service on AWS makes it easier for you to focus on deploying applications and accelerating innovation by moving the cluster lifecycle management to Red Hat and AWS. With Red Hat OpenShift Service on AWS, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.
Red Hat OpenShift Service on AWS (ROSA) is in limited preview at this time. Customers can register interest at: https://pages.awscloud.com/ROSA_Preview.html
- Clear path to running in the cloud: Red Hat OpenShift Service on AWS delivers the production-ready OpenShift that many enterprises already use on-premises today, simplifying the ability to shift workloads to the AWS public cloud as business needs change.
- Deliver high-quality applications faster: Remove barriers to development and build high-quality applications faster with self-service provisioning, automatic security enforcement, and consistent deployment. Accelerate change iterations with automated development pipelines, templates, and performance monitoring.
- Flexible, cost-efficient pricing: Scale per your business needs and pay as you go with flexible pricing with an on-demand hourly or annual billing model.
Amazon Elastic Container Registry (ECR) Public
What is it?
Amazon Elastic Container Registry (ECR) is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and high-performance architecture, allowing you to reliably deploy images for your container applications. You can share container software privately within your organization or publicly worldwide for anyone to discover and download.
ECR is available for use globally; see details on the AWS Regions Table.
- NEW: Public container image and artifact gallery: You can discover and use container software that vendors, open-source projects, and community developers share publicly in the Amazon ECR public gallery. Popular base images such as operating systems, AWS-published images, Kubernetes add-ons, and files such as Helm charts can be found in the gallery.
- Team and public collaboration: Amazon ECR supports the ability to define and organize repositories in your registry using namespaces. This allows you to organize your repositories based on your team’s existing workflows. You can set which API actions another user may perform on your repository (e.g., create, list, describe, delete, and get) through resource-level policies, allowing you to easily share your repositories with different users and AWS accounts, or publicly with anyone in the world.
- Reduce your effort with a fully managed registry: Amazon Elastic Container Registry eliminates the need to operate and scale the infrastructure required to power your container registry. There is no software to install and manage or infrastructure to scale. Just push your container images to Amazon ECR and pull the images using any container management tool when you need to deploy.
- Securely share and download container images: Amazon Elastic Container Registry transfers your container images over HTTPS and automatically encrypts your images at rest. You can configure policies to manage permissions and control access to your images using AWS Identity and Access Management (IAM) users and roles without having to manage credentials directly on your EC2 instances.
- Provide fast and highly available access: Amazon Elastic Container Registry has a highly scalable, redundant, and durable architecture. Your container images are highly available and accessible, allowing you to reliably deploy new containers for your applications. You can reliably distribute public container images as well as related files such as Helm charts and policy configurations for use by any developer. ECR automatically replicates container software to multiple AWS Regions to reduce download times and improve availability.
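The resource-level sharing described above is expressed as an ordinary IAM-style policy document attached to a repository. A minimal sketch in Python of a pull-only policy (the account ID is a placeholder, and the step of attaching the document via ECR's SetRepositoryPolicy API is assumed, not shown):

```python
import json

# Sketch of a resource-level repository policy granting another AWS
# account read-only (pull) access to a private ECR repository.
# The account ID below is a placeholder, not a real principal.
pull_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountPull",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
        }
    ],
}

# Render the JSON document that would be attached to the repository.
print(json.dumps(pull_policy, indent=2))
```

The same document shape can be reused to grant broader actions (create, list, describe, delete) to other users or accounts.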
Amazon Elastic Container Service (ECS) Anywhere
What is it?
Amazon Elastic Container Service (ECS) Anywhere is a capability in Amazon ECS that enables customers to easily run and manage container-based applications on-premises, including on virtual machines (VMs), bare metal servers, and other customer-managed infrastructure.
With this announcement, customers will now be able to use ECS on any compute infrastructure, whether in AWS regions, AWS Local Zones, AWS Wavelength, AWS Outposts, or in any on-premises environment, without installing or operating container orchestration software.
Amazon ECS Anywhere is planned to be available in all standard regions where Amazon ECS is available.
- Use ECS as a common tool to deploy “anywhere”: ECS Anywhere offers customers a single container orchestration platform for a consistent tooling and deployment experience across AWS and on-premises environments, now including customer-managed infrastructure. With ECS Anywhere, you get the same powerful simplicity of the ECS API, cluster management, monitoring, and tooling for containers running anywhere.
- Run containers on customer-managed infrastructure to meet specific requirements: ECS Anywhere enables customers to run workloads on-premises on their own infrastructure for reasons such as regulatory, latency, security, and data residency requirements.
- Leverage the simplicity of ECS while making use of existing capital investments: ECS Anywhere allows customers to utilize their on-premises investments as needed to run containerized applications. Additionally, some customers are looking to use their on-premises infrastructure as base capacity while bursting into AWS during peaks or as their business grows. Over time, as they retire their on-premises hardware, they can shift more of their compute to AWS until they have fully migrated.
- Fully managed cloud-based control plane: No need to run, update, or maintain container orchestrators on-premises.
- Consistent tooling and governance: Use the same tools and APIs for all container-based applications regardless of operating environment.
- Manage your hybrid footprint: Run applications in on-premises environments and easily expand to cloud when you’re ready.
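The consistency described above comes from reusing standard ECS task definitions for on-premises workloads. A hedged sketch in Python of a minimal task definition aimed at customer-managed capacity (the "EXTERNAL" compatibility value and the image name are illustrative assumptions, following the RegisterTaskDefinition API shape):

```python
import json

# Minimal ECS task definition targeting customer-managed (ECS Anywhere)
# infrastructure. Field names follow the RegisterTaskDefinition request
# shape; the container image is a placeholder.
task_definition = {
    "family": "on-prem-web",
    "requiresCompatibilities": ["EXTERNAL"],  # run on external instances
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",  # placeholder image
            "memory": 256,
            "essential": True,
        }
    ],
}

# Render the JSON that would be registered with the ECS control plane.
print(json.dumps(task_definition, indent=2))
```

The same definition could be scheduled on AWS-managed capacity by swapping the compatibility value, which is the point: one artifact, any environment.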
Amazon EKS Anywhere
What is it?
Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises, including on your own virtual machines (VMs) and bare metal servers. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support.
EKS Anywhere creates clusters based on Amazon EKS Distro, the same Kubernetes distribution used by EKS for clusters on AWS. EKS Anywhere enables you to automate cluster management, reduce support costs, and eliminate the redundant effort of using multiple tools for operating Kubernetes clusters. EKS Anywhere is fully supported by AWS. In addition, you can leverage the EKS console to view all your Kubernetes clusters, running anywhere.
As an installable software offering, EKS Anywhere can run anywhere, including on-premises.
- Train models in the cloud and run inference on premises: With EKS Anywhere, you can get the best of both worlds: train your ML model in the cloud using AWS managed services, then use the trained model in your on-premises setup.
- Workload migration (on-premises to cloud): With EKS Anywhere, you can have the same EKS tooling on-premises, and this consistency provides a quicker on-ramp of your Kubernetes-based workloads to the cloud.
- Application modernization: EKS Anywhere empowers you to finally address the modernization of your applications, removing the heavy lifting of keeping up with upstream Kubernetes and security patches, so you can focus on the business value.
- Data sovereignty: Some large data sets cannot, or will not soon, leave the data center due to legal requirements concerning the location of the data. EKS Anywhere helps you move the stateless part of the application to the cloud while keeping the data in place.
- Bursting: Seasonal workloads can require a lot of compute (5x to 10x more than the baseline) for days or weeks. Being able to burst into the cloud provides this temporary capacity. With EKS Anywhere, you can now manage your workloads across on-premises and the cloud consistently and cost-effectively.
- Simplify and automate Kubernetes management: EKS Anywhere provides you with consistent Kubernetes management tooling optimized to simplify cluster installation with default configurations for OS, container registry, logging, monitoring, networking, and storage.
- Create consistent clusters: Amazon EKS Anywhere uses EKS Distro, the same Kubernetes distribution deployed by Amazon EKS, allowing you to easily create clusters consistent with Amazon EKS best practices. EKS Anywhere eliminates the fragmented collection of vendor support agreements and tools required to install and operate Kubernetes clusters on-premises.
- Deliver a more reliable Kubernetes environment: EKS Anywhere gives you a Kubernetes environment on-premises that is easier to support. EKS Anywhere helps you integrate Kubernetes with existing infrastructure, keep open-source software up to date and patched, and maintain business continuity with cluster backups and recovery.
AWS Proton
What is it?
AWS Proton is the first fully managed application deployment service for container and serverless applications. Platform teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.
Proton enables platform teams to give developers an easy way to deploy their code using containers and serverless technologies, using the management tools, governance, and visibility needed to ensure consistent standards and best practices.
During preview: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1. Global region availability planned for GA.
- Streamlined management: Platform teams use AWS Proton to manage and enforce a consistent set of standards for compute, networking, continuous integration/continuous delivery (CI/CD), and security and monitoring in modern container and serverless environments. With Proton, you can see what was deployed and who deployed it. You can automate in-place infrastructure updates when you update your templates.
- Managed developer self-service: AWS Proton enables platform teams to offer a curated self-service interface for developers, using the familiar experience of the AWS Management Console or AWS Command Line Interface (AWS CLI). Using approved stacks, authorized developers in your organization are able to use Proton to create and deploy a new production infrastructure service for their container and serverless applications.
- Infrastructure as code (IaC) adoption: AWS Proton uses infrastructure as code (IaC) to define application stacks and configure resources. It integrates with popular AWS and third-party CI/CD and observability tools, offering a flexible approach to application management. Proton makes it easy to provide your developers with a curated set of building blocks they can use to accelerate the pace of business innovation.
- Set guardrails: AWS Proton enables your developers to safely adopt and deploy applications using approved stacks that you manage. It delivers the right balance of control and flexibility to ensure developers can continue rapid innovation.
- Increase developer productivity: AWS Proton lets you adopt new technologies without slowing your developers down. It gives them infrastructure provisioning and code deployment in a single interface, allowing developers to focus on their code.
- Enforce best practices: When you adopt a new feature or best practice, AWS Proton helps you update out-of-date applications with a single click. With Proton, you can ensure consistent architecture across your organization.
Resources: Website | What’s New Post
AWS Lambda Container Image Support & 1ms billing granularity
What is it?
AWS Lambda supports packaging and deploying functions as container images, making it easy for customers to build Lambda-based applications by using familiar container image tooling, workflows, and dependencies. Customers also benefit from the operational simplicity, automatic scaling with sub-second startup times, high availability, native integrations with 140 AWS services, and pay-for-use model offered by AWS Lambda. Enterprise customers can use a consistent set of tools with both their Lambda and containerized applications for central governance requirements such as security scanning and image signing. Customers can create their container deployment images by starting with either AWS Lambda provided base images or by using one of their preferred community or private enterprise images.
Container Image Support for AWS Lambda and 1ms billing granularity for AWS Lambda are available in all regions where AWS Lambda is available, except for regions in China.
- Build cross-platform applications with both containers and AWS Lambda
- Large applications, or applications relying on large dependencies, such as machine learning, analytics, or data intensive apps.
- Customers who want to run serverless applications but have standardized on container tooling within their organizations.
- Leverage familiar container tooling and workflows: Leverage the flexibility and familiarity of container tooling, and the agility and operational simplicity of AWS Lambda to be more agile when building applications.
- Get the flexibility of containers and agility of AWS Lambda: When invoked, functions deployed as container images are executed as-is, with sub-second automatic scaling. You benefit from high availability, only pay for what you use and can take advantage of 140 native service integrations.
- Build and deploy large workloads to AWS Lambda: With container images of up to 10GB, you can easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads.
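Functions packaged as container images still expose the familiar handler interface. A minimal sketch of a Python handler that could be baked into an image built from a Lambda-provided base image (the event shape here is invented for illustration, not a specific AWS event source format):

```python
import json

def handler(event, context):
    # Echo back a caller-supplied name with a greeting. The "name" key
    # is an assumption for this sketch, not a standard event field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler can be invoked directly for a quick check before
# the image is built and pushed to a registry such as Amazon ECR.
if __name__ == "__main__":
    print(handler({"name": "Lambda"}, None))
```

In a container image, this file and its dependencies would be copied in and the image's entry point configured to point at `handler`; Lambda then invokes it exactly as it would a zip-packaged function.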
Amazon EKS Add-ons
What is it?
Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Amazon EKS helps you provide highly available and secure clusters and automates key tasks such as patching, node provisioning, and updates.
NEW! Add-ons – Add-ons are common operational software which extend the operational functionality of Kubernetes. You can use EKS to install and keep this software up to date. When you start an Amazon EKS cluster, you can select the add-ons that you would like to run in the cluster, including Kubernetes tools for observability, networking, autoscaling, and AWS service integrations.
Amazon EKS is generally available in all AWS public regions as of November 2020. Support in the new Osaka region is coming soon.
- Hybrid Deployments
- Web Applications
- Big Data
- Machine Learning
- Batch Processing
- NEW! Service Integrations – AWS Controllers for Kubernetes (ACK) lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly available Kubernetes applications that utilize AWS services.
- NEW! Integrated Kubernetes Console – EKS provides an integrated console for Kubernetes clusters. Cluster operators and application developers can use EKS as a single place to organize, visualize, and troubleshoot their Kubernetes applications running on Amazon EKS. The EKS console is hosted by AWS and is available automatically for all EKS clusters.
Resources: Website | What’s New Post
Amazon EKS Distro
What is it?
Amazon EKS Distro is a Kubernetes distribution used by Amazon EKS to help create reliable and secure clusters. EKS Distro includes binaries and containers of open-source Kubernetes, etcd (the cluster configuration database), and networking and storage plugins, all tested for compatibility. You can deploy EKS Distro wherever your applications need to run.
You can deploy clusters and let AWS take care of testing and tracking Kubernetes updates, dependencies, and patches. Each EKS Distro release verifies new Kubernetes versions for compatibility. The source code, open-source tools, and settings are provided for reproducible builds. EKS Distro will provide extended support for Kubernetes, with builds of previous versions updated with the latest security patches. EKS Distro is available as open source on GitHub.
Amazon EKS Distro is open-source software that can be run anywhere.
- Get consistent Kubernetes builds: EKS Distro provides the same installable builds and code of open-source Kubernetes that are used by Amazon EKS. You can perform reproducible builds with the provided source code, tooling, and documentation.
- Run Kubernetes on any infrastructure: You can deploy EKS Distro on your own self-provisioned hardware infrastructure, including bare-metal servers or VMware vSphere virtual machines, or on Amazon EC2 instances.
- Have a more reliable and secure distribution: EKS Distro will provide extended support for Kubernetes versions in alignment with the Amazon EKS Version Lifecycle Policy, by updating builds of previous versions with the latest critical security patches.
Resources: Website | What’s New Post
Amazon Managed Workflows for Apache Airflow
What is it?
Amazon Managed Workflows for Apache Airflow is a managed orchestration service for Apache Airflow that makes it easy to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as “workflows.” With Managed Workflows, you can use the same open-source Airflow platform and Python language to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity up and down to meet your needs and is integrated with AWS security services to enable fast and secure access to data.
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), ca-central-1 (Canada Central), eu-north-1 (Stockholm), eu-west-1 (Ireland), eu-central-1 (Frankfurt), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), and ap-southeast-1 (Singapore)
- Enable Complex Workflows: Big data platforms often need complicated data pipelines that connect many internal and external services. To use this data, customers need to first build a workflow that defines the series of sequential tasks that prepare and process the data. Managed Workflows executes these workflows on a schedule or on demand.
- Coordinate Extract, Transform, and Load (ETL) Jobs: You can use Managed Workflows as an open-source alternative to orchestrate multiple ETL jobs involving a diverse set of technologies in an arbitrarily complex ETL workflow.
- Prepare Machine Learning (ML) Data: In order to enable machine learning, source data must be collected, processed, and normalized so that ML modeling systems like the fully managed service Amazon SageMaker can train on that data. Managed Workflows solves this problem by making it easier to stitch together the steps it takes to automate your ML pipeline.
- Deploy Airflow rapidly at scale: Get started in minutes from the AWS Management Console, CLI, AWS CloudFormation, or AWS SDK. Create an account and begin deploying Directed Acyclic Graphs (DAGs) to your Airflow environment immediately without reliance on development resources or provisioning infrastructure.
- Run Airflow with built-in security: With Managed Workflows, your data is secure by default as workloads run in your own isolated and secure cloud environment using Amazon’s Virtual Private Cloud (VPC), and data is automatically encrypted using AWS Key Management Service (KMS).
- Reduce operational costs: Managed Workflows is a managed service, removing the heavy lifting of running open-source Apache Airflow at scale. With Managed Workflows, you can reduce operational costs and engineering overhead while meeting the on-demand monitoring needs of end-to-end data pipeline orchestration.
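The "workflow" concept above is a directed acyclic graph (DAG) of tasks executed in dependency order. The sketch below illustrates that idea in plain Python using the standard library; it is NOT the Airflow API itself (in Managed Workflows you would author a real DAG file with apache-airflow's `DAG` and operator classes instead):

```python
from graphlib import TopologicalSorter

# A toy ETL pipeline: each task maps to the set of tasks it depends on
# (extract runs first, then transform, then load).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

def run(dag):
    # Resolve a dependency-respecting execution order, then "run" tasks.
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")
    return order

execution_order = run(dag)
```

An Airflow DAG file expresses the same structure declaratively, and the scheduler, retries, and monitoring are what the managed service then handles for you.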
Amazon MQ
What is it?
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.
Amazon MQ is available in 19 AWS Regions; see details on the AWS Regions Table.
- Migrate quickly: Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP 1.0 and 0-9-1, STOMP, MQTT, and WebSocket. This enables you to move from any message broker that uses these standards to Amazon MQ by simply updating the endpoints of your applications to connect to Amazon MQ.
- Offload operational responsibilities: Amazon MQ manages the administration and maintenance of message brokers and automatically provisions infrastructure for high availability. There is no need to provision hardware or install and maintain software, and Amazon MQ automatically manages tasks such as software upgrades, security updates, and failure detection and recovery.
- Durable messaging made easy: Amazon MQ is automatically provisioned for high availability and message durability when you create your message brokers. Amazon MQ stores messages redundantly across multiple Availability Zones (AZs) within an AWS Region and will continue to be available if a component or AZ fails.
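The industry-standard protocols listed above are what make endpoint-swap migration possible. As an illustration of how simple one of them is at the wire level, here is a sketch of a STOMP 1.2 SEND frame built by hand in Python (real applications would use a STOMP client library; the destination name is a placeholder):

```python
def stomp_send_frame(destination: str, body: str) -> bytes:
    # A STOMP frame is a command line, header lines, a blank line,
    # the body, and a terminating NUL byte.
    headers = [
        f"destination:{destination}",
        "content-type:text/plain",
        f"content-length:{len(body.encode())}",
    ]
    return ("SEND\n" + "\n".join(headers) + "\n\n" + body).encode() + b"\x00"

# Placeholder queue name for illustration.
frame = stomp_send_frame("/queue/orders", "hello")
print(frame)
```

Because the protocol (not the broker implementation) defines this format, an application speaking STOMP to an existing broker can point at an Amazon MQ endpoint without code changes.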