AWS Categories: AI and ML
Amazon SageMaker Pipelines
What is it?
Amazon SageMaker Pipelines is the world’s first machine learning (ML) CI/CD service accessible to every developer and data scientist. SageMaker Pipelines brings CI/CD practices to ML, reducing the months of coding required to manually stitch together different code packages to just a few hours.
ML workflows are typically out of reach for all but the largest enterprises, because they are hard to build. To build ML workflows, you typically need to create hundreds of code packages for data preparation, model training, and model deployment, and stitch them together so they run as a sequence of steps. The process is tedious and error prone because you need to define the order of the steps while keeping track of dependencies between each step, making it slow and difficult to scale model production.
With just a few clicks in SageMaker Pipelines, you can create an automated machine learning workflow. SageMaker Pipelines takes care of all the heavy lifting involved with managing the dependencies between each step of the workflow and orchestrates them so you can scale to thousands of models in production and expand your use of machine learning across more lines of business.
Amazon SageMaker Pipelines is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
- Workflows are required for all machine learning applications, so Amazon SageMaker Pipelines can be used for all ML use cases.
- Compose and manage ML workflows: Amazon SageMaker Pipelines enables you to build an automated sequence of steps to move models from concept to production. You can build every step of the ML lifecycle with an easy-to-use Python interface for creating pipelines to develop and deploy models, automate the process through built-in CI/CD templates, and monitor the pipelines using SageMaker Studio. You can also manage the dependencies between each step, build the correct sequence, and execute the steps automatically, reducing months of coding to a few hours.
- Scale workflows to thousands of models: Amazon SageMaker Pipelines automatically tracks code, datasets, and model versions through each step of the machine learning lifecycle. This enables you to go back and replay model generation steps, troubleshoot problems, and reliably track the lineage of models at scale, across thousands of models in production.
- Track and access model versions in a model registry: You can have hundreds of machine learning workflows in your business, each with a different version of the same model, which makes tracking model versions tedious and time-consuming. To help you track versions, Amazon SageMaker Pipelines provides a central repository of trained models called a model registry. You can access the model registry through SageMaker Studio or programmatically through the Python SDK, making it easy to deploy the models you are responsible for across development and production.
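The dependency management that SageMaker Pipelines automates can be pictured as ordering workflow steps so that each one runs only after the steps it depends on. The sketch below is purely illustrative (it is not the SageMaker SDK); the step names are hypothetical, and the ordering uses Python's standard-library topological sorter.

```python
# Illustrative sketch (not the SageMaker SDK): ordering pipeline steps by
# their declared dependencies -- the bookkeeping SageMaker Pipelines automates.
from graphlib import TopologicalSorter

# Hypothetical ML workflow: each step maps to the steps it depends on.
steps = {
    "prepare_data": [],
    "train_model": ["prepare_data"],
    "evaluate_model": ["train_model"],
    "register_model": ["evaluate_model"],
    "deploy_model": ["register_model"],
}

# A valid execution order always runs a step after all of its dependencies.
order = list(TopologicalSorter(steps).static_order())
print(order)
# → ['prepare_data', 'train_model', 'evaluate_model', 'register_model', 'deploy_model']
```

With real pipelines the graph branches (e.g., parallel preprocessing steps feeding one training step), which is exactly the bookkeeping that becomes error prone to maintain by hand.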
Amazon SageMaker Data Wrangler
What is it?
SageMaker Data Wrangler takes the tedium out of preparing training data by allowing data scientists and ML engineers to analyze and prepare data for machine learning applications from a single interface. Instead of requiring complex queries to collect data from different sources, SageMaker Data Wrangler connects to data sources with just a few clicks. Its ready-to-use visualization templates and built-in data transforms streamline the process of cleaning, verifying, and exploring data so you can produce accurate ML models without writing a single line of code. Once your training data is prepared, you can automate data preparation and, through integration with SageMaker Pipelines, add it as a step into your ML workflow.
Amazon SageMaker Data Wrangler is available in all AWS Regions where SageMaker Studio is available. See details on the AWS Regions Table.
- Cleanse & Explore Your Data: Data scientists need to collect data in various formats from different sources, which requires creating complex queries and using import tools to load the data into a data preparation environment. The data selection tool in Amazon SageMaker Data Wrangler makes it easy to select and query data from one of several data sources. Once data is imported, you can view statistics and access a suite of built-in data transforms designed to reduce tedious tasks such as data cleansing and exploration.
- Visualize & Understand Your Data: SageMaker Data Wrangler provides a set of visualization templates, such as histograms, scatter plots, and box and whisker plots, so you can quickly detect outliers or extreme values within a data set without the need to write code. You can also use ML model report capabilities to gain an understanding of important columns in your data set, and proactively identify potential inconsistencies in the data preparation workflow.
- Enrich Your Data: Data scientists must use feature engineering to transform data into a format that can be used to build an accurate ML model. SageMaker Data Wrangler provides pre-configured data transformation tools so you can easily perform feature engineering. Within SageMaker Data Wrangler, you can also identify imbalance in datasets and spot potential bias in training data.
- Operationalize ML workflows faster: With a single visual interface, you can manage all steps of the data preparation workflow and quickly operationalize it into a production setting. Without manually sifting through and translating hundreds of lines of data preparation code, you can export your data preparation workflow to a notebook or code script to easily bring the workflow into production.
- Select and Query Data with a Few Clicks: Preparing high-quality training data often requires the creation of complex queries to collect data in various formats from different sources. With SageMaker Data Wrangler’s data selection tool, you can quickly select data from multiple data sources, such as Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon S3, and Amazon SageMaker Feature Store. You can write queries for data sources and import data directly into SageMaker from various file formats, such as CSV files, parquet files, and database tables.
- Easily Transform Data: Amazon SageMaker Data Wrangler offers a rich selection of pre-configured data transforms, such as convert column type, rename column, and delete column, so you can transform your data into formats that can be effectively used for ML models without writing a single line of code. You can convert a text field column into a numerical column with a single click, or author custom transforms in PySpark, SQL, and Pandas to provide flexibility across your organization.
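A one-click "convert a text field column into a numerical column" transform of the kind described above can be sketched in plain Python. This is an illustration of the concept, not Data Wrangler's implementation; the column name and data are hypothetical.

```python
# Illustrative sketch of a categorical-to-numeric transform, the kind of
# one-click conversion Data Wrangler provides. Not the Data Wrangler API.
def encode_column(rows, column):
    """Map each distinct string value in `column` to an integer code, in place."""
    codes = {}
    for row in rows:
        value = row[column]
        if value not in codes:
            codes[value] = len(codes)   # assign codes in order of first appearance
        row[column] = codes[value]
    return codes

rows = [{"city": "Seattle"}, {"city": "Boston"}, {"city": "Seattle"}]
mapping = encode_column(rows, "city")
print(rows)     # → [{'city': 0}, {'city': 1}, {'city': 0}]
print(mapping)  # → {'Seattle': 0, 'Boston': 1}
```

In Data Wrangler itself, the equivalent custom transform could be authored in Pandas, SQL, or PySpark, as the bullet above notes.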
Amazon DevOps Guru
What is it?
Amazon DevOps Guru is a machine learning (ML) powered DevOps service that gives you a simpler way to measure and improve an application’s operational performance and availability and reduce expensive downtime, with no machine learning expertise required.
Using machine learning models informed by years of operational expertise in building, scaling, and maintaining highly available applications at Amazon.com, DevOps Guru identifies behaviors that deviate from normal operating patterns. When DevOps Guru identifies a critical issue, it automatically alerts you with a summary of related anomalies, the likely root cause, and context on when and where the issue occurred. DevOps Guru also, when possible, provides prescriptive recommendations on how to remediate the issue.
Amazon DevOps Guru is available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).
- Operational audits: IT managers responsible for reliability of their applications can use DevOps Guru to get a quick summary of all the operationally significant events, identified and sorted by their severity. In the console, you can search for issues in specific applications, identify trends, and decide where developers should spend their time and resources.
- Proactive resource exhaustion planning: Build predictive alarming for exhaustible resources such as memory, CPU, and disk space with DevOps Guru. It forecasts when resource utilization will exceed the provisioned capacity and informs you by creating a notification in the dashboard, helping you avoid an impending outage.
- Predictive maintenance: Site reliability engineers can use DevOps Guru insights to prevent incidents before they occur. DevOps Guru flags medium- and low-severity findings that might not be critical but, if left alone, can worsen over time and affect the availability of your application. This helps you plan, prioritize, and avoid unforeseen downtime.
- Automatically detect operational issues: DevOps Guru continuously analyzes streams of disparate data and watches thousands of metrics to establish normal bounds for application behavior. It discovers and classifies resources like application metrics, logs, events, and traces in your account, automatically identifies deviations from normal activity, and surfaces high severity issues to quickly alert you of downtime.
- Resolve issues quickly with ML-powered insights: DevOps Guru helps reduce your issue resolution time and assists in root cause identification by correlating anomalies across multiple metrics and events. When an operational issue occurs, it generates insights with a summary of related anomalies, contextual information about the issue, and, when possible, actionable recommendations for remediation.
- Easily scale and maintain availability: As you migrate and adopt new AWS services, DevOps Guru automatically adapts to changing behavior and evolving system architecture. With DevOps Guru, you save time and effort otherwise spent on monitoring applications and manually updating static rules and alarms. In just a few clicks, DevOps Guru starts analyzing your AWS application activity.
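The "proactive resource exhaustion planning" idea above, forecasting when a growing metric will exceed provisioned capacity, can be sketched with a simple linear extrapolation. This is an illustration only, not DevOps Guru's actual forecasting model; the sample values are made up.

```python
# Illustrative sketch (not DevOps Guru's algorithm): forecast when a
# linearly growing metric, such as disk usage, will exceed capacity.
def hours_until_exhausted(samples, capacity):
    """Fit a least-squares line through hourly usage samples and return the
    number of hours from the last sample until usage crosses `capacity`
    (None if usage is not growing)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                      # flat or shrinking: no exhaustion forecast
    intercept = mean_y - slope * mean_x
    return (capacity - intercept) / slope - (n - 1)

# Disk usage grows ~5 GB per hour; 100 GB capacity, currently at 70 GB.
print(hours_until_exhausted([55, 60, 65, 70], capacity=100))  # → 6.0
```

A real system would also account for noise, seasonality, and alerting lead time, which is where the ML-based approach earns its keep.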
Amazon SageMaker Feature Store
What is it?
Amazon SageMaker Feature Store is a feature store for machine learning (ML) serving features both in real time and in batch. Using SageMaker Feature Store, you can store, discover, and share features so you don’t need to recreate the same features for different ML applications, saving months of development effort.
Your ML models use inputs called “features” to make predictions. For example, lot size could be a feature in a model that predicts housing prices. Features need to be available in large batches for training and also in real time for fast predictions. For example, in a housing price predictor model, users expect an immediate update as new listings become available. The quality of your predictions depends on keeping features consistent, but doing so typically requires months of coding and deep expertise to keep features consistent across training and inference environments.
Amazon SageMaker Feature Store provides a consistent set of features so you get the exact same features for training and inference, and you can easily share features across your organization which improves collaboration and eliminates rework.
Amazon SageMaker Feature Store is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
- Model features are required for all machine learning applications, so Amazon SageMaker Feature Store can be used for all ML use cases.
- Develop models faster: Amazon SageMaker Feature Store provides a central repository of features so they can be used for many applications across your organization. By discovering and reusing features that are already deployed, you spend less time on data preparation and feature computation and more time on innovation.
- Increase model accuracy: Accuracy of ML models can be increased by looking at model metadata such as the dataset used, model attributes, and hyperparameters. In addition to the actual features, Amazon SageMaker Feature Store stores metadata for each feature so you can understand its impact while building and training models.
- Track model lineage for compliance: With Amazon SageMaker Feature Store, you can track lineage of the feature generation process. The feature store maintains the data lineage for every feature providing the required information to understand how a feature was generated. This helps with addressing compliance requirements in regulated industries.
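The core feature-store idea, one write path feeding both a low-latency online lookup (latest value) and an offline historical read (for training sets), can be sketched in a few lines. This is an in-memory illustration of the concept, not the SageMaker Feature Store API; the feature group and record names are hypothetical.

```python
# Illustrative in-memory sketch of the feature-store concept (not the
# SageMaker API): the latest value serves online, real-time lookups, and
# the full history serves offline reads for training.
from collections import defaultdict

class FeatureGroup:
    def __init__(self):
        # record_id -> list of (event_time, features) entries
        self._history = defaultdict(list)

    def put(self, record_id, event_time, features):
        self._history[record_id].append((event_time, features))

    def get_online(self, record_id):
        """Latest features for low-latency inference."""
        return max(self._history[record_id])[1]

    def get_offline(self, record_id):
        """Full time-ordered history for building training sets."""
        return sorted(self._history[record_id])

listings = FeatureGroup()
listings.put("house-42", 1, {"lot_size": 5000, "price": 350_000})
listings.put("house-42", 2, {"lot_size": 5000, "price": 365_000})
print(listings.get_online("house-42"))  # → {'lot_size': 5000, 'price': 365000}
```

Because training reads and inference reads go through the same store, the two code paths cannot drift apart, which is the consistency guarantee described above.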
Distributed training on Amazon SageMaker
What is it?
Training models on large datasets can take hours, slowing down your ability to deploy your latest innovations into production. You can split large training datasets across multiple GPUs (data parallelism) but splitting data can take weeks of experimentation to do efficiently. Also, more advanced ML use cases may require large models. For example, models can have billions of parameters and be petabytes in size. As a result, the models are often too big to fit on a single GPU. You can split large models across multiple GPUs (model parallelism) but finding the best way to split up the model and adjust training code can take weeks and delay your time to market.
For customers using GPUs, Amazon SageMaker makes it faster to perform data parallelism and model parallelism. With minimal code changes, SageMaker helps split your data across multiple GPUs in a way that achieves near-linear scaling efficiency. SageMaker also helps split your model across multiple GPUs by automatically profiling and partitioning your model with fewer than 10 lines of code in your TensorFlow or PyTorch training script.
Distributed training is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
- Object Detection: For object detection, model training time is often a bottleneck, slowing data science teams down as they wait several days or weeks for results. SageMaker’s data parallelism library can help data science teams efficiently split training data and quickly scale to hundreds or even thousands of GPUs, reducing training time from days to minutes.
- Natural Language Processing: In natural language understanding, data scientists often improve model accuracy by increasing the number of layers and the size of the neural network which creates models with billions of parameters such as GPT-2, GPT-3, T5, and Megatron. Splitting model layers and operations across GPUs can take weeks, but the model parallelism library in SageMaker automatically analyzes and splits the model efficiently to enable data science teams to start training large models within minutes.
- Computer Vision: In computer vision, hardware constraints often force data scientists to pick batch sizes or input sizes that are smaller than they would prefer. For example, bigger inputs may improve model accuracy but may cause out-of-memory errors and poor performance with smaller batch sizes. SageMaker offers the flexibility to easily train models efficiently with lower batch sizes or train with bigger inputs by leveraging managed distributed training.
- Reduce training time: Amazon SageMaker reduces training time by 25% or more by making it easy to split training data across GPUs. For example, training Mask R-CNN on p3dn.24xlarge runs 25% faster on SageMaker compared to Horovod. The reduction in training time is possible because SageMaker manages the GPUs running in parallel to achieve optimal synchronization.
- Optimized for AWS: Using open-source tools for distributed training that are not optimized for AWS results in poor scaling efficiency. SageMaker’s data parallelism library provides communication algorithms that are designed to fully utilize the AWS network and infrastructure to achieve near-linear scaling efficiency. For example, BERT on p3dn.24xlarge instances achieves a scaling efficiency of 88% using SageMaker, or a 27% improvement over the same model using Horovod.
- Support for popular ML framework APIs: SageMaker enables you to reuse existing APIs for training without writing any custom SageMaker training code. SageMaker supports DistributedDataParallel (DDP) for PyTorch and Horovod for TensorFlow.
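The data-parallel pattern described above, each worker training on its own shard and then synchronizing by averaging gradients, can be shown with a toy sketch. This is an illustration of the technique, not SageMaker's distributed training library; the shard layout and gradient values are hypothetical.

```python
# Illustrative sketch of data parallelism (not SageMaker's library): each
# worker computes gradients on its shard of the data, then the gradients
# are averaged so every worker applies the same update.
def shard(data, workers):
    """Split the dataset round-robin across workers."""
    return [data[w::workers] for w in range(workers)]

def average_gradients(per_worker_grads):
    """Element-wise mean across workers, as an all-reduce would compute."""
    n = len(per_worker_grads)
    return [sum(g) / n for g in zip(*per_worker_grads)]

data = list(range(8))
shards = shard(data, workers=4)
print(shards)  # → [[0, 4], [1, 5], [2, 6], [3, 7]]

# Hypothetical per-worker gradients for a 3-parameter model.
grads = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]
print(average_gradients(grads))  # → [2.0, 2.0, 2.0]
```

The communication step (here, a naive average) is where the scaling-efficiency numbers above come from: SageMaker's library replaces it with algorithms tuned to the AWS network.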
Amazon CodeGuru updates
What is it?
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve your code quality and identify an application’s most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development and continuously monitor your application’s performance in production, with recommendations and visual clues on how to improve code quality, improve application performance, and reduce overall cost.
- Improve application performance: Amazon CodeGuru Profiler is always searching for application performance optimizations, identifying your most “expensive” lines of code and recommending ways to fix them to reduce CPU utilization, cut compute costs, and improve application performance.
- Detect deviation from AWS API and SDK best practices: Amazon CodeGuru Reviewer is trained using rule mining and supervised machine learning models that use a combination of logistic regression and neural networks to look at code changes intended to improve code quality and cross-reference them against documentation data.
- Catch code problems before they hit production: For code reviews, developers commit their code to GitHub, GitHub Enterprise, Bitbucket Cloud, and AWS CodeCommit and add CodeGuru Reviewer as one of the code reviewers, with no other changes to the normal development process. CodeGuru Reviewer analyzes existing code bases in the repository, identifies hard to find bugs and critical issues with high accuracy, provides intelligent suggestions on how to remediate them, and creates a baseline for successive code reviews.
- Fix Security Vulnerabilities: CodeGuru Reviewer Security Detector leverages machine learning and AWS’s years of security experience to improve your code security. It ensures that your code follows best practices for KMS, EC2 APIs, and common Java crypto and TLS/SSL libraries. When the security detector discovers an issue, a recommendation for remediation is provided along with an explanation for why the code improvement is suggested, thereby enabling security engineers to focus on architectural and application-specific security best practices.
- Continuous monitoring to proactively improve code quality: For every pull request initiated, CodeGuru Reviewer automatically analyzes the incremental code changes and posts recommendations directly on the pull request. Additionally, it supports full repository or code base scan for periodic code maintainability, and code due diligence initiatives to ensure that your code quality is consistent.
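To make the idea of automated review rules concrete, here is a toy static check in the spirit of what an automated reviewer flags. This is purely illustrative (CodeGuru uses trained ML models, not this rule): it uses Python's standard `ast` module to flag `open()` calls made outside a `with` block, a common resource-leak pattern.

```python
# Illustrative sketch of an automated review rule (not CodeGuru's models):
# flag open() calls that are not managed by a `with` statement.
import ast

def find_unmanaged_open(source):
    tree = ast.parse(source)
    # Record the call nodes that appear as `with` context expressions.
    managed = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.With):
            for item in node.items:
                managed.add(id(item.context_expr))
    # Any other open() call is a potential resource leak.
    issues = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "open"
                and id(node) not in managed):
            issues.append(node.lineno)
    return issues

code = "f = open('a.txt')\nwith open('b.txt') as g:\n    pass\n"
print(find_unmanaged_open(code))  # → [1]
```

A production reviewer posts such findings as pull-request comments with a suggested fix, which is the workflow the bullets above describe.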
Resources: Website | What’s new post
AWS for Industrial
What is it?
‘AWS for Industrial’ is a new go-to-market umbrella initiative comprised of new and existing services and solutions from AWS and our strategic partners built and packaged specifically for developers, engineers and operators at industrial sites. AWS solutions can include reference architectures, AWS CloudFormation templates, deployment guides, and Quick Starts to help customers speed deployment of their own applications. Amazon Panorama Appliance, Amazon Panorama Device SDK, Amazon Monitron, Amazon Lookout for Vision, and Amazon Lookout for Equipment join an existing suite of services including AWS IoT SiteWise, the AWS Snow Family, AWS Outposts, and Amazon Timestream to make it easy for customers to digitize, monitor and optimize their industrial operations.
Increasingly, industrial customers across asset-intensive industries such as manufacturing, energy, mining, transportation, and agriculture are leveraging new digital technologies to drive faster and better decisions. The ‘AWS for Industrial’ initiative simplifies the process of building and deploying innovative Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), analytics, and edge solutions to achieve step-change improvements in operational efficiency, quality, and agility. Industrial customers seek cloud and edge solutions to their business problems rather than a collection of individual services. With the capabilities added by our newly launched industrial services, the ‘AWS for Industrial’ initiative unites AWS offerings under a single go-to-market motion to meet market demand across the multiple industrial customer sub-segments.
For availability of AWS services relevant for industrial customers such as Amazon Panorama (Appliance and SDK), Amazon Monitron, Amazon Lookout for Vision, Amazon Lookout for Equipment, AWS IoT SiteWise, the AWS Snow Family (Snowball, Snowcone), AWS Outposts, and Amazon Timestream, see details on the AWS Regions Table.
- Engineering & Design: Modern product design requires sophisticated data storage, compute, and collaboration. With AWS and our extensive network of industrial partners, you can transform your engineering, design, and simulation efforts with the most comprehensive set of cloud solutions available today, while leveraging the highest level of security to protect your intellectual property
- Production & Asset Performance Management: Digital transformation enables industrial customers to maximize productivity and asset availability, and lower costs. To do this, industrial customers must liberate data from their legacy operational technology systems and leverage new tools in the cloud. With AWS and our network of leading industrial partners, you can transform your industrial operations with the most comprehensive and advanced set of cloud solutions available today, while taking advantage of security designed for the most sensitive industries.
- Supply Chain Management: As modern supply chains continue to expand, they also are becoming more complex and disparate — they require a unified view of data, while still being able to independently verify their transactions, such as production and transport updates. Solutions built using AWS services, such as Amazon Managed Blockchain, can provide the end-to-end visibility today’s supply chains need to track and trace their entire production process with unprecedented efficiency.
- Worker Safety & Productivity: Industrial companies need to empower their teams with the technology needed to keep the organization healthy, safe, and productive. With AWS and our extensive network of industrial partners, you can keep your staff safe by monitoring employee health to meet pandemic guidelines, reduce errors with digital job aids, automate manual workflows, enhance productivity, and reduce manual processing and documentation.
- Quality Management: Industrial customers are increasingly focused on improving quality to maintain brand reputation, satisfy their customers, and manage costs. AWS and our extensive network of partners can help you customize and automate quality inspection with fast, fully scalable computer vision solutions to improve accuracy, reduce cost, and maintain the quality bar that your customers expect.
Resources: Website | Industrial Blog
Amazon Lookout for Vision
What is it?
Amazon Lookout for Vision enables you to find visual defects in industrial products, accurately and at scale. It uses computer vision to identify missing components in an industrial product, damage to vehicles or structures, irregularities in production lines, and even minuscule defects in silicon wafers — or any other physical item where quality is important such as a missing capacitor on printed circuit boards.
Visual inspection of industrial processes typically involves manual inspection, which can be tedious and inconsistent. For example, an automobile door assembly line requires quality inspectors to identify scratches or discoloration on newly painted door panels to prevent shipment of defective products. Computer vision brings speed, consistency, and accuracy, but implementation can be complex and require teams of data scientists to build, deploy, and manage the machine learning models needed to identify defects.
With Amazon Lookout for Vision you can automate real-time visual inspection with computer vision for processes like quality control and defect assessment, with no machine learning expertise required. You can get started in minutes by providing as few as 30 images for the process you want to visually inspect, such as machine parts or manufactured products. Amazon Lookout for Vision then analyzes images from your cameras that monitor the process line, in real time, to quickly and accurately identify anomalies like dents, cracks, and scratches. It spots differences between the baseline images provided and the image feed from the process line and reports the presence of product defects. Reports are available in an easy-to-use dashboard in the AWS Management Console, so that you can take action quickly and reduce further defects, saving you time and money.
Amazon Lookout for Vision is available in us-east-2, us-west-2, us-east-1, eu-west-1, eu-central-1, ap-northeast-1, and ap-northeast-2.
- Detect part damage: With Amazon Lookout for Vision, customers can detect damage to a product’s surface quality, color, and shape. For example, you can detect dents, scratches, and poorly welded surfaces on an automotive door panel across the fabrication and assembly processes.
- Identify missing components: Amazon Lookout for Vision can identify missing assembly components based on the absence, presence, or placement and positioning of objects.
- Uncover process issues: Lookout for Vision can detect a defect that has a repeating pattern, which indicates a potential process issue. For example, you can detect repeated scruff marks on a nylon bobbin, which in combination with machine tag information can be used to identify an underlying process issue.
- Quickly and easily improve processes: Amazon Lookout for Vision gives you a fast and easy way to implement computer vision-based inspection in industrial processes, at scale. Provide as few as 30 baseline good images and Lookout for Vision will automatically build a model for you in minutes. You can then process images from IP cameras in batch or in real-time to quickly and accurately identify anomalies like dents, cracks and scratches.
- Increase production quality, fast: With Lookout for Vision you can reduce defects in production processes in real time. It identifies and reports visual anomalies in an easy-to-use dashboard so you can take action quickly to stop more defects from occurring, increasing production quality and reducing costs.
- Reduce operational costs: Lookout for Vision reports trends in your visual inspection data, such as identifying processes with the highest defect rate, or flagging recent variations in defects. This gives you the ability to determine whether to schedule maintenance on the process line or reroute production to another machine before costly, unplanned downtime occurs.
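The baseline-comparison idea above, flagging frames that deviate from defect-free reference images, can be sketched with a toy pixel-difference check. This is an illustration only, not Lookout for Vision's models; real anomaly detection is far more robust to lighting and alignment.

```python
# Illustrative sketch of baseline-vs-frame anomaly detection (not Lookout
# for Vision's models). Frames are flat lists of grayscale values (0-255).
def is_anomalous(frame, baseline, threshold=10.0):
    """Flag a frame whose mean absolute pixel deviation from the defect-free
    baseline exceeds `threshold`."""
    diff = sum(abs(p - b) for p, b in zip(frame, baseline)) / len(frame)
    return diff > threshold

baseline = [100] * 16                  # a defect-free reference patch
good = [102] * 16                      # small lighting variation: not flagged
scratched = [100] * 12 + [200] * 4     # a bright scratch across 4 pixels
print(is_anomalous(good, baseline))       # → False
print(is_anomalous(scratched, baseline))  # → True
```

Choosing the threshold is the crux: too low and normal lighting variation raises false alarms, too high and subtle defects slip through, which is why a trained model outperforms a fixed rule.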
Resources: Website | What’s new post
Amazon Monitron
What is it?
Amazon Monitron is an end-to-end system that detects abnormal machine behavior, so you can enable predictive maintenance and reduce lost productivity from unplanned machine downtime. Reliability managers can quickly deploy Monitron to easily track machine health for industrial equipment such as bearings, motors, gearboxes, and pumps without any development work or specialized training.
Amazon Monitron enables customers to start proactively monitoring their equipment in just a few hours, without any software development or specialized training. Monitron is a secure end-to-end system that includes sensors to capture vibration and temperature data, gateways to automatically transfer data to the AWS Cloud, ML-based software that analyzes the data for abnormal machine patterns, and a companion mobile app for simple system setup and immediate notifications of abnormal machine behavior.
Amazon Monitron is available in us-east-1 and will be available in additional regions soon. You can buy Amazon Monitron Starter Kits, Sensors and Gateways on amazon.com and Amazon Business and ship them to any location in the US, UK and EU.
- Enable predictive maintenance: With Amazon Monitron, you can enable predictive maintenance for your equipment. Predictive maintenance is the activity of monitoring and evaluating the condition of equipment, detecting developing faults and planning specific corrective maintenance activities at a time when it is most cost effective. Monitron detects developing faults and notifies the technicians about it, allowing them to plan and execute corrective measures at an optimal time.
- Monitor remotely: With Monitron, you can remotely monitor equipment at your site without having to take readings manually. Amazon Monitron Sensor wakes up periodically and captures readings. When Amazon Monitron notifies you of a developing fault, you can schedule a time to investigate and execute a repair before secondary damage occurs, saving you time and money.
- Track the condition of inaccessible equipment: Today’s safety standards require fixed guards to be mounted on rotating equipment to protect people from injury. Often, fixed guards restrict maintenance technicians’ access to equipment to perform condition monitoring checks. Monitron Sensors are wireless and small, so the condition of components in restricted areas can now be monitored safely.
- Easy to install, and easy to use: Monitron works right out of the box. Monitron Sensors and Gateways are easy to install and use so technicians can start monitoring equipment in less than an hour.
- Reduce unplanned downtime: Monitron detects abnormal machine conditions proactively with ML technology and industry recognized vibration ISO standards, and thereby helps reduce costly and unplanned downtime.
- Cost effective: Monitron offers a cost-effective way to start monitoring your equipment, with low upfront hardware investment and pay-as-you-go software.
- Continuously improving: Reliability managers and technicians can add feedback directly in the Monitron mobile app and benefit from continuously improving ML model performance. Monitron Sensors and Gateways are remotely updated over the air (OTA), providing system improvements over the life of your installation.
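A minimal version of the vibration-based condition check Monitron performs can be sketched as an RMS threshold, in the spirit of the ISO vibration severity zones mentioned above. This is an illustration only, not Monitron's models; the threshold value here is made up.

```python
# Illustrative sketch of condition monitoring (not Monitron's models):
# compute the RMS of a vibration sample and compare it to an alert
# threshold, in the spirit of ISO vibration severity zones.
import math

def vibration_rms(samples):
    """Root-mean-square of a velocity sample (e.g., mm/s)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def check_machine(samples, alert_rms=4.5):
    """Return 'alert' when RMS velocity exceeds the (hypothetical) threshold."""
    return "alert" if vibration_rms(samples) > alert_rms else "healthy"

print(check_machine([1.0, -1.2, 0.8, -0.9]))  # → healthy
print(check_machine([6.0, -5.5, 6.2, -5.8]))  # → alert
```

Monitron layers ML on top of such baselines so the alert level adapts to each machine's normal behavior rather than a single fixed number.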
Resources: Website | What’s new post
Amazon Lookout for Equipment
What is it?
Amazon Lookout for Equipment is an industrial equipment anomaly detection service that uses your machine data to detect abnormal equipment behavior automatically, so you can avoid unplanned downtime and optimize performance. Further, Amazon Lookout for Equipment selects the best machine learning (ML) model for the job by searching 28,000 algorithm and parameter combinations to define the best-fit analytics, making ML accessible and scalable to industrial customers across all industrial machinery.
Lookout for Equipment enables operators to automatically build custom ML models using their own historical, time-series machine data (temperature, vibration, rotation, pitch, RPMs, flow rates, and more) along with historical maintenance events. The service requires little or no ML expertise, making ML accessible and scalable across industrial facilities and fleets. You pay only for what you use; there are no minimum fees and no upfront commitments.
Amazon Lookout for Equipment is available in US West, eu-west-1, and ap-northeast-2.
- Scale anomaly detection: Amazon Lookout for Equipment automatically searches through up to 28,000 parameters to derive the optimal normal multivariate relationships between sensors within hours, versus the months of development traditionally required. The result is the ability to develop a custom ML model specific to each piece of equipment’s unique operating conditions, effectively across hundreds, if not thousands, of machines.
- Put advanced ML analytics in the hands of operators: Until now, machine learning has been leveraged almost exclusively by data scientists. With Amazon Lookout for Equipment, an operator or engineer can enable machine learning insights for abnormal equipment detection for uses such as predictive maintenance. Amazon Lookout for Equipment provides a user-friendly and workflow-agnostic approach to leveraging ML, so that an operator only needs to select the right inputs and provide labeled examples of failures to generate insights in hours.
- Integrate ML inference into your monitoring software: Industrial companies are constantly working to avoid unplanned downtime, improve operational efficiency, and get actionable real-time alerts. With Amazon Lookout for Equipment, you can run ML inference on real-time data to detect abnormal equipment behavior. The results can be integrated into your existing monitoring software or you can leverage AWS IoT SiteWise to get alerts and visualize real-time output.
- Automate the iterative steps of machine learning to enable access and scalability: Amazon Lookout for Equipment provides a user-friendly UI to put advanced ML analytics in the hands of operators. This service also automates time and resource intensive iterative machine learning steps to enable scale across equipment, assets and applications.
- Identify subtle issues earlier: Amazon Lookout for Equipment automatically identifies equipment anomalies by learning the healthy state and operational relationships between sensors on each asset. Lookout for Equipment can then pinpoint subtle changes in patterns and the highest contributing factor which enables operations to respond quickly with greater confidence.
- Best fit model for the application: Amazon Lookout for Equipment not only automates machine learning steps but searches through thousands of machine data feature combinations to select the best ML model for the application.
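The multivariate idea described above can be sketched in a few lines: learn the normal relationship between two sensors from healthy data, then flag readings whose residual from that relationship is large. This is only a toy linear illustration of the concept, not Lookout for Equipment's actual modeling, and every number below is invented.

```python
# Toy sketch of multivariate anomaly detection: learn the healthy
# relationship between two sensors, then flag readings that break it.
# All data and thresholds are invented for illustration.

def fit_linear(xs, ys):
    """Least-squares fit ys ~ a*xs + b over healthy training data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def flag_anomalies(xs, ys, a, b, threshold):
    """Flag points whose residual from the learned relation exceeds threshold."""
    return [abs(y - (a * x + b)) > threshold for x, y in zip(xs, ys)]

# Healthy period: vibration tracks roughly 2x temperature.
temp = [10.0, 12.0, 14.0, 16.0, 18.0]
vib = [20.1, 23.9, 28.0, 32.1, 35.9]
a, b = fit_linear(temp, vib)

# New readings: the last one breaks the learned temperature/vibration relation.
flags = flag_anomalies([11.0, 15.0, 13.0], [22.0, 30.0, 45.0], a, b, threshold=3.0)
print(flags)  # → [False, False, True]
```

The service automates this kind of relationship discovery across many sensors at once, which is what makes it practical at fleet scale.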
Resources: Website | What’s new post
What is it?
AWS Panorama is a managed service for building, deploying, and managing computer vision applications (Panorama applications) that can be deployed to edge devices (Panorama devices). The first Panorama device will be the AWS Panorama Appliance, a computer vision appliance, available in April 2021. The AWS Panorama Appliance Developer Kit will be available in limited quantity at re:Invent 2020 so application developers can build their apps ahead of AWS Panorama Appliance availability. The Panorama Appliance Developer Kit provides extra on-device logging and debugging to make it easier for developers to test and debug their computer vision applications.
AWS Panorama is a machine learning appliance and SDK, which allow you to bring computer vision (CV) to your on-premises cameras or on new Panorama enabled devices. This gives you the ability to make real-time decisions to improve your operations. With Panorama, you can use live video feeds to automate monitoring or visual inspection tasks, like evaluating manufacturing quality, finding bottlenecks in industrial processes, and assessing worker safety within your facilities.
Panorama is available in the us-east-1 (N. Virginia) and us-west-2 (Oregon) regions
- Reimagined retail insights: In retail environments, Panorama enables you to run multiple, simultaneous CV models using your existing onsite cameras. Applications for retail analytics, such as for people counting, heat mapping, and queue management, can help you get started quickly. By using the streamlined management capabilities that Panorama offers, you can easily scale your CV applications to include multiple process locations or stores.
- Workplace safety and social distance monitoring: Panorama allows you to monitor workplace safety, get notified immediately about any potential issues or unsafe situations, and take corrective action.
- Supply chain efficiency: In manufacturing and assembly environments, Panorama can help to provide critical input to supply chain operations by tracking throughput, recognizing bar codes or labels of parts or completed products, or monitoring individual workstations to measure productivity.
- Manufacturing quality control: Panorama can help improve product quality and decrease costs from manufacturing defects, by processing CV at the edge and notifying you immediately of any anomalies in production so you can take quick corrective action.
- Real-time visibility for fast decision making: You can analyze video feeds within milliseconds, enabling real-time visibility into operations and fast decision making with Panorama enabled devices or the Panorama Appliance.
- Easily add to your existing infrastructure: Plug AWS Panorama Appliance in, connect it to your network, and the device automatically identifies camera streams and starts interacting with your existing fleet of IP cameras. The Panorama Appliance also seamlessly works alongside your existing video management systems (VMS).
- Enable CV in limited connectivity environments: AWS Panorama devices run CV models directly on the device (at the edge), meaning you can get access to real-time predictions in remote and isolated places where cloud connectivity can be slow, expensive, or completely non-existent.
Resources: Website | What’s new post
What is it?
Amazon HealthLake is a HIPAA-eligible service that enables healthcare providers, health insurance companies, and pharmaceutical companies to store, transform, query, and analyze health data in a consistent fashion in the AWS Cloud at petabyte scale. Health data is frequently incomplete and inconsistent, and is often unstructured, with information contained in clinical notes, laboratory reports, insurance claims, medical images, recorded conversations, and time series data.
Amazon HealthLake removes the heavy lifting of organizing, indexing, and structuring patient information, to provide a complete view of each patient’s medical history in a secure, compliant, and auditable manner. It transforms unstructured data using specialized machine learning models, like natural language processing, to automatically understand and extract meaningful medical information from the data and provides powerful query and search capabilities. Organizations can use advanced analytics and ML services, such as Amazon QuickSight and Amazon SageMaker, to analyze and understand relationships, identify trends, and make predictions from the newly normalized and structured data.
us-east-1 (N. Virginia)
- Population health management: Amazon HealthLake helps healthcare organizations analyze population health trends, outcomes, and costs. This gives organizations the tools to identify the most appropriate intervention for a patient population and choose better care management options with ready-to-use Jupyter notebooks with pre-trained ML algorithms.
- Improving quality of care: Amazon HealthLake aids hospitals, health insurance companies, and life sciences organizations to close gaps in care, improve quality, and reduce cost by bringing together a complete view of a patient’s medical history. HealthLake provides a significant leap forward for these organizations by predicting disease onset and identifying patients requiring additional care.
- Streamlined data operations: Medical data, which takes many forms, from prescriptions to insurance claims to imaging, is difficult to ingest and make sense of. Amazon HealthLake removes the heavy lifting and reduces operational overhead using document classification, and natural language understanding such as text extraction, speech to text technologies, and medical comprehension capabilities to streamline data operations.
- Easily transform health data: Amazon HealthLake can automatically understand and extract meaningful medical information from raw, disparate data, such as prescriptions, procedures, and diagnoses, revolutionizing a process that was traditionally manual.
- Identify trends and make predictions: Healthcare organizations can store, transform, and prepare their patient health information to unlock novel insights. This gives healthcare organizations new tools to improve care and intervene more quickly to save lives and reduce costs.
- Support interoperable standards: Interoperability ensures that health data is shared in a consistent, compatible format across multiple applications. Amazon HealthLake creates a complete view of each patient’s medical history, and structures it in the Fast Healthcare Interoperability Resources (FHIR) standard format to facilitate the exchange of information.
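For context, the FHIR format HealthLake normalizes data into is plain JSON with standardized field names. Below is a minimal FHIR R4 `Patient` resource of the kind a HealthLake data store holds; the field names follow the FHIR standard, while the identifier and values are invented.

```python
# A minimal FHIR R4 Patient resource. Field names follow the FHIR
# standard; the id and demographic values are made up for illustration.
import json

patient = {
    "resourceType": "Patient",
    "id": "example-patient-1",  # hypothetical identifier
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1984-07-19",
}

# HealthLake exposes a FHIR REST API; in practice you would POST this
# JSON to the data store's FHIR endpoint using signed AWS requests.
print(json.dumps(patient, indent=2))
```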
Resources: Website | What’s New Post
Amazon SageMaker Edge Manager
What is it?
Amazon SageMaker Edge Manager provides model management for edge devices so you can optimize, secure, monitor, and maintain machine learning models on fleets of edge devices such as smart cameras, robots, personal computers, and mobile devices.
Amazon SageMaker Edge Manager makes it easy to manage ML models on edge devices. SageMaker Edge Manager uses SageMaker Neo to compile and optimize models for edge devices. Then, SageMaker Edge Manager packages the model with its runtime and credentials for deployment. You have the flexibility to use AWS IoT Greengrass or your own on-device deployment mechanism to deploy models to the edge. Once a model is deployed, SageMaker Edge Manager manages each model on each device by collecting metrics, sampling input/output data, and sending the data securely to your Amazon S3 buckets for monitoring, labeling, and retraining so you can continuously improve model quality. And, because SageMaker Edge Manager enables you to manage models separately from the rest of the application, you can update the model and the application independently, reducing costly downtime and service disruptions.
us-east-1, us-west-2, us-east-2, eu-west-1, eu-central-1, and ap-northeast-1, see details on the AWS Regions Table.
- Driver-assist dashcam: Connected vehicle solution providers use Amazon SageMaker Edge Manager to deploy and operate ML models on driver dashcams. The models help detect pedestrians and road hazards to improve the safety of both drivers and pedestrians.
- Theft detection: Amazon SageMaker Edge Manager is used by retailers to identify theft during checkout. Image detection models run on smart cameras at checkout counters and send alerts when the merchandise does not match the scanned barcode.
- Predictive maintenance: Amazon SageMaker Edge Manager runs predictive maintenance models on gateway servers at manufacturing facilities in order to predict which machines are at high risk of failure. When possible failure is detected, alerts are sent to staff so they can remediate the issue.
- Run ML models up to 28x faster: Amazon SageMaker Edge Manager automatically optimizes ML models for deployment on a wide variety of edge devices, including CPUs, GPUs, and embedded ML accelerators. SageMaker Edge Manager compiles your trained model into an executable that discovers and applies specific performance optimizations that will make your model run most efficiently on the target hardware platform.
- Improve model quality: Amazon SageMaker Edge Manager continuously monitors each model instance across your device fleet to detect when model quality declines. Declines in model quality can be caused by differences between the data used to make predictions and the data used to train the model, or by changes in the real world. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions.
- Easily integrate with device applications: Amazon SageMaker Edge Manager supports gRPC, an open-source remote procedure call, which allows you to integrate SageMaker Edge Manager into your existing edge applications through common programming languages, such as Android Java, C++, C#, and Python.
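The drift idea behind this monitoring can be illustrated with a toy statistic: compare recent model inputs against a baseline captured at training time and measure how far the mean has shifted. This is only a sketch of the concept — Edge Manager's own monitoring works from the sampled input/output data it uploads — and the numbers here are invented.

```python
# Toy data-drift check of the kind that motivates model monitoring:
# how far has the mean of recent inputs moved, in baseline standard
# deviations? All numbers are invented for illustration.
import statistics

def drift_score(baseline, recent):
    """Shift of the recent mean, measured in baseline standard deviations."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(recent) - mu) / sigma

baseline_inputs = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]  # captured at training time
recent_inputs = [6.4, 6.6, 6.5, 6.7]              # distribution has shifted

score = drift_score(baseline_inputs, recent_inputs)
print(score > 3.0)  # a large score suggests the model may need retraining
```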
Resources: External Website | What’s New Post
Amazon Lookout for Metrics
What is it?
Amazon Lookout for Metrics uses machine learning (ML) to detect anomalies in virtually any time series-driven business and operational metrics–such as revenue performance, purchase transactions, and customer acquisition and retention rates–with no ML experience required.
Amazon Lookout for Metrics automatically connects to popular databases and SaaS applications to continuously monitor the metrics you care about. When it finds anomalies, Amazon Lookout for Metrics immediately sends you alerts, groups anomalies that might be related to the same event, and helps you identify the root cause so that you can fix an issue or react quickly to opportunities. It also ranks anomalies in order of severity, so that you can focus on what matters most, and lets you tune the results by providing feedback based on your knowledge of your business, improving the accuracy of results over time.
Amazon Lookout for Metrics is in gated preview and will be available in five regions at launch: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1.
By metric category
- Customer Engagement: Ensure a seamless customer experience by detecting sudden changes in metrics across the customer journey such as during enrollment, login, and engagement.
- Operational: Proactively monitor metrics like latency, CPU utilization, and error rates to mitigate service interruptions.
- Sales: Quickly track changes in win rate, pipeline coverage, and average deal size to evaluate business growth opportunities.
- Marketing: With actionable marketing analytics, quickly detect how your campaigns, partners, and ad platform metrics affect your overall traffic volume, revenue, churn, and conversion.
- Retail: Gain insights into category-level revenue and margin by monitoring inventory levels, item pricing, promotional traffic, and conversion.
- Gaming: Boost player engagement and optimize gaming revenue by monitoring changes in new users, active users, level-completion rate, in-app purchases, and retention rate.
- Ad Tech: Optimize ad spend by detecting spikes or dips in metrics like reach, impressions, views, and ad clicks.
- Telecom: Reduce customer frustration by detecting unexpected changes in network performance metrics, like tracking traffic channel (TCH), evolved packet core (EPC), and Erlang.
- Highly accurate anomaly detection: Detects anomalies in metrics with high accuracy using ML technology and over 20 years of experience at Amazon.
- Actionable results at scale: Helps you identify the root cause by grouping related anomalies together and ranking them in the order of severity, so that you can diagnose issues or identify opportunities quickly.
- Integration with AWS databases and SaaS applications: Connects with commonly used AWS databases and SaaS applications. Sends alerts through multiple channels, and automatically triggers pre-defined custom actions, such as filing trouble tickets when anomalies are detected.
- Tunable results: Uses your feedback on detected anomalies to automatically tune the results and improve accuracy over time.
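As a toy illustration of the concept only (not the service's actual algorithm, which applies ML models tuned per metric), a simple z-score rule can flag points that deviate sharply from a metric's typical level. All numbers below are invented.

```python
# Toy anomaly flagging on a business metric: mark points whose z-score
# exceeds a threshold. A real detector would use more robust statistics
# or learned models; data is invented for illustration.
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` std devs from the mean."""
    mu = statistics.fmean(series)
    sigma = statistics.stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > threshold]

daily_revenue = [100, 102, 98, 101, 99, 100, 40, 103]  # day 6 dips sharply

flags = zscore_anomalies(daily_revenue, threshold=2.0)
print(flags)  # → [6]
```

Note that the outlier itself inflates the standard deviation, which is one reason production systems prefer robust or model-based baselines over a plain z-score.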
Resources: External Website | What’s New Post
Amazon SageMaker Debugger
What is it?
With Amazon SageMaker Debugger you can detect bottlenecks and training problems in real-time so you can correct problems before the model is deployed to production. SageMaker Debugger collects, analyzes, and generates alerts, reports, and visualizations providing insights for you to act and train models faster.
Amazon SageMaker Debugger captures model metrics, monitors system resources, and profiles ML framework operations during ML model training, without requiring additional code. All metrics are captured in real time so you can correct issues during training, which speeds up training time and enables you to get higher quality models to production much faster.
Amazon SageMaker Debugger is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table
- Consolidate multiple tools: Amazon SageMaker Debugger provides a single, unified tool that data scientists can use to collect training data across different parameters in real-time, gain visibility into the effects of different parameter values, and receive alerts for the appropriate action to be taken.
- Visualize training data: Amazon SageMaker Debugger renders visualizations of training data and helps you visualize tensors in your network to determine their state at each point in the training process. This is useful in scenarios such as determining stale or saturated data or mapping effects of specific parameters on the model.
- Explain ML models better: Amazon SageMaker Debugger saves the state of ML models at periodic intervals and enables you to explain the model predictions in real-time during training or offline after the training is completed. This helps you to interpret better and explain the predictions the trained model makes. With SageMaker Debugger, you can explain the internal mechanics of an ML model and eliminate the black box aspects of predictions, leading to better business outcomes.
- Generate ML models faster: Amazon SageMaker Debugger helps generate ML models faster by providing you with full visibility and control during the training process, to quickly troubleshoot and take corrective measures. With SageMaker Debugger, you can take immediate action if anomalies such as overfitting or overtraining are detected, resulting in faster model generation for deployment. With the insights provided by SageMaker Debugger, you can reduce the time required to troubleshoot models from weeks to days, with no additional code.
- Optimize system resources with no additional code: Using the profiling capability of Amazon SageMaker Debugger, you can automatically monitor system resources such as CPU, GPU, network, and memory to get a complete view of current resource utilization. Additionally, the profiler recommends reallocating resources if they are being underutilized or if there are bottlenecks, helping you to optimize resources effectively. You can profile your training job on the SageMaker Studio visual interface at any time.
- Make ML training transparent: Amazon SageMaker Debugger makes the training process transparent so you can explain if the ML model is progressively learning correct parameter values such as gradients to yield the desired results. Insights into the training data are provided by automatically capturing real-time metrics such as weights and tensors during training to help improve model accuracy. Debugging is made easy with a visual interface to analyze the debug data and take corrective actions specific to the models that are being trained.
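One of the rules this transparency enables can be sketched locally: flag training as stalled when the loss has stopped improving over a window of recent steps — the same idea as the built-in `loss_not_decreasing` rule in the SageMaker Python SDK. The training history below is invented.

```python
# Toy version of a "loss not decreasing" training rule: alert when none
# of the last `window` steps improved on the best loss seen before them.
# The loss history is invented for illustration.

def loss_not_decreasing(losses, window=3):
    """True if none of the last `window` steps improved on the prior best."""
    if len(losses) <= window:
        return False
    best_before = min(losses[:-window])
    return all(l >= best_before for l in losses[-window:])

history = [0.90, 0.55, 0.40, 0.41, 0.42, 0.43]  # plateaued after step 2
print(loss_not_decreasing(history))  # → True
```

In SageMaker itself, such rules run as managed processes alongside the training job and can trigger alerts or stop the job automatically.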
Resources: Website | What’s New Post | Detailed Blog Post
Amazon SageMaker Clarify
What is it?
Amazon SageMaker Clarify provides data to help you make your machine learning (ML) models fair and transparent by detecting bias so you can take corrective action.
Amazon SageMaker Clarify detects bias across the entire ML workflow— including during data preparation, after training, and ongoing over time—and also includes tools to explain ML models and their predictions. You can skip the tedious processes of implementing third-party tools and improve fairness and transparency to improve trust with your customers, all within SageMaker. SageMaker Clarify also provides transparency through model explainability reports that you can share with customers, business leaders, or auditors, so all stakeholders can see how and why models make predictions.
Amazon SageMaker Clarify is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
- Regulatory Compliance: Regulations such as the Equal Credit Opportunity Act (ECOA) or the Fair Housing Act often require companies to remain unbiased and to be able to explain financial decisions. Amazon SageMaker Clarify can help flag any potential bias present in the initial data or in the financial model after training and can also help explain which data caused an ML model to make a particular financial decision.
- Internal Reporting & Compliance: Data science teams are often required to justify or explain ML models to internal stakeholders, such as internal auditors or executives who would like more transparency. Amazon SageMaker can provide data science teams with a graph of feature importance when requested and can quantify potential bias in an ML model or its data to provide the information needed to support internal presentations or mandates.
- Operational Excellence: Machine learning is often applied in operational scenarios, such as predictive maintenance or supply chain operations. However, data science teams may want insight into why a given machine needs to be repaired, or why an inventory model is recommending surplus stock in a particular location. Amazon SageMaker can detail the causes for individual predictions, helping data science teams to work with other internal teams to improve operations.
- Find imbalances in data: Amazon SageMaker Clarify is integrated with Amazon SageMaker Data Wrangler, making it simple to identify bias during data preparation. You specify attributes of interest, such as gender or age, and Amazon SageMaker Clarify runs a set of algorithms to detect the presence of bias in those attributes. After the algorithm runs, SageMaker Clarify provides a visual report with a description of the sources and severity of possible bias so that you can take steps to mitigate.
- Check your trained model for bias: Ensure that predictions are fair by checking trained models for imbalances, such as more frequent denial of services to one protected class than another. Amazon SageMaker Clarify is integrated with SageMaker Experiments so that after a model has been trained, you can identify attributes you would like to check for bias, such as income or marital status.
- Monitor your model for bias: While your initial data or model may not have been biased, changes in the world may cause bias to develop over time. For example, a substantial change in mortgage rates could cause a home loan application model to become biased. Amazon SageMaker Clarify is integrated with SageMaker Model Monitor, enabling you to configure alerting systems like Amazon CloudWatch to notify you if your model begins to develop bias.
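One bias measure of the kind Clarify reports can be computed by hand: disparate impact, the ratio of positive-outcome rates between a disadvantaged and an advantaged group. This is a standalone illustration with invented data, not Clarify's implementation.

```python
# Disparate impact: ratio of positive-outcome rates between two groups.
# Group labels and outcomes below are invented for illustration.

def disparate_impact(outcomes_a, outcomes_b):
    """Ratio of positive rates: group A (disadvantaged) vs group B."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a / rate_b

# 1 = loan approved, 0 = denied
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% approval rate
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approval rate

di = disparate_impact(group_a, group_b)
print(round(di, 2))  # → 0.29; a common rule of thumb flags ratios below 0.8
```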
Amazon SageMaker JumpStart
What is it?
Amazon SageMaker JumpStart helps you get started with machine learning quickly and easily. SageMaker JumpStart provides a set of solutions for the most common use cases that can be deployed readily with just a few clicks. The solutions are fully customizable and showcase the use of AWS CloudFormation templates and reference architectures so you can accelerate your ML journey. SageMaker JumpStart also supports one-click deployment and fine-tuning of more than 150 popular open-source models for modalities such as natural language processing, object detection, and image classification.
Amazon SageMaker JumpStart is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
- There are 15+ pre-built solutions for common ML use cases including predictive maintenance, demand forecasting, fraud detection, and personalized recommendations.
- Accelerate time to deploy over 150 open-source models: Amazon SageMaker JumpStart provides one-click deployable ML models and algorithms from popular model zoos, including PyTorch Hub and TensorFlow Hub. These cover image classification, object detection, and language modeling use cases, minimizing the time to deploy ML models originating outside of SageMaker.
- 15+ pre-built solutions for common ML use cases: With Amazon SageMaker JumpStart, you can move quickly from concept to production with pre-built solutions that include all of the components needed to deploy an ML application in SageMaker with a few clicks, including an AWS CloudFormation template, reference architecture, and getting-started content. Solutions are fully customizable, so you can easily modify them to fit your specific use case and dataset. These end-to-end solutions cover common use cases, from predictive maintenance and demand forecasting to fraud detection and personalized recommendations.
- Get started with just a few clicks: Amazon SageMaker JumpStart provides notebooks, blogs, and video tutorials designed to help you when you want to learn something new or encounter roadblocks. Content is easily accessible within Amazon SageMaker Studio, enabling you to get started with ML faster.
Resources: Website | What’s New Post