AWS SageMaker

Amazon SageMaker is a fully managed service that lets data scientists and developers quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker includes modules that can be used together or independently to develop, train, and deploy your machine learning models.
Build:
Amazon SageMaker makes it simple to build ML models and get them ready for training by giving you everything you need to quickly connect to your training data and to select and optimize the best algorithm and framework for your application. Amazon SageMaker includes hosted Jupyter notebooks that make it easy to explore and visualize your training data stored in Amazon S3. You can connect directly to data in S3, or use AWS Glue to move data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3 for analysis in your notebook.
To make choosing an algorithm easier, Amazon SageMaker comes pre-installed with the 10 most popular machine learning algorithms, tuned to deliver up to 10x the performance of running these algorithms elsewhere. Amazon SageMaker is also pre-configured to run TensorFlow and Apache MXNet, two of the most popular open-source frameworks, and you can bring your own framework if you choose.
Train:
You can start training your model with a single click in the Amazon SageMaker console. Amazon SageMaker manages all of the underlying infrastructure for you and scales easily, so you can train models at petabyte scale. To make the training process faster and simpler, Amazon SageMaker can automatically tune your model to achieve the highest possible accuracy.
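To see what automatic model tuning does conceptually, here is a minimal, self-contained sketch of a random hyperparameter search in plain Python. The search space and the scoring function are invented for illustration; in practice, SageMaker launches real training jobs and compares their validation metrics for you.

```python
import random

def score(learning_rate, depth):
    # Stand-in for a validation metric that a training job would report.
    # This toy function peaks near learning_rate=0.1 and depth=6.
    return -abs(learning_rate - 0.1) - abs(depth - 6) * 0.01

def random_search(n_trials, seed=42):
    """Try random hyperparameter combinations and keep the best one."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {"learning_rate": rng.uniform(0.001, 0.3),
                  "depth": rng.randint(2, 10)}
        trial_score = score(**params)
        if best is None or trial_score > best[0]:
            best = (trial_score, params)
    return best[1]

best_params = random_search(50)
print(best_params)
```

SageMaker's tuner uses smarter strategies than pure random search (such as Bayesian optimization), but the loop above captures the basic idea: propose hyperparameters, measure, keep the best.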
Deploy:
After your model has been trained and tuned, Amazon SageMaker makes it simple to deploy it in production so you can start generating predictions on new data (a process called inference). To provide high performance and high availability, Amazon SageMaker runs your model on an auto-scaling cluster of Amazon EC2 instances spread across multiple Availability Zones. Amazon SageMaker also includes built-in A/B testing capabilities that let you test several variants of your model in order to identify the one that gives the best results.
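The weighted traffic split behind A/B testing can be sketched in a few lines. This toy router (the variant names and weights are hypothetical) only illustrates the idea of sending a fraction of requests to each model variant; on SageMaker, the endpoint performs this split for you according to the weights you assign to each production variant.

```python
import random

def route(variants, rng):
    """Pick a variant name with probability proportional to its weight."""
    total = sum(variants.values())
    r = rng.uniform(0, total)
    cumulative = 0.0
    for name, weight in variants.items():
        cumulative += weight
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

rng = random.Random(0)
variants = {"model-a": 0.9, "model-b": 0.1}  # hypothetical 90/10 split
counts = {"model-a": 0, "model-b": 0}
for _ in range(10_000):
    counts[route(variants, rng)] += 1
print(counts)  # roughly 9,000 requests to model-a, 1,000 to model-b
```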
By handling the labor-intensive aspects of machine learning for you, Amazon SageMaker lets you develop, train, and deploy machine learning models quickly and easily.
Overview of the ML Lifecycle:
The quantity and availability of data, together with more affordable computing options, is one of the primary forces behind new developments and applications in ML. Because ML has demonstrated its ability to solve problems in a number of fields that were previously intractable with traditional big data and analytics methodologies, demand for data scientists and ML practitioners is rising rapidly. At a very high level, the ML lifecycle is made up of many different components; however, building an ML model generally involves the following steps:

1. Data cleansing and preparation (feature engineering)
2. Model training and tuning
3. Model evaluation
4. Model deployment (or batch transform)
During data preparation, data is loaded, manipulated, and transformed into the features the ML model requires. Writing the scripts that transform the data is typically an iterative process, and fast feedback loops are key to making quick progress. Because the full dataset is usually not needed when testing feature engineering scripts, you can use the local mode feature of SageMaker Processing: it lets you run locally and iterate on the code using a smaller dataset. When the final code is ready, it is submitted as a remote processing job that runs on SageMaker-managed instances against the full dataset.
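The iterate-locally workflow can be illustrated with a tiny feature engineering function that is first tested against an in-memory sample; the same code would later be handed to a processing job running against the full dataset. The column name and the scaling rule below are hypothetical, chosen only to make the sketch concrete.

```python
import csv
import io

def engineer_features(rows):
    """Turn raw records into model features (min-max scale the 'age' column)."""
    ages = [float(r["age"]) for r in rows]
    lo, hi = min(ages), max(ages)
    return [{"age_scaled": (float(r["age"]) - lo) / (hi - lo)} for r in rows]

# A tiny in-memory sample standing in for a small slice of the S3 dataset.
sample = io.StringIO("age\n20\n30\n40\n")
rows = list(csv.DictReader(sample))
features = engineer_features(rows)
print(features)  # → [{'age_scaled': 0.0}, {'age_scaled': 0.5}, {'age_scaled': 1.0}]
```

Once the transform behaves correctly on the sample, the identical function can be packaged into the processing script that SageMaker runs remotely; only the input source changes.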
As an organization's ML maturity rises, you can use Amazon SageMaker Pipelines to connect these phases into more sophisticated ML workflows that process, train, and evaluate ML models. SageMaker Pipelines is a fully managed service for automating the different elements of the ML workflow, including data loading, data transformation, model training and tuning, and model deployment. Until recently, you could build and test your scripts locally but had to test your ML pipelines in the cloud, which made iterating on the design and structure of ML pipelines time-consuming and expensive.
SageMaker Pipelines
Thanks to SageMaker Pipelines' newly added local mode capability, you can now test and iterate on your ML pipelines the same way you test and iterate on your processing and training scripts: run them locally against a small sample of data to verify their syntax and functionality.
To scale machine learning (ML) across your organization, SageMaker Pipelines applies continuous integration and continuous deployment (CI/CD) principles to ML. These principles include maintaining parity between development and production environments, version control, on-demand testing, and end-to-end automation. DevOps professionals know that reusable components and automated testing boost productivity and quality, and that these benefits translate into a quicker return on investment for your business goals. MLOps practitioners can now realize the same advantages by using SageMaker Pipelines to automate the training, testing, and deployment of ML models. With local mode, you can iterate much more quickly when developing pipeline-ready scripts.
SageMaker Pipelines, which builds a Directed Acyclic Graph (DAG) of orchestrated workflow steps, supports many activities that are part of the ML lifecycle. In local mode, the following steps are supported:
• Processing job steps – A simplified, managed experience on SageMaker to run data processing workloads, such as feature engineering, data validation, model evaluation, and model interpretation
• Training job steps – An iterative process that teaches a model to make predictions by presenting examples from a training dataset
• Hyperparameter tuning jobs – An automated way to evaluate and select the hyperparameters that produce the most accurate model
• Conditional run steps – A step that provides a conditional run of branches in a pipeline
• Model step – Using Create Model arguments, this step can create a model for use in transform steps or later deployment as an endpoint
• Transform job steps – A batch transform job that generates predictions from large datasets, and runs inference when a persistent endpoint isn’t needed
• Fail steps – A step that stops a pipeline run and marks the run as failed
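Because SageMaker Pipelines orchestrates these steps as a Directed Acyclic Graph, the run order falls out of the dependencies between steps. As a rough sketch (the step names and dependencies below are hypothetical), Python's standard-library graphlib shows how a topological ordering of such a DAG is computed:

```python
from graphlib import TopologicalSorter

# Each step maps to the set of steps it depends on.
dag = {
    "process": set(),           # feature engineering, no dependencies
    "train": {"process"},       # needs the processed data
    "evaluate": {"train"},      # needs the trained model
    "condition": {"evaluate"},  # branches on the evaluation metric
    "register": {"condition"},  # creates the model if the metric passes
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['process', 'train', 'evaluate', 'condition', 'register']
```

In a real pipeline the dependencies are inferred from how each step consumes another step's outputs, and independent branches can run in parallel; the ordering principle is the same.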
Until recently, the local mode feature of SageMaker Processing and SageMaker Training let you test your processing and training scripts locally before running them on the full dataset with SageMaker-managed resources. With SageMaker Pipelines' new local mode functionality, ML practitioners can now iterate on their ML pipelines with the same workflow, connecting the various ML steps. Once the pipeline is ready for production, only a few lines of code need to change to run it on SageMaker-managed resources. This shortens pipeline runs during development, resulting in faster development cycles and lower costs for SageMaker-managed resources.
Pricing:
Pay-as-you-go:
For the vast majority of its cloud services, AWS offers a pay-as-you-go pricing model. There are no lengthy contracts or complicated licensing requirements; you pay only for the specific services you use, much as you pay for utilities like water and electricity. There are no hidden fees or additional charges after you stop using a service.
Savings Plan:
Savings Plans is a flexible pricing model offering lower prices compared to On-Demand pricing, in exchange for a specific usage commitment (measured in $/hour) for a one- or three-year period. AWS offers three types of Savings Plans: Compute Savings Plans, EC2 Instance Savings Plans, and Amazon SageMaker Savings Plans. Compute Savings Plans apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate. EC2 Instance Savings Plans apply to EC2 usage, and Amazon SageMaker Savings Plans apply to Amazon SageMaker usage. You can easily sign up for a 1- or 3-year term Savings Plan in AWS Cost Explorer and manage your plans by taking advantage of recommendations, performance reporting, and budget alerts.
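As a back-of-the-envelope sketch of how a usage commitment trades a lower rate for a fixed spend, the hourly rate and discount below are invented for illustration and are not real AWS prices:

```python
def annual_cost(hourly_rate, hours_per_year=8760):
    """Cost of running a workload around the clock for one year."""
    return hourly_rate * hours_per_year

on_demand = annual_cost(2.00)          # hypothetical $2.00/hr on-demand rate
savings_plan = annual_cost(2.00 * 0.7) # hypothetical 30% Savings Plan discount

print(on_demand)                 # annual on-demand spend
print(savings_plan)              # annual committed spend
print(on_demand - savings_plan)  # what the commitment saves per year
```

The trade-off is that the committed $/hour is billed whether or not the capacity is used, so commitments make sense for steady, predictable usage.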
As organizations and IT leaders look to accelerate the adoption of machine learning (ML), it is becoming increasingly important to understand spending and cost allocation for your ML environment in order to satisfy organizational requirements. Without proper cost management and governance, your ML spending may lead to unexpected charges on your monthly AWS bill. Amazon SageMaker, a fully managed ML platform, lets enterprise customers create cost allocation policies and gain better visibility into the precise costs and usage of their teams, business units, products, and other entities.
Cost allocation on AWS is a three-step process:
1. Attach cost allocation tags to your resources.
2. Activate your tags in the Cost allocation tags section of the AWS Billing console.
3. Use the tags to track and filter for cost allocation reporting.
The AWS Billing console’s Cost allocation tags section lists your newly created and attached tags under User-defined cost allocation tags. Newly created tags may take up to 24 hours to appear. You must then activate them so that AWS begins tracking these tags for your resources. It usually takes another 24 to 48 hours after a tag is activated for it to appear in Cost Explorer. To confirm that your tags are working, search for a new tag in the tags filter in Cost Explorer; if it appears, you can use it for cost allocation reporting and group your results by tag keys.
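Step 3, grouping costs by tag key, can be sketched in plain Python. The line items and tag values below are hypothetical; in practice, Cost Explorer or the Cost and Usage Report performs this grouping for you once the tags are active.

```python
from collections import defaultdict

# Hypothetical billing line items, each carrying a cost and its resource tags.
line_items = [
    {"cost": 12.50, "tags": {"team": "nlp"}},
    {"cost": 4.25,  "tags": {"team": "vision"}},
    {"cost": 7.75,  "tags": {"team": "nlp"}},
]

def cost_by_tag(items, key):
    """Total cost per value of the given tag key; untagged items grouped apart."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(key, "untagged")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "team"))  # → {'nlp': 20.25, 'vision': 4.25}
```

Grouping untagged spend under its own bucket, as above, also mirrors a common governance practice: a large "untagged" total is a signal that tagging policies are not being enforced.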

Amazon SageMaker is available in the US East (N. Virginia & Ohio), EU (Ireland) and US West (Oregon) AWS regions.
Competitors:
Looking at the supported software frameworks, GCP ML Engine offers less flexibility than Amazon SageMaker for creating, training, and deploying machine learning models.
Unlike Amazon SageMaker, Google’s ML Engine does not include Jupyter notebooks; to use Jupyter notebooks on Google Cloud Platform, you must use Datalab. Data exploration, processing, and transformation are handled by separate GCP services: Datalab enables data exploration and processing, Dataprep explores and prepares raw data for processing and analysis, and Dataflow deploys batch and streaming data processing pipelines.
Cool Storage:
Cool storage is designed for data that is stored for long periods of time and rarely accessed. It is typically used for database and file backups.
AWS offers cool storage through Amazon S3 storage classes. There are two classes available for cool storage:
Amazon S3 Standard-Infrequent Access (S3 Standard-IA): For data accessed less frequently but requires rapid access when needed.
Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA): Offers the same service as S3 Standard-IA, but stores data in a single Availability Zone. It costs 20% less, which makes it ideal if you want a lower-cost option and are not concerned about reduced availability and resilience.
Azure offers cool storage through Azure Blob Storage access tiers. There is only one relevant tier for cool storage:
Azure Blob Storage Cool: Optimized for storing infrequently accessed data for at least 30 days.
Support Plans:
AWS has four available support plans split between free and premium tiers. Premium support is divided across three tiers: Developer, Business, and Enterprise.
Pricing starts at the greater of $29/month or 3% of monthly AWS usage and scales upward to over $15,000/month. Business and Enterprise pricing is calculated as a percentage of AWS usage, with the rate decreasing across brackets, as seen below:
• 10% of the first $150k
• 7% from $150k to $500k
• 5% from $500k to $1 million
• 3% from $1 million+
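The bracketed percentage calculation above can be written out directly. The bracket boundaries and rates come from the list; the $800,000 usage figure is just an example:

```python
# (upper bound of bracket in $, rate applied within that bracket)
BRACKETS = [
    (150_000, 0.10),
    (500_000, 0.07),
    (1_000_000, 0.05),
    (float("inf"), 0.03),
]

def support_fee(monthly_usage):
    """Marginal-rate fee: each slice of usage is billed at its bracket's rate."""
    fee, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if monthly_usage > lower:
            fee += (min(monthly_usage, upper) - lower) * rate
        lower = upper
    return fee

# $800k usage: 10% of 150k + 7% of 350k + 5% of 300k = $54,500
print(support_fee(800_000))  # → 54500.0
```

Note that, like income tax brackets, only the usage falling inside each bracket is billed at that bracket's rate, so the effective percentage declines as usage grows.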
Microsoft offers five Azure support plans: Basic, Developer, Standard, Professional Direct, and Premier.
The Basic, Developer, and Standard support plans are role-based, with pricing ranging from free to $100/month per user. Each level increase adds additional layers of support, including:
• More support types
• More communication channels
• Faster response times
• General architecture support
Conclusion:
In this blog, I’ve briefly covered AWS SageMaker and compared it with its competitors in terms of pros and cons.



References:

https://aws.amazon.com/blogs/machine-learning/set-up-enterprise-level-cost-allocation-for-ml-environments-and-workloads-using-resource-tagging-in-amazon-sagemaker/
https://aws.amazon.com/blogs/machine-learning/new-features-for-amazon-sagemaker-pipelines-and-the-amazon-sagemaker-sdk/
https://www.coursera.org/articles/aws-vs-azure-vs-google-cloud
https://www.c-sharpcorner.com/article/aws-vs-azure-vs-google-cloud-comparative-analysis-of-cloud-platforms-for-machi/
US Bureau of Labor Statistics, “Software Developers, Quality Assurance Analysts, and Testers: Occupational Outlook Handbook,” https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm. Accessed June 29, 2022.
Glassdoor, “How Much Does an AWS Cloud Developer Make,” https://www.glassdoor.com/Salaries/aws-cloud-developer-salary-SRCH_KO0,19.htm. Accessed July 1, 2022.
https://kinsta.com/blog/aws-vs-azure/