Meta, the parent company of Facebook, announced this week that it will expand its use of Amazon Web Services (AWS) for privacy, reliability, and scale in the cloud.
Under the terms of the agreement, Meta will expand its use of AWS storage, compute, database, and security services to scale research and development, drive operational efficiency, and facilitate third-party collaborations. Meta will use AWS to support acquisitions of companies that are already powered by AWS, and will use AWS's compute services to accelerate artificial intelligence (AI) research and development for its Meta AI group.
Both companies will also collaborate to help enterprises use PyTorch, an open-source machine-learning library, on AWS to bring deep-learning models from research into production faster and more easily.
Kathrin Renz, Vice President of Business Development and Industries at Amazon Web Services, stated that Meta and AWS have been expanding their collaboration over the past five years, and that AWS will continue to support Meta in research and development, drive innovation, and collaborate with third parties and the open-source community. She added that customers can count on Meta and AWS working together on PyTorch to make it easier to build, train, and deploy deep-learning models on AWS.
AWS and Meta will also collaborate to assist machine-learning researchers and developers. The companies stated that they will further optimize PyTorch performance and improve its integration with core managed services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon SageMaker. SageMaker is an AWS service that allows data scientists and developers to build, train, and deploy machine-learning models in the cloud or at the edge.
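To give a sense of what that integration already looks like in practice, here is a minimal sketch of launching a PyTorch training job with the SageMaker Python SDK; the training script, S3 bucket, IAM role, and instance type are illustrative placeholders rather than details from the announcement.

# Minimal sketch: running a PyTorch training job on Amazon SageMaker.
# The script name, S3 paths, role ARN, and instance type are placeholders.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = PyTorch(
    entry_point="train.py",          # your own PyTorch training script
    role=role,
    framework_version="1.10",        # a PyTorch version supported by SageMaker
    py_version="py38",
    instance_count=1,
    instance_type="ml.p3.2xlarge",   # GPU instance for deep-learning workloads
    hyperparameters={"epochs": 5, "batch-size": 64},
    sagemaker_session=session,
)

# Launch a managed training job; the channel name and S3 prefix are examples.
estimator.fit({"training": "s3://my-example-bucket/pytorch-training-data"})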
The companies have enabled PyTorch on AWS so that developers can build large-scale deep-learning models for natural language processing and computer vision. Both companies will offer native tools to improve PyTorch's performance, explainability, and inference cost, and will continue to improve TorchServe, the native PyTorch serving engine, to make it easier to deploy PyTorch models to production. Through these open-source contributions, AWS and Meta aim to help organizations bring large-scale deep-learning models from research to production more quickly and easily, with optimized performance on AWS.
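On the deployment side, a similarly minimal sketch shows how a trained PyTorch model can be hosted on a SageMaker real-time endpoint, whose PyTorch serving containers are built on TorchServe; the model artifact path, inference script, role, and instance type below are assumed placeholders, not values from the announcement.

# Minimal sketch: deploying a trained PyTorch model to a SageMaker endpoint.
# SageMaker's PyTorch inference containers use TorchServe as the serving engine.
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://my-example-bucket/model/model.tar.gz",  # placeholder artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",      # script providing model_fn/predict_fn hooks
    framework_version="1.10",
    py_version="py38",
)

# Create a managed HTTPS endpoint for real-time inference.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Send a request; the payload format depends entirely on the model and handler.
result = predictor.predict({"inputs": "example payload"})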
Jason Kalich, Vice President of Production Engineering at Meta, said in a statement: "We are excited to expand our strategic relationship with AWS to help us innovate faster and expand the scale and scope of our research-and-development work. The global reach and reliability of AWS will allow us to continue to deliver innovative experiences for the millions of people who use Meta products and services around the globe, and for customers running PyTorch on AWS."