Adopting MLOps Reduced IT Costs and Improved Efficiency for A Global Shipping & Logistics Company Prescience Decision Solutions March 19, 2024


The Fortune 500 company provides transportation, supply chain and logistics services, and related digital solutions. With 700 container vessels, operations in 130 countries and over 7 million square feet of warehouse capacity in around 450 sites, it is one of the world’s largest companies in this sector.


The global Third-Party Logistics (3PL) market is facing strong headwinds: unpredictable spikes in demand, constraints on warehouse space, rising business expenses, government regulations, a strong mandate to adopt sustainable business practices, and talent shortages. 3PL companies also face complex challenges from operating across multiple geographies, the impact of the growing e-commerce sector, competition from local players, and compliance costs.

To improve its market competitiveness and ensure future readiness while addressing these problems, the company invested in a range of technology initiatives. Among these programs was the adoption of Machine Learning (ML) models across the enterprise.

Each division within the company, including marketing, finance, and sales, came up with its own guidelines for developing and deploying ML models. Within every division, different teams created a range of ML models for various business requirements and use cases. While these teams built their ML models within the prescribed guidelines, there was no organization-wide approach, recommended technology stack, or set of best practices for managing and maintaining them. As a result, the ML models were built on a variety of platforms, such as SageMaker, Databricks, and open-source tools, and their ongoing monitoring and management also varied from division to division.

As the company’s business grew, the number of ML models increased to 1,300. Tracking all these models at an enterprise level became challenging, and the company found it hard to maintain the growing set of models. Additionally, the cumulative annual license costs of these disparate technologies kept rising.

The company needed an IT partner to understand the variety of existing ML models, architect an open-source, enterprise-level Machine Learning Operations (MLOps) platform, and migrate some of the existing ML models onto the new platform.


The team of architects and data scientists from Prescience Decision Solutions identified distinct use cases for creating new ML models and analyzed how they were being built across the enterprise. This included studying the models across multiple parameters, such as data frequency, speed, and size; algorithm complexity; and how the models were served.

Our team also recognized that the new enterprise MLOps platform had to cater to diverse teams of data scientists, data engineers, machine learning engineers, and DevOps engineers. Hence, the platform was built on the following architectural principles:

  • Ease of use
  • Scalability
  • Security
  • Stability  

Based on these primary requirements, the team explored different open-source tools, understood their advantages and disadvantages, and designed an end-to-end MLOps platform architecture over Azure Kubernetes Service (AKS).
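As a rough, hypothetical sketch (every function and name below is an illustrative placeholder, not the company’s actual code), the end-to-end flow such a platform supports can be outlined as a chain of stages: fetch features from an online store, train, register the artifact, then serve predictions from the registered version.

```python
# Hypothetical sketch of an end-to-end MLOps flow; all names are
# illustrative placeholders, not the platform's actual implementation.

def fetch_features(entity_ids):
    """Stand-in for reading rows from an online feature store."""
    return [{"entity": e, "f1": 0.2 * (i + 1)} for i, e in enumerate(entity_ids)]

def train_model(rows):
    """Stand-in for a training pipeline step; the 'model' is just a threshold."""
    return {"threshold": sum(r["f1"] for r in rows) / len(rows)}

def register_model(model, registry):
    """Stand-in for logging an artifact to a model registry; returns a version."""
    registry.append(model)
    return len(registry) - 1

def predict(model, row):
    """Stand-in for a serving endpoint's inference call."""
    return row["f1"] > model["threshold"]

registry = []
rows = fetch_features(["a", "b", "c"])        # f1 values: 0.2, 0.4, 0.6
version = register_model(train_model(rows), registry)
print(version, predict(registry[version], {"f1": 0.9}))
```

In the real platform each stage maps to a dedicated tool rather than an in-process function, but the contract between stages — features in, versioned model out, predictions served from the registered version — is the same.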

The platform was assembled from open-source components, each covering one stage of the ML lifecycle:

  • Feature store: Feast, implemented over Redis and PostgreSQL databases
  • Orchestration of the ML models: Kubeflow
  • Centralized code repository and code merging: Git
  • Artifact storage and model management: MLflow
  • Orchestration of model serving from the registry: GitHub Actions
  • Model serving: Seldon Core and BentoML
  • Model monitoring: Evidently AI
  • Scalability and sustainability: Azure Kubernetes Service (AKS)
  • Model retraining: triggered by Evidently AI thresholds and executed through Kubeflow pipelines
  • Results monitoring: Grafana and Power BI
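Evidently AI’s drift reports are far richer than this, but the threshold-based retraining trigger can be sketched with a simplified, stdlib-only stand-in — the mean-shift statistic and the threshold value here are illustrative assumptions, not the platform’s actual metric:

```python
# Simplified sketch of a threshold-based retraining trigger. In the real
# platform, Evidently AI computes the drift metrics and a Kubeflow pipeline
# performs the retraining; this stand-in uses a plain mean-shift statistic
# and an arbitrary threshold purely for illustration.

def drift_score(reference, current):
    """Absolute shift in means, scaled by the reference data's spread."""
    mean_ref = sum(reference) / len(reference)
    mean_cur = sum(current) / len(current)
    spread = (max(reference) - min(reference)) or 1.0
    return abs(mean_cur - mean_ref) / spread

def should_retrain(reference, current, threshold=0.2):
    """Return True when drift exceeds the threshold, i.e. retraining fires."""
    return drift_score(reference, current) > threshold

reference = [0.1, 0.2, 0.3, 0.4, 0.5]
stable    = [0.15, 0.25, 0.35, 0.45, 0.5]
shifted   = [0.6, 0.7, 0.8, 0.9, 1.0]

print(should_retrain(reference, stable))   # small shift: no retraining
print(should_retrain(reference, shifted))  # large shift: retraining triggered
```

The design point is the same as in the production setup: monitoring emits a score, a pre-agreed threshold turns that score into a yes/no decision, and only the “yes” path kicks off a (relatively expensive) retraining pipeline.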

The team selected four existing ML models to onboard onto the newly created MLOps platform, including a Risk Assessment model from the Marketing and Customer Experience division and a Sales Copilot model from the Sales team. The successful migration of these ML models demonstrated that the solution met all the outlined architectural principles. As more ML models were onboarded, auto-scaling was enabled for the platform.
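On AKS, auto-scaling of serving pods is typically handled by Kubernetes’ Horizontal Pod Autoscaler. Its core scaling rule is public and simple; the replica counts and metric values below are illustrative numbers, not the company’s configuration:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    clamped to the configured [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU at 90% against a 50% target across 3 serving pods -> scale out.
print(desired_replicas(3, 90, 50))   # ceil(3 * 90/50) = 6
# CPU well under target -> scale in, but never below min_replicas.
print(desired_replicas(3, 10, 50))   # ceil(3 * 10/50) = 1
```

Because the rule is proportional, a model that suddenly receives double the traffic roughly doubles its pod count on the next evaluation cycle, which is what lets newly onboarded models share the cluster without manual capacity planning.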

The technologies used for this engagement included:

  1. Redis
  2. PostgreSQL
  3. Kubeflow
  4. Git
  5. MLflow
  6. GitHub Actions
  7. Seldon Core
  8. BentoML
  9. Evidently AI
  10. Azure Kubernetes Service (AKS)
  11. Grafana
  12. Power BI

With the new open-source MLOps platform, the company is enjoying enhanced efficiency and reduced expenses across the enterprise. Major accomplishments include the ability to:

  • Centrally track the models
  • Automate the monitoring mechanism
  • Maintain model accuracy over time
  • Address the challenges of mounting license fees
  • Manage the growing volume of ML models more efficiently

Notably, the teams from the company’s Platform Management division were able to create a roadmap for migrating the disparate ML models and onboard them onto the MLOps platform in a phased manner, using a blue-green deployment strategy with no impact on business users.
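The blue-green idea itself is small enough to sketch: traffic stays on the current (“blue”) deployment until the new (“green”) one passes its health checks, then the router flips in a single step while blue stays warm for rollback. All names below are hypothetical, not the actual deployment code:

```python
# Illustrative blue-green cutover logic; function and variable names are
# hypothetical placeholders, not the company's actual deployment tooling.

def cut_over(router, green_healthy):
    """Switch live traffic to 'green' only if it passed health checks;
    otherwise keep serving from 'blue' so users see no disruption."""
    if green_healthy:
        router["previous"] = router["live"]  # kept warm for instant rollback
        router["live"] = "green"
    return router["live"]

router = {"live": "blue", "previous": None}
print(cut_over(router, green_healthy=False))  # health check failed: stays blue
print(cut_over(router, green_healthy=True))   # checks pass: traffic moves to green
```

Because the switch is a single routing change rather than an in-place upgrade, each model migration could be rehearsed on green, verified, and flipped during business hours — which is what made the phased, zero-impact onboarding described above practical.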
