Achieving MLOps Via Vertex AI

Written by Hammad Umer

Software Engineer

MLOps acts as a bridge between data scientists and the production team. It is designed to eliminate waste and make machine learning systems more scalable by automating workflows and producing consistent insights from ML models. While many different platforms can help you achieve MLOps, Vertex AI, Google Cloud's managed ML platform, is one of the best tools for the job.

In this blog, you'll learn how to use Vertex AI — Google Cloud's newly announced managed ML platform — to build end-to-end ML workflows. You'll gain insight into going from raw data to a deployed model, and leave ready to develop and productionize your own ML projects with Vertex AI. Read ahead!

Build Scalable Models

Vertex AI brings AutoML and AI Platform together into a unified API, user interface, and client library. AutoML lets you train models on image, tabular, text, and video datasets without writing code, while custom training (the successor to AI Platform training) lets you run your own training code for greater control. With Vertex AI, both AutoML training and custom training are available. After training, you can save models, deploy them, and request predictions, all within Vertex AI.
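As a rough sketch, the two training paths look like this in the `google-cloud-aiplatform` Python SDK (project IDs, display names, and the container image are placeholders; the SDK import sits inside each function so the sketch stays importable even without the SDK installed):

```python
def train_with_automl(project: str, dataset_id: str):
    """AutoML path: no training code, just an objective and a budget."""
    from google.cloud import aiplatform  # third-party SDK, assumed installed

    aiplatform.init(project=project, location="us-central1")
    dataset = aiplatform.TabularDataset(dataset_id)
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="automl-tabular-demo",
        optimization_prediction_type="classification",
    )
    # Trains in the cloud; budget is in milli node hours (1000 = 1 node hour).
    return job.run(dataset=dataset, target_column="label",
                   budget_milli_node_hours=1000)


def train_with_custom_code(project: str, staging_bucket: str):
    """Custom path: bring your own training script and base container."""
    from google.cloud import aiplatform

    aiplatform.init(project=project, location="us-central1",
                    staging_bucket=staging_bucket)
    job = aiplatform.CustomTrainingJob(
        display_name="custom-train-demo",
        script_path="trainer/task.py",  # your own training code
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
    )
    return job.run(replica_count=1, machine_type="n1-standard-4")
```

Both paths return a trained `Model` resource, so the rest of the workflow (deployment, prediction) is identical regardless of how the model was trained.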

Why Vertex AI?

For those already familiar with the AI Platform, Vertex AI is a rebranding of it. Additionally, Vertex AI adds new operational features, including Vertex Experiments to track, analyze, and compare ML experiments for automated selection of the best model candidates, and more. Customers have told us they want an ML platform where they can manage datasets, manage and retrain models using an automated ML pipeline, deploy model versions in a scalable way, and split traffic to match specific requirements. If you're interested in any of these features, give Vertex AI a try.

Vertex AI Stages

You can use Vertex AI to manage the following stages in the ML workflow:

  • Create a dataset and upload data

  • Train an ML model on your data

  • Evaluate model accuracy

  • Deploy your trained model to an endpoint for serving predictions

  • Send prediction requests to your endpoint

  • Specify a prediction traffic split in your endpoint

  • Manage your models and endpoints
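The last few stages — deployment, prediction, and traffic splitting — might look like the following sketch. The `validate_traffic_split` helper and all resource names are illustrative, and the deploy/predict calls assume the `google-cloud-aiplatform` SDK (imported inside the function so the snippet loads without it):

```python
def validate_traffic_split(split: dict) -> bool:
    """A traffic split maps deployed-model IDs to integer percentages;
    the percentages must add up to exactly 100."""
    return (all(isinstance(v, int) and v >= 0 for v in split.values())
            and sum(split.values()) == 100)


def deploy_and_predict(model_resource_name: str, instances: list):
    """Deploy a trained model and request online predictions (sketch)."""
    from google.cloud import aiplatform  # assumed installed

    model = aiplatform.Model(model_resource_name)
    # deploy() creates a new endpoint when none is passed in; sending
    # traffic_percentage=100 routes all requests to this model version.
    endpoint = model.deploy(machine_type="n1-standard-2",
                            traffic_percentage=100)
    return endpoint.predict(instances=instances)
```

A split such as `{"model-v1": 80, "model-v2": 20}` would send 80% of requests to the old version and 20% to a canary, which is how gradual rollouts are typically done on an endpoint.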

What Vertex AI Supports

Vertex AI supports managed datasets for image, tabular (including forecasting), text, and video data, and more. Vertex AI lets users choose the type of data, and each category has multiple sub-categories. In all, it eases the process of uploading data and organizing it in the cloud.
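For instance, creating a managed image dataset and importing labeled data from a GCS bucket can be sketched as below (the project, bucket path, and display name are placeholders; the call assumes the `google-cloud-aiplatform` SDK):

```python
def create_image_dataset(project: str, gcs_csv: str):
    """Create a managed image dataset and import labeled data from GCS.

    The import CSV can carry labels alongside the image URIs, which is
    how labels get imported at data-import time.
    """
    from google.cloud import aiplatform  # assumed installed

    aiplatform.init(project=project, location="us-central1")
    return aiplatform.ImageDataset.create(
        display_name="my-image-dataset",
        gcs_source=gcs_csv,  # e.g. "gs://my-bucket/labels.csv"
        import_schema_uri=(
            aiplatform.schema.dataset.ioformat.image.single_label_classification
        ),
    )
```

The `import_schema_uri` selects the sub-category (single-label classification here); other schemas cover multi-label classification, object detection, and so on.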

  • Building Pipelines using Vertex AI: Vertex AI, Cloud Functions, and Google Cloud Storage together let you build a machine learning pipeline that includes a trigger. The pipeline combines many different AI-related services; the steps below should also be taken into account.
  • Data Preparation: Data drives machine learning as a practice, and we can't do anything without it. So the first step is to prepare the dataset. Vertex AI provides a managed dataset facility: data can simply be imported from local storage, or you can point to a GCS location if the dataset already lives in an existing GCS bucket. Vertex AI datasets also let you import labels directly at the time of importing data.
  • Training: There are many options for training a model on the dataset. This post shows how to leverage Vertex AI AutoML to orchestrate MLOps, for three reasons. First, Vertex AI AutoML is easy to integrate into a Vertex AI Pipeline. Second, Vertex AI Pipelines acts as a managed wrapper around Kubeflow Pipelines, and Google provides a set of prebuilt Kubeflow components, enabling a smooth fusion with Kubeflow Pipelines. Finally, you can leverage Vertex AI AutoML while writing standard Python code for custom components and connecting them together.
  • Deploying models for prediction: You can deploy models on Vertex AI and get an endpoint for serving predictions, whether or not the model was trained on Vertex AI. Deployment is treated as an operation combining model upload and endpoint serving, which Vertex AI supports through its Model and Endpoint resources. The Vertex AI Model registry also provides a central place where all trained models are managed along with their versions.
  • Monitoring: Vertex AI Endpoint provides monitoring for predictions per second, requests per second, latency, and prediction error rate. Handling data drift takes more effort, but this is sufficient to spot errors in prediction requests and prediction delays.
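The pipeline step above can be sketched with the Kubeflow Pipelines SDK and Google's prebuilt pipeline components. This is a hedged outline, not a tested pipeline: the component names follow the `google-cloud-pipeline-components` v1 layout, exact parameters may differ by SDK version, and all display names and paths are placeholders. The third-party imports live inside the function so the sketch loads without `kfp` installed:

```python
def build_pipeline(project: str, gcs_csv: str, pipeline_root: str):
    """Compile a Vertex AI (Kubeflow) pipeline: dataset -> AutoML -> deploy."""
    from kfp import compiler, dsl  # kfp v2 SDK, assumed installed
    from google_cloud_pipeline_components.v1.automl.training_job import (
        AutoMLImageTrainingJobRunOp,
    )
    from google_cloud_pipeline_components.v1.dataset import ImageDatasetCreateOp
    from google_cloud_pipeline_components.v1.endpoint import (
        EndpointCreateOp, ModelDeployOp,
    )

    @dsl.pipeline(name="automl-image-mlops", pipeline_root=pipeline_root)
    def pipeline():
        # Each step is a prebuilt component; outputs wire the steps together.
        ds = ImageDatasetCreateOp(project=project, display_name="ds",
                                  gcs_source=gcs_csv)
        train = AutoMLImageTrainingJobRunOp(
            project=project, display_name="train",
            dataset=ds.outputs["dataset"],
            budget_milli_node_hours=8000,
        )
        ep = EndpointCreateOp(project=project, display_name="ep")
        ModelDeployOp(model=train.outputs["model"],
                      endpoint=ep.outputs["endpoint"])

    # The compiled JSON spec is what Vertex AI Pipelines actually runs.
    compiler.Compiler().compile(pipeline, package_path="pipeline.json")
```

Custom components written in standard Python can be mixed freely with these prebuilt ones, which is the "smooth fusion" described above.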

Pipeline & Trigger

One can perform dataset creation, model training, endpoint instantiation, and model deployment by hand. However, it is better to construct a pipeline that does all these jobs consistently. In addition, AutoML makes it likely that you end up with the best candidate model.
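The trigger can be a GCS-triggered Cloud Function that kicks off a pipeline run whenever new training data lands in a bucket. A minimal sketch, assuming a compiled pipeline spec already sits in GCS (the project, bucket paths, and `training-data/` prefix are illustrative, and the `PipelineJob` call assumes the `google-cloud-aiplatform` SDK):

```python
def trigger_pipeline(event: dict, context=None):
    """GCS-triggered Cloud Function (sketch): a new object landing in the
    bucket kicks off a retraining pipeline run."""
    # Only react to new training data, not to arbitrary uploads.
    if not event.get("name", "").startswith("training-data/"):
        return None

    from google.cloud import aiplatform  # assumed installed

    aiplatform.init(project="my-project", location="us-central1")
    job = aiplatform.PipelineJob(
        display_name="retrain-on-new-data",
        template_path="gs://my-bucket/pipeline.json",  # compiled pipeline spec
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.submit()  # fire and forget; don't block the function on training
    return job
```

Using `submit()` rather than `run()` keeps the function short-lived: the pipeline executes on Vertex AI while the Cloud Function returns immediately.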

In this blog, we have explained the functions of Vertex AI. If you'd like to know how to achieve Machine Learning Operations (MLOps) with Kubeflow, watch this webcast to learn more. Royal Cyber experts have in-depth experience to help you build a CI/CD pipeline and automate machine learning model deployment. With unparalleled support, our experts can set up an automated pipeline for model retraining. For more information on MLOps and its components, reach out to our MLOps experts.
