# Google-Cloud

## Docs

- [📄 Other Resources](https://huggingface.co/docs/google-cloud/resources.md)
- [Hugging Face on Google Cloud](https://huggingface.co/docs/google-cloud/index.md)
- [🔥 Features & benefits](https://huggingface.co/docs/google-cloud/features.md)
- [Using TPUs on Google Cloud](https://huggingface.co/docs/google-cloud/tpu.md)
- [Deploy Snowflake's Arctic Embed with TEI DLC on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tei-deployment.md)
- [Evaluate open LLMs with Vertex AI and Gemini](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-evaluate-llms-with-vertex-ai.md)
- [Deploy Meta Llama 3.1 405B with TGI DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-llama-3-1-405b-on-vertex-ai.md)
- [Deploy Llama 3.1 8B with TGI DLC on Cloud Run](https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-llama-3-1-on-cloud-run.md)
- [Deploy Meta Llama 3 8B with TGI DLC on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tgi-deployment.md)
- [Deploy Llama 3.1 405B with TGI DLC on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tgi-llama-405b-deployment.md)
- [Deploy BGE Base v1.5 with TEI DLC from GCS on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tei-from-gcs-deployment.md)
- [Deploy Gemma 7B with TGI DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-gemma-on-vertex-ai.md)
- [Deploy Gemma2 9B with TGI DLC on Cloud Run](https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-gemma-2-on-cloud-run.md)
- [Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT + LoRA on GKE](https://huggingface.co/docs/google-cloud/examples/gke-trl-lora-fine-tuning.md)
- [Deploy FLUX with PyTorch Inference DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-flux-on-vertex-ai.md)
- [Deploy Gemma 7B with TGI DLC from GCS on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-gemma-from-gcs-on-vertex-ai.md)
- [Deploy Embedding Models with TEI DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-embedding-on-vertex-ai.md)
- [Cloud Run Examples](https://huggingface.co/docs/google-cloud/examples/cloud-run-index.md)
- [Deploy BERT Models with PyTorch Inference DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-bert-on-vertex-ai.md)
- [Fine-tune Gemma 2B with PyTorch Training DLC using SFT + LoRA on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-trl-lora-sft-fine-tuning-on-vertex-ai.md)
- [Google Kubernetes Engine (GKE) Examples](https://huggingface.co/docs/google-cloud/examples/gke-index.md)
- [Deploy Qwen2 7B with TGI DLC from GCS on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tgi-from-gcs-deployment.md)
- [Deploy Llama 3.2 11B Vision with TGI DLC on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-llama-vision-on-vertex-ai.md)
- [Fine-tune Gemma 2B with PyTorch Training DLC using SFT on GKE](https://huggingface.co/docs/google-cloud/examples/gke-trl-full-fine-tuning.md)
- [Deploy Llama 3.2 11B Vision with TGI DLC on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tgi-llama-vision-deployment.md)
- [Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT on Vertex AI](https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-trl-full-sft-fine-tuning-on-vertex-ai.md)
- [Deploy Gemma2 with multiple LoRA adapters with TGI DLC on GKE](https://huggingface.co/docs/google-cloud/examples/gke-tgi-multi-lora-deployment.md)
- [Available DLCs on Google Cloud](https://huggingface.co/docs/google-cloud/containers/available.md)
- [Introduction](https://huggingface.co/docs/google-cloud/containers/introduction.md)

### 📄 Other Resources
https://huggingface.co/docs/google-cloud/resources.md

# 📄 Other Resources

Learn how to use Hugging Face in Google Cloud by reading our blog posts, presentations, Google documentation and examples below.

## Blog posts

- [Building for an Open Future - our new partnership with Google Cloud](https://huggingface.co/blog/google-cloud)
- [Hugging Face and Google partner for open AI collaboration](https://huggingface.co/blog/gcp-partnership)
- [Google Cloud TPUs made available to Hugging Face users](https://huggingface.co/blog/tpu-inference-endpoints-spaces)
- [Making thousands of open LLMs bloom in the Vertex AI Model Garden](https://huggingface.co/blog/google-cloud-model-garden)
- [Deploy Meta Llama 3.1 405B on Google Cloud Vertex AI](https://huggingface.co/blog/llama31-on-vertex-ai)

## Presentations

- [Gemma Developer Day in Tokyo | Unleashing Gemma in production with Hugging Face Text Generation Inference (TGI)](https://rsvp.withgoogle.com/events/gemma-dev-day_2024tokyo/sessions/hugging-face-transformers)

## Google Documentation

- [Google Cloud Hugging Face Deep Learning Containers](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face)
- [Google Cloud public Artifact Registry for DLCs](https://console.cloud.google.com/artifacts/docker/deeplearning-platform-release/us/gcr.io)
- [Serve Gemma open models using GPUs on GKE with Hugging Face TGI](https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi)
- [Generative AI on Vertex - Use Hugging Face text generation models](https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-hugging-face-models)

## Examples

- [All examples](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples)

### Vertex AI

- Inference

  - [Deploy BERT Models with PyTorch Inference DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai)
  - [Deploy Embedding Models with TEI DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai)
  - [Deploy FLUX with PyTorch Inference DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai)
  - [Deploy Gemma 7B with TGI DLC from GCS on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai)
  - [Deploy Gemma 7B with TGI DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai)
  - [Deploy Llama 3.2 11B Vision with TGI DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai)
  - [Deploy Meta Llama 3.1 405B with TGI DLC on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai)

- Training

  - [Fine-tune Gemma 2B with PyTorch Training DLC using SFT + LoRA on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai)
  - [Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai)

- Evaluation

  - [Evaluate open LLMs with Vertex AI and Gemini](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/evaluate-llms-with-vertex-ai)

### GKE

- Inference

  - [Deploy BGE Base v1.5 with TEI DLC from GCS on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment)
  - [Deploy Gemma2 with multiple LoRA adapters with TGI DLC on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment)
  - [Deploy Llama 3.1 405B with TGI DLC on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment)
  - [Deploy Llama 3.2 11B Vision with TGI DLC on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment)
  - [Deploy Meta Llama 3 8B with TGI DLC on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment)
  - [Deploy Qwen2 7B with TGI DLC from GCS on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment)
  - [Deploy Snowflake's Arctic Embed with TEI DLC on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment)

- Training

  - [Fine-tune Gemma 2B with PyTorch Training DLC using SFT on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-full-fine-tuning)
  - [Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT + LoRA on GKE](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-lora-fine-tuning)

### Cloud Run

- Inference

  - [Deploy Gemma2 9B with TGI DLC on Cloud Run](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-gemma-2-on-cloud-run)
  - [Deploy Llama 3.1 8B with TGI DLC on Cloud Run](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-llama-3-1-on-cloud-run)

### Hugging Face on Google Cloud
https://huggingface.co/docs/google-cloud/index.md

# Hugging Face on Google Cloud

![Hugging Face x Google Cloud](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/google-cloud/thumbnail.png)

Hugging Face collaborates with Google across open science, open source, cloud, and hardware to enable companies to build their own AI with the latest open models from Hugging Face and the latest cloud and hardware features from Google Cloud.

Hugging Face enables new experiences for Google Cloud customers. They can easily train and deploy Hugging Face models on Google Kubernetes Engine (GKE), Vertex AI, or Cloud Run on any hardware available in Google Cloud using Hugging Face Deep Learning Containers (DLCs).

If you have any issues using Hugging Face on Google Cloud, you can get community support by creating a new topic in the [Forum](https://discuss.huggingface.co/c/google-cloud/69/l/latest) dedicated to Google Cloud usage.

## Train and Deploy Models on Google Cloud with Hugging Face Deep Learning Containers

Hugging Face built Deep Learning Containers (DLCs) for Google Cloud customers to run any of their machine learning workloads in an optimized environment, with no configuration or maintenance on their part. These are Docker images pre-installed with deep learning frameworks and libraries such as 🤗 Transformers, 🤗 Datasets, and 🤗 Tokenizers. The DLCs allow you to serve and train models directly, skipping the complicated process of building and optimizing your serving and training environments from scratch.

For training, our DLCs are available for PyTorch via 🤗 Transformers. They include support for training on both GPUs and TPUs with libraries such as 🤗 TRL, Sentence Transformers, or 🧨 Diffusers.

For inference, we have a general-purpose PyTorch inference DLC for serving models trained with any of the frameworks mentioned above, on both CPU and GPU. There is also the Text Generation Inference (TGI) DLC for high-performance text generation of LLMs on both GPU and TPU. Finally, there is a Text Embeddings Inference (TEI) DLC for high-performance serving of embedding models on both CPU and GPU.

The DLCs are hosted in [Google Cloud Artifact Registry](https://console.cloud.google.com/artifacts/docker/deeplearning-platform-release/us/gcr.io) and can be used from any Google Cloud service such as Google Kubernetes Engine (GKE), Vertex AI, or Cloud Run.

Hugging Face DLCs are open source and licensed under Apache 2.0 within the [Google-Cloud-Containers](https://github.com/huggingface/Google-Cloud-Containers) repository. For premium support, our [Expert Support Program](https://huggingface.co/support) gives you direct dedicated support from our team.

You have two options to take advantage of these DLCs as a Google Cloud customer:

1. To [get started](https://huggingface.co/blog/google-cloud-model-garden), you can use our no-code integrations within Vertex AI or GKE.
2. For more advanced scenarios, you can pull the containers from the Google Cloud Artifact Registry directly in your environment. [Here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples) is a list of notebook examples.

### 🔥 Features & benefits
https://huggingface.co/docs/google-cloud/features.md

# 🔥 Features & benefits

The Hugging Face DLCs provide ready-to-use, tested environments to train and deploy Hugging Face models. They can be used in combination with Google Cloud offerings including Google Kubernetes Engine (GKE), Vertex AI, and Cloud Run. GKE is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using Google Cloud's infrastructure. Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize Large Language Models (LLMs). Cloud Run is a serverless container platform that allows developers to deploy and manage containerized applications without managing infrastructure, enabling automatic scaling and billing only for usage.

## One command is all you need

With the new Hugging Face DLCs, train cutting-edge Transformers-based NLP models in a single line of code. The Hugging Face PyTorch DLCs for training come with all the libraries installed to run a single command, e.g. via the TRL CLI, to fine-tune LLMs in any setting, whether single-GPU, single-node multi-GPU, and beyond.
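
For illustration, the snippet below is a minimal sketch of the kind of SFT fine-tuning run such a command kicks off inside the training DLC; the model, dataset, and arguments are placeholders, and the exact `SFTTrainer`/`SFTConfig` signature depends on your TRL version.

```python
# Minimal sketch of an SFT fine-tuning run with TRL, roughly what a single TRL
# command executes inside the Hugging Face PyTorch Training DLC.
# Model and dataset below are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("stanfordnlp/imdb", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # any causal LM from the Hugging Face Hub
    train_dataset=dataset,
    args=SFTConfig(output_dir="./sft-output", max_steps=100),
)
trainer.train()
```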

## Accelerate machine learning from science to production

In addition to Hugging Face DLCs, we created a first-class Hugging Face library for inference, [`huggingface-inference-toolkit`](https://github.com/huggingface/huggingface-inference-toolkit), that comes with the Hugging Face PyTorch DLCs for inference, with full support on serving any PyTorch model on Google Cloud.

Deploy your trained models for inference with just one more line of code, or select [any of the 747,000+ Transformers-compatible and publicly available models from the model Hub](https://huggingface.co/models?library=transformers&sort=trending) and deploy them on either Vertex AI, GKE, or Cloud Run.
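
For illustration only, the following is a minimal sketch of registering and deploying a Hub model on Vertex AI with the PyTorch inference DLC; the container URI, model, machine type, and `HF_MODEL_ID`/`HF_TASK` environment variables are assumptions for this sketch, so refer to the Vertex AI examples linked in this documentation for complete, tested walk-throughs.

```python
# Hedged sketch: register a public Hub model on Vertex AI with the Hugging Face
# PyTorch inference DLC and deploy it behind an endpoint. Values are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

model = aiplatform.Model.upload(
    display_name="distilbert-sentiment",
    # Pick the current PyTorch inference DLC URI from the list of available DLCs
    serving_container_image_uri="<huggingface-pytorch-inference-dlc-uri>",
    serving_container_environment_variables={
        "HF_MODEL_ID": "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
    serving_container_ports=[8080],
)
model.wait()

endpoint = model.deploy(
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```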

## High-performance text generation and embedding

Besides the PyTorch-oriented DLCs, Hugging Face also provides high-performance inference for both text generation and embedding models via the Hugging Face DLCs for both [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) and [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference), respectively.

The Hugging Face DLC for TGI enables you to deploy [any of the 140,000+ text generation models from the Hugging Face Hub supported by TGI](https://huggingface.co/models?other=text-generation-inference&sort=trending), or any custom model as long as [its architecture is supported within TGI](https://huggingface.co/docs/text-generation-inference/supported_models).

The Hugging Face DLC for TEI enables you to deploy [any of the 10,000+ embedding, re-ranking, or sequence classification models from the Hugging Face Hub supported by TEI](https://huggingface.co/models?other=text-embeddings-inference&sort=trending), or any custom model as long as [its architecture is supported within TEI](https://huggingface.co/docs/text-embeddings-inference/en/supported_models).

Additionally, these DLCs come with full support for Google Cloud, meaning that deploying models from Google Cloud Storage (GCS) is also straightforward and requires no extra configuration.

## Built-in performance

Hugging Face DLCs feature built-in performance optimizations for PyTorch to train models faster. The DLCs also give you the flexibility to choose a training infrastructure that best aligns with the price/performance ratio for your workload.

The Hugging Face Training DLCs are fully integrated with Google Cloud, enabling the use of [the latest generation of instances available on Google Cloud Compute Engine](https://cloud.google.com/products/compute?hl=en).

Hugging Face Inference DLCs provide you with production-ready endpoints that scale quickly with your Google Cloud environment, built-in monitoring, and a ton of enterprise features.

---

Read more about Vertex AI in [its official documentation](https://docs.cloud.google.com/vertex-ai/docs), GKE in [its official documentation](https://docs.cloud.google.com/kubernetes-engine/docs), and Cloud Run in [its official documentation](https://docs.cloud.google.com/run/docs).

### Using TPUs on Google Cloud
https://huggingface.co/docs/google-cloud/tpu.md

# Using TPUs on Google Cloud

To learn how to use Google Cloud TPUs with Hugging Face libraries, check out [optimum-tpu](https://huggingface.co/docs/optimum-tpu), our dedicated library for TPU support.

### Deploy Snowflake's Arctic Embed with TEI DLC on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tei-deployment.md

# Deploy Snowflake's Arctic Embed with TEI DLC on GKE

Snowflake's Arctic Embed is a suite of text embedding models focused on creating high-quality retrieval models optimized for performance, achieving state-of-the-art (SOTA) performance on the MTEB/BEIR leaderboard for each of their size variants. Text Embeddings Inference (TEI) is a toolkit developed by Hugging Face for deploying and serving open source text embedding and sequence classification models, enabling high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5. Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to deploy a text embedding model from the Hugging Face Hub on a GKE Cluster running a purpose-built container to deploy text embedding models in a secure and managed environment with the Hugging Face DLC for TEI.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine; these are the command-line tools for Google Cloud and Kubernetes, respectively, used to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TEI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; please refer to the GKE documentation for alternative installation methods.

## Create GKE Cluster

Once everything is set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single CPU node, as CPU inference is enough to serve most text embedding models, although they could benefit significantly from GPU serving.

A CPU is used to run inference on the text embedding model to showcase the current capabilities of TEI, but switching to a GPU is as easy as replacing `spec.containers[0].image` with `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-4.ubuntu2204`, and then updating the requested resources, as well as the `nodeSelector` requirements, in the `deployment.yaml` file. For more information, please refer to the [`gpu-config`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/gpu-config/) directory, which contains a pre-defined configuration for GPU serving in TEI with an NVIDIA Tesla T4 GPU (compute capability 7.5, i.e. natively supported in TEI).

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all cluster versions support every CPU. The same applies to GPU support, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To check the available GKE Cluster versions in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information about GKE release channels and cluster versions, please refer to the GKE documentation.

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tei-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Deploy TEI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TEI, serving the [`Snowflake/snowflake-arctic-embed-m`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) model from the Hugging Face Hub.

Recently, the Hugging Face Hub team included the `text-embeddings-inference` tag in the Hub, so feel free to explore all the embedding models in the Hub that can be served via TEI at [https://huggingface.co/models?other=text-embeddings-inference](https://huggingface.co/models?other=text-embeddings-inference).

The Hugging Face DLC for TEI will be deployed via `kubectl`, from the configuration files in either the [`cpu-config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/cpu-config/) or the [`gpu-config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/gpu-config/) directories depending on whether you want to use the CPU or GPU accelerators, respectively:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/cpu-config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TEI setting the `MODEL_ID` to [`Snowflake/snowflake-arctic-embed-m`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m).
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/cpu-config/service.yaml): contains the service details of the pod, exposing port 8080 for the TEI service.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/cpu-config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tei-deployment/cpu-config
```

As already mentioned, in this example you will be deploying the container on a CPU node, but the configuration to deploy TEI on a GPU node is also available in the [`gpu-config`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment/gpu-config/) directory; if you want to deploy TEI on a GPU node instead, run `kubectl apply -f Google-Cloud-Containers/examples/gke/tei-deployment/gpu-config` rather than `kubectl apply -f Google-Cloud-Containers/examples/gke/tei-deployment/cpu-config`.

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tei-deployment/imgs/gke-deployment.png)

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s deployment/tei-deployment
```

## Inference with TEI

To run inference against the deployed TEI service, you can either:

- Port-forward the deployed TEI service to port 8080, so that it can be accessed via `localhost`, with the command:

  ```bash
  kubectl port-forward service/tei-service 8080:8080
  ```

- Access the TEI service via the external IP of the ingress, which is the default scenario here since you have defined the ingress configuration in the `cpu-config/ingress.yaml` or `gpu-config/ingress.yaml` file (though it can be skipped in favour of port-forwarding); the external IP can be retrieved with the following command:

  ```bash
  kubectl get ingress tei-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

TEI exposes different inference endpoints based on the task that the model is serving:

- **Text Embeddings**: text embedding models expose the endpoint `/embed` expecting a payload with the key `inputs` which is either a string or a list of strings to be embedded.
- **Re-rank**: re-ranker models expose the endpoint `/rerank` expecting a payload with the keys `query` and `texts`, where the `query` is the reference used to rank the similarity against each text in `texts`.
- **Sequence Classification**: classic sequence classification models expose the endpoint `/predict`, expecting a payload with the key `inputs`, which is either a string or a list of strings to classify.

More information is available in the [Text Embeddings Inference documentation](https://huggingface.co/docs/text-embeddings-inference); the sketch below illustrates the three payload shapes from Python.
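
As a reference, below is a minimal Python sketch of the three payload shapes using `requests`; it assumes the service is reachable at `localhost:8080` (e.g. via the port-forwarding described above) and that the deployed model actually supports the corresponding task.

```python
# Sketch of the three TEI payload shapes; assumes the service is reachable at
# localhost:8080 and that the served model supports the corresponding task.
import requests

base_url = "http://localhost:8080"

# Text embeddings: `inputs` is a string or a list of strings
embeddings = requests.post(
    f"{base_url}/embed",
    json={"inputs": ["What is Deep Learning?", "What is Machine Learning?"]},
).json()

# Re-ranking (re-ranker models only): `query` is ranked against each text in `texts`
ranks = requests.post(
    f"{base_url}/rerank",
    json={"query": "What is Deep Learning?", "texts": ["Deep Learning is...", "Cooking pasta is..."]},
).json()

# Sequence classification (classifier models only): `inputs` is a string or a list of strings
predictions = requests.post(
    f"{base_url}/predict",
    json={"inputs": "I love this product!"},
).json()
```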

### Via cURL

To send a POST request to the TEI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead:

```bash
curl http://$(kubectl get ingress tei-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```

Which produces the following output (truncated for brevity; the full vector length is 768, the embedding dimension of [`Snowflake/snowflake-arctic-embed-m`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m), i.e. the model you are serving):

```
[[-0.01483098,0.010846359,-0.024679236,0.012507628,0.034231555,...]]
```
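
As a quick usage check, you could, for instance, compare two returned embeddings with cosine similarity; the sketch below assumes you have parsed the `/embed` response for two inputs into a Python list of vectors (e.g. with the `requests` sketch above).

```python
import numpy as np

# Two 768-dimensional vectors returned by the /embed endpoint for two different inputs
a, b = np.array(embeddings[0]), np.array(embeddings[1])

# Cosine similarity between the two embedded sentences
cosine_similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine_similarity)
```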

## Delete GKE Cluster

Finally, once you are done using TEI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, if you want to preserve the cluster, you can scale the replicas of the deployed pod down to 0, since the default GKE Cluster deployed in GKE Autopilot mode runs just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 deployment/tei-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment)!

### Evaluate open LLMs with Vertex AI and Gemini
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-evaluate-llms-with-vertex-ai.md

# Evaluate open LLMs with Vertex AI and Gemini

The [Gen AI Evaluation Service in Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview) lets us evaluate LLMs or applications using existing or custom evaluation criteria. It supports academic metrics like BLEU and ROUGE, LLM-as-a-Judge with Pointwise and Pairwise metrics, and custom metrics you can define yourself. By default, `Gemini 1.5 Pro` is used as the LLM-as-a-Judge.

We can use the Gen AI Evaluation Service to evaluate the performance of open and fine-tuned models using Vertex AI Endpoints and compute resources. In this example we will evaluate summaries of news articles generated by [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), using a Pointwise metric based on the [G-Eval](https://arxiv.org/abs/2303.16634) Coherence metric.

We will cover the following topics:

1. Setup / Configuration
2. Deploy Llama 3.1 8B on Vertex AI
3. Evaluate Llama 3.1 8B using different prompts on Coherence
4. Interpret the results 
5. Clean up resources

## Setup / Configuration

First, you need to install `gcloud` on your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet "google-cloud-aiplatform[evaluation]"  huggingface_hub transformers datasets
```

For ease of use we define the following environment variables for GCP.

_Note 1: Make sure to adapt the project ID to your GCP project._  
_Note 2: The Gen AI Evaluation Service is not available in all regions. If you want to use it, you need to select a region that supports it. `us-central1` is currently supported._

```python
%env PROJECT_ID=gcp-partnership-412108
%env LOCATION=us-central1
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310
```

Then you need to log in to your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Deploy Llama 3.1 8B on Vertex AI

Once everything is set up, we can deploy the Llama 3.1 8B model on Vertex AI. We will use the `google-cloud-aiplatform` Python SDK to do so. [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) is a gated model, so you need to log in to your Hugging Face Hub account with a read-access token, either a fine-grained one with access to the gated model, or one with overall read access to your account. More information on how to generate a read-access token for the Hugging Face Hub is available at [Hugging Face Hub - Security tokens](https://huggingface.co/docs/hub/en/security-tokens).

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

After we are logged in we can "upload" the model i.e. register the model on Vertex AI. If you want to learn more about the arguments you can pass to the `upload` method, check out [Deploy Gemma 7B with TGI on Vertex AI](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/vertex-notebook.ipynb).

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

We will deploy the `meta-llama/Meta-Llama-3.1-8B-Instruct` to 1x NVIDIA L4 accelerator with 24GB memory. We set TGI parameters to allow for a maximum of 8000 input tokens, 8192 maximum total tokens, and 8192 maximum batch prefill tokens.

```python
from huggingface_hub import get_token

vertex_model_name = "llama-3-1-8b-instruct"

model = aiplatform.Model.upload(
    display_name=vertex_model_name,
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "MODEL_ID": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "MAX_INPUT_TOKENS": "8000",
        "MAX_TOTAL_TOKENS": "8192",
        "MAX_BATCH_PREFILL_TOKENS": "8192",
        "HUGGING_FACE_HUB_TOKEN": get_token(),
    },
    serving_container_ports=[8080],
)
model.wait() # wait for the model to be registered

# create endpoint
endpoint = aiplatform.Endpoint.create(display_name=f"{vertex_model_name}-endpoint")

# deploy model to 1x NVIDIA L4
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```

The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes.

After the model is deployed, we can test our endpoint. We define a helper `generate` function, which will later be used to send requests to the deployed model and collect the outputs for evaluation.

```python
import re
from transformers import AutoTokenizer

# grep the model id from the container spec environment variables
model_id = next((re.search(r'value: "(.+)"', str(item)).group(1) for item in list(model.container_spec.env) if 'MODEL_ID' in str(item)), None)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

generation_config = {
  "max_new_tokens": 256,
  "do_sample": True,
  "top_p": 0.2,
  "temperature": 0.2,
}

def generate(prompt, generation_config=generation_config):
  formatted_prompt = tokenizer.apply_chat_template(
        [
          {"role": "user", "content": prompt},
        ],
        tokenize=False,
        add_generation_prompt=True,
      )
  
  payload = {
    "inputs": formatted_prompt,
    "parameters": generation_config
  }
  output = deployed_model.predict(instances=[payload])
  generated_text = output.predictions[0]
  return generated_text

generate("How many people live in Berlin?", generation_config)
# 'The population of Berlin is approximately 6.578 million as of my cut off data. However, considering it provides real-time updates, the current population might be slightly higher'
```

## Evaluate Llama 3.1 8B using different prompts on Coherence

We will evaluate the Llama 3.1 8B model using different prompts on Coherence. Coherence measures how well the individual sentences within a summarized news article connect together to form a unified and easily understandable narrative.

We are going to use the new [Generative AI Evaluation Service](https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview). The Gen AI Evaluation Service can be used for:
* Model selection: Choose the best pre-trained model for your task based on benchmark results and its performance on your specific data.
* Generation settings: Tweak model parameters (like temperature) to optimize output for your needs.
* Prompt engineering: Craft effective prompts and prompt templates to guide the model towards your preferred behavior and responses.
* Improve and safeguard fine-tuning: Fine-tune a model to improve performance for your use case, while avoiding biases or undesirable behaviors.
* RAG optimization: Select the most effective Retrieval Augmented Generation (RAG) architecture to enhance performance for your application.
* Migration: Continuously assess and improve the performance of your AI solution by migrating to newer models when they provide a clear advantage for your specific use case.

In our case, we will use it to evaluate different prompt templates to achieve the most coherent summaries using Llama 3.1 8B Instruct. 

We are going to use a reference free Pointwise metric based on [G-Eval](https://arxiv.org/abs/2303.16634) Coherence metric. 

The first step is to define our prompt template and create our `PointwiseMetric`. Vertex AI returns the model's response in the `response` field, and our news article will be made available in the `text` field.

```python
from vertexai.evaluation import EvalTask, PointwiseMetric

g_eval_coherence = """
You are an expert evaluator. You will be given one summary written for a news article.
Your task is to rate the summary on one metric.
Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.

Evaluation Criteria:

Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to a coherent body of information about a topic."

Evaluation Steps:

1. Read the news article carefully and identify the main topic and key points.
2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.
3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.

Example:

Source Text:

{text}

Summary:

{response}

Evaluation Form (scores ONLY):

- Coherence:"""

metric = PointwiseMetric(
    metric="g-eval-coherence",
    metric_prompt_template=g_eval_coherence,
)
```

We are going to use the [argilla/news-summary](https://huggingface.co/datasets/argilla/news-summary) dataset, consisting of news articles from Reuters. We are going to use a random subset of 15 articles to keep the evaluation fast. Feel free to change the dataset and the number of articles to evaluate the model with more data and different topics.

```python
from datasets import load_dataset

subset_size = 15
dataset = load_dataset("argilla/news-summary", split="train").shuffle(seed=42).select(range(subset_size))

# print first 150 characters of the first article
print(dataset[0]["text"][:150])
```

Before we can run the evaluation, we need to convert our dataset into a pandas dataframe. 

```python
# remove all columns except for "text"
to_remove = [col for col in dataset.features.keys() if col != "text"]
dataset = dataset.remove_columns(to_remove)
df = dataset.to_pandas()
df.head()
```

Awesome! We are almost ready. The last step is to define the different summarization prompts we want to use for evaluation.

```python
summarization_prompts = {
  "simple": "Summarize the following news article: {text}",
  "eli5": "Summarize the following news article in a way a 5 year old would understand: {text}",
  "detailed": """Summarize the given news article, text, including all key points and supporting details? The summary should be comprehensive and accurately reflect the main message and arguments presented in the original text, while also being concise and easy to understand. To ensure accuracy, please read the text carefully and pay attention to any nuances or complexities in the language.
  
Article:
{text}"""
}
```

Now we can iterate over our prompts, create a separate evaluation task for each, use our coherence metric to evaluate the summaries, and collect the results.

```python
import uuid

results = {}
for prompt_name, prompt in summarization_prompts.items():

    # 1. add new prompt column
    df["prompt"] = df["text"].apply(lambda x: prompt.format(text=x))

    # 2. create eval task
    eval_task = EvalTask(
        dataset=df,
        metrics=[metric],
        experiment="llama-3-1-8b-instruct",
    )
    # 3. run eval task
    # Note: If the last iteration takes > 1 minute you might need to retry the evaluation
    exp_results = eval_task.evaluate(model=generate, experiment_run_name=f"prompt-{prompt_name}-{str(uuid.uuid4())[:8]}")
    print(f"{prompt_name}: {exp_results.summary_metrics['g-eval-coherence/mean']}")
    results[prompt_name] = exp_results.summary_metrics["g-eval-coherence/mean"]

for prompt_name, score in sorted(results.items(), key=lambda x: x[1], reverse=True):
    print(f"{prompt_name}: {score}")
```

Nice, it looks like on our limited test the "simple" prompt yields the best results. We can inspect and compare the results in the GCP Console at [Vertex AI > Model Development > Experiments](https://console.cloud.google.com/vertex-ai/experiments).

![experiment-results](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/evaluate-llms-with-vertex-ai/assets/experiment-results.png)

The overview allows you to compare the results across different experiments and to inspect the individual evaluations. Here we can see that the standard deviation of the "detailed" prompt is quite high. This could be due to the low sample size, or it may indicate that we need to improve the prompt further.
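
Beyond the aggregated means, you can also inspect the per-example scores. The sketch below assumes the `exp_results` object from the last loop iteration is still in scope and, as in recent versions of the SDK, exposes a `metrics_table` pandas DataFrame; the exact column names may differ, so inspect them first.

```python
# Inspect per-example results from the last evaluation run (assumes `exp_results`
# exposes a `metrics_table` DataFrame with one row per evaluated example)
per_example = exp_results.metrics_table
print(per_example.columns.tolist())  # check the available columns first
print(per_example.head())
```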

You can find more examples on how to use the Gen AI Evaluation Service in the [Vertex AI Generative AI documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/evaluation-overview), including how to:
* [customize the LLM as a Judge](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/bring_your_own_autorater_with_custom_metric.ipynb)
* [use Pairwise metrics and compare different LLMs](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/compare_generative_ai_models.ipynb)
* [evaluate different prompts more efficiently](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/evaluation/prompt_engineering_gen_ai_evaluation_service_sdk.ipynb)

## Resource clean-up

Finally, you can release the resources that you've created as follows, to avoid unnecessary costs:

* `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
* `deployed_model.delete` to gracefully delete the endpoint(s) where the model was deployed, after the `undeploy_all` method.
* `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/evaluate-llms-with-vertex-ai)!

### Deploy Meta Llama 3.1 405B with TGI DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-llama-3-1-405b-on-vertex-ai.md

# Deploy Meta Llama 3.1 405B with TGI DLC on Vertex AI

[Meta Llama 3.1](https://huggingface.co/blog/llama31) is the latest open LLM from Meta, a follow-up iteration of Llama 3, released in July 2024. Meta Llama 3.1 comes in three sizes: 8B for efficient deployment and development on consumer-size GPUs, 70B for large-scale AI-native applications, and 405B for synthetic data generation, LLM-as-a-Judge, or distillation, among other use cases. Among Meta Llama 3.1's new features, the ones to highlight are: a large context length of 128K tokens (vs. the original 8K), multilingual capabilities, tool usage capabilities, and a more permissive license.

This example showcases how to deploy [`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) on Vertex AI with an A3 accelerator-optimized instance with 8 NVIDIA H100s via the Hugging Face purpose-built Deep Learning Container (DLC) for Text Generation Inference (TGI) on Google Cloud.

!['meta-llama/Meta-Llama-3.1-405B-Instruct-FP8' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` on your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311
```

Then you need to log in to your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

Once everything is set up, you can initialize the Vertex AI session via the [`google-cloud-aiplatform`](https://github.com/googleapis/python-aiplatform) Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

### Quotas on Google Cloud

To serve [`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) you need an instance with at least 400 GiB of GPU VRAM that supports the FP8 data type, and the A3 accelerator-optimized machines are the ones to use on Google Cloud.

Even though the A3 accelerator-optimized machines with 8 x NVIDIA H100 80GB GPUs are available within Google Cloud, you will still need to request a custom quota increase, as those machines require specific approval. Note that the A3 accelerator-optimized machines are only available in some zones, so make sure to check the availability of A3 High or even A3 Mega per zone at [Compute Engine - GPU regions and zones](https://cloud.google.com/compute/docs/gpus/gpu-regions-zones).

![A3 availability in Google Cloud](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/a3-general-availability.png)

In this case, to request a quota increase to use the machine with 8 NVIDIA H100s you will need to increase the following quotas:

* `Service: Vertex AI API` and `Name: Custom model serving Nvidia H100 80GB GPUs per region` set to **8**
* `Service: Vertex AI API` and `Name: Custom model serving A3 CPUs per region` set to **208**

![A3 Quota Request in Google Cloud](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/a3-quota-request.png)

Read more on how to request a quota increase at [Google Cloud Documentation - View and manage quotas](https://cloud.google.com/docs/quotas/view-manage).

## Register model on Vertex AI

Since [`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) is a gated model, you need to log in to your Hugging Face Hub account, accept the gating requirements, and then generate an access token, either with fine-grained read access to the gated model only (recommended), or with read access to your account.

Read more about [access tokens for the Hugging Face Hub](https://huggingface.co/docs/hub/en/security-tokens).

To authenticate, you can either use the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) Python SDK as shown below (recommended), or just set the environment variable `HF_TOKEN` instead.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

Then you can "upload" the model, i.e. register it on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub by the Hugging Face DLC for TGI on startup via the `MODEL_ID` environment variable, so only the configuration is uploaded, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

- **`display_name`** is the name that will be shown in the Vertex AI Model Registry.
- **`serving_container_image_uri`** is the location of the Hugging Face DLC for TGI that will be used for serving the model.
- **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by TGI via the [`text-generation-launcher`](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher), which exposes some environment variables such as the following:
    - `MODEL_ID` the model ID on the Hugging Face Hub.
    - `NUM_SHARD` the number of shards to use i.e. the number of GPUs to use, in this case set to 8 as a node with 8 NVIDIA H100s will be used.
    - `HUGGING_FACE_HUB_TOKEN` is the Hugging Face Hub token, required as [`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) is a gated model.
    - `HF_HUB_ENABLE_HF_TRANSFER` to enable a faster download speed via the [`hf_transfer`](https://github.com/huggingface/hf_transfer) library.

For more information on the supported arguments, check [`aiplatform.Model.upload` Python reference](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload).

Starting from TGI 2.3 DLC i.e. `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311`, and onwards, you can set the environment variable value `MESSAGES_API_ENABLED="true"` to deploy the [Messages API](https://huggingface.co/docs/text-generation-inference/main/en/messages_api) on Vertex AI, otherwise, the [Generate API](https://huggingface.co/docs/text-generation-inference/main/en/quicktour#consuming-tgi) will be deployed instead.

```python
from huggingface_hub import get_token

model = aiplatform.Model.upload(
    display_name="meta-llama--Meta-Llama-3.1-405B-Instruct-FP8",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "MODEL_ID": "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
        "HUGGING_FACE_HUB_TOKEN": get_token(),
        "HF_HUB_ENABLE_HF_TRANSFER": "1",
        "NUM_SHARD": "8",
    },
)
model.wait()
```

![Meta Llama 3.1 405B FP8 registered on Vertex AI](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/vertex-ai-model.png)

## Deploy model on Vertex AI

Once Meta Llama 3.1 405B is registered on Vertex AI Model Registry, you can already deploy it on a Vertex AI Endpoint with the Hugging Face DLC for TGI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported arguments you can check [`aiplatform.Model.deploy` Python reference](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy).

As mentioned before, since Meta Llama 3.1 405B in FP8 takes ~400 GiB of disk space, that means you need at least 400 GiB of GPU VRAM to load the model, and the GPUs within the node need to support the FP8 data type. In this case, an A3 instance with 8 x NVIDIA H100 80GB with a total of ~640 GiB of VRAM will be used to load the model while also leaving some free VRAM for the KV Cache and the CUDA Graphs.

```python
deployed_model = model.deploy(
    endpoint=aiplatform.Endpoint.create(display_name="Meta-Llama-3.1-405B-FP8-Endpoint"),
    machine_type="a3-highgpu-8g",
    accelerator_type="NVIDIA_H100_80GB",
    accelerator_count=8,
    enable_access_logging=True,
)
```

[`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8) deployment on Vertex AI will take \~30 minutes to deploy, as it needs to allocate the resources on Google Cloud, and then download the weights from the Hugging Face Hub (\~10 minutes) and load those for inference in TGI (\~3 minutes).

![Meta Llama 3.1 405B Instruct FP8 deployed on Vertex AI](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/vertex-ai-endpoint.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

As `/generate` is the endpoint that is being exposed through TGI on Vertex AI, you will need to format the messages with the chat template before sending the request to Vertex AI, so you will need to install 🤗`transformers` to use the `apply_chat_template` method from the `PreTrainedTokenizerFast`.

```python
%%bash
pip install --upgrade --quiet transformers
```

And then apply the chat template to a conversation using the tokenizer as follows:

```python
import os
from huggingface_hub import get_token
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
    token=get_token(),
)

messages = [
    {"role": "system", "content": "You are an assistant that responds as a pirate."},
    {"role": "user", "content": "What's the Theory of Relativity?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are an assistant that responds as a pirate.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat's the Theory of Relativity?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n
```

This is what you will send within the payload to the deployed Vertex AI Endpoint, along with the generation parameters, as described in [Consuming Text Generation Inference (TGI) -> Generate](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).

### Via Python

#### Within the same session

If you want to run the online prediction within the current session, you can send requests programmatically via the `aiplatform.Endpoint` (returned by the `aiplatform.Model.deploy` method) as in the following snippet:

```python
output = deployed_model.predict(
    instances=[
        {
            "inputs": "system\n\nYou are an assistant that responds as a pirate.user\n\nWhat's the Theory of Relativity?assistant\n\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": True,
                "top_p": 0.95,
                "temperature": 1.0,
            },
        },
    ]
)
print(output.predictions[0])
```

Producing the following `output`:

```
Prediction(predictions=["Yer want ta know about them fancy science things, eh? Alright then, matey, settle yerself down with a pint o' grog and listen close. I be tellin' ye about the Theory o' Relativity, as proposed by that swashbucklin' genius, Albert Einstein.\n\nNow, ye see, Einstein said that time and space be connected like the sea and the wind. Ye can't have one without the other, savvy? And he proposed that how ye see time and space depends on how fast ye be movin' and where ye be standin'. That be called relativity, me"], deployed_model_id='***', metadata=None, model_version_id='1', model_resource_name='projects/***/locations/us-central1/models/***', explanations=None)
```

#### From a different session

If the Vertex AI Endpoint was deployed in a different session and you no longer have access to the `deployed_model` variable returned by the `aiplatform.Model.deploy` method as in the previous section, you can also run the following snippet to instantiate the deployed `aiplatform.Endpoint` via its resource name, i.e. `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}`.

You will need to either retrieve the resource name, i.e. the `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}` URL, yourself via the Google Cloud Console, or replace the `ENDPOINT_ID` below, which can be found either via the previously instantiated `endpoint` as `endpoint.id` or via the Google Cloud Console under Online predictions, where the endpoint is listed.

```python
import os
from google.cloud import aiplatform

aiplatform.init(project=os.getenv("PROJECT_ID"), location=os.getenv("LOCATION"))

endpoint_display_name = "Meta-Llama-3.1-405B-FP8-Endpoint"  # TODO: change to your endpoint display name

# Iterates over all the Vertex AI Endpoints within the current project and keeps the first match (if any), otherwise set to None
ENDPOINT_ID = next(
    (endpoint.name for endpoint in aiplatform.Endpoint.list() 
     if endpoint.display_name == endpoint_display_name), 
    None
)
assert ENDPOINT_ID, (
    "`ENDPOINT_ID` is not set, please make sure that the `endpoint_display_name` is correct at "\
    f"https://console.cloud.google.com/vertex-ai/online-prediction/endpoints?project={os.getenv('PROJECT_ID')}"
)

endpoint = aiplatform.Endpoint(f"projects/{os.getenv('PROJECT_ID')}/locations/{os.getenv('LOCATION')}/endpoints/{ENDPOINT_ID}")
output = endpoint.predict(
    instances=[
        {
            "inputs": "system\n\nYou are an assistant that responds as a pirate.user\n\nWhat's the Theory of Relativity?assistant\n\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": True,
                "top_p": 0.95,
                "temperature": 0.7,
            },
        },
    ],
)
print(output.predictions[0])
```

Producing the following `output`:

```
Prediction(predictions=["Yer lookin' fer a treasure trove o' knowledge about them fancy physics, eh? Alright then, matey, settle yerself down with a pint o' grog and listen close, as I spin ye the yarn o' Einstein's Theory o' Relativity.\n\nIt be a tale o' two parts, me hearty: Special Relativity and General Relativity. Now, I know what ye be thinkin': what in blazes be the difference? Well, matey, let me break it down fer ye.\n\nSpecial Relativity be the idea that time and space be connected like the sea and the sky."], deployed_model_id='***', metadata=None, model_version_id='1', model_resource_name='projects/***/locations/us-central1/models/***', explanations=None)
```

### Via the Vertex AI Online Prediction UI

Alternatively, for testing purposes, you can also use the Vertex AI Online Prediction UI, which provides a field that expects the JSON payload formatted according to the Vertex AI specification (as in the examples above), such as:

```json
{
    "instances": [
        {
            "inputs": "system\n\nYou are an assistant that responds as a pirate.user\n\nWhat's the Theory of Relativity?assistant\n\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": true,
                "top_p": 0.95,
                "temperature": 0.7
            }
        }
    ]
}
```

![Meta Llama 3.1 405B Instruct FP8 online prediction on Vertex AI](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai/assets/vertex-ai-online-prediction.png)
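
The same Vertex AI payload can also be sent from the command line via the Google Cloud SDK. The snippet below is a rough sketch, assuming the JSON above is saved locally as `request.json` (a placeholder file name) and that `PROJECT_ID`, `LOCATION`, and `ENDPOINT_ID` are exported as shell environment variables:

```bash
# Sketch: send the JSON payload above (saved as request.json, a placeholder name)
# to the deployed Vertex AI Endpoint via the Google Cloud SDK.
gcloud ai endpoints predict $ENDPOINT_ID \
    --project=$PROJECT_ID \
    --region=$LOCATION \
    --json-request=request.json
```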

## Resource clean-up

Finally, you can release the resources that you've created as follows, to avoid unnecessary costs:

- `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
- `deployed_model.delete` to gracefully delete the endpoint/s where the model was deployed, after the `undeploy_all` method.
- `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```

Alternatively, you can also remove those resources from the Google Cloud Console following these steps:

- Go to Vertex AI in Google Cloud
- Go to Deploy and use -> Online prediction
- Click on the endpoint and then on the deployed model/s to "Undeploy model from endpoint"
- Then go back to the endpoint list and remove the endpoint
- Finally, go to Deploy and use -> Model Registry, and remove the model
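
If you prefer the command line over the Console for the clean-up, a rough `gcloud` equivalent is sketched below; the `ENDPOINT_ID`, `DEPLOYED_MODEL_ID` and `MODEL_ID` values are placeholders that you would first look up with the `list` commands, and `LOCATION` is assumed to be exported in your shell:

```bash
# Sketch: look up the resource IDs first (the values below are placeholders).
gcloud ai endpoints list --region=$LOCATION
gcloud ai models list --region=$LOCATION

# Undeploy the model from the endpoint, then delete the endpoint and the model.
gcloud ai endpoints undeploy-model ENDPOINT_ID --deployed-model-id=DEPLOYED_MODEL_ID --region=$LOCATION
gcloud ai endpoints delete ENDPOINT_ID --region=$LOCATION
gcloud ai models delete MODEL_ID --region=$LOCATION
```
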
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-llama-3-1-405b-on-vertex-ai)!

### Deploy Llama 3.1 8B with TGI DLC on Cloud Run
https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-llama-3-1-on-cloud-run.md

# Deploy Llama 3.1 8B with TGI DLC on Cloud Run

Llama 3.1 is the latest open LLM from Meta, released in July 2024. Llama 3.1 comes in three sizes: 8B for efficient deployment and development on consumer-sized GPUs, 70B for large-scale AI native applications, and 405B for synthetic data, LLM as a Judge or distillation, among other use cases. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. Google Cloud Run is a serverless container platform that allows developers to deploy and manage containerized applications without managing infrastructure, enabling automatic scaling and billing only for usage.

This example showcases how to deploy an LLM from the Hugging Face Hub, in this case Llama 3.1 8B Instruct model quantized to INT4 using AWQ, with the Hugging Face DLC for TGI on Google Cloud Run with GPU support ([in preview](https://cloud.google.com/products#product-launch-stages)).

To access GPU on Cloud Run, [request a quota increase](https://cloud.google.com/run/quotas#increase) for `Total Nvidia L4 GPU allocation, per project per region`. At the time of writing this example, NVIDIA L4 GPUs (24GiB VRAM) are the only available GPUs on Cloud Run; enabling automatic scaling up to 7 instances by default (more available via quota), as well as scaling down to zero instances when there are no requests.

## Setup / Configuration

First, you need to install `gcloud` on your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Optionally, to ease the usage of the commands within this tutorial, set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=us-central1 # or any location where Cloud Run offers GPUs: https://cloud.google.com/run/docs/locations#gpu
export CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310
export SERVICE_NAME=text-generation-inference
```

Then you need to log in to your Google Cloud account and set the project ID you want to use to deploy Cloud Run.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the Cloud Run API, which is required for the Hugging Face DLC for TGI deployment on Cloud Run.

```bash
gcloud services enable run.googleapis.com
```
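
Optionally, to double-check that the API has been enabled before moving on, you can list the enabled services and look for it (a quick sanity check, not a required step):

```bash
gcloud services list --enabled | grep run.googleapis.com
```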

## Deploy TGI on Cloud Run

Once you are all set, you can call the `gcloud beta run deploy` command (still in beta, because GPU support is in preview, as mentioned above).

The `gcloud beta run deploy` command needs you to specify the following parameters:

- `--image`: The container image URI to deploy.
- `--args`: The arguments to pass to the container entrypoint, being `text-generation-launcher` for the Hugging Face DLC for TGI. Read more about the supported arguments at [Text-generation-launcher arguments](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher).
  - `--model-id`: The model ID to use, in this case, [`hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4`](https://huggingface.co/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4).
  - `--quantize`: The quantization method to use, in this case, `awq`. If not specified, it will be retrieved from the `quantization_config->quant_method` in the `config.json` file.
- `--port`: The port the container listens to.
- `--cpu` and `--memory`: The number of CPUs and the amount of memory to allocate to the container. These need to be set to 4 and 16Gi (16 GiB), respectively, as that's the minimum requirement for using the GPU.
- `--no-cpu-throttling`: Disables CPU throttling, which is required for using the GPU.
- `--gpu` and `--gpu-type`: The number of GPUs and the GPU type to use. These need to be set to 1 and `nvidia-l4`, respectively, since at the time of writing this tutorial those are the only available options, as GPU support on Cloud Run is still in preview.
- `--max-instances`: The maximum number of instances to run, set to 3 here, while the default maximum value is 7. Alternatively, you could set it to 1, but that could eventually lead to downtime during infrastructure migrations, so anything above 1 is recommended.
- `--concurrency`: The maximum number of concurrent requests per instance, set to 64. The value is not arbitrary, but determined after running and evaluating the results of [`text-generation-benchmark`](https://github.com/huggingface/text-generation-inference/tree/main/benchmark) as the best balance between throughput and latency, since the current TGI default of 128 is a bit too high. Note that this value is also aligned with the [`--max-concurrent-requests`](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher#maxconcurrentrequests) argument in TGI.
- `--region`: The region to deploy the Cloud Run service in.
- `--no-allow-unauthenticated`: Disables unauthenticated access to the service, which is a good practice as it adds an authentication layer managed by Google Cloud IAM.

```bash
gcloud beta run deploy $SERVICE_NAME \
    --image=$CONTAINER_URI \
    --args="--model-id=hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4,--quantize=awq,--max-concurrent-requests=64" \
    --port=8080 \
    --cpu=4 \
    --memory=16Gi \
    --no-cpu-throttling \
    --gpu=1 \
    --gpu-type=nvidia-l4 \
    --max-instances=3 \
    --concurrency=64 \
    --region=$LOCATION \
    --no-allow-unauthenticated
```

![Cloud Run Deployment](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/cloud-run/deploy-llama-3-1-on-cloud-run/imgs/cloud-run-deployment.png)

![Cloud Run Deployment Details](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/cloud-run/deploy-llama-3-1-on-cloud-run/imgs/cloud-run-details.png)

The first time you deploy a new container on Cloud Run, it will take around 5 minutes, as the image needs to be imported from the Google Cloud Artifact Registry; follow-up deployments will take less time, as the image has already been imported.
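
At any point you can also inspect the status of the deployed service (revisions, traffic, URL, and readiness conditions) with `gcloud`, which is a convenient way to confirm that the latest revision is ready and serving:

```bash
gcloud run services describe $SERVICE_NAME --region $LOCATION
```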

## Inference on Cloud Run

Once deployed, you can send requests to the service via any of the supported TGI endpoints; check [TGI's OpenAPI Specification](https://huggingface.github.io/text-generation-inference/) to see all the available endpoints and their respective parameters.

All Cloud Run services are deployed privately by default, which means that they can't be accessed without providing authentication credentials in the request headers. These services are secured by IAM and are only callable by Project Owners, Project Editors, and Cloud Run Admins and Cloud Run Invokers.

In this case, a couple of alternatives for enabling developer access will be showcased; the remaining use cases are out of the scope of this example, as they are either insecure due to authentication being disabled (for public access scenarios), or require additional setup for production-ready scenarios ([service-to-service authentication](https://cloud.google.com/run/docs/authenticating/service-to-service), [end-user access](https://cloud.google.com/run/docs/authenticating/end-users)).

The alternatives mentioned below are for development scenarios, and should not be used in production-ready scenarios as is. The approach below follows the guide defined in [Cloud Run Documentation - Authenticate Developers](https://cloud.google.com/run/docs/authenticating/developers); you can find the other guides mentioned above in [Cloud Run Documentation - Authentication overview](https://cloud.google.com/run/docs/authenticating/overview).

### Via Cloud Run Proxy

Cloud Run Proxy runs a server on localhost that proxies requests to the specified Cloud Run Service with credentials attached; which is useful for testing and experimentation.

```bash
gcloud run services proxy $SERVICE_NAME --region $LOCATION
```

Then you can send requests to the deployed service on Cloud Run using the URL exposed by the proxy, with no authentication required, as shown in the examples below.

Note that the examples below use the `/v1/chat/completions` TGI endpoint, which is OpenAI-compatible, meaning that both `cURL` and Python are just some of the options, and any OpenAI-compatible client can be used instead.

#### cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "tgi",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

#### Python

To run the inference using Python, you can either use the `huggingface_hub` Python SDK (recommended) or the `openai` Python SDK.

##### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
  model="hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)
```

##### `openai`

You can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key="-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```

### (recommended) Via Cloud Run Service URL

Each Cloud Run Service has a unique URL assigned that can be used to send requests from anywhere, using Google Cloud credentials with Cloud Run Invoker access to the service; this is the recommended approach as it's more secure and consistent than using the Cloud Run Proxy.

The URL of the Cloud Run service can be obtained via the following command (assigned to the `SERVICE_URL` variable for convenience):

```bash
SERVICE_URL=$(gcloud run services describe $SERVICE_NAME --region $LOCATION --format 'value(status.url)')
```

Then you can send requests to the deployed service on Cloud Run, using the `SERVICE_URL` and any Google Cloud credentials with Cloud Run Invoker access. There are multiple approaches to set up the credentials; some of them are listed below:

- Using the default identity token from the Google Cloud SDK:

  - Via `gcloud` as:

  ```bash
  gcloud auth print-identity-token
  ```

  - Via Python as:

  ```python
  import google.auth
  from google.auth.transport.requests import Request as GoogleAuthRequest

  auth_req = GoogleAuthRequest()
  creds, _ = google.auth.default()
  creds.refresh(auth_req)

  id_token = creds.id_token
  ```

- Using a Service Account with Cloud Run Invoker access, which can be done with any of the following approaches:
  - Create a Service Account before the Cloud Run Service was created, and then set the `--service-account` flag to the Service Account email when creating the Cloud Run Service. And use an Access Token for that Service Account only using `gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT_EMAIL`.
  - Create a Service Account after the Cloud Run Service was created, and then update the Cloud Run Service to use the Service Account. And use an Access Token for that Service Account only using `gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT_EMAIL`.

The recommended approach is to use a Service Account (SA), as access can be controlled better and the permissions are more granular. Since the Cloud Run Service was not created with a SA (which is another nice option), you now need to create the SA, grant it the necessary permissions, update the Cloud Run Service to use the SA, and then generate an access token to use as the authentication token within the requests, which can be revoked later once you are done.

- Set the `SERVICE_ACCOUNT_NAME` environment variable for convenience:

  ```bash
  export SERVICE_ACCOUNT_NAME=tgi-invoker
  ```

- Create the Service Account:

  ```bash
  gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME
  ```

- Grant the Service Account the Cloud Run Invoker role:

  ```bash
  gcloud run services add-iam-policy-binding $SERVICE_NAME \
      --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/run.invoker" \
      --region=$LOCATION
  ```

- Generate the Access Token for the Service Account:

  ```bash
  export ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com)
  ```

The access token is short-lived and will expire, by default after 1 hour. If you want to extend the token lifetime beyond the default, you must create an organization policy and use the `--lifetime` argument when creating the token. Refer to [Access token lifetime](https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#extend_oauth_ttl) to learn more. Otherwise, you can also generate a new token by running the same command again.

Now you can dive into the different alternatives for sending the requests to the deployed Cloud Run Service using the `SERVICE_URL` and the `ACCESS_TOKEN` as described above.

Note that the examples below use the `/v1/chat/completions` TGI endpoint, which is OpenAI-compatible, meaning that both `cURL` and Python are just some of the options, and any OpenAI-compatible client can be used instead.

#### cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl $SERVICE_URL/v1/chat/completions \
    -X POST \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "tgi",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

#### Python

To run the inference using Python, you can either use the `huggingface_hub` Python SDK (recommended) or the `openai` Python SDK.

##### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run:

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    base_url=os.getenv("SERVICE_URL"),
    api_key=os.getenv("ACCESS_TOKEN"),
)

chat_completion = client.chat.completions.create(
  model="hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)
```

##### `openai`

You can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("SERVICE_URL"),
    api_key=os.getenv("ACCESS_TOKEN"),
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```

## Resource clean up

Finally, once you are done using TGI on the Cloud Run Service, you can safely delete it to avoid incurring unnecessary costs, e.g. if the Cloud Run service is inadvertently invoked more times than your monthly Cloud Run invocation allocation in the free tier.

To delete the Cloud Run Service, you can either go to the Cloud Run page in the Google Cloud Console and delete it manually, or use the Google Cloud SDK via `gcloud` as follows:

```bash
gcloud run services delete $SERVICE_NAME --region $LOCATION
```

Additionally, if you followed the steps in [via Cloud Run Service URL](#via-cloud-run-service-url) and generated a Service Account and an access token, you can either remove the Service Account, or just revoke the access token if it is still valid.

- (recommended) Revoke the Access Token as

```bash
gcloud auth revoke --impersonate-service-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
```

- (optional) Delete the Service Account as:

```bash
gcloud iam service-accounts delete $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
```

## References

- [Cloud Run Documentation - Overview](https://cloud.google.com/run/docs)
- [Cloud Run Documentation - GPU services](https://cloud.google.com/run/docs/configuring/services/gpu)
- [Google Cloud Blog - Run your AI inference applications on Cloud Run with NVIDIA GPUs](https://cloud.google.com/blog/products/application-development/run-your-ai-inference-applications-on-cloud-run-with-nvidia-gpus)

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-llama-3-1-on-cloud-run)!

### Deploy Meta Llama 3 8B with TGI DLC on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tgi-deployment.md

# Deploy Meta Llama 3 8B with TGI DLC on GKE

Meta Llama 3 is the latest LLM from the Llama family, released by Meta; it comes in two sizes, 8B and 70B, including both the base model and the instruction-tuned model. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to deploy an LLM from the Hugging Face Hub, in this case Meta Llama 3 8B Instruct, on a GKE Cluster running a purpose-built container to deploy LLMs in a secure and managed environment with the Hugging Face DLC for TGI.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; to read more about the alternative installation methods, please visit [GKE Documentation - Install kubectl and configure cluster access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin).

## Create GKE Cluster

Once everything's set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use the GPU accelerator for high performance inference, also following TGI recommendations based on their internal optimizations for GPUs.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all the versions support GPU accelerators, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version of the GKE Cluster available in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information, please visit [GKE Documentation - Specifying cluster version](https://cloud.google.com/kubernetes-engine/versioning#specifying_cluster_version).

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```
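
To confirm that `kubectl` is now pointing at the newly created cluster, you can run a quick sanity check such as the following (optional; both commands only read the current configuration and cluster state):

```bash
kubectl config current-context
kubectl get namespaces
```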

## Optional: Set Secrets in GKE

As [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) is a gated model, you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions at [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

And then log in with the generated token with read access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated token for the Hugging Face Hub as follows using the `huggingface_hub` Python SDK to retrieve the token:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml | kubectl apply -f -
```
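
To verify that the secret was created as expected (the token value itself is stored base64-encoded and is not printed by `describe`), you can run:

```bash
kubectl get secret hf-secret
kubectl describe secret hf-secret
```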

![GKE Secret in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-deployment/imgs/gke-secrets.png)

More information on how to set Kubernetes secrets in a GKE Cluster at [Secret Manager Documentation - Use Secret Manager add-on with Google Kubernetes Engine](https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component).

## Deploy TGI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TGI, serving the [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) model from the Hugging Face Hub.

To explore all the models that can be served via TGI, you can explore [the models tagged with `text-generation-inference` in the Hub](https://huggingface.co/models?other=text-generation-inference).

The Hugging Face DLC for TGI will be deployed via `kubectl`, from the configuration files in the [`config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment/config/) directory:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment/config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TGI setting the `MODEL_ID` to [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment/config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TGI service.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment/config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tgi-deployment/config
```

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-deployment/imgs/gke-deployment.png)

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s deployment/tgi-deployment
```
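
Additionally, if you want to follow the progress while the model weights are downloaded and TGI starts up, you can stream the logs of the deployment (the deployment name below matches the one defined in `deployment.yaml`):

```bash
kubectl logs -f deployment/tgi-deployment
```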

## Inference with TGI

To run the inference over the deployed TGI service, you can either:

- Port-forward the deployed TGI service to port 8080, so that it can be accessed via `localhost`, with the command:

  ```bash
  kubectl port-forward service/tgi-service 8080:8080
  ```

- Access the TGI service via the external IP of the ingress, which is the default scenario here since the ingress configuration was defined in the `config/ingress.yaml` file (but it can be skipped in favour of port-forwarding); the IP can be retrieved with the following command:

  ```bash
  kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

### Via cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/generate \
    -X POST \
    -d '{"inputs":"system\n\nYou are a helpful assistant.user\n\nWhat is 2+2?assistant\n\n","parameters":{"temperature":0.7, "top_p": 0.95, "max_new_tokens": 128}}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead:

```bash
curl http://$(kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/generate \
    -X POST \
    -d '{"inputs":"system\n\nYou are a helpful assistant.user\n\nWhat is 2+2?assistant\n\n","parameters":{"temperature":0.7, "top_p": 0.95, "max_new_tokens": 128}}' \
    -H 'Content-Type: application/json'
```

Which produces the following output:

```
{"generated_text":"The answer to 2+2 is 4."}
```

To generate the `inputs` with the expected chat template formatting, you could use the following snippet:

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
```

### Via Python

To run the inference using Python, you can use the `openai` Python SDK (installable via `pip` as `pip install --upgrade openai`), setting either the localhost or the ingress IP as the `base_url` for the client, and then running the following code:

```python
from huggingface_hub import get_token
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key=get_token() or "-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    max_tokens=128,
)
```

Which produces the following output:

```
ChatCompletion(id='', choices=[Choice(finish_reason='eos_token', index=0, logprobs=None, message=ChatCompletionMessage(content='The answer to 2+2 is 4!', role='assistant', function_call=None, tool_calls=None))], created=1718108522, model='meta-llama/Meta-Llama-3-8B-Instruct', object='text_completion', system_fingerprint='2.0.2-sha-6073ece', usage=CompletionUsage(completion_tokens=12, prompt_tokens=28, total_tokens=40))
```

## Delete GKE Cluster

Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you can also downscale the replicas of the deployed pod to 0 in case you want to preserve the cluster, since the default GKE Cluster deployed with GKE Autopilot mode is running just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 deployment/tgi-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment)!

### Deploy Llama 3.1 405B with TGI DLC on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tgi-llama-405b-deployment.md

# Deploy Llama 3.1 405B with TGI DLC on GKE

[Llama 3.1](https://huggingface.co/blog/llama31) is one of the latest LLMs from the Llama family released by Meta (the latest being Llama 3.2 as of October 2024); it comes in three sizes: 8B for efficient deployment and development on consumer-sized GPUs, 70B for large-scale AI native applications, and 405B for synthetic data, LLM as a Judge or distillation, among other use cases; the 405B variant is one of the biggest open LLMs released to date. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using Google infrastructure.

This example showcases how to deploy [`meta-llama/Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Llama-3.1-405B-Instruct-FP8) on a GKE Cluster on a node with 8 NVIDIA H100s via the Hugging Face purpose-built Deep Learning Container (DLC) for Text Generation Inference (TGI) on Google Cloud.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; to read more about the alternative installation methods, please visit [GKE Documentation - Install kubectl and configure cluster access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin).

Finally, note that you most likely will need to request a quota increase in order to be able to access the A3 instance with 8 NVIDIA H100 GPUs, as those need a specific manual approval from Google Cloud. To do so you will need to go to [IAM Admin - Quotas](https://console.cloud.google.com/iam-admin/quotas) and apply the following filters:

- `Service: Compute Engine API`: as GKE relies on Compute Engine for the resource allocation.

- `Dimensions (e.g. location): region: $LOCATION`: replace the `$LOCATION` value with the location specified above, but note that not all the regions may have NVIDIA H100 GPUs available so check [Compute Engine Documentation - Available regions and zones](https://cloud.google.com/compute/docs/regions-zones#available).

- `gpu_family: NVIDIA_H100`: is the identifier of the NVIDIA H100 GPUs on Google Cloud.

And then request a quota increase to 8 NVIDIA H100 GPUs in order to run [`meta-llama/Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Llama-3.1-405B-Instruct-FP8).

## Create GKE Cluster

Once everything's set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use the GPU accelerator for high performance inference, also following TGI recommendations based on their internal optimizations for GPUs.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all the versions support GPU accelerators, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.29 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version of the GKE Cluster available in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information please visit [GKE Documentation - Specifying cluster version](https://cloud.google.com/kubernetes-engine/versioning#specifying_cluster_version).

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-405b-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Get Hugging Face token and set secrets in GKE

As [`meta-llama/Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Llama-3.1-405B-Instruct-FP8) is a gated model, you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions at [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

And then log in with the generated token with read access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated token for the Hugging Face Hub as follows using the `huggingface_hub` Python SDK to retrieve the token:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml | kubectl apply -f -
```

![GKE Secret in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-405b-deployment/imgs/gke-secrets.png)

More information on how to set Kubernetes secrets in a GKE Cluster at [Secret Manager Documentation - Use Secret Manager add-on with Google Kubernetes Engine](https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component).

## Deploy TGI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TGI, serving the [`meta-llama/Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Llama-3.1-405B-Instruct-FP8) model from the Hugging Face Hub.

To explore all the models that can be served via TGI, you can explore [the models tagged with `text-generation-inference` in the Hub](https://huggingface.co/models?other=text-generation-inference).

The Hugging Face DLC for TGI will be deployed via `kubectl`, from the configuration files in the [`config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment/config/) directory:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment/config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TGI setting the `MODEL_ID` to [`meta-llama/Llama-3.1-405B-Instruct-FP8`](https://hf.co/meta-llama/Llama-3.1-405B-Instruct-FP8).
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment/config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TGI service.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment/config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tgi-llama-405b-deployment/config
```

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s deployment/tgi-deployment
```
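
Since Autopilot needs to provision an A3 node with 8 x NVIDIA H100 GPUs and TGI then has to download and load the ~400 GiB of FP8 weights, this deployment can take a while; in the meantime, you can keep an eye on the pod scheduling and the TGI logs with:

```bash
# Watch the pod until it goes from Pending to Running
kubectl get pods --watch

# Stream the TGI logs once the container has started
kubectl logs -f deployment/tgi-deployment
```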

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-405b-deployment/imgs/gke-deployment.png)

![GKE Deployment Logs in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-405b-deployment/imgs/gke-deployment-logs.png)

## Inference with TGI

To run the inference over the deployed TGI service, you can either:

- Port-forward the deployed TGI service to port 8080, so that it can be accessed via `localhost`, with the command:

  ```bash
  kubectl port-forward service/tgi-service 8080:8080
  ```

- Access the TGI service via the external IP of the ingress, which is the default scenario here since the ingress configuration was defined in the `config/ingress.yaml` file (but it can be skipped in favour of port-forwarding); the IP can be retrieved with the following command:

  ```bash
  kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

### Via cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"system","content": "You are a helpful assistant."},{"role":"user","content":"What'\''s Deep Learning?"}],"temperature":0.7,"top_p":0.95,"max_tokens":128}}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead (without specifying the port as it's not needed):

```bash
curl http://$(kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":"What'\''s Deep Learning?"}],"temperature":0.7,"top_p":0.95,"max_tokens":128}}' \
    -H 'Content-Type: application/json'
```

Which generates the following output:

```
{"object":"chat.completion","id":"","created":1727782287,"model":"meta-llama/Llama-3.1-405B-Instruct-FP8","system_fingerprint":"2.2.0-native","choices":[{"index":0,"message":{"role":"assistant","content":"Deep learning is a subset of machine learning, which is a field of artificial intelligence (AI) that enables computers to learn from data without being explicitly programmed. It's a type of neural network that's inspired by the structure and function of the human brain.\n\nIn traditional machine learning, computers are trained on data using algorithms that are designed to recognize patterns and make predictions. However, these algorithms are often limited in their ability to handle complex data, such as images, speech, and text.\n\nDeep learning, on the other hand, uses multiple layers of artificial neural networks to analyze data. Each layer processes the data in a different way, allowing the"},"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":46,"completion_tokens":128,"total_tokens":174}}
```

### Via Python

To run the inference using Python, you can either use the [`huggingface_hub` Python SDK](https://github.com/huggingface/huggingface_hub) (recommended) or the [`openai` Python SDK](https://github.com/openai/openai-python).

In the examples below `localhost` will be used, but if you did deploy TGI with the ingress, feel free to use the ingress IP as mentioned above (without specifying the port).

#### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run the following snippet to mimic the `cURL` commands above i.e. sending requests to the Messages API:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's Deep Learning?"},
    ],
    max_tokens=128,
)
```

Which generates the following output:

```
ChatCompletionOutput(choices=[ChatCompletionOutputComplete(finish_reason='length', index=0, message=ChatCompletionOutputMessage(role='assistant', content='Deep learning is a subset of machine learning that focuses on neural networks with many layers, typically more than two. These neural networks are designed to mimic the structure and function of the human brain, with each layer processing and transforming inputs in a hierarchical manner.\n\nIn traditional machine learning, models are trained using a set of predefined features, such as edges, textures, or shapes. In contrast, deep learning models learn to extract features from raw data automatically, without the need for manual feature engineering.\n\nDeep learning models are trained using large amounts of data and computational power, which enables them to learn complex patterns and relationships in the data. These models can be', tool_calls=None), logprobs=None)], created=1727782322, id='', model='meta-llama/Llama-3.1-405B-Instruct-FP8', system_fingerprint='2.2.0-native', usage=ChatCompletionOutputUsage(completion_tokens=128, prompt_tokens=46, total_tokens=174))
```

Alternatively, you can also format the prompt yourself and send that via the Text Generation API:

```python
from huggingface_hub import InferenceClient, get_token
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-405B-Instruct-FP8", token=get_token())
client = InferenceClient("http://localhost:8080", api_key="-")

generation = client.text_generation(
    prompt=tokenizer.apply_chat_template(
        [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What's Deep Learning?"},
        ],
        tokenize=False,
        add_generation_prompt=True,
    ),
    max_new_tokens=128,
)
```

Which generates the following output:

```
'Deep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. Inspired by the structure and function of the human brain, deep learning algorithms are designed to learn and improve on their own by automatically adjusting the connections between nodes or "neurons" in the network.\n\nIn traditional machine learning, algorithms are trained on a set of data and then use that training to make predictions or decisions on new, unseen data. However, these algorithms often rely on hand-engineered features and rules to extract relevant information from the data. In contrast, deep learning algorithms can automatically learn to extract relevant features and patterns from the'
```

#### `openai`

Additionally, you can also use the Messages API via `openai`; you can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key="-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's Deep Learning?"},
    ],
    max_tokens=128,
)
```

Which generates the following output:

```
ChatCompletion(id='', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content='Deep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and interpret data. Inspired by the structure and function of the human brain, deep learning algorithms are designed to learn and improve on their own by automatically adjusting the connections between nodes or "neurons" in the network.\n\nIn traditional machine learning, algorithms are trained using a set of predefined rules and features. In contrast, deep learning algorithms learn to identify patterns and features from the data itself, eliminating the need for manual feature engineering. This allows deep learning models to be highly accurate and efficient, especially when dealing with large and complex datasets.\n\nKey characteristics of deep', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1727782478, model='meta-llama/Llama-3.1-405B-Instruct-FP8', object='chat.completion', service_tier=None, system_fingerprint='2.2.0-native', usage=CompletionUsage(completion_tokens=128, prompt_tokens=46, total_tokens=174))
```

## Delete GKE Cluster

Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you can also downscale the replicas of the deployed pod to 0 in case you want to preserve the cluster, since the default GKE Cluster deployed with GKE Autopilot mode is running just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 deployment/tgi-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment)!

### Deploy BGE Base v1.5 with TEI DLC from GCS on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tei-from-gcs-deployment.md

# Deploy BGE Base v1.5 with TEI DLC from GCS on GKE

BGE, standing for BAAI General Embedding, is a collection of embedding models released by BAAI; BGE Base v1.5 is an English base model for general embedding tasks, ranked in the MTEB Leaderboard. Text Embeddings Inference (TEI) is a toolkit developed by Hugging Face for deploying and serving open source text embeddings and sequence classification models, enabling high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to deploy a text embedding model from a Google Cloud Storage (GCS) Bucket on a GKE Cluster running a purpose-built container to deploy text embedding models in a secure and managed environment with the Hugging Face DLC for TEI.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you can set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
export BUCKET_NAME=hf-models-gke-bucket
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TEI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; please refer to the GKE documentation for alternative installation methods.

## Create GKE Cluster

Once everything is set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single CPU node, since CPU inference is enough to serve most text embedding models for the majority of workloads, even though they could benefit considerably from GPU serving.

CPU is being used to run the inference on top of the text embeddings models to showcase the current capabilities of TEI, but switching to GPU is as easy as replacing `spec.containers[0].image` with `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-4.ubuntu2204`, and then updating the requested resources, as well as the `nodeSelector` requirements in the `deployment.yaml` file. For more information, please refer to the [`gpu-config`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/gpu-config/) directory that contains a pre-defined configuration for GPU serving in TEI with an NVIDIA Tesla T4 GPU (with a compute capability of 7.5 i.e. natively supported in TEI).

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all cluster versions support every CPU series. The same applies to GPU support, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version in your location of the GKE Cluster, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information about the available GKE versions and release channels, please refer to the GKE documentation.

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tei-from-gcs-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Optional: Upload a model from the Hugging Face Hub to GCS

This is an optional step in the tutorial, since you may want to reuse a model that already exists in a GCS Bucket; if that is the case, feel free to jump to the next step of the tutorial on how to configure the IAM for GCS so that you can access the bucket from a pod in the GKE Cluster.

Otherwise, to upload a model from the Hugging Face Hub to a GCS Bucket, you can use the script [scripts/upload_model_to_gcs.sh](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/../../scripts/upload_model_to_gcs.sh), which will download the model from the Hugging Face Hub and upload it to the GCS Bucket (and create the bucket if not created already).

The `gsutil` component should be installed via `gcloud`, and the Python packages `huggingface_hub` (with the `hf_transfer` extra) and `crcmod` should also be installed.

```bash
gcloud components install gsutil
pip install --upgrade --quiet "huggingface_hub[hf_transfer]" crcmod
```

Then, you can run the script to download the model from the Hugging Face Hub and then upload it to the GCS Bucket:

Make sure to set the proper permissions to run the script i.e. `chmod +x ./scripts/upload_model_to_gcs.sh`.

```bash
./scripts/upload_model_to_gcs.sh --model-id BAAI/bge-base-en-v1.5 --gcs gs://$BUCKET_NAME/bge-base-en-v1.5
```
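
If you prefer to do the same from Python instead of the shell script, a rough equivalent sketch is shown below; it assumes the GCS Bucket already exists and that the `google-cloud-storage` package is installed (the script above remains the reference approach):

```python
import os
from pathlib import Path

from google.cloud import storage  # requires `pip install google-cloud-storage`
from huggingface_hub import snapshot_download

# Download the model snapshot from the Hugging Face Hub to a local directory
local_dir = snapshot_download(repo_id="BAAI/bge-base-en-v1.5", local_dir="bge-base-en-v1.5")

# Upload every file in the snapshot under gs://$BUCKET_NAME/bge-base-en-v1.5
client = storage.Client()
bucket = client.bucket(os.environ["BUCKET_NAME"])  # assumes the bucket already exists
for path in Path(local_dir).rglob("*"):
    if path.is_file():
        blob = bucket.blob(f"bge-base-en-v1.5/{path.relative_to(local_dir)}")
        blob.upload_from_filename(str(path))
```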

![GCS Bucket in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tei-from-gcs-deployment/imgs/gcs-bucket.png)

## Configure IAM for GCS

Before you proceed with the deployment of the Hugging Face DLC for TEI on the GKE Cluster, you need to set the IAM permissions for the GCS Bucket so that the pod in the GKE Cluster can access the bucket. To do so, you need to create a namespace and a service account in the GKE Cluster, and then set the IAM permissions for the GCS Bucket that contains the model, either as uploaded from the Hugging Face Hub or as already existing in the GCS Bucket.

For convenience, as the reference to both the namespace and the service account will be used within the following steps, the environment variables `NAMESPACE` and `SERVICE_ACCOUNT` will be set.

```bash
export NAMESPACE=hf-gke-namespace
export SERVICE_ACCOUNT=hf-gke-service-account
```

Then you can create the namespace and the service account in the GKE Cluster, enabling the creation of the IAM permissions for the pods in that namespace to access the GCS Bucket when using that service account.

```bash
kubectl create namespace $NAMESPACE
kubectl create serviceaccount $SERVICE_ACCOUNT --namespace $NAMESPACE
```

Then you need to add the IAM policy binding to the bucket as follows:

```bash
gcloud storage buckets add-iam-policy-binding \
    gs://$BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$SERVICE_ACCOUNT" \
    --role "roles/storage.objectUser"
```

## Deploy TEI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TEI, serving the [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) model, from a volume mounted in `/data`, copied from the GCS Bucket where the model is located.

Recently, the Hugging Face Hub team included the `text-embeddings-inference` tag in the Hub, so feel free to explore all the embedding models in the Hub that can be served via TEI.
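
As a quick way to browse those models programmatically, the `huggingface_hub` Python SDK can list models by tag; a minimal sketch, assuming the `text-embeddings-inference` tag mentioned above is used as the filter:

```python
from huggingface_hub import HfApi

api = HfApi()
# List a few models tagged for Text Embeddings Inference on the Hub
for model in api.list_models(filter="text-embeddings-inference", limit=10):
    print(model.id)
```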

The Hugging Face DLC for TEI will be deployed via `kubectl`, from the configuration files in either the [`cpu-config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/cpu-config/) or the [`gpu-config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/gpu-config/) directories depending on whether you want to use the CPU or GPU accelerators, respectively:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/cpu-config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TEI setting the `MODEL_ID` to the model path in the volume mount, in this case `/data/bge-base-en-v1.5`.
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/cpu-config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TEI service.
- [`storageclass.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/cpu-config): contains the storage class details of the pod, defining the storage class for the volume mount.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/cpu-config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tei-from-gcs-deployment/cpu-config
```

As already mentioned, for this example you will be deploying the container on a CPU node, but the configuration to deploy TEI on a GPU node is also available in the [`gpu-config`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment/gpu-config/) directory; if you want to deploy TEI on a GPU node, run `kubectl apply -f Google-Cloud-Containers/examples/gke/tei-from-gcs-deployment/gpu-config` instead of `kubectl apply -f Google-Cloud-Containers/examples/gke/tei-from-gcs-deployment/cpu-config`.

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tei-from-gcs-deployment/imgs/gke-deployment.png)

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods --namespace $NAMESPACE
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s --namespace $NAMESPACE deployment/tei-deployment
```

## Inference with TEI

To run the inference over the deployed TEI service, you can either:

- Port-forwarding the deployed TEI service to the port 8080, so as to access via `localhost` with the command:

  ```bash
  kubectl port-forward --namespace $NAMESPACE service/tei-service 8080:8080
  ```

- Accessing the TEI service via the external IP of the ingress, which is the default scenario here since you have defined the ingress configuration in either the `cpu-config/ingress.yaml` or the `gpu-config/ingress.yaml` file (though it can be skipped in favour of port-forwarding); the IP can be retrieved with the following command:

  ```bash
  kubectl get ingress --namespace $NAMESPACE tei-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

TEI exposes different inference endpoints based on the task that the model is serving:

- **Text Embeddings**: text embedding models expose the endpoint `/embed` expecting a payload with the key `inputs` which is either a string or a list of strings to be embedded.
- **Re-rank**: re-ranker models expose the endpoint `/rerank` expecting a payload with the keys `query` and `texts`, where the `query` is the reference used to rank the similarity against each text in `texts`.
- **Sequence Classification**: classic sequence classification models expose the endpoint `/predict` which expects a payload with the key `inputs` which is either a string or a list of strings to classify.
  More information is available in the Text Embeddings Inference documentation; the payload shapes above are illustrated in the sketch right after this list.
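
As a quick illustration of those payload shapes, below is a minimal Python sketch using `requests` against the port-forwarded service; note that the `/rerank` and `/predict` calls are shown only for reference, since they require a re-ranker or a sequence classification model rather than the embedding model deployed in this example:

```python
import requests

base_url = "http://localhost:8080"  # or http://<INGRESS_IP> if using the ingress

# /embed: the endpoint exposed by the embedding model deployed in this example
response = requests.post(
    f"{base_url}/embed",
    json={"inputs": ["What is Deep Learning?", "What is Machine Learning?"]},
)
print(response.json())  # one embedding vector per input string

# Payload shapes for the other tasks (these require different model types):
# requests.post(f"{base_url}/rerank", json={"query": "...", "texts": ["...", "..."]})
# requests.post(f"{base_url}/predict", json={"inputs": "..."})
```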

### Via cURL

To send a POST request to the TEI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead:

```bash
curl http://$(kubectl get ingress --namespace=$NAMESPACE tei-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```

Which produces the following output (truncated for brevity, but original tensor length is 768, which is the embedding dimension of [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) i.e. the model you are serving):

```
[[-0.01483098,0.010846359,-0.024679236,0.012507628,0.034231555,...]]
```

## Delete GKE Cluster

Finally, once you are done using TEI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, if you want to preserve the cluster, you can also scale the replicas of the deployed pod down to 0, since the default GKE Cluster deployed with GKE Autopilot mode runs just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 --namespace=$NAMESPACE deployment/tei-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment)!

### Deploy Gemma 7B with TGI DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-gemma-on-vertex-ai.md

# Deploy Gemma 7B with TGI DLC on Vertex AI

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models, developed by Google DeepMind and other teams across Google. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy any supported text-generation model, in this case [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it), from the Hugging Face Hub on Vertex AI using the TGI DLC available in Google Cloud Platform (GCP).

!['google/gemma-7b-it' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` on your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you can set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311
```

Then you need to log in to your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Register model on Vertex AI

Once everything is set up, you can already initialize the Vertex AI session via the `google-cloud-aiplatform` Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

As [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) is a gated model, you need to log in to your Hugging Face Hub account with a read-access token, either a fine-grained token with access to the gated model or a token with overall read access to your account. For more information on how to generate a read-only access token for the Hugging Face Hub, please refer to the Hugging Face Hub documentation on user access tokens.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub in the Hugging Face DLC for TGI on startup via the `MODEL_ID` environment variable, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

* **`display_name`** is the name that will be shown in the Vertex AI Model Registry.

* **`serving_container_image_uri`** is the location of the Hugging Face DLC for TGI that will be used for serving the model.

* **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by `text-generation-inference`, which are analog to the [`text-generation-launcher` arguments](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher). Additionally, the Hugging Face DLCs for TGI also capture the `AIP_` environment variables from Vertex AI as in [Vertex AI Documentation - Custom container requirements for prediction](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements).

    * `MODEL_ID` is the identifier of the model on the Hugging Face Hub. To explore all the supported models, you can check the models tagged with `text-generation-inference` on the Hub.
    * `NUM_SHARD` is the number of shards to use if you don't want to use all GPUs on a given machine e.g. if you have two GPUs but you just want to use one for TGI then `NUM_SHARD=1`, otherwise it matches the `CUDA_VISIBLE_DEVICES`.
    * `MAX_INPUT_TOKENS` is the maximum allowed input length (expressed in number of tokens), the larger it is, the larger the prompt can be, but also more memory will be consumed.
    * `MAX_TOTAL_TOKENS` is the most important value to set, as it defines the "memory budget" for running client requests: the larger this value, the more memory each request can consume, and the less effective batching can be.
    * `MAX_BATCH_PREFILL_TOKENS` limits the number of tokens for the prefill operation; since prefill takes the most memory and is compute bound, it is worth limiting the number of tokens that can be processed at once.
    * `HUGGING_FACE_HUB_TOKEN` is the Hugging Face Hub token, required as [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) is a gated model.

* (optional) **`serving_container_ports`** is the port where the Vertex AI endpoint will be exposed, by default 8080.

For more information on the supported `aiplatform.Model.upload` arguments, check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload.

Starting from TGI 2.3 DLC i.e. `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311`, and onwards, you can set the environment variable value `MESSAGES_API_ENABLED="true"` to deploy the [Messages API](https://huggingface.co/docs/text-generation-inference/main/en/messages_api) on Vertex AI, otherwise, the [Generate API](https://huggingface.co/docs/text-generation-inference/main/en/quicktour#consuming-tgi) will be deployed instead.

```python
from huggingface_hub import get_token

model = aiplatform.Model.upload(
    display_name="google--gemma-7b-it",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "MODEL_ID": "google/gemma-7b-it",
        "NUM_SHARD": "1",
        "MAX_INPUT_TOKENS": "512",
        "MAX_TOTAL_TOKENS": "1024",
        "MAX_BATCH_PREFILL_TOKENS": "1512",
        "HUGGING_FACE_HUB_TOKEN": get_token(),
    },
    serving_container_ports=[8080],
)
model.wait()
```

![Model on Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-model.png)

## Deploy model on Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="google--gemma-7b-it-endpoint")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported `aiplatform.Model.deploy` arguments, you can check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy.

```python
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```

**WARNING**: _The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes._

![Vertex AI Endpoint running the model](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-endpoint-run.png)

![Vertex AI Endpoint logs in Cloud Logging](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-endpoint-logs.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

As you are serving a `text-generation` model, you need to make sure that the chat template, if any, is applied correctly to the input conversation; meaning that `transformers` needs to be installed so as to instantiate the `tokenizer` for [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) and run the `apply_chat_template` method over the input conversation before sending the input within the payload to the Vertex AI endpoint.

```python
!pip install --upgrade --quiet transformers
```

After the installation is complete, the following snippet will apply the chat template to the conversation:

```python
from huggingface_hub import get_token
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it", token=get_token())

messages = [
    {"role": "user", "content": "What's Deep Learning?"},
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# <bos><start_of_turn>user\nWhat's Deep Learning?<end_of_turn>\n<start_of_turn>model\n
```

This is what you will be sending within the payload to the deployed Vertex AI Endpoint, along with the generation parameters supported by TGI.

### Via Python

#### Within the same session

If you are willing to run the online prediction within the current session, you can send requests programmatically via the `aiplatform.Endpoint` (returned by the `aiplatform.Model.deploy` method) as in the following snippet:

```python
output = deployed_model.predict(
    instances=[
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 256, "do_sample": True,
                "top_p": 0.95, "temperature": 1.0,
            },
        },
    ]
)
print(output.predictions[0])
```

Producing the following `output`:

```
Prediction(predictions=['\n\nDeep learning is a type of machine learning that uses artificial neural networks to learn from large amounts of data, making it a powerful tool for various tasks, including image recognition, natural language processing, and speech recognition.\n\n**Key Concepts:**\n\n* **Artificial Neural Networks (ANNs):** Structures that mimic the interconnected neurons in the brain.\n* **Deep Learning Architectures:** Multi-layered ANNs that learn hierarchical features from data.\n* **Transfer Learning:** Reusing learned features from one task to improve performance on another.\n\n**Types of Deep Learning:**\n\n* **Supervised Learning:** Models are trained on labeled data, where inputs are paired with corresponding outputs.\n* **Unsupervised Learning:** Models learn patterns from unlabeled data, such as clustering or dimensionality reduction.\n* **Reinforcement Learning:** Models learn through trial-and-error by interacting with an environment to optimize a task.\n\n**Benefits:**\n\n* **High Accuracy:** Deep learning models can achieve high accuracy on complex tasks.\n* **Adaptability:** Deep learning models can adapt to new data and tasks.\n* **Scalability:** Deep learning models can handle large amounts of data.\n\n**Applications:**\n\n* Image recognition\n* Natural language processing (NLP)\n'], deployed_model_id='***', metadata=None, model_version_id='1', model_resource_name='projects/***/locations/us-central1/models/***', explanations=None)
```

![Vertex AI Endpoint logs in Cloud Logging after predict](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-endpoint-logs-predict.png)

#### From a different session

If the Vertex AI Endpoint was deployed in a different session and you want to use it but don't have access to the `deployed_model` variable returned by the `aiplatform.Model.deploy` method as in the previous section, you can run the following snippet to instantiate the deployed `aiplatform.Endpoint` via its resource name, i.e. `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}`.

You will need to either retrieve the full resource name, i.e. `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}`, via the Google Cloud Console, or just replace the `ENDPOINT_ID` below, which can be found either via the previously instantiated `endpoint` as `endpoint.id` or via the Google Cloud Console under Online prediction, where the endpoint is listed.

```python
import os
from google.cloud import aiplatform

aiplatform.init(project=os.getenv("PROJECT_ID"), location=os.getenv("LOCATION"))

endpoint_display_name = "google--gemma-7b-it-endpoint"  # TODO: change to your endpoint display name

# Iterates over all the Vertex AI Endpoints within the current project and keeps the first match (if any), otherwise set to None
ENDPOINT_ID = next(
    (endpoint.name for endpoint in aiplatform.Endpoint.list() 
     if endpoint.display_name == endpoint_display_name), 
    None
)
assert ENDPOINT_ID, (
    "`ENDPOINT_ID` is not set, please make sure that the `endpoint_display_name` is correct at "\
    f"https://console.cloud.google.com/vertex-ai/online-prediction/endpoints?project={os.getenv('PROJECT_ID')}"
)

endpoint = aiplatform.Endpoint(f"projects/{os.getenv('PROJECT_ID')}/locations/{os.getenv('LOCATION')}/endpoints/{ENDPOINT_ID}")
output = endpoint.predict(
    instances=[
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": True,
                "top_p": 0.95,
                "temperature": 0.7,
            },
        },
    ],
)
print(output.predictions[0])
```

### Via the Vertex AI Online Prediction UI

Alternatively, for testing purposes you can also use the Vertex AI Online Prediction UI, that provides a field that expects the JSON payload formatted according to the Vertex AI specification (as in the examples above) being:

```json
{
    "instances": [
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": true,
                "top_p": 0.95,
                "temperature": 0.7
            }
        }
    ]
}
```

![Vertex AI Endpoint online inference](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai/assets/vertex-ai-online-prediction.png)

## Resource clean-up

Finally, you can already release the resources that you've created as follows, to avoid unnecessary costs:

* `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
* `deployed_model.delete` to delete the endpoint/s where the model was deployed gracefully, after the `undeploy_all` method.
* `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```

Alternatively, you can also remove those from the Google Cloud Console following the steps:

* Go to Vertex AI in Google Cloud
* Go to Deploy and use -> Online prediction
* Click on the endpoint and then on the deployed model/s to "Undeploy model from endpoint"
* Then go back to the endpoint list and remove the endpoint
* Finally, go to Deploy and use -> Model Registry, and remove the model

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-on-vertex-ai)!

### Deploy Gemma2 9B with TGI DLC on Cloud Run
https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-gemma-2-on-cloud-run.md

# Deploy Gemma2 9B with TGI DLC on Cloud Run

Gemma 2 is an advanced, lightweight open model that enhances performance and efficiency while building on the research and technology of its predecessor and the Gemini models developed by Google DeepMind and other teams across Google. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. Google Cloud Run is a serverless container platform that allows developers to deploy and manage containerized applications without managing infrastructure, enabling automatic scaling and billing only for usage.

This example showcases how to deploy the Gemma 2 9B Instruct model, quantized to INT4 using AWQ, from the Hugging Face Hub with the Hugging Face DLC for TGI on Google Cloud Run with GPU support ([in preview](https://cloud.google.com/products#product-launch-stages)).

To access GPU on Cloud Run, [request a quota increase](https://cloud.google.com/run/quotas#increase) for `Total Nvidia L4 GPU allocation, per project per region`. At the time of writing this example, NVIDIA L4 GPUs (24GiB VRAM) are the only available GPUs on Cloud Run; enabling automatic scaling up to 7 instances by default (more available via quota), as well as scaling down to zero instances when there are no requests.

## Setup / Configuration

First, you need to install `gcloud` on your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Optionally, to ease the usage of the commands within this tutorial, you can set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=us-central1  # or any location where Cloud Run offers GPUs: https://cloud.google.com/run/docs/locations#gpu
export CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311
export SERVICE_NAME=gemma2-tgi
```

Then you need to log in to your Google Cloud account and set the project ID you want to use for the Cloud Run deployment.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the Cloud Run API, which is required for the Hugging Face DLC for TGI deployment on Cloud Run.

```bash
gcloud services enable run.googleapis.com
```

## Deploy TGI on Cloud Run

Once you are all set, you can call the `gcloud beta run deploy` command (still in beta because GPU support is in preview, as mentioned above).

The `gcloud beta run deploy` command requires you to specify the following parameters:

- `--image`: The container image URI to deploy.
- `--args`: The arguments to pass to the container entrypoint, being `text-generation-launcher` for the Hugging Face DLC for TGI. Read more about the supported arguments in [Text-generation-launcher arguments](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher).
  - `--model-id`: The model ID to use, in this case, [`hugging-quants/gemma-2-9b-it-AWQ-INT4`](https://huggingface.co/hugging-quants/gemma-2-9b-it-AWQ-INT4).
  - `--quantize`: The quantization method to use, in this case, `awq`. If not specified, it will be retrieved from `quantization_config->quant_method` in the model's `config.json` file (see the sketch after this list).
  - `--max-concurrent-requests`: The maximum amount of concurrent requests for this particular deployment. Having a low limit will refuse client requests instead of having them wait too long, which is usually good for handling back pressure correctly. It is set to 64 here, while the default is 128.
- `--port`: The port the container listens to.
- `--cpu` and `--memory`: The number of CPUs and the amount of memory to allocate to the container. They need to be set to at least 4 and 16Gi (16 GiB), respectively, as that's the minimum requirement for using the GPU; this example uses 8 CPUs and 32Gi.
- `--no-cpu-throttling`: Disables CPU throttling, which is required for using the GPU.
- `--gpu` and `--gpu-type`: The number of GPUs and the GPU type to use. Needs to be set to 1 and `nvidia-l4`, respectively; as at the time of writing this tutorial, those are the only available options as Cloud Run on GPUs is still under preview.
- `--max-instances`: The maximum number of instances to run, set to 3, but default maximum value is 7. Alternatively, one could set it to 1 too, but that could eventually lead to downtime during infrastructure migrations, so anything above 1 is recommended.
- `--concurrency`: the maximum number of concurrent requests per instance, set to 64. The value is not arbitrary, but determined after running and evaluating the results of [`text-generation-benchmark`](https://github.com/huggingface/text-generation-inference/tree/main/benchmark), as the most optimal balance between throughput and latency; where the current default for TGI being 128 is a bit too much. Note that this value is also aligned with the [`--max-concurrent-requests`](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher#maxconcurrentrequests) argument in TGI.
- `--region`: The region to deploy the Cloud Run service.
- `--no-allow-unauthenticated`: Disables unauthenticated access to the service, which is a good practice as it adds an authentication layer managed by Google Cloud IAM.
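
As a quick illustration of where TGI picks up the quantization method when `--quantize` is not set, here is a small sketch that reads the model's `config.json` from the Hub (it assumes the checkpoint exposes a `quantization_config` entry, as AWQ-quantized checkpoints usually do):

```python
import json

from huggingface_hub import hf_hub_download

# Download the model's config.json and inspect the declared quantization method
config_path = hf_hub_download("hugging-quants/gemma-2-9b-it-AWQ-INT4", filename="config.json")
with open(config_path) as f:
    config = json.load(f)

print(config["quantization_config"]["quant_method"])  # expected to print "awq"
```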

Optionally, you can include the arguments `--vpc-egress=all-traffic` and `--subnet=default`; since there is external traffic being sent to the public internet, routing all traffic through the VPC network by setting those flags speeds up the networking. Note that besides setting the flags, you need to set up Google Cloud NAT to reach the public internet, which is a paid product. Find more information in [Cloud Run Documentation - Networking best practices](https://cloud.google.com/run/docs/configuring/networking-best-practices).

```bash
gcloud compute routers create nat-router --network=default --region=$LOCATION
gcloud compute routers nats create vm-nat --router=nat-router --region=$LOCATION --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```

Finally, you can run the `gcloud beta run deploy` command to deploy TGI on Cloud Run as:

```bash
gcloud beta run deploy $SERVICE_NAME \
    --image=$CONTAINER_URI \
    --args="--model-id=hugging-quants/gemma-2-9b-it-AWQ-INT4,--max-concurrent-requests=64" \
    --set-env-vars=HF_HUB_ENABLE_HF_TRANSFER=1 \
    --port=8080 \
    --cpu=8 \
    --memory=32Gi \
    --no-cpu-throttling \
    --gpu=1 \
    --gpu-type=nvidia-l4 \
    --max-instances=3 \
    --concurrency=64 \
    --region=$LOCATION \
    --no-allow-unauthenticated
```

Or as follows if you created the Cloud NAT:

```bash
gcloud beta run deploy $SERVICE_NAME \
    --image=$CONTAINER_URI \
    --args="--model-id=hugging-quants/gemma-2-9b-it-AWQ-INT4,--max-concurrent-requests=64" \
    --set-env-vars=HF_HUB_ENABLE_HF_TRANSFER=1 \
    --port=8080 \
    --cpu=8 \
    --memory=32Gi \
    --no-cpu-throttling \
    --gpu=1 \
    --gpu-type=nvidia-l4 \
    --max-instances=3 \
    --concurrency=64 \
    --region=$LOCATION \
    --no-allow-unauthenticated \
    --vpc-egress=all-traffic \
    --subnet=default
```

![Cloud Run Deployment](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/cloud-run/deploy-gemma-2-on-cloud-run/imgs/cloud-run-deployment.png)

![Cloud Run Deployment Details](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/cloud-run/deploy-gemma-2-on-cloud-run/imgs/cloud-run-details.png)

The first time you deploy a new container on Cloud Run, it will take around 5 minutes, as the image needs to be imported from the Google Cloud Artifact Registry; follow-up deployments will take less time, as the image will have already been imported.

## Inference on Cloud Run

Once deployed, you can send requests to the service via any of the supported TGI endpoints, check [TGI's OpenAPI Specification](https://huggingface.github.io/text-generation-inference/) to see all the available endpoints and their respective parameters.

All Cloud Run services are deployed privately by default, which means that they can't be accessed without providing authentication credentials in the request headers. These services are secured by IAM and are only callable by Project Owners, Project Editors, Cloud Run Admins, and Cloud Run Invokers.

In this case, a couple of alternatives to enable developer access will be showcased, while the other use cases are out of the scope of this example, as those either are not secure due to authentication being disabled (for public access scenarios) or require additional setup for production-ready scenarios ([service-to-service authentication](https://cloud.google.com/run/docs/authenticating/service-to-service), [end-user access](https://cloud.google.com/run/docs/authenticating/end-users)).

The alternatives mentioned below are for development scenarios, and should not be used in production-ready scenarios as is. The approach below is following the guide defined in [Cloud Run Documentation - Authenticate Developers](https://cloud.google.com/run/docs/authenticating/developers); but you can find every other guide as mentioned above in [Cloud Run Documentation - Authentication overview](https://cloud.google.com/run/docs/authenticating/overview).

### Via Cloud Run Proxy

Cloud Run Proxy runs a server on localhost that proxies requests to the specified Cloud Run Service with credentials attached; which is useful for testing and experimentation.

```bash
gcloud run services proxy $SERVICE_NAME --region $LOCATION
```

Then you can send requests to the deployed Cloud Run service through the URL exposed by the proxy, with no authentication, as shown in the examples below.

Note that the examples below use the `/v1/chat/completions` TGI endpoint, which is OpenAI-compatible, meaning that both the `cURL` and Python examples are just suggestions, and any OpenAI-compatible client can be used instead.

#### cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "tgi",
        "messages": [
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

#### Python

To run the inference using Python, you can either use the `huggingface_hub` Python SDK (recommended) or the `openai` Python SDK.

##### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
  model="hugging-quants/gemma-2-9b-it-AWQ-INT4",
  messages=[
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)
```

##### `openai`

You can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key="-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```

### (recommended) Via Cloud Run Service URL

Each Cloud Run Service has a unique URL assigned that can be used to send requests from anywhere, using Google Cloud Credentials with Cloud Run Invoke access to the service; this is the recommended approach as it's more secure and consistent than using the Cloud Run Proxy.

The URL of the Cloud Run service can be obtained via the following command (assigned to the `SERVICE_URL` variable for convenience):

```bash
SERVICE_URL=$(gcloud run services describe $SERVICE_NAME --region $LOCATION --format 'value(status.url)')
```
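
Alternatively, if you prefer to fetch that URL from Python, a minimal sketch using the `google-cloud-run` client library (an assumption, since this example otherwise relies on `gcloud`) could look as follows:

```python
import os

from google.cloud import run_v2  # requires `pip install google-cloud-run`

client = run_v2.ServicesClient()
service = client.get_service(
    name=f"projects/{os.environ['PROJECT_ID']}/locations/{os.environ['LOCATION']}/services/{os.environ['SERVICE_NAME']}"
)
print(service.uri)  # same URL as reported by `gcloud run services describe`
```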

Then you can send requests to the deployed service on Cloud Run, using the `SERVICE_URL` and any Google Cloud Credentials with Cloud Run Invoke access. For setting up the credentials there are multiple approaches, some of those are listed below:

- Using the default identity token from the Google Cloud SDK:

  - Via `gcloud` as:

  ```bash
  gcloud auth print-identity-token
  ```

  - Via Python as follows (see also the request sketch after this list):

  ```python
  import google.auth
  from google.auth.transport.requests import Request as GoogleAuthRequest

  auth_req = GoogleAuthRequest()
  creds, _ = google.auth.default()
  creds.refresh(auth_req)

  id_token = creds.id_token
  ```

- Using a Service Account with Cloud Run Invoke access, which can either be done with any of the following approaches:
  - Create a Service Account before the Cloud Run Service was created, and then set the `--service-account` flag to the Service Account email when creating the Cloud Run Service. And use an Access Token for that Service Account only using `gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT_EMAIL`.
  - Create a Service Account after the Cloud Run Service was created, and then update the Cloud Run Service to use the Service Account. And use an Access Token for that Service Account only using `gcloud auth print-access-token --impersonate-service-account=SERVICE_ACCOUNT_EMAIL`.
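
As an illustration of how the identity token obtained above can be attached to a request, here is a minimal sketch with `requests` (it assumes `SERVICE_URL` is exported in the environment, and the payload mirrors the cURL example below):

```python
import os

import google.auth
import requests
from google.auth.transport.requests import Request as GoogleAuthRequest

# Refresh the default credentials to obtain an identity token, as shown above
auth_req = GoogleAuthRequest()
creds, _ = google.auth.default()
creds.refresh(auth_req)

response = requests.post(
    f"{os.environ['SERVICE_URL']}/v1/chat/completions",
    headers={"Authorization": f"Bearer {creds.id_token}"},
    json={
        "model": "tgi",
        "messages": [{"role": "user", "content": "What is Deep Learning?"}],
        "max_tokens": 128,
    },
)
print(response.json())
```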

The recommended approach is to use a Service Account (SA), as access can be controlled better and the permissions are more granular. Since the Cloud Run Service was not created with an SA (which is another nice option), you now need to create the SA, grant it the necessary permissions, update the Cloud Run Service to use the SA, and then generate an access token to use as the authentication token within the requests, which can be revoked later once you are done.

- Set the `SERVICE_ACCOUNT_NAME` environment variable for convenience:

  ```bash
  export SERVICE_ACCOUNT_NAME=tgi-invoker
  ```

- Create the Service Account:

  ```bash
  gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME
  ```

- Grant the Service Account the Cloud Run Invoker role:

  ```bash
  gcloud run services add-iam-policy-binding $SERVICE_NAME \
      --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
      --role="roles/run.invoker" \
      --region=$LOCATION
  ```

- Generate the Access Token for the Service Account:

  ```bash
  export ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com)
  ```

The access token is short-lived and will expire after 1 hour by default. If you want to extend the token lifetime beyond the default, you must create an organization policy and use the `--lifetime` argument when creating the token. Refer to [Access token lifetime](https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts#extend_oauth_ttl) to learn more. Otherwise, you can also generate a new token by running the same command again.

Now you can dive into the different alternatives for sending requests to the deployed Cloud Run Service using the `SERVICE_URL` and `ACCESS_TOKEN` as described above.

Note that the examples below use the `/v1/chat/completions` TGI endpoint, which is OpenAI-compatible, meaning that both the `cURL` and Python examples are just suggestions, and any OpenAI-compatible client can be used instead.

#### cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl $SERVICE_URL/v1/chat/completions \
    -X POST \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "tgi",
        "messages": [
            {
                "role": "user",
                "content": "What is Deep Learning?"
            }
        ],
        "max_tokens": 128
    }'
```

#### Python

To run the inference using Python, you can either use the `huggingface_hub` Python SDK (recommended) or the `openai` Python SDK.

##### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run:

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    base_url=os.getenv("SERVICE_URL"),
    api_key=os.getenv("ACCESS_TOKEN"),
)

chat_completion = client.chat.completions.create(
  model="hugging-quants/gemma-2-9b-it-AWQ-INT4",
  messages=[
    {"role": "user", "content": "What is Deep Learning?"},
  ],
  max_tokens=128,
)
```

##### `openai`

You can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("SERVICE_URL"),
    api_key=os.getenv("ACCESS_TOKEN"),
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "user", "content": "What is Deep Learning?"},
    ],
    max_tokens=128,
)
```

## Resource clean up

Finally, once you are done using TGI on the Cloud Run Service, you can safely delete it to avoid incurring unnecessary costs, e.g. if the Cloud Run service is inadvertently invoked more times than your monthly Cloud Run invocation allocation in the free tier.

To delete the Cloud Run Service, you can either go to the Google Cloud Console and delete it manually, or use the Google Cloud SDK via `gcloud` as follows:

```bash
gcloud run services delete $SERVICE_NAME --region $LOCATION
```

Additionally, if you followed the steps in [via Cloud Run Service URL](#via-cloud-run-service-url) and generated a Service Account and an access token, you can either remove the Service Account, or just revoke the access token if it is still valid.

- (recommended) Revoke the Access Token as:

```bash
gcloud auth revoke --impersonate-service-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
```

- (optional) Delete the Service Account as:

```bash
gcloud iam service-accounts delete $SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
```

Finally, if you decided to enable the VPC network via Cloud NAT, you can also remove the Cloud NAT (which is a paid product) as:

```bash
gcloud compute routers nats delete vm-nat --router=nat-router --region=$LOCATION
gcloud compute routers delete nat-router --region=$LOCATION
```

## References

- [Cloud Run Documentation - Overview](https://cloud.google.com/run/docs)
- [Cloud Run Documentation - GPU services](https://cloud.google.com/run/docs/configuring/services/gpu)
- [Google Cloud Blog - Run your AI inference applications on Cloud Run with NVIDIA GPUs](https://cloud.google.com/blog/products/application-development/run-your-ai-inference-applications-on-cloud-run-with-nvidia-gpus)

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-gemma-2-on-cloud-run)!

### Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT + LoRA on GKE
https://huggingface.co/docs/google-cloud/examples/gke-trl-lora-fine-tuning.md

# Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT + LoRA on GKE

Mistral is a family of models with varying sizes, created by the Mistral AI team; the Mistral 7B v0.3 LLM is a Mistral 7B v0.2 with extended vocabulary. TRL is a full stack library to fine-tune and align Large Language Models (LLMs) developed by Hugging Face. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to fine-tune Mistral 7B v0.3 with TRL via Supervised Fine-Tuning (SFT) and Low-Rank Adaptation (LoRA) in a single GPU on a GKE Cluster.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine; these are the command-line tools for Google Cloud and Kubernetes, respectively, used to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you can set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
export BUCKET_NAME=your-bucket-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face PyTorch DLC for training.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; please refer to the GKE documentation for alternative installation methods.

## Create GKE Cluster

Once everything is set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use a GPU accelerator to run the fine-tuning job.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all cluster versions support GPU accelerators, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version in your location of the GKE Cluster, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information about the available GKE versions and release channels, please refer to the GKE documentation.

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Configure IAM for GCS

Before you run the fine-tuning job with the Hugging Face PyTorch DLC for training on the GKE Cluster, you need to set the IAM permissions for the GCS Bucket so that the pod in the GKE Cluster can access it; the bucket will be mounted into the running container and used to write the generated artifacts, so that those are automatically uploaded to the GCS Bucket. To do so, you need to create a namespace and a service account in the GKE Cluster, and then set the IAM permissions for the GCS Bucket.

For convenience, as the reference to both the namespace and the service account will be used within the following steps, the environment variables `NAMESPACE` and `SERVICE_ACCOUNT` will be set.

```bash
export NAMESPACE=hf-gke-namespace
export SERVICE_ACCOUNT=hf-gke-service-account
```

Then you can create the namespace and the service account in the GKE Cluster, enabling the creation of the IAM permissions for the pods in that namespace to access the GCS Bucket when using that service account.

```bash
kubectl create namespace $NAMESPACE
kubectl create serviceaccount $SERVICE_ACCOUNT --namespace $NAMESPACE
```

Then you need to add the IAM policy binding to the bucket as follows:

```bash
gcloud storage buckets add-iam-policy-binding \
    gs://$BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$SERVICE_ACCOUNT" \
    --role "roles/storage.objectUser"
```

## Optional: Set Secrets in GKE

As [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) is a gated model, you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions in the Hugging Face Hub documentation on user access tokens; and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

Then log in with the generated token, which has read access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated Hugging Face Hub token, using the `huggingface_hub` Python SDK to retrieve it, as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml \
    --namespace $NAMESPACE | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml \
    --namespace $NAMESPACE | kubectl apply -f -
```

More information on how to set Kubernetes secrets in a GKE Cluster is available in the GKE documentation.

## Define Job Configuration

Before proceeding with the Kubernetes deployment of the batch job via the Hugging Face PyTorch DLC for training, you first need to define the configuration required for the job to run successfully i.e. which GPU is capable of fine-tuning [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16` using LoRA.

As a rough calculation, you could assume that the amount of GPU VRAM required to fine-tune a model in half precision is about four times the model size (read more about it in [Eleuther AI - Transformer Math 101](https://blog.eleuther.ai/transformer-math/)).
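As an illustrative sketch of that rough calculation (assuming ~7.25B parameters for Mistral 7B v0.3; both figures below are approximations, not measurements):

```python
# Rough VRAM estimate following the ~4x rule of thumb mentioned above (illustrative only)
params = 7.25e9  # approximate parameter count of Mistral-7B-v0.3
bytes_per_param = 2  # bfloat16 / float16

model_size_gib = params * bytes_per_param / 1024**3
full_ft_estimate_gib = 4 * model_size_gib  # weights + gradients + optimizer states, roughly

print(f"Model weights in half precision: ~{model_size_gib:.1f} GiB")
print(f"Rough full fine-tuning estimate: ~{full_ft_estimate_gib:.1f} GiB")
# LoRA only trains a small set of adapter parameters, which is why this example
# still fits on a single 24 GiB NVIDIA L4 despite the estimate above.
```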

Alternatively, if your model is uploaded to the Hugging Face Hub, you can check the numbers in the community space [`Vokturz/can-it-run-llm`](https://huggingface.co/spaces/Vokturz/can-it-run-llm), which does those calculations for you, based on the model to fine-tune and the available hardware.

!['Vokturz/can-it-run-llm' for 'mistralai/Mistral-7B-v0.3'](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/can-it-run-llm.png)

## Run Job

Now you can run the Kubernetes job in the Hugging Face PyTorch DLC for training on the GKE Cluster via `kubectl`, from the [`job.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-lora-fine-tuning/job.yaml) configuration file. That file contains the job specification for running the `trl sft` command provided by the TRL CLI, for the SFT LoRA fine-tuning of [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16` using [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), a subset of [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) with ~10k samples, on a single L4 24 GiB GPU, storing the generated artifacts into a volume mounted under `/data` and linked to a GCS Bucket.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/trl-lora-fine-tuning/job.yaml
```

![GKE Job Created in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gke-job-created.png)

![GKE Job Running in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gke-job-running.png)

In this case, since you are running a batch job, it will only use one node, as specified within the [`job.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-lora-fine-tuning/job.yaml) file, since nothing else is needed. The job will deploy one pod running the `trl sft` command on top of the Hugging Face PyTorch DLC container for training, plus the GCS FUSE container that mounts the GCS Bucket into the `/data` path so as to store the generated artifacts in GCS. Once the job is completed, it will automatically scale back to 0, meaning that it will not consume resources.

Additionally, you can use `kubectl` to stream the logs of the job as follows:

```bash
kubectl logs -f job/trl-lora-sft --container trl-container --namespace $NAMESPACE
```

Finally, once the job is completed, the pods will scale to 0 and the artifacts will be visible in the GCS Bucket mounted within the job.
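If you prefer to check the job status programmatically rather than through `kubectl`, a minimal sketch using the official `kubernetes` Python client could look like the following; note that this client is an assumption here (it is not used elsewhere in this example) and needs to be installed with `pip install kubernetes`:

```python
from kubernetes import client, config

# Reuse the kubeconfig populated earlier by `gcloud container clusters get-credentials`
config.load_kube_config()

batch_api = client.BatchV1Api()
job = batch_api.read_namespaced_job(name="trl-lora-sft", namespace="hf-gke-namespace")

# `status.succeeded` stays None until at least one pod has completed successfully
if job.status.succeeded:
    print("Fine-tuning job completed; artifacts should be available in the GCS Bucket.")
else:
    print(f"Job not completed yet: active={job.status.active}, failed={job.status.failed}")
```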

![GKE Job Logs in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gke-job-logs.png)

![GKE Job Completed in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gke-job-completed.png)

![GCS Bucket with output artifacts in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-lora-fine-tuning/imgs/gcs-bucket.png)

## Delete GKE Cluster

Finally, once the fine-tuning job is completed, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you may decide to keep the GKE Cluster running even after the job is completed, since the default GKE Cluster deployed with GKE Autopilot mode is running just a single `e2-small` instance.

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-lora-fine-tuning)!

### Deploy FLUX with PyTorch Inference DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-flux-on-vertex-ai.md

# Deploy FLUX with PyTorch Inference DLC on Vertex AI

FLUX is an open-weights 12B parameter rectified flow transformer created by Black Forest Labs that generates images from text descriptions, pushing the boundaries of text-to-image generation, with a non-commercial license that makes it widely accessible for exploration and experimentation. And, Google Cloud Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy any supported [`diffusers`](https://github.com/huggingface/diffusers) text-to-image model from the Hugging Face Hub, in this case [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev), on Vertex AI using the Hugging Face PyTorch DLC for Inference available in Google Cloud Platform (GCP) in both CPU and GPU instances.

!['black-forest-labs/FLUX.1-dev' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cu121.2-2.transformers.4-44.ubuntu2204.py311
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Register model in Vertex AI

Once everything is set up, you can already initialize the Vertex AI session via the [`google-cloud-aiplatform`](https://github.com/googleapis/python-aiplatform) Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

Since [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev) is a gated model, you need to login into your Hugging Face Hub account with a read-access token, either fine-grained with access to the gated model, or just read-access to your account. More information on how to generate a read-only access token for the Hugging Face Hub can be found in the Hugging Face Hub documentation on user access tokens.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub in the Hugging Face PyTorch DLC for Inference on startup via the `HF_MODEL_ID` environment variable, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

- **`display_name`** is the name that will be shown in the Vertex AI Model Registry.
- **`serving_container_image_uri`** is the location of the Hugging Face PyTorch DLC for Inference that will be used for serving the model.
- **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by [huggingface-inference-toolkit](https://github.com/huggingface/huggingface-inference-toolkit) Python SDK, which exposes some environment variables such as the following:
    - `HF_MODEL_ID` is the identifier of the model in the Hugging Face Hub. To explore all the supported models please check https://huggingface.co/models?sort=trending filtering by the task that you want to use e.g. `text-classification`.
    - `HF_TASK` is the task identifier within the Hugging Face Hub. To see all the supported tasks please check https://huggingface.co/docs/transformers/en/task_summary#natural-language-processing.
    - `HUGGING_FACE_HUB_TOKEN` is the Hugging Face Hub token, required as [`black-forest-labs/FLUX.1-dev`](https://huggingface.co/black-forest-labs/FLUX.1-dev) is a gated model.

For more information on the supported `aiplatform.Model.upload` arguments, check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload.

```python
from huggingface_hub import get_token

model = aiplatform.Model.upload(
    display_name="black-forest-labs--FLUX.1-dev",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "HF_MODEL_ID": "black-forest-labs/FLUX.1-dev",
        "HF_TASK": "text-to-image",
        "HF_TOKEN": get_token(),
    },
)
model.wait()
```

![Model in Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/vertex-ai-model.png)

![Model Version in Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/vertex-ai-model-version.png)

## Deploy model in Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="black-forest-labs--FLUX.1-dev-endpoint")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).
- **`enable_access_logging`** is an argument to enable endpoint access logging i.e. to record the information about requests made to the endpoint within Google Cloud Logging.

For more information on the supported `aiplatform.Model.deploy` arguments, you can check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy.

```python
%%time
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-48",
    accelerator_type="NVIDIA_L4",
    accelerator_count=4,
    enable_access_logging=True,
)
```

The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes.

![Vertex AI Endpoint Ready](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/vertex-ai-endpoint-ready.png)

![Vertex AI Model Ready](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/vertex-ai-model-ready.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

```python
%%time
output = deployed_model.predict(
    instances=["a photo of an astronaut riding a horse on mars"],
    parameters={
        "width": 512,
        "height": 512,
        "num_inference_steps": 8,
        "guidance_scale": 3.5,
    },
)
```

As the deployed model is a text-to-image model, the output payload will contain the generated image for the text description provided, but encoded in base64; meaning that you will need to decode it and load it using the [`Pillow`](https://github.com/python-pillow/Pillow) Python library.

```python
!pip install --upgrade --quiet Pillow
```

```python
import base64
import io

from IPython.display import display
from PIL import Image

image = Image.open(io.BytesIO(base64.b64decode(output.predictions[0])))
display(image)
```

![FLUX.1-dev output](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai/assets/flux-dev-output.png)

**WARNING**: At the moment, the Hugging Face DLCs on Google Cloud have a request timeout of 60 seconds, meaning that if the prediction request takes longer than that, an HTTP 503 will be raised. To prevent that, you can increase the default prediction timeout as per [Vertex AI Documentation - Get online predictions from a custom trained model](https://cloud.google.com/vertex-ai/docs/predictions/get-online-predictions), by either filing a support ticket or contacting your Google Cloud representative. In any case, for the publicly released Hugging Face DLCs, the Google Cloud team is currently working on whitelisting those to increase the prediction timeout to 600 seconds.

## Resource clean-up

Finally, you can release the resources programmatically within the same Python session as follows:

- `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
- `deployed_model.delete` to gracefully delete the endpoint/s where the model was deployed, after the `undeploy_all`.
- `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-flux-on-vertex-ai)!

### Deploy Gemma 7B with TGI DLC from GCS on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-gemma-from-gcs-on-vertex-ai.md

# Deploy Gemma 7B with TGI DLC from GCS on Vertex AI

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models, developed by Google DeepMind and other teams across Google. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy any supported text-generation model, in this case [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it), downloaded from the Hugging Face Hub and uploaded to a Google Cloud Storage (GCS) Bucket, on Vertex AI using the Hugging Face DLC for TGI available in Google Cloud Platform (GCP).

!['google/gemma-7b-it' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env BUCKET_URI=gs://your-bucket
%env ARTIFACT_NAME=google--gemma-7b-it
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Optional: Create bucket and upload model from Hub in GCS

Unless you already have a GCS Bucket with the artifact that you want to serve, please follow the instructions below in order to create a new bucket and download and upload the model weights into it.

To create the bucket on Google Cloud Storage (GCS), you first need to check whether a bucket with the same name already exists, since bucket names need to be unique. To do so, both the `gsutil` SDK and the `crcmod` Python package need to be installed in advance as follows:

```python
!gcloud components install gsutil
!pip install --upgrade --quiet crcmod
```

Then you can check whether the bucket exists in GCS, and create it if it doesn't, with the following bash script:

```python
%%bash

# Parse the bucket from the provided $BUCKET_URI path i.e. given gs://bucket-name/dir, extract bucket-name
BUCKET_NAME=$(echo $BUCKET_URI | cut -d'/' -f3)
# Check if the bucket exists, if not create it
if [ -z "$(gsutil ls | grep gs://$BUCKET_NAME)" ]; then
    gcloud storage buckets create gs://$BUCKET_NAME --project=$PROJECT_ID --location=$LOCATION --default-storage-class=STANDARD --uniform-bucket-level-access
fi
```

Whether the bucket was just created or already existed, you can now upload [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) from either the Hugging Face Hub or local storage.

### Artifact from disk / local storage

If the model is available locally, e.g. under the Hugging Face cache path at `~/.cache/huggingface/hub/models--google--gemma-7b-it/snapshots/8adab6a35fdbcdae0ae41ab1f711b1bc8d05727e`, you should run the following script to upload it to the GCS Bucket.

```python
%%bash
# Upload the model to Google Cloud Storage
LOCAL_DIR=~/.cache/huggingface/hub/models--google--gemma-7b-it/snapshots/8adab6a35fdbcdae0ae41ab1f711b1bc8d05727e
if [ -d "$LOCAL_DIR" ]; then
    gsutil -o GSUtil:parallel_composite_upload_threshold=150M -m cp -r $LOCAL_DIR/* $BUCKET_URI/$ARTIFACT_NAME
fi
```

### Artifact from Hugging Face Hub

Alternatively, you can also upload the model to the GCS Bucket from the Hugging Face Hub. As [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) is a gated model, you need to login into your Hugging Face Hub account with a read-access token either fine-grained with access to the gated model, or just overall read-access to your account.

More information on how to generate a read-only access token for the Hugging Face Hub can be found in the Hugging Face Hub documentation on user access tokens.

```python
!pip install "huggingface_hub[hf_transfer]" --upgrade --quiet
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

After the `huggingface_hub` installation and login are completed, you can run the following bash script to download the model files locally within a temporary directory, and then upload them to the GCS Bucket.

```python
%%bash
# Ensure the necessary environment variables are set
export HF_HUB_ENABLE_HF_TRANSFER=1

# Create a local directory to store the downloaded models
LOCAL_DIR="tmp/google--gemma-7b-it"
mkdir -p $LOCAL_DIR

# Download models from HuggingFace, excluding certain file types
huggingface-cli download google/gemma-7b-it --exclude "*.bin" "*.pth" "*.gguf" ".gitattributes" --local-dir $LOCAL_DIR

# Upload the downloaded models to Google Cloud Storage
gsutil -o GSUtil:parallel_composite_upload_threshold=150M -m cp -e -r $LOCAL_DIR/* $BUCKET_URI/$ARTIFACT_NAME

# Remove all files and hidden files in the target directory
rm -rf tmp/
```

To see the end-to-end script, please check [`./scripts/upload_model_to_gcs.sh`](https://github.com/huggingface/Google-Cloud-Containers/blob/main/scripts/upload_model_to_gcs.sh) within the root directory of the repository.

![GCS Bucket with model artifact](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/gcs-model-artifact.png)

## Register model on Vertex AI

Once everything is set up, you can already initialize the Vertex AI session via the `google-cloud-aiplatform` Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
    staging_bucket=os.getenv("BUCKET_URI"),
)
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the GCS Bucket URI on startup, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

* **`display_name`** is the name that will be shown in the Vertex AI Model Registry.

* **`artifact_uri`** is the path to the directory with the artifact within the GCS Bucket.

* **`serving_container_image_uri`** is the location of the Hugging Face DLC for TGI that will be used for serving the model.

* **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by `text-generation-inference`, which are analog to the [`text-generation-launcher` arguments](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher). Additionally, the Hugging Face DLCs for TGI also capture the `AIP_` environment variables from Vertex AI as in [Vertex AI Documentation - Custom container requirements for prediction](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements).

    * `NUM_SHARD` is the number of shards to use if you don't want to use all GPUs on a given machine e.g. if you have two GPUs but you just want to use one for TGI then `NUM_SHARD=1`, otherwise it matches the `CUDA_VISIBLE_DEVICES`.
    * `MAX_INPUT_TOKENS` is the maximum allowed input length (expressed in number of tokens), the larger it is, the larger the prompt can be, but also more memory will be consumed.
    * `MAX_TOTAL_TOKENS` is the most important value to set as it defines the "memory budget" of running client requests; the larger this value, the more memory each request will take in your RAM and the less effective batching can be.
    * `MAX_BATCH_PREFILL_TOKENS` limits the number of tokens for the prefill operation, as it takes the most memory and is compute bound, it is interesting to limit the number of requests that can be sent.
    * `HUGGING_FACE_HUB_TOKEN` is the Hugging Face Hub token, required as [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) is a gated model.

* (optional) **`serving_container_ports`** is the port where the Vertex AI endpoint will be exposed, by default 8080.

For more information on the supported `aiplatform.Model.upload` arguments, check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload.

Starting from the TGI 2.3 DLC, i.e. `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311` and onwards, you can set the environment variable `MESSAGES_API_ENABLED="true"` to deploy the [Messages API](https://huggingface.co/docs/text-generation-inference/main/en/messages_api) on Vertex AI; otherwise, the [Generate API](https://huggingface.co/docs/text-generation-inference/main/en/quicktour#consuming-tgi) will be deployed instead. A sketch of that variation is shown right after the following code block.

```python
model = aiplatform.Model.upload(
    display_name="google--gemma-7b-it",
    artifact_uri=f"{os.getenv('BUCKET_URI')}/{os.getenv('ARTIFACT_NAME')}",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "NUM_SHARD": "1",
        "MAX_INPUT_TOKENS": "512",
        "MAX_TOTAL_TOKENS": "1024",
        "MAX_BATCH_PREFILL_TOKENS": "1512",
    },
    serving_container_ports=[8080],
)
model.wait()
```
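For reference, if you preferred the Messages API mentioned above, a minimal variation of the same `upload` call could look like the sketch below; the only difference is the extra `MESSAGES_API_ENABLED` environment variable, and note that the request payload format would then differ from the Generate API examples used in the rest of this example:

```python
model = aiplatform.Model.upload(
    display_name="google--gemma-7b-it",
    artifact_uri=f"{os.getenv('BUCKET_URI')}/{os.getenv('ARTIFACT_NAME')}",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "NUM_SHARD": "1",
        "MAX_INPUT_TOKENS": "512",
        "MAX_TOTAL_TOKENS": "1024",
        "MAX_BATCH_PREFILL_TOKENS": "1512",
        # Requires a TGI 2.3+ DLC; exposes the Messages API instead of the Generate API
        "MESSAGES_API_ENABLED": "true",
    },
    serving_container_ports=[8080],
)
model.wait()
```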

![Model on Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-model.png)

![Model on Vertex AI Model Registry with path to GCS](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-model-path.png)

## Deploy model on Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="google--gemma-7b-it-endpoint")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported `aiplatform.Model.deploy` arguments, you can check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy.

```python
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```

**WARNING**: _The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes._

![Vertex AI Endpoint running the model](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-endpoint-run.png)

![Vertex AI Endpoint logs in Cloud Logging](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-endpoint-logs.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

As you are serving a `text-generation` model, you need to make sure that the chat template, if any, is applied correctly to the input conversation. This means that `transformers` needs to be installed, so as to instantiate the `tokenizer` for [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it) and run the `apply_chat_template` method over the input conversation before sending the payload to the Vertex AI endpoint.

```python
!pip install --upgrade --quiet transformers
```

After the installation is complete, the following snippet will apply the chat template to the conversation:

```python
from huggingface_hub import get_token
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it", token=get_token())

messages = [
    {"role": "user", "content": "What's Deep Learning?"},
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# <bos><start_of_turn>user\nWhat's Deep Learning?<end_of_turn>\n<start_of_turn>model\n
```

This is what you will send within the payload to the deployed Vertex AI Endpoint, along with the generation parameters.

### Via Python

#### Within the same session

If you are willing to run the online prediction within the current session, you can send requests programmatically via the `aiplatform.Endpoint` (returned by the `aiplatform.Model.deploy` method) as in the following snippet:

```python
output = deployed_model.predict(
    instances=[
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 256, "do_sample": True,
                "top_p": 0.95, "temperature": 1.0,
            },
        },
    ]
)
print(output.predictions[0])
```

Producing the following `output`:

```
Prediction(predictions=['\n\nDeep learning is a type of machine learning that uses artificial neural networks to learn from large amounts of data, making it a powerful tool for various tasks, including image recognition, natural language processing, and speech recognition.\n\n**Key Concepts:**\n\n* **Artificial Neural Networks (ANNs):** Structures that mimic the interconnected neurons in the brain.\n* **Deep Learning Architectures:** Multi-layered ANNs that learn hierarchical features from data.\n* **Transfer Learning:** Reusing learned features from one task to improve performance on another.\n\n**Types of Deep Learning:**\n\n* **Supervised Learning:** Models are trained on labeled data, where inputs are paired with corresponding outputs.\n* **Unsupervised Learning:** Models learn patterns from unlabeled data, such as clustering or dimensionality reduction.\n* **Reinforcement Learning:** Models learn through trial-and-error by interacting with an environment to optimize a task.\n\n**Benefits:**\n\n* **High Accuracy:** Deep learning models can achieve high accuracy on complex tasks.\n* **Adaptability:** Deep learning models can adapt to new data and tasks.\n* **Scalability:** Deep learning models can handle large amounts of data.\n\n**Applications:**\n\n* Image recognition\n* Natural language processing (NLP)\n'], deployed_model_id='***', metadata=None, model_version_id='1', model_resource_name='projects/***/locations/us-central1/models/***', explanations=None)
```

#### From a different session

If the Vertex AI Endpoint was deployed in a different session and you want to use it but don't have access to the `deployed_model` variable returned by the `aiplatform.Model.deploy` method as in the previous section; you can also run the following snippet to instantiate the deployed `aiplatform.Endpoint` via its resource name as `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}`.

You will need to either retrieve the resource name i.e. the `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}` URL yourself via the Google Cloud Console, or just replace the `ENDPOINT_ID` below, which can be found either via the previously instantiated `endpoint` as `endpoint.id`, or via the Google Cloud Console under Online predictions, where the endpoint is listed.

```python
import os
from google.cloud import aiplatform

aiplatform.init(project=os.getenv("PROJECT_ID"), location=os.getenv("LOCATION"))

endpoint_display_name = "google--gemma-7b-it-endpoint"  # TODO: change to your endpoint display name

# Iterates over all the Vertex AI Endpoints within the current project and keeps the first match (if any), otherwise set to None
ENDPOINT_ID = next(
    (endpoint.name for endpoint in aiplatform.Endpoint.list() 
     if endpoint.display_name == endpoint_display_name), 
    None
)
assert ENDPOINT_ID, (
    "`ENDPOINT_ID` is not set, please make sure that the `endpoint_display_name` is correct at "\
    f"https://console.cloud.google.com/vertex-ai/online-prediction/endpoints?project={os.getenv('PROJECT_ID')}"
)

endpoint = aiplatform.Endpoint(f"projects/{os.getenv('PROJECT_ID')}/locations/{os.getenv('LOCATION')}/endpoints/{ENDPOINT_ID}")
output = endpoint.predict(
    instances=[
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": True,
                "top_p": 0.95,
                "temperature": 0.7,
            },
        },
    ],
)
print(output.predictions[0])
```

### Via the Vertex AI Online Prediction UI

Alternatively, for testing purposes, you can also use the Vertex AI Online Prediction UI, which provides a field that expects a JSON payload formatted according to the Vertex AI specification (as in the examples above), such as:

```json
{
    "instances": [
        {
            "inputs": "user\nWhat's Deep Learning?\nmodel\n",
            "parameters": {
                "max_new_tokens": 128,
                "do_sample": true,
                "top_p": 0.95,
                "temperature": 0.7
            }
        }
    ]
}
```

![Vertex AI Endpoint online inference](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai/assets/vertex-ai-online-prediction.png)

## Resource clean-up

Finally, you can already release the resources that you've created as follows, to avoid unnecessary costs:

* `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
* `deployed_model.delete` to gracefully delete the endpoint/s where the model was deployed, after the `undeploy_all` method.
* `model.delete` to delete the model from the registry.

Note that, as the model is stored within a GCS Bucket, deleting the model from Vertex AI will remove neither the bucket nor its contents.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```

Alternatively, you can also remove those from the Google Cloud Console following the steps:

* Go to Vertex AI in Google Cloud
* Go to Deploy and use -> Online prediction
* Click on the endpoint and then on the deployed model/s to "Undeploy model from endpoint"
* Then go back to the endpoint list and remove the endpoint
* Finally, go to Deploy and use -> Model Registry, and remove the model

Additionally, you may also want to remove the GCS Bucket; to do so, you can use the following `gcloud` command:

```python
!gcloud storage rm -r $BUCKET_URI
```

Or, alternatively, just remove the bucket and/or its contents from the Google Cloud Console.
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-gemma-from-gcs-on-vertex-ai)!

### Deploy Embedding Models with TEI DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-embedding-on-vertex-ai.md

# Deploy Embedding Models with TEI DLC on Vertex AI

BGE, standing for BAAI General Embedding, is a collection of embedding models released by BAAI; `bge-large-en-v1.5` is an English base model for general embedding tasks ranked in the MTEB Leaderboard. Text Embeddings Inference (TEI) is a toolkit developed by Hugging Face for deploying and serving open source text embeddings and sequence classification models; enabling high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5. And, Google Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy any supported embedding model, in this case [`BAAI/bge-large-en-v1.5`](https://huggingface.co/BAAI/bge-large-en-v1.5), from the Hugging Face Hub on Vertex AI using the TEI DLC available in Google Cloud Platform (GCP) in both CPU and GPU instances.

!['BAAI/bge-large-en-v1.5' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-4.ubuntu2204
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Register model on Vertex AI

Once everything is set up, you can already initialize the Vertex AI session via the `google-cloud-aiplatform` Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub in the Hugging Face DLC for TEI on startup via the `MODEL_ID` environment variable, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

* **`display_name`** is the name that will be shown in the Vertex AI Model Registry.

* **`serving_container_image_uri`** is the location of the Hugging Face DLC for TEI that will be used for serving the model.

* **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by `text-embeddings-inference`, which are analog to the [`text-embeddings-router` arguments](https://huggingface.co/docs/text-embeddings-inference/en/cli_arguments). Additionally, the Hugging Face DLCs for TEI also capture the `AIP_` environment variables from Vertex AI as in [Vertex AI Documentation - Custom container requirements for prediction](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements).

    * `MODEL_ID` is the identifier of the model in the Hugging Face Hub, to see all the TEI supported models please check https://huggingface.co/models?other=text-embeddings-inference&sort=trending.

* (optional) **`serving_container_ports`** is the port where the Vertex AI endpoint will be exposed, by default 8080.

For more information on the supported `aiplatform.Model.upload` arguments, check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload.

```python
model = aiplatform.Model.upload(
    display_name="BAAI--bge-large-en-v1.5",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "MODEL_ID": "BAAI/bge-large-en-v1.5",
    },
    serving_container_ports=[8080],
)
model.wait()
```

![Model on Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/vertex-ai-model.png)

## Deploy model on Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="BAAI--bge-large-en-v1.5-endpoint")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported `aiplatform.Model.deploy` arguments, you can check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy.

```python
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-4",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
    sync=True,
)
```

The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes.

![Vertex AI Endpoint running the model](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/vertex-ai-endpoint-run.png)

![Vertex AI Endpoint logs in Cloud Logging](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/vertex-ai-endpoint-logs.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

```python
output = deployed_model.predict(instances=[{"inputs": "What is Deep Learning?"}])
print(output.predictions[0][0][:5], len(output.predictions[0][0]))
```

Which produces the following output (truncated for brevity, but original tensor length is 1024, which is the embedding dimension of [`BAAI/bge-large-en-v1.5`](https://huggingface.co/BAAI/bge-large-en-v1.5)):

```
([0.018108826, 0.0029993146, -0.04984767, -0.035081815, 0.014210845], 1024)
```
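To illustrate how the returned embeddings can be used downstream, the following sketch (assuming `numpy` is installed, that the endpoint is still deployed, and that the response follows the same `predictions[i][0]` structure as above) embeds two sentences and computes their cosine similarity:

```python
import numpy as np

# Request embeddings for two sentences from the deployed TEI endpoint
output = deployed_model.predict(
    instances=[
        {"inputs": "What is Deep Learning?"},
        {"inputs": "Deep Learning is a subfield of Machine Learning."},
    ]
)

# One 1024-dimensional embedding per instance
a = np.asarray(output.predictions[0][0])
b = np.asarray(output.predictions[1][0])

# Cosine similarity between the two embeddings
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```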

![Vertex AI Endpoint logs in Cloud Logging after predict](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai/assets/vertex-ai-endpoint-logs-predict.png)

## Resource clean-up

Finally, you can release the resources programmatically within the same Python session as follows:

- `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
- `deployed_model.delete` to gracefully delete the endpoint/s where the model was deployed, after the `undeploy_all`.
- `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-embedding-on-vertex-ai)!

### Cloud Run Examples
https://huggingface.co/docs/google-cloud/examples/cloud-run-index.md

# Cloud Run Examples

This directory contains usage examples of the Hugging Face Deep Learning Containers (DLCs) in Cloud Run with a focus on inference for Large Language Models (LLMs).

## Inference Examples

| Example                                                          | Title                                         |
| ---------------------------------------------------------------- | --------------------------------------------- |
| [deploy-gemma-2-on-cloud-run](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-gemma-2-on-cloud-run)     | Deploy Gemma2 9B with TGI DLC on Cloud Run    |
| [deploy-llama-3-1-on-cloud-run](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run/deploy-llama-3-1-on-cloud-run) | Deploy Llama 3.1 8B with TGI DLC on Cloud Run |

## Training Examples

Coming soon!

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/cloud-run)!

### Deploy BERT Models with PyTorch Inference DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-bert-on-vertex-ai.md

# Deploy BERT Models with PyTorch Inference DLC on Vertex AI

DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT, which is a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus. And, Google Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy any supported PyTorch model from the Hugging Face Hub, in this case [`distilbert/distilbert-base-uncased-finetuned-sst-2-english`](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english), on Vertex AI using the PyTorch Inference DLC available in Google Cloud Platform (GCP) in both CPU and GPU instances.

!['distilbert/distilbert-base-uncased-finetuned-sst-2-english' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cu121.2-2.transformers.4-44.ubuntu2204.py311
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Register model in Vertex AI

Once everything is set up, you can already initialize the Vertex AI session via the `google-cloud-aiplatform` Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub in the Hugging Face PyTorch DLC for inference on startup via the `HF_MODEL_ID` environment variable, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

- **`display_name`** is the name that will be shown in the Vertex AI Model Registry.
- **`serving_container_image_uri`** is the location of the Hugging Face PyTorch DLC for inference that will be used for serving the model.
- **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by [huggingface-inference-toolkit](https://github.com/huggingface/huggingface-inference-toolkit) Python SDK, which exposes some environment variables such as the following:
    - `HF_MODEL_ID` is the identifier of the model in the Hugging Face Hub. To explore all the supported models please check https://huggingface.co/models?sort=trending filtering by the task that you want to use e.g. `text-classification`.
    - `HF_TASK` is the task identifier within the Hugging Face Hub. To see all the supported tasks please check https://huggingface.co/docs/transformers/en/task_summary#natural-language-processing.

For more information on the supported `aiplatform.Model.upload` arguments, check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload.

```python
model = aiplatform.Model.upload(
    display_name="distilbert--distilbert-base-uncased-finetuned-sst-2-english",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "HF_MODEL_ID": "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
)
model.wait()
```

![Model in Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/vertex-ai-model.png)

![Model Version in Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/vertex-ai-model-version.png)

## Deploy model in Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="distilbert--distilbert-base-uncased-finetuned-sst-2-english-endpoint")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.
- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported `aiplatform.Model.deploy` arguments, you can check its Python reference at https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy.

```python
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-12",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
)
```

The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes.

![Vertex AI Endpoint Ready](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/vertex-ai-endpoint-ready.png)

![Vertex AI Model Ready](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai/assets/vertex-ai-model-ready.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

```python
output = deployed_model.predict(instances=["I love this product", "I hate this product"], parameters={"top_k": 2})
print(output.predictions[0])
```

This produces the following output for each of the provided instances: `POSITIVE` is the label for the first sentence and `NEGATIVE` for the second, as those are the highest scores within each output instance, respectively:

```
[[{'score': 0.9998788833618164, 'label': 'POSITIVE'},
  {'score': 0.0001210561968036927, 'label': 'NEGATIVE'}],
 [{'score': 0.9997544884681702, 'label': 'NEGATIVE'},
  {'score': 0.0002454846107866615, 'label': 'POSITIVE'}]]
```
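
If you only need the top label per input, a minimal sketch like the following (assuming the `output` object returned by the `predict` call above and the nested output format shown) extracts it:

```python
# Minimal sketch: keep only the highest-scoring label for each input sentence.
# Assumes `output.predictions[0]` holds the nested list shown above, i.e. one list
# of {'label': ..., 'score': ...} dicts per provided instance.
top_labels = [
    max(scores, key=lambda entry: entry["score"])["label"]
    for scores in output.predictions[0]
]
print(top_labels)  # expected: ['POSITIVE', 'NEGATIVE']
```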

## Resource clean-up

Finally, you can release the resources programmatically within the same Python session as follows:

- `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
- `deployed_model.delete` to gracefully delete the endpoint(s) where the model was deployed, after the `undeploy_all`.
- `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-bert-on-vertex-ai)!

### Fine-tune Gemma 2B with PyTorch Training DLC using SFT + LoRA on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-trl-lora-sft-fine-tuning-on-vertex-ai.md

# Fine-tune Gemma 2B with PyTorch Training DLC using SFT + LoRA on Vertex AI

[Transformer Reinforcement Learning (TRL)](https://github.com/huggingface/trl) is a framework developed by Hugging Face to fine-tune and align both transformer language and diffusion models using methods such as Supervised Fine-Tuning (SFT), Reward Modeling (RM), Proximal Policy Optimization (PPO), Direct Preference Optimization (DPO), and others. On the other hand, Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to create a custom training job on Vertex AI running the Hugging Face PyTorch DLC for training, using the TRL CLI to fine-tune a 7B LLM with SFT + LoRA on a single GPU.

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create and run the custom training job on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env BUCKET_URI=gs://hf-vertex-pipelines
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-cu121.2-3.transformers.4-42.ubuntu2204.py310
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Optional: Create bucket in GCS

You can use an existing bucket for storing the fine-tuning artifacts; if you already have one, feel free to skip this step and jump to the next one.

As the Vertex AI job will generate artifacts, you need to specify a Google Cloud Storage (GCS) Bucket to store those artifacts in. To do so, you can create a GCS Bucket via the `gcloud storage buckets create` subcommand as follows:

```python
!gcloud storage buckets create $BUCKET_URI --project $PROJECT_ID --location=$LOCATION --default-storage-class=STANDARD --uniform-bucket-level-access
```

## Prepare `CustomContainerTrainingJob`

Once you have configured the environment and created the GCS Bucket (if applicable), you can proceed with the definition of the `CustomContainerTrainingJob`, which is a container job that runs on Vertex AI using the Hugging Face PyTorch DLC for training.

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
    staging_bucket=os.getenv("BUCKET_URI"),
)
```

You now need to define a `CustomContainerTrainingJob` that runs the Hugging Face PyTorch DLC for training, setting the `trl sft` command so that it captures the arguments provided whenever the job runs.

The `CustomContainerTrainingJob` will override the default `ENTRYPOINT` of the provided container URI, so if the `ENTRYPOINT` is already prepared to receive the arguments, there's no need to define a custom `command`.

```python
job = aiplatform.CustomContainerTrainingJob(
    display_name="trl-lora-sft",
    container_uri=os.getenv("CONTAINER_URI"),
    command=[
        "sh",
        "-c",
        'exec trl sft "$@"',
        "--",
    ],
)
```

## Define `CustomContainerTrainingJob` Requirements

Before running the `CustomContainerTrainingJob` via the Hugging Face PyTorch DLC for training, you first need to define the configuration required for the job to run successfully i.e. which GPU is capable of fine-tuning [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16` with LoRA adapters.

As a rough calculation, you could assume that the amount of GPU VRAM required to fine-tune a model in half precision is about four times the model size (read more about it in [Eleuther AI - Transformer Math 101](https://blog.eleuther.ai/transformer-math/)).
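
As a quick illustration of that rule of thumb, the following is a minimal sketch (the parameter count and the 4x factor are rough assumptions, not exact requirements) that estimates the VRAM needed for a ~7B-parameter model:

```python
# Minimal sketch of the rough VRAM rule of thumb mentioned above.
# Assumptions: ~7.25B parameters, 2 bytes per parameter in half precision (bfloat16),
# and ~4x the size of the weights as a coarse estimate for full fine-tuning.
num_parameters = 7.25e9
bytes_per_parameter = 2  # bfloat16 / float16
model_size_gb = num_parameters * bytes_per_parameter / 1024**3
estimated_full_finetuning_gb = 4 * model_size_gb

print(f"Model weights: ~{model_size_gb:.1f} GB")
print(f"Rough full fine-tuning estimate: ~{estimated_full_finetuning_gb:.1f} GB")
# With LoRA only a small fraction of the parameters is trainable, so the optimizer and
# gradient memory shrink considerably, which (together with gradient checkpointing and
# an 8-bit optimizer, as used below) is what makes a single 24 GB GPU viable here.
```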

Alternatively, if your model is uploaded to the Hugging Face Hub, you can check the numbers in the community space [`Vokturz/can-it-run-llm`](https://huggingface.co/spaces/Vokturz/can-it-run-llm), which does those calculations for you, based on the model to fine-tune and the available hardware.

!['Vokturz/can-it-run-llm' for 'mistralai/Mistral-7B-v0.3'](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai/assets/can-it-run-llm.png)

## Run `CustomContainerTrainingJob`

As mentioned before, the job will run the LoRA Supervised Fine-Tuning (SFT) with the TRL CLI on top of [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16` using [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which is a subset from [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) with ~10k samples.

Once you have decided which resources to use to run the job, you need to define the hyperparameters accordingly to ensure that the selected instance is capable of running the job.
Some of the hyperparameters that you may want to look into to avoid running into out-of-memory (OOM) errors are the following:
* **LoRA / QLoRA**: you may need to tweak the rank, denoted by `r`, which defines the fraction of trainable parameters added to each targeted linear layer; the lower the rank, the lower the memory consumption.
* **Optimizer**: by default the AdamW optimizer will be used, but alternatively lower precision optimizers can be used to reduce the memory as well e.g. `adamw_bnb_8bit` (for more information on 8-bit optimizers check https://huggingface.co/docs/bitsandbytes/main/en/optimizers).
* **Batch size**: you can lower the batch size when running into OOM errors, and tweak the gradient accumulation steps to keep a similar effective batch size for the gradient updates while feeding fewer inputs per step e.g. `batch_size=8` with `gradient_accumulation=1` is effectively the same as `batch_size=4` with `gradient_accumulation=2`, as shown in the sketch below.
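
The following is a minimal sketch of that effective batch size equivalence (the variable names are illustrative and do not map one-to-one to TRL CLI arguments):

```python
# Minimal sketch of the effective batch size equivalence mentioned above.
def effective_batch_size(per_device_batch_size: int, gradient_accumulation_steps: int, num_devices: int = 1) -> int:
    return per_device_batch_size * gradient_accumulation_steps * num_devices

# batch_size=8 with gradient_accumulation=1 updates gradients over the same number of
# samples as batch_size=4 with gradient_accumulation=2, while using less memory per step.
assert effective_batch_size(8, 1) == effective_batch_size(4, 2) == 8
```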

As the `CustomContainerTrainingJob` defines the `trl sft` command, the arguments to be provided are listed either in the Python reference at [trl.SFTConfig](https://huggingface.co/docs/trl/en/sft_trainer#trl.SFTConfig) or via the `trl sft --help` command.

Read more about the TRL CLI at https://huggingface.co/docs/trl/en/clis.

It's important to note that since GCS FUSE is used to mount the bucket as a directory within the running container job, the mounted path follows the formatting `/gcs/<BUCKET_NAME>`. More information at https://cloud.google.com/vertex-ai/docs/training/code-requirements. So the `output_dir` needs to point to the mounted GCS Bucket, meaning that anything the `SFTTrainer` writes there will be automatically uploaded to the GCS Bucket.
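
For instance, a minimal sketch of that path mapping (assuming `BUCKET_URI` is set as in the setup above) could look like the following, which mirrors how `output_dir` is built in the args below:

```python
import os

# Minimal sketch: map a gs:// bucket URI to the GCS FUSE path mounted inside the
# Vertex AI training container. Assumes BUCKET_URI is set as in the setup above.
bucket_uri = os.getenv("BUCKET_URI", "gs://hf-vertex-pipelines")
mounted_output_dir = bucket_uri.replace("gs://", "/gcs/") + "/Mistral-7B-v0.3-LoRA-SFT-Guanaco"
print(mounted_output_dir)  # e.g. /gcs/hf-vertex-pipelines/Mistral-7B-v0.3-LoRA-SFT-Guanaco
```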

```python
args = [
    # MODEL
    "--model_name_or_path=mistralai/Mistral-7B-v0.3",
    "--torch_dtype=bfloat16",
    "--attn_implementation=flash_attention_2",
    # DATASET
    "--dataset_name=timdettmers/openassistant-guanaco",
    "--dataset_text_field=text",
    # PEFT
    "--use_peft",
    "--lora_r=16",
    "--lora_alpha=32",
    "--lora_dropout=0.1",
    "--lora_target_modules=all-linear",
    # TRAINER
    "--bf16",
    "--max_seq_length=1024",
    "--per_device_train_batch_size=2",
    "--gradient_accumulation_steps=8",
    "--gradient_checkpointing",
    "--learning_rate=0.0002",
    "--lr_scheduler_type=cosine",
    "--optim=adamw_bnb_8bit",
    "--num_train_epochs=1",
    "--logging_steps=10",
    "--do_eval",
    "--eval_steps=100",
    "--report_to=none",
    f"--output_dir={os.getenv('BUCKET_URI').replace('gs://', '/gcs/')}/Mistral-7B-v0.3-LoRA-SFT-Guanaco",
    "--overwrite_output_dir",
    "--seed=42",
    "--log_level=debug",
]
```

Then you need to call the `submit` method on the `aiplatform.CustomContainerTrainingJob`, which is a non-blocking method that will schedule the job without blocking the execution.

The arguments provided to the `submit` method are listed below:

* **`args`** defines the list of arguments to be provided to the `trl sft` command, provided as `trl sft --arg_1=value ...`.

* **`replica_count`** defines the number of replicas to run the job in; for training, this value will normally be set to one.

* **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** define the machine i.e. Compute Engine instance, the accelerator (if any), and the number of accelerators (ranging from 1 to 8); respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

* **`base_output_dir`** defines the base directory that will be mounted within the running container from the GCS Bucket, conditioned by the `staging_bucket` argument provided to the `aiplatform.init` initially.

* (optional) **`environment_variables`** defines the environment variables to define within the running container. As you are fine-tuning a gated model i.e. [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3), you need to set the `HF_TOKEN` environment variable. Additionally, some other environment variables are defined to set the cache path (`HF_HOME`) and to ensure that the logging messages are streamed to Google Cloud Logs Explorer properly (`TRL_USE_RICH`, `ACCELERATE_LOG_LEVEL`, `TRANSFORMERS_LOG_LEVEL`, and `TQDM_POSITION`).

* (optional) **`timeout`** and **`create_request_timeout`** define the timeouts in seconds before interrupting the job execution or the job creation request (time to allocate required resources and start the execution), respectively.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

```python
from huggingface_hub import get_token

job.submit(
    args=args,
    replica_count=1,
    machine_type="g2-standard-12",
    accelerator_type="NVIDIA_L4",
    accelerator_count=1,
    base_output_dir=f"{os.getenv('BUCKET_URI')}/Mistral-7B-v0.3-LoRA-SFT-Guanaco",
    environment_variables={
        "HF_HOME": "/root/.cache/huggingface",
        "HF_TOKEN": get_token(),
        "TRL_USE_RICH": "0",
        "ACCELERATE_LOG_LEVEL": "INFO",
        "TRANSFORMERS_LOG_LEVEL": "INFO",
        "TQDM_POSITION": "-1",
    },
    timeout=60 * 60 * 3,  # 3 hours (10800s)
    create_request_timeout=60 * 10,  # 10 minutes (600s)
)
```

![Pipeline created in Vertex AI](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-scheduled.png)

![Vertex AI Pipeline successfully completed](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-completed.png)

![Vertex AI Pipeline logs](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-logs.png)

![GCS Bucket with uploaded artifacts](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai/assets/gcs-bucket-artifacts.png)

Finally, you can upload the fine-tuned model to the Hugging Face Hub, or just keep it within the Google Cloud Storage (GCS) Bucket. Later on, after [merging the adapters](https://huggingface.co/docs/trl/main/en/use_model#use-adapters-peft), you will be able to run inference on top of it either with the Hugging Face PyTorch DLC for inference via the `pipeline` in `transformers`, or with the Hugging Face DLC for TGI (as the model is fine-tuned for `text-generation`).
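
As a minimal sketch of how the adapter merging could look with `peft` (the local paths are illustrative assumptions, e.g. after copying the generated artifacts from the GCS Bucket to your machine):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: load the LoRA adapter produced by the fine-tuning job (the local
# path is a hypothetical copy of the GCS artifacts), merge the adapter into the base
# model weights, and save the standalone merged model for inference.
adapter_path = "Mistral-7B-v0.3-LoRA-SFT-Guanaco"  # hypothetical local path
model = AutoPeftModelForCausalLM.from_pretrained(adapter_path, torch_dtype=torch.bfloat16)
merged_model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(adapter_path)
merged_model.save_pretrained("Mistral-7B-v0.3-SFT-Guanaco-merged")
tokenizer.save_pretrained("Mistral-7B-v0.3-SFT-Guanaco-merged")
```
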
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/trl-lora-sft-fine-tuning-on-vertex-ai)!

### Google Kubernetes Engine (GKE) Examples
https://huggingface.co/docs/google-cloud/examples/gke-index.md

# Google Kubernetes Engine (GKE) Examples

This directory contains usage examples of the Hugging Face Deep Learning Containers (DLCs) in Google Kubernetes Engine (GKE) for both training and inference, with a focus on Large Language Models (LLMs).

## Training Examples

| Example                                        | Title                                                                       |
| ---------------------------------------------- | --------------------------------------------------------------------------- |
| [trl-full-fine-tuning](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-full-fine-tuning) | Fine-tune Gemma 2B with PyTorch Training DLC using SFT on GKE               |
| [trl-lora-fine-tuning](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-lora-fine-tuning) | Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT + LoRA on GKE |

## Inference Examples

| Example                                                      | Title                                                         |
| ------------------------------------------------------------ | ------------------------------------------------------------- |
| [tei-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-deployment)                           | Deploy Snowflake's Arctic Embed with TEI DLC on GKE           |
| [tei-from-gcs-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tei-from-gcs-deployment)         | Deploy BGE Base v1.5 with TEI DLC from GCS on GKE             |
| [tgi-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-deployment)                           | Deploy Meta Llama 3 8B with TGI DLC on GKE                    |
| [tgi-from-gcs-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment)         | Deploy Qwen2 7B with TGI DLC from GCS on GKE                  |
| [tgi-llama-405b-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-405b-deployment)     | Deploy Llama 3.1 405B with TGI DLC on GKE                     |
| [tgi-llama-vision-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment) | Deploy Llama 3.2 11B Vision with TGI DLC on GKE               |
| [tgi-multi-lora-deployment](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment)     | Deploy Gemma2 with multiple LoRA adapters with TGI DLC on GKE |

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke)!

### Deploy Qwen2 7B with TGI DLC from GCS on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tgi-from-gcs-deployment.md

# Deploy Qwen2 7B with TGI DLC from GCS on GKE

Qwen2 is the new series of Qwen Large Language Models (LLMs) built by Alibaba Cloud, with both base and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model; the 7B variant ranks second within the 7B size range of the Open LLM Leaderboard by Hugging Face, and the 72B variant ranks first across all sizes. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs with high performance text generation. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to deploy an LLM from a Google Cloud Storage (GCS) Bucket on a GKE Cluster running a purpose-built container to deploy LLMs in a secure and managed environment with the Hugging Face DLC for TGI.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` in your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with the GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
export BUCKET_NAME=your-bucket-name
```

Then you need to login into your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, that can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; refer to the GKE documentation for alternative installation methods.

## Create GKE Cluster

Once everything is set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use the GPU accelerator for high performance inference, also following TGI recommendations based on their internal optimizations for GPUs.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all cluster versions support GPU accelerators e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version in your location of the GKE Cluster, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information about the available versions and release channels, please check the GKE documentation.

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-from-gcs-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Optional: Upload a model from the Hugging Face Hub to GCS

This is an optional step in the tutorial, since you may want to reuse a model that already exists in a GCS Bucket; if that is the case, feel free to jump to the next step of the tutorial on how to configure the IAM for GCS, so that you can access the bucket from a pod in the GKE Cluster.

Otherwise, to upload a model from the Hugging Face Hub to a GCS Bucket, you can use the script [scripts/upload_model_to_gcs.sh](https://github.com/huggingface/Google-Cloud-Containers/blob/main/scripts/upload_model_to_gcs.sh), which will download the model from the Hugging Face Hub and upload it to the GCS Bucket (and create the bucket if not created already).

The `gsutil` component should be installed via `gcloud`, and the Python packages `huggingface_hub` (with the `hf_transfer` extra) and `crcmod` should also be installed.

```bash
gcloud components install gsutil
pip install --upgrade --quiet "huggingface_hub[hf_transfer]" crcmod
```

Then, you can run the script to download the model from the Hugging Face Hub and then upload it to the GCS Bucket:

Make sure to set the proper permissions to run the script i.e. `chmod +x scripts/upload_model_to_gcs.sh`.

```bash
./scripts/upload_model_to_gcs.sh --model-id Qwen/Qwen2-7B-Instruct --gcs gs://$BUCKET_NAME/Qwen2-7B-Instruct
```

![GCS Bucket in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-from-gcs-deployment/imgs/gcs-bucket.png)

## Configure IAM for GCS

Before you proceed with the deployment of the Hugging Face DLC for TGI on the GKE Cluster, you need to set the IAM permissions for the GCS Bucket so that the pod in the GKE Cluster can access the bucket. To do so, you need to create a namespace and a service account in the GKE Cluster, and then set the IAM permissions for the GCS Bucket that contains the model, either as uploaded from the Hugging Face Hub or as already existing in the GCS Bucket.

For convenience, as the reference to both the namespace and the service account will be used within the following steps, the environment variables `NAMESPACE` and `SERVICE_ACCOUNT` will be set.

```bash
export NAMESPACE=hf-gke-namespace
export SERVICE_ACCOUNT=hf-gke-service-account
```

Then you can create the namespace and the service account in the GKE Cluster, enabling the creation of the IAM permissions for the pods in that namespace to access the GCS Bucket when using that service account.

```bash
kubectl create namespace $NAMESPACE
kubectl create serviceaccount $SERVICE_ACCOUNT --namespace $NAMESPACE
```

Then you need to add the IAM policy binding to the bucket as follows:

```bash
gcloud storage buckets add-iam-policy-binding \
    gs://$BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$SERVICE_ACCOUNT" \
    --role "roles/storage.objectUser"
```

## Deploy TGI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TGI, serving the [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) model, from a volume mounted in `/data`, copied from the GCS Bucket where the model is located.

To explore all the models that can be served via TGI, you can check [the models tagged with `text-generation-inference` in the Hugging Face Hub](https://huggingface.co/models?sort=trending&other=text-generation-inference).

The Hugging Face DLC for TGI will be deployed via `kubectl`, from the configuration files in the [`config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment/config/) directory:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment/config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TGI setting the `MODEL_ID` to the model path in the volume mount, in this case `/data/Qwen2-7B-Instruct`.
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment/config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TGI service.
- [`storageclass.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment/config/storageclass.yaml): contains the storage class details of the pod, defining the storage class for the volume mount.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment/config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tgi-from-gcs-deployment/config
```

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-from-gcs-deployment/imgs/gke-deployment.png)

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods --namespace $NAMESPACE
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s --namespace $NAMESPACE deployment/tgi-deployment
```

## Inference with TGI

To run the inference over the deployed TGI service, you can either:

- Port-forwarding the deployed TGI service to port 8080, so as to access it via `localhost`, with the command:

  ```bash
  kubectl port-forward --namespace $NAMESPACE service/tgi-service 8080:8080
  ```

- Accessing the TGI service via the external IP of the ingress, which is the default scenario here since the ingress configuration is defined in the `config/ingress.yaml` file (though it can be skipped in favour of port-forwarding); the IP can be retrieved with the following command:

  ```bash
  kubectl get ingress --namespace $NAMESPACE tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

### Via cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/generate \
    -X POST \
    -d '{"inputs":"system\nYou are a helpful assistant.\nuser\nWhat is 2+2?\nassistant\n","parameters":{"temperature":0.7, "top_p": 0.95, "max_new_tokens": 128}}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead:

```bash
curl http://$(kubectl get ingress --namespace $NAMESPACE tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/generate \
    -X POST \
    -d '{"inputs":"system\nYou are a helpful assistant.\nuser\nWhat is 2+2?\nassistant\n","parameters":{"temperature":0.7, "top_p": 0.95, "max_new_tokens": 128}}' \
    -H 'Content-Type: application/json'
```

Which produces the following output:

```
{"generated_text":"2 + 2 equals 4."}
```

To generate the `inputs` with the expected chat template formatting, you could use the following snippet:

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
```

### Via Python

To run the inference using Python, you can use the `openai` Python SDK, setting either the localhost or the ingress IP as the `base_url` for the client, and then running the following code:

```python
from huggingface_hub import get_token
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key=get_token() or "-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2+2?"},
    ],
    max_tokens=128,
)
```

Which produces the following output:

```
ChatCompletion(id='', choices=[Choice(finish_reason='eos_token', index=0, message=ChatCompletionMessage(content='2 + 2 equals 4.', role='assistant', function_call=None, tool_calls=None), logprobs=None)], created=1719996359, model='/data/Qwen2-7B-Instruct', object='chat.completion', system_fingerprint='2.1.0-native', usage=CompletionUsage(completion_tokens=9, prompt_tokens=26, total_tokens=35))
```

## Delete GKE Cluster

Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you can also downscale the replicas of the deployed pod to 0 in case you want to preserve the cluster, since the default GKE Cluster deployed with GKE Autopilot mode is running just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 --namespace $NAMESPACE deployment/tgi-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-from-gcs-deployment)!

### Deploy Llama 3.2 11B Vision with TGI DLC on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-deploy-llama-vision-on-vertex-ai.md

# Deploy Llama 3.2 11B Vision with TGI DLC on Vertex AI

[Llama 3.2](https://huggingface.co/blog/llama32) is the latest release of open LLMs from the Llama family released by Meta (as of October 2024); Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-sized GPUs, and 90B for large-scale applications. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to deploy [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on Vertex AI via the Hugging Face purpose-built Deep Learning Container (DLC) for Text Generation Inference (TGI) on Google Cloud.

Regarding the licensing terms, Llama 3.2 comes with a very similar license to Llama 3.1, with one key difference in the acceptable use policy: any individual domiciled in, or a company with a principal place of business in, the European Union (EU) is not being granted the license rights to use multimodal models included in Llama 3.2. This restriction does not apply to end users of a product or service that incorporates any such multimodal models, so people can still build global products with the vision variants.

For full details, please make sure to read [the official license](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt) and [the acceptable use policy](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/USE_POLICY.md).

!['meta-llama/Llama-3.2-11B-Vision-Instruct' in the Hugging Face Hub](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/assets/model-in-hf-hub.png)

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create the Vertex AI model, register it, create the endpoint, and deploy it on Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311
```

Then you need to login into your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

Once everything is set up, you can already initialize the Vertex AI session via the `google-cloud-aiplatform` Python SDK as follows:

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
)
```

## Register model on Vertex AI

As [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) is a gated model with restricted access in the European Union (EU), you need to accept the license agreement and log in with a Hugging Face Hub token that has access to it.

To generate a token for the Hugging Face Hub, you can follow the instructions in [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); the generated token can either be fine-grained to have access to the model, or just overall read-only access to your account.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

Then you can already "upload" the model i.e. register the model on Vertex AI. It is not an upload per se, since the model will be automatically downloaded from the Hugging Face Hub in the Hugging Face DLC for TGI on startup via the `MODEL_ID` environment variable, so what is uploaded is only the configuration, not the model weights.

Before going into the code, let's quickly review the arguments provided to the `upload` method:

* **`display_name`** is the name that will be shown in the Vertex AI Model Registry.

* **`serving_container_image_uri`** is the location of the Hugging Face DLC for TGI that will be used for serving the model.

* **`serving_container_environment_variables`** are the environment variables that will be used during the container runtime, so these are aligned with the environment variables defined by `text-generation-inference`, which are analog to the [`text-generation-launcher` arguments](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/launcher). Additionally, the Hugging Face DLCs for TGI also capture the `AIP_` environment variables from Vertex AI as in [Vertex AI Documentation - Custom container requirements for prediction](https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements).

    * `MODEL_ID` is the identifier of the model in the Hugging Face Hub. To explore all the supported models you can check [the models tagged with `text-generation-inference` in the Hugging Face Hub](https://huggingface.co/models?sort=trending&other=text-generation-inference).
    * `NUM_SHARD` is the number of shards to use if you don't want to use all GPUs on a given machine e.g. if you have two GPUs but you just want to use one for TGI then `NUM_SHARD=1`, otherwise it matches the `CUDA_VISIBLE_DEVICES`.
    * `MAX_INPUT_TOKENS` is the maximum allowed input length (expressed in number of tokens), the larger it is, the larger the prompt can be, but also more memory will be consumed.
    * `MAX_TOTAL_TOKENS` is the most important value to set as it defines the "memory budget" of running client requests; the larger this value, the more memory each request will take in your RAM and the less effective batching can be.
    * `MAX_BATCH_PREFILL_TOKENS` limits the number of tokens for the prefill operation; since prefill takes the most memory and is compute bound, it is useful to limit the number of requests that can be sent at once.
    * `HF_HUB_ENABLE_HF_TRANSFER` to enable a faster download speed via the hf_transfer library.
    * `HUGGING_FACE_HUB_TOKEN` is the Hugging Face Hub token, required as [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) is a gated model with restricted access in the European Union (EU).

    Additionally, you need to specify the `MESSAGES_API_ENABLED` environment variable that was introduced in the TGI 2.3.0 Release, since the Messages API is required to process both the text and the images within the input payload.

    * `MESSAGES_API_ENABLED` set to "true" to use the Messages API i.e. `/v1/chat/completions`, instead of the Generation API i.e. `/generation` (default).

* (optional) **`serving_container_ports`** is the port where the Vertex AI endpoint will be exposed, by default 8080.

For more information on the supported arguments you can check [`aiplatform.Model.upload` Python reference](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_upload).

Note that the `MESSAGES_API_ENABLED` flag will only work from the TGI 2.3 DLC i.e. `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-3.ubuntu2204.py311`, onwards.

For the previous releases, the `MESSAGES_API_ENABLED` flag won't work as it was introduced [in the following TGI PR](https://github.com/huggingface/text-generation-inference/pull/2481), the incompatible releases being:
- `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.1-4.ubuntu2204.py310`
- `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-0.ubuntu2204.py310`
- `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-1.ubuntu2204.py310`
- `us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310`

```python
from huggingface_hub import get_token

model = aiplatform.Model.upload(
    display_name="Llama-Vision-11B",
    serving_container_image_uri=os.getenv("CONTAINER_URI"),
    serving_container_environment_variables={
        "MODEL_ID": "meta-llama/Llama-3.2-11B-Vision-Instruct",
        "NUM_SHARD": "2",
        "MAX_INPUT_TOKENS": "512",
        "MAX_TOTAL_TOKENS": "1024",
        "MAX_BATCH_PREFILL_TOKENS": "1512",
        "HF_HUB_ENABLE_HF_TRANSFER": "1",
        "HUGGING_FACE_HUB_TOKEN": get_token(),
        "MESSAGES_API_ENABLED": "true",
    },
    serving_container_ports=[8080],
)
model.wait()
```

![Model on Vertex AI Model Registry](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/assets/vertex-ai-model.png)

## Deploy model on Vertex AI

After the model is registered on Vertex AI, you need to define the endpoint that you want to deploy the model to, and then link the model deployment to that endpoint resource.

To do so, you need to call the method `aiplatform.Endpoint.create` to create a new Vertex AI endpoint resource (which is not linked to a model or anything usable yet).

```python
endpoint = aiplatform.Endpoint.create(display_name="Llama-Vision-11B-API")
```

![Vertex AI Endpoint created](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/assets/vertex-ai-endpoint.png)

Now you can deploy the registered model in an endpoint on Vertex AI.

The `deploy` method will link the previously created endpoint resource with the model that contains the configuration of the serving container, and then, it will deploy the model on Vertex AI in the specified instance.

Before going into the code, let's quickly review the arguments provided to the `deploy` method:

- **`endpoint`** is the endpoint to deploy the model to, which is optional, and by default will be set to the model display name with the `_endpoint` suffix.

- **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** are arguments that define which instance to use, and additionally, the accelerator to use and the number of accelerators, respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

For more information on the supported arguments you can check [`aiplatform.Model.deploy` Python reference](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Model#google_cloud_aiplatform_Model_deploy).

```python
deployed_model = model.deploy(
    endpoint=endpoint,
    machine_type="g2-standard-24",
    accelerator_type="NVIDIA_L4",
    accelerator_count=2,
)
```

**WARNING**: _The Vertex AI endpoint deployment via the `deploy` method may take from 15 to 25 minutes._

![Vertex AI Endpoint running the model](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/assets/vertex-ai-endpoint-run.png)

## Online predictions on Vertex AI

Finally, you can run the online predictions on Vertex AI using the `predict` method, which will send the requests to the running endpoint in the `/predict` route specified within the container following Vertex AI I/O payload formatting.

Note that the input payload differs a bit from the standard Text Generation Inference (TGI), as [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) is a Visual Language Model (VLM), as those models consume both text and images. More information in [Vision Language Model Inference in TGI](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/visual_language_models).

### Via Python

#### Within the same session

If you are willing to run the online prediction within the current session, you can send requests programmatically via the `aiplatform.Endpoint` (returned by the `aiplatform.Model.deploy` method) as in the following snippet:

```python
output = deployed_model.predict(
    instances=[
        {
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "What's in this image?"},
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                            },
                        },
                    ],
                },
            ],
            "parameters": {
                "max_new_tokens": 256, "do_sample": True,
                "top_p": 0.95, "temperature": 1.0, "stream": False,
            },
        },
    ],
)
print(output.predictions[0])
```

```
The image depicts a stylized illustration of an anthropomorphic rabbit dressed in a space suit, standing on a rocky, alien-like planet.
```

#### From a different session

If the Vertex AI Endpoint was deployed in a different session and you want to use it but don't have access to the `deployed_model` variable returned by the `aiplatform.Model.deploy` method as in the previous section, you can also run the following snippet to instantiate the deployed `aiplatform.Endpoint` via its resource name as `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}`.

Note that you will need to either retrieve the resource name i.e. the `projects/{PROJECT_ID}/locations/{LOCATION}/endpoints/{ENDPOINT_ID}` URL yourself via the Google Cloud Console, or just replace the `ENDPOINT_ID` below, which can be found either via the previously instantiated endpoint as `endpoint.name` or via the Google Cloud Console under Online prediction, where the endpoint is listed.

```python
import os
from google.cloud import aiplatform

aiplatform.init(project=os.getenv("PROJECT_ID"), location=os.getenv("LOCATION"))

endpoint_display_name = "Llama-Vision-11B-API"  # TODO: change to your endpoint display name

# Iterates over all the Vertex AI Endpoints within the current project and keeps the first match (if any), otherwise set to None
ENDPOINT_ID = next(
    (endpoint.name for endpoint in aiplatform.Endpoint.list() 
     if endpoint.display_name == endpoint_display_name), 
    None
)
assert ENDPOINT_ID, (
    "`ENDPOINT_ID` is not set, please make sure that the `endpoint_display_name` is correct at "\
    f"https://console.cloud.google.com/vertex-ai/online-prediction/endpoints?project={os.getenv('PROJECT_ID')}"
)

endpoint = aiplatform.Endpoint(f"projects/{os.getenv('PROJECT_ID')}/locations/{os.getenv('LOCATION')}/endpoints/{ENDPOINT_ID}")
output = endpoint.predict(
    instances=[
        {
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "How long does it take from invoice date to due date? Be short and concise."},
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png"
                            },
                        },
                    ],
                },
            ],
            "parameters": {
                "max_new_tokens": 256, "do_sample": True,
                "top_p": 0.95, "temperature": 1.0, "stream": False,
            },
        },
    ],
)
print(output.predictions[0])
```

```
To calculate the time difference between the invoice date and the due date, we need to subtract the invoice date from the due date.
Invoice Date: 11/02/2019
Due Date: 26/02/2019
Time Difference = Due Date - Invoice Date
Time Difference = 26/02/2019 - 11/02/2019
Time Difference = 15 days
Therefore, it takes 15 days from the invoice date to the due date.
```

### Via the Vertex AI Online Prediction UI

Alternatively, for testing purposes you can also use the Vertex AI Online Prediction UI, which provides a field that expects the JSON payload formatted according to the Vertex AI specification (as in the examples above), such as:

```json
{
    "instances": [
        {
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "What's in this image?"
                        },
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                            }
                        }
                    ]
                }
            ],
            "parameters": {
                "max_new_tokens": 256,
                "do_sample": true,
                "top_p": 0.95,
                "temperature": 1.0,
                "stream": false
            }
        }
    ]
}
```

![Vertex AI Endpoint online inference](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/assets/vertex-ai-online-prediction.png)

## Resource clean-up

Finally, you can already release the resources that you've created as follows, to avoid unnecessary costs:

* `deployed_model.undeploy_all` to undeploy the model from all the endpoints.
* `deployed_model.delete` to gracefully delete the endpoint(s) where the model was deployed, after the `undeploy_all` method.
* `model.delete` to delete the model from the registry.

```python
deployed_model.undeploy_all()
deployed_model.delete()
model.delete()
```

Alternatively, you can also remove those from the Google Cloud Console following the steps:

* Go to Vertex AI in Google Cloud
* Go to Deploy and use -> Online prediction
* Click on the endpoint and then on the deployed model/s to "Undeploy model from endpoint"
* Then go back to the endpoint list and remove the endpoint
* Finally, go to Deploy and use -> Model Registry, and remove the model
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai)!

### Fine-tune Gemma 2B with PyTorch Training DLC using SFT on GKE
https://huggingface.co/docs/google-cloud/examples/gke-trl-full-fine-tuning.md

# Fine-tune Gemma 2B with PyTorch Training DLC using SFT on GKE

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models, developed by Google DeepMind and other teams across Google. TRL is a full stack library to fine-tune and align Large Language Models (LLMs) developed by Hugging Face. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to run a full fine-tuning of Gemma 2B with TRL via Supervised Fine-Tuning (SFT) in a multi-GPU setting on a GKE Cluster.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` in your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with the GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you need to set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to login into your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face PyTorch DLC for training.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; to read more about the alternative installation methods, please visit [GKE Documentation - Install kubectl and configure cluster access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin).

## Create GKE Cluster

Once everything's set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use the GPU accelerators required to run the fine-tuning job.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all versions support GPU accelerators, e.g. `nvidia-l4` is not supported in GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.28 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version of the GKE Cluster available in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information please visit [GKE Documentation - Specifying cluster version](https://cloud.google.com/kubernetes-engine/versioning#specifying_cluster_version).

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Configure IAM for GCS

Before you run the fine-tuning job of the Hugging Face PyTorch DLC for training on the GKE Cluster, you need to set the IAM permissions for the GCS Bucket so that the pod in the GKE Cluster can access it; the bucket will be mounted into the running container and used to write the generated artifacts, so that those are automatically uploaded to the GCS Bucket. To do so, you need to create a namespace and a service account in the GKE Cluster, and then set the IAM permissions for the GCS Bucket.

For convenience, as the reference to both the namespace and the service account will be used within the following steps, the environment variables `NAMESPACE` and `SERVICE_ACCOUNT` will be set.

```bash
export NAMESPACE=hf-gke-namespace
export SERVICE_ACCOUNT=hf-gke-service-account
```

Then you can create the namespace and the service account in the GKE Cluster, enabling the creation of the IAM permissions for the pods in that namespace to access the GCS Bucket when using that service account.

```bash
kubectl create namespace $NAMESPACE
kubectl create serviceaccount $SERVICE_ACCOUNT --namespace $NAMESPACE
```

Then you need to add the IAM policy binding to the bucket (referenced here via the `BUCKET_NAME` environment variable, which should point to an existing GCS Bucket) as follows:

```bash
gcloud storage buckets add-iam-policy-binding \
    gs://$BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/$NAMESPACE/sa/$SERVICE_ACCOUNT" \
    --role "roles/storage.objectUser"
```

## Optional: Set Secrets in GKE

As [`google/gemma-2b`](https://huggingface.co/google/gemma-2b) is a gated model, you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions at [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

And then log in with the generated token with read-access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated token for the Hugging Face Hub as follows using the `huggingface_hub` Python SDK to retrieve the token:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml \
    --namespace $NAMESPACE | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml \
    --namespace $NAMESPACE | kubectl apply -f -
```

More information on how to set Kubernetes secrets in a GKE Cluster at [Secret Manager Documentation - Use Secret Manager add-on with Google Kubernetes Engine](https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component).

## Define Job Configuration

Before proceeding with the Kubernetes deployment of the batch job via the Hugging Face PyTorch DLC for training, you first need to define the configuration required for the job to run successfully i.e. which GPU is capable of fine-tuning [`google/gemma-2b`](https://huggingface.co/google/gemma-2b) in `bfloat16`.

As a rough calculation, you could assume that the amount of GPU VRAM required to fine-tune a model in half precision is about four times the model size (read more about it in [Eleuther AI - Transformer Math 101](https://blog.eleuther.ai/transformer-math/)).
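
For a quick back-of-the-envelope check of that rule of thumb, here is a minimal sketch of the arithmetic (the parameter count used below is an approximation, not an exact figure):

```python
# Back-of-the-envelope VRAM estimate following the ~4x heuristic above
num_params = 2.5e9       # google/gemma-2b has roughly 2.5B parameters (approximate)
bytes_per_param = 2      # bfloat16 (half precision)
model_size_gib = num_params * bytes_per_param / 1024**3

print(f"Model weights: ~{model_size_gib:.1f} GiB")                         # ~4.7 GiB
print(f"Estimated fine-tuning footprint: ~{4 * model_size_gib:.1f} GiB")   # ~18.6 GiB of GPU VRAM
```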

Alternatively, if your model is uploaded to the Hugging Face Hub, you can check the numbers in the community space [`Vokturz/can-it-run-llm`](https://huggingface.co/spaces/Vokturz/can-it-run-llm), which does those calculations for you, based on the model to fine-tune and the available hardware.

!['Vokturz/can-it-run-llm' for 'google/gemma-2b'](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/can-it-run-llm.png)

## Run Job

Now you can run the Kubernetes job in the Hugging Face PyTorch DLC for training on the GKE Cluster via `kubectl`, from the [`job.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-full-fine-tuning/job.yaml) configuration file. The file contains the job specification for running the `trl sft` command provided by the TRL CLI, for the SFT full fine-tuning of [`google/gemma-2b`](https://huggingface.co/google/gemma-2b) in `bfloat16` using [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), a ~10k sample subset of [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1), on 4 x A100 40GiB GPUs, storing the generated artifacts into a volume mount under `/data` linked to a GCS Bucket.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/trl-full-fine-tuning/job.yaml
```

![GKE Job Created in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gke-job-created.png)

![GKE Job Running in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gke-job-running.png)

In this case, since you are running a batch job, it will only use one node as specified within the [`job.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-full-fine-tuning/job.yaml) file, since that is all the job needs. The job will deploy one pod running the `trl sft` command on top of the Hugging Face PyTorch DLC container for training, as well as the GCS FUSE container that mounts the GCS Bucket into the `/data` path so as to store the generated artifacts in GCS. Once the job is completed, it will automatically scale back to 0, meaning that it will not consume resources.

Additionally, you can use `kubectl` to stream the logs of the job as follows:

```bash
kubectl logs -f job/trl-full-sft --container trl-container --namespace $NAMESPACE
```

Finally, once the job is completed, the pods will scale to 0 and the artifacts will be visible in the GCS Bucket mounted within the job.

![GKE Job Logs in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gke-job-logs.png)

![GKE Job Completed in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gke-job-completed.png)

![GCS Bucket with output artifacts in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/trl-full-fine-tuning/imgs/gcs-bucket.png)
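
As a quick sanity check, you could also list the uploaded artifacts with a minimal sketch using the `google-cloud-storage` Python SDK (assuming the package is installed and that the `PROJECT_ID` and `BUCKET_NAME` environment variables are set as above):

```python
import os

from google.cloud import storage

# Reuses the application-default credentials set up earlier via `gcloud auth application-default login`
client = storage.Client(project=os.environ["PROJECT_ID"])

# List every object the fine-tuning job wrote through the GCS FUSE mount under `/data`
for blob in client.list_blobs(os.environ["BUCKET_NAME"]):
    print(f"{blob.name} ({blob.size} bytes)")
```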

## Delete GKE Cluster

Finally, once the fine-tuning job is completed, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you may decide to keep the GKE Cluster running even after the job is completed, since the default GKE Cluster deployed with GKE Autopilot mode is running just a single `e2-small` instance.

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/trl-full-fine-tuning)!

### Deploy Llama 3.2 11B Vision with TGI DLC on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tgi-llama-vision-deployment.md

# Deploy Llama 3.2 11B Vision with TGI DLC on GKE

[Llama 3.2](https://huggingface.co/blog/llama32) is the latest release of open LLMs from the Llama family released by Meta (as of October 2024); Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-size GPU, and 90B for large-scale applications. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using Google infrastructure.

This example showcases how to deploy [`meta-llama/Llama-3.2-11B-Vision`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision) on GKE via the Hugging Face purpose-built Deep Learning Container (DLC) for Text Generation Inference (TGI) on Google Cloud.

Regarding the licensing terms, Llama 3.2 comes with a very similar license to Llama 3.1, with one key difference in the acceptable use policy: any individual domiciled in, or a company with a principal place of business in, the European Union (EU) is not being granted the license rights to use multimodal models included in Llama 3.2. This restriction does not apply to end users of a product or service that incorporates any such multimodal models, so people can still build global products with the vision variants.

For full details, please make sure to read [the official license](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt) and [the acceptable use policy](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/USE_POLICY.md).

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you should set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Google Kubernetes Engine API, the Google Container Registry API, and the Google Container File System API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable containerfilesystem.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; to read more about the alternative installation methods, please visit [GKE Documentation - Install kubectl and configure cluster access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin).

## Create GKE Cluster

Once everything's set up, you can proceed with the creation of the GKE Cluster in either Autopilot or Standard mode: for Autopilot you just need to create the cluster, and the node pools will be created based on the deployment requirements; whilst in Standard mode you will need to create the node pools yourself and manage most of the underlying infrastructure.

The "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google; but alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all versions support GPU accelerators, e.g. `nvidia-l4` is not supported in the GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.29 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version of the GKE Cluster available in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information please visit [GKE Documentation - Specifying cluster version](https://cloud.google.com/kubernetes-engine/versioning#specifying_cluster_version).

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-vision-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Get Hugging Face token and set secrets in GKE

As [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) is a gated model with restricted access in the European Union (EU), you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions at [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

And then log in with the generated token with read-access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated token for the Hugging Face Hub as follows using the `huggingface_hub` Python SDK to retrieve the token:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml | kubectl apply -f -
```

![GKE Secret in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-vision-deployment/imgs/gke-secrets.png)

More information on how to set Kubernetes secrets in a GKE Cluster at [Secret Manager Documentation - Use Secret Manager add-on with Google Kubernetes Engine](https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component).

## Deploy TGI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TGI, serving the [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) model from the Hugging Face Hub.

To explore all the models that can be served via TGI, you can explore [the models tagged with `text-generation-inference` in the Hub](https://huggingface.co/models?other=text-generation-inference); specifically, if you are interested in Vision Language Models (VLMs) you can explore [the models tagged with both `text-generation-inference` and `image-text-to-text` in the Hub](https://huggingface.co/models?pipeline_tag=image-text-to-text&other=text-generation-inference&sort=trending).
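
For instance, a minimal sketch using the `huggingface_hub` Python SDK to programmatically list a few of those VLMs (assuming a recent `huggingface_hub` version where `HfApi.list_models` accepts the `pipeline_tag` and `tags` filters):

```python
from huggingface_hub import HfApi

api = HfApi()

# List a handful of models exposing both the `image-text-to-text` pipeline tag and
# the `text-generation-inference` tag, i.e. VLMs that TGI can serve
for model in api.list_models(
    pipeline_tag="image-text-to-text",
    tags="text-generation-inference",
    limit=10,
):
    print(model.id)
```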

The Hugging Face DLC for TGI will be deployed via `kubectl`, from the configuration files in the [`config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment/config/) directory:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment/config/deployment.yaml): contains the deployment details of the pod including the reference to the Hugging Face DLC for TGI setting the `MODEL_ID` to [`meta-llama/Llama-3.2-11B-Vision-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct). As the GKE Cluster was deployed in Autopilot mode, the specified resources i.e. 2 x L4s, will be automatically allocated; but if you used the Standard mode instead, you should make sure that your node pool has those GPUs available.
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment/config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TGI service.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment/config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tgi-llama-vision-deployment/config
```

The Kubernetes deployment may take a few minutes to be ready, so you can check the status of the deployment with the following command:

```bash
kubectl get pods
```

As well as checking the logs of the deployment that's being rolled out with:

```bash
kubectl logs -f deployment/tgi-deployment
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s deployment/tgi-deployment
```

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-llama-vision-deployment/imgs/gke-deployment.png)

## Inference with TGI

To run the inference over the deployed TGI service, you can either:

- Port-forwarding the deployed TGI service to port 8080, so as to access it via `localhost` with the command:

  ```bash
  kubectl port-forward service/tgi-service 8080:8080
  ```

- Accessing the TGI service via the external IP of the ingress, which is the default scenario here since you have defined the ingress configuration in the `config/ingress.yaml` file (but it can be skipped in favour of port-forwarding); the external IP can be retrieved with the following command:

  ```bash
  kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```

### Via cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":[{"type":"text","text":"What'\''s in this image?"},{"type":"image_url","image_url":{"url":"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}}]}],"temperature":0.7,"top_p":0.95,"max_tokens":128,"stream":false}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead (without specifying the port as it's not needed):

```bash
curl http://$(kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":[{"type":"text","text":"What'\''s in this image?"},{"type":"image_url","image_url":{"url":"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}}]}],"temperature":0.7,"top_p":0.95,"max_tokens":128,"stream":false}' \
    -H 'Content-Type: application/json'
```

Which generates the following output:

```
{"object":"chat.completion","id":"","created":1728041178,"model":"meta-llama/Llama-3.2-11B-Vision-Instruct","system_fingerprint":"2.3.1-native","choices":[{"index":0,"message":{"role":"assistant","content":"The image shows a rabbit wearing a space suit, standing on a rocky, orange-colored surface. The background is a reddish-brown color with a bright light shining from the right side of the image."},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":43,"completion_tokens":42,"total_tokens":85}}
```

### Via Python

To run the inference using Python, you can either use the [`huggingface_hub` Python SDK](https://github.com/huggingface/huggingface_hub) (recommended) or the [`openai` Python SDK](https://github.com/openai/openai-python).

In the examples below `localhost` will be used, but if you did deploy TGI with the ingress, feel free to use the ingress IP as mentioned above (without specifying the port).

#### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run the following snippet to mimic the `cURL` commands above i.e. sending requests to the Messages API:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                    },
                },
            ],
        },
    ],
    max_tokens=128,
)
```

Which generates the following output:

```
ChatCompletionOutput(choices=[ChatCompletionOutputComplete(finish_reason='length', index=0, message=ChatCompletionOutputMessage(role='assistant', content="The image depicts an astronaut rabbit standing on a rocky surface, which is likely Mars or a similar planet. The astronaut's suit takes up most of the shot, with a clear distinction between its lighter parts, such as the chest and limbs, and its darker parts, like the shell that protects it from space's toxic environment. The head portion consists of the full-length helmet that's transparent at the front, allowing the rabbit's face to be visible. \n\nThe astronaut rabbit stands vertically, looking into the distance, its head slightly pointed forward and pointing with its right arm down. Its left arm hangs at its side, adding balance to its stance", tool_calls=None), logprobs=None)], created=1728041247, id='', model='meta-llama/Llama-3.2-11B-Vision-Instruct', system_fingerprint='2.3.1-native', usage=ChatCompletionOutputUsage(completion_tokens=128, prompt_tokens=43, total_tokens=171))
```

#### `openai`

Additionally, you can also use the Messages API via `openai`; you can install it via `pip` as `pip install --upgrade openai`, and then run:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key="-",
)

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                    },
                },
            ],
        },
    ],
    max_tokens=128,
)
```

Which generates the following output:

```
ChatCompletion(id='', choices=[Choice(finish_reason='length', index=0, logprobs=None, message=ChatCompletionMessage(content='The image features an astronaut rabbit on the surface of Mars. Given that rabbits (Oryctolagus cuniculus) are mammals, temperature regulation, gravity, and radiation exposure are all potential hazards in an extraterrestrial space suit designed for a rabbit. As optimal space exploration wear is human-centric, it is challenging to transpose a hypothetical rabbit astronaut suit to adapt to the specialized needs of rabbits.\n\n**Adaptations to Suit Rabbit Physiology**\n\nTo simulate a normalized temperature change, a cooling system designed for thermal comfort could be used. This system might involve a water-based cooling mechanism, similar to a panty hose or suit liner filled with', refusal=None, role='assistant', function_call=None, tool_calls=None))], created=1728041281, model='meta-llama/Llama-3.2-11B-Vision-Instruct', object='chat.completion', service_tier=None, system_fingerprint='2.3.1-native', usage=CompletionUsage(completion_tokens=128, prompt_tokens=43, total_tokens=171))
```

## Other use-cases

Additionally, some potential VLM use-cases, other than image captioning as shown above, are the following:

### Visual Question Answering (VQA)

Given an image and a question about the image, generate an answer to the question.

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":[{"type":"text","text":"Which era does this piece belong to? Give details about the era."},{"type":"image_url","image_url":{"url":"https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg"}}]}],"temperature":0.7,"top_p":0.95,"max_tokens":128,"stream":false}' \
    -H 'Content-Type: application/json'
```
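
The same request can also be sent from Python, mirroring the `huggingface_hub` snippet shown earlier (assuming the TGI service has been port-forwarded to `localhost:8080`):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Which era does this piece belong to? Give details about the era."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/huggingface/release-assets/resolve/main/rococo.jpg"
                    },
                },
            ],
        },
    ],
    max_tokens=128,
)
print(chat_completion.choices[0].message.content)
```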

For example, given a piece of art you can ask the VLM questions about it.

```
The piece you're referring to is a painting from the Rococo era, specifically from the 18th century. The Rococo style emerged in France in the early 1700s and spread throughout Europe, particularly in Germany and Russia, during the mid-1700s.

Characteristics of Rococo Art:

- Lightness and Airiness: Rococo art is characterized by its lightness and airiness, often featuring delicate lines, pastel colors, and intricate details.
- Curves and Lines: Rococo artists favored curves and lines over straight lines, creating a sense of fluidity and movement in their work
```

### Image Information Retrieval

Given an image, retrieve information from the image.

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":[{"type":"text","text":"How long does it take from invoice date to due date? Be short and concise."},{"type":"image_url","image_url":{"url":"https://huggingface.co/datasets/huggingface/release-assets/resolve/main/invoice.png"}}]}],"temperature":0.7,"top_p":0.95,"max_tokens":128,"stream":false}' \
    -H 'Content-Type: application/json'
```

For example, given an invoice, you can ask the VLM questions about information that is either present in, or can be inferred from, the image provided.

```
To calculate the time difference between the invoice date and the due date, we need to subtract the invoice date from the due date.

Invoice Date: 11/02/2019
Due Date: 26/02/2019

Time Difference = Due Date - Invoice Date
Time Difference = 26/02/2019 - 11/02/2019
Time Difference = 15 days

Therefore, it takes 15 days from the invoice date to the due date.
```

## Delete GKE Cluster

Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, you can also downscale the replicas to zero in case you want to preserve the cluster, as the GKE Cluster has been deployed on Autopilot mode i.e. node pools are created only when needed and destroyed when not needed; and by default it is just running on a single `e2-small` instance.

```bash
kubectl scale --replicas=0 deployment/tgi-deployment
```

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-llama-vision-deployment)!

### Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT on Vertex AI
https://huggingface.co/docs/google-cloud/examples/vertex-ai-notebooks-trl-full-sft-fine-tuning-on-vertex-ai.md

# Fine-tune Mistral 7B v0.3 with PyTorch Training DLC using SFT on Vertex AI

[Transformer Reinforcement Learning (TRL)](https://github.com/huggingface/trl) is a framework developed by Hugging Face to fine-tune and align both transformer language and diffusion models using methods such as Supervised Fine-Tuning (SFT), Reward Modeling (RM), Proximal Policy Optimization (PPO), Direct Preference Optimization (DPO), and others. On the other hand, Vertex AI is a Machine Learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications.

This example showcases how to create a custom training job on Vertex AI running the Hugging Face PyTorch DLC for training, using the TRL CLI to full fine-tune a 7B LLM with SFT in a multi-GPU setting.

## Setup / Configuration

First, you need to install `gcloud` in your local machine, which is the command-line tool for Google Cloud, following the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).

Then, you also need to install the `google-cloud-aiplatform` Python SDK, required to programmatically create and submit the custom training job to Vertex AI.

```python
!pip install --upgrade --quiet google-cloud-aiplatform
```

Optionally, to ease the usage of the commands within this tutorial, you should set the following environment variables for GCP:

```python
%env PROJECT_ID=your-project-id
%env LOCATION=your-location
%env BUCKET_URI=gs://hf-vertex-pipelines
%env CONTAINER_URI=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-cu121.2-3.transformers.4-42.ubuntu2204.py310
```

Then you need to log in to your GCP account and set the project ID to the one you want to use to register and deploy the models on Vertex AI.

```python
!gcloud auth login
!gcloud auth application-default login  # For local development
!gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP, such as the Vertex AI API, the Compute Engine API, and Google Container Registry related APIs.

```python
!gcloud services enable aiplatform.googleapis.com
!gcloud services enable compute.googleapis.com
!gcloud services enable container.googleapis.com
!gcloud services enable containerregistry.googleapis.com
!gcloud services enable containerfilesystem.googleapis.com
```

## Optional: Create bucket in GCS

You can use an existing bucket for storing the fine-tuning artifacts; if you already have a bucket, feel free to skip this step and jump onto the next one.

As the Vertex AI job will generate artifacts, you need to specify a Google Cloud Storage (GCS) Bucket to store those artifacts in. To do so, you need to create a GCS Bucket via the `gcloud storage buckets create` subcommand as follows:

```python
!gcloud storage buckets create $BUCKET_URI --project $PROJECT_ID --location=$LOCATION --default-storage-class=STANDARD --uniform-bucket-level-access
```

## Prepare `CustomContainerTrainingJob`

Once you have configured the environment and created the GCS Bucket (if applicable), you can proceed with the definition of the `CustomContainerTrainingJob`, which is a standard container job that runs on Vertex AI, in this case running the Hugging Face PyTorch DLC for training.

```python
import os
from google.cloud import aiplatform

aiplatform.init(
    project=os.getenv("PROJECT_ID"),
    location=os.getenv("LOCATION"),
    staging_bucket=os.getenv("BUCKET_URI"),
)
```

Before proceeding with the definition of the `CustomContainerTrainingJob`, you need to define the `accelerate` configuration file that you want to use when running the `trl sft` command, which is required since you are in a multi-GPU environment; otherwise, the default configuration will be used, and it may not get the most out of running the fine-tuning job on multiple GPUs.

You need to define the DeepSpeed Zero3 configuration by creating the following `deepspeed.yaml` file locally, containing the configuration that will be used to run the SFT fine-tuning in a distributed setting on multiple GPUs. Some of the values defined within the following configuration file are:
* `mixed_precision=bf16` as the fine-tuning will be in `bfloat16`
* `num_processes=4` as the fine-tuning will run on 4 A100 GPUs
* `num_machines=1` and `same_network=true` as the GPUs are within the same single instance

Note that DeepSpeed Zero3 has been selected as the distributed configuration for `accelerate`, but any other can be used and configured via the `accelerate config` command, that will prompt the different configurations; or just explore some pre-defined configuration files in the [Accelerate Config Zoo](https://github.com/huggingface/accelerate/tree/main/examples/config_yaml_templates).

```python
%%writefile "./assets/deepspeed.yaml"
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  deepspeed_multinode_launcher: standard
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

You now need to define a `CustomContainerTrainingJob` that runs on the Hugging Face PyTorch DLC for training, which needs to run the following sequential steps:

1. Create the `$HF_HOME/accelerate` path (if not existing already) as the `accelerate` config will be dumped there.
2. Write the content of the `deepspeed.yaml` configuration file into the cache under the `default_config.yaml` name (as that's `accelerate` default path i.e. the configuration that will be used for the fine-tuning job).
3. Add the `trl sft` command capturing the arguments that will be provided whenever the job runs.

The `CustomContainerTrainingJob` will override the default `ENTRYPOINT` of the container image provided via `container_uri`, so if the `ENTRYPOINT` is already suited to receive the arguments, then there's no need to define a custom `command`.

```python
job = aiplatform.CustomContainerTrainingJob(
    display_name="trl-full-sft",
    container_uri=os.getenv("CONTAINER_URI"),
    command=[
        "sh",
        "-c",
        " && ".join(
            (
                "mkdir -p $HF_HOME/accelerate",
                f"echo \"{open('./assets/deepspeed.yaml').read()}\" > $HF_HOME/accelerate/default_config.yaml",
                'exec trl sft "$@"',
            )
        ),
        "--",
    ],
)
```

## Define `CustomContainerTrainingJob` Requirements

Before proceeding with the `CustomContainerTrainingJob` via the Hugging Face PyTorch DLC for training, you first need to define the configuration required for the job to run successfully i.e. which GPU is capable of fine-tuning [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16`.

As a rough calculation, you could assume that the amount of GPU VRAM required to fine-tune a model in half precision is about four times the model size (read more about it in [Eleuther AI - Transformer Math 101](https://blog.eleuther.ai/transformer-math/)).
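
For a quick back-of-the-envelope check of that rule of thumb, here is a minimal sketch of the arithmetic (the parameter count used below is an approximation, not an exact figure):

```python
# Back-of-the-envelope VRAM estimate following the ~4x heuristic above
num_params = 7.2e9       # mistralai/Mistral-7B-v0.3 has roughly 7.2B parameters (approximate)
bytes_per_param = 2      # bfloat16 (half precision)
model_size_gib = num_params * bytes_per_param / 1024**3

print(f"Model weights: ~{model_size_gib:.1f} GiB")                         # ~13.4 GiB
print(f"Estimated fine-tuning footprint: ~{4 * model_size_gib:.1f} GiB")   # ~53.6 GiB of GPU VRAM
```

Per this rough estimate, the job would not fit on a single 40GiB A100, hence the multi-GPU setup used in this example.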

Alternatively, if your model is uploaded to the Hugging Face Hub, you can check the numbers in the community space [`Vokturz/can-it-run-llm`](https://huggingface.co/spaces/Vokturz/can-it-run-llm), which does those calculations for you, based on the model to fine-tune and the available hardware.

!['Vokturz/can-it-run-llm' for 'mistralai/Mistral-7B-v0.3'](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/can-it-run-llm.png)

## Run `CustomContainerTrainingJob`

As mentioned before, the job will run the Supervised Fine-Tuning (SFT) with the TRL CLI on top of [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3) in `bfloat16` using [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which is a subset from [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) with ~10k samples.

Once you have decided which resources to use to run the job, you need to define the hyperparameters accordingly to ensure that the selected instance is capable of running the job. Some of the hyperparameters that you may want to look into to avoid running into OOM errors are the following:

* **Optimizer**: by default the AdamW optimizer will be used, but alternatively lower-precision optimizers can be used to reduce the memory usage as well e.g. `adamw_bnb_8bit` (for more information on 8-bit optimizers check https://huggingface.co/docs/bitsandbytes/main/en/optimizers).
* **Batch size**: you can use a lower batch size when running into OOM, or tweak the gradient accumulation steps to simulate a similar effective batch size for updating the gradients, while providing fewer inputs per batch at a time, e.g. `batch_size=8` and `gradient_accumulation=1` is effectively the same as `batch_size=4` and `gradient_accumulation=2` (see the short sketch after this list).
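
A minimal sketch of that batch size / gradient accumulation trade-off (the figures below are illustrative, not taken from this example's configuration):

```python
# Effective (global) batch size when combining per-device batch size,
# gradient accumulation steps, and the number of GPUs
per_device_train_batch_size = 4
gradient_accumulation_steps = 2
num_gpus = 4

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 32, i.e. the same effective batch size as batch_size=8 with accumulation=1
```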

As the `CustomContainerTrainingJob` defines the command `trl sft`, the arguments to be provided are listed either in the Python reference at [trl.SFTConfig](https://huggingface.co/docs/trl/en/sft_trainer#trl.SFTConfig) or via the `trl sft --help` command.

Read more about the TRL CLI at https://huggingface.co/docs/trl/en/clis.

Since GCS FUSE is used to mount the bucket as a directory within the running container job, the mounted path follows the format `/gcs/<BUCKET_NAME>`. More information at https://cloud.google.com/vertex-ai/docs/training/code-requirements. So the `output_dir` needs to be set to the mounted GCS Bucket path, meaning that anything the `SFTTrainer` writes there will be automatically uploaded to the GCS Bucket.

```python
args = [
    # MODEL
    "--model_name_or_path=mistralai/Mistral-7B-v0.3",
    "--torch_dtype=bfloat16",
    "--attn_implementation=flash_attention_2",
    # DATASET
    "--dataset_name=timdettmers/openassistant-guanaco",
    "--dataset_text_field=text",
    # TRAINER
    "--bf16",
    "--max_seq_length=1024",
    "--per_device_train_batch_size=2",
    "--gradient_accumulation_steps=4",
    "--gradient_checkpointing",
    "--gradient_checkpointing_use_reentrant",
    "--learning_rate=0.00002",
    "--lr_scheduler_type=cosine",
    "--optim=adamw_bnb_8bit",
    "--num_train_epochs=1",
    "--logging_steps=10",
    "--do_eval",
    "--eval_steps=100",
    "--save_strategy=epoch",
    "--report_to=none",
    f"--output_dir={os.getenv('BUCKET_URI').replace('gs://', '/gcs/')}/Mistral-7B-v0.3-SFT-Guanaco",
    "--overwrite_output_dir",
    "--seed=42",
    "--log_level=info",
]
```

Then you need to call the `submit` method on the `aiplatform.CustomContainerTrainingJob`, which is a non-blocking method that will schedule the job without blocking the execution.

The arguments provided to the `submit` method are listed below:

* **`args`** defines the list of arguments to be provided to the `trl sft` command, provided as `trl sft --arg_1=value ...`.

* **`replica_count`** defines the number of replicas to run the job in, for training normally this value will be set to one.

* **`machine_type`**, **`accelerator_type`** and **`accelerator_count`** define the machine i.e. Compute Engine instance, the accelerator (if any), and the number of accelerators (ranging from 1 to 8); respectively. The `machine_type` and the `accelerator_type` are tied together, so you will need to select an instance that supports the accelerator that you are using and vice-versa. More information about the different instances at [Compute Engine Documentation - GPU machine types](https://cloud.google.com/compute/docs/gpus), and about the `accelerator_type` naming at [Vertex AI Documentation - MachineSpec](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec).

* **`base_output_dir`** defines the base directory that will be mounted within the running container from the GCS Bucket, conditioned by the `staging_bucket` argument provided to the `aiplatform.init` initially.

* (optional) **`environment_variables`** defines the environment variables to define within the running container. As you are fine-tuning a gated model i.e. [`mistralai/Mistral-7B-v0.3`](https://huggingface.co/mistralai/Mistral-7B-v0.3), you need to set the `HF_TOKEN` environment variable. Additionally, some other environment variables are defined to set the cache path (`HF_HOME`) and to ensure that the logging messages are streamed to Google Cloud Logs Explorer properly (`TRL_USE_RICH`, `ACCELERATE_LOG_LEVEL`, `TRANSFORMERS_LOG_LEVEL`, and `TQDM_POSITION`).

* (optional) **`timeout`** and **`create_request_timeout`** define the timeouts in seconds before interrupting the job execution or the job creation request (time to allocate required resources and start the execution), respectively.

* (optional) **`boot_disk_size`** defines the size in GiB of the boot disk, increased to store not only the model weights but also all the intermediate checkpoints if any; otherwise, it defaults to 100GiB which may not be sufficient in some cases.

```python
!pip install --upgrade --quiet huggingface_hub
```

```python
from huggingface_hub import interpreter_login

interpreter_login()
```

```python
from huggingface_hub import get_token

job.submit(
    args=args,
    replica_count=1,
    machine_type="a2-highgpu-4g",
    accelerator_type="NVIDIA_TESLA_A100",
    accelerator_count=4,
    base_output_dir=f"{os.getenv('BUCKET_URI')}/Mistral-7B-v0.3-SFT-Guanaco",
    environment_variables={
        "HF_HOME": "/root/.cache/huggingface",
        "HF_TOKEN": get_token(),
        "TRL_USE_RICH": "0",
        "ACCELERATE_LOG_LEVEL": "INFO",
        "TRANSFORMERS_LOG_LEVEL": "INFO",
        "TQDM_POSITION": "-1",
    },
    timeout=60 * 60 * 3,  # 3 hours (10800s)
    create_request_timeout=60 * 10,  # 10 minutes (600s)
    boot_disk_size_gb=250,
)
```

![Pipeline created in Vertex AI](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-scheduled.png)

![Vertex AI Pipeline successfully completed](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-completed.png)

![Vertex AI Pipeline logs](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-pipeline-logs.png)

![Vertex AI Run](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/vertex-ai-run.png)

![GCS Bucket with uploaded artifacts](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai/assets/gcs-bucket-artifacts.png)

Finally, you can upload the fine-tuned model to the Hugging Face Hub, or just keep it within the Google Cloud Storage (GCS) Bucket. Later on, you will be able to run the inference on top of it via either the Hugging Face PyTorch DLC for inference via the `pipeline` in `transformers`, or via the Hugging Face DLC for TGI (as the model is fine-tuned for `text-generation`).
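
For reference, a minimal local-inference sketch with the `pipeline` in `transformers` (the local path is hypothetical, assuming the artifacts were downloaded from the GCS Bucket beforehand, e.g. via `gcloud storage cp -r`, and that `accelerate` is installed for `device_map="auto"`):

```python
import torch
from transformers import pipeline

# Hypothetical local copy of the artifacts generated by the fine-tuning job
local_model_path = "./Mistral-7B-v0.3-SFT-Guanaco"

generator = pipeline(
    "text-generation",
    model=local_model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# The prompt follows the `timdettmers/openassistant-guanaco` formatting used during fine-tuning
output = generator("### Human: What is Deep Learning?### Assistant:", max_new_tokens=128)
print(output[0]["generated_text"])
```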
---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/vertex-ai/notebooks/trl-full-sft-fine-tuning-on-vertex-ai)!

### Deploy Gemma2 with multiple LoRA adapters with TGI DLC on GKE
https://huggingface.co/docs/google-cloud/examples/gke-tgi-multi-lora-deployment.md

# Deploy Gemma2 with multiple LoRA adapters with TGI DLC on GKE

Gemma 2 is an advanced, lightweight open model that enhances performance and efficiency while building on the research and technology of its predecessor and the Gemini models developed by Google DeepMind and other teams across Google. Text Generation Inference (TGI) is a toolkit developed by Hugging Face for deploying and serving LLMs, with high performance text generation. And, Google Kubernetes Engine (GKE) is a fully-managed Kubernetes service in Google Cloud that can be used to deploy and operate containerized applications at scale using GCP's infrastructure.

This example showcases how to deploy Gemma 2 2B from the Hugging Face Hub with multiple LoRA adapters fine-tuned for different purposes such as coding, SQL, or Japanese, on a GKE Cluster running the Hugging Face DLC for TGI i.e. a purpose-built container to deploy LLMs in a secure and managed environment.

## Setup / Configuration

First, you need to install both `gcloud` and `kubectl` on your local machine, which are the command-line tools for Google Cloud and Kubernetes, respectively, to interact with GCP and the GKE Cluster.

- To install `gcloud`, follow the instructions at [Cloud SDK Documentation - Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).
- To install `kubectl`, follow the instructions at [Kubernetes Documentation - Install Tools](https://kubernetes.io/docs/tasks/tools/#kubectl).

Optionally, to ease the usage of the commands within this tutorial, you should set the following environment variables for GCP:

```bash
export PROJECT_ID=your-project-id
export LOCATION=your-location
export CLUSTER_NAME=your-cluster-name
```

Then you need to log in to your GCP account and set the project ID to the one you want to use for the deployment of the GKE Cluster.

```bash
gcloud auth login
gcloud auth application-default login  # For local development
gcloud config set project $PROJECT_ID
```

Once you are logged in, you need to enable the necessary service APIs in GCP i.e. the Google Kubernetes Engine API and the Google Container Registry API, which are necessary for the deployment of the GKE Cluster and the Hugging Face DLC for TGI.

```bash
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
```

Additionally, to use `kubectl` with the GKE Cluster credentials, you also need to install the `gke-gcloud-auth-plugin`, which can be installed with `gcloud` as follows:

```bash
gcloud components install gke-gcloud-auth-plugin
```

The `gke-gcloud-auth-plugin` does not need to be installed via `gcloud` specifically; to read more about the alternative installation methods, please visit [GKE Documentation - Install kubectl and configure cluster access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin).

## Create GKE Cluster

Once everything's set up, you can proceed with the creation of the GKE Cluster and the node pool, which in this case will be a single GPU node, in order to use the GPU accelerator for high performance inference, also following TGI recommendations based on their internal optimizations for GPUs.

To deploy the GKE Cluster, the "Autopilot" mode will be used as it is the recommended one for most of the workloads, since the underlying infrastructure is managed by Google. Alternatively, you can also use the "Standard" mode.

Before creating the GKE Autopilot Cluster, it is important to check the [GKE Documentation - Optimize Autopilot Pod performance by choosing a machine series](https://cloud.google.com/kubernetes-engine/docs/how-to/performance-pods), since not all versions support GPU accelerators, e.g. `nvidia-l4` is not supported in the GKE cluster versions 1.28.3 or lower.

```bash
gcloud container clusters create-auto $CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$LOCATION \
    --release-channel=stable \
    --cluster-version=1.29 \
    --no-autoprovisioning-enable-insecure-kubelet-readonly-port
```

To select the specific version of the GKE Cluster available in your location, you can run the following command:

```bash
gcloud container get-server-config \
    --flatten="channels" \
    --filter="channels.channel=STABLE" \
    --format="yaml(channels.channel,channels.defaultVersion)" \
    --location=$LOCATION
```

For more information please visit [GKE Documentation - Specifying cluster version](https://cloud.google.com/kubernetes-engine/versioning#specifying_cluster_version).

![GKE Cluster in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-multi-lora-deployment/imgs/gke-cluster.png)

Once the GKE Cluster is created, you can get the credentials to access it via `kubectl` with the following command:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --location=$LOCATION
```

## Get Hugging Face token and set secrets in GKE

As [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it) is a gated model, you need to set a Kubernetes secret with the Hugging Face Hub token via `kubectl`.

To generate a custom token for the Hugging Face Hub, you can follow the instructions at [Hugging Face Hub - User access tokens](https://huggingface.co/docs/hub/en/security-tokens); and the recommended way of setting it is to install the `huggingface_hub` Python SDK as follows:

```bash
pip install --upgrade --quiet huggingface_hub
```

And then log in with the generated token with read-access over the gated/private model:

```bash
huggingface-cli login
```

Finally, you can create the Kubernetes secret with the generated token for the Hugging Face Hub as follows using the `huggingface_hub` Python SDK to retrieve the token:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=$(python -c "from huggingface_hub import get_token; print(get_token())") \
    --dry-run=client -o yaml | kubectl apply -f -
```

Or, alternatively, you can directly set the token as follows:

```bash
kubectl create secret generic hf-secret \
    --from-literal=hf_token=hf_*** \
    --dry-run=client -o yaml | kubectl apply -f -
```

![GKE Secret in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-multi-lora-deployment/imgs/gke-secrets.png)

More information on how to set Kubernetes secrets in a GKE Cluster at [Secret Manager Documentation - Use Secret Manager add-on with Google Kubernetes Engine](https://cloud.google.com/secret-manager/docs/secret-manager-managed-csi-component).

## Deploy TGI

Now you can proceed to the Kubernetes deployment of the Hugging Face DLC for TGI, serving the [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it) model and multiple LoRA adapters fine-tuned on top of it, from the Hugging Face Hub.

To explore all the models that can be served via TGI, you can explore [the models tagged with `text-generation-inference` in the Hub](https://huggingface.co/models?other=text-generation-inference).

The Hugging Face DLC for TGI will be deployed via `kubectl`, from the configuration files in the [`config/`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment/config/) directory:

- [`deployment.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment/config/deployment.yaml): contains the deployment details of the pod, including the reference to the Hugging Face DLC for TGI, setting the `MODEL_ID` to [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it), and the `LORA_ADAPTERS` to `google-cloud-partnership/gemma-2-2b-it-lora-magicoder,google-cloud-partnership/gemma-2-2b-it-lora-sql,google-cloud-partnership/gemma-2-2b-it-lora-jap-en`, which correspond to the following adapters:
  - [`google-cloud-partnership/gemma-2-2b-it-lora-sql`](https://huggingface.co/google-cloud-partnership/gemma-2-2b-it-lora-sql): fine-tuned with [`gretelai/synthetic_text_to_sql`](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql) to generate SQL queries with an explanation, given an SQL context and a prompt / question about it.
  - [`google-cloud-partnership/gemma-2-2b-it-lora-magicoder`](https://huggingface.co/google-cloud-partnership/gemma-2-2b-it-lora-magicoder): fine-tuned with [`ise-uiuc/Magicoder-OSS-Instruct-75K`](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K) to generate code in diverse programming languages such as Python, Rust, or C, among many others; based on an input problem.
  - [`google-cloud-partnership/gemma-2-2b-it-lora-jap-en`](https://huggingface.co/google-cloud-partnership/gemma-2-2b-it-lora-jap-en): fine-tuned with [`Jofthomas/japanese-english-translation`](https://huggingface.co/datasets/Jofthomas/japanese-english-translation), a synthetically generated dataset of short Japanese sentences translated to English; to translate English to Japanese and the other way around.
- [`service.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment/config/service.yaml): contains the service details of the pod, exposing the port 8080 for the TGI service.
- (optional) [`ingress.yaml`](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment/config/ingress.yaml): contains the ingress details of the pod, exposing the service to the external world so that it can be accessed via the ingress IP.

Note that the selected LoRA adapters are not intended to be used in production environments, as the fine-tuned adapters have not been tested extensively.

```bash
git clone https://github.com/huggingface/Google-Cloud-Containers
kubectl apply -f Google-Cloud-Containers/examples/gke/tgi-multi-lora-deployment/config
```

The Kubernetes deployment may take a few minutes to become ready; you can check its status with the following command:

```bash
kubectl get pods
```

Alternatively, you can just wait for the deployment to be ready with the following command:

```bash
kubectl wait --for=condition=Available --timeout=700s deployment/tgi-deployment
```

![GKE Deployment in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-multi-lora-deployment/imgs/gke-deployment.png)

![GKE Deployment Logs in the GCP Console](https://raw.githubusercontent.com/huggingface/Google-Cloud-Containers/main/examples/gke/tgi-multi-lora-deployment/imgs/gke-deployment-logs.png)
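Besides the GCP Console views above, you can also follow the container logs from the CLI to verify that the base model and the LoRA adapters were downloaded and loaded correctly; a minimal sketch, assuming the deployment is named `tgi-deployment` as in the provided configuration files:

```bash
# Stream the logs of the TGI deployment (Ctrl+C to stop following)
kubectl logs -f deployment/tgi-deployment
```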

## Inference with TGI

To run inference against the deployed TGI service, you first need to make sure that the service is accessible; you can do so by either:

- Port-forwarding the deployed TGI service to port 8080, so that it can be accessed via `localhost`, with the command:

  ```bash
  kubectl port-forward service/tgi-service 8080:8080
  ```

- Accessing the TGI service via the external IP of the ingress, which is the default scenario here since the ingress configuration is defined in the `config/ingress.yaml` file (although it can be skipped in favour of port-forwarding); the IP can be retrieved with the following command:

  ```bash
  kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  ```
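Either way, before sending any generation request, you may want to verify that TGI is up and responding; a quick sketch against the port-forwarded `localhost` endpoint, using TGI's `/health` and `/info` routes:

```bash
# Returns an HTTP 200 status code once the model and adapters are loaded
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health

# Prints the served model and runtime configuration as JSON
curl -s http://localhost:8080/info
```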

### Via cURL

To send a POST request to the TGI service using `cURL`, you can run the following command:

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":"What is Deep Learning?"}],"temperature":0.7,"top_p":0.95,"max_tokens":128}}' \
    -H 'Content-Type: application/json'
```

Or send a POST request to the ingress IP instead:

```bash
curl http://$(kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":"What is Deep Learning?"}],"temperature":0.7,"top_p":0.95,"max_tokens":128}}' \
    -H 'Content-Type: application/json'
```

As you are serving multiple LoRA adapters in this case, to use them you need to specify the `model` parameter when using the `/v1/chat/completions` endpoint (or the `adapter_id` parameter when using the `/generate` endpoint) so that the corresponding LoRA adapter is used. Otherwise, the base model will be used, meaning that the adapters are only applied when explicitly specified.

For example, say that you want to generate a piece of code for a problem that you cannot solve. Then you should ideally use the fine-tuned adapter [`google-cloud-partnership/gemma-2-2b-it-lora-magicoder`](https://huggingface.co/google-cloud-partnership/gemma-2-2b-it-lora-magicoder), which is specifically fine-tuned for code generation; alternatively, you could use the base instruction-tuned model, as it can tackle a wide variety of tasks, but the Japanese-English translation adapter, for instance, would not be a good pick for that task.

```bash
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -d '{"messages":[{"role":"user","content":"You are given a vector of integers, A, of length n. Your task is to implement a function that finds the maximum product of any two distinct elements in the vector. Write a function in Rust to return this maximum product. Function Signature: rust fn max_product(a: Vec) -> i32  Input: - A vector a of length n (2  45"}],"temperature":0.7,"top_p":0.95,"max_tokens":256,"model":"google-cloud-partnership/gemma-2-2b-it-lora-magicoder"}}' \
    -H 'Content-Type: application/json'
```

This generates the following solution to the given prompt:

```
{"object":"chat.completion","id":"","created":1727378101,"model":"google/gemma-2-2b-it","system_fingerprint":"2.3.1-dev0-native","choices":[{"index":0,"message":{"role":"assistant","content":"\`\`\`rust\nfn max_product(a: Vec) -> i32 {\n    let mut max1 = a[0];\n    let mut max2 = a[1];\n    if max2  max1 {\n            max2 = max1;\n            max1 = a[i];\n        } else if a[i] > max2 {\n            "},"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":163,"completion_tokens":128,"total_tokens":291}}
```

Translated into Rust code (and completed, since the response above was truncated by the `max_tokens` limit), that would be roughly:

```rust
fn max_product(a: Vec<i32>) -> i32 {
    // Guard against vectors with fewer than two elements
    if a.len() < 2 {
        panic!("the vector must contain at least two elements");
    }
    let mut max_product = i32::MIN;
    // Compare every distinct pair and keep the largest product
    for i in 0..a.len() {
        for j in (i + 1)..a.len() {
            if a[i] * a[j] > max_product {
                max_product = a[i] * a[j];
            }
        }
    }
    max_product
}
```
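Alternatively, you can call the `/generate` endpoint directly and select an adapter via the `adapter_id` parameter; a hedged sketch below, where the SQL-oriented prompt is just an illustrative placeholder:

```bash
curl http://localhost:8080/generate \
    -X POST \
    -d '{"inputs":"Write a SQL query that counts the rows of the users table.","parameters":{"max_new_tokens":128,"adapter_id":"google-cloud-partnership/gemma-2-2b-it-lora-sql"}}' \
    -H 'Content-Type: application/json'
```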

### Via Python

To run the inference using Python, you can either use the `huggingface_hub` Python SDK (recommended) or the `openai` Python SDK.

In the examples below, `localhost` will be used, but if you deployed TGI with the ingress, feel free to use the ingress IP instead, as mentioned above (without specifying the port).
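For instance, a small sketch to capture the ingress IP once, so that it can be reused as the host when building the `base_url` in the snippets below:

```bash
# Store the ingress IP in an environment variable (hypothetical variable name)
export TGI_HOST=$(kubectl get ingress tgi-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://$TGI_HOST"
```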

#### `huggingface_hub`

You can install it via `pip` as `pip install --upgrade --quiet huggingface_hub`, and then run the following snippet to mimic the `cURL` commands above, i.e. sending requests to the Messages API while providing the adapter identifier via the `model` parameter:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(base_url="http://localhost:8080", api_key="-")

chat_completion = client.chat.completions.create(
    model="google-cloud-partnership/gemma-2-2b-it-lora-magicoder",
    messages=[
        {"role": "user", "content": "You are given a vector of integers, A, of length n. Your task is to implement a function that finds the maximum product of any two distinct elements in the vector. Write a function in Rust to return this maximum product. Function Signature: rust fn max_product(a: Vec) -> i32  Input: - A vector a of length n (2  45"},
    ],
    max_tokens=128,
)
```

Alternatively, you can format the prompt yourself and send it via the Text Generation API, providing the adapter identifier via the `adapter_id` argument, as follows:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080", api_key="-")

generation = client.text_generation(
    prompt="You are given a vector of integers, A, of length n. Your task is to implement a function that finds the maximum product of any two distinct elements in the vector. Write a function in Rust to return this maximum product. Function Signature: rust fn max_product(a: Vec) -> i32  Input: - A vector a of length n (2  45",
    max_new_tokens=128,
    adapter_id="google-cloud-partnership/gemma-2-2b-it-lora-magicoder",
)
```

#### `openai`

Additionally, you can use the Messages API via the `openai` SDK; install it via `pip` as `pip install --upgrade openai`, and then run:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1/",
    api_key="-",
)

chat_completion = client.chat.completions.create(
    model="google-cloud-partnership/gemma-2-2b-it-lora-magicoder",
    messages=[
        {"role": "user", "content": "You are given a vector of integers, A, of length n. Your task is to implement a function that finds the maximum product of any two distinct elements in the vector. Write a function in Rust to return this maximum product. Function Signature: rust fn max_product(a: Vec) -> i32  Input: - A vector a of length n (2  45"},
    ],
    max_tokens=128,
)
```

## Delete GKE Cluster

Finally, once you are done using TGI on the GKE Cluster, you can safely delete the GKE Cluster to avoid incurring unnecessary costs.

```bash
gcloud container clusters delete $CLUSTER_NAME --location=$LOCATION
```

Alternatively, if you want to preserve the cluster, you can scale the deployed pod down to 0 replicas, since the default GKE Cluster deployed in GKE Autopilot mode runs just a single `e2-small` instance.

```bash
kubectl scale --replicas=0 deployment/tgi-deployment
```
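Whenever you want to serve traffic again, you can scale the deployment back up in the same way, for example:

```bash
kubectl scale --replicas=1 deployment/tgi-deployment
```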

---

📍 Find the complete example on GitHub [here](https://github.com/huggingface/Google-Cloud-Containers/tree/main/examples/gke/tgi-multi-lora-deployment)!

### Available DLCs on Google Cloud
https://huggingface.co/docs/google-cloud/containers/available.md

# Available DLCs on Google Cloud

Below you can find a listing of all the Deep Learning Containers (DLCs) available on Google Cloud.

The listing below **only contains the latest version of each of the Hugging Face DLCs**; the full listing of the published containers available in Google Cloud can be found either in the [Google Cloud Deep Learning Containers Documentation](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face), in the [Google Cloud Artifact Registry](https://console.cloud.google.com/artifacts/docker/deeplearning-platform-release/us/gcr.io), or via the `gcloud container images list --repository="us-docker.pkg.dev/deeplearning-platform-release/gcr.io" | grep "huggingface-"` command.

## Text Generation Inference (TGI)

| Container URI                                                                                                           | Path                                                                                                                                        | Accelerator |
| ----------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-4.ubuntu2204.py311 | [text-generation-inference-gpu.2.4.0](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/tgi/gpu/2.4.0/Dockerfile) | GPU         |
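
For example, if you want to inspect the TGI DLC listed above locally, you can pull it with Docker (assuming Docker is installed and, if required, authenticated against the registry via `gcloud auth configure-docker us-docker.pkg.dev`):

```bash
docker pull us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu124.2-4.ubuntu2204.py311
```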

## Text Embeddings Inference (TEI)

| Container URI                                                                                                     | Path                                                                                                                                        | Accelerator |
| ----------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cu122.1-6.ubuntu2204 | [text-embeddings-inference-gpu.1.6.0](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/tei/gpu/1.6.0/Dockerfile) | GPU         |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-embeddings-inference-cpu.1-6              | [text-embeddings-inference-cpu.1.6.0](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/tei/cpu/1.6.0/Dockerfile) | CPU         |

## PyTorch Inference

| Container URI                                                                                                                     | Path                                                                                                                                                                                                              | Accelerator |
| --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cu121.2-3.transformers.4-48.ubuntu2204.py311 | [huggingface-pytorch-inference-gpu.2.3.1.transformers.4.48.0.py311](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/pytorch/inference/gpu/2.3.1/transformers/4.48.0/py311/Dockerfile) | GPU         |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-inference-cpu.2-3.transformers.4-48.ubuntu2204.py311   | [huggingface-pytorch-inference-cpu.2.3.1.transformers.4.48.0.py311](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/pytorch/inference/cpu/2.3.1/transformers/4.48.0/py311/Dockerfile) | CPU         |

## PyTorch Training

| Container URI                                                                                                                    | Path                                                                                                                                                                                                            | Accelerator |
| -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-pytorch-training-cu121.2-3.transformers.4-48.ubuntu2204.py311 | [huggingface-pytorch-training-gpu.2.3.1.transformers.4.48.0.py311](https://github.com/huggingface/Google-Cloud-Containers/tree/main/containers/pytorch/training/gpu/2.3.1/transformers/4.48.0/py311/Dockerfile) | GPU         |

### Introduction
https://huggingface.co/docs/google-cloud/containers/introduction.md

# Introduction

[Hugging Face Deep Learning Containers (DLCs) for Google Cloud](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face) are a set of Docker images for training and deploying Transformers, Sentence Transformers, and Diffusers models on Google Cloud Vertex AI and Google Kubernetes Engine (GKE).

The [Google-Cloud-Containers](https://github.com/huggingface/Google-Cloud-Containers) repository contains the container files for building Hugging Face-specific Deep Learning Containers (DLCs), as well as examples on how to train and deploy models on Google Cloud. The containers are publicly maintained, updated, and released periodically by Hugging Face and the Google Cloud team, and are available to all Google Cloud customers within [Google Cloud's Artifact Registry](https://cloud.google.com/deep-learning-containers/docs/choosing-container#hugging-face). Containers are created for each supported combination of use case (training, inference), accelerator type (CPU, GPU, TPU), and framework (PyTorch, TGI, TEI).
