# Computer-Vision-Course

## Docs

- [Introduction](https://huggingface.co/learn/computer-vision-course/unit10/point_clouds.md)
- [Synthetic Data Generation with Diffusion Models](https://huggingface.co/learn/computer-vision-course/unit10/datagen-diffusion-models.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit10/introduction.md)
- [Using a 3D Renderer to Generate Synthetic Data](https://huggingface.co/learn/computer-vision-course/unit10/blenderProc.md)
- [Synthetic Data Generation Using DCGAN](https://huggingface.co/learn/computer-vision-course/unit10/synthetic-lung-images.md)
- [Challenges and Opportunities Associated With Using Synthetic Data](https://huggingface.co/learn/computer-vision-course/unit10/challenges.md)
- [Synthetic Datasets](https://huggingface.co/learn/computer-vision-course/unit10/synthetic_datasets.md)
- [Generative Adversarial Networks](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/gans.md)
- [Variational Autoencoders](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/variational_autoencoders.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/introduction/introduction.md)
- [Introduction to Stable Diffusion](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/stable-diffusion.md)
- [Introduction to Diffusion Models](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/introduction.md)
- [Control over Diffusion Models](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/simple-explanation.md)
- [StyleGAN Variants](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/gans-vaes/stylegan.md)
- [Privacy, Bias and Societal Concerns](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/practical-applications/ethical-issues.md)
- [CycleGAN Introduction](https://huggingface.co/learn/computer-vision-course/unit5/generative-models/practical-applications/cycle_gan.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit11/1.md)
- [Zero-shot Learning](https://huggingface.co/learn/computer-vision-course/unit11/2.md)
- [Welcome to the Community Computer Vision Course](https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome.md)
- [Table of Contents for Notebooks](https://huggingface.co/learn/computer-vision-course/unit0/welcome/TableOfContents.md)
- [Object Detection](https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/object_detection.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/introduction.md)
- [Image Segmentation](https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/segmentation.md)
- [Metric and Relative Monocular Depth Estimation: An Overview. Fine-Tuning Depth Anything V2 👐 📚](https://huggingface.co/learn/computer-vision-course/unit8/monocular_depth_estimation.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit8/3d_measurements_stereo_vision.md)
- [Neural Radiance Fields (NeRFs)](https://huggingface.co/learn/computer-vision-course/unit8/nerf.md)
- [Camera models](https://huggingface.co/learn/computer-vision-course/unit8/terminologies/camera-models.md)
- [Basics of Linear Algebra for 3D Data](https://huggingface.co/learn/computer-vision-course/unit8/terminologies/linear-algebra.md)
- [Representations for 3D Data](https://huggingface.co/learn/computer-vision-course/unit8/terminologies/representations.md)
- [Applications of 3D Vision](https://huggingface.co/learn/computer-vision-course/unit8/introduction/applications.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit8/introduction/introduction.md)
- [A Brief History of 3D Vision](https://huggingface.co/learn/computer-vision-course/unit8/introduction/brief_history.md)
- [Novel View Synthesis](https://huggingface.co/learn/computer-vision-course/unit8/3d-vision/nvs.md)
- [Supplementary reading and resources 🤗🌎](https://huggingface.co/learn/computer-vision-course/unit12/supplementary-material.md)
- [Ethics and Bias in AI 🧑‍🤝‍🧑](https://huggingface.co/learn/computer-vision-course/unit12/ethics-bias-ai.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit12/introduction.md)
- [Exploring Ethical Foundations in CV Models](https://huggingface.co/learn/computer-vision-course/unit12/pre-intro.md)
- [Hugging Face's efforts: Ethics and Society 🤗🌎](https://huggingface.co/learn/computer-vision-course/unit12/conclusion.md)
- [Introduction to model optimization for deployment](https://huggingface.co/learn/computer-vision-course/unit9/intro_to_model_optimization.md)
- [Model Deployment Considerations](https://huggingface.co/learn/computer-vision-course/unit9/model_deployment.md)
- [Model optimization tools and frameworks](https://huggingface.co/learn/computer-vision-course/unit9/tools_and_frameworks.md)
- [Supplementary Reading and Resources 🤗](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/supplementary-material.md)
- [Multimodal Tasks and Models](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/tasks-models-part1.md)
- [A Multimodal World](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/a_multimodal_world.md)
- [Exploring Multimodal Text and Vision Models: Uniting Senses in AI](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/pre-intro.md)
- [Introduction to Vision Language Models](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/vlm-intro.md)
- [Transfer Learning of Multimodal Models](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/transfer_learning.md)
- [Multimodal Text Generation (BLIP)](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/blip.md)
- [Losses](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/losses.md)
- [CLIP and Relatives](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/Introduction.md)
- [Contrastive Language-Image Pre-training (CLIP)](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/clip.md)
- [Multimodal Object Detection (OWL-ViT)](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/owl_vit.md)
- [Feature Matching](https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/feature-matching.md)
- [Feature Description](https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/feature_description.md)
- [Real-world Applications of Feature Extraction in Computer Vision](https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/real-world-applications.md)
- [Image](https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/image.md)
- [Pre-processing for Computer Vision Tasks](https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/examples-preprocess.md)
- [Imaging in Real-life](https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/extension-image.md)
- [Image Acquisition Fundamentals in Digital Processing](https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/imaging.md)
- [Applications of Computer Vision](https://huggingface.co/learn/computer-vision-course/unit1/chapter1/applications.md)
- [What Is Computer Vision](https://huggingface.co/learn/computer-vision-course/unit1/chapter1/definition.md)
- [Vision](https://huggingface.co/learn/computer-vision-course/unit1/chapter1/motivation.md)
- [Transfer Learning and Fine-tuning Vision Transformers for Image Classification](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformers-for-image-classification.md)
- [Vision Transformers for Object Detection](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformer-for-object-detection.md)
- [Knowledge Distillation with Vision Transformers](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/knowledge-distillation.md)
- [Convolutional Vision Transformer (CvT)](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/cvt.md)
- [Swin Transformer](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/swin-transformer.md)
- [MobileViT v2](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/mobilevit.md)
- [Transformer-based image segmentation](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformers-for-image-segmentation.md)
- [Dilated Neighborhood Attention Transformer (DINAT)](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/dinat.md)
- [OneFormer: One Transformer to Rule Universal Image Segmentation](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/oneformer.md)
- [DEtection TRansformer (DETR)](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/detr.md)
- [Hyena](https://huggingface.co/learn/computer-vision-course/unit13/hyena.md)
- [Image-based Joint-Embedding Predictive Architecture (I-JEPA)](https://huggingface.co/learn/computer-vision-course/unit13/i-jepa.md)
- [Overview](https://huggingface.co/learn/computer-vision-course/unit13/hiera.md)
- [Retention In Vision](https://huggingface.co/learn/computer-vision-course/unit13/retention.md)
- [MobileNet](https://huggingface.co/learn/computer-vision-course/unit2/cnns/mobilenet.md)
- [GoogLeNet](https://huggingface.co/learn/computer-vision-course/unit2/cnns/googlenet.md)
- [Transfer Learning](https://huggingface.co/learn/computer-vision-course/unit2/cnns/intro-transfer-learning.md)
- [ConvNext - A ConvNet for the 2020s (2022)](https://huggingface.co/learn/computer-vision-course/unit2/cnns/convnext.md)
- [ResNet (Residual Network)](https://huggingface.co/learn/computer-vision-course/unit2/cnns/resnet.md)
- [YOLO](https://huggingface.co/learn/computer-vision-course/unit2/cnns/yolo.md)
- [Very Deep Convolutional Networks for Large Scale Image Recognition (2014)](https://huggingface.co/learn/computer-vision-course/unit2/cnns/vgg.md)
- [Introduction to Convolutional Neural Networks](https://huggingface.co/learn/computer-vision-course/unit2/cnns/introduction.md)
- [Let's Dive Further with MobileNet](https://huggingface.co/learn/computer-vision-course/unit2/cnns/mobilenetextra.md)
- [Video Processing Basics](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/video-processing-basics.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/introduction-to-video.md)
- [CNN Based Video Models](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/cnn-based-video-model.md)
- [Multimodal Based Video Models](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/multimodal-based-video-models.md)
- [Introduction](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/rnn-based-video-models.md)
- [Transformers in Video Processing (Part 1)](https://huggingface.co/learn/computer-vision-course/unit7/video-processing/transformers-based-models.md)

### Introduction
https://huggingface.co/learn/computer-vision-course/unit10/point_clouds.md

# Introduction

## What is a point cloud?

A point cloud is a collection of individual points, each representing a sample of a surface within a three-dimensional space denoted by [x, y, z] coordinates. Beyond their spatial coordinates, these points often carry additional attributes like normals, RGB color, albedo, and Bidirectional Reflectance Distribution Function (BRDF).

Here, albedo is the measure of how much light a surface reflects. It's essentially the ratio of reflected light to the incident light that strikes the surface. In simpler terms, it describes how much of the incoming light is bounced back. A high albedo indicates a surface that reflects a lot of light, such as snow, while a low albedo suggests a surface that absorbs more light, like asphalt.

The BRDF is a function that describes how light is scattered or reflected at an opaque surface. It details the way light is reflected at an intersection point on a surface, considering the incoming light direction and the outgoing direction. It provides a mathematical description of the surface's reflective properties, including factors like glossiness, roughness, and the distribution of reflected light over different angles.
These attributes serve crucial roles in various applications such as modeling, rendering, and scene comprehension.

While the concept of point cloud data isn't new and has been integral in fields like graphics and physics simulation for many years, its significance has notably surged due to two key trends.
Firstly, the widespread availability of cost-effective and user-friendly point cloud acquisition devices has significantly increased accessibility.

Secondly, applications such as Augmented Reality and autonomous vehicles have further underscored their relevance in today's technological landscape.

![An Example of Point Clouds in Action](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/point_cloud_example.jpeg)

**Now that we know what a Point Cloud is, what can we do with them?**

3D point data is used mainly for self-driving capabilities, but other computer vision systems such as drones and robots now also rely on LiDAR for better visual perception. LiDAR is a remote sensing process that collects measurements used to create 3D models and maps of objects and environments. Using ultraviolet, visible, or near-infrared light, LiDAR gauges spatial relationships and shapes by measuring the time it takes for signals to bounce off objects and return to the scanner.

## Generation and Data Representation

We will be using the Python libraries [point-cloud-utils](https://github.com/fwilliams/point-cloud-utils) and [open-3d](https://github.com/isl-org/Open3D). point-cloud-utils can be installed with:

```bash
pip install point-cloud-utils
```

open-3d can be installed with:

```bash
pip install open3d
```

or, for a smaller CPU-only version:

```bash
pip install open3d-cpu
```

Now, we first need to understand the formats in which these point clouds are stored, and for that we need to look at mesh file formats.

**Why?**

- `point-cloud-utils` supports reading common mesh formats (PLY, STL, OFF, OBJ, 3DS, VRML 2.0, X3D, COLLADA).
- If it can be imported into [MeshLab](https://github.com/cnr-isti-vclab/meshlab), we can read it! (from their readme)

The type of file is inferred from its file extension. Some of the extensions supported are:

**PLY (Polygon File Format)**

- A simple PLY object consists of a collection of elements that represent the object: a list of (x, y, z) vertex triplets and a list of faces, where each face is a list of indices into the vertex list.
- Vertices and faces are two examples of elements and the majority of the PLY file consists of these two elements.
- New properties can also be created and attached to the elements of an object, but these should be added in such a way that old programs do not break when these new properties are encountered.

**STL (Standard Tessellation Language)**

- This format approximates the surfaces of a solid model with triangles.
- These triangles are also known as facets, where each facet is described by a perpendicular direction and three points representing the vertices of the triangle.
- However, these files have no description of Color and Texture.

**OFF (Object File Format)**

- Object File Format (.OFF) files are used to represent the geometry of a model by specifying the polygons of the model's surface. The polygons can have any number of vertices.
- It supports ASCII text versions of objects for interchange, and binary versions for efficient reading and writing.
- Files in this format typically use the .off extension (not to be confused with the Wavefront .obj format).

**3DS (3D Studio)**

- A file with .3ds extension represents the 3D Studio mesh file format used by Autodesk 3D Studio.
- The 3DS format utilizes a binary file structure, enabling faster and smaller file sizes compared to text-based formats, with data organized into chunks within the file.
- These Chunks store the shapes, lighting, and viewing information that together represent the three-dimensional scene.

**X3D (Extensible 3D Graphics)**

- X3D is an XML based 3D graphics file format for presentation of 3D information. It is a modular standard and is defined through several ISO specifications.
- The format supports vector and raster graphics, transparency, lighting effects, and animation settings including rotations, fades, and swings.
- Unlike STL, X3D can encode color information, which is used when printing a model on a color 3D printer.

**DAE (Digital Asset Exchange)**

- DAE files are built on the COLLADA (COLLAborative Design Activity) XML schema, an open standard for the exchange of digital assets among graphics software applications.
- The format's biggest selling point is its compatibility across multiple platforms.
- COLLADA files aren't restricted to one program or manufacturer. Instead, they offer a standard way to store 3D assets.
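
As a quick sanity check that the installation works, here is a minimal sketch of loading a file with both libraries (the file name `model.ply` is just a placeholder for any mesh or point cloud file you have):

```python
import point_cloud_utils as pcu
import open3d as o3d

# point-cloud-utils: load the vertices and faces of a mesh
# ("model.ply" is a placeholder path)
v, f = pcu.load_mesh_vf("model.ply")
print(v.shape, f.shape)  # (num_vertices, 3), (num_faces, 3)

# open3d: read the same file as a point cloud and visualize it
pcd = o3d.io.read_point_cloud("model.ply")
print(pcd)  # prints the number of points
o3d.visualization.draw_geometries([pcd])
```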

### Synthetic Data Generation with Diffusion Models
https://huggingface.co/learn/computer-vision-course/unit10/datagen-diffusion-models.md

# Synthetic Data Generation with Diffusion Models

Imagine trying to train a model for tumor segmentation. Since medical imaging data is hard to gather, it would be really difficult for the model to converge.
Ideally, we expect to have at least enough data to build a simple baseline, but what if you have only a few samples? Synthetic data generation methods try to solve this dilemma, and we now have many more options thanks to the boom of generative models!

As you've seen in the previous sections, it is possible to use generative models such as DCGAN to generate synthetic images. In this section, we will focus on diffusion models using [diffusers](https://huggingface.co/docs/diffusers/index)!

## Recap of the Diffusion Models

Diffusion models are generative models that gained significant popularity in recent years, thanks to their capabilities to produce high-quality images. Nowadays, they're widely used in image, video, and text synthesis.

A diffusion model works by learning to denoise random Gaussian noise step-by-step. The training process requires adding Gaussian noise to the input samples and letting the model learn denoising. 

Diffusion models are generally conditioned on some kind of input besides the data distribution, such as text prompts, images, or even audio. Additionally, it is also possible to [build an unconditional generator](https://huggingface.co/docs/diffusers/training/unconditional_training).

There are many underlying concepts behind the model's inner workings, but a simplified version looks like this:

Firstly, the model adds noise to the input and processes it.

![noising](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/noising.jpg?download=true)

Then the model learns to denoise the given data distribution.

![denoising](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/denoising.jpg?download=true)

We won't dive deep into the theory, but understanding how a diffusion model works will come in really handy when we need to pick a technique to generate synthetic data for our use case.

## Text-To-Image Diffusion Model: Stable Diffusion

Essentially, the way Stable Diffusion (SD) works is the same as we mentioned above. It uses three main components that help it to produce high-quality images.

1. **The diffusion process:** The input is processed multiple times to generate useful information about the image. The "usefulness" is learned while training the model.

2. **Image encoder and decoder model:** Lets the model compress images from pixel space to a smaller latent space, abstracting away less important information while improving performance.

3. **Optional conditional encoder:** This component is used to condition the generation process on an input. This extra input can be text prompts, images, audio, and other representations. Originally, it was a text encoder.

So while the general workflow looks like this:
![general stable diffusion workflow](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/general-workflow.png?download=true)

Originally we used a text encoder to condition the model on our prompts:
![original stable diffusion workflow](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/stable-diffusion-workflow.png?download=true)

This short overview is just the tip of the iceberg! If you wish to dive deep into the theory behind Stable Diffusion (or diffusion models), you can check out the [further reading section](#further-reading-about-stable-diffusion)!

[diffusers](https://huggingface.co/docs/diffusers/index) provides ready-to-use pipelines for different tasks, such as:

| Task | Description | Pipeline |
|------|-------------|----------|
| Unconditional Image Generation | generate an image from Gaussian noise | [unconditional_image_generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation) |
| Text-Guided Image Generation | generate an image given a text prompt |[conditional_image_generation](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation) |
| Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | [img2img](https://huggingface.co/docs/diffusers/using-diffusers/img2img) |
| Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | [inpaint](https://huggingface.co/docs/diffusers/using-diffusers/inpaint) |
| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | [depth2img](https://huggingface.co/docs/diffusers/using-diffusers/depth2img) |

There's also a complete list of supported tasks that you can find from the [Diffusers Summary](https://huggingface.co/docs/diffusers/api/pipelines/overview#diffusers-summary) table.

This means we have many tools under our belt to generate synthetic data!
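
As a minimal sketch of what using one of these pipelines looks like (the checkpoint id and the prompt below are illustrative choices, not part of the course material, and a CUDA GPU is assumed):

```python
import torch
from diffusers import DiffusionPipeline

# Load a pre-trained text-to-image pipeline; any compatible Stable Diffusion
# checkpoint from the Hub should work here.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Generate one synthetic sample from a text prompt
image = pipe(prompt="a photo of a red apple on a wooden table").images[0]
image.save("synthetic_sample.png")
```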

## Approaches to Synthetic Data Generation

There are generally three cases for needing synthetic data:

**Extending an existing dataset:**

- **There are not enough samples:** A nice example is a medical imaging dataset such as [DDSM](https://www.mammoimage.org/databases/) (Digital Database for Screening Mammography, ~2500 samples); such a small number of samples makes it harder to build a model for further analysis. It is also rather expensive to build such medical imaging datasets.

**Creating a Dataset from Scratch**:

- **There aren't any samples at all:** Let's assume that you want to build a weapon detection system on top of CCTV video streams, but there aren't any samples for the specific weapon you want to detect. You can take similar observations from different settings and apply style transfer to make them look like CCTV streams!

**Preserving Privacy**:

- Hospitals collect huge amounts of data on patients, surveillance cameras capture raw information about individuals' faces and activities, and all of these introduce a potential infringement of privacy. We can use diffusion models to generate privacy-preserving datasets to develop our solutions without giving up anyone's privacy rights.

There are different methods for us to utilize a text-to-image diffusion model to generate customized outputs. For example, by simply utilizing the pre-trained diffusion model (such as [Stable Diffusion XL](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl)), you can try to construct a nice prompt to generate images. But the quality of the generated images may not be consistent and it might be really difficult to construct such a prompt for your specific use case.

You will usually need to change some parts of the model to generate the personalized output you want. Here are a few techniques that you can use for that:

**Training with [Textual Inversion](https://huggingface.co/docs/diffusers/main/en/training/text_inversion):**

![textual-inversion](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/textual_inversion.png?download=true)

Textual inversion is a technique that works on the text embeddings in the model's architecture. You can add a new token to the vocabulary and then fine-tune its embedding using a few examples.

By providing samples corresponding to the new token, we try to optimize the embeddings to capture the characteristics of the object.
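
In diffusers, a learned textual inversion embedding can be attached to an existing pipeline. Here is a sketch, where the base checkpoint, the concept repository, and its `<cat-toy>` token are example placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned concept embedding (example repository from sd-concepts-library)
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The new token can now be used in prompts like any other word
image = pipe("a <cat-toy> sitting on a beach").images[0]
```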

**Training a [LoRA (Low-Rank Adaptation)](https://huggingface.co/docs/diffusers/main/en/training/lora?installation=PyTorch) model:**

LoRA was originally proposed to address the cost of fine-tuning LLMs. It represents the weight updates with two smaller update matrices obtained through low-rank decomposition, which contain substantially fewer parameters. These matrices can then be trained to adapt to the new data!

The base model's weights remain frozen in the whole process, so we just train the new update matrices. Then lastly, the adapted weights are combined with the original weights.

This means that training a LoRA is much faster than full model fine-tuning! As an important note, it is also possible to combine LoRA with other techniques, since the adapter sits on top of the model itself.
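
A sketch of attaching a trained LoRA to a base pipeline with diffusers (the base checkpoint and the adapter path are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach LoRA weights trained on your own data (local path or Hub repo id);
# the base model's weights stay frozen, only the small adapter is applied on top.
pipe.load_lora_weights("path/to/your-lora-adapter")

image = pipe("a photo of the concept the adapter was trained on").images[0]
```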

**Training with [DreamBooth](https://huggingface.co/docs/diffusers/main/en/training/dreambooth):**

![dreambooth](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/dreambooth.png?download=true)

DreamBooth is a technique to fine-tune the model to personalize the outputs. Given a few images of a subject, it lets you fine-tune a pre-trained text-to-image model. The main idea is to associate a unique identifier with that specific subject.

For training, we build a dataset that pairs images of the subject with prompts containing the identifier, preferably a rare token, because if you choose a rather common identifier, the model would also have to learn to disentangle it from its original meaning.

In the original paper, the authors search the vocabulary for rare tokens and choose identifiers from those. This reduces the risk of an identifier having a strong prior. The paper also states that the best results were achieved by fine-tuning all the layers of the model.
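
After DreamBooth fine-tuning, inference works just like with the base model, except that the prompt contains the chosen identifier. A sketch, where the checkpoint path is a placeholder and `sks` stands in for a rare-token identifier:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a DreamBooth fine-tuned checkpoint (placeholder path)
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-output", torch_dtype=torch.float16
).to("cuda")

# The rare-token identifier refers to the personalized subject
image = pipe("a photo of sks dog swimming in a pool").images[0]
```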

**Training with [Custom Diffusion](https://huggingface.co/docs/diffusers/main/en/training/custom_diffusion):**

![custom diffusion](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-diffusion-models/custom_diffusion.png?download=true)

Custom Diffusion is a really powerful technique to personalize the model. It requires just a few samples like the previously mentioned methods, but its power lies in being able to learn multiple concepts at the same time!

It works by training just a part of the diffusion process and the text encoder we mentioned above, meaning that there are fewer parameters to optimize. Thus, this method also enables fast fine-tuning!

## Real-world Examples of Using Diffusion Models for Dataset Generation

There are many unique cases where diffusion models are used to generate synthetic datasets!

Find them below.

- [Apple Images for Apple Detection in Orchards](https://arxiv.org/abs/2306.09762)
- [Automatically Labeled Polyp Images for Medical Image Segmentation](https://arxiv.org/abs/2310.16794)
- [3D Medical Image Generation with DDPMs](https://www.nature.com/articles/s41598-023-34341-2.pdf)
- [Generating Transmission Line Images with DDPM](https://ieeexplore.ieee.org/document/10281144)
- [Synthetic Aerial Dataset for Unmanned Aerial Vehicle Detection](https://ieeexplore.ieee.org/document/10195076)
- [Differentially Private Diffusion Models for Privacy-preserving Synthetic Image Generation](https://arxiv.org/pdf/2302.13861.pdf)

## Further reading about Stable Diffusion

- [The Illustrated Stable Diffusion](https://jalammar.github.io/illustrated-stable-diffusion/)
- [Diffusion Explainer: Stable Diffusion Explained with Visualization](https://poloclub.github.io/diffusion-explainer/)
- [Introduction to Diffusion Models](https://www.assemblyai.com/blog/diffusion-models-for-machine-learning-introduction/)
- [FastAI, Practical Deep Learning for Coders - Lesson 9: Stable Diffusion](https://course.fast.ai/Lessons/lesson9.html)
- [Original paper](https://arxiv.org/abs/2112.10752)

### Introduction
https://huggingface.co/learn/computer-vision-course/unit10/introduction.md

# Introduction

Have you ever tried to get hold of some data for your problem, be it a machine learning problem or some other development-related problem, and you just couldn't find enough data? Either the data is closed-source and unavailable to you, or it is prohibitively costly or time-consuming to acquire. How do we deal with such a situation?

Well, one solution is synthetic data. Synthetic data is generated by a model to be used in place of real data or with real data. Here, by model, we don't mean only machine learning or deep learning models; they can be simple mathematical or statistical models too, like a set of (stochastic) differential equations modeling a physical or economic [system](https://link.springer.com/book/10.1007/978-3-319-56436-4). Feeling excited yet? Let's dive more into the details of synthetic data: what it is, how it is generated, and its benefits. You might be able to answer the last one a little by now ;)

## What is synthetic data?

As the [Royal Society](https://arxiv.org/abs/2205.03257) defines it, synthetic data is data generated using a purpose-built mathematical model or algorithm to solve a (set of) data science task(s). Keep in mind that synthetic data only mimics real data and is not generated by real events. Ideally, synthetic data should have the same statistical properties as the real data it is supplementing. It has many uses, such as improving AI models, protecting sensitive data, and mitigating bias.

## Why would you use synthetic data?

Before answering this question, let's talk a little bit about why real data is not sufficient anymore. Some of the non-exhaustive problems with real data are:
- It can be messy and very hard to deal with.
- Inter-company data sharing might not be possible due to privacy issues.
- Medical data is confidential and hence cannot be shared openly.
- It can be biased.
- Data collection and annotation can be expensive.

Most of the above-mentioned problems can potentially be solved by synthetic data:
- Synthetic data is generated in a structured form and is therefore easier to deal with.
- Companies can train synthetic data generation models that learn the distribution of the original data but don't reveal anything about individual data points in the original data and hence maintain privacy. A similar approach can be taken for medical data.
- We can train the data generator model to generate de-biased data.
- Synthetic data can be augmented with real data to make the models or applications more robust.

## How to generate synthetic data?

Here, we mention some of the ways to generate synthetic data:

- CAD & Blender: Allows the creation of photorealistic image datasets of 3D scenes while controlling parameters. It enables computing metrics by comparing the synthesized data to the ground truth (generation parameters). It is a very robust method but limited in generation quality, diversity, and quantity. Use cases include using [commercial applications](https://amazon-berkeley-objects.s3.amazonaws.com/static_html/ABO_CVPR2022.pdf), generating [synthetic faces](https://arxiv.org/abs/2109.15102), and [monitoring wildlife](https://openaccess.thecvf.com/content_CVPR_2020/papers/Mu_Learning_From_Synthetic_Animals_CVPR_2020_paper.pdf).
- Deep generative models (Transformers/GANs/Diffusion models): Allow expanding a dataset, tackling data imbalance, and solving privacy issues. Very convenient and powerful but can create datasets with biases, incoherence, and repetitiveness, which induces an important overtraining risk and produces a restricted set of predictions. Use cases include [medical image generation](https://rdcu.be/dokei), [efficient plant disease identification](https://www.mdpi.com/2073-4395/12/10/2395), [industrial waste sorting](https://arxiv.org/abs/2303.14828), [traffic sign recognition](https://arxiv.org/abs/2101.04927), and [detection of emergency vehicles for an autonomous driving car application](https://computer-vision-in-the-wild.github.io/eccv-2022/static/eccv2022/camera_ready/ECCV_2022_cvinw_Domain_Compatible_Synthetic_Data_Generation.pdf).

In this unit, we will introduce the following methods to generate synthetic data: physically-based rendering, point clouds, and GANs.

## Challenges with synthetic data

Now that we have seen the power and uses of synthetic data, let's take some time out to discuss its challenges:
- Synthetic data is not inherently private: Synthetic data can also leak information about the data it was derived from and is vulnerable to privacy attacks. Significant care is required to generate private synthetic data.
- Outliers can be hard to capture privately: Outliers and low-probability events, as are often found in real data, are particularly difficult to capture and include privately in a synthetic dataset.
- Empirically evaluating the privacy of a single dataset can be problematic: Rigorous notions of privacy (e.g., differential privacy) are a requirement on the mechanism that generated a synthetic dataset rather than on the dataset itself.
- Black box models can be particularly opaque when it comes to generating synthetic data: Overparameterised generative models excel in producing high-dimensional synthetic data, but the levels of accuracy and privacy of these datasets are hard to estimate and can vary significantly across produced data points.

## Resources

- [Machine Learning for Synthetic Data Generation: A Review](https://arxiv.org/abs/2302.04062)
- [Synthetic Data -- what, why and how?](https://arxiv.org/abs/2205.03257)
- One very interesting application of synthetic data: [this person does not exist](https://www.thispersondoesnotexist.com/)

### Using a 3D Renderer to Generate Synthetic Data
https://huggingface.co/learn/computer-vision-course/unit10/blenderProc.md

# Using a 3D Renderer to Generate Synthetic Data

When creating computer-generated images to use as synthetic training data, ideally we want the images to look as realistic as possible.
Physically Based Renderers (PBR) such as [Blender Cycles](https://www.blender.org)
or [Unity](https://unity.com) help to create images that are highly realistic and look and feel like the real world.

Imagine you're creating an image of a shiny apple. Now, when you color that apple, you want it to look realistic, right?
That's where something called PBR comes in.

![apple](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/apple.jpg)

Okay, let's break it down:

_Colors and Light:_

- When light shines on objects, it interacts with them in different ways. PBR tries to figure out and simulate this interaction.
- Think about how light hits the apple. Some parts may be brighter where the light directly hits, and some parts might be darker
  where the light is blocked or doesn't reach as much.

_Materials:_

- Different materials react to light differently. For example, a shiny metallic surface reflects light more than a soft, matte fabric.
- PBR takes into account the material of an object, so if you're rendering a metal vase, it will reflect light differently than a fluffy teddy bear.

_Textures:_

- PBR uses textures to add details like bumps, scratches, or tiny grooves on the surface of objects. This makes things look more real because, in the real world, very few things are perfectly smooth.

_Realism:_

- PBR aims to make things look as close to real life as possible. It does this by considering how light behaves in reality, how different materials interact with light, and how surfaces have small imperfections.

_Layers of Light:_

- Imagine you're looking at a glass of water. PBR will try to simulate the way light passes through the water and how it might distort what you see.
- It considers how multiple layers of light interact with different parts of an object, making the rendered image more realistic.

PBR also simplifies the workflow. Instead of manually tweaking many parameters to get the right look,
you can use a set of standardized materials and lighting models.
This makes the process more intuitive and user-friendly.

Now, think about training AI models like those used in computer vision.
If you're teaching a computer to recognize objects in images, it's beneficial to have a diverse set of
images that closely mimic real-world scenarios. PBR helps in generating synthetic data that looks so real that it can be used
to train computer vision models effectively.

There are several 3D rendering engines that you can use for PBR, including [Blender Cycles](https://www.blender.org)
or [Unity](https://unity.com). We are going to focus on Blender because it is open source and there are a lot of resources about Blender.

## Blender

Blender is a powerful, open-source 3D computer graphics software used for creating animated films, visual effects, art, 3D games, and more.
It encompasses a wide range of features, making it a versatile tool for artists, animators, and developers.
Let's start off by walking through a basic example of rendering a synthetic image of an elephant.

Here are the essential steps:

- Create the elephant model. The one shown below was created with the [Metascan](https://metascan.ai) app using Photogrammetry.
  Photogrammetry is a way of turning regular photos into a 3D model. It's like taking a bunch of pictures of your toy from different angles
  and then using those pictures to make a computer version of it.
- Create the background - this was a multi-step process. See [here](https://github.com/kfahn22/Synthetic-Data-Creation-in-Blender/tree/main/BACKGROUND)
  for a more detailed explanation.
- Adjust the lighting and camera positions.
- Fix the location and rotation of the elephant so that it fits within the frame (or camera view).

Here is the elephant image generated in Blender:

![elephant image](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-PBR/rendered_elephant.png)

It is not completely photorealistic, but probably close enough to train a model to monitor elephant populations. Of course, to do
that we need to create a large dataset of synthetic elephant images! You can use the Blender python environment
[bpy](https://docs.blender.org/api/current/info_advanced_blender_as_bpy.html) to render
a large number of images with the location and rotation of the elephant randomized.
You can also use a script to help with segmentation, depth, normal, and pose estimation.
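
As an illustration, a minimal bpy sketch of that idea might look like the following (run inside Blender; the object name "Elephant", the value ranges, and the output path are placeholders):

```python
import random
import bpy

# Assumes a Blender scene with an object named "Elephant", a camera, and lighting already set up
elephant = bpy.data.objects["Elephant"]

for i in range(100):
    # Randomize the location and rotation within plausible ranges
    elephant.location = (random.uniform(-1, 1), random.uniform(-1, 1), 0.0)
    elephant.rotation_euler = (0.0, 0.0, random.uniform(0, 6.28))

    # Render a still image to disk (placeholder output directory)
    bpy.context.scene.render.filepath = f"/tmp/elephant_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```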

Great! How do we get started?

Unfortunately, there is a pretty steep learning curve associated with Blender. None of the steps are too complicated,
but wouldn't it be nice if we could render the dataset without trying to figure all of this out?
Luckily for us, there is a library called BlenderProc that has all the scripts we need to render realistic synthetic data
and annotations and it is built on top of Blender.

## BlenderProc

The BlenderProc pipeline was introduced in [BlenderProc](https://arxiv.org/abs/1911.01911) by Denninger et al. and is a modular pipeline built on top of [Blender](https://www.blender.org/).
It can be used to generate images in a variety of use cases, including segmentation, depth, normal and pose estimation.

It is specifically created to help generate realistic-looking images for training convolutional neural networks.
It has the following properties, which make it a great choice for synthetic data generation:

- Procedural Generation: Enables the automated creation of complex 3D scenes with variations using procedural techniques.
- Simulation: Supports the integration of simulations, including physics simulations, to enhance realism.
- Large-Scale Generation: Designed to handle large-scale scene generation efficiently, making it suitable for diverse applications.
- Automation and Scalability:
  - Scripting: Allows users to automate the generation process by employing python scripts to tailor BlenderProc to their specific needs and configure parameters.
  - Parallel Processing: Supports parallel processing for scalability, making it efficient for generating a large number of scenes.

You can install BlenderProc via pip:

```bash
pip install blenderproc
```

Alternately, you can clone the official [BlenderProc repository](https://github.com/DLR-RM/BlenderProc) from GitHub using Git:

```bash
git clone https://github.com/DLR-RM/BlenderProc
```

BlenderProc must be run inside the Blender Python environment (bpy), as this is the only way to access the Blender API; the `blenderproc run` command takes care of this for you:

```bash
blenderproc run <path/to/your_script.py>
```
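
A script passed to `blenderproc run` typically follows a pattern like the one below, loosely based on the basic example in the BlenderProc repository (the mesh path, light settings, and camera pose are placeholders):

```python
import blenderproc as bproc
import numpy as np

bproc.init()

# Load a scene or object (placeholder path to a mesh file)
objs = bproc.loader.load_obj("path/to/scene.obj")

# Add a point light
light = bproc.types.Light()
light.set_type("POINT")
light.set_location([5, -5, 5])
light.set_energy(1000)

# Set the camera resolution and add a camera pose (location + rotation)
bproc.camera.set_resolution(512, 512)
cam_pose = bproc.math.build_transformation_mat([0, -5, 2], [np.pi / 3, 0, 0])
bproc.camera.add_camera_pose(cam_pose)

# Render normals and depth in addition to color
bproc.renderer.enable_normals_output()
bproc.renderer.enable_depth_output(activate_antialiasing=False)

# Render and write everything into an .hdf5 container
data = bproc.renderer.render()
bproc.writer.write_hdf5("output/", data)
```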

You can check out this notebook to try BlenderProc in Google Colab; it demos the basic examples provided [here](https://github.com/DLR-RM/BlenderProc/tree/main/examples/basics).
Here are some images rendered with the basic example:

![colors](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-PBR/colors.png)
![normals](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-PBR/normals.png)
![depth](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-PBR/depth.png)

## Blender Resources

- [User Manual](https://docs.blender.org/manual/en/latest/)
- [Awesome-blender -- Extensive list of resources](https://awesome-blender.netlify.app)
- [Blender Youtube Channel](https://www.youtube.com/@BlenderOfficial)

### The following video explains how to render a 3D synthetic dataset in Blender:

### The following video explains how to create a 3D object using Photogrammetry:

## Papers / Blogs

- [Developing digital twins of multi-camera metrology systems in Blender](https://iopscience.iop.org/article/10.1088/1361-6501/acc59e/pdf)
- [Generate Depth and Normal Maps with Blender](https://www.saifkhichi.com/blog/blender-depth-map-surface-normals)
- [Object detection with synthetic training data](https://medium.com/rowden/object-detection-with-synthetic-training-data-f6735a5a34bc)

## BlenderProc Resources

- [BlenderProc Github Repo](https://github.com/DLR-RM/BlenderProc)
- [BlenderProc: Reducing the Reality Gap with Photorealistic Rendering](https://elib.dlr.de/139317/1/denninger.pdf)
- [Documentation](https://dlr-rm.github.io/BlenderProc/)

### The following video provides an overview of the BlenderProc pipeline:

## Papers

- 3D Menagerie: Modeling the 3D Shape and Pose of Animals
- Fake It Till You Make It: Face analysis in the wild using synthetic data alone
- [Object Detection and Autoencoder-Based 6D Pose Estimation for Highly Cluttered Bin Picking](https://arxiv.org/pdf/2106.08045.pdf)
- [Learning from Synthetic Animals](https://arxiv.org/abs/1912.08265)

### Synthetic Data Generation Using DCGAN
https://huggingface.co/learn/computer-vision-course/unit10/synthetic-lung-images.md

# Synthetic Data Generation Using DCGAN

We learned in Unit 5 that a GAN is a framework in machine learning where two neural networks, a Generator and a Discriminator, are in a constant duel. The Generator creates synthetic images, and the Discriminator tries to distinguish between real and fake images. They keep improving through this adversarial process, with the Generator getting better at creating realistic images, and the Discriminator getting better at distinguishing between fake and real images.

We now will look at how we can use a GAN to generate medical images, a domain that is challenged with small datasets, privacy concerns, and a limited amount of annotated samples. Researchers have used GANs to generate synthetic images such as lung X-ray images, retina images, brain scans, and liver images. In [GAN-based synthetic brain PET image generation](https://braininformatics.springeropen.com/counter/pdf/10.1186/s40708-020-00104-2.pdf), the authors created brain PET images for three different stages of Alzheimer’s disease. [GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver Lesion Classification](https://arxiv.org/abs/1803.01229) generated synthetic liver images. [BrainGAN: Brain MRI Image Generation and Classification Framework Using GAN Architectures and CNN Models](https://www.mdpi.com/1424-8220/22/11/4297) developed a framework for generating brain MRI images using multiple GAN architectures, and [A Novel COVID-19 Detection Model Based on DCGAN and Deep Transfer Learning](https://www.sciencedirect.com/science/article/pii/S1877050922007463) used DCGAN to generate synthetic lung X-ray images to aid in COVID-19 detection.

## DCGAN (Deep Convolutional Generative Adversarial Network)

DCGAN was proposed in [Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434) by Radford, et al. and is the model many researchers have used to generate synthetic medical images. We are going to use it to generate synthetic lung images. Before we use DCGAN to train the model, we will briefly review its architecture. The generator network takes random noise as input and generates synthetic lung images, while the discriminator network tries to distinguish between real and synthetic images. It uses convolutional layers in both the generator and discriminator to capture spatial features effectively. DCGAN also replaces max-pooling with strided convolutions to downsample the spatial dimensions.

The generator has the following model architecture:

- The input is a vector of 100 random numbers and the output is an image of size 128x128x3.
- The model has 4 convolutional layers:
  - Conv2D layer
  - Batch Normalization layer
  - ReLU activation
- Conv2D layer with Tanh activation.
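
As a rough PyTorch sketch of such a generator (layer widths are illustrative, and the stack is one layer deeper than the list above so that the 100-dimensional noise vector is upsampled all the way to 128x128):

```python
import torch
import torch.nn as nn

latent_dim = 100  # length of the random noise vector

# Each strided ConvTranspose2d doubles the resolution: 1 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),
    nn.Tanh(),  # output pixel values in [-1, 1]
)

noise = torch.randn(16, latent_dim, 1, 1)  # a batch of 16 noise vectors
fake_images = generator(noise)  # -> torch.Size([16, 3, 128, 128])
```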

The discriminator has the following model architecture:

- The input is an image and the output is a probability indicating whether the image is fake or real.
- The model has one convolutional layer:
  - Conv2D layer
  - Leaky ReLU activation
- Three convolutional layers with:
  - Conv2D layer
  - Batch Normalization layer
  - Leaky ReLU activation
- Conv2D layer with Sigmoid.
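
And a matching discriminator sketch under the same assumptions (again one layer deeper than the list above, so that a 128x128 input image is reduced to a single probability):

```python
import torch.nn as nn

# Strided convolutions downsample the image: 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 1
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(32, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 512, 4, 2, 1, bias=False),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(512, 1, 4, 1, 0, bias=False),
    nn.Sigmoid(),  # probability that the input image is real
)
```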

**Data Collection**

First, we need to obtain a [dataset](https://data.mendeley.com/datasets/rscbjbr9sj/2) of real lung images. We will be downloading the [Chest X-Ray Images (Pneumonia)](https://huggingface.co/datasets/hf-vision/chest-xray-pneumonia) dataset from the Hugging Face Hub.

Here is some information about the dataset, from [Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning](https://www.cell.com/cell/fulltext/S0092-8674(18)30154-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418301545%3Fshowall%3Dtrue):

- The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
- Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care.
- For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert.

We will start by logging into the Hugging Face hub.

```python
from huggingface_hub import notebook_login

notebook_login()
```

Next, we will load the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("hf-vision/chest-xray-pneumonia")
```

We will preprocess the lung images by resizing and normalizing the pixel values.

```python
import torchvision.transforms as transforms

image_size = 128  # target size; matches the generator's 128x128 output

transform = transforms.Compose(
    [
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),  # converts to a tensor and scales pixel values to [0, 1]
    ]
)
```

During training, the generator aims to produce synthetic lung images that are indistinguishable from real images, while the discriminator learns to correctly classify the images as real or synthetic. We start by initializing the generator with random noise and will train for 100 epochs.
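
A minimal sketch of that adversarial loop (assuming the `generator` and `discriminator` sketched above, and a PyTorch `dataloader` that yields batches of preprocessed real images with labels):

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(100):
    for real_images, _ in dataloader:
        b = real_images.size(0)
        real_labels = torch.ones(b)
        fake_labels = torch.zeros(b)

        # 1) Update the discriminator: real images should be classified as real,
        #    generated images as fake
        noise = torch.randn(b, 100, 1, 1)
        fake_images = generator(noise)
        loss_d = criterion(discriminator(real_images).view(-1), real_labels) + criterion(
            discriminator(fake_images.detach()).view(-1), fake_labels
        )
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # 2) Update the generator: it improves when the discriminator
        #    mistakes its images for real ones
        loss_g = criterion(discriminator(fake_images).view(-1), real_labels)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
```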

Let's visualize the progress:

![training-gif](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/dcgan_training_animation.gif)

## How did we do?

Here are 64 "good" synthetic images, defined as receiving a "real" label from the discriminator with a 70% probability.

![lung-images](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/good_images.png)

We can see that some of the synthetic lung images look good, but others are hazy. There are some important things to mention. First, researchers generating synthetic medical images typically keep a human in the loop, in this case an expert radiologist, to evaluate the synthetic images. Then, only the images that fool the expert are included with the real data to train the model. Second, the generated images that appear to look OK (at least to an amateur) look very similar. This is another known issue with GANs: they can suffer from "mode collapse." Essentially, this happens when the generator starts producing the same type of output. Think of it as someone who has gotten a lot of praise for making chocolate chip cookies, and therefore gets _really, really_ good at making those cookies but can't make any other type of cookie.

Given the known challenges associated with using GANs to generate high-quality medical images, some researchers have explored the use of diffusion models to generate lung images. Medfusion, a conditional latent DDPM for medical images, was proposed in [Diffusion Probabilistic Models Beat GAN On Medical 2D Images](https://arxiv.org/pdf/2212.07501.pdf). In [Synthetically Enhanced: Unveiling Synthetic Data's Potential In Medical Imaging Research](https://arxiv.org/pdf/2311.09402.pdf), Khosravi et al. find that mixing real lung images with lung images generated by a diffusion process improved model performance.

## Resources and Further Reading

- [A Novel COVID-19 Detection Model Based on DCGAN and Deep Transfer Learning](https://www.sciencedirect.com/science/article/pii/S1877050922007463)
- [Augmentation_Gan](https://github.com/rossettisimone/AUGMENTATION_GAN)
- [BrainGAN: Brain MRI Image Generation and Classification Framework Using GAN Architectures and CNN Models](https://www.mdpi.com/1424-8220/22/11/4297)
- [Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning](https://www.cell.com/action/showPdf?pii=S0092-8674%2818%2930154-5)
- [Diffusion Probabilistic Models beat GANs on Medical Images](https://arxiv.org/abs/2212.07501)
- DR-DCGAN: A Deep Convolutional Generative Adversarial Network (DC-GAN) for Diabetic Retinopathy Image Synthesis
- [Deepfake Image Generation for Improved Brain Tumor Segmentation](https://aps.arxiv.org/abs/2307.14273)
- [GAN Lab](https://poloclub.github.io/ganlab/)
- [GANs for Medical Image Synthesis: An Empirical Study](https://arxiv.org/abs/2105.05318)
- [Medfusion Github repo](https://github.com/mueller-franzes/medfusion)
- [Medical image editing in the latent space of Generative Adversarial Networks](https://www.sciencedirect.com/science/article/pii/S2666521221000168?ref=pdf_download&fr=RR-2&rr=833e48fa5e777142)
- [Medical Image Synthesis with Context-Aware Generative Adversarial Networks](https://arxiv.org/abs/1612.05362)
- [MedSynAnalyzer](https://github.com/ayanglab/MedSynAnalyzer)
- [dcgan_faces_tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)
- [pytorch-fid](https://github.com/mseitzer/pytorch-fid/blob/master/src/pytorch_fid/fid_score.py)
- [StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis](https://arxiv.org/abs/2206.09479)
- [PyTorch-StudioGAN](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN)
- [Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks](https://arxiv.org/abs/1511.06434)

### Challenges and Opportunities Associated With Using Synthetic Data
https://huggingface.co/learn/computer-vision-course/unit10/challenges.md

# Challenges and Opportunities Associated With Using Synthetic Data

Training machine learning models requires vast amounts of data. Synthetic data can help by addressing privacy issues, augmenting limited data, and correcting imbalances in the real data. We have learned how to generate synthetic data using several different methods. Before using synthetic data to train a model, however, there are several important things that need to be considered.

## Overfitting the Model

Overfitting occurs when a machine learning model learns the training data so well that it doesn't perform well on new, unseen data.
It's akin to learning a specific way to solve a problem and then encountering a new situation where the strategy doesn't work. If the process of generating synthetic data is too simple or produces overly consistent patterns, your model might overfit to the limited variations present in the synthetic data. As a very simple example, suppose you trained a model using a synthetic dataset of 25 red circles and 25 blue squares. The model will probably learn to associate circles with the color red and squares with the color blue. This model would likely fail if presented with a red square.

Be sure to double-check that your dataset doesn't have the following types of patterns!

_Overly Consistent Color_
![consistent-color](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-overfit/overfit-color.jpg)

_Overly Consistent Size_
![consistent-size](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-overfit/overfit-size.jpg)

_Overly Consistent Background_
![consistent-background](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-overfit/overfit-background.jpg)

_Overly Consistent Location_
![consistent-location](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-overfit/overfit-location.jpg)

## Are there biases in the synthetic data?

If the process of generating synthetic data has biases or inaccuracies, your model may unintentionally learn and perpetuate those biases. Beware of the following pitfalls:

**Limited Diversity**

One challenge is that synthetic data may fail to adequately represent the complexity and diversity of the real data. The shape example might seem trivial, but there are lots of situations where failing to account for the wide variety of people, places, animals, or objects will result in a model that doesn't perform well. For example, suppose you wanted to train a model to monitor the population of an endangered species, such as aye-aye lemurs. If your dataset only contains images of ring-tailed lemurs, the model might struggle to accurately identify aye-aye lemurs in the wild. This limitation could lead to errors in population assessments. The great thing is that if you are mindful of any imbalances in the underlying dataset, you can potentially use synthetic data to de-bias the real data by augmenting with synthetic data from the under-represented class.

Try to make sure your dataset reflects the variety found in the real world!

**Nice Variety**
![nice-variety](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/synthetic-data-creation-overfit/good-variety.jpg)

**Copying Existing Biases:**

If the data you used to create the synthetic images already had biases, your model might unintentionally learn and replicate those biases. It's like copying a friend's notes without realizing they made a mistake: your model might end up with the same errors.

## Does the benefit of using synthetic data outweigh the computational cost?

Generating high-quality synthetic data can be computationally expensive. This may pose challenges in terms of both time and resources, especially for complex models or large datasets. As a general rule, generating and using a synthetic dataset only makes sense if it ultimately saves resources (money, time, etc.).

### What is the perceived quality of the synthetic images?

Let's consider the lung images we generated using DCGAN. While some of the images looked pretty realistic, others were not so good. A model trained with the low-quality images might fail to detect pneumonia because they contain noise that isn't present in the real images. It is also possible that your model might get really good at recognizing patterns in the synthetic data, but those patterns might not exist or may be different in the real world.

A good practice is to evaluate your dataset using a metric such as the Fréchet Inception Distance (FID), the Inception Score (IS), or the Classification Accuracy Score (CAS).

_FID:_

FID uses a pre-trained neural network model, often [Inception](https://huggingface.co/docs/timm/models/inception-v4), which is good at recognizing objects in images. The model is used to extract features from both the real and generated images. FID is a measure of how "far" one feature distribution is from another, taking into account both the mean and covariance of the distributions.

A low FID suggests that the feature distributions of real and generated images are similar and the generated images are more likely
to be realistic.
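
Below is a minimal sketch of computing FID with the `torchmetrics` library (installed, for example, with `pip install torchmetrics[image]`); the random tensors only stand in for your real and generated images, so the resulting score is meaningless here.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Placeholder batches of uint8 images with shape (N, 3, H, W).
real_images = torch.randint(0, 255, (128, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 255, (128, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=64)  # small Inception feature layer for speed
fid.update(real_images, real=True)  # accumulate features of real images
fid.update(fake_images, real=False)  # accumulate features of generated images
print(f"FID: {fid.compute().item():.2f}")  # lower is better
```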

_IS:_

IS uses a pre-trained Inception model to evaluate the quality of generated images produced by generative models, particularly GANs.
For each generated image, the Inception model assigns a score based on its confidence in recognizing objects within that image. High scores are better and indicate that the Inception model is confident about the content of the image.
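
As with FID, the Inception Score can be computed in a few lines with `torchmetrics`; this is only an API sketch, and the random tensor stands in for a batch of generated images.

```python
import torch
from torchmetrics.image.inception import InceptionScore

fake_images = torch.randint(0, 255, (128, 3, 299, 299), dtype=torch.uint8)

inception = InceptionScore()
inception.update(fake_images)  # accumulate predictions for the generated images
is_mean, is_std = inception.compute()  # higher mean is better
print(f"Inception Score: {is_mean.item():.2f} ± {is_std.item():.2f}")
```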

_CAS:_

The Classification Accuracy Score measures how well a classifier trained on your synthetic data performs when evaluated on real data. A higher accuracy indicates that the synthetic data effectively captures the features and patterns of the real images. Low accuracy scores for certain classes may indicate issues with the generation process, such as unrealistic backgrounds, incorrect textures, or inconsistent lighting conditions. You can use CAS to help you identify and address these problems to improve the overall quality of the synthetic dataset.
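
The idea behind CAS can be sketched in a few lines: train any classifier on synthetic images only and report its accuracy on held-out real images. The scikit-learn classifier and the random arrays below are illustrative placeholders, not a recommended setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder data: flattened 32x32 RGB images with binary labels.
X_synthetic = np.random.rand(500, 32 * 32 * 3)  # synthetic training images
y_synthetic = np.random.randint(0, 2, size=500)  # their labels
X_real = np.random.rand(200, 32 * 32 * 3)  # real evaluation images
y_real = np.random.randint(0, 2, size=200)  # their labels

# Train on synthetic data only, then evaluate accuracy on real data.
clf = LogisticRegression(max_iter=1000).fit(X_synthetic, y_synthetic)
cas = accuracy_score(y_real, clf.predict(X_real))
print(f"Classification Accuracy Score (CAS): {cas:.3f}")
```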

## Conclusion

Even after training your model, it is crucial to continuously monitor its performance in real-world scenarios. If your model encounters new situations or trends that weren't present in the synthetic data, it might struggle to adapt. Addressing these challenges involves mindful design of the synthetic data generation process and evaluation of the model's performance on real data. Applying these principles will help to unlock the potential of synthetic data!

## Resources and Further Reading

- [Analyzing Effects of Fake Training Data on the Performance of Deep Learning Systems](https://arxiv.org/pdf/2303.01268.pdf)
- [Bridging the Gap: Enhancing the Utility of Synthetic Data Via Post-Processing Techniques](https://arxiv.org/pdf/2305.10118.pdf)
- [CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images](https://arxiv.org/pdf/2303.14126.pdf)
- [Classification Accuracy Score for Conditional Generative Models](https://arxiv.org/abs/1905.10887)
- [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/abs/1706.08500)
- [Improved Techniques for Training GANs](https://arxiv.org/abs/1606.03498)
- [Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/pdf/1512.00567v3.pdf)
- [Metrics](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/metrics)
- [pytorch-fid](https://github.com/mseitzer/pytorch-fid)

### Synthetic Datasets
https://huggingface.co/learn/computer-vision-course/unit10/synthetic_datasets.md

# Synthetic Datasets

## Introduction
Welcome to the fascinating world of synthetic datasets in computer vision! As we've transitioned from classical unsupervised methods to advanced deep learning techniques, the demand for extensive and diverse datasets has skyrocketed. Synthetic datasets have emerged as a pivotal resource in training state-of-the-art models, providing an abundance of data that's often impractical or impossible to collect in the real world. In this section, we'll explore some of the most influential synthetic datasets, their applications, and how they're shaping the future of computer vision.

## Low-Level Computer Vision Problems

### Optical Flow and Motion Analysis

Optical flow and motion analysis are critical in understanding image dynamics. Here are some datasets that have significantly contributed to advancements in this area:

| Dataset Name           | Year                  | Description                                                  | Paper                                                        | Additional Links                                             |
| ---------------------- | --------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Middlebury             | 2021 (latest release) | The **Middlebury** Stereo dataset consists of high-resolution stereo sequences with complex geometry and pixel-accurate ground-truth disparity data. The ground-truth disparities are acquired using a technique that employs structured lighting and does not require the calibration of the light projectors. | [A database and  evaluation method for Optical Flow](https://vision.middlebury.edu/flow/floweval-ijcv2011.pdf) (Cited by 3192 at the time of writing) | [Papers with Code](https://paperswithcode.com/dataset/middlebury) - [Website](https://vision.middlebury.edu/stereo/data/) |
| Playing for Benchmarks | 2017                  | More than 250K high-resolution video frames, all annotated with ground-truth data for high-level tasks but also for low-level tasks like optical flow estimation and visual odometry. | [Playing for benchmarks](https://arxiv.org/abs/1709.07322)   | [Website](https://playing-for-benchmarks.org/)               |
| MPI-Sintel             | 2012                  | A synthetic dataset for optical flow. The main characteristic feature of MPI-Sintel is that it contains the same scenes with different render settings, varying quality and complexity; this approach can provide a deeper understanding of where different optical flow algorithms break down. (paper quote) | [A Naturalistic Open Source Movie for Optical Flow Evaluation](https://link.springer.com/chapter/10.1007/978-3-642-33783-3_44) (551 citations at time of writing) | [Website](http://sintel.is.tue.mpg.de/)                      |

### Stereo Image Matching

Stereo image matching involves identifying corresponding elements in different images of the same scene. The following datasets have been instrumental in this field:

| Name             | Year | Description                                                  | Paper                                                        | Additional Links                                       |
| ---------------- | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------ |
| Flying Chairs    | 2015 | 22k frame pairs with ground truth flow                       | [Learning optical flow with convolutional networks.](https://arxiv.org/abs/1504.06852)           |                                                        |
| Flying Chairs 3D | 2015 | 22k stereo frames                                            | [A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation.](https://arxiv.org/abs/1512.02134) |                                                        |
| Driving          | 2015 | 4392 stereo frames                                          | [A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation.](https://arxiv.org/abs/1512.02134) |                                                        |
| Monkaa           | 2015 | 8591 stereo frames                                           | [A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation.](https://arxiv.org/abs/1512.02134) |                                                        |
| Middlebury 2014  | 2014 | 33 high-resolution stereo datasets                                  | [High-resolution stereo datasets with subpixel-accurate ground truth](https://link.springer.com/chapter/10.1007/978-3-319-11752-2_3) |                                                        |
| Tsukuba Stereo   | 2012 | This dataset includes 1800 stereo pairs accompanied by ground truth disparity maps, occlusion maps, and discontinuity maps. | Towards a simulation-driven stereo vision system             | [Project](https://home.cvlab.cs.tsukuba.ac.jp/dataset) |                                                                                                                                                                    

## High-Level Computer Vision Problems

### Semantic Segmentation for Autonomous Driving

Semantic segmentation is vital for autonomous vehicles to interpret and navigate their surroundings safely. These datasets provide rich, annotated data for this purpose:

| Name | Year | Description | Paper | Additional Links |
|------|------|-------------|-------|------------------|
| Virtual KITTI 2 | 2020 | Virtual Worlds as Proxy for Multi-Object Tracking Analysis | [Virtual KITTI 2](https://arxiv.org/pdf/2001.10773.pdf) | [Website](https://europe.naverlabs.com/Research/Computer-Vision/Proxy-Virtual-Worlds/) |
| ApolloScape | 2019 | Compared with existing public datasets from real scenes, e.g. KITTI or Cityscapes, ApolloScape contains much larger and richer labeling, including a holistic semantic dense point cloud for each site, stereo, per-pixel semantic labeling, lane-mark labeling, instance segmentation, 3D car instances, and highly accurate locations for every frame in various driving videos from multiple sites, cities, and daytimes. | [The ApolloScape Open Dataset for Autonomous Driving and its Application](https://arxiv.org/abs/1803.06184) | [Website](https://apolloscape.auto/) |
| Driving in the Matrix | 2017 | The core idea behind "Driving in the Matrix" is to use photo-realistic computer-generated images from a simulation engine to produce annotated data quickly. | [Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?](https://arxiv.org/pdf/1610.01983.pdf) | [GitHub](https://github.com/umautobots/driving-in-the-matrix)  ![GitHub stars](https://img.shields.io/github/stars/umautobots/driving-in-the-matrix.svg?style=social&label=Star) |
| CARLA | 2017 | **CARLA** (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. It provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation). | [CARLA: An Open Urban Driving Simulator](https://arxiv.org/pdf/1711.03938v1.pdf) | [Website](https://carla.org/) |
| Synthia | 2016 | A large collection of synthetic images for semantic segmentation of urban scenes. SYNTHIA consists of a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations for 13 classes: misc, sky, building, road, sidewalk, fence, vegetation, pole, car, sign, pedestrian, cyclist, lane-marking. | [The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes](https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Ros_The_SYNTHIA_Dataset_CVPR_2016_paper.html) | [Website](https://synthia-dataset.net/) |
| GTA5 | 2016 | The **GTA5** dataset contains 24966 synthetic images with pixel-level semantic annotation. The images have been rendered using the open-world video game **Grand Theft Auto 5** and are all from the car perspective in the streets of American-style virtual cities. 19 semantic classes are compatible with the ones of the Cityscapes dataset. | [Playing for Data: Ground Truth from Computer Games](https://arxiv.org/abs/1608.02192v1) | [BitBucket](https://bitbucket.org/visinf/projects-2016-playing-for-data/src/master/) |
| ProcSy | 2019 | A synthetic dataset for semantic segmentation, modeled on a real-world urban environment and featuring a range of variable influence factors, such as weather and lighting. | [ProcSy: Procedural Synthetic Dataset Generation Towards Influence Factor Studies Of Semantic Segmentation Networks](https://openaccess.thecvf.com/content_CVPRW_2019/papers/Vision%20for%20All%20Seasons%20Bad%20Weather%20and%20Nighttime/Khan_ProcSy_Procedural_Synthetic_Dataset_Generation_Towards_Influence_Factor_Studies_Of_CVPRW_2019_paper.pdf) | [Website](https://uwaterloo.ca/waterloo-intelligent-systems-engineering-lab/procsy) |

### Indoor Simulation and Navigation

Navigating indoor environments can be challenging due to their complexity. These datasets aid in developing systems capable of indoor simulation and navigation:

| Name | Year | Description | Paper | Additional Links |
|--------------|--------------|-------------|----------------|--------------|
|Habitat       |  2023         | An Embodied AI simulation platform for studying collaborative human-robot interaction tasks in home environments. | [HABITAT 3.0: A CO-HABITAT FOR HUMANS, AVATARS AND ROBOTS](https://ai.meta.com/static-resource/habitat3) | [Website](https://aihabitat.org/habitat3/) |
| Minos        | 2017          | Multimodal Indoor Simulator | [MINOS: Multimodal Indoor Simulator for Navigation in Complex Environments](https://arxiv.org/pdf/1712.03931.pdf) | [GitHub](https://github.com/minosworld/minos) ![GitHub stars](https://img.shields.io/github/stars/minosworld/minos.svg?style=social&label=Star) |
| House3D      | 2017 (archived in 2021) | A Rich and Realistic 3D Environment | [Building generalisable agents with a realistic and rich 3D environment](https://arxiv.org/pdf/1801.02209v2.pdf) | [GitHub](https://github.com/facebookresearch/House3D) ![GitHub stars](https://img.shields.io/github/stars/facebookresearch/House3D.svg?style=social&label=Star) |

### Human Action Recognition and Simulation

Recognizing and simulating human actions is a complex task that these datasets help to address:

| Name    | Year | Description                                                  | Paper                                                        | Additional Links                                             |
| ------- | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| PHAV    | 2017 | Synthetic dataset of procedurally generated human action recognition videos. | [Procedural Generation of Videos to Train Deep Action Recognition Networks](https://openaccess.thecvf.com/content_cvpr_2017/papers/de_Souza_Procedural_Generation_of_CVPR_2017_paper.pdf) | [Website](http://adas.cvc.uab.es/phav/)                      |
| Surreal | 2017 | Large-scale dataset with synthetically generated but realistic images of people rendered from 3D sequences of human motion capture data. It contains more than 6 million frames together with ground truth poses, depth maps, and segmentation masks, and the authors show that CNNs trained on this synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. | [Learning from Synthetic Humans](https://arxiv.org/abs/1701.01370) | [GitHub](https://github.com/gulvarol/surreal) ![GitHub stars](https://img.shields.io/github/stars/gulvarol/surreal.svg?style=social&label=Star) - [Website](https://www.di.ens.fr/willow/research/surreal/) |

### Face Recognition

Face recognition technology has numerous applications, from security to user identification. Here's a look at datasets that drive innovations in this field:

| Name           | Year | Description                                                  | Paper                                                        | Additional Links                                             |
| -------------- | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| FaceSynthetics | 2021 | The Face Synthetics dataset is a collection of diverse synthetic face images with ground truth labels. | [Fake It Till You Make It: Face Analysis in the Wild Using Synthetic Data Alone](https://openaccess.thecvf.com/content/ICCV2021/html/Wood_Fake_It_Till_You_Make_It_Face_Analysis_in_the_ICCV_2021_paper.html) | [Website](https://microsoft.github.io/FaceSynthetics/) - [GitHub](https://github.com/microsoft/FaceSynthetics) ![GitHub stars](https://img.shields.io/github/stars/microsoft/FaceSynthetics.svg?style=social&label=Star) |
| FFHQ           | 2018 | consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. | [A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948) | [GitHub](https://github.com/NVlabs/ffhq-dataset?tab=readme-ov-file) ![GitHub stars](https://img.shields.io/github/stars/NVlabs/ffhq-dataset.svg?style=social&label=Star) |

### 3D Shape Modeling from single images

Creating 3D models from single images is a challenging yet exciting area. These datasets are at the forefront of research in 3D shape modeling:

| Name  | Year | Description                                                  | Paper                                                        |
| ----- | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| Pix3D | 2018 | A large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, and viewpoint estimation. | [Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling](http://pix3d.csail.mit.edu/papers/pix3d_cvpr.pdf) |

### Diverse Applications

The following datasets are either tailored for niche applications or cover multiple ones:

| Dataset Name          | Release Year | Description | Paper | External Links | Applications |
|-----------------------|--------------|-------------|----------------|-----------------------|-----------------------|
| CIFAKE | 2023 | CIFAKE is a dataset that contains 60,000 synthetically generated images and 60,000 real images (collected from CIFAR-10). | [CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images](https://arxiv.org/pdf/2303.14126v1.pdf) |[Kaggle](https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images/data)| Real-Fake Images Classification |
| ABO | 2022 | **ABO** is a large-scale dataset designed for material prediction and multi-view retrieval experiments. The dataset contains Blender renderings of 30 viewpoints for each of the 7,953 3D objects, as well as camera intrinsics and extrinsic for each rendering. | [ABO: Dataset and Benchmarks for Real-World 3D Object Understanding](https://arxiv.org/pdf/2110.06199.pdf) |[Website](https://amazon-berkeley-objects.s3.amazonaws.com/index.html)| Material Prediction; Multi-View Retrieval; 3D Objects understanding; 3D Shape Reconstruction; |
| NTIRE 2021 HDR | 2021 | This dataset is composed of approximately 1500 training, 60 validation, and 201 testing examples. Each example in the dataset is in turn composed of three input LDR images, i.e. short, medium, and long exposures, and a related ground-truth HDR image aligned with the central medium frame. | [NTIRE 2021 Challenge on High Dynamic Range Imaging: Dataset, Methods and Results](https://arxiv.org/pdf/2106.01439.pdf) |[Papers with Code](https://paperswithcode.com/dataset/ntire-2021-hdr)| High Dynamic Range Imaging |
| YCB-Video | 2017 | A large-scale video dataset for 6D object pose estimation. It provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. | [PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes](https://arxiv.org/abs/1711.00199) |[Website](https://opendatalab.com/OpenDataLab/YCB-Video)| 6D Pose Estimation |
| Playing for benchmarks | 2017 | more than 250K high-resolution video frames, all annotated with ground-truth data. | [Playing for benchmarks](https://arxiv.org/abs/1709.07322) |[Website](https://playing-for-benchmarks.org/)| Semantic Instance Segmentation; Object Detection and Tracking; Object-Level 3D Scene Layout; |
| 4D Light Field Dataset| 2016      | 24 synthetic, densely sampled 4D light fields with highly accurate disparity ground truth. | [A Dataset and Evaluation Methodology for Depth Estimation on 4D Light Fields](https://lightfield-analysis.net/benchmark/paper/lightfield_benchmark_accv_2016.pdf) | [GitHub](https://github.com/lightfield-analysis) ![GitHub stars](https://img.shields.io/github/stars/lightfield-analysis.svg?style=social&label=Star) -  [Website](https://lightfield-analysis.uni-konstanz.de/) | Depth Estimation of 4D light fields |
| ICL-NUIM Dataset      | 2014      | RGB-D with noise models, 2 scenes. This is for indoor environments. | [A benchmark for rgb-d visual odometry, 3d reconstruction, and slam.](https://ieeexplore.ieee.org/document/6907054) |[Website](https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html)| RGB-D, Visual Odometry and SLAM algorithms. |

## 3D Objects datasets

Basic high-level computer vision problems, such as object detection or segmentation, fully enjoy the benefits of perfect labeling provided by synthetic data, and there is plenty of effort devoted to making synthetic data work for these problems. Since making synthetic data requires the development of 3D models, datasets usually also feature 3D-related labeling such as the depth map, labeled 3D parts of a shape, volumetric 3D data, and so on.

| Dataset        | Year | Description                                         | Paper                                                        | Citations at the time of writing | Additional Links                                             |
| -------------- | ---- | --------------------------------------------------- | ------------------------------------------------------------ | -------------------------------- | ------------------------------------------------------------ |
| ADORESet       | 2019 | Hybrid dataset for object recognition testing       | [A hybrid image dataset toward bridging the gap between real and simulation environments for robotics.](https://link.springer.com/article/10.1007/s00138-018-0966-3) | 13                               | [GitHub](https://github.com/bayraktare/ADORESet) ![GitHub stars](https://img.shields.io/github/stars/bayraktare/ADORESet.svg?style=social&label=Star) |
| Falling Things | 2018 | 61.5K images of YCB objects in virtual envs         | [Falling things: A synthetic dataset for 3d object detection and pose estimation.](https://arxiv.org/abs/1804.06534) | 171                              | [Website](https://research.nvidia.com/publication/2018-06_falling-things-synthetic-dataset-3d-object-detection-and-pose-estimation) |
| PartNet        | 2018 | 26671 models, 573535 annotated part instances       | [Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding.](https://arxiv.org/abs/1812.02713) | 552                              | [Website](https://opendatalab.com/OpenDataLab/PartNet)       |
| ShapeNetCore   | 2017 | 51K manually verified models from 55 categories     | [Large-scale 3d shape reconstruction and segmentation from shapenet core55.](https://arxiv.org/abs/1710.06104) | 71                               | [Website](https://shapenet.org/)                             |
| VANDAL         | 2017 | 4.1M depth images, >9K objects in 319 categories    | [A deep representation for depth images from synthetic data.](https://arxiv.org/pdf/1609.09713.pdf) | 43                               | N/A                                                          |
| UnrealCV       | 2017 | Plugin for UE4 to generate synthetic data           | [Unrealcv: Virtual worlds for computer vision.](https://dl.acm.org/doi/pdf/10.1145/3123266.3129396) | 95                               | N/A                                                          |
| SceneNet RGB-D | 2017 | 5M RGB-D images from 16K 3D trajectories            | [Scenenet rgb-d: Can 5m synthetic images beat generic ImageNet pre-training on indoor segmentation?](https://openaccess.thecvf.com/content_ICCV_2017/papers/McCormac_SceneNet_RGB-D_Can_ICCV_2017_paper.pdf) | 309                              | [Website](https://robotvault.bitbucket.io/scenenet-rgbd.html) |
| DepthSynth     | 2017 | Framework for realistic simulation of depth sensors | [Real-time realistic synthetic data generation from cad models for 2.5d recognition.](https://arxiv.org/pdf/1702.08558.pdf) | 84                               | N/A                                                          |
| 3DScan         | 2016 | A large dataset of object scans                     | [A large dataset of object scans.](https://arxiv.org/abs/1602.02481) | 223                              | [Website]( http://redwood-data.org/3dscan/)                  |

## Conclusion

The development and utilization of synthetic datasets have been a game-changer in the field of computer vision. They not only offer a solution to the data scarcity problem but also ensure a level of accuracy and variability that's hard to achieve with real-world data alone. As technology progresses, we can anticipate even more sophisticated and realistic datasets that will continue to push the boundaries of what's possible in computer vision.

## References 

- [Synthetic Data for Computer Vision](https://github.com/unrealcv/synthetic-computer-vision)
- [Overview of Synthetic Data Generation](https://arxiv.org/pdf/1909.11512.pdf)

### Generative Adversarial Networks
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/gans.md

# Generative Adversarial Networks

## Introduction
Generative Adversarial Networks (GANs) are a class of deep learning models introduced by [Ian Goodfellow](https://scholar.google.ca/citations?user=iYN86KEAAAAJ&hl=en) and his colleagues in 2014. The core idea behind GANs is to train a generator network to produce data that is indistinguishable from real data, while simultaneously training a discriminator network to differentiate between real and generated data.
* **Architecture overview:** GANs consist of two main components: `the generator` and `the discriminator`.
* **Generator:** The generator takes random noise \\(z\\) as input and generates synthetic data samples. Its goal is to create data that is realistic enough to deceive the discriminator.
* **Discriminator:** The discriminator, akin to a detective, evaluates whether a given sample is real (from the actual dataset) or fake (generated by the generator). Its objective is to become increasingly accurate in distinguishing between real and generated samples.

A common analogy that can be found online is that of an art forger/painter (the generator) who tries to forge paintings and an art investigator/critic (the discriminator) who tries to detect the forgeries.

![Lilian Weng GAN Figure](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/GAN.png)

## GANs vs VAEs
GANs and VAEs are both popular generative models in machine learning, but they have different strengths and weaknesses. Whether one is "better" depends on the specific task and requirements. Here's a breakdown of their strengths and weaknesses.
* **Image Generation:**
    - **GANs:**
        * **Strengths:** Generate higher quality images, especially for complex data with sharp details and realistic textures.
        * **Weaknesses:** Can be more difficult to train and prone to instability.
        * **Example:** A GAN-generated image of a bedroom is likely to be indistinguishable from a real one, while a VAE-generated bedroom might appear blurry or have unrealistic lighting.
        ![Example of GAN-Generated bedrooms taken from Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/bedroom.png)
    - **VAEs:**
        * **Strengths:** Easier to train and more stable than GANs.
        * **Weaknesses:** May generate blurry, less detailed images with unrealistic features.
* **Other Tasks:**
    - **GANs:**
        * **Strengths:** Can be used for tasks like super-resolution and image-to-image translation.
        * **Weaknesses:** May not be the best choice for tasks that require a smooth transition between data points.
    - **VAEs:**
        * **Strengths:** Widely used for tasks like image denoising and anomaly detection.
        * **Weaknesses:** May not be as effective as GANs for tasks that require high-quality image generation.

Here's a table summarizing the key differences:

|Feature|GANs|VAEs|
|-------|-----|---|
|Image Quality|Higher|Lower|
|Ease of Training|More difficult|Easier|
|Stability|Less Stable|More Stable|
|Applications|Image generation, super-resolution, image-to-image translation|Image denoising, anomaly detection, signal analysis|

Ultimately, the best choice depends on one's specific needs and priorities. If one needs high-quality images for tasks like generating realistic faces or landscapes, then a GAN might be the better choice. However, if one needs a model that is easier to train and more stable, then a VAE might be a better option.

## Training GANs
Training GANs involves a unique adversarial process where the generator and discriminator play a cat-and-mouse game.

* **Adversarial Training Process:** The generator and discriminator are trained simultaneously. The generator aims to produce data that is indistinguishable from real data, while the discriminator strives to improve its ability to differentiate between real and fake samples.
* **Objective Function:** The training process is guided by a min-max game type objective function which is used to optimize both the generator and the discriminator. The generator aims to minimize the probability of the discriminator correctly classifying generated samples as fake, while the discriminator seeks to maximize this probability. This objective function is represented as:
$$\min_G \max_D L(D, G)=\mathbb{E}_{x \sim p_{r}(x)} [\log D(x)] + \mathbb{E}_{x \sim p_g(x)} [\log(1 - D(x))]$$
Here, the discriminator tries to maximize this loss function whereas the generator tries to minimize it, hence the adversarial nature.
* **Iterative Improvement:** As training progresses, the generator becomes adept at producing realistic samples, and the discriminator becomes more discerning. This adversarial loop continues until the generator produces data that is virtually indistinguishable from real data. A minimal sketch of a single training step is shown after this list.
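
To make the min-max objective more concrete, here is a minimal, self-contained PyTorch sketch of one adversarial training step. The tiny MLP generator/discriminator, the flattened 28x28 "images", and all hyperparameters are illustrative assumptions, not a recommended GAN recipe.

```python
import torch
import torch.nn as nn

latent_dim, image_dim, batch_size = 64, 28 * 28, 32

# Toy generator and discriminator operating on flattened images.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(batch_size, image_dim) * 2 - 1  # placeholder for a real batch in [-1, 1]
real_labels = torch.ones(batch_size, 1)
fake_labels = torch.zeros(batch_size, 1)

# Discriminator step: push D(x) towards 1 and D(G(z)) towards 0.
z = torch.randn(batch_size, latent_dim)
fake_images = generator(z).detach()  # detach so this step doesn't update the generator
d_loss = bce(discriminator(real_images), real_labels) + bce(discriminator(fake_images), fake_labels)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator into predicting "real".
z = torch.randn(batch_size, latent_dim)
g_loss = bce(discriminator(generator(z)), real_labels)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Note that the generator loss above is the commonly used "non-saturating" variant of the objective: it provides stronger gradients early in training than literally minimizing \\(\log(1 - D(G(z)))\\).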

## References
1. [Lilian Weng's Awesome Blog on GANs](https://lilianweng.github.io/posts/2017-08-20-gan/)
2. [GAN — What is Generative Adversarial Networks](https://jonathan-hui.medium.com/gan-whats-generative-adversarial-networks-and-its-application-f39ed278ef09)
3. [What are the fundamental differences between VAE and GAN for image generation?](https://ai.stackexchange.com/questions/25601/what-are-the-fundamental-differences-between-vae-and-gan-for-image-generation)
4. [Issues with GAN and VAE models](https://stats.stackexchange.com/questions/541775/issues-with-gan-and-vae-models)
5. [VAE Vs. GAN For Image Generation](https://www.baeldung.com/cs/vae-vs-gan-image-generation)
6. [Diffusion Models vs. GANs vs. VAEs: Comparison of Deep Generative Models](https://towardsai.net/p/machine-learning/diffusion-models-vs-gans-vs-vaes-comparison-of-deep-generative-models)

### Variational Autoencoders
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/variational_autoencoders.md

# Variational Autoencoders

## Introduction to Autoencoders
Autoencoders are a class of neural networks primarily used for unsupervised learning and dimensionality reduction. The fundamental idea behind autoencoders is to encode input data into a lower-dimensional representation and then decode it back to the original data, aiming to minimize the reconstruction error. The basic architecture of an autoencoder consists of two main components - `the encoder` and `the decoder`.
* **Encoder:** The encoder is responsible for transforming the input data into a compressed or latent representation. It typically consists of one or more layers of neurons that progressively reduce the dimensions of the input.
* **Decoder:** The decoder, on the other hand, takes the compressed representation produced by the encoder and attempts to reconstruct the original input data. Like the encoder, it often consists of one or more layers, but in the reverse order, gradually increasing the dimensions.

![Vanilla Autoencoder Image - Lilian Weng Blog](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/autoencoder.png)

This autoencoder model consists of an encoder network (represented as $g_\phi$) and a decoder network (represented as $f_\theta$). The low-dimensional representation is learned in the bottleneck layer as $z$, and the reconstructed output is represented as $x' = f_\theta(g_\phi(x))$ with the goal of $x \approx x'$.

A common loss function used in such vanilla autoencoders is 

$$L(\theta, \phi) = \frac{1}{n}\sum_{i=1}^n (\mathbf{x}^{(i)} - f_\theta(g_\phi(\mathbf{x}^{(i)})))^2$$ 

which tries to minimize the error between the original image and the reconstructed one. This is also known as the `reconstruction loss`.

Autoencoders are useful for tasks such as data denoising, feature learning, and compression. However, traditional autoencoders lack the probabilistic nature that makes VAEs particularly intriguing and also useful for generative tasks.

## Variational Autoencoders (VAEs) Overview
Variational Autoencoders (VAEs) address some of the limitations of traditional autoencoders by introducing a `probabilistic approach` to encoding and decoding. The motivation behind VAEs lies in their ability to generate new data samples by sampling from a learned distribution in the latent space, rather than from a fixed latent vector as was the case with vanilla autoencoders, which makes them suitable for generation tasks.
* **Probabilistic Nature:** Unlike deterministic autoencoders, VAEs model the latent space as a probability distribution. This produces a probability distribution over the input encodings instead of just a single fixed vector, which allows for a more nuanced representation of uncertainty in the data. A latent vector is then sampled from this distribution and passed to the decoder.
* **Role of Latent Space:** The latent space in VAEs serves as a continuous, structured representation of the input data. Since it is continuous by design, this allows for easy interpolation. Each point in the latent space corresponds to a potential output, enabling smooth transitions between different data points and also making sure that points which are close in the latent space lead to similar generations.

The concept can be elucidated through a straightforward example, as presented below. Encoders within a neural network are tasked with acquiring a representation of input images in the form of a vector. This vector encapsulates various features such as a subject's smile, hair color, gender, age, etc., denoted as a vector akin to [0.4, 0.03, 0.032, ...]. In this illustration, the focus is narrowed to a singular latent representation, specifically the attribute of a "smile."
![Autoencoders vs VAEs - Sciforce Medium](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/comparison.png)
In the context of Vanilla Autoencoders (AE), the smile feature is encapsulated as a fixed, deterministic value. In contrast, Variational Autoencoders (VAEs) are deliberately crafted to encapsulate this feature as a probabilistic distribution. This design choice facilitates the introduction of variability in generated images by enabling the sampling of values from the specified probability distribution.

## Mathematics Behind VAEs
Understanding the mathematical concepts behind VAEs involves grasping the principles of probabilistic modeling and variational inference.
![Variational Autoencoder - Lilian Weng Blog](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/generative_models/vae.png)
* **Probabilistic Modeling:** In VAEs, the latent space is modeled as a probability distribution, often assumed to be a multivariate Gaussian. This distribution is parameterized by the mean and standard deviation vectors, which are outputs of the probabilistic encoder \\( q_\phi(z|x) \\). This comprises our learned representation \\(z\\), which is then used to sample from the decoder \\(p_\theta(x|z) \\).
* **Loss Function:** The loss function for VAEs comprises two components: the reconstruction loss (measuring how well the model reconstructs the input), similar to vanilla autoencoders, and the KL divergence (measuring how closely the learned distribution resembles a chosen prior distribution, usually a Gaussian). The combination of these components encourages the model to learn a latent representation that captures both the data distribution and the specified prior. A small code sketch of this combined loss follows the list below.
* **Encouraging Meaningful Latent Representations:** By incorporating the KL divergence term into the loss function, VAEs are encouraged to learn a latent space where similar data points are closer, ensuring a meaningful and structured representation. The autoencoder's loss function aims to minimize both the reconstruction loss and the latent loss. A smaller latent loss implies a limited amount of encoded information, which would otherwise improve the reconstruction. Consequently, the Variational Autoencoder (VAE) finds itself in a delicate balance between the latent loss and the reconstruction loss. This equilibrium becomes pivotal, as a `smaller latent loss` tends to result in generated images closely resembling those present in the training set but lacking in visual quality. Conversely, a `smaller reconstruction loss` leads to well-reconstructed images during training but hampers the generation of novel images that deviate significantly from the training set. Striking a harmonious balance between these two aspects becomes imperative to achieve desirable outcomes in both image reconstruction and generation.
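
Below is a minimal PyTorch sketch of the VAE objective (reconstruction loss plus KL divergence) with the reparameterization trick. The tiny linear encoder/decoder and the flattened 28x28 inputs are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Linear(input_dim, 2 * latent_dim)  # outputs [mu, log_var]
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, log_var


def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")  # reconstruction loss
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())  # KL(q(z|x) || N(0, I))
    return recon + beta * kl  # beta balances reconstruction against the latent (KL) loss


x = torch.rand(8, 784)  # placeholder batch of flattened images
model = TinyVAE()
x_recon, mu, log_var = model(x)
print(vae_loss(x, x_recon, mu, log_var).item())
```

The `beta` factor makes the trade-off discussed above explicit: increasing it weights the latent (KL) term more heavily, while decreasing it favors reconstruction quality.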

In summary, VAEs go beyond mere data reconstruction; they generate new samples and provide a probabilistic framework for understanding latent representations. The inclusion of probabilistic elements in the model's architecture sets VAEs apart from traditional autoencoders. Compared to traditional autoencoders, VAEs provide a richer understanding of the data distribution, making them particularly powerful for generative tasks.

## References
1. [Lilian Weng's Awesome Blog on Autoencoders](https://lilianweng.github.io/posts/2018-08-12-vae/)
2. [Generative models under a microscope: Comparing VAEs, GANs, and Flow-Based Models](https://medium.com/sciforce/generative-models-under-a-microscope-comparing-vaes-gans-and-flow-based-models-344f20085d83)
3. [Autoencoders, Variational Autoencoders (VAE) and β-VAE](https://medium.com/@rushikesh.shende/autoencoders-variational-autoencoders-vae-and-%CE%B2-vae-ceba9998773d)

### Introduction
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/introduction/introduction.md

# Introduction

In the last unit, we learned about multimodality and especially about how to fuse vision and language models to harness the best of the two worlds and outperform simple vision models in tasks like Zero-Shot Image Classification.
Another area where multimodal models have had a significant impact is generative vision models. In this unit, we will have a deeper look at these types of neural networks.

## Definition

What are generative vision models and how do they differ from other models?

Mathematical models can generally be separated into two large families, generative models and discriminative models.
The main difference between discriminative models and generative models is that discriminative models learn boundaries that separate different classes, while generative models learn the distribution of different classes.

Discriminative models can be applied to standard computer vision tasks such as classification and regression;
these tasks can be expanded into more complex processes such as semantic segmentation or object detection.

For the sake of brevity, in this chapter, we will consider generative models that solve these tasks:

* noise to image (DCGAN)
* text to image (diffusion models)
* image to image (StyleGAN, cycleGAN, diffusion models)

This section will cover two kinds of generative models: GAN-based models and diffusion-based models.

## Evaluation of generative models in computer vision

Generally, it is really hard to come up with meaningful metrics for evaluating generative models, because you often don't have a solid "ground truth" and it is difficult to quantify the quality of an image. FID is the most commonly used metric, but it is not perfect.

Let's quickly go over FID. FID stands for Fréchet Inception Distance; it is an improvement on the Inception Score and was introduced in [GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium](https://arxiv.org/pdf/1706.08500.pdf). FID is considered to be resistant to noise and certain artifacts that can be present in generated images. The lower the FID, the better.

It is calculated by constructing 2 distributions from the Inception-v3 features. The first distribution is calculated from the training data features, and the second distribution is calculated from the generated image features. Then the Fréchet distance between these 2 distributions is calculated, and that is your FID score. The lower this score, the better the perceived quality of the generated images. Here is a [short explanation](https://www.youtube.com/watch?v=9zTwSzXxNDo&t=398s) on FID.

Some other metrics you might come across are SSIM, PSNR, IS (Inception Score), and the recently introduced CLIP Score. A short sketch computing a couple of these follows the list below.

* PSNR (peak signal-to-noise ratio) is computed from the mean squared error on a logarithmic scale, so it can be interpreted much like MSE. Generally, values in the range [25, 34] dB are okay results, while 34 dB and above is very good.

* SSIM (Structural Similarity Index) is a metric in the range [0, 1] where 1 is a perfect match. The final index is calculated from 3 components: luminance, contrast, and structure. If you're really interested, [this paper](https://arxiv.org/pdf/2006.13846.pdf) analyzes SSIM and its components.

* Inception Score was introduced in [Improved Techniques for Training GANs](https://arxiv.org/pdf/1606.03498.pdf). It is calculated using the features of the Inception-v3 model; the higher, the better. It is a mathematically very interesting metric, but has recently fallen out of favor.

* CLIP Score was introduced in [CLIPScore: A Reference-free Evaluation Metric for Image Captioning](https://arxiv.org/pdf/2104.08718.pdf) and is used to evaluate the quality of text-to-image models. It is calculated by using the CLIP model to compute the cosine similarity between the generated image and the text prompt. Its range is [0, 100]; the higher, the better.
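
As a quick illustration, here is a minimal sketch computing PSNR and SSIM with the `torchmetrics` library; the random tensors are placeholders for a reference image and a generated image.

```python
import torch
from torchmetrics.image import PeakSignalNoiseRatio, StructuralSimilarityIndexMeasure

reference = torch.rand(1, 3, 256, 256)  # ground-truth image with values in [0, 1]
generated = torch.rand(1, 3, 256, 256)  # generated image with values in [0, 1]

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)

print(f"PSNR: {psnr(generated, reference).item():.2f} dB")  # higher is better
print(f"SSIM: {ssim(generated, reference).item():.3f}")  # 1.0 is a perfect match
```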

If you're *really curious* about FID, [The Role of ImageNet Classes in Fréchet Inception Distance](https://arxiv.org/pdf/2203.06026.pdf) analyzes what FID considers important in an image and how features pretrained on ImageNet affect the FID score. It is a very interesting read.

### Introduction to Stable Diffusion
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/stable-diffusion.md

# Introduction to Stable Diffusion
This chapter introduces the building blocks of Stable Diffusion, a generative artificial intelligence (generative AI) model that produces unique photorealistic images from text and image prompts. It originally launched in 2022 and was made possible thanks to a collaboration between
[Stability AI](https://stability.ai/), [RunwayML](https://runwayml.com/) and the CompVis Group at LMU Munich, following the [paper](https://arxiv.org/pdf/2112.10752.pdf).

What will you learn from this chapter?
- Fundamental components of Stable Diffusion
- How to use `text-to-image`, `image-to-image`, and inpainting pipelines

## What Do We Need for Stable Diffusion to Work?
To make this section interesting we will try to answer some questions to understand the basic components of the Stable Diffusion process. 
We will briefly discuss each component, as they are already covered in our Diffusers course. Also, you can visit our previous section, which talks about GANs and diffusion models in detail.

- What strategies does Stable Diffusion employ to learn new information?
    - It uses the forward and reverse processes of diffusion models. In the forward process, we add Gaussian noise to an image until all that remains is random noise, from which the original image can no longer be identified.
    - In the reverse process, a learned neural network is trained to gradually denoise an image starting from pure noise, until you end up with an actual image.

Both of these processes happen for a finite number of steps `T` (T=1000 in the DDPM paper). You begin the process at time \\(t_0\\) by sampling a real image from your data distribution, and the forward process samples some noise from a Gaussian distribution at each time step \\(t\\), which is added to the image of the previous time step. To get more mathematical intuition, please read the [Hugging Face Blog](https://huggingface.co/blog/annotated-diffusion) on Diffusion Models.
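
A rough sketch of the forward (noising) process, using the `DDPMScheduler` from the `diffusers` library; the random tensor stands in for a real training image.

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

clean_image = torch.rand(1, 3, 64, 64) * 2 - 1  # placeholder image scaled to [-1, 1]
noise = torch.randn_like(clean_image)
timestep = torch.tensor([999])  # a late timestep: the result is almost pure noise

noisy_image = scheduler.add_noise(clean_image, noise, timestep)
```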

- Since our images can be huge how can we compress it?

When you have large images, they require more computing power to process. This becomes very noticeable in a specific operation known as self-attention. The bigger the image, the more calculations are needed, and these calculations increase very quickly (in a way mathematicians call "quadratically") with the size of the image.
For example, if you have an image that's 128 pixels wide and tall, it has four times more pixels than an image that's only 64 pixels wide and tall. Because of how self-attention works, dealing with this larger image doesn't just need four times more memory and computing power, it actually needs sixteen times more (since 4 times 4 equals 16). This makes it challenging to work with very high-resolution images, as they require a lot of resources to process. 
Latent diffusion models address the high computational demands of processing large images by using a Variational Auto-Encoder (VAE) to shrink the images into a more manageable size. The idea is that many images have repetitive or unnecessary information. A VAE, after being trained on a lot of data, can compress an image into a much smaller, condensed form. This smaller version still retains the essential features of the original image.
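
To make the compression concrete, here is a minimal sketch using the VAE that ships with Stable Diffusion v1.5 via the `diffusers` library; the random tensor stands in for a 512x512 RGB image scaled to [-1, 1].

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.rand(1, 3, 512, 512) * 2 - 1  # placeholder image
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor

print(latents.shape)  # torch.Size([1, 4, 64, 64]) -- 48x fewer values than the input
```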

- How are we fusing texts with images since we are using prompts?

We know that during inference time, we can feed in the description of an image we'd like to see and some pure noise as a starting point, and the model does its best to 'denoise' the random input into something that matches the caption.
SD leverages a pre-trained transformer model based on something called [CLIP](https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/clip). CLIP's text encoder was designed to process image captions into a form that could be used to compare images and text, so it is well suited to the task of creating useful representations from image descriptions. An input prompt is first tokenized (based on a large vocabulary where each word or sub-word is assigned a specific token) and then fed through the CLIP text encoder, producing a 768-dimensional (in the case of SD 1.x) or 1024-dimensional (SD 2.x) vector for each token. To keep things consistent, prompts are always padded/truncated to be 77 tokens long, so the final representation used as conditioning is a tensor of shape 77x768 (SD 1.x) or 77x1024 (SD 2.x) per prompt.
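
Here is a short sketch of that conditioning step, assuming the CLIP text encoder used by SD 1.x (`openai/clip-vit-large-patch14`) and the `transformers` library; the prompt is arbitrary.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
tokens = tokenizer(
    prompt, padding="max_length", max_length=77, truncation=True, return_tensors="pt"
)
with torch.no_grad():
    conditioning = text_encoder(tokens.input_ids).last_hidden_state

print(conditioning.shape)  # torch.Size([1, 77, 768]) for the SD 1.x text encoder
```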

- How can we add in good inductive biases? 

Since we are trying to generate something new (e.g., a realistic Pokemon), we need a way to go beyond the images we have seen before (e.g., an anime Pokemon). That's where U-Net and self-attention come into the picture. Given a noisy version of an image, the model is tasked with predicting the denoised version based on additional clues such as a text description of the image. Ok, how do we actually feed this conditioning information into the U-Net for it to use as it makes predictions? The answer is something called cross-attention. Scattered throughout the U-Net are cross-attention layers. 
Each spatial location in the U-Net can 'attend' to different tokens in the text conditioning, bringing in relevant information from the prompt. 

## How to use `text-to-image`, `image-to-image`, Inpainting Models in Diffusers
This section introduces helpful use cases and how we can perform these tasks using the [Diffusers](https://github.com/huggingface/diffusers) library.
- Steps for `text-to-image` inference
The idea is to pass in the text prompt, which is converted to the output image.

Using the `diffusers` library you can get `text-to-image` working in 2 steps.

Let's install the `diffusers` library first.
```bash
pip install diffusers
```

We will now initialize the pipeline and pass our prompt inside and infer.
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator(device="cuda").manual_seed(31)
image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    generator=generator,
).images[0]
```

- Steps for image-to-image inference
In a similar fashion, we can initialize the pipeline, but pass an image and a text prompt instead.
```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# Load an image to pass to the pipeline:
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)

# Pass a prompt and image to the pipeline to generate an image:
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

- Steps for Inpainting
For the inpainting pipeline, we need to pass an image, a text prompt, and a mask derived from an object in that image, which indicates what to inpaint. 
In this example we also pass a negative prompt to further influence the inference on what we want to avoid.
```python
# Load the pipeline
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# Load the base and mask images:
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"
)
mask_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png"
)

# Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images:
prompt = (
    "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k"
)
negative_prompt = "bad anatomy, deformed, ugly, disfigured"
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=init_image,
    mask_image=mask_image,
).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

### Further Reading
- [Diffusers documentation](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview)
- [Diffusers installation](https://huggingface.co/docs/diffusers/installation)

### Introduction to Diffusion Models
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/introduction.md

# Introduction to Diffusion Models

What you will learn from this chapter:

- What are diffusion models and how do they differ from GANs
- Major sub-categories of diffusion models
- Use cases of diffusion models
- Drawbacks of diffusion models

## Diffusion Models and their Difference from GANs

Diffusion models are a new and exciting area in computer vision that has shown impressive results in creating images. These generative models work in two stages, a forward diffusion stage and a reverse diffusion stage: first, they slightly change the input data by adding some noise, and then they try to undo these changes to get back to the original data. This process of making changes and then undoing them helps generate realistic images.

These generative models raised the bar to a new level in the area of generative modeling, particularly with models such as [Imagen](https://imagen.research.google/) and [Latent Diffusion Models](https://arxiv.org/abs/2112.10752) (LDMs). For instance, consider the images below, generated by such models.

![Example images generated using diffusion models](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/diffusion-eg.png)

GANs were considered by many as state-of-the-art generative models in terms of the quality of the generated samples, before the recent rise of diffusion models. GANs are also known to be difficult to train due to their adversarial objective, and often suffer from mode collapse. Think of different modes as different categories; consider cat and dog as two separate modes. If the task of the generator is to produce cat and dog images, mode collapse means the generator only produces plausible images of either cats or dogs alone. One reason for this to happen is the failure of the discriminator to move out of a local minimum, ending up repeatedly classifying only one of the modes (either cat or dog) as fake. In contrast, diffusion models have a stable training process and provide more diversity because they are likelihood-based.

However, diffusion models tend to be computationally intensive and require longer inference times compared to GANs due to the step-by-step reverse process.

In science, diffusion is the process by which solute particles move from a region of higher concentration to a region of lower concentration in a solvent. Consider the diffusion analogy below for high-level intuition:

![Diffusion analogy-drop of ink in water](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/diffusion-intuition.jpeg)

Above is the familiar diffusion process: a drop of ink released into a glass of clean water completely disperses after some time. Practically reversing this, i.e., recovering the drop from the mixture, is impossible. But this is exactly what diffusion models learn to do: remove noise and thereby produce a clean image.

In diffusion models, Gaussian noise is added step-by-step to the training images until they turn into pure noise. Through this process, the model learns to remove the noise step-by-step, and is hence capable of turning any Gaussian noise sample into a new, diverse image (generation can also be conditioned on text prompts).

![Reverse-process](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/diffusion-process.jpg)
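
To make the forward (noising) process concrete, here is a minimal sketch of how Gaussian noise can be added in closed form in a DDPM-style setup. This is not the implementation of any particular library; the linear noise schedule, the timestep, and the random tensor standing in for a training image are illustrative assumptions.

```python
import torch

# toy linear noise schedule with 1000 steps (assumed values)
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def forward_diffuse(x0, t):
    """Closed-form forward step: sample a noisy x_t given the clean image x0."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return x_t, noise  # the denoising network is trained to predict `noise` from (x_t, t)

x0 = torch.randn(1, 3, 64, 64)  # stand-in for a normalized training image
x_t, noise = forward_diffuse(x0, t=500)
```

The reverse process is the hard part: a neural network is trained to predict the added noise so that it can be removed step-by-step at generation time.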

## Major Variants of Diffusion models

There are 3 major diffusion modelling frameworks:
- Denoising diffusion probabilistic models (DDPMs):
	- DDPMs employ latent variables to estimate the probability distribution. From this point of view, DDPMs can be viewed as a special kind of variational auto-encoder (VAE), where the forward diffusion stage corresponds to the encoding process inside the VAE, while the reverse diffusion stage corresponds to the decoding process.
- Noise conditioned score networks (NCSNs):
	- NCSNs are based on training a shared neural network via score matching to estimate the score function (defined as the gradient of the log density) of the perturbed data distribution at different noise levels.
- Stochastic differential equations (SDEs):
	- SDEs represent an alternative way to model diffusion, forming the third subcategory of diffusion models. Modeling diffusion via forward and reverse SDEs leads to efficient generation strategies as well as strong theoretical results. This framework can be viewed as a generalization of DDPMs and NCSNs.

![Sub categories of diffusion](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/diffusion-sub-categories.png)

## Use Cases of Diffusion Models

Diffusion is used in a variety of tasks including, but not limited to:
- Image generation - Generating images based on prompts.
- Image super-resolution - Increasing resolution of images.
- Image inpainting - Filling up a degraded portion of an image based on prompts.
- Image editing - Editing specific parts or the entire image without losing its visual identity.
- Image-to-image translation - This includes changing the background, attributes of the scene, etc.
- The latent representations learned by diffusion models can also be used for:
    - Image segmentation
    - Classification
    - Anomaly detection

Want to play with diffusion models? No worries, Hugging Face's [Diffusers](https://huggingface.co/docs/diffusers/index) library comes to the rescue. You can use almost any recent state-of-the-art diffusion model for almost any task.
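
As a quick, hedged starting point, a text-to-image call with Diffusers can look roughly like this (the checkpoint and prompt are just examples, and a CUDA GPU is assumed):

```python
import torch
from diffusers import DiffusionPipeline

# load a pre-trained text-to-image pipeline (example checkpoint)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# generate an image from a text prompt and save it
image = pipeline(prompt="a watercolor painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```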

## Drawbacks in Diffusion Models

The most significant disadvantage of diffusion models remains the need to perform multiple steps at inference time to generate a single sample. [Latent consistency models](https://latent-consistency-models.github.io/) (LCMs) are one research direction proposed to overcome the slow iterative sampling process of Latent Diffusion Models (LDMs), enabling fast inference with minimal steps on any pre-trained LDM (e.g. Stable Diffusion). Despite the significant amount of research conducted in this direction, GANs are still faster at producing images.

Other issues of diffusion models can be linked to the commonly used strategy of employing CLIP embeddings for text-to-image generation. A few studies in the literature highlight that their models struggle to render readable text in an image, and attribute this behavior to the fact that CLIP embeddings do not contain information about spelling.

### Control over Diffusion Models
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/diffusion-models/simple-explanation.md

# Control over Diffusion Models

## Dreambooth

Although diffusion models and GANs can generate many unique images, they can't always generate what you need exactly. Hence, you have to fine-tune a model, which usually requires a lot of data and computation. However, some techniques can be used to personalize a model with just a few examples.

One example is Dreambooth by Google Research, a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. The details on Dreambooth can be found in the [paper](https://dreambooth.github.io/) and the [Hugging Face Dreambooth training documentation](https://huggingface.co/docs/diffusers/training/dreambooth).

Below, you can see the results of Dreambooth being used to train on 4 images of a dog and some inference examples.
![Dreambooth Dog Example](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/teaser_static.jpg)
You can recreate the results following the Hugging Face documentation given above.

From this example, it's clear that the model has learned about that specific dog and can generate new images of that dog in diverse poses and backgrounds. Although Dreambooth already improves on the computation, data, and time needed for full fine-tuning, others have found even more efficient ways to customize a model.

This is where the currently most popular method for this comes in: Low Rank Adaptation (LoRA). This method was initially developed by Microsoft to efficiently fine-tune Large Language Models in this [paper](https://arxiv.org/abs/2106.09685). The main idea is to factorise the weight update matrix into 2 low-rank matrices, which are optimized during training while the rest of the model is frozen. The [Hugging Face documentation](https://huggingface.co/docs/peft/conceptual_guides/lora) has a good conceptual guide on how LoRA works.
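
To build intuition, here is a minimal, framework-agnostic sketch of the LoRA idea. The dimensions, rank, scaling, and initialization are illustrative assumptions and not PEFT's actual implementation:

```python
import torch

d_out, d_in, r, alpha = 768, 768, 8, 16  # assumed layer size, rank and scaling

W = torch.randn(d_out, d_in)       # frozen pretrained weight, never updated
A = torch.randn(r, d_in) * 0.01    # trainable low-rank factor
B = torch.zeros(d_out, r)          # trainable low-rank factor, starts at zero

def lora_linear(x):
    # original projection plus the low-rank update, scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

y = lora_linear(torch.randn(4, d_in))  # output shape: (4, d_out)
```

Only `A` and `B` are trained, which is why LoRA checkpoints are so much smaller than the full model.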

Now, if we put those ideas together we can use LoRA to efficiently fine-tune a diffusion model on a few examples using Dreambooth. A tutorial Google Colab notebook on how to do this can be found [here](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/SDXL_DreamBooth_LoRA_.ipynb).

Due to the quality and efficiency of this method, many people have created their own LoRA parameters, many of which can be found on [Civitai](https://civitai.com/models) and [Hugging Face](https://huggingface.co/collections/multimodalart/awesome-sdxl-loras-64f9af6d5cce4f4e8f351466).
From Civitai you can download the LoRA weights, which are usually in the range of 50-500 MB, while in the case of Hugging Face you can load them directly from the Hub.
Below is an example of how to load the LoRA weights in both cases and then fuse them with the model.

We can start by installing the diffusers library.
```bash
pip install diffusers
```
We will initialize the `StableDiffusionXLPipeline` and load LoRA adapter weights.
```python
from diffusers import StableDiffusionXLPipeline
import torch

model = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = StableDiffusionXLPipeline.from_pretrained(model, torch_dtype=torch.float16)
# use one of the two options below:
pipe.load_lora_weights(
    "lora_weights.safetensors"
)  # if you want to load the LoRA from a local weights file
pipe.load_lora_weights(
    "ostris/crayon_style_lora_sdxl"
)  # if you wish to load a LoRA directly from a Hub repository
pipe.fuse_lora(lora_scale=0.8)
```

This makes it quick to load a customised diffusion model and use it for inference, especially since there are a lot of models to choose from. Then, if we want to remove the LoRA weights, we can call `pipe.unfuse_lora()` which will return the model to its original state. As for the `lora_scale` parameter, this is a hyperparameter that controls how much the LoRA weights are used during inference. A value of 1.0 means the LoRA weights are fully used and a value of 0.0 means the LoRA weights are not used at all. The best value is often between 0.7 and 1.0 but it's worth experimenting with different values to see what works best for your use case.

You can try some of the Hugging Face LoRA models in this Gradio demo:

## Guided Diffusion via ControlNet

Diffusion models have many ways in which they can be guided to create a desired output, such as prompts, negative prompts, guidance scale, inpainting and many others. Here, we will focus on a method that has many variants and can be combined with all the other methods, called ControlNet. It was introduced in this [paper](https://arxiv.org/abs/2302.05543) by Stanford University. This method allows us to guide the diffusion model with an image that usually holds very specific information such as depth, pose, edges, and many others. This allows for more consistency in the generated images, which is often a problem with diffusion models.

ControlNet can be used in both text-to-image and image-to-image. Below is a text-to-image example using a ControlNet trained on edge-detection conditioning, with the top-left image used as input.
Here we can see how all of the generated images have a very similar shape but with different colours. This is because the ControlNet is guiding the diffusion model to create images with the same shape as the input image.
![bird](https://github.com/lllyasviel/ControlNet/raw/main/github_page/p1.png)

For code to run ControlNet with Stable Diffusion XL refer to the official documentation [here](https://huggingface.co/docs/diffusers/api/pipelines/controlnet_sdxl#diffusers.StableDiffusionXLControlNetPipeline) but if you just want to test out some examples take a look at this Gradio demo that lets you try different types of ControlNet:
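
If you would like to see the rough shape of the code before diving into the docs, here is a hedged sketch using a Canny-edge ControlNet with SDXL. The checkpoint names are examples, and the pre-computed edge image is a placeholder that you would normally produce yourself (e.g. with OpenCV's Canny detector):

```python
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# example: a Canny-edge ControlNet guiding SDXL
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

# the conditioning image is an edge map computed beforehand (placeholder path)
canny_image = load_image("canny_edges.png")

image = pipe(
    "a futuristic sports car, studio lighting",
    image=canny_image,
    controlnet_conditioning_scale=0.7,  # how strongly the edges constrain the output
).images[0]
```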

### StyleGAN Variants
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/gans-vaes/stylegan.md

# StyleGAN Variants

What you will learn in this chapter:

- What is missing in Vanilla GAN
- StyleGAN1 components and benefits
- Drawback of StyleGAN1 and the need for StyleGAN2
- Drawback of StyleGAN2 and the need for StyleGAN3
- Use cases of StyleGAN

## What is missing in Vanilla GAN
Generative Adversarial Networks (GANs) are a class of generative models that produce realistic images. But it is very evident that you don't have much control over how the images are generated. In a Vanilla GAN, you have two networks: (i) a Generator and (ii) a Discriminator. The Discriminator takes an image as input and returns whether it is a real image or a synthetic image produced by the Generator. The Generator takes in a noise vector (generally sampled from a multivariate Gaussian) and tries to produce images that look similar, but not identical, to the training samples. Initially it produces junk images, but in the long run the Generator's aim is to fool the Discriminator into believing that its generated images are real.

Consider a trained GAN, and let z1 and z2 be two noise vectors sampled from a Gaussian distribution and sent to the generator to generate images. Let us assume z1 gets converted to an image of a male wearing glasses, and z2 to an image of a female without glasses. What if you need an image of a female wearing glasses? This kind of explicit control can't be intuitively achieved with Vanilla GANs because the features are entangled (more on this below). Let that sink in; you will understand it better when you see what StyleGAN achieves.

TL;DR: StyleGAN is a modification made to the architecture of the Generator alone, whereas the Discriminator remains the same. This modified Generator gives the user freedom over how images are generated, providing control over both high-level features (pose, facial expression) and stochastic, low-level features (skin pores, local placement of hair, etc.). Apart from such flexible image-generating capabilities, over the years StyleGAN has been used for several downstream tasks like privacy preservation, image editing, etc.

## StyleGAN1 components and benefits

![Architecture](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/stylegan_arch.png)

Let us dive into the special components introduced in StyleGAN that give it the power described above. Don't get intimidated by the figure above; it is one of the simplest yet most powerful ideas, and you can easily understand it.

As I already said, StyleGAN only modifies the Generator and the Discriminator remains the same, hence it is not shown above. Diagram (a) corresponds to the structure of ProgressiveGAN. ProgressiveGAN is essentially a Vanilla GAN, but instead of generating images at a fixed resolution, it progressively generates images at higher and higher resolutions with the aim of producing realistic high-resolution images: block 1 of the generator produces images of resolution 4x4, block 2 produces 8x8, and so on.
Diagram (b) is the proposed StyleGAN architecture. It has the following main components:
1. A mapping network
2. AdaIN (Adaptive Instance Normalisation)
3. Concatenation of Noise vector

Let's break it down one by one. 

### Mapping Network
Instead of passing the latent code (also known as the noise vector) z directly to the generator as in traditional GANs, it is first mapped to w by a series of 8 MLP layers. The resulting latent code w is not just passed as input to the first layer of the Generator, as in ProgressiveGAN; rather, it is passed to each block of the Generator network (in StyleGAN terms, the Synthesis Network). There are two major ideas here:

- Mapping the latent code from z to w disentangles the feature space. By disentanglement we mean that, for a latent code of dimension 512, if you change just one of its feature values (say you increase or decrease only the 4th of the 512 values), then in an ideally disentangled feature space only one real-world feature should change. If the 4th feature value corresponds to the real-world feature 'smile', then changing the 4th value of the 512-dimensional latent code should generate images that are smiling/not smiling/something in between.
- Where the latent code is injected has a profound effect on which real-world features are controlled. For instance, passing the latent code w to the lower (low-resolution) blocks of the synthesis network controls high-level aspects such as pose, general hairstyle, face shape, and eyeglasses, while passing w to the higher-resolution blocks controls smaller-scale facial features, finer hairstyle details, eyes open/closed, etc.

### Adaptive instance normalisation (AdaIN)

![Adaptive instance normalisation](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/AdaIN.png)

AdaIN modifies instance normalization by allowing the normalization parameters (mean and standard deviation) to be dynamically adjusted based on style information from a separate source. This style information is derived from the latent code w.

In StyleGAN, the latent code w is not passed directly to the synthesis network; rather, a learned affine transformation of w, denoted y, is passed to the different blocks. y is called the 'style' representation and consists of a scale component \\(y_{s}\\) and a bias component \\(y_{b}\\).
The AdaIN operation is \\(\operatorname{AdaIN}(x_i, y) = y_{s,i} \frac{x_i - \mu(x_i)}{\sigma(x_i)} + y_{b,i}\\), where \\(\mu(x_i)\\) and \\(\sigma(x_i)\\) are the mean and standard deviation of the feature map \\(x_i\\), which are replaced by the style's \\(y_{s,i}\\) and \\(y_{b,i}\\) after normalization.

AdaIN enables the generator to modulate its behavior during the generation process dynamically. This is particularly useful in scenarios where different parts of the generated output may require different styles or characteristics.
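
As a rough illustration (a minimal sketch, not the official StyleGAN code), AdaIN can be written in a few lines of PyTorch, where the style scale and bias come from a learned affine transform of w. The dimensions and the toy tensors below are assumptions for the example:

```python
import torch

def adain(x, y_scale, y_bias, eps=1e-5):
    """Normalize each feature map per instance, then re-scale and re-shift it
    with the style-derived parameters. x: (N, C, H, W), y_scale / y_bias: (N, C)."""
    mu = x.mean(dim=(2, 3), keepdim=True)          # per-channel, per-instance mean
    sigma = x.std(dim=(2, 3), keepdim=True) + eps  # per-channel, per-instance std
    x_norm = (x - mu) / sigma
    return y_scale[:, :, None, None] * x_norm + y_bias[:, :, None, None]

# toy usage: a batch of 2 feature maps with 512 channels
x = torch.randn(2, 512, 8, 8)
w = torch.randn(2, 512)                  # latent code w (assumed already mapped from z)
affine = torch.nn.Linear(512, 2 * 512)   # learned affine transform producing y = (y_s, y_b)
y_s, y_b = affine(w).chunk(2, dim=1)
out = adain(x, y_s, y_b)
```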

### Concatenation of Noise vector

In a traditional GAN, the generator has to learn stochastic features on its own. By stochastic features, I mean those minuscule yet important fine details like the exact position of hairs, skin pores, etc., which should vary from one generated image to another rather than remain constant. Without any explicit mechanism for this in a traditional GAN, the generator has the difficult task of introducing this pixel-level randomness from one layer to another all on its own, which often fails to produce a diverse set of such stochastic features.

Instead, in StyleGAN, the authors hypothesize that by adding a noise map to the feature map in each block of the synthesis network (also known as the generator), each layer can use this information to produce diverse stochastic detail without having to do it all on its own like in traditional GANs. This turned out to work well.

![Example for noise](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/noise.png)

## Drawbacks of StyleGAN1 and the need for StyleGAN2
StyleGAN yields state-of-the-art results in data-driven unconditional generative image modeling. Still, a few issues existed in its architecture design, which are dealt with in the next version, StyleGAN2.

To make this chapter readable, we avoid going into the details of the architecture and rather state the characteristic artifacts found in the first version and how the quality was further improved.

Two major artifacts are addressed in the StyleGAN2 paper: (i) common blob-like artifacts, and (ii) a strong location preference artifact arising from the progressive growing architecture.

![blob Artifact](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/norm.png)

You can see the blob structure in the image above, which the authors claim originates from the normalisation process of StyleGAN1. Diagram (d) below is the proposed architecture that overcomes the issue.

![Demodulation](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/stylegan2_demod.png)

(ii) Fixing the strong location preference artifact of the progressive growing structure.

![Phase Artifact](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/progress.png)

In the figure above, each image is obtained by interpolating the latent code w to modulate the pose. This leads to quite unrealistic results despite the high visual quality of the individual images.

A skip-connection generator and a residual discriminator were used to overcome this issue, without progressive growing.

There are other changes introduced in StyleGAN2 as well, but the two above are the most important to know first.

## Drawbacks of StyleGAN2 and the need for StyleGAN3
The authors of StyleGAN2 found that the synthesis network depends on absolute pixel coordinates in an unhealthy manner. This leads to a phenomenon called aliasing.

![Animation of aliasing](https://huggingface.co/datasets/hwaseem04/Documentation-files/resolve/main/CV-Course/MP4%20to%20GIF%20conversion.gif)
The animation above is generated by interpolating the latent code w. You can clearly see that in the left animation the texture details stay fixed to pixel locations while only the high-level attributes (face pose/expression) change. This exposes the artificiality of such animations. StyleGAN3 tackles this problem from the ground up, and you can see the result in the animation on the right.

## Use Cases
StyleGAN's ability to generate photorealistic images has opened doors for diverse applications, including image editing, preserving privacy, and even creative exploration.

**Image Editing**

- Image inpainting: Filling in missing image regions in a seamless and realistic manner. 
- Image style transfer: Transferring the style of one image to another. 

**Privacy-preserving applications**

- Generating synthetic data: Replacing sensitive information with realistic synthetic data for training and testing purposes. 
- Anonymizing images: Blurring or altering identifiable features in images to protect individuals' privacy.

**Creative explorations**

- Generating fashion designs: StyleGAN can be used to generate realistic and diverse fashion designs.
- Creating immersive experiences: StyleGAN can be used to create realistic virtual environments for gaming, education, and other applications. For instance, StyleNeRF: a style-based 3D-aware generator for high-resolution image synthesis.

This is just a non-exhaustive list.

## References
- StyleGAN - [repository](https://github.com/NVlabs/stylegan), [Paper](https://arxiv.org/abs/1812.04948) 
- StyleGAN2 - [repository](https://github.com/NVlabs/stylegan2), [Paper](http://arxiv.org/abs/1912.04958)  
- StyleGAN3 - [repository](https://github.com/NVlabs/stylegan3), [Paper](https://arxiv.org/abs/2106.12423)

### Privacy, Bias and Societal Concerns
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/practical-applications/ethical-issues.md

# Privacy, Bias and Societal Concerns

The widespread adoption of AI-powered image editing tools raises significant concerns regarding privacy, bias, and potential societal ramifications. These tools, capable of manipulating both 2D and 3D images with remarkable realism, introduce ethical dilemmas and require careful consideration.

What you will learn from this chapter:

- Impact of such AI images/videos on society
- Current approaches to tackle the issues
- Future scope

## Impact on Society

The ability to effortlessly edit and alter images has the potential to:

- **Undermine trust in media:** Deepfakes, convincingly manipulated videos, can spread misinformation and erode public trust in news and online content.
- **Harass and defame individuals:** Malicious actors can use AI tools to create fake images for harassment, defamation, and other harmful purposes.
- **Create unrealistic beauty standards:** AI tools can be used to edit images to conform to unrealistic beauty standards, negatively impacting self-esteem and body image.

## Current approaches

Several approaches are currently being employed to address these concerns:

- **Transparency and labeling:** Platforms and developers are encouraged to be transparent about the use of AI-edited images and implement labeling systems to differentiate real and manipulated content.
- **Fact-checking and verification:** Media outlets and tech companies are investing in fact-checking and verification tools to help combat the spread of misinformation and disinformation.
- **Legal frameworks:** Governments are considering legislative measures to regulate the use of AI-edited images and hold individuals accountable for their misuse.

## Future scope

The future of AI-edited images will likely involve:

- **Advanced detection and mitigation techniques:** Researchers will ideally develop more advanced techniques for detecting and mitigating the harms associated with AI-edited images. But it is like a cat-and-mouse game, where one group develops increasingly realistic image generation algorithms while another group develops methods to identify them.
- **Public awareness and education:** Public awareness campaigns and educational initiatives will be crucial in promoting responsible use of AI-edited images and combating the spread of misinformation.
- **Protecting the rights of image artists:** Companies like OpenAI, Google, and StabilityAI that train large text-to-image models are facing a slew of lawsuits for scraping artists' works from the internet without crediting them in any way. Image poisoning is an emerging research area in which an artist's image is modified with noise-like pixel changes, invisible to the human eye, before being uploaded to the internet. If scraped directly, this potentially corrupts the training data and hence the model's image generation capability. You can read more about this [here](https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/) and [here](https://arxiv.org/abs/2310.13828).

This is a rapidly evolving field, and it is crucial to stay informed about the latest developments.

## Conclusion

This section concludes our unit on Generative Vision Models, where you have learned about Generative Adversarial Networks, Variational Auto Encoders and Diffusion Models.
You saw how they can be implemented and used, and in this chapter, you also learned about the important topic of ethics and biases concerning these models.

With the end of this unit, you have also finished the most fundamental part of this course, which includes _Fundamentals_, _Convolutional Neural Networks_, _Vision Transformers_ and _Generative Models_.
In the next chapters we will dive deeper into specialized fields like _Video and Video Processing_, _3D Vision, Scene Rendering and Reconstruction_ and _Model Optimization_.
But first, we will have a look at basic Computer Vision tasks - what they are used for, what defines them and how they are evaluated.

### CycleGAN Introduction
https://huggingface.co/learn/computer-vision-course/unit5/generative-models/practical-applications/cycle_gan.md

# CycleGAN Introduction

This section introduces CycleGAN, short for *Cycle-Consistent Generative Adversarial Network*, which is a framework designed for image-to-image translation tasks where paired examples are not available. Introduced by Zhu et al. in a [2017](https://arxiv.org/abs/1703.10593) paper, it represents a significant advancement in the field of computer vision and machine learning.

In many image-to-image translation tasks, the goal is to learn a mapping between an input image and an output image. Traditional approaches often rely on large datasets of paired examples (e.g., photos and corresponding sketches). However, obtaining such paired datasets can be extremely challenging or even infeasible in many scenarios. This is where CycleGAN comes into play, as it is designed to work with unpaired datasets.

## What Is Unpaired Image-to-Image Translation
![paired and unpaired_images](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/unpaired_images.png)

In many image translation scenarios, we encounter datasets lacking direct, one-to-one correspondence between image pairs. This scenario is where unpaired image-to-image translation comes into play. Here, instead of having matching pairs of images, you work with two distinct sets or "piles" of images, each representing a different style or domain, such as X and Y. For instance, one pile might consist of realistic photographs, while the other could contain artworks by Monet, Cezanne, or other artists. Alternatively, the piles could represent different seasons, with one showcasing winter landscapes and the other summer scenes. Another example could be a collection of horse images in one pile and zebra images in the other, without any direct pairing between the two. The objective in unpaired image-to-image translation is for the model to learn and extract the general stylistic elements from each pile and apply these learned styles to transform images from one domain to another. This transformation often works both ways, allowing images from domain X to be translated into the style of domain Y, and vice versa. This approach is particularly valuable when exact image pairs are unavailable or difficult to obtain.

## Main Components of CycleGAN

**Dual GAN Structure**:

CycleGAN employs two GANs (Generative Adversarial Networks), one for translating from the first set to the second (e.g., zebra to horse) and another for the reverse process (horse to zebra). This dual structure ensures realism (via the adversarial process) and content preservation (via cycle consistency).
- PatchGAN Discriminators: The discriminators used in CycleGAN are based on the [PatchGAN](https://arxiv.org/pdf/1611.07004.pdf) architecture, which assesses patches of an image rather than the whole image, contributing to more detailed and localized realism.
- Generator Architecture: The generators in CycleGAN draw from the [U-Net](https://arxiv.org/abs/1505.04597) and [DCGAN](https://arxiv.org/abs/1511.06434) architectures, involving downsampling (encoding), upsampling (decoding), and convolutional layers with batch normalization and ReLU. These generators are enhanced with additional convolutional layers and skip connections, known as residual connections, which aid in learning identity functions and support deeper transformations.

**Cycle Consistency Loss**:

Ensures that the style of an image can be changed (e.g., sad face to hugging face) and then reverted back to its original form (hugging face back to sad face) with minimal loss of detail or content.

The implementation involves two stages of transformation.

**First Stage**: A sad face is transformed into a hugging face. **Second Stage**: This hugging face is then converted back into a sad face. The model aims for the final image (reverted sad face) to closely resemble the original sad face. This is achieved by minimizing the pixel difference between these two images, which is added to the model's loss function. The cycle consistency is applied in both directions: sad face to hugging face to sad face, and hugging face to sad face to hugging face. The loss for each cycle is calculated by summing the pixel differences and is then incorporated into the overall generator loss. 
Integration with Adversarial Loss: Cycle consistency loss is combined with adversarial loss, commonly used in GANs, to form a comprehensive loss function. This combined loss function is optimized simultaneously for both generators in CycleGAN.
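
For illustration, a minimal sketch of the cycle consistency term might look like this. The generator names, the choice of L1 distance, and the lambda weight follow the common CycleGAN setup and are assumptions for the example; `gen_x2y` and `gen_y2x` stand for the two generator networks:

```python
import torch.nn.functional as F

def cycle_consistency_loss(real_x, real_y, gen_x2y, gen_y2x, lambda_cycle=10.0):
    # X -> Y -> X: translate, translate back, then compare to the original
    recon_x = gen_y2x(gen_x2y(real_x))
    # Y -> X -> Y
    recon_y = gen_x2y(gen_y2x(real_y))
    return lambda_cycle * (F.l1_loss(recon_x, real_x) + F.l1_loss(recon_y, real_y))
```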

**Least-Square Loss**:

It's a method that minimizes the sum of squared residuals. What that means is that it tries to find the best-fit line that has the smallest sum of squared distances between that line and all the points. In GANs, the **line** represents the label (real or fake), and the points represent the discriminator's predictions. The discriminator's loss function in least-squares adversarial loss is formulated by calculating the squared distance between its predictions and the actual labels (real or fake) across multiple images. For the generator, the loss function is designed to make its fake outputs appear as real as possible, measured by how far these outputs deviate from the label of **real**. Using the least-squares loss addresses the vanishing gradients and mode collapse issues common with BCE loss.
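
A minimal sketch of the least-squares adversarial losses could look like this, assuming the discriminator outputs a PatchGAN-style prediction map (the toy tensor shapes are placeholders):

```python
import torch

def lsgan_discriminator_loss(pred_real, pred_fake):
    # real predictions are pushed toward 1, fake predictions toward 0
    return 0.5 * (((pred_real - 1) ** 2).mean() + (pred_fake ** 2).mean())

def lsgan_generator_loss(pred_fake):
    # the generator wants its fakes to be scored as real (label 1)
    return ((pred_fake - 1) ** 2).mean()

# toy usage with random PatchGAN-style prediction maps
d_loss = lsgan_discriminator_loss(torch.rand(4, 1, 30, 30), torch.rand(4, 1, 30, 30))
g_loss = lsgan_generator_loss(torch.rand(4, 1, 30, 30))
```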

**Identity Loss**:

Introduced in CycleGAN, identity loss is an optional loss term aimed at enhancing color preservation in generated images. It works by ensuring that an image input into a generator (e.g., a horse image into a zebra-to-horse generator) should ideally output the same image, as it is already in the target style. The loss is calculated as the pixel distance between the real input and the output of the generator. A zero pixel distance (no change in the image) results in zero identity loss, which is the desired outcome.
For CycleGAN's generators, this loss is added alongside the adversarial and cycle consistency losses. It is adjusted using a lambda term (weighting factor).
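
Continuing the same hedged sketch, the optional identity term only needs one extra line per generator (again, `gen_x2y` and the lambda value are assumptions for the example):

```python
import torch.nn.functional as F

def identity_loss(real_y, gen_x2y, lambda_identity=5.0):
    # a domain-Y image fed to the X -> Y generator should come back (nearly) unchanged
    return lambda_identity * F.l1_loss(gen_x2y(real_y), real_y)
```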

![CycleGAN](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/cycleGAN1.jpg)
This figure shows how the combined GAN architecture works for both **GANs**. The two GANs are linked by cycle consistency, forming a cycle.
For real images, the classification matrix contains ones; for fake images, it contains zeros. In summary, CycleGAN intricately combines two GANs with various loss functions, including adversarial, cycle consistency, and the optional identity loss,
to effectively transfer styles between two domains while preserving the essential characteristics of the input images.

### Introduction
https://huggingface.co/learn/computer-vision-course/unit11/1.md

# Introduction

Besides fundamentals, this unit assumes familiarity with concepts in transfer and multimodal learning. If you are not viewing the course sequentially in the provided order, we suggest at least reading the transfer learning section in unit 2, and the whole of unit 4.

## On Generalization

Now that we have trained our model like a student cramming all epochs for a test, the real test begins! We hope this knowledge acquired during training by the model translates beyond the specific pictures (cat pictures, for example) it learned from, allowing it to recognize unseen cats like Alice's and Ted’s furry friends. Think of it as the model learning the essence of catness, not just those specific furry faces it saw during training. This ability to apply knowledge to new situations is called **generalization**, and it's what separates a good cat model from a mere cat picture memorizer. Can you imagine an alternate universe without generalization? Yes, it’s pretty simple actually, you’ll only have to train your model on ALL the images of cats in the world (assuming they only exist on earth), including Alice’s and Ted’s, then find a way to prevent the current cat population from breeding. So, yeah, no big deal.

Actually, we weren't right when we said that the model is expected to generalize to all cat pictures that it has not seen. It is expected to generalize to all cat pictures from the same distribution as the image data it was trained on. Simply, if you trained your model on cat selfies and then presented it with a cartoon picture of a cat, it probably won’t be able to recognize it. These two pictures come from totally different distributions or domains. Making your cat selfie model able to recognize cartoon cats is referred to as **domain adaptation** (we’ll briefly talk about it later). It's like taking all the knowledge your model learned about real cats and teaching it to recognize their animated cousins.

So, we’ve gone from generalization, recognizing the unseen Alice’s and Ted’s cat pictures, to domain adaptation, recognizing animated cat pictures. But we’re much greedier than that. You don’t want your model to be able to only recognize your cat pictures, or Alice’s and Ted’s, or not even cartoon cats! Having a model trained on cat pictures, you also want it to recognize pictures of llamas and falcons.

Well, now we’re on the turf of zero-shot learning (also known as ZSL).

[comment]: # (TODO: Illustration showing the difference between generalization, domain adaptation, and ZSL)

## What is Zero-shot Learning?

Let’s warm up with a definition. Zero-shot learning is a setup in which the model is presented with images belonging to **only** classes that it was not exposed to during training, at test time. In other words, the training and testing sets are **disjoint**.
Just a heads-up: in the classic ZSL setup, the test set only has pictures of classes the model hasn't seen before, not a single one from its training days. This may seem a little unrealistic; it's like asking a student to ace an exam on material they've never studied.
Luckily, there's a more pragmatic version of ZSL that doesn't have this strict rule and is called generalized zero-shot learning, or GZSL. This more flexible approach allows the test set to include both seen and unseen classes. It's a more realistic scenario, reflecting how things work in the real world.

[comment]: # (TODO: Illustration showing the difference between ZSL and GZSL)

## History of Zero-shot Learning 

The question of whether we can make models that perform well on tasks that they were not explicitly trained on, came soon after deep learning started to seem feasible. In 2008, the seeds of ZSL were sowed with two independent papers presented at the Association for the Advancement of Artificial Intelligence (AAAI) conference. The first paper, titled **Dataless Classification**, explored the concept of zero-shot learning in the context of natural language processing (NLP). The second one, titled **Zero-Data Learning**, focused on applying ZSL to computer vision tasks. The term zero-shot learning itself first appeared in 2009 at NeurIPS in a paper co-authored by, wait for it… Geoffrey Hinton!

Let’s have a nice outline for the most important moments in ZSL below.

|      |                                                                         |
|------|-------------------------------------------------------------------------|
| 2008 | The first sparks of (the question of) zero-shot learning                |
| 2009 | The term zero-shot learning was coined                                  |
| 2013 | The concept of Generalized ZSL was introduced                           |
| 2017 | The first application of the encoder-decoder paradigm to ZSL            |
| 2020 | OpenAI’s GPT-3 achieves an impressive performance on zero-shot NLP      |
| 2021 | OpenAI’s CLIP takes zero-shot computer vision to a whole new level      |

The impact of CLIP has been profound, ushering in a new era of zero-shot computer vision research. Its versatility and performance have opened up exciting new possibilities. It could be said that the history of zero-shot computer vision can be divided into two eras: The **pre-CLIP** and **post-CLIP** eras.

## How Does Zero-shot Learning Work In Computer Vision?

Now that we know what zero-shot learning is, it would be nice to know how it is applied to computer vision, right? This part will be addressed in more detail in the coming chapters, but let’s paint a broad picture here to break the ice.

In NLP, zero-shot learning is (although it wasn’t always the case) pretty straightforward. Many language models are trained on massive datasets of text, learning to predict the next word in a given sequence. This ability to capture the underlying patterns and semantic relationships within language allows these models to perform surprisingly well on tasks they weren’t explicitly trained on. A nice example is GPT-2 being able to summarize a passage when TL;DR is appended to the prompt. Zero-shot computer vision is another story.

Let's begin by answering a simple question: *how can we humans do it? How can we recognize objects that we have not seen before?*

Yes, you’re absolutely right! We need some **other information** about that object. We can easily identify a tiger, even if we haven’t seen one before, given that we know what a lion looks like. A tiger is a lion with stripes and minus the mane. A zebra is a black-and-white striped version of a horse. Deadpool is a red-black version of Spiderman.

Because of this other information, zero-shot computer vision is essentially multi-modal. If generalization is like learning a language by reading books and talking to people, zero-shot computer vision is like learning a language by reading a dictionary and listening to someone describe what it sounds like.

Can we think of zero-shot learning as an unsupervised learning problem? No, this is still supervised learning. In unsupervised learning, the model learns from unlabeled data. In zero-shot learning, the model learns from dataless labels, remember it was called dataless classification in its infancy.

### What Other Information?

For zero-shot computer vision to work, we need to provide the model with information other than the visual features during training. This other information is called **Semantic or Auxiliary Information**. It provides a semantic bridge between the visual features and the unseen classes. By incorporating this multi-modal information (Text & Image), zero-shot computer vision models can learn to recognize objects even if they have never seen them before visually. In other words, semantic information embeds **both** seen and unseen classes in high-dimensional vectors, and it comes in many different forms:

1. **Attribute vectors**: Think of attributes as a tabular representation of the different features of the object. 
2. **Textual Descriptions**: A text describing the object contained in the image similar to image captions.
3. **Class label vectors**: Those are the embeddings of the class labels themselves.

[comment]: # (TODO: Illustration showing the different forms of semantic information)

Using this semantic information, you train a model to learn a mapping function between the image features and the semantic features. At inference time, the model predicts the class label by looking for the closest label in the semantic space by using, for example, k-nearest neighbor. We can say that we are transferring knowledge obtained from seen classes with the help of semantic information.

Different zero-shot computer vision methods differ in what **semantic information** they use and which **embedding space** they utilize at inference.

[comment]: # (TODO: Illustration showing the pipeline of zero-shot learning)

### How Is This Different from Transfer Learning?

Good question! Zero-shot learning (ZSL) falls within the broader category of transfer learning, specifically under the umbrella of **heterogeneous transfer learning**. This means that ZSL relies on transferring knowledge gained from a **source domain** (seen classes) to a **distinct target domain** (unseen classes). 

Due to the disparity between the source and target domains, ZSL faces specific challenges in transfer learning. These include overcoming domain shift, where the data distributions differ significantly, and handling the lack of labeled data in the target domain. But we’re going to talk about these challenges in detail later, so don’t worry about it now. For now, let’s talk a little about the different methods of zero-shot computer vision.

## Methods of Zero-shot Computer Vision

The landscape of zero-shot computer vision methods is vast and diverse, with numerous proposed approaches and multiple frameworks for categorization. But one framework that I find appealing is one in which these methods can be broadly categorized into **Embedding-based methods** and **Generative-based methods**. This framework offers a useful lens through which to understand and compare different zero-shot computer vision approaches.

- **Embedding-based methods**: The model learns a common embedding space where both images and their semantic features/representations are projected. New unseen classes are then predicted using a similarity measure in that space. For example, CLIP.
- **Generative-based methods**: These methods utilize generative models to create synthetic images or visual features for the unseen classes based on the samples of seen classes and semantic representations of both classes. This allows for training models on these unseen classes even without real data. This way, we’ve cheated a little bit and converted the zero-shot problem into a supervised learning problem. For example, CVAE[^6].

[comment]: # (TODO: Diagram showing the categories of ZSL methods and the sub-categories)

The choice between embedding-based and generative-based approaches, like every other choice that you have to make in machine learning, *depends* on the specifics of the task at hand and the available resources. Embedding-based methods are often preferred for their efficiency and scalability, while generative-based methods offer greater flexibility and potential for handling complex data.

But anyway, in this unit we will address *only* embedding-based methods, including **attention-based embedding-based methods**. Zero-shot learning is a big topic and comprehensive coverage may require a standalone course. Interested readers may check the provided additional readings and have fun.

## Attack Of The Clones: How ZSL Is Different From X?

We’ve gone a long way! We now have a decent idea of zero-shot learning, its history, its application in computer vision, and how it generally works. To round out this introduction, we are going to compare ZSL to other different methods that may initially appear similar.

- **Domain Adaptation (DA)**: This should be familiar by now. We can think of Zero-shot Learning as an extreme case of Domain Adaptation (greedy, greedy), dealing with the problem of learning to recognize unseen classes with no data at all. Domain Adaptation focuses on bridging the gap between two related domains (datasets) with different distributions and it requires labeled data.
- **Open Set Recognition (OSR)**: Think of it as a Boolean Zero-shot Learning. Like ZSL, in the OSR problem, the model handles both seen and unseen classes. But unlike ZSL, the model only needs to classify, at test time, whether the instance belongs to the seen classes or not. That’s it, no fancy labels. Still, this is a significant challenge.
- **Out of Distribution (OOD) Detection**: Think of this problem as a continuous variant of open set recognition. Here we don't want to detect any instance that wasn't included in the training process, but rather instances that deviate significantly from the training data distribution. By recognizing and handling unexpected data effectively, OOD detection can pave the way for more trustworthy and robust AI systems that can adapt to unpredictable environments.
- **Open Vocabulary Learning (OVL)**: This is what zero-shot learning aspires to be. Like, ZSL on steroids. Overall, OVL can be considered an extension of ZSL, encompassing the ability to learn from limited data for unseen classes while also extending to handle seen classes and potentially infinite new classes and tasks. 

## References

- [A fast learning algorithm for deep belief nets](https://dl.acm.org/doi/10.1162/neco.2006.18.7.1527)
- [Zero-Shot Learning Through Cross-Modal Transfer](https://arxiv.org/abs/1301.3666)
- [Semantic Autoencoder for Zero-Shot Learning](https://arxiv.org/abs/1704.08345)
- [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
- [Learning Structured Output Representation using Deep Conditional Generative Models](https://papers.nips.cc/paper_files/paper/2015/hash/8d55a249e6baa5c06772297520da2051-Abstract.html)

### Zero-shot Learning
https://huggingface.co/learn/computer-vision-course/unit11/2.md

# Zero-shot Learning

Following the introductory chapter, we will next explain zero-shot learning (ZSL) in more detail. This chapter is designed to cover:
- The definitions for various types of ZSL and the differences between them.   
- An in-depth example of ZSL which employs semantic embeddings [1]. 

## Zero-shot Learning vs. Generalized Zero-shot Learning

Zero-shot learning and generalized zero-shot learning (GZSL) are types of machine learning problems in which an image classification model needs to classify labels not included in training. ZSL and GZSL are very similar; the main difference is how the model is evaluated [2].

For ZSL, the model is purely evaluated on its capability to classify images of unseen classes - only observations of unseen classes are included in the ZSL test dataset. For GZSL, the model is evaluated on both seen and unseen classes - which is considered closer to real-world use cases. Overall, GZSL is more challenging because the model needs to determine whether an observation belongs to a novel class or a known class.

## Inductive Zero-shot Learning vs. Transductive Zero-shot Learning

Based on the type of training data, there are two kinds of zero-shot learning:

In inductive ZSL, the model is trained exclusively on datasets containing only seen classes, without access to any data from the unseen classes. The learning process focuses on extracting and generalizing patterns from the training data, which are then applied to classify instances of unseen classes. The approach assumes a clear separation between seen and unseen data during training, emphasizing the model's ability to generalize from the training data to the unseen classes. 

Transductive ZSL differs by allowing the model to have access to some information about the unseen classes during training, typically the attributes or unlabeled examples of the unseen classes, but not the labels. This approach leverages additional information about the structure of the unseen data to train a more generalizable model. 

In the next section, we will follow the main concept of a classic research paper from Google [1] and give an example of inductive ZSL.

## Zero-shot Learning Example with Semantic Embeddings

As described in the previous chapter, developing a successful ZSL model requires more than just images and class labels. It is nearly impossible to classify unseen classes based on images alone. ZSL utilizes auxiliary information, such as semantic attributes or embeddings to help classify images from unseen classes. Before diving into details, the following is a short introduction on semantic embeddings for readers unfamiliar with the term. 

### What are Semantic Embeddings? 

Semantic embeddings are vector representations of semantic information which carry the meaning and interpretation of data. For example, the information transferred with spoken text is a type of semantic information. Semantic information does not include only the direct meanings of words or sentences, but also the contextual and cultural implications.

Embeddings refer to the process of mapping semantic information into vectors of real numbers. Semantic embeddings are often learned with unsupervised machine learning models, such as Word2Vec [3] or GloVe [4]. All types of textual information, such as words, phrases, or sentences can be transformed into numerical vectors based on set procedures. Semantic embeddings describe words in a high-dimensional space where the distance and direction between words reflect their semantic relationships. This enables machines to understand the usage, synonyms, and context of each word by mathematical operations on the word embeddings. 

### Enable Zero-shot Learning with Semantic Embeddings  

During training, a ZSL model learns to associate the visual features of images from seen classes with their corresponding semantic embeddings. The objective is to minimize the distance between the projected visual features of an image and the semantic embedding of its class. This process helps the model to learn the correspondence between images and semantic information.

Since the model has learned to project image features onto the semantic space, it can attempt to classify images of unseen classes by projecting their image features into the same space and comparing them to the embeddings of the unseen classes. For an image of an unseen class, the model calculates its projected embedding and then searches for the nearest semantic embedding among the unseen classes. The unseen class with the nearest embedding is the predicted label for the image.
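
As a rough sketch (not the DeViSE implementation itself), the inference step can be written as a nearest-neighbour search in the semantic space. The projection layer, feature dimensions, and embeddings below are stand-ins for whatever backbone and word vectors a real system would use:

```python
import torch
import torch.nn.functional as F

def zero_shot_predict(image_features, class_embeddings, projection):
    """image_features: (N, D_img) visual features from a frozen backbone (assumed)
    class_embeddings: (C, D_sem) semantic vectors of the unseen class labels
    projection: a learned mapping from visual space to semantic space."""
    projected = projection(image_features)  # (N, D_sem)
    sims = F.cosine_similarity(
        projected[:, None, :], class_embeddings[None, :, :], dim=-1
    )  # (N, C) similarity of each image to each unseen class
    return sims.argmax(dim=-1)  # index of the nearest class embedding

# toy usage with random tensors and a linear projection
proj = torch.nn.Linear(2048, 300)   # e.g. CNN features -> word-vector space
img_feats = torch.randn(4, 2048)
class_embs = torch.randn(10, 300)   # embeddings of 10 unseen class names
print(zero_shot_predict(img_feats, class_embs, proj))
```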

In summary, semantic embeddings are central to ZSL, enabling models to extend their classification capabilities. This approach allows for a more flexible and scalable way to classify the vast amount of real-world categories without needing labeled datasets.

## Comparison to CLIP 

The relationship between ZSL and CLIP (Contrastive Language–Image Pre-training) [5] stems from their shared goal of enabling models to recognize and classify images of categories that were not present in the training data. However, CLIP represents a significant advancement and a broader application of the principles underlying ZSL, leveraging a novel approach to learning and generalization.

The relationship between CLIP and ZSL can be described as:

- Both ZSL and CLIP aim to classify images into classes that were not seen during training. However, while traditional ZSL approaches might rely on predefined semantic embeddings or attributes to bridge the gap between seen and unseen classes, CLIP directly learns from natural language descriptions, allowing it to generalize to a much broader range of tasks without the need for task-specific embeddings.

- CLIP is a prime example of multi-modal learning, where the model learns from both textual and visual data. This approach aligns with ZSL where auxiliary information is used to improve classification performance. CLIP takes the concept further by directly learning from raw text and images, enabling it to understand and represent the relationships between visual content and descriptive language.
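
To get a concrete feel for CLIP-style zero-shot classification, here is a short example using the `transformers` library; the image path and the candidate labels are placeholders you would replace with your own:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("some_image.jpg")  # placeholder path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a zebra"]

# score the image against each text label and normalize to probabilities
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```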

## Zero-shot Learning Evaluation Datasets

New ZSL methods are proposed every year, making it challenging to identify a superior approach due to varying evaluation methodologies. Standardized evaluation frameworks and datasets are preferred to assess different ZSL methods. A comparison study of classic ZSL methods is presented in [6]. Commonly used datasets for ZSL evaluation include: 

- **Animal with Attributes (AwA)**

Dataset to benchmark transfer-learning algorithms, in particular attribute-based classification [7]. It consists of 30475 images of 50 animal classes with six feature representations for each image.

- **Caltech-UCSD Birds-200-2011 (CUB)**

Dataset for fine-grained visual categorization tasks. It contains 11788 images of 200 subcategories of birds. Each image has 1 subcategory label, 15 part locations, 312 binary attributes, and 1 bounding box. In addition, ten single-sentence descriptions for each image were collected through Amazon Mechanical Turk; the descriptions are carefully constructed so as not to contain any subcategory information.

- **SUN database (SUN)**

First large-scale scene attribute database. The dataset consists of 130519 images with 899 categories which can be used in high-level scene understanding and fine-grained scene recognition. 

- **Attribute Pascal and Yahoo dataset (aPY)**

A coarse-grained dataset composed of 15339 images from 3 broad categories (animals, objects and vehicles), further divided into a total of 32 subcategories. 

- **ILSVRC2012/ILSVRC2010 (ImNet-2)**

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) evaluates algorithms for object detection and image classification at large scale [8]. 

## References

- [1] Frome et al., DeViSE: A Deep Visual Semantic Embedding Model, NIPS, (2013)
- [2] Pourpanah et al., A Review of Generalized Zero-Shot Learning Methods (2022).
- [3] Mikolov et al., Efficient Estimation of Word Representations in Vector Space, ICLR (2013).
- [4] Pennington et al., Glove: Global Vectors for Word Representation, EMNLP (2014).
- [5] Radford et al., Learning Transferable Visual Models From Natural Language Supervision, arXiv (2021).
- [6] Xian et al., Zero-Shot Learning - The Good, the Bad and the Ugly, CVPR (2017).
- [7] Lampert et al., Learning to Detect Unseen Object Classes by Between-Class Attribute Transfer, CVPR (2009).
- [8] Deng et al., ImageNet: A Large-Scale Hierarchical Image Database, CVPR (2009).

### Welcome to the Community Computer Vision Course
https://huggingface.co/learn/computer-vision-course/unit0/welcome/welcome.md

# Welcome to the Community Computer Vision Course

Dear learner,

Welcome to the **community-driven course on computer vision**. Computer vision is revolutionizing our world in many ways, from unlocking phones with facial recognition to analyzing medical images for disease detection, monitoring wildlife, and creating new images. Together, we'll dive into the fascinating world of computer vision!

Throughout this course, we'll cover everything from the basics to the latest advancements in computer vision. It's structured to include various foundational topics, making it friendly and accessible for everyone. We're delighted to have you join us for this exciting journey!

On this page, you can find how to join the learners community and more details about the course!

## Certification 🥇

Sorry, but currently we don't offer certification for this course. If you want to get involved in building a way for people to prove what they have learned in this course and make it a highly automated process, feel free to open a discussion or an issue.

## Join the community!

We invite you to be a part of [our active and supportive Discord community](http://hf.co/join/discord), where engaging conversations and shared interests flourish every day and where this course started. You will find peers with whom you can exchange ideas and resources. It is your source to collaborate, get feedback, and ask questions!

It is also a good way to motivate yourself to follow the course. Joining our community is an excellent way to stay engaged. Who knows what is the next thing we will build together?

As AI continues to advance, so does the quality of our discussions and the diversity of perspectives within our community. Upon becoming a member, you'll have an opportunity to connect with fellow course participants, exchange ideas, and collaborate with others. Moreover, the contributors to this course are active on Discord and might help you when needed. Join us now!

## Computer Vision Channels

There are many channels focused on various topics on our Discord server. You will find people discussing papers, organizing events, sharing their projects and ideas, brainstorming, and so much more.

As a computer vision course learner, you may find the following set of channels particularly relevant:

- `#computer-vision`: a catch-all channel for everything related to computer vision
- `#cv-study-group`: a place to exchange ideas, ask questions about specific posts and start discussions
- `#3d`: a channel to discuss aspects of computer vision specific to 3D computer vision

If you are interested in generative AI, we also invite you to join all channels related to the Diffusion Models: #core-announcements, #discussions, #dev-discussions, and #diff-i-made-this.

## What you will learn

The course is composed of theory, practical tutorials, and engaging challenges.

- **Theory Part** : This section covers the theoretical principles of computer vision, explained in detail with practical examples.
- **Hands-on Tutorials** : You will learn how to train and apply key computer vision models using Google Colab notebooks.

Throughout this course, we will cover everything from the basics to the latest advancements in computer vision. It is structured to include various foundational topics, giving you a comprehensive understanding of what makes computer vision so impactful today.

## Pre-requisites

Before beginning this course, make sure that you have some experience with Python programming and are familiar with transformers, machine learning, and neural networks. If these are new to you, consider reviewing the [first unit of the Hugging Face NLP course](https://huggingface.co/learn/nlp-course/chapter1/3?fw=pt). While a strong knowledge of pre-processing techniques and mathematical operations like convolutions is beneficial, they are not prerequisites.

## Course Structure

The course is organized into multiple units, covering the fundamentals and delving into an in-depth exploration of state-of-the-art models.

- **Unit 1 - Fundamentals of Computer Vision** : this unit covers the essential concepts to get started with computer vision: the need for computer vision, the field's basics, and its applications. Explore image fundamentals, formation, and preprocessing, along with key aspects of feature extraction.
- **Unit 2 - Convolutional Neural Networks (CNNs)** : delve into the world of CNNs, understanding their general architecture, key concepts, and common pre-trained models. Learn how to apply transfer learning and fine-tuning to adapt CNNs for various tasks.
- **Unit 3 - Vision Transformers** : explore transformer architecture in the context of computer vision and learn how they compare to CNNs. Understand common vision transformers such as Swin, DETR, and CVT, along with techniques for transfer learning and fine-tuning.
- **Unit 4 - Multimodal Models** : understand the fusion of text and vision by exploring multimodal tasks like image-to-text and text-to-image. Study models such as CLIP and its relatives (GroupViT, BLIP, OWL-ViT), and master transfer learning techniques for multimodal tasks.
- **Unit 5 - Generative Models** : explore generative models, including GANs, VAEs, and diffusion models. Learn about their differences and applications in tasks such as text-to-image, image-to-image, and inpainting.
- **Unit 6 - Basic Computer Vision Tasks** : cover fundamental tasks like image classification, object detection, and segmentation and the models used in them (YOLO, SAM). Gain insights into metrics and practical applications for these tasks.
- **Unit 7 - Video and Video Processing** : examine the characteristics of videos, the role of video processing, and the challenges compared to image processing. Explore temporal continuity, motion estimation, and practical applications in video processing.
- **Unit 8 - 3D Vision, Scene Rendering, and Reconstruction** : delve into the complexities of three-dimensional vision, exploring concepts like NeRF and GQN for scene rendering and reconstruction. Understand the challenges and applications of 3D vision in computer vision, and how it provides an even more comprehensive view of spatial information.
- **Unit 9 - Model Optimization** : explore the critical aspects of model optimization. Cover techniques such as model compression, deployment considerations, and the usage of tools and frameworks. Include topics like distillation, pruning, and TinyML for efficient model deployment.
- **Unit 10 - Synthetic Data Creation** : discover the importance of synthetic data creation using deep generative models. Explore methods like point clouds and diffusion models and investigate major synthetic datasets and their applications in computer vision.
- **Unit 11 - Zero Shot Computer Vision** : delve into the realm of zero-shot learning in computer vision, covering aspects of generalization, transfer learning, and its applications in tasks such as zero-shot recognition and image segmentation. Explore the relationship between zero-shot learning and transfer learning across various computer vision domains.
- **Unit 12 - Ethics and Biases in Computer Vision** : understand the ethical considerations specific to computer vision. Explore why ethics matter, how biases can infiltrate AI models, and the types of biases prevalent in these domains. Learn how to do bias evaluation and mitigation strategies, emphasizing responsible development and deployment of AI technologies.
- **Unit 13 - Outlook and Emerging Trends** : explore current trends and emerging architectures. Delve into innovative approaches like Retentive Network, Hiera, Hyena, I-JEPA, and Retention Vision Models.

## Meet our team

This course is made by the Hugging Face Community with love 💜! Join us by adding your contribution [on GitHub](https://github.com/huggingface/computer-vision-course).
Our goal was to create a computer vision course that is beginner-friendly and that can act as a resource for others. More than 60 people from all over the world joined forces to make this project happen. Here we give them credit:

**Unit 1 - Fundamentals of Computer Vision**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Ameed Taylor](https://github.com/atayloraerospace), [Sergio Paniego](https://github.com/sergiopaniego)
- Writers: [Seshu Pavan Mutyala](https://github.com/seshupavan), [Isabella Bicalho-Frazeto](https://github.com/bellabf), [Aman Kapoor](https://github.com/aman06012003), [Tiago Comassetto Fróes](https://github.com/froestiago), [Aditya Mishra](https://github.com/adityaiiitr), [Kerem Delikoyun](https://github.com/krmdel), [Ker Lee Yap](https://github.com/klyap), [Kathy Fahnline](https://github.com/kfahn22), [Ameed Taylor](https://github.com/atayloraerospace)

**Unit 2 - Convolutional Neural Networks (CNNs)**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Mohammed Hamdy](https://github.com/mmhamdy), [Sezan](https://github.com/sezan92), [Joshua Adrian Cahyono](https://github.com/JvThunder), [Murtaza Nazir](https://github.com/themurtazanazir), [Albert Kao](https://github.com/albertkao227), [Sitam Meur](https://github.com/sitamgithub-MSIT), [Antonis Stellas](https://github.com/AntonisCSt), [Sergio Paniego](https://github.com/sergiopaniego)
- Writers: [Emre Albayrak](https://github.com/emre570), [Caroline Shamiso Chitongo](https://github.com/ShamieCC), [Sezan](https://github.com/sezan92), [Joshua Adrian Cahyono](https://github.com/JvThunder), [Murtaza Nazir](https://github.com/themurtazanazir), [Albert Kao](https://github.com/albertkao227), [Isabella Bicalho-Frazeto](https://github.com/bellabf), [Aman Kapoor](https://github.com/aman06012003), [Sitam Meur](https://github.com/sitamgithub-MSIT)

**Unit 3 - Vision Transformers**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Mohammed Hamdy](https://github.com/mmhamdy), [Ameed Taylor](https://github.com/atayloraerospace), [Sezan](https://github.com/sezan92)
- Writers: [Surya Guthikonda](https://github.com/SuryaKrishna02), [Ker Lee Yap](https://github.com/klyap), [Anindyadeep Sannigrahi](https://bento.me/anindyadeep), [Celina Hanouti](https://github.com/hanouticelina), [Malcolm Krolick](https://github.com/Mkrolick), [Alvin Li](https://github.com/alvanli), [Shreyas Daniel Gaddam](https://shreydan.github.io), [Anthony Susevski](https://github.com/asusevski), [Alan Ahmet](https://github.com/alanahmet), [Ghassen Fatnassi](https://github.com/ghassen-fatnassi)

**Unit 4 - Multimodal Models**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Snehil Sanyal](https://github.com/snehilsanyal), [Mohammed Hamdy](https://github.com/mmhamdy), [Charchit Sharma](https://github.com/charchit7), [Ameed Taylor](https://github.com/atayloraerospace), [Isabella Bicalho-Frazeto](https://github.com/bellabf)
- Writers: [Snehil Sanyal](https://github.com/snehilsanyal), [Surya Guthikonda](https://github.com/SuryaKrishna02), [Mateusz Dziemian](https://github.com/mattmdjaga), [Charchit Sharma](https://github.com/charchit7), [Evstifeev Stepan](https://github.com/minemile), [Jeremy Kespite](https://github.com/jeremy-k3/), [Isabella Bicalho-Frazeto](https://github.com/bellabf), [Pedro Gabriel Gengo Lourenco](https://github.com/pedrogengo)

**Unit 5 - Generative Models**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [William Bonvini](https://github.com/WilliamBonvini), [Mohammed Hamdy](https://github.com/mmhamdy), [Ameed Taylor](https://github.com/atayloraerospace)
- Writers: [Jeronim Matijević](https://github.com/jere357), [Mateusz Dziemian](https://github.com/mattmdjaga), [Charchit Sharma](https://github.com/charchit7), [Muhammad Waseem](https://github.com/hwaseem04)

**Unit 6 - Basic Computer Vision Tasks**

- Reviewers: [Adhi Setiawan](https://github.com/adhiiisetiawan)
- Writers: [Adhi Setiawan](https://github.com/adhiiisetiawan), [Bastien Pouëssel](https://github.com/Skower)

**Unit 7 - Video and Video Processing**

- Reviewers: [Ameed Taylor](https://github.com/atayloraerospace), [Isabella Bicalho-Frazeto](https://github.com/bellabf)
- Writers: [Diwakar Basnet](https://github.com/DiwakarBasnet), [Chulhwa Han](https://github.com/cjfghk5697), [Woojun Jung](https://github.com/jungnerd), [Jiwook Han](https://github.com/mreraser), [Mingi Kim](https://github.com/1kmmk1)

**Unit 8 - 3D Vision, Scene Rendering, and Reconstruction**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [William Bonvini](https://github.com/WilliamBonvini), [Mohammed Hamdy](https://github.com/mmhamdy), [Adhi Setiawan](https://github.com/adhiiisetiawan), [Ameed Taylor](https://github.com/atayloraerospace0)
- Writers: [John Fozard](https://github.com/jfozard), [Vasu Gupta](https://github.com/vasugupta9), [Psetinek](https://github.com/psetinek)

**Unit 9 - Model Optimization**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Mohammed Hamdy](https://github.com/mmhamdy), [Adhi Setiawan](https://github.com/adhiiisetiawan), [Ameed Taylor](https://github.com/atayloraerospace)
- Writer: [Adhi Setiawan](https://github.com/adhiiisetiawan)

**Unit 10 - Synthetic Data Creation**

- Reviewers: [Mohammed Hamdy](https://github.com/mmhamdy), [Ameed Taylor](https://github.com/atayloraerospace), [Bhavesh Misra](https://github.com/Zekrom-7780)
- Writers: [William Bonvini](https://github.com/WilliamBonvini), [Alper Balbay](https://github.com/alperiox), [Madhav Kumar](https://github.com/miniMaddy), [Bhavesh Misra](https://github.com/Zekrom-7780), [Kathy Fahnline](https://github.com/kfahn22)

**Unit 11 - Zero Shot Computer Vision**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Mohammed Hamdy](https://github.com/mmhamdy), [Albert Kao](https://github.com/albertkao227), [Isabella Bicalho-Frazeto](https://github.com/bellabf)
- Writers: [Mohammed Hamdy](https://github.com/mmhamdy), [Albert Kao](https://github.com/albertkao227)

**Unit 12 - Ethics and Biases in Computer Vision**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Mohammed Hamdy](https://github.com/mmhamdy), [Charchit Sharma](https://github.com/charchit7), [Adhi Setiawan](https://github.com/adhiiisetiawan), [Ameed Taylor](https://github.com/atayloraerospace), [Bhavesh Misra](https://github.com/Zekrom-7780)
- Writers: [Snehil Sanyal](https://github.com/snehilsanyal), [Bhavesh Misra](https://github.com/Zekrom-7780)

**Unit 13 - Outlook and Emerging Trends**

- Reviewers: [Ratan Prasad](https://github.com/ratan), [Ameed Taylor](https://github.com/atayloraerospace), [Mohammed Hamdy](https://github.com/mmhamdy)
- Writers: [Farros Alferro](https://github.com/farrosalferro), [Mohammed Hamdy](https://github.com/mmhamdy), [Louis Ulmer](https://github.com/lulmer), [Dario Wisznewer](https://github.com/dariowsz), [gonzachiar](https://github.com/gonzachiar)

**Organisation Team**
[Merve Noyan](https://github.com/merveenoyan), [Adam Molnar](https://github.com/lunarflu), [Johannes Kolbe](https://github.com/johko)

We are happy to have you here, let's get started!

### Table of Contents for Notebooks
https://huggingface.co/learn/computer-vision-course/unit0/welcome/TableOfContents.md

# Table of Contents for Notebooks

Here you can find a list of notebooks that contain accompanying and hands-on material to the chapters you find in this course.
Feel free to browse them at your own speed and interest.

| Chapter Title                                           | Notebooks                                                                                                                                                                                                                             | Colabs                                                                                                                                                                                                                                                           |
| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Unit 0 - Welcome                                        | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 1 - Fundamentals                                   | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 2 - Convolutional Neural Networks                  | [Transfer Learning with VGG19](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%202%20-%20Convolutional%20Neural%20Networks/transfer_learning_vgg19.ipynb)                                                    | [Transfer Learning with VGG](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%202%20-%20Convolutional%20Neural%20Networks/transfer_learning_vgg19.ipynb)                                                      |
|                                                         | [Using ResNet with timm](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%202%20-%20Convolutional%20Neural%20Networks/timm_Resnet.ipynb)                                                                      | [timm_Resnet](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%202%20-%20Convolutional%20Neural%20Networks/timm_Resnet.ipynb)                                                                                 |
| Unit 3 - Vision Transformers                            | [Detection Transformer (DETR)](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/DETR.ipynb)                                                                                   | [Detection Transformer (DETR)](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/DETR.ipynb)                                                                                   |
|                                                         | [Fine-tuning Vision Transformers for Object Detection](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/Fine-tuning%20Vision%20Transformers%20for%20Object%20detection.ipynb) | [Fine-tuning Vision Transformers for Object Detection](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/Fine-tuning%20Vision%20Transformers%20for%20Object%20detection.ipynb) |
|                                                         | [Knowledge Distillation](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/KnowledgeDistillation.ipynb)                                                                        | [Knowledge Distillation](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/KnowledgeDistillation.ipynb)                                                                        |
|                                                         | [LoRA Fine-tuning for Image Classification](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/LoRA-Image-Classification.ipynb)                                                 | [LoRA Fine-tuning for Image Classification](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/LoRA-Image-Classification.ipynb)                                                 |
|                                                         | [Fine-tuning for Multilabel Image Classification](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/fine-tuning-multilabel-image-classification.ipynb)                         | [Fine-tuning for Multilabel Image Classification](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/fine-tuning-multilabel-image-classification.ipynb)                         |
|                                                         | [Transfer Learning for Image Classification](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/transfer-learning-image-classification.ipynb)                                   | [Transfer Learning for Image Classification](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/transfer-learning-image-classification.ipynb)                                   |
|                                                         | [Transfer Learning for Image Segmentation](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/transfer-learning-segmentation.ipynb)                                             | [Transfer Learning for Image Segmentation](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/transfer-learning-segmentation.ipynb)                                             |
|                                                         | [Swin Transformer](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/Swin.ipynb)                                                                                               | [Swin Transformer](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/Swin.ipynb)                                                                                               |
| Unit 4 - Multimodal Models                              | [Clip Crop](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/ClipCrop.ipynb)                                                                                                    | [Clip Crop](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/ClipCrop.ipynb)                                                                                                    |
|                                                         | [Fine-tuning CLIP](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/Clip_finetune.ipynb)                                                                                        | [Fine-tuning CLIP](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/Clip_finetune.ipynb)                                                                                        |
|                                                         | [Clustering with CLIP](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Clustering%20with%20CLIP.ipynb)                                                  | [Clustering with CLIP](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Clustering%20with%20CLIP.ipynb)                                                  |
|                                                         | [Image Classification with CLIP](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image%20classification%20with%20CLIP.ipynb)                            | [Image Classification with CLIP](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image%20classification%20with%20CLIP.ipynb)                            |
|                                                         | [Image Retrieval with Prompts](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image_retrieval_with_prompts.ipynb)                                      | [Image Retrieval with Prompts](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image_retrieval_with_prompts.ipynb)                                      |
|                                                         | [Image Similarity](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image_similarity.ipynb)                                                              | [Image Similarity](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/CLIP%20and%20relatives/Image_similarity.ipynb)                                                              |
| Unit 5 - Generative Models                              | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 6 - Basic CV Tasks                                 | [Fine-tune SAM on Custom Dataset]()                             | [Fine-tune SAM on Custom Dataset]()                             |
| Unit 7 - Video and Video Processing                     | [Fine-tune ViViT for Video Classification](https://github.com/DiwakarBasnet/computer-vision-course/blob/unit-7_Video_and_VideoProcessing/notebooks/Unit%207%20-%20Video%20and%20Video%20Processing/Vivit_Fine_tuned_Video_Classification.ipynb)                                                                                                                                                                                                                           | [Fine-tune ViViT for Video Classification](https://github.com/DiwakarBasnet/computervisioncourse/blob/unit7_Video_and_VideoProcessing/notebooks/Unit%207%20%20Video%20and%20Video%20Processing/Vivit_Fine_tuned_Video_Classification.ipynb)                                                                                                                                                                                                                                                         |
| Unit 8 - 3D Vision, Scene Rendering, and Reconstruction | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 9 - Model Optimization                             | [Edge TPU](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/edge_tpu.ipynb)                                                                                                    | [Edge TPU](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/edge_tpu.ipynb)                                                                                                    |
|                                                         | [ONNX](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/onnx.ipynb)                                                                                                            | [ONNX](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/onnx.ipynb)                                                                                                            |
|                                                         | [OpenVINO](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/openvino.ipynb)                                                                                                    | [OpenVINO](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/openvino.ipynb)                                                                                                    |
|                                                         | [Optimum](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/optimum.ipynb)                                                                                                      | [Optimum](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/optimum.ipynb)                                                                                                      |
|                                                         | [TensorRT](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tensorrt.ipynb)                                                                                                    | [TensorRT](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tensorrt.ipynb)                                                                                                    |
|                                                         | [TMO](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tmo.ipynb)                                                                                                              | [TMO](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tmo.ipynb)                                                                                                              |
|                                                         | [Torch](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/torch.ipynb)                                                                                                          | [Torch](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/torch.ipynb)                                                                                                          |
| Unit 10 - Synthetic Data Creation                       | [Dataset Labeling with OWLv2](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/OWLV2_labeled_image_dataset_with_annotations.ipynb)                                     | [Dataset Labeling with OWLv2](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/OWLV2_labeled_image_dataset_with_annotations.ipynb)                                     |
|                                                         | [Generating Synthetic Lung Images](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/Synthetic_lung_images_hf_course.ipynb)                                             | [Generating Synthetic Lung Images](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/Synthetic_lung_images_hf_course.ipynb)                                             |
|                                                         | [BlenderProc Examples](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/blenderproc_examples.ipynb)                                                                    | [BlenderProc Examples](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/blenderproc_examples.ipynb)                                                                    |
|                                                         | [Image Labeling with BLIP-2](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/image_labeling_BLIP_2.ipynb)                                                             | [Image Labeling with BLIP-2](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/image_labeling_BLIP_2.ipynb)                                                             |
|                                                         | [Synthetic Data Creation with SDXL Turbo](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/synthetic_data_creation_sdxl_turbo.ipynb)                                   | [Synthetic Data Creation with SDXL Turbo](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%2010%20-%20Synthetic%20Data%20Creation/synthetic_data_creation_sdxl_turbo.ipynb)                                   |
| Unit 11 - Zero Shot Computer Vision                     | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 12 - Ethics and Biases                             | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |
| Unit 13 - Outlook                                       | No Notebook                                                                                                                                                                                                                           | No Colab                                                                                                                                                                                                                                                         |

### Object Detection
https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/object_detection.md

# Object Detection

In this chapter, we'll explore the fascinating world of object detection, a vital task in modern computer vision systems. We will demystify essential concepts, discuss popular methods, examine real-world applications, and review evaluation metrics. By the end, you'll have a solid foundation and be ready to venture further into advanced topics.

![Image displaying the bounding boxes around multiple objects in the frame along with the confidence score of their classification](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Object_Detection.png)

## Object Detection Overview

### Introduction

Object detection is the task of identifying and locating specific objects within digital images or video frames. It has far-reaching implications across diverse sectors, including self-driving cars, facial recognition systems, and medical diagnosis tools. 

### Classification vs Localization

Classification distinguishes objects based on unique attributes, while localization determines an object's location within an image. Object detection combines both approaches, locating entities and assigning corresponding class labels. Imagine recognizing different fruit types and pinpointing their exact locations in a single image. That's object detection at play!

## Use Cases

Object detection impacts numerous industries, offering valuable insights and automation opportunities. Representative examples include autonomous vehicles navigating roads, surveillance systems covering vast public spaces, healthcare imaging systems detecting diseases, manufacturing plants maintaining output consistency, and augmented reality enriching user experiences.

Here is an example of object detection using transformers:
```python
from transformers import pipeline
from PIL import Image

# Load a pre-trained DETR model through the object-detection pipeline
pipe = pipeline("object-detection", model="facebook/detr-resnet-50")

# Open a local image and make sure it is in RGB format
image = Image.open("path/to/your/image.jpg").convert("RGB")

# Each prediction contains a class label, a confidence score, and a bounding box
bounding_boxes = pipe(image)
print(bounding_boxes)
```

## How to Evaluate an Object Detection Model?

You have now seen how to use an object detection model, but how can you evaluate it? As demonstrated in the previous section, object detection is primarily a supervised learning task. This means that the dataset is composed of images and their corresponding bounding boxes, which serve as the ground truth. A few metrics can be used to evaluate your model. The most common ones are:

- **The Intersection over Union (IoU) or Jaccard index** measures the overlap between the predicted and ground-truth boxes as a percentage ranging from 0% to 100%. Higher IoU percentages indicate better alignment, i.e., improved accuracy. It is useful, for example, when assessing tracker performance under changing conditions, such as following wild animals during migration. A minimal IoU computation is sketched right after this list.

- **Mean Average Precision (mAP)** estimates object detection quality using both precision (the ratio of correct predictions) and recall (the ability to identify true positives). Calculated across varying IoU thresholds, mAP serves as a holistic assessment of object detection algorithms. It is helpful for judging localization and detection performance in challenging conditions, such as finding irregular surface defects that vary in size and shape in a manufactured part.

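To make the IoU definition concrete, here is a minimal sketch of computing IoU for two axis-aligned boxes given as `(x_min, y_min, x_max, y_max)`; the helper name and the example boxes are illustrative and not part of any library:

```python
def box_iou(box_a, box_b):
    """Compute IoU between two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Width and height are clamped at 0 when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0


# Example: a hypothetical predicted box vs. a ground-truth box
print(box_iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39
```
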
## Conclusion and Future Work

Understanding object detection lays the groundwork for mastering advanced computer vision techniques, enabling the construction of powerful and accurate solutions that address demanding needs. Future research areas include developing lightweight object detection models that are fast and easy to deploy. Object detection in 3D space, e.g., for augmented reality applications, is another promising avenue.

## References and Additional Resources

- [Hugging Face Object Detection Guide](https://huggingface.co/docs/transformers/tasks/object_detection)
- [Object Detection in 20 Years: A Survey](https://arxiv.org/abs/1905.05055)
- [Papers with Code - Real-Time Object Detection](https://paperswithcode.com/task/real-time-object-detection)
- [Papers with Code - Object Detection](https://paperswithcode.com/task/object-detection)

### Introduction
https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/introduction.md

# Introduction

So far you have learned a lot about different Neural Network architectures for Computer Vision, from CNNs to Transformers, Multimodal architectures and Generative AI.
This unit is meant to give you a better overview of basic Computer Vision tasks, such as _Image Classification_, _Object Detection_ and _Image Segmentation_.

The goal is to get a better understanding of what exactly these tasks are about and which subcategories exist (e.g., Semantic or Instance Segmentation).
We will also highlight popular datasets for these tasks and how they are evaluated. And, of course, we will talk about some of the most popular models that are used for the respective tasks.

## Contributions Welcome

You will notice that this unit so far is a bit short on content. If you want to change that, you are happily invited to join our efforts and have a look at the [Contribution Guidelines](../../../CONTRIBUTING.md).

### Image Segmentation
https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/segmentation.md

# Image Segmentation

Image segmentation is the task of dividing an image into meaningful segments by creating masks that highlight each object in the picture.
The intuition behind this task is *that it can be viewed as a classification for each pixel of the image*.
Segmentation models are core models in various industries, from agriculture to autonomous driving. In farming, these models are used
to identify different land sections and assess the growth stage of crops. They are also key components of self-driving cars, where
they are used to identify lanes, sidewalks, and other road users.

![Image segmentation](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/segmentation-example.png)

Different types of segmentations can be applied depending on the context and the intended goal.
The most commonly defined segmentations are the following.
- **Semantic Segmentation**:  This involves assigning the most probable class to each pixel. For example, in semantic segmentation, 
the model does not distinguish between two individual cats but rather focuses on the pixel class. It's all about classification of 
each pixel.
- **Instance Segmentation**: This type involves identifying each instance of an object with a unique mask. It combines aspects of 
object detection and segmentation to differentiate between individual objects of the same class.
- **Panoptic Segmentation**: A hybrid approach that combines elements of semantic and instance segmentation. It assigns a class and 
an instance to each pixel, effectively integrating the *what* and *where* aspects of the image.

![Comparison of segmentation types](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/segmentation-types.png)

Choosing the right segmentation type depends on the context and the intended goal. One cool thing is that recent models allow you to achieve all three
segmentation types with a single model. We recommend you check out this [article](https://huggingface.co/blog/mask2former), which introduces Mask2Former,
a model by Meta that achieves all three segmentation types while being trained only on a panoptic dataset.

### Modern Approach: Vision Transformer-based Segmentation

You've probably heard of U-Net, a popular network used for image segmentation. It's designed with several convolutional layers and works 
in two main phases: the downsampling phase, which compresses the image to understand its features, and the upsampling phase, which expands 
the image back to its original size for detailed segmentation.

Computer vision was once dominated by convolutional models, but it has recently shifted towards the vision transformer approach.
An example is the *[Segment Anything Model (SAM)](https://arxiv.org/abs/2304.02643)*, a popular prompt-based model introduced
in April 2023 by *Meta AI Research, FAIR*. The model is based on the Vision Transformer (ViT) architecture and focuses on creating a promptable
(i.e., you can provide a prompt indicating what you would like to segment in the image) segmentation model capable of
zero-shot transfer to new images. The strength of the model comes from its training on the largest segmentation dataset available, which includes over
1 billion masks on 11 million images. I recommend you play with [Meta's demo](https://segment-anything.com/) on a few images and, even
better, you can play with the [model](https://huggingface.co/ybelkada/segment-anything) in transformers.

Here is an example of how to use the model in transformers. First, we will initialize the `mask-generation` pipeline. 
Then, we will pass the image to the pipeline for inference.

```python
from transformers import pipeline
from PIL import Image

# Load SAM through the mask-generation pipeline (device=0 selects the first GPU)
pipe = pipeline("mask-generation", model="facebook/sam-vit-base", device=0)

# Open a local image and make sure it is in RGB format
raw_image = Image.open("path/to/image").convert("RGB")

# The pipeline returns the generated masks along with their scores
masks = pipe(raw_image)
```

More details on how to use the model can be found in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/sam).

### How to Evaluate a Segmentation Model?

You have now seen how to use a segmentation model, but how can you evaluate it? As demonstrated in the previous section, segmentation is 
primarily a supervised learning task. This means that the dataset is composed of images and their corresponding masks, which serve as the 
ground truth. A few metrics can be used to evaluate your model. The most common ones are:

- **The Intersection over Union (IoU) or Jaccard index** metric is the ratio between the intersection and the union of the predicted mask and the ground truth. 
IoU is arguably the most common metric used in segmentation tasks. Its advantage lies in being less sensitive to class imbalance, making 
it often a good choice when you begin modeling.

![IoU](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/iou.png)

- **Pixel accuracy**: Pixel accuracy is calculated as the ratio of the number of correctly classified pixels to the total number of pixels. 
While being an intuitive metric, it can be misleading due to its sensitivity to class imbalance.

![Pixel accuracy](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/pixel-accuracy.png)

- **Dice coefficient**: The ratio between twice the intersection and the sum of the areas of the predicted mask and the ground truth.
The Dice coefficient is essentially the percentage of overlap between the prediction and the ground truth, and it is a good metric to use when
you need sensitivity to small differences in the overlap. A minimal sketch computing these three metrics on binary masks follows after this list.

![Dice coefficient](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/dice-coefficient.png)

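As a rough illustration of the three metrics above, here is a minimal NumPy sketch that computes IoU, pixel accuracy, and the Dice coefficient for a pair of binary masks; the function name and the toy masks are illustrative only:

```python
import numpy as np


def segmentation_metrics(pred, target):
    """Compute IoU, pixel accuracy, and Dice for two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()

    iou = intersection / union if union > 0 else 1.0
    pixel_accuracy = (pred == target).mean()
    dice = 2 * intersection / total if total > 0 else 1.0

    return {"iou": iou, "pixel_accuracy": pixel_accuracy, "dice": dice}


# Toy example: two 4x4 masks that partially overlap
pred = np.array([[1, 1, 0, 0]] * 4)
target = np.array([[0, 1, 1, 0]] * 4)
print(segmentation_metrics(pred, target))
```
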
## Resources and Further Reading

- [Segment Anything Paper](https://arxiv.org/abs/2304.02643)
- [Fine-tuning Segformer blog post](https://huggingface.co/blog/fine-tune-segformer)
- [Mask2former blog post](https://huggingface.co/blog/mask2former)
- [Hugging Face's documentation on segmentation tasks](https://huggingface.co/docs/transformers/main/tasks/semantic_segmentation)
- If you want to go deeper into the topic, we recommend you to check out Stanford's [lecture on segmentation](https://www.youtube.com/watch?v=nDPWywWRIRo).

### Metric and Relative Monocular Depth Estimation: An Overview. Fine-Tuning Depth Anything V2 👐 📚
https://huggingface.co/learn/computer-vision-course/unit8/monocular_depth_estimation.md

# Metric and Relative Monocular Depth Estimation: An Overview. Fine-Tuning Depth Anything V2 👐 📚

## Evolution of Models

Over the past decade, monocular depth estimation models have undergone remarkable advancements. Let's take a visual journey through this evolution.

We started with basic models like this:

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_estimation_evolution1.png)
Progressed to more sophisticated models:

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_estimation_evolution2.png)

And now, we have the state-of-the-art model, Depth Anything V2:

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_estimation_evolution3.png)

Impressive, isn't it?

Today, we will demystify how these models work and simplify complex concepts. Moreover, we will fine-tune our own model using a custom dataset. "*But wait,*" you might ask, "*why would we need to fine-tune a model on our own dataset when the latest model performs so well in any environment?*"

This is where the nuances and specifics come into play, which is precisely the focus of this article. If you're eager to explore the intricacies of monocular depth estimation, keep reading.

## The Basics

"*Okay, what exactly is depth?*" Typically, it's a single-channel image where each pixel represents the distance from the camera or sensor to a point in space corresponding to that pixel. However, it turns out that these distances can be absolute or relative — what a twist!
- **Absolute Depth**: Each pixel value directly corresponds to a physical distance (e.g., in meters or centimeters).
- **Relative Depth**: The pixel values indicate which points are closer or farther away without referencing real-world units of measurement. Often, relative depth is stored in inverted form, i.e., the smaller the number, the farther away the point (a short sketch of this conversion follows below).
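
To make the distinction tangible, here is a small illustrative sketch (assuming NumPy and a toy absolute depth map in meters) that converts absolute depth into the kind of normalized, inverted relative depth discussed later in this article:

```python
import numpy as np

# A toy absolute depth map in meters (each value is a distance from the camera)
absolute_depth = np.array([[1.0, 2.0],
                           [4.0, 8.0]])

# Invert: larger values now mean "closer", as in disparity-style relative depth
inverse_depth = 1.0 / absolute_depth

# Normalize to the 0-1 range so the map no longer carries real-world units
relative_depth = (inverse_depth - inverse_depth.min()) / (inverse_depth.max() - inverse_depth.min())

print(relative_depth)  # 1.0 for the closest pixel, 0.0 for the farthest
```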

We'll explore these concepts in more detail a bit later.

"*Well, but what does monocular mean?*" It simply means that we need to estimate depth using just a single photo. What’s so challenging about that? Take a look at this:

![image/gif](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_ambiguity1.gif)

![image/gif](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_ambiguity2.gif)

As you can see, projecting a 3D space onto a 2D plane can create ambiguity due to perspective. To address this, there are precise mathematical methods for depth estimation using multiple images, such as Stereo Vision, Structure from Motion, and the broader field of Photogrammetry. Additionally, technologies like laser scanners (e.g., LiDAR) can be used for depth measurement. 

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/stereo_vision_sfm.png)

## Relative and Absolute (aka Metric) Depth Estimation: What's the Point?

Let's explore some challenges that highlight the necessity of relative depth estimation. And to be more scientific, let's refer to some papers.

>The advantage of predicting metric depth is the practical utility for many downstream applications in computer vision and robotics, such as mapping, planning, navigation, object recognition, 3D reconstruction, and image editing. However, training a single metric depth estimation model across multiple datasets often deteriorates the performance, especially when the collection includes images with large differences in depth scale, e.g. indoor and outdoor images. As a result, current MDE models usually overfit to specific datasets and do not generalize well to other datasets.

Typically, the architecture for this image-to-image task is an encoder-decoder model, like a U-Net, with various modifications. Formally, this is a pixel-wise regression problem. Imagine how challenging it is for a neural network to accurately predict distances for each pixel, ranging from a few meters to several hundred meters.  This brings us to the idea of moving away from a universal model that predicts exact distances in all scenarios. Instead, let's develop a model that approximately (relatively) predicts depth, capturing the shape and structure of the scene by indicating which objects are farther and which are closer relative to each other and to us. If precise distances are needed, we can fine-tune this relative model on a specific dataset, leveraging its existing understanding of the task.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/different_scales.png)

There are even more details we have to pay attention to.

>The model not only has to handle images taken with different cameras and camera settings but also has to learn to adjust for the large variations in the overall scale of the scenes.

Apart from different scales, as we mentioned earlier, a significant problem lies in the cameras themselves, which can have vastly different perspectives of the world.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/focal_lenght.png)

Notice how changes in focal length dramatically alter the perception of background distances!

Lastly, many datasets lack absolute depth maps altogether and only have relative ones (for instance, due to the lack of camera calibration). Additionally, each method of obtaining depth has its own advantages, disadvantages, biases, and problems.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/metrics1.png)

>We identify three major challenges. 1) Inherently different representations of depth: direct vs. inverse depth representations. 2) Scale ambiguity: for some data sources, depth is only given up to an unknown scale. 3) Shift ambiguity: some datasets provide disparity only up to an unknown scale and global disparity shift that is a function of the unknown baseline and a horizontal shift of the principal points due to post-processing

*Disparity refers to the difference in the apparent position of an object when viewed from two different viewpoints, commonly used in stereo vision to estimate depth.*

In short, I hope I've convinced you that you can't just take scattered depth maps from the internet and train a model with them using some pixel-wise MSE.

But how do we equalize all these variations? How can we abstract as much as possible from the differences and extract commonalities from all these datasets — namely, the shape and structure of the scene, the proportional relationships between objects, indicating what is closer and what is farther away?

## Scale and Shift Invariant Loss 😎

Simply put, we need to perform some sort of normalization on all the depth maps we want to train on and evaluate metrics with. Here is an idea: we want to create a loss function that doesn't consider the scale of the environment or the various shifts. The remaining task is to translate this idea into mathematical terms.

>Concretely, the depth value is first transformed into the disparity space by \\( d = \frac{1}{t} \\) and then normalized to \\( 0 \sim 1 \\) on each depth map. To enable multi-dataset joint training, we adopt the affine-invariant loss to ignore the unknown scale and shift of each sample:
$$\mathcal{L}_1 = \frac{1}{HW} \sum_{i=1}^{HW} \rho(d_i^*, d_i),$$
where \\( d_i^* \\) and \\( d_i \\) are the prediction and ground truth, respectively. And \\( \rho \\) is the affine-invariant mean absolute error loss: \\( \rho(d_i^*, d_i) = \left| \hat{d}_i^* - \hat{d}_i \right| \\), where \\( \hat{d}_i^* \\) and \\( \hat{d}_i \\) are the scaled and shifted versions of the prediction \\( d_i^* \\) and ground truth \\( d_i \\):
$$\hat{d}_i = \frac{d_i - t(d)}{s(d)},$$
where \\( t(d) \\) and \\( s(d) \\) are used to align the prediction and ground truth to have zero translation and unit scale:
$$t(d) = \mathrm{median}(d), \quad s(d) = \frac{1}{HW} \sum_{i=1}^{HW} \left| d_i - t(d) \right|.$$
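
As a rough sketch of this affine-invariant loss (assuming PyTorch, and that `prediction` and `target` are already disparity maps of shape `(H, W)`; the function name is ours, not from the paper's code):

```python
import torch


def affine_invariant_loss(prediction, target):
    """Scale- and shift-invariant L1 loss between two disparity maps of shape (H, W)."""

    def align(d):
        # Zero translation: subtract the median; unit scale: divide by the mean absolute deviation
        t = d.median()
        s = (d - t).abs().mean()
        return (d - t) / (s + 1e-6)  # small epsilon added for numerical stability

    # Align prediction and ground truth independently, then take the mean absolute error
    return (align(prediction) - align(target)).abs().mean()


# Toy example with random disparity maps of different scale and shift
pred = torch.rand(480, 640)
gt = 3.0 * torch.rand(480, 640) + 1.0
print(affine_invariant_loss(pred, gt))
```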

In fact, there are many other methods and functions that help eliminate scale and shifts. There are also different additions to loss functions, such as gradient loss, which focuses not on the pixel values themselves but on how quickly they change (hence the name, gradient). You can read more about this in the [MiDaS](https://arxiv.org/abs/1907.01341) paper; I'll include a list of useful literature at the end. Let's briefly discuss metrics before moving on to the most exciting part: fine-tuning on absolute depth using a custom dataset.

## Metrics

In depth estimation, several standard metrics are used to evaluate performance, including MAE (Mean Absolute Error), RMSE (Root Mean Square Error), and their logarithmic variations to smooth out large gaps in distance. Additionally, consider the following:
- **Absolute Relative Error (AbsRel)**: This metric is similar to MAE but expressed in percentage terms, measuring how much the predicted distances differ from the true ones on average in percentage terms.  \\(\text{AbsRel} = \frac{1}{N} \sum_{i=1}^{N} \frac{|d_i - \hat{d}_i|}{d_i}\\)
- **Threshold Accuracy ( \\(\delta_1\\) )**: This measures the percentage of predicted pixels that differ from the true pixels by no more than 25%.  \\(\delta_1 = \text{proportion of predicted depths where } \max\left(\frac{d_i}{\hat{d}_i}, \frac{\hat{d}_i}{d_i}\right) < 1.25\\)

>For all our models and baselines, we align predictions and ground truth in scale and shift for each image before measuring errors.

Indeed, if we are training to predict relative depth but want to measure quality on a dataset with absolute values, and we are not interested in fine-tuning on this dataset or in the absolute values, we can, similar to the loss function, exclude scale and shift from the calculations and standardize everything to a unified measure.

### Four Methods of Calculating Metrics

Understanding these methods helps avoid confusion when analyzing metrics in papers:

1. **Zero-shot Relative Depth Estimation**
    - Train to predict relative depth on one set of datasets and measure quality on others. Since the depth is relative, significantly different scales are not a concern, and metrics on other datasets usually remain high, similar to the test sets of the training datasets.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/metrics2.png)

2. **Zero-shot Absolute Depth Estimation**
    - Train a universal relative model, then fine-tune it on a good dataset for predicting absolute depth, and measure the quality of absolute depth predictions on a different dataset. Metrics in this case tend to be worse compared to the previous method, highlighting the challenge of predicting absolute depth well across different environments.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/metrics3.png)

3. **Fine-tuned (In-domain) Absolute Depth Estimation**
    - Similar to the previous method, but now measure quality on the test set of the dataset used for fine-tuning absolute depth prediction. This is one of the most practical approaches.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/metrics4.png)

4. **Fine-tuned (In-domain) Relative Depth Estimation**
    - Train to predict relative depth and measure quality on the test set of the training datasets. This might not be the most precise name, but the idea is straightforward.

## Depth Anything V2 Absolute Depth Estimation Fine-Tuning

In this section, we will reproduce the results from the Depth Anything V2 paper by fine-tuning the model to predict absolute depth on the NYU-D dataset, aiming to achieve metrics similar to those shown in the last table from the previous section.

### Key Ideas Behind Depth Anything V2
Depth Anything V2 is a powerful model for depth estimation, achieving remarkable results due to several innovative concepts:

- **Universal Training Method on Heterogeneous Data**: This method, introduced in the MiDaS 2020 paper, enables robust training across various types of datasets.
- **DPT Architecture**: The "Vision Transformers for Dense Prediction" paper presents this architecture, which is essentially a U-Net with a Vision Transformer (ViT) encoder and several modifications.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/dpt.png)

- **DINOv2 Encoder**: This standard ViT, pre-trained using a self-supervised method on a massive dataset, serves as a powerful and versatile feature extractor. In recent years, CV researchers have aimed to create foundation models similar to GPT and BERT in NLP, and DINOv2 is a significant step in that direction.
- **Use of Synthetic Data**: The training pipeline is very well described in the image below. This approach allowed the authors to achieve such clarity and accuracy in the depth maps. After all, if you think about it, the labels obtained from synthetic data are truly “ground truth.”

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/DA2_pipeline.png)

### Getting Started with Fine-Tuning

Now, let's dive into the code. If you don't have access to a powerful GPU, I highly recommend using Kaggle over Colab. Kaggle offers several advantages:
- Up to 30 hours of GPU usage per week
- No connection interruptions
- Very fast and convenient access to datasets
- The ability to use two GPUs simultaneously in one of the configurations, which will help you practice distributed training

You can jump straight into the code using this [notebook on Kaggle](https://www.kaggle.com/code/amanattheedge/depth-anything-v2-metric-fine-tunning-on-nyu/notebook).

We'll go through everything in detail here. To start, let's download all the necessary modules from the authors' repository and the checkpoint of the smallest model with the ViT-S encoder.

#### Step 1: Clone the Repository and Download Pre-trained Weights

```bash
!git clone https://github.com/DepthAnything/Depth-Anything-V2
!wget -O depth_anything_v2_vits.pth https://huggingface.co/depth-anything/Depth-Anything-V2-Small/resolve/main/depth_anything_v2_vits.pth?download=true
```
You can also download the dataset [here](http://datasets.lids.mit.edu/fastdepth/data/).

#### Step 2: Import Required Modules
```python
import numpy as np
import matplotlib.pyplot as plt
import os
from tqdm import tqdm
import cv2
import random
import h5py

import sys

sys.path.append("/kaggle/working/Depth-Anything-V2/metric_depth")

from accelerate import Accelerator
from accelerate.utils import set_seed
from accelerate import notebook_launcher
from accelerate import DistributedDataParallelKwargs

import transformers

import torch
import torchvision
from torchvision.transforms import v2
from torchvision.transforms import Compose
import torch.nn.functional as F
import albumentations as A

from depth_anything_v2.dpt import DepthAnythingV2
from util.loss import SiLogLoss
from dataset.transform import Resize, NormalizeImage, PrepareForNet, Crop
```

#### Step 3: Get All File Paths for Training and Validation
```python
def get_all_files(directory):
    all_files = []
    for root, dirs, files in os.walk(directory):
        for file in files:
            all_files.append(os.path.join(root, file))
    return all_files

train_paths = get_all_files("/kaggle/input/nyu-depth-dataset-v2/nyudepthv2/train")
val_paths = get_all_files("/kaggle/input/nyu-depth-dataset-v2/nyudepthv2/val")
```
#### Step 4: Define the PyTorch Dataset
```python
# NYU Depth V2 40k. Original NYU is 400k
class NYU(torch.utils.data.Dataset):
    def __init__(self, paths, mode, size=(518, 518)):

        self.mode = mode  # train or val
        self.size = size
        self.paths = paths

        net_w, net_h = size
        # author's transforms
        self.transform = Compose(
            [
                Resize(
                    width=net_w,
                    height=net_h,
                    resize_target=True if mode == "train" else False,
                    keep_aspect_ratio=True,
                    ensure_multiple_of=14,
                    resize_method="lower_bound",
                    image_interpolation_method=cv2.INTER_CUBIC,
                ),
                NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
                PrepareForNet(),
            ]
            + ([Crop(size[0])] if self.mode == "train" else [])
        )

        # only horizontal flip in the paper
        self.augs = A.Compose(
            [
                A.HorizontalFlip(),
                A.ColorJitter(hue=0.1, contrast=0.1, brightness=0.1, saturation=0.1),
                A.GaussNoise(var_limit=25),
            ]
        )

    def __getitem__(self, item):
        path = self.paths[item]
        image, depth = self.h5_loader(path)

        if self.mode == "train":
            augmented = self.augs(image=image, mask=depth)
            image = augmented["image"] / 255.0
            depth = augmented["mask"]
        else:
            image = image / 255.0

        sample = self.transform({"image": image, "depth": depth})

        sample["image"] = torch.from_numpy(sample["image"])
        sample["depth"] = torch.from_numpy(sample["depth"])

        # some datasets also provide a mask of valid depth pixels (because of sensor noise, etc.)
        #         sample['valid_mask'] = ...

        return sample

    def __len__(self):
        return len(self.paths)

    def h5_loader(self, path):
        h5f = h5py.File(path, "r")
        rgb = np.array(h5f["rgb"])
        rgb = np.transpose(rgb, (1, 2, 0))
        depth = np.array(h5f["depth"])
        return rgb, depth
```
Here are a few points to note:
- The original NYU-D dataset contains 407k samples, but we are using a subset of 40k. This will slightly impact the final model quality.
- The authors of the paper used only horizontal flips for data augmentation.
- Occasionally, some points in the depth maps may not be processed correctly, resulting in "bad pixels". Some datasets include a mask that differentiates between valid and invalid pixels in addition to the image and depth map. This mask is necessary to exclude bad pixels from loss and metric calculations.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/depth_holes.png)

- During training, we resize images so that the smaller side is 518 pixels and then crop them. For validation, we do not crop or resize the depth maps. Instead, we upsample the predicted depth maps and compute metrics at the original resolution.

#### Step 5: Data Visualization
```python
num_images = 5

fig, axes = plt.subplots(num_images, 2, figsize=(10, 5 * num_images))

train_set = NYU(train_paths, mode="train")

for i in range(num_images):
    sample = train_set[i * 1000]
    img, depth = sample["image"].numpy(), sample["depth"].numpy()

    mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
    std = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
    img = img * std + mean

    axes[i, 0].imshow(np.transpose(img, (1, 2, 0)))
    axes[i, 0].set_title("Image")
    axes[i, 0].axis("off")

    im1 = axes[i, 1].imshow(depth, cmap="viridis", vmin=0)
    axes[i, 1].set_title("True Depth")
    axes[i, 1].axis("off")
    fig.colorbar(im1, ax=axes[i, 1])

plt.tight_layout()
```
![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/dataset.png)

As you can see, the images are very blurry and noisy. Because of this, we won't be able to get the fine-grained depth maps seen in the previews of Depth Anything V2. In the black "hole" artifacts, the depth is 0, and we will use this fact later to mask out these holes. Also, the dataset contains many nearly identical photos of the same location.

#### Step 6: Prepare Dataloaders
```python
def get_dataloaders(batch_size):

    train_dataset = NYU(train_paths, mode="train")
    val_dataset = NYU(val_paths, mode="val")

    train_dataloader = torch.utils.data.DataLoader(
        train_dataset, batch_size=batch_size, shuffle=True, num_workers=4, drop_last=True
    )

    val_dataloader = torch.utils.data.DataLoader(
        val_dataset,
        batch_size=1,  # for dynamic resolution evaluations without padding
        shuffle=False,
        num_workers=4,
        drop_last=True,
    )

    return train_dataloader, val_dataloader
```
#### Step 7: Metric Evaluation
```python
def eval_depth(pred, target):
    assert pred.shape == target.shape

    thresh = torch.max((target / pred), (pred / target))

    d1 = torch.sum(thresh < 1.25).float() / len(thresh)

    diff = pred - target
    diff_log = torch.log(pred) - torch.log(target)

    abs_rel = torch.mean(torch.abs(diff) / target)
    rmse = torch.sqrt(torch.mean(torch.pow(diff, 2)))
    mae = torch.mean(torch.abs(diff))
    silog = torch.sqrt(
        torch.pow(diff_log, 2).mean() - 0.5 * torch.pow(diff_log.mean(), 2)
    )

    return {"d1": d1, "abs_rel": abs_rel, "rmse": rmse, "mae": mae, "silog": silog}
```

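#### Step 8: Training Configuration

Below is a minimal configuration sketch. The variable names (`model_configs`, `model_encoder`, `max_depth`, `save_model_path`, `state_path`) are the ones the rest of the code expects, but the numeric values here are assumptions on my part; the exact hyperparameters are in the linked Kaggle notebook.

```python
# assumed hyperparameters; adjust to match the Kaggle notebook
model_encoder = "vits"  # we downloaded the ViT-S checkpoint above
model_configs = {
    "vits": {"encoder": "vits", "features": 64, "out_channels": [48, 96, 192, 384]},
}
max_depth = 10.0  # NYU-D is indoor, depths stay below ~10 m
batch_size = 8
lr = 5e-6
epochs = 10

save_model_path = "/kaggle/working/best_model.pth"
state_path = "/kaggle/working/accelerator_state"
```

#### Step 9: Training Function

The function below builds the metric-depth model, loads the relative-depth checkpoint, and runs the training loop with 🤗 Accelerate. It is a sketch assuming `SiLogLoss` (imported above), AdamW, and a cosine schedule from `transformers`; the loop continues with the `accelerator.backward(loss)` call shown right after it.

```python
def train_fn():
    set_seed(42)

    # some DPT head parameters may not receive gradients, hence find_unused_parameters
    ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
    accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

    train_dataloader, val_dataloader = get_dataloaders(batch_size)

    # build the metric-depth model and initialize it from the relative-depth checkpoint
    model = DepthAnythingV2(**{**model_configs[model_encoder], "max_depth": max_depth})
    model.load_state_dict(
        torch.load("depth_anything_v2_vits.pth", map_location="cpu"), strict=False
    )

    criterion = SiLogLoss()
    optim = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    scheduler = transformers.get_cosine_schedule_with_warmup(
        optim, num_warmup_steps=0, num_training_steps=epochs * len(train_dataloader)
    )

    model, optim, scheduler, train_dataloader, val_dataloader = accelerator.prepare(
        model, optim, scheduler, train_dataloader, val_dataloader
    )

    best_abs_rel = float("inf")

    for epoch in range(epochs):
        model.train()
        train_loss = 0.0
        for sample in tqdm(
            train_dataloader, disable=not accelerator.is_local_main_process
        ):
            optim.zero_grad()

            img, depth = sample["image"], sample["depth"]

            pred = model(img)
            # compute the loss only on valid pixels (the "holes" have depth == 0)
            loss = criterion(pred, depth, (depth >= 0.001))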
            accelerator.backward(loss)
            optim.step()
            scheduler.step()

            train_loss += loss.detach()

        train_loss /= len(train_dataloader)
        train_loss = accelerator.reduce(train_loss, reduction="mean").item()

        model.eval()
        results = {"d1": 0, "abs_rel": 0, "rmse": 0, "mae": 0, "silog": 0}
        for sample in tqdm(val_dataloader, disable=not accelerator.is_local_main_process):

            img, depth = sample["image"].float(), sample["depth"][0]

            with torch.no_grad():
                pred = model(img)
                # evaluate on the original resolution
                pred = F.interpolate(
                    pred[:, None], depth.shape[-2:], mode="bilinear", align_corners=True
                )[0, 0]

            valid_mask = (depth >= 0.001)

            cur_results = eval_depth(pred[valid_mask], depth[valid_mask])

            for k in results.keys():
                results[k] += cur_results[k]

        for k in results.keys():
            results[k] = results[k] / len(val_dataloader)
            results[k] = round(accelerator.reduce(results[k], reduction="mean").item(), 3)

        accelerator.wait_for_everyone()
        accelerator.save_state(state_path, safe_serialization=False)

        if results["abs_rel"] = 0.001**"
- During training cycles, we calculate the loss on resized depth maps. During validation, we upsample the predictions and compute metrics at the original resolution.
- And look how easily we can wrap custom PyTorch code for distributed computing using HF accelerate.

#### Step 10: Launch the Training
```python
# You can run this code with 1 gpu. Just set num_processes=1
notebook_launcher(train_fn, num_processes=2)
```

I believe we achieved our desired goal. The slight difference in performance can be attributed to the significant difference in dataset sizes (40k vs 400k). Keep in mind, we used the ViT-S encoder.

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/metrics5.png)

Let's look at some results:
```python
model = DepthAnythingV2(**{**model_configs[model_encoder], "max_depth": max_depth}).to(
    "cuda"
)
model.load_state_dict(torch.load(save_model_path))

num_images = 10

fig, axes = plt.subplots(num_images, 3, figsize=(15, 5 * num_images))

val_dataset = NYU(val_paths, mode="val")
model.eval()
for i in range(num_images):
    sample = val_dataset[i]
    img, depth = sample["image"], sample["depth"]

    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

    with torch.inference_mode():
        pred = model(img.unsqueeze(0).to("cuda"))
        pred = F.interpolate(
            pred[:, None], depth.shape[-2:], mode="bilinear", align_corners=True
        )[0, 0]

    img = img * std + mean

    axes[i, 0].imshow(img.permute(1, 2, 0))
    axes[i, 0].set_title("Image")
    axes[i, 0].axis("off")

    max_depth = max(depth.max(), pred.cpu().max())

    im1 = axes[i, 1].imshow(depth, cmap="viridis", vmin=0, vmax=max_depth)
    axes[i, 1].set_title("True Depth")
    axes[i, 1].axis("off")
    fig.colorbar(im1, ax=axes[i, 1])

    im2 = axes[i, 2].imshow(pred.cpu(), cmap="viridis", vmin=0, vmax=max_depth)
    axes[i, 2].set_title("Predicted Depth")
    axes[i, 2].axis("off")
    fig.colorbar(im2, ax=axes[i, 2])

plt.tight_layout()
```

![image/png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Metric%20and%20Relative%20Monocular%20Depth%20Estimation%20An%20Overview.%20Fine-Tuning%20Depth%20Anything%20V2/inference.png)

The images in the validation set are much cleaner and more accurate than those in the training set, which is why our predictions appear a bit blurry in comparison. Take another look at the training samples above.

In general, the key takeaway is that the model’s quality heavily depends on the quality of the provided depth maps. Kudos to the authors of Depth Anything V2 for overcoming this limitation and producing very sharp depth maps. The only drawback is that they are relative.

## References
- [Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer](https://arxiv.org/abs/1907.01341)
- [ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth](https://arxiv.org/abs/2302.12288)
- [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413)
- [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891)
- [Depth Anything V2](https://arxiv.org/abs/2406.09414)

### Introduction
https://huggingface.co/learn/computer-vision-course/unit8/3d_measurements_stereo_vision.md

# Introduction
This section explains how stereo vision works and how it can be used to find the 3-dimensional structure of surrounding objects. Stereo vision involves capturing two or more images of the same scene from varying positions and viewpoints. These images can be obtained using multiple cameras or by repositioning the same camera.

## Problem Statement 
Let's understand the problem statement of finding the 3D structure of objects by understanding the geometry of image formation. As shown in Figure 1, we have a point P in 3D with x, y, z coordinates. Point P gets projected to the camera's image plane via the pinhole. This can also be viewed as projecting a 3D point to a 2D image plane. 

Now, let's say we are given this 2D image and the location of the pixel coordinates of point P in this image. We want to find the 3D coordinates of point P. Is this possible? Is point P unique, or are there other 3D points that also map to the same pixel coordinates as point P? The answer is that all 3D points lying on the line joining point P and the pinhole map to the same pixel coordinates in the 2D image plane. 

We aim to solve the problem of determining the 3D structure of objects. In our problem statement, we can represent an object in 3D as a set of 3D points. Finding the 3D coordinates of each of these points helps us determine the 3D structure of the object.

  
  Figure 1: Image formation using a single camera

## Solution 
Let's assume we are given the following information:

1. Single image of a scene point P
2. Pixel coordinates of point P in the image
3. Position and orientation of the camera used to capture the image. For simplicity, we can also place an XYZ coordinate system at the location of the pinhole, with the z-axis perpendicular to the image plane and the x-axis and y-axis parallel to the image plane, as in Figure 1.
4. Internal parameters of the camera, such as focal length and location of principal point. The principal point is where the optical axis intersects the image plane. Its location in the image plane is usually denoted as (Ox,Oy).

With the information provided above, we can find a 3D line that originates from the pixel coordinates of point P (the projection of point P in the image plane), passes through the pinhole, and extends to infinity. Based on the principles of image formation geometry, we can conclude that point P must exist somewhere along this line.

1. Initially (without an image) point P could have been present anywhere in the 3D space. 
2. Using a single image, we reduced possible locations of point P to a single line in 3D. 
3. Now, let's consider whether we can further narrow down the potential locations to pinpoint the precise location of point P on this 3D line. 
4. Imagine moving the camera to a different position. Let the coordinate system remain fixed at the previous position. The 3D line we found also remains the same and point P still lies somewhere on this line.
5. From this new location of the camera, capture another image of the same scene point P. Once more, utilizing the pixel coordinates of point P within this new image and considering the updated location of the camera pinhole, find the 3D line on which point P must lie.
6. Now we have 2 lines in 3D and point P lies somewhere on both of these lines. So, point P must lie on the intersection of these 2 lines. 

Given 2 lines in 3D, there are three possibilities for their intersection:

1. Intersect at exactly 1 point
2. Intersect at infinite number of points
3. Do not intersect

If both images (with original and new camera positions) contain point P, we can conclude that the 3D lines must intersect at least once and that the intersection point is point P. Furthermore, we can envision infinite points where both lines intersect only if the two lines are collinear. This is achievable if the pinhole at the new camera position lies somewhere on the original 3D line. For all other positions and orientations of the new camera location, the two 3D lines must intersect precisely at one point, where point P lies. 

Therefore, using 2 images of the same scene point P, known positions and orientations of the camera locations, and known internal parameters of the camera, we can precisely find where point P lies in the 3D space.

## Simplified Solution 
Since there are many different positions and orientations for the camera locations which can be selected, we can select a location that makes the math simpler, less complex, and reduces computational processing when running on a computer or an embedded device. One configuration that is popular and generally used is shown in Figure 2. We use 2 cameras in this configuration, which is equivalent to a single camera for capturing 2 images from 2 different locations.

  
  Figure 2: Image formation using 2 cameras

1. Origin of the coordinate system is placed at the pinhole of the first camera which is usually the left camera.
2. Z axis of the coordinate system is defined perpendicular to the image plane. 
3. X and Y axis of the coordinate system are defined parallel to the image plane.  
4. We also have X and Y directions in a 2D image. X is the horizontal direction and Y is the vertical direction. We will refer to these directions in the image plane as u and v respectively. Therefore, pixel coordinates of a point are defined using (u,v) values.  
5. X axis of the coordinate system is defined as the u direction / horizontal direction in the image plane.
6. Similarly Y axis of the coordinate system is defined as the v direction / vertical direction in the image plane. 
7. Second camera (more precisely the pinhole of the second camera) is placed at a distance b called baseline in the positive x direction to the right of the first camera. Therefore, x,y,z coordinates of pinhole of second camera are (b,0,0).
8. Image plane of the second camera is oriented parallel to the image plane of the first camera.  
9. u and v directions in the image plane of the second/right camera are aligned with the u and v directions in the image plane of the first/left camera.
10. Both left and right cameras are assumed to have the same intrinsic parameters, like focal length and location of the principal point.

With the above configuration in place, we have the below equations which map a point in 3D to the image plane in 2D. 

1. Left camera  
    1.  \\(u\_left = f\_x * \frac{x}{z} + O\_x\\)   
    2.  \\(v\_left = f\_y * \frac{y}{z} + O\_y\\)   
  
2. Right camera   
    1.  \\(u\_right = f\_x * \frac{x-b}{z} + O\_x\\)   
    2.  \\(v\_right = f\_y * \frac{y}{z} + O\_y\\)    

Different symbols used in above equations are defined below:    
*  \\(u\_left\\), \\(v\_left\\) refer to pixel coordinates of point P in the left image.
*  \\(u\_right\\),  \\(v\_right\\) refer to pixel coordinates of point P in the right image.   
*  \\(f\_x\\) refers to the focal length (in pixels) in x direction and \\(f\_y\\) refers to the focal length (in pixels) in y direction. Actually, there is only 1 focal length for a camera which is the distance between the pinhole (optical center of the lens) to the image plane. However, pixels may be rectangular and not perfect squares, resulting in different fx and fy values when we represent f in terms of pixels.    
*  x, y, z are 3D coordinates of the point P (any unit like cm, feet, etc can be used).
*  \\(O\_x\\)  and  \\(O\_y\\)  refer to pixel coordinates of the principal point.   
*  b is called the baseline and refers to the distance between the left and right cameras. Same units are used for both b and x,y,z coordinates (any unit like cm, feet, etc can be used).   
  
We have 4 equations above and 3 unknowns - x, y and z coordinates of a 3D point P. Intrinsic camera parameters - focal lengths and principal point are assumed to be known. Equations 1.2 and 2.2 indicate that the v coordinate value in the left and right images is the same. 

3.  \\(v\_left = v\_right\\)    

Using equations 1.1, 1.2 and 2.1 we can derive the x,y,z coordinates of point P.    
     
4.  \\(x = \frac{b * (u\_left - O\_x)}{u\_left - u\_right}\\)    
5.  \\(y = \frac{b * f\_x * (v\_left - O\_y)}{ f\_y * (u\_left - u\_right)}\\)    
6.  \\(z = \frac{b * f\_x}{u\_left - u\_right}\\)     

Note that the x and y values above concern the left camera since the origin of the coordinate system is aligned with the left camera. The above equations show that we can find 3D coordinates of a point P using its 2 images captured from 2 different camera locations. z value is also referred to as the depth value. Using this technique, we can find the depth values for different pixels within an image and their real-world x and y coordinates. We can also find real-world distances between different points in an image. 

## Demo 
### Setup 
We'll work through an example, capture some images, and perform some calculations to find out if our above assumptions and math work out! For capturing the images, we'll use a device called the OAK-D Lite (OAK stands for OpenCV AI Kit). This device has 3 cameras: left and right mono (black and white) cameras and a center color camera. We'll use the left and right mono cameras for our experiment. A regular smartphone camera could also be used, but the OAK-D Lite has some advantages listed below.

* Intrinsic camera parameters like focal length and location of the principal point are known for an OAK-D Lite, since the device comes pre-calibrated, and these parameters can be read from the device using its Python API. For a smartphone camera, intrinsic parameters need to be determined; they can be found by performing camera calibration, or are sometimes present in the metadata of the image captured using the smartphone. 
* One of the main assumptions above is that the position and orientation of the left and right cameras are known. Using a smartphone camera, it may be difficult to determine this information, or additional calibration may be required. On the other hand, for an OAK-D Lite device, the position and orientation of the left and right cameras are fixed, known, pre-calibrated, and very similar to the geometry of the simplified solution mentioned above, although some post-processing/image rectification of the raw images (detailed below) is still required.

### Raw Left and Right Images
The left and right cameras in OAK-D Lite are oriented similarly to the geometry of the simplified solution detailed above. The baseline distance between the left and right cameras is 7.5cm. Left and right images of a scene captured using this device are shown below. The figure also shows these images stacked horizontally with a red line drawn at a constant height (i.e. at a constant v value ). We'll refer to the horizontal x-axis as u and the vertical y-axis as v. 

Raw Left Image

  

Raw Right Image 

  

Raw Stacked Left and Right Images 

  

Let's focus on a single point - the top left corner of the laptop. As per equation 3 above,  \\(v\_left = v\_right\\)  for the same point in the left and right images. However, notice that the red line, which is at a constant v value, touches the top-left corner of the laptop in the left image but misses this point by a few pixels in the right image. There are two main reasons for this discrepancy:

* The intrinsic parameters for the left and right cameras are different. The principal point for the left camera is at (319.13, 233.86), whereas it is (298.85, 245.52) for the right camera. The focal length for the left camera is 450.9, whereas it is 452.9 for the right camera. The values of fx are equal to fy for both the left and right cameras. These intrinsic parameters were read from the device using its Python API and could be different for different OAK-D Lite devices.
* Left and right camera orientations differ slightly from the geometry of the simplified solution detailed above. 

### Rectified Left and Right Images 
We can perform image rectification/post-processing to correct for differences in intrinsic parameters and orientations of the left and right cameras. This process involves performing 3x3 matrix transformations. In the OAK-D Lite API, a stereo node performs these calculations and outputs the rectified left and right images. Details and source code can be viewed [here](https://github.com/luxonis/depthai-experiments/blob/master/gen2-stereo-on-host/main.py). In this specific implementation, correction for intrinsic parameters is performed using intrinsic camera matrices, and correction for orientation is performed using rotation matrices (part of the calibration parameters) for the left and right cameras. The rectified left image is transformed as if the left camera had the same intrinsic parameters as the right one. Therefore, in all our following calculations, we'll use the intrinsic parameters for the right camera, i.e. a focal length of 452.9 and a principal point at (298.85, 245.52). In the rectified and stacked images below, notice that the red line at constant v touches the top-left corner of the laptop in both the left and right images.

Rectified Left Image

  

Rectified Right Image 

  

Rectified and Stacked Left and Right Images 

  

Let's also overlap the rectified left and right images to see the difference. We can see that the v values for different points remain mostly constant in the left and right images. However, the u values change, and this difference in the u values helps us find the depth information for different points in the scene, as shown in Equation 6 above. This difference in 'u' values \\(u\_left - u\_right\\) is called disparity, and we can notice that the disparity for points near the camera is greater compared to points further away. Depth z and disparity  \\(u\_left - u\_right\\)  are inversely proportional, as shown in equation 6.

Rectified and Overlapped Left and Right Images 

  

### Annotated Left and Right Rectified Images
Let's find the 3D coordinates for some points in the scene. A few points are selected and manually annotated with their (u,v) values, as shown in the figures below. Instead of manual annotations, we can also use template-based matching, feature detection algorithms like SIFT, etc for finding corresponding points in left and right images. 

Annotated Left Image 

  

Annotated Right Image 

  

### 3D Coordinate Calculations  
Twelve points are selected in the scene, and their (u,v) values in the left and right images are tabulated below. Using equations 4, 5, and 6, (x,y,z) coordinates for these points are also calculated and tabulated below. The X and Y coordinates are with respect to the left camera, whose origin is at the left camera's pinhole (or the optical center of the lens). Therefore, 3D points to the left of and above the pinhole have negative X and Y values, respectively.

| point    |   \\(u\_left\\)  |   \\(v\_left\\)  |   \\(u\_right\\)  |   \\(v\_right\\)  |   depth/z(cm)  |   \\(x\_wrt\_left\\)|   \\(y\_wrt\_left\\)  |
|:--------:|:---------:|:---------:|:----------:|:----------:|:--------------:|:-----------------:|:-----------------:|
| pt1     |      138 |      219 |       102 |       219 |         94.36 |           -33.51 |            -5.53 |
| pt2     |      264 |      216 |       234 |       217 |        113.23 |            -8.72 |            -7.38 |
| pt3     |      137 |      320 |       101 |       321 |         94.36 |           -33.72 |            15.52 |
| pt4     |      263 |      303 |       233 |       302 |        113.23 |            -8.97 |            14.37 |
| pt5     |      307 |      211 |       280 |       211 |        125.81 |             2.26 |            -9.59 |
| pt6     |      367 |      212 |       339 |       212 |        121.32 |            18.25 |            -8.98 |
| pt7     |      305 |      298 |       278 |       298 |        125.81 |             1.71 |            14.58 |
| pt8     |      365 |      299 |       338 |       299 |        125.81 |            18.37 |            14.86 |
| pt9     |      466 |      225 |       415 |       225 |         66.61 |            24.58 |            -3.02 |
| pt10    |      581 |      225 |       530 |       226 |         66.61 |            41.49 |            -3.02 |
| pt11    |      464 |      387 |       413 |       388 |         66.61 |            24.29 |            20.81 |
| pt12    |      579 |      388 |       528 |       390 |         66.61 |            41.2  |            20.95 |

### Dimension Calculations and Accuracy 
We can also compute 3D distances between different points from their (x,y,z) values using the formula \\(distance = \sqrt{(x\_2 - x\_1)^2 + (y\_2 - y\_1)^2 + (z\_2 - z\_1)^2}\\). Computed distances between some of the points are tabulated below, along with their actual measured values. The percentage error \\(\left(\frac{|actual - calculated| * 100}{actual}\right)\\) is also computed and tabulated. Notice that the calculated and actual values match very well, with a percentage error of 1.2% or less. 

| dimension    |   calculated(cm)  |   actual(cm)  |       % error       |
|:------------:|:---------------:|:-------------:|:-------------------:|
| d1(1-2)     |           31.2 |         31.2 |               0    |
| d2(1-3)     |           21.1 |         21.3 |               0.94 |
| d3(5-6)     |           16.6 |         16.7 |               0.6  |
| d4(5-7)     |           24.2 |         24   |               0.83 |
| d5(9-10)    |           16.9 |         16.7 |               1.2  |
| d6(9-11)    |           23.8 |         24   |               0.83 |

Calculated Dimension Results  
![Calculated Dimension Results](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/3d_stereo_vision_images/calculated_dim_results.png?download=true)
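As a quick sanity check on equations 4, 5, and 6, the short script below plugs the annotated pixel coordinates of pt1 and pt2 into them (using the rectified right-camera intrinsics quoted above: focal length 452.9, principal point (298.85, 245.52), and baseline 7.5 cm) and then computes the distance d1(1-2); the outputs should match the tabulated values up to rounding.

```python
import numpy as np

# rectified (right-camera) intrinsics and baseline, as given above
fx = fy = 452.9
ox, oy = 298.85, 245.52
b = 7.5  # cm

def triangulate(u_left, v_left, u_right):
    """Equations 4-6: recover (x, y, z) in cm from a stereo correspondence."""
    disparity = u_left - u_right
    z = b * fx / disparity
    x = b * (u_left - ox) / disparity
    y = b * fx * (v_left - oy) / (fy * disparity)
    return np.array([x, y, z])

pt1 = triangulate(138, 219, 102)  # approximately (-33.5, -5.5, 94.4)
pt2 = triangulate(264, 216, 234)  # approximately (-8.7, -7.4, 113.2)

print(np.linalg.norm(pt1 - pt2))  # distance d1(1-2), approximately 31.2 cm
```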

## Conclusion
1. In summary, we learned how stereo vision works, the equations used to find the real-world coordinates (x, y, z) of a point P given its two images captured from different viewpoints, and compared theoretical values with experimental results. 
2. We assumed that the intrinsic parameters - focal length and principal point of the cameras - are known, along with their position and orientation information. This is also referred to as calibrated stereo vision.
3. Interestingly, it is also possible to find the 3D coordinates of a point, P, if the position and orientation of the cameras are unknown. In fact, the position and orientation of the cameras with respect to each other can be found using the images themselves. This is referred to as uncalibrated stereo vision!

## References 
1. 3D Reconstruction - Multiple Viewpoints [Coursera](https://www.coursera.org/learn/3d-reconstruction-multiple-viewpoints)
2. Stereo Vision and Depth Estimation using OpenCV AI Kit [LearnOpenCV](https://learnopencv.com/stereo-vision-and-depth-estimation-using-opencv-ai-kit/)
3. OAK-D Lite [Luxonics](https://docs.luxonis.com/projects/hardware/en/latest/pages/DM9095/)

### Neural Radiance Fields (NeRFs)
https://huggingface.co/learn/computer-vision-course/unit8/nerf.md

# Neural Radiance Fields (NeRFs)

Neural Radiance Fields are a way of storing a 3D scene within a neural network. This way of storing and representing a scene is often called an implicit representation, since the scene parameters are fully represented by the underlying Multi-Layer Perceptron (MLP). 
(As compared to an explicit representation that stores scene parameters like colour or density explicitly in voxel grids.) 
This novel way of representing a scene showed very impressive results in the task of [novel view synthesis](https://en.wikipedia.org/wiki/View_synthesis), the task of interpolating novel views from camera perspectives that are not in the training set. 
Furthermore, it allows us to store large scenes with a smaller memory footprint than explicit representation, since we merely need to store the weights of our neural network compared to voxel grids, which increase in memory size by a cubic term.

## Short History 📖
The field of NeRFs is relatively young with the first publication by [Mildenhall et al.](https://www.matthewtancik.com/nerf) appearing in 2020. 
Since then, a vast number of papers have been published and fast advancements have been made. 
Since 2020, more than 620 preprints and publications have been released, with more than 250 repositories on GitHub. *(as of Dec 2023, statistics from [paperswithcode.com](https://paperswithcode.com/method/nerf))*.

Since the first formulation of NeRFs requires long training times (up to days on beefy GPUs), there have been a lot of advancements towards faster training and inference. 
An important leap was NVIDIA's [Instant-ngp](https://nvlabs.github.io/instant-ngp/), which was released in 2022. 
While the model architecture used in this approach is similar to existing ones, the authors introduced a novel encoding method that uses trainable hash-tables. 
Thanks to this type of encoding, we can shrink the MLP down significantly without losing reconstruction quality.
This novel approach was faster to train and query while performing on par quality wise with then state-of-the-art methods. 
[Mipnerf-360](https://jonbarron.info/mipnerf360/), which was also released in 2022, is also worth mentioning. 
Again, the model architecture is the same as for most NeRFs, but the authors introduced a novel scene contraction that allows us to represent scenes that are unbounded in all directions, which is important for real-world applications. 
[Zip-NeRF](https://jonbarron.info/zipnerf/), released in 2023, combines recent advancements like the encoding from [Instant-ngp](https://nvlabs.github.io/instant-ngp/) and the scene contraction from [Mipnerf-360](https://jonbarron.info/mipnerf360/) to handle real-world situations whilst decreasing training times to under an hour. 
*(this is still measured on beefy GPUs to be fair)*.

Since the field of NeRFs is rapidly evolving, we added a section at the end where we will tease the latest research and the possible future direction of NeRFs.

But now enough with the history, let's dive into the inner workings of NeRFs! 🚀🚀

## Underlying approach (Vanilla NeRF) 📘🔍
The fundamental idea behind NeRFs is to represent a scene as a continuous function that maps a position, \\( \mathbf{x} \in \mathbb{R}^{3} \\), and a viewing direction,  \\( \boldsymbol{\theta} \in \mathbb{R}^{2} \\), to a colour \\( \mathbf{c} \in \mathbb{R}^{3} \\) and volume density \\( \sigma \in \mathbb{R}^{1}\\). 
As neural networks can serve as universal function approximators, we can approximate this continuous function that represents the scene with a simple Multi-Layer Perceptron (MLP) \\( F_{\mathrm{\Theta}} : (\mathbf{x}, \boldsymbol{\theta}) \to (\mathbf{c},\sigma) \\).

A simple NeRF pipeline can be summarized with the following picture:

  
  Image from: Mildenhall et al. (2020)

**(a)** Sample points and viewing directions along camera rays and pass them through the network.

**(b)** Network output is a colour vector and density value for each sample.

**(c)** Combine the outputs of the network via volumetric rendering to go from discrete samples in 3D space to a 2D image. 

**(d)** Compute the loss and update network gradients via backpropagation to represent scene.

This overview is very high level, so for a better understanding, let's go into the details of the volume rendering and the loss function used.

**Volume Rendering**

The principles behind the process of volumetric rendering are well established in classical computer graphics pipelines and do not stem from NeRFs. 
What is important for the use case of NeRFs is that this step is **differentiable** in order to allow for backpropagation. The simplest form of volumetric rendering in NeRFs can be formulated as follows:

$$\mathbf{C}(\mathbf{r}) = \int_{t_n}^{t_f}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),\mathbf{d})dt$$

In the equation above, \\( \mathbf{C}(\mathbf{r}) \\) is the expected colour of a camera ray \\( \mathbf{r}(t)=\mathbf{o}+t\mathbf{d} \\), where \\( \mathbf{o} \in \mathbb{R}^{3} \\) is the origin of the camera, \\( \boldsymbol{d} \in \mathbb{R}^{3} \\) is the viewing direction as a 3D unit vector and \\( t \in \mathbb{R}_+ \\) is the distance along the ray. 
\\( t_n \\) and \\( t_f \\) stand for the near and far bounds of the ray, respectively. 
\\( T(t) \\) denotes the accumulated transmittance along ray \\( \mathbf{r}(t) \\) from \\( t_n \\) to \\( t \\).

After discretization, the equation above can be computed as the following sum:

$$\boldsymbol{\hat{C}}(\mathbf{r})=\sum_{i=1}^{N}T_i (1-\exp(-\sigma_i \delta_i)) \mathbf{c}_i\,, \textrm{ where }T_i=\exp \bigg(-\sum_{j=1}^{i-1} \sigma_j \delta_j \bigg)$$

Below, you can see a schematic visualisation of a discretized camera ray in order to get a better sense of the variables from above:

![ray_image](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/nerf_ray_visualisation.png)
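To make the discretized sum above a bit more tangible, here is a tiny, self-contained sketch of volume rendering along a single ray; the sample densities, colours, and bin lengths are made up and the function name is my own.

```python
import torch

def render_ray(sigmas, colors, deltas):
    """Discretized volume rendering along one ray.

    sigmas: (N,) densities, colors: (N, 3) RGB samples, deltas: (N,) bin lengths.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)  # opacity contributed by each segment
    # T_i = exp(-sum_{j < i} sigma_j * delta_j): transmittance accumulated before sample i
    transmittance = torch.exp(-torch.cumsum(sigmas * deltas, dim=0))
    transmittance = torch.cat([torch.ones(1), transmittance[:-1]])
    weights = transmittance * alphas           # (N,)
    return (weights[:, None] * colors).sum(0)  # expected colour of the ray

# toy ray with 64 samples
sigmas, colors, deltas = torch.rand(64) * 2.0, torch.rand(64, 3), torch.full((64,), 0.05)
print(render_ray(sigmas, colors, deltas))  # a single RGB value
```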

**Loss formulation**

As the discretized volumetric rendering equation is fully differentiable, the weights of the underlying neural network can then be trained using a reconstruction loss on the rendered pixels. 
Many NeRF approaches use a pixel-wise error term that can be written as follows:

$$\mathcal{L}_{\rm recon}(\boldsymbol{\hat{C}},\boldsymbol{C^*}) = \left\|\boldsymbol{\hat{C}}-\boldsymbol{C^*}\right\|^2$$

where \\( \boldsymbol{\hat{C}} \\) is the rendered pixel colour and \\( \boldsymbol{C}^* \\) is the ground truth pixel colour.

**Additional remarks**

It is very hard to describe the whole NeRF pipeline in detail within a single chapter. 
The explanations above are important for understanding the basic concepts and are similar, if not identical, in every NeRF model. 
However, some additional tricks are needed to obtain a well performing model.

First of all, it is necessary to encode input signals in order to capture high-frequency variations in colour and geometry. 
The practice of encoding inputs before passing them through a neural network is not unique to the NeRF domain but also widely adopted in other ML domains like for example Natural Language Processing (NLP). 
A very simple encoding where we map the inputs to a higher dimensional space, enabling us to capture high frequency variations in scene parameters could look as follows:

```python
import torch
import mediapy as media
import numpy as np

def positional_encoding(in_tensor, num_frequencies, min_freq_exp, max_freq_exp):
    """Function for positional encoding."""
    # Scale input tensor to [0, 2 * pi]
    scaled_in_tensor = 2 * np.pi * in_tensor
    # Generate frequency spectrum
    freqs = 2 ** torch.linspace(
        min_freq_exp, max_freq_exp, num_frequencies, device=in_tensor.device
    )
    # Generate encodings
    scaled_inputs = scaled_in_tensor.unsqueeze(-1) * freqs
    encoded_inputs = torch.cat(
        [torch.sin(scaled_inputs), torch.cos(scaled_inputs)], dim=-1
    )
    return encoded_inputs.view(*in_tensor.shape[:-1], -1)

def visualize_grid(grid, encoded_images, resolution):
    """Helper Function to visualize grid."""
    # Split the grid into separate channels for x and y
    x_channel, y_channel = grid[..., 0], grid[..., 1]
    # Show the original grid
    print("Input Values:")
    media.show_images([x_channel, y_channel], cmap="plasma", border=True)
    # Show the encoded grid
    print("Encoded Values:")
    num_channels_to_visualize = min(
        8, encoded_images.shape[-1]
    )  # Visualize up to 8 channels
    encoded_images_to_show = encoded_images.view(resolution, resolution, -1).permute(
        2, 0, 1
    )[:num_channels_to_visualize]
    media.show_images(encoded_images_to_show, vmin=-1, vmax=1, cmap="plasma", border=True)

# Parameters for the positional encoding
num_frequencies = 4
min_freq_exp = 0
max_freq_exp = 6
resolution = 128

# Generate a 2D grid of points in the range [0, 1]
x_samples = torch.linspace(0, 1, resolution)
y_samples = torch.linspace(0, 1, resolution)
grid = torch.stack(
    torch.meshgrid(x_samples, y_samples), dim=-1
)  # [resolution, resolution, 2]

# Apply positional encoding
encoded_grid = positional_encoding(grid, num_frequencies, min_freq_exp, max_freq_exp)

# Visualize result
visualize_grid(grid, encoded_grid, resolution)
```

The output should look something like the image below:

![encoding](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/nerf_encodings.png)

The second trick worth mentioning is that most methods use smart approaches to sample points in space. 
Essentially, we want to avoid sampling in regions where the scene is empty. 
There are various approaches to concentrate samples in regions that contribute most to the final image, but the most prominent one is to use a second network, often called a *proposal network*, so that no compute is wasted. 
If you are interested in the inner workings and optimisation of such a *proposal network*, feel free to dig into the publication of [Mipnerf-360](https://jonbarron.info/mipnerf360/), where it was first proposed.

## Train your own NeRF
To get the full experience when training your first NeRF, I recommend taking a look at the awesome [Google Colab notebook from the nerfstudio team](https://colab.research.google.com/github/nerfstudio-project/nerfstudio/blob/main/colab/demo.ipynb). 
There, you can upload images of a scene of your choice and train a NeRF. You could for example fit a model to represent your living room. 🎉🎉

## Current advancements in the field
The field is rapidly evolving and the number of new publications is almost exploding. 
Concerning training and rendering speed, [VR-NeRF](https://vr-nerf.github.io) and [SMERF](https://smerf-3d.github.io) show very promising results. 
We believe that we will soon be able to stream a real-world scene in real-time on an edge device, and this is a huge leap towards a realistic *Metaverse*. 
However, the research in the field of NeRFs is not only focusing on training and inference speed, but encompasses various directions like, Generative NeRFs, Pose Estimation, Deformable NeRFs, Compositionality and many more. 
If you are interested in a curated list of NeRF publications, checkout [Awesome-NeRF](https://github.com/awesome-NeRF/awesome-NeRF).

### Camera models
https://huggingface.co/learn/computer-vision-course/unit8/terminologies/camera-models.md

# Camera models

## Pinhole Cameras

  

The simplest kind of camera - perhaps one that you have made yourself - consists of a lightproof box, with a small hole made in one side and a screen or a photographic film on the other. Light rays passing through the hole generate an inverted image on the rear wall of the box. This simple model for a camera is commonly used in 3D graphics applications.

### Camera axes conventions

![Blender camera axes conventions](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/axes_handedness.png)
There are a number of different conventions for the direction of the camera axes. Here we will follow the convention of Blender (see diagram), where the camera points along the negative Z-axis, the camera X-axis points to the left (looking from the camera) and the camera Y-axis points up.

### Pinhole camera coordinate transformation

![Pinhole transformation](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Pinhole_transform.png)
Each point in 3D space maps to a single point on the 2D plane. To find the map between 3D and 2D coordinates, we first need to know the intrinsics of the camera, which for a pinhole camera are:
 - the focal lengths, \\(f_x\\) and \\(f_y\\).
 - the coordinates of the principal point, \\(c_x\\) and \\(c_y\\), which is the optical centre of the image. This point is where the optical axis intersects the image plane.
 
Using these intrinsic parameters, we construct the camera matrix:

$$
K = \begin{pmatrix}
f_x & 0 & c_x  \\
0 & f_y & c_y  \\
0 & 0 & 1  \\
\end{pmatrix}
$$
 
To apply this to a point \\( p=[x,y,z]\\) in 3D space, we multiply the point by the camera matrix, \\( K @ p \\), to give a new 3x1 vector \\( [u,v,w]\\). This is a homogeneous vector in 2D, but where the last component isn't 1. To find the position of the point in the image plane, we divide the first two coordinates by the last one, giving the point \\([u/w, v/w]\\).

Whilst this is the textbook definition of the camera matrix, if we use the Blender camera convention it will flip the image left to right and up-down (as points in front of the camera will have negative z-values). One potential way to fix this is to change the signs of some of the elements of the camera matrix:

$$
K = \begin{pmatrix}
-f_x & 0 & c_x  \\
0 & -f_y & c_y  \\
0 & 0 & 1  \\
\end{pmatrix}
$$

### Camera Transformation Matrices

Usually, the camera isn't just at the origin, but we have to transform points from world coordinates to coordinates relative to the camera.  To do so, we first apply the world-to-camera matrix to the points, and then we apply the camera matrix.
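As a small illustration of these two steps, the sketch below moves a world-space point into camera coordinates with a 4x4 world-to-camera matrix and then projects it with the sign-flipped camera matrix from above; all numeric values (intrinsics and pose) are made up.

```python
import numpy as np

# made-up intrinsics for a 640x480 image, with the Blender-convention sign flip
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0
K = np.array([[-fx, 0.0, cx], [0.0, -fy, cy], [0.0, 0.0, 1.0]])

# made-up world-to-camera transform: camera at (0, 0, 4), looking along -Z
world_to_camera = np.eye(4)
world_to_camera[2, 3] = -4.0

p_world = np.array([0.5, 0.2, 0.0, 1.0])  # a world point in homogeneous coordinates

p_cam = (world_to_camera @ p_world)[:3]  # camera coordinates: (0.5, 0.2, -4.0)
u, v, w = K @ p_cam                      # homogeneous image coordinates
print(u / w, v / w)                      # pixel position in the image plane
```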

### More complex camera models

More complicated camera models are possible, modeling the distortion generated by a real lens. For a discussion of such models, see [Multiple View Geometry in Computer Vision](https://www.robots.ox.ac.uk/~vgg/hzbook/).

### Basics of Linear Algebra for 3D Data
https://huggingface.co/learn/computer-vision-course/unit8/terminologies/linear-algebra.md

# Basics of Linear Algebra for 3D Data

## Coordinate systems

Most three-dimensional data consists of objects such as points which have a defined position in space, often represented by their three Cartesian coordinates \\([X, Y, Z]\\).

![Axis handedness]( https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/axes_handedness.png)

However, various systems have different conventions for this coordinate system. The most important difference is handedness, which is the relative orientation of the X, Y and Z axes. The easiest way to remember the difference is to point your middle finger inwards, such that your thumb, index, and middle fingers are roughly at right angles to each other. On your left hand, your thumb (X), index finger (Y), and middle finger (Z) form a left-handed coordinate system. Similarly, the fingers of your right hand make a right-handed coordinate system.

In mathematics and physics, a right-handed system is usually used. However, in computer graphics, different libraries and environments have different conventions. Notably, Blender, Pytorch3d and OpenGL (mostly) use right-handed coordinates, whilst DirectX uses left-handed coordinates. Here we will use the right-handed convention, following Blender and NerfStudio.

## Transformations

It is useful to be able to rotate, scale, and translate these coordinates in space, for example when an object is moving, or when we want to change from world coordinates (relative to some fixed set of axes) to coordinates relative to our camera.

These transformations can be represented by matrices. Here we'll use `@` to denote matrix multiplication. To allow us to represent translation, rotation and scaling in a consistent manner, we take the three dimensional coordinates \\([x,y,z]\\), and add an extra coordinate \\(w=1\\). These are known as homogeneous coordinates - more generally, \\(w\\) can take any value, and all points on the four-dimensional line \\([wx, wy, wz, w]\\) correspond to the same point \\([x,y,z]\\) in three-dimensional space. However, here, \\(w\\) will always be 1.

Libraries such as [Pytorch3d](https://pytorch3d.org/) provide a range of functions for generating and manipulating transformations.

Yet another convention to note - OpenGL treats positions as column vectors `x` (of shape 4x1), and applies a transformation `M` by pre-multiplying the vector by the matrix (`M @ x`), whereas DirectX and Pytorch3d consider positions as row vectors of shape (1x4), and apply a transformation by post-multiplying the vector by the matrix ( `x @ M` ). To convert between the two conventions we need to take the transpose of the matrix `M.T`. We will show how a cube transforms under different transformation matrices in a few code snippets. For these code snippets, we will use the OpenGL convention.
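As a two-line check of this convention point (the matrix and point below are arbitrary), applying `M` to a column vector and applying `M.T` to the corresponding row vector give the same numbers:

```python
import numpy as np

M = np.array(
    [[1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float
)  # an arbitrary transform (here, a translation)
x_col = np.array([[1.0], [2.0], [3.0], [1.0]])  # column vector, shape (4, 1)
x_row = x_col.T                                 # row vector, shape (1, 4)

print(M @ x_col)        # column-vector (OpenGL-style) convention
print((x_row @ M.T).T)  # row-vector (DirectX / Pytorch3d-style) convention, same result
```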

### Translations

Translations, moving all the points in space by the same distance and direction, can be represented as

$$T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

where \\(t = [t_x,t_y,t_z]\\) is the direction vector to translate all the points.

To try out a translation ourselves, let us first write a little helper function to visualise a cube:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_cube(ax, cube, label, color="black"):
    ax.scatter3D(cube[0, :], cube[1, :], cube[2, :], label=label, color=color)
    lines = [
        [0, 1],
        [1, 2],
        [2, 3],
        [3, 0],
        [4, 5],
        [5, 6],
        [6, 7],
        [7, 4],
        [0, 4],
        [1, 5],
        [2, 6],
        [3, 7],
    ]
    for line in lines:
        ax.plot3D(cube[0, line], cube[1, line], cube[2, line], color=color)
    ax.set_xlabel("X")
    ax.set_ylabel("Y")
    ax.set_zlabel("Z")
    ax.legend()
    ax.set_xlim([-2, 2])
    ax.set_ylim([-2, 2])
    ax.set_zlim([-2, 2])
```

Now, we can create a cube and pre-multiply it with a translation matrix:

```python
# define 8 corners of our cube with coordinates (x,y,z,w) and w is always 1 in our case
cube = np.array(
    [
        [-1, -1, -1, 1],
        [1, -1, -1, 1],
        [1, 1, -1, 1],
        [-1, 1, -1, 1],
        [-1, -1, 1, 1],
        [1, -1, 1, 1],
        [1, 1, 1, 1],
        [-1, 1, 1, 1],
    ]
)

# translate to follow OpenGL notation
cube = cube.T

# set up figure
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# plot original cube
plot_cube(ax, cube, label="Original", color="blue")

# translation matrix (shift 1 in positive x and 1 in positive y-axis)
translation_matrix = np.array([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]])

# translation
translated_cube = translation_matrix @ cube
plot_cube(ax, translated_cube, label="Translated", color="red")
```

The output should look something like this:

  

### Scaling

Scaling is the process of uniformly increasing or decreasing the size of an object. A scaling transformation is represented by a matrix that multiplies each coordinate by a scale factor. The scaling matrix is given by:

$$S = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

Let us try the following example of scaling our cube by a factor of 2 along the X-axis and 0.5 along the Y-axis.

```python
# set up figure
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# plot original cube
plot_cube(ax, cube, label="Original", color="blue")

# scaling matrix (scale by 2 along x-axis and by 0.5 along y-axis)
scaling_matrix = np.array([[2, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

scaled_cube = scaling_matrix @ cube

plot_cube(ax, scaled_cube, label="Scaled", color="green")
```

The output should look something like this:

  

### Rotations

Rotations around an axis are another commonly used transformation. There are a number of different ways of representing rotations, including Euler angles and quaternions, which can be very useful in some applications. Again, libraries such as Pytorch3d include a wide range of functionalities for performing rotations. However, as a simple example, we will just show how to construct rotations about each of the three axes.

- Rotation around the X-axis:

$$ R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \alpha & -\sin \alpha & 0 \\ 0 & \sin \alpha & \cos \alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

A little example for a positive 20 degree rotation around the X-axis is given below:

```python
# set up figure
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# plot original cube
plot_cube(ax, cube, label="Original", color="blue")

# rotation matrix: +20 deg around x-axis
angle = 20 * np.pi / 180
rotation_matrix = np.array(
    [
        [1, 0, 0, 0],
        [0, np.cos(angle), -np.sin(angle), 0],
        [0, np.sin(angle), np.cos(angle), 0],
        [0, 0, 0, 1],
    ]
)

rotated_cube = rotation_matrix @ cube

plot_cube(ax, rotated_cube, label="Rotated", color="orange")
```

The output should look something like this:

  

- Rotation around the Y-axis:

$$ R_y(\beta) = \begin{pmatrix} \cos \beta & 0 & \sin \beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \beta & 0 & \cos \beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

We are sure you can use the example snippet above and figure out how to implement a rotation around the Y-axis.😎😎

- Rotation around the Z-axis

$$ R_z(\beta) = \begin{pmatrix} \cos \beta & -\sin \beta & 0 & 0 \\ \sin \beta & \cos \beta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$

Again, can you use the last code snippet and implement a rotation around the Z-axis❓

Note that the standard convention is that a positive rotation angle corresponds to an anti-clockwise rotation when the axis of rotation is pointing toward the viewer. Also note that in most libraries the trigonometric functions require the angle to be in radians. To convert from degrees to radians, multiply by \\( \pi/180 \\).
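
As a quick sanity check of this convention, we can reuse the X-axis rotation matrix from above: a positive 90-degree rotation should carry the point \\( (0, 1, 0) \\) to \\( (0, 0, 1) \\), i.e. anti-clockwise when viewed from the positive X-axis (a minimal sketch, assuming NumPy is imported as `np` as in the earlier snippets):

```python
# sanity check: +90 degrees about the x-axis maps (0, 1, 0) to (0, 0, 1)
angle = np.deg2rad(90)  # equivalent to 90 * np.pi / 180
R_x = np.array(
    [
        [1, 0, 0, 0],
        [0, np.cos(angle), -np.sin(angle), 0],
        [0, np.sin(angle), np.cos(angle), 0],
        [0, 0, 0, 1],
    ]
)
print(np.round(R_x @ np.array([0, 1, 0, 1])))  # [0. 0. 1. 1.]
```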

### Combining transformations

Multiple transformations can be combined by multiplying together their matrices. Note that the order in which the matrices are multiplied matters: the transformations are applied from right to left. To make a matrix that applies the transforms P, then Q, then R, in that order, the composite transformation is given by \\( X = R Q P \\) (in code, `R @ Q @ P`).

If we want to apply first the translation, then the scaling, and then the rotation from the examples above in a single operation, it looks as follows:

```python
# set up figure
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# plot original cube
plot_cube(ax, cube, label="Original", color="blue")

# combination of transforms (applied right to left: translation, then scaling, then rotation)
combination_transform = rotation_matrix @ scaling_matrix @ translation_matrix
final_result = combination_transform @ cube
plot_cube(ax, final_result, label="Combined", color="violet")
```

The output should look something like the following.

### Representations for 3D Data
https://huggingface.co/learn/computer-vision-course/unit8/terminologies/representations.md

# Representations for 3D Data

Depending on the application, one of a number of different representations for 3D data might be used. 
Here we'll outline some of the more common ones.

## Point Clouds

Point clouds represent data by lists of points in 3D space, with their coordinates and perhaps with other associated features. These can be distributed just on the surface of an object, or spread throughout its interior. Point clouds are commonly generated through 3D scanning techniques, such as LiDAR. They lack information about connectivity, which can make it difficult to determine the surface of the object and its topology.
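
As a minimal sketch (using NumPy, with purely illustrative values), a point cloud is often just an \\( N \times 3 \\) array of coordinates, optionally paired with per-point attributes such as color:

```python
import numpy as np

# a point cloud as an (N, 3) array of xyz coordinates,
# optionally paired with per-point attributes such as RGB color
N = 1000
directions = np.random.randn(N, 3)
points = directions / np.linalg.norm(directions, axis=1, keepdims=True)  # points on a unit sphere
colors = np.random.rand(N, 3)  # made-up per-point colors
print(points.shape, colors.shape)  # (1000, 3) (1000, 3)
```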

## Meshes

Meshes are commonly used in computer graphics, representing the surfaces of objects as collections of (usually) triangular faces, connecting vertices in three-dimensional space. Additional information such as normals, colors, or texture coordinates can be associated with either the vertices or the faces. Especially when a texture is used, these provide a very efficient method for storing solid objects, and are commonly used in games and in other computer graphics applications.

The Python `trimesh` package contains many useful functions for working with mesh data, in particular for loading and saving common data formats.
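
A minimal sketch of loading, inspecting, and re-exporting a mesh with `trimesh` might look like the following (the file names are placeholders, and for multi-part files `trimesh.load` may return a `Scene` rather than a single mesh):

```python
import trimesh

# load a mesh (the file name is a placeholder), inspect it, and re-export it
mesh = trimesh.load("bunny.obj")                 # hypothetical input file
print(mesh.vertices.shape, mesh.faces.shape)     # (V, 3) vertices, (F, 3) triangle indices
mesh.export("bunny.ply")                         # save in another common format
```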

## Volumetric Data

Volumetric data is commonly used to encode information about transparent objects, such as clouds and fire. Fundamentally, it takes the form of a function \\( f(x,y,z) \\) mapping positions in space to a density, color, and possibly other attributes. One simple method of representing such data is as a volumetric grid, where the data at each point is found by trilinear interpolation from the eight corners of the voxel containing it.
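
To make the trilinear interpolation concrete, here is a minimal NumPy sketch that samples a dense density grid at a continuous position (assuming the query point lies strictly inside the grid; the grid values here are made up):

```python
import numpy as np


def trilinear_sample(grid, x, y, z):
    """Sample a dense voxel grid at a continuous (x, y, z) position by blending
    the values stored at the eight corners of the enclosing voxel."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0  # fractional offsets inside the voxel
    value = 0.0
    for i, wx in ((x0, 1 - dx), (x0 + 1, dx)):
        for j, wy in ((y0, 1 - dy), (y0 + 1, dy)):
            for k, wz in ((z0, 1 - dz), (z0 + 1, dz)):
                value += wx * wy * wz * grid[i, j, k]
    return value


density_grid = np.random.rand(8, 8, 8)  # an illustrative 8x8x8 density grid
print(trilinear_sample(density_grid, 2.3, 4.7, 1.5))
```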

As will be seen later in the NeRF chapter, volumetric representations can also be effectively used to represent solid objects. More sophisticated representations can also be used, such as a small MLP, or complex hash-grids such as in InstantNGP.

## Implicit Surfaces

Sometimes the flexibility of a volumetric representation is desirable, but it is the surface of the object itself that is of interest. Implicit surfaces are like volumetric data, but the function \\( f(x,y,z) \\) maps each point in space to a single number, and the surface is the zero level set of this function. For computational efficiency, it can be useful to require that this function is actually a signed distance function (SDF), where \\( f(x,y,z) \\) indicates the shortest distance to the surface, with positive values outside the object and negative values inside (this sign is a convention and may vary). Maintaining this constraint is more difficult, but it allows intersections between straight lines and the surface to be calculated more quickly, using an algorithm known as sphere tracing.
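
As a small illustration, here is a minimal NumPy sketch of a sphere SDF and a basic sphere-tracing loop (simplified; real renderers add step limits on distance, bounds checks, and surface normals):

```python
import numpy as np


def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    # signed distance to a sphere: negative inside, positive outside
    return np.linalg.norm(p - center) - radius


def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """March along a ray, stepping by the distance reported by the SDF,
    until we are within eps of the surface (hit) or run out of steps (miss)."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return p  # hit point on the surface
        t += d  # the SDF guarantees the surface is at least this far away
    return None  # miss


hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), sdf_sphere)
print(hit)  # ~ [0, 0, -1], the near side of the unit sphere
```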

### Applications of 3D Vision
https://huggingface.co/learn/computer-vision-course/unit8/introduction/applications.md

# Applications of 3D Vision

3D computer vision enables machines to see and understand their environment in three dimensions, unlocking various applications across numerous industries. In this section, you can find some of the exciting applications of 3D computer vision.

## Robotics and Automation 
+ **Object Manipulation:** 3D vision systems allow robots to precisely identify and grasp objects of varying shapes and sizes. This enables them to perform tasks like picking and placing, assembly and packaging.
+ **Quality Control:** 3D vision systems can be used to inspect manufactured parts and products for defects, ensuring quality and consistency.

## Autonomous Navigation 
+ **Self Driving Cars:** 3D vision cameras and algorithms help with depth perception - information about distances to surrounding objects. This enables self-driving cars to better perceive their environment and navigate safely on roads.  
+ **Autonomous Drones and Robots:** Autonomous drones and robots use 3D computer vision for localizing their position with respect to surroundings, mapping their environment and navigating safely by avoiding obstacles.

## Healthcare
+ **Medical Imaging and Diagnosis:** 3D computer vision is used in medical imaging technologies like CT scans, MRIs, etc. These images help visualize 3D structure of internal body organs, aiding in diagnosis, treatment planning and surgical procedures.
+ **Surgical Robotics:** 3D vision systems assist surgeons in performing complex procedures with greater precision and control. 

## AR and VR
+ **Augmented Reality (AR):** AR applications like virtually trying on clothing or visualizing furniture in your living space before making a purchase are all enabled by 3D computer vision.
+ **Virtual Reality (VR):** VR experiences which immerse users in a three-dimensional, interactive and often realistic setting are also enabled by 3D computer vision. 

## Entertainment and Gaming
+ **Animation and Motion Capture:** 3D vision systems can track and record human movements, enabling the creation of realistic character animations in movies.
+ **Gaming:** 3D vision allows game developers to create detailed and realistic 3D environments, landscapes, structures, objects with high quality lighting and shadow effects.

### Introduction
https://huggingface.co/learn/computer-vision-course/unit8/introduction/introduction.md

# Introduction

Apart from videos, which we discussed in the last chapter, another common form of visual data is 3-dimensional data.
While for 2D images we usually have the two dimensions, commonly labelled as _x_ and _y_, for 3D images we have three dimensions, referred to as _x_, _y_ and _z_.

"But wait," I hear you say, "videos also have three dimensions!" That is completely correct - videos have the two spatial dimensions, _x_ and _y_, and the temporal dimension, _t_. The difference with 3D data is that here all three dimensions are of a spatial nature. This helps us to create a better model of our world and our perceptive capabilities. That is why one very common field for 3D applications nowadays is Mixed Reality applications, in which we try to merge the digital and analog worlds.

## Unit Overview

You will learn more about applications of 3D Computer Vision in the first chapter after this introduction. Right after that, we will take a look at the historical developments of 3D applications - all the way from the 19th century to today.

After these general topics, we'll dive right into the terminologies and concepts with three chapters about camera models, linear algebra and different representations.

We follow up the theory with some practical fields of use for 3D computer vision: starting off with Novel View Synthesis, followed by Stereo Vision, and finishing this unit (for now) with one of the most popular applications right now - Neural Radiance Fields (NeRFs).

Ready? Then get out your 3D goggles and let's learn! 🌟

### A Brief History of 3D Vision
https://huggingface.co/learn/computer-vision-course/unit8/introduction/brief_history.md

# A Brief History of 3D Vision

## 1838: Stereoscopy

- **Inventor**: Sir Charles Wheatstone.
- **Technique**: Presenting offset images to each eye through a stereoscope, creating depth perception.

## 1853: Anaglyph 3D

- **Pioneer**: Louis Ducos du Hauron
- **Method**: Using glasses with colored filters to separate images in complementary colors, creating a depth illusion.

## 1936: Polarized 3D

- **Developer**: Edwin H. Land.
- **Approach**: Utilizing polarized light technology in 3D movies, with glasses that filter light in specific directions.

## 1960s: Virtual Reality

- **Nature**: Experimental and not widely accessible.
- **Features**: Stereoscopic displays, head-tracking technology for immersive environments.

## 1979: Autostereograms (Magic Eye Images)

- **Creator**: Christopher Tyler
- **Concept**: 2D patterns that allow viewers to see 3D images without special glasses.

## 1986: IMAX 3D

- **Innovation**: Incorporating 3D technology in IMAX films using polarized glasses and dual projection systems.

## 2000s: Digital 3D Cinema

- **Evolution**: Digital projection systems, circular polarization, or active shutter glasses enhancing the 3D film experience.

## 2010s: 3D Television

- **Trend**: Introduction of 3D TVs with various technologies, like active shutter glasses and glasses-free autostereoscopic displays.
- **Decline**: Waning popularity due to glasses requirement, limited 3D content, and additional cost.

## 2010s: Virtual Reality Resurgence

- **Highlights**: Introduction of affordable, high-quality VR headsets like Oculus Rift and HTC Vive.

## Ongoing Evolution

### Augmented Reality (AR) and Mixed Reality (MR)

- **Development**: Blending digital content with the real world.

### Further VR Developments

- **Focus**: Enhancing quality, accessibility, and applicability in various sectors.

### Novel View Synthesis
https://huggingface.co/learn/computer-vision-course/unit8/3d-vision/nvs.md

# Novel View Synthesis

We've seen in the NeRF chapter how, given a large set of images, we can generate a three-dimensional representation of an object.
But sometimes we have only a handful of images or even just one.
Novel View Synthesis (NVS) is a collection of methods to generate views from new camera angles that are plausibly consistent with a set of images.
Once we have a large, consistent set of images we can use NeRF or a similar algorithm to construct a 3D representation.

Many methods have recently been developed for this task.
However, they can be divided into two general classes - those that generate an intermediate three-dimensional representation, which is rendered from a new viewing direction, and those that directly produce a new image.

One key difficulty is that this task is almost always underdetermined.
For example, for an image of the back of a sign, there are many possible different things that could be on the front.
Similarly, there could be parts of the object that are occluded, with one part of an object in front of another.
If a model is trained to directly predict (regress) the unseen parts, with a loss penalizing errors in reconstructing held-out views, then whenever it is unclear what should be there, the model will by necessity predict a blurry, grey-colored region, as noted in [NerfDiff](https://jiataogu.me/nerfdiff/).
This has spurred interest in the use of generative, diffusion-based models, which are able to sample from multiple plausible possibilities for the unseen regions.

Here we will briefly discuss two approaches, which are representative of the two classes.
[PixelNeRF](https://alexyu.net/pixelnerf) directly predicts a NeRF for the scene from an input image.
[Zero123](https://zero123.cs.columbia.edu/) adapts the Stable Diffusion latent diffusion model to directly generate new views without an intermediate 3D representation.

## PixelNeRF

PixelNeRF is a method that directly generates the parameters of a NeRF from one or more images.
In other words, it conditions the NeRF on the input images.
Unlike the original NeRF, which trains an MLP that maps spatial points to a density and color, PixelNeRF uses spatial features generated from the input images.

  
  Image from: PixelNeRF

The method first passes the input images through a convolutional neural network (ResNet34), bilinearly upsampling features from multiple layers to the same resolution as the input images.  
As in a standard NeRF, the new view is generated by volume rendering. 
However, the NeRF itself has a slightly unusual structure. 
At each query point \\( x \\) in the rendered volume, the corresponding point in the input image(s) is found (by projecting it using the input image camera transformation \\( \pi \\) ). 
The input image features at this point, \\( W(\pi(x)) \\), are then found by bilinear interpolation.
Like in the original NeRF, the query point \\( x \\) is positionally encoded and concatenated with the viewing direction \\( d \\).
The NeRF network consists of a set of ResNet blocks; the input image features \\( W(\pi(x)) \\) pass through a linear layer, and are added to the features at the start of each of the first three residual blocks.
There are then two more residual blocks to further process these features, before an output layer reduces the number of channels to four (RGB+density).
When multiple input views are supplied, these are processed independently for the first three residual blocks, and then the features are averaged before the last two blocks.
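
To make the conditioning step concrete, here is a rough, hypothetical PyTorch sketch of projecting query points into the input view and bilinearly sampling image features there. The camera conventions (intrinsics `K`, world-to-camera rotation `R` and translation `t`) and tensor shapes are assumptions for illustration, not the exact PixelNeRF code:

```python
import torch
import torch.nn.functional as F


def sample_image_features(feature_map, x_world, K, R, t):
    """Project query points into the input view and bilinearly sample features there.

    feature_map: (1, C, H, W) encoder features of the input image
    x_world:     (N, 3) query points in world coordinates
    K:           (3, 3) camera intrinsics (assumed convention)
    R, t:        (3, 3) world-to-camera rotation and (3,) translation
    """
    x_cam = x_world @ R.T + t                 # points in camera coordinates
    x_pix = x_cam @ K.T                       # apply intrinsics
    x_pix = x_pix[:, :2] / x_pix[:, 2:3]      # perspective divide -> pixel coords
    H, W = feature_map.shape[-2:]
    # normalize pixel coordinates to [-1, 1] for grid_sample (x first, then y)
    grid = torch.stack(
        [2 * x_pix[:, 0] / (W - 1) - 1, 2 * x_pix[:, 1] / (H - 1) - 1], dim=-1
    ).view(1, 1, -1, 2)
    feats = F.grid_sample(feature_map, grid, align_corners=True)  # (1, C, 1, N)
    return feats[0, :, 0].T                   # (N, C) features W(pi(x))
```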

The original PixelNeRF model was trained on a relatively small set of renderings from the [ShapeNet](https://huggingface.co/datasets/ShapeNet/ShapeNetCore) dataset.
The model is trained with either one or two input images, and attempts to predict a single novel view from a new camera angle.
The loss is the mean-squared error between the rendered and expected novel views. 
A model was trained separately on each class of object (e.g. planes, benches, cars).

### Results (from the PixelNeRF website)

  

  
  Image from: PixelNeRF

The PixelNeRF code can be found on [GitHub](https://github.com/sxyu/pixel-nerf).

### Related methods

In the [ObjaverseXL](https://arxiv.org/pdf/2307.05663.pdf) paper, PixelNeRF was trained on a *much* larger dataset [allenai/objaverse-xl](https://huggingface.co/datasets/allenai/objaverse-xl).

See also - [Generative Query Networks](https://deepmind.google/discover/blog/neural-scene-representation-and-rendering/),
[Scene Representation Networks](https://www.vincentsitzmann.com/srns/),
[LRM](https://arxiv.org/pdf/2311.04400.pdf).

## Zero123 (or Zero-1-to-3)

Zero123 takes a different approach, being a diffusion model.
Rather than trying to generate a three-dimensional representation, it instead directly predicts the image from the new views. 
The model takes a single input image, and the relative viewpoint transformation between the input and novel view direction.
It attempts to generate a plausible, 3D-consistent image from the novel view direction.

Zero123 is built upon the [Stable Diffusion](https://arxiv.org/abs/2112.10752) architecture, and it was trained by fine-tuning existing weights.  
However, it adds a few new twists. 
The model actually starts with the weights from [Stable Diffusion Image Variations](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations), which uses the CLIP image embeddings (the final hidden state) of the input image to condition the diffusion U-Net, instead of a text prompt. 
However, here these CLIP image embeddings are concatenated with the relative viewpoint transformation between the input and novel views.
(This viewpoint change is represented in terms of spherical polar coordinates).

  
  Image from: https://zero123.cs.columbia.edu

The rest of the architecture is the same as Stable Diffusion.
However, the latent representation of the input image is concatenated channel-wise with the noisy latents before being input into the denoising U-Net.
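
Putting the two conditioning pathways together, a rough, hypothetical sketch might look like the following; the tensor sizes and the exact viewpoint parameterization are illustrative only and differ from the released code:

```python
import torch

# 1) cross-attention conditioning: the CLIP image embedding of the input view,
#    concatenated with the relative viewpoint change (polar, azimuth, radius)
clip_embed = torch.randn(1, 768)              # CLIP image embedding (assumed size)
d_polar, d_azimuth, d_radius = 0.3, 1.2, 0.0  # relative camera change (made-up values)
view = torch.tensor([[d_polar, d_azimuth, d_radius]])
cond = torch.cat([clip_embed, view], dim=-1)  # fed to the U-Net via cross-attention

# 2) channel-wise conditioning: the latent of the input image is stacked with
#    the noisy latent of the target view before entering the denoising U-Net
input_latent = torch.randn(1, 4, 64, 64)      # latent of the input view
noisy_latent = torch.randn(1, 4, 64, 64)      # noisy latent being denoised
unet_input = torch.cat([input_latent, noisy_latent], dim=1)  # (1, 8, 64, 64)
print(cond.shape, unet_input.shape)
```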

To explore this model further, see the [Live Demo](https://huggingface.co/spaces/cvlab/zero123-live).

### Related methods

[3DiM](https://3d-diffusion.github.io/) - X-UNet architecture, with cross-attention between input and noisy frames.

[Zero123-XL](https://arxiv.org/pdf/2311.13617.pdf) - Trained on the larger objaverseXL dataset. See also [Stable Zero 123](https://huggingface.co/stabilityai/stable-zero123).

[Zero123++](https://arxiv.org/abs/2310.15110) - Generates 6 new fixed views, at fixed relative positions to the input view, with reference attention between input and generated images.

### Supplementary reading and resources 🤗🌎
https://huggingface.co/learn/computer-vision-course/unit12/supplementary-material.md

# Supplementary reading and resources 🤗🌎

We hope that you had an exciting learning journey throughout the unit on Ethics and Bias in CV models. To explore more about the field in general, you can go through these learning resources:

- [**Ethics and Society Newsletter**](https://huggingface.co/blog?tag=ethics) by Hugging Face. This newsletter discusses the efforts of Hugging Face in the domain of Ethical AI. Hugging Face also has a separate space dedicated to collections, spaces, datasets and models involving ethical AI, [here](https://huggingface.co/society-ethics).
- [**Data Ethics**](https://ethics.fast.ai/) course by fast.ai. 
- [**Intro to AI Ethics**](https://www.kaggle.com/learn/intro-to-ai-ethics) course on Kaggle Learn. This is a short course with exercises for beginners in the field. 
- [**Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy**](https://dl.acm.org/doi/abs/10.1145/3351095.3375709)
This paper discusses the ImageNet dataset and how bias in the dataset was reduced by filtering problematic synsets in the people subtree and balancing the remaining ones according to age, gender and skin color.
- [**The AI Ethics Brief**](https://brief.montrealethics.ai/) newsletter by Montreal AI Ethics Institute, with a section on [Computer Vision](https://montrealethics.ai/category/columns/the-ethics-of-computer-vision/). The newsletter is a must read for learners and practitioners in the field. 
- [**Ethics of AI**](https://ethics-of-ai.mooc.fi/) MOOC by University of Helsinki.
- [**CS 281**](https://stanfordaiethics.github.io/) Ethics of AI Course by Stanford University.
- [**Ethics of AI Bias**](https://ocw.mit.edu/courses/res-10-002-ethics-of-ai-bias-spring-2023/) course by MIT OCW.

### Ethics and Bias in AI 🧑‍🤝‍🧑
https://huggingface.co/learn/computer-vision-course/unit12/ethics-bias-ai.md

# Ethics and Bias in AI 🧑‍🤝‍🧑

We hope that you found the ImageNet Roulette case study interesting and learned what can go wrong with AI models in general. In this chapter, we will go through yet another example of a powerful technology that has cool applications but can also raise ethical concerns if kept unchecked. Let's first quickly summarize the ImageNet Roulette case study and its after-effects.
- ImageNet Roulette is a great example of an AI model gone wrong, due to inherent biases and oversights in the labelling and data pre-processing stages.
- The experiment was hand-crafted just to demonstrate how things can go wrong if kept unchecked.
- This project led to a great deal of corrections done by the ImageNet team in the dataset, as well as implementation of appropriate measures to mitigate problems like face obfuscation, removal of harmful and triggering synsets, removal of corresponding images, etc.
- Finally, it opened up an ongoing discussion and fuelled the research work on mitigating the risks. 

Before we take a look at another example of a powerful technology, let's step back and reflect on some questions. In general, is technology good or bad? Is electricity good or bad? Is the internet generally safe or harmful? Etc. Keep these questions in mind as we begin our journey.

## Deepfakes 🎥

Imagine you are a recent graduate who wants to learn about Deep Learning. You enroll in a course called "Introduction to Deep Learning (MIT 6.S191)" by MIT. To make things more interesting, the course team released a really cool video on things that can be done using deep learning. Check the video here:

An introduction to the course MIT 6.S191, where deepfakes were used to give an impression of a welcome by Barack Obama.

Yup, the introduction session was crafted to give the impression that the students were being welcomed by none other than Barack Obama himself. A really cool application: a deep learning course with an introduction curated to showcase one of the use cases of deep generative models. For first-timers, this would be really engaging, sparking interest in the technology and making everyone want to try it out. After all, you can make such videos and images within a few minutes using a decent GPU and start posting memes and posts built around them.
Let's see another example of this technology, but with different after-effects. Imagine if we could create the same kind of deepfake of an influential political leader or actor during elections or wars. The same fake video can be used to spread hatred and misinformation, leading to the marginalization of different groups of people. Even though the person did not spread misinformation, the video itself can cause massive outrage. This can be horrifying. The main problem is that once the misinformation is spread, the harm is already done and people are divided, even if it later becomes clear that the video was manipulated. So, the harm can only be avoided if the manipulated video is not made public in the first place. This makes this technology dangerous, but is the technology itself safe or harmful? Technology itself is never good or bad, but its usage (who uses it and for what purpose) can have good or bad effects.

Deepfakes are synthetic media that are created using the help of deep generative CV models. You can actually manipulate images with a different person's image and also generate videos through it. Audio deepfake is another technology that can complement the CV counterpart by mimicking the exact voice of the subject under consideration. This was just one of the examples of how deepfakes can cause havoc, but in reality, the implications are far more dangerous as they can have a lifelong impact on the lives of the victims.

## What is Ethics and Bias in AI?

From the previous example, a few aspects of this technology to keep in mind would be:
- Consent of the subject before using the images/videos to manipulate and form new media.
- Algorithms that facilitate the creation of synthetic media which can be used for manipulation.
- Algorithms that can be used to detect such synthetic media.
- Awareness about these algorithms and their after-effects.

💡Check out The Consentful Tech Project [here](https://www.consentfultech.io/). This project raises awareness, develops strategies, and shares skills to help people build and use technology consentfully.

Let us now formalize some definitions based on these examples. So what is Ethics and Bias? Ethics can be defined simply as a set of moral principles that help us distinguish between wrong and right. Now, AI Ethics can be defined as the set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI. AI Ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. This field involves a variety of stakeholders:
- **Researchers, Engineers, Developers, and AI Ethicists:** people in charge of models, algorithms, datasets development, and curation.
- **Government bodies, legal authorities (like lawyers):** bodies and people in charge of regulatory aspects of Ethical AI development.
- **Companies and organizations:** stakeholders that are at the forefront of delivering AI products and services.
- **Citizens:** people who use AI services and products in their daily lives and are largely affected by the technology.

Bias in AI refers to the biases in the output of the algorithms, which might happen due to assumptions during model development or training data. These assumptions stem from the inherent biases that are inside humans who were responsible for the development. As a result, AI models and algorithms start reflecting on these biases. These biases can disrupt ethical development or principles and, therefore, need attention and ways to mitigate them. We will cover more about biases, how they creep into different AI models, their types, evaluation, and mitigation (with a focus on CV models) in detail in the upcoming chapters of the unit. To understand more about Ethics in AI, let us look closely into the principles for Ethical AI.

## Ethical AI Principles 🤗 🌎

### Asimov's Three Laws of Robotics 🤖 

There have been many historic works that reflect on development of ethics for technology. The earliest work can be traced back to the famous science fiction writer Isaac Asimov. He came up with the three laws of robotics keeping in mind the potential risks of autonomous AI agents. The laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.

### Asilomar AI Principles 🧑🏻‍⚖️🧑🏻‍🎓🧑🏻‍💻

Asimov's laws of robotics were one of the earliest works in ethics for technology. In 2017, a conference was organized at Asilomar Conference Grounds, California. This conference was held to discuss the impacts of AI on society. The outcome of this conference was the development of guidelines for the responsible development of AI. The guideline has 23 principles, which were signed by around 5,000 individuals, including 844 AI and robotics researchers.

![Asilomar AI Principles](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/asilomar-ai.png)
23 Asilomar AI Principles for Responsible AI Development

💡You can check the full list of the 23 Asilomar AI Principles and the signatories [here](https://futureoflife.org/open-letter/ai-principles/).

These principles are a guide for ethical development and implementation of AI models in general. Let's now look into a recent work on ethical AI guidelines by UNESCO.

### UNESCO's report: Recommendations on the Ethics of Artificial Intelligence 🧑🏼‍🤝‍🧑🏼🌐

UNESCO came up with a global standard on AI ethics in the form of a report named **"Recommendation on the Ethics of Artificial Intelligence"**, which was adopted by 193 member countries in November 2021. Previous guidelines on Ethical AI were lacking in terms of actionable policy. In contrast, the UNESCO report allows policymakers to translate the core principles into action concerning different domains like data governance, environment, gender, health, etc. The four core values of the recommendation which lay the foundation for AI systems are:
- **Human rights and human dignity:** Respect, protection, and promotion of human rights, fundamental freedoms, and human dignity.
- **Living in peaceful, just, and interconnected societies.**
- **Ensuring diversity and inclusiveness.**
- **Environment and ecosystem flourishing.**

![AI Policy Areas](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/ai_policy.png)
11 key policy areas for responsible developments in AI.

The ten core principles that lay out a human-rights centered approach to the Ethics of AI by UNESCO are given below:
- **Proportionality and Do No Harm:** The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harm that may result from such uses.
- **Safety and Security:** Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks), should be avoided and addressed by AI actors.
- **Right to Privacy and Data Protection:** Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.
- **Multi-stakeholder and Adaptive Governance & Collaboration:** International law & national sovereignty must be respected in the use of data. Additionally, the participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
- **Responsibility and Accountability:** AI systems should be auditable and traceable. There should be oversight, impact assessment, audit, and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental well-being.
- **Transparency and Explainability:** The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety, and security.
- **Human Oversight and Determination:** Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
- **Sustainability:** AI technologies should be assessed against their impacts on "sustainability", understood as a set of constantly evolving goals, including those set out in the UN's Sustainable Development Goals.
- **Awareness & Literacy:** Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, and media & information literacy.
- **Fairness and Non-Discrimination:** AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI's benefits are accessible to all.

💡To read the complete report by UNESCO on "Recommendations on the Ethics of Artificial Intelligence", you can visit [here](https://unesdoc.unesco.org/ark:/48223/pf0000381137).

As we close the unit, we will also look into Hugging Face's efforts to ensure ethical AI practices. In the next chapter, we will learn more about biases, their types, and how they creep into different AI models.

### Introduction
https://huggingface.co/learn/computer-vision-course/unit12/introduction.md

# Introduction

Welcome to the introduction chapter for the unit Ethics and Bias in Computer Vision. This chapter will build the foundation for many important concepts we will encounter later in this unit.
In this chapter, we will:
- Cover the popular ImageNet Roulette case study in the context of ethics and bias in computer vision, with examples.
- Explore what implications it can have on people and certain groups.
- Take a look at the consequences of the experiment.
- Review the efforts by the ImageNet team to tackle and mitigate these problems.
- Conclude the chapter with some questions on the case study and lay the foundation for the next chapters.

So let us dive in 🤗

## ImageNet Roulette: A Case Study on Biases in Classification 

Imagine you wake up on a Sunday morning and play around with your phone. You come across an app that returns sarcastic and funny labels when you upload images or take a selfie. You don't mind some fun, so you try the app out by uploading a selfie, and to your shock, it returns an alarming label: it tags you as a crime suspect (possibly of a serious and heinous crime). You also see social media posts from the same app of different people with provoking labels, increasing the chances of racial and gender profiling. Some of these labels describe a person as a crime offender, link specific facial features to one's ethnicity, or make claims about a person's ancestry. The app often returns very offensive labels that can harm people's self-image and target specific groups. A wide range of such labels, which can offend people based on their religion, ethnicity, gender, or age, were present in the app, leaving you shocked and confused about what is going on.

AI has made our lives easier and more comfortable, but many times, if AI is not kept in check, it can cause havoc in people's lives. Humans should be more inclusive and aware of others' needs and preferences. The same human values must be incorporated and reflected while developing and deploying AI models. AI models should not create negative sentiments or try to manipulate anyone against a group. 

### Introduction to ImageNet: A Large Scale Dataset for Object Recognition

ImageNet is a large-scale dataset that was created for object recognition benchmarks at scale. The aim was to map out the entire world of objects to make machines around us smarter in scene understanding, in which humans are far better. This dataset was one of the earliest attempts of its kind to create a large-scale dataset for object recognition. 

The ImageNet team started scraping image data from various sources on the internet. The original dataset consisted of around 14,197,122 images with 21,841 classes; this was referred to as Imagenet-21K, reflecting around 21K classes. The annotations were crowdsourced using Amazon Mechanical Turk. A smaller subset of this dataset called ImageNet-1K contained 1,281,167 training images, 50,000 validation images, and 100,000 test images with 1000 classes which was used as the foundation for the popular ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The ILSVRC became the competition ground for many aspiring companies and labs working in computer vision to outperform previous approaches on accurately labeling objects. The structure for ImageNet was based on WordNet, a database of word classifications developed at Princeton University.

💡You can read more about ImageNet [here](https://www.image-net.org/). Also check out [this TED Talk](https://www.youtube.com/watch?v=40riCqvRoMs) by Prof. Fei Fei Li on the same topic.

### Motivation Behind ImageNet Roulette 🃏

Now, let's talk about the app we discussed earlier. This app existed on the internet a few years ago as a web application. This was an intentional experiment termed ImageNet Roulette and is still one of the most popular cases of how AI models can go wrong if the training data is not prepared carefully with guidelines. This project was developed by artist Trevor Paglen and researcher Kate Crawford. They trained their model based on the 2,833 subcategories of the Person category found in the dataset.

The model was trained using Caffe on the images and labels in the "person" category. The app prompted the user for an image, and a face detector detected faces in it. Each detected face was then sent to the Caffe model for classification. Finally, the app returned the image with a bounding box around the detected face and the label the Caffe model predicted.

The main motivation behind ImageNet Roulette was to show the inherent biases in classifying people. It was trained on only the "person" category from the ImageNet dataset (as discussed earlier). So what went wrong? Inference on different images produced labels that were harmful and provocative on many levels. The biggest cause of bias in this system was the ImageNet categories themselves: many of the classes were absurd, offensive, and provocative to begin with. Some of the labels (rephrased here from their original wording to avoid being triggering) would describe a person as an addict, a person with questionable character, a person who is against a specific group of people, an unsuccessful person, a loser, and many more.

A wide range of labels that categorize people based on gender, race, profession, etc., was inherent in ImageNet. And where did it go wrong? These labels are all drawn from the structure of WordNet. This is exactly where the biases crept into this model (because the data preparation process was overlooked, irrelevant images were downloaded in bulk). We will explore the reasons given by the ImageNet team later in the chapter.

Would you like such models to be deployed without any checks? If deployed, are you fine with letting people around you refer to you as an unsuccessful person and make a viral post? This is what went wrong and was overlooked while preparing the dataset.

### Implications of ImageNet Roulette

Let us explore the implications this experiment had:

1. It exposed deep rooted biases within the ImageNet annotations which are often offensive and stereotypical, particularly concerning race and gender.
2. The experiment also questions the integrity of datasets used to train AI models especially within the ImageNet dataset. It highlights the need for more rigorous scrutiny and ethical considerations in creating and annotating the training data.
3. The shocking results acted as a catalyst for discussions around ethical considerations in AI. It prompted a broader conversation within the AI community about the responsibility to ensure fair and unbiased training data, emphasizing the need for ethical data practices.
In general, if models like this are deployed in real life applications, it can have alarming effects on different people and target groups.

### Consequences of ImageNet Roulette

Initially, the "person" category was not noticed because ImageNet was an object recognition benchmark. But after this experiment, some crucial changes happened in the community. In this case study, the creators were able to show the problems with the inherent biases in ImageNet (which had remained largely unexamined until around 2018, when research on them started appearing). A few days later, ImageNet released a research paper summarizing their one-year-long, NSF-funded project. The full ImageNet dataset has been disabled for download since January 2019, whereas the 1000-category ImageNet-1K dataset was not affected. The ImageNet team described some underlying issues and ways to deal with them (surprisingly, ImageNet Roulette was not mentioned in their report).

**Problem 1: Offensive Synonym Sets in WordNet** WordNet contains many offensive synsets that are inappropriate as image labels. Somehow, a lot of these labels crept into ImageNet and were included.

**Solutions:** 
a. ImageNet appointed a group of in-house manual annotators to classify synsets in three categories: *offensive*, *sensitive* and *safe*. Offensive labels are racial or gender slurs, sensitive labels are not offensive but might cause offense according to the context, and safe labels are not offensive.
b. Out of the 2,832 synsets within the person category, 1,593 unsafe synsets (offensive and sensitive) were identified and the remaining 1,239 synsets were temporarily deemed safe. 
c. A new version of ImageNet was prepared by removing unsafe synsets resulting in removal of around 600,000 images in total.

**Problem 2: Non-imageable concepts** Some synsets might not be offensive, but including them in the overall dataset is also not logical. For example, we cannot classify a person in the image as a philanthropist. Similarly, there might be many synsets that cannot be captured visually using images. 

**Solutions:**

a. For such concepts, multiple workers were asked to rate each of the 2,394 people synsets (safe + sensitive).
b. The rating was done based on the ease with which the synset arouses mental imagery, on a scale of 1-5, 1 being very hard and 5 being very easy.
c. The median rating was 2.36, and around 219 synsets had a rating above 4; images with very low imageability were removed.

**Problem 3: Diversity of images** Many categories in ImageNet under-represent certain groups. For example, an image search for a particular profession can return different gender ratios than exist in the real world. Images of construction workers or gangsters might be skewed toward a particular gender or race. Not only during search but also during annotation and data cleaning, annotators might respond to particular categories in an already socially stereotypical manner.
**Solutions:**
a. To mitigate such stereotypes in search and annotations, images should have higher visual arousal (visually more strong).
b. ImageNet team did a demographic analysis on most imageable attributes like gender, color and age.
c. After this analysis, the dataset was balanced by removing overrepresented attributes in a synset leading to a more uniform gender, color and age balance. 

**Problem 4: Privacy Concerns** While classification was subject to some inherent biases, to protect an individual's identity, privacy is also an equally important factor. If these classifications from the experiment were viral, it would have a huge impact on people's lives and overall well-being. To ensure this, AI models should not only be fair but also preserve subjects' privacy.

**Solutions:**

a. Imagenet-1K dataset had 3 people categories. Separate face annotations were carried out and a face-blurred version of the dataset was created.
b. Image obfuscation techniques like blurring and mosaicing were applied to these images.
c. It was shown that these images lead to a very minimal drop of accuracy while benchmarking on object recognition tasks and are suitable for training privacy-aware visual classifiers.

💡For more details on the ImageNet Roulette experiment, you can follow the article on ImageNet Roulette. The experiment is posted on [Excavating AI](https://excavating.ai/) which discusses this in detail. To know more about ImageNet's stance and research on mitigating these issues, you can take a look into the full technical report submitted by them [here](https://www.image-net.org/filtering-and-balancing/).

## Conclusion 

In the later chapters, we will also follow the same flow for case studies and try to answer some basic questions. Although we will discuss AI models in general, our focus will be mainly on CV models and the ethical concerns revolving around them.

1. Exploration: what is the case study or experiment all about?
2. What can go wrong, or what went wrong, and where?
3. What is the impact on target groups and other implications (impact assessment)?
4. How to evaluate bias in CV models using metrics?
5. How to mitigate these problems for fair and ethical development of CV models?
6. The role of the community and other target groups in fostering and cultivating open dialogues.

Across the whole unit, we will come across various case studies related to ethics and bias, evaluate bias, and consider what impact unresolved biases can have. We will also explore various strategies to mitigate these biases and make CV models safe and inclusive to use.

### Exploring Ethical Foundations in CV Models
https://huggingface.co/learn/computer-vision-course/unit12/pre-intro.md

# Exploring Ethical Foundations in CV Models

Welcome to the Ethics and Bias unit of our Computer Vision Course! 📸✨ This segment is designed to explore the critical elements of ethics and bias within the domain of computer vision.

## What you’ll learn 🖼️🤖

In this unit, we'll explore the ethical dimensions and potential biases within AI models and why understanding them is crucial for the responsible development and deployment of computer vision systems.
A brief outline of the unit is given below:

- We will start off this unit with Chapter 1, where we discuss the implications of the popular ImageNet Roulette Case Study.
- In Chapter 2, we'll discuss the ethical considerations connected with AI and computer vision technologies, and why fairness is important when developing CV systems.
- In Chapter 3, we'll learn how bias can infiltrate AI models across various modalities such as text, vision, and speech.
- In Chapter 4, we'll discuss various types of biases and their implications on computer vision models.
- In Chapter 5, we'll discuss different ways to spot bias, and metrics for evaluating biases in CV models with the help of practical case studies.
- In Chapter 6, we'll learn about the strategies and methods to mitigate biases specifically within computer vision models.
- Finally, we close with Chapter 7 and discuss Hugging Face's mission and initiatives toward fostering ethical AI for society.

## Journey through the unit 🏃🏻‍♂️🏃🏻‍♀️

Let's begin our journey that merges theoretical foundations, practical case studies, and ethical concerns inherent in the landscape of computer vision. From exploring real-world examples like the ImageNet Roulette case study to evaluating biases in AI models recognizing "Gay Face", Twitter's Saliency Algorithm and similar case studies, this unit dives deep into understanding, assessing, and mitigating biases in computer vision systems.

By the conclusion of this unit, you'll have gained insights into recognizing biases, evaluating them in CV models, and effectively employing strategies to mitigate these biases. Additionally, you'll explore Hugging Face's efforts to promote ethical practices within AI, providing a roadmap towards responsible and transparent AI development.

Join us as we navigate the domain of ethics and bias in computer vision, equipping ourselves to contribute ethically and responsibly to the future of AI and society.
Let's shape the AI that is not only smart, but fair and responsible.

Let’s dive in! 🚀🤗🌎

### Hugging Face's efforts: Ethics and Society 🤗🌎
https://huggingface.co/learn/computer-vision-course/unit12/conclusion.md

# Hugging Face's efforts: Ethics and Society 🤗🌎

We hope you liked exploring the unit on ethics and bias in computer vision. As we conclude this unit, let us take a look into the efforts by Hugging Face to improve ethics in society. This chapter will encourage you to explore the world of ethics and bias in AI in general, which is constantly evolving. Hugging Face's core mission is to *democratize good machine learning*. So what is Good ML? There are some principles for Good ML. 

## Democratizing Good ML 🤗

**1. Collaboration:** Providing tools for easier collaboration with the open-source community. Some examples of these tools are: 
    a. [*Model Cards*](https://huggingface.co/docs/hub/model-cards) are files that accompany the models and provide information about the model, its intended uses and potential limitations (including ethical considerations), training parameters and experimental information, datasets used for training and evaluation results. This ensures that the models uploaded to the hub are transparent and open to the community.
    b. [*Evaluation*](https://huggingface.co/blog/eval-on-the-hub) lets users evaluate any model on any dataset that is openly available on the Hub without writing a single line of code.
    c. [*Community discussion*](https://huggingface.co/blog/community-update) is important, whether you upload a model, dataset or a space or you just want to know more about them from the authors. Everyone can give feedback, flag a given Space, and improve or contribute directly to the repository via PRs. 
    d. [*Discord*](https://discord.com/invite/JfAtkvEtRb) group for the 🤗 community, where a wide range of channels discuss different domains like reinforcement learning, NLP, game development, audio, computer vision and so on. 
**2. Transparency:** Being transparent about the intent, sources of data, model training and performance. Efforts in this direction include: 
    a. [*Ethical charter for multimodal project*](https://huggingface.co/blog/ethical-charter-multimodal) discusses the values of the multimodal learning group at 🤗. This is a project-specific charter.
    b. [*Work on AI policy @* 🤗](https://huggingface.co/blog/us-national-ai-research-resource) mentions the response of Hugging Face on U.S. National AI Research Resource Interim Report.
**3. Responsibility:** Assessing the impacts of ML models and tools, and making them more auditable and understandable even for people with less expertise in ML. Some efforts in this direction are: 
    a. [🤗 *for Education Project*](https://huggingface.co/blog/education) aims at educating people from all backgrounds, beginners and instructors. Various experts and team members organize meetups, conferences and workshops.  
    b. [*Data Measurement Tool*](https://huggingface.co/spaces/huggingface/data-measurements-tool) is an interactive interface and open-source library that lets dataset creators and users automatically calculate metrics that are meaningful and useful for responsible data development. 

## Categories of Hugging Face Spaces 

As we move forward, let us take a look at how Spaces are categorized by Hugging Face. Hugging Face groups Spaces into six high-level categories based on ethical aspects of machine learning. The categories are:

### ✍️ Rigorous

Rigorous projects pay special attention to examining failure cases, protecting privacy through security measures, and ensuring that potential users (technical and non-technical) are informed of the project's limitations. Some examples:
- Projects built with models that are well-documented with Model Cards.
- Tools that provide transparency into how a model was trained and how it behaves.
- Evaluations against cutting-edge benchmarks, with results reported against disaggregated sets.
- Demonstrations of models failing across gender, skin type, ethnicity, age or other attributes.
- Techniques for mitigating issues like over-fitting and training data memorization.
- Techniques for detoxifying language models.

An example space is the [**Diffusion Bias Explorer**](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer), which lets users compare how three text-to-image models (SD 1.4, SD 2.0, and DALL-E 2) represent different professions and adjectives across prompts.

### 🤝 Consentful

Consentful technology supports the self-determination of people who use and are affected by these technologies. Some examples:
- Demonstrating a commitment to acquiring data from willing, informed, and appropriately compensated sources.
- Designing systems that respect end-user autonomy, e.g. with privacy-preserving techniques.
- Avoiding extractive, chauvinist, "dark", and otherwise "unethical" patterns of engagement.

Some example spaces for this category are: 
1. [**Does CLIP Know My Face:**](https://huggingface.co/spaces/AIML-TUDA/does-clip-know-my-face) this space lets you choose a model, enter your name and upload some pictures. Based on this information, the model tries to predict your name from the images; if it predicts the name correctly for multiple images, there is a high chance that you were part of the training data.
2. [**Photoguard:**](https://huggingface.co/spaces/RamAnanth1/photoguard) this space demonstrates an approach to safeguarding images against manipulation by ML-powered photo-editing models such as SD, through immunization of images.

### 👁️‍🗨️ Socially Conscious 

Socially Conscious work shows us how machine learning can support efforts toward a stronger society. Some examples:
- Using machine learning as part of an effort to tackle climate change.
- Building tools to assist with medical research and practice.
- Models for text-to-speech, image captioning, and other tasks aimed at increasing accessibility.
- Creating systems for the digital humanities, such as for Indigenous language revitalization.

Some example spaces:
1. [**Socratic Models Image Captioning**](https://huggingface.co/spaces/Geonmo/socratic-models-image-captioning-with-BLOOM)
2. [**Comparing Image Captioning Models**](https://huggingface.co/spaces/nielsr/comparing-captioning-models)

### 🌎 Sustainable

This is work that highlights and explores techniques for making machine learning ecologically sustainable. Some examples:
- Tracking emissions from training and running inferences on large language models.
- Quantization and distillation methods to reduce carbon footprints without sacrificing model quality.

1. [**EfficientFormer**](https://huggingface.co/spaces/adirik/efficientformer)
2. [**EfficientNetV2 Deepfakes Video Detector**](https://huggingface.co/spaces/Ron0420/EfficientNetV2_Deepfakes_Video_Detector)

### 🧑‍🤝‍🧑 Inclusive

These are projects which broaden the scope of who builds and benefits in the machine learning world. Some examples:
- Curating diverse datasets that increase the representation of underserved groups.
- Training language models on languages that aren't yet available on the Hugging Face Hub.
- Creating no-code and low-code frameworks that allow non-technical folk to engage with AI.

An example space is [**Promptist Demo**](https://huggingface.co/spaces/microsoft/Promptist).

### 🤔 Inquisitive

Some projects take a radical new approach to concepts which may have become commonplace. These projects, often rooted in critical theory, shine a light on inequities and power structures which challenge the community to rethink its relationship to technology. Some examples:
- Reframing AI and machine learning from Indigenous perspectives.
- Highlighting LGBTQIA2S+ marginalization in AI.
- Critiquing the harms perpetuated by AI systems.
- Discussing the role of "openness" in AI research.

An example space is [**PAIR: Datasets Have Worldviews**](https://huggingface.co/spaces/merve/dataset-worldviews).

Finally, if you would like to explore more about Hugging Face's efforts, do check out the [**Society and Ethics**](https://huggingface.co/society-ethics) organization on Hugging Face. Also check out the dedicated channel on [**#ethics-and-society**](https://discord.gg/hugging-face-879548962464493619).

### Introduction to model optimization for deployment
https://huggingface.co/learn/computer-vision-course/unit9/intro_to_model_optimization.md

# Introduction to model optimization for deployment

Have you ever felt confused after the model training stage? What else should you do? If yes, this chapter will help you. In general, the step after we have trained a computer vision model is to deploy it so that other people can use our model. However, once the model has been deployed in production, many problems arise, such as the model size being too large, the prediction process taking a long time, and limited memory on the device. These problems happen because we usually deploy models on devices with smaller specifications than the hardware used for training. To overcome these issues, we can carry out an additional stage before deployment: model optimization.

## What is model optimization?
Model optimization is the process of modifying a trained model to make it more efficient. These modifications are crucial because the hardware we use during training and inference will be very different in most cases. The hardware available at inference time usually has smaller specifications, which is why model optimization needs to be carried out. For example, we might train on high-performance GPUs while the model inference process runs on edge devices (e.g., microcomputers, mobile devices, IoT, etc.). Of course, these devices have different specifications and tend to be smaller. Carrying out model optimization is crucial so our model can run smoothly on devices with lower specifications.

## Why is it important for deployment in computer vision?
As we already know, optimizing the model is important before the deployment stage, but why? Several factors make model optimization important to do before deployment. Some of these are:
1. Resource limitations: Computer vision models often require high computational resources such as memory, CPU, and GPU. This will be a problem if we want to deploy the model on devices with limited resources, such as mobile phones, embedded systems, or edge devices. Optimization techniques can reduce model size and computational cost and make it deployable for that platform.
2. Latency requirements: Many computer vision applications, such as self-driving cars and augmented reality, require real-time response. This means the model must be able to process data and generate results quickly. Optimization can significantly increase the inference speed of a model and ensure it can meet latency constraints.
3. Power consumption: Devices that use batteries, such as drones and wearable devices, require models with efficient power usage. Optimization techniques can also reduce battery consumption which is often caused by model sizes that are too large.
4. Hardware compatibility: Sometimes, different hardware has its capabilities and limitations. Several optimization techniques are specifically used for specific hardware. If this is done, we can easily overcome the hardware limitations.

## Different types of model optimization techniques
There are several model optimization techniques, some of which will be explained in more detail in the next sections. This section briefly describes the main types (minimal code sketches of two of them follow the list):
1. Pruning: Pruning is the process of eliminating redundant or unimportant connections in the model, which reduces model size and complexity.

![Pruning](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/pruning.png)

2. Quantization: Quantization means converting model weights from high-precision formats (e.g., 32-bit floating-point) to lower-precision formats (e.g., 16-bit floating-point or 8-bit integers) to reduce memory footprint and increase inference speed.
3. Knowledge Distillation: Knowledge distillation aims to transfer knowledge from a complex and larger model (teacher model) to a smaller model (student model) by mimicking the behavior of the teacher model.

![Knowledge Distillation](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/knowledge_distillation.png)

4. Low-rank approximation: Approximates large matrices with smaller ones, reducing memory consumption and computational cost.
5. Model compression with hardware accelerators: Similar in spirit to pruning and quantization, but carried out with tooling for specific hardware such as NVIDIA GPUs or Intel hardware.
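To make the first and third techniques more concrete, here is a minimal PyTorch sketch of magnitude pruning and a simple knowledge-distillation loss. The example network, sparsity level, temperature `T`, and weighting `alpha` are illustrative choices, not fixed recommendations.

```python
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# --- Pruning: zero out the smallest-magnitude weights of a layer ---
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
prune.l1_unstructured(model[0], name="weight", amount=0.3)  # 30% of weights set to zero
print("sparsity of first layer:", (model[0].weight == 0).float().mean().item())
prune.remove(model[0], "weight")  # make the pruning permanent


# --- Knowledge distillation: the student mimics the teacher's softened outputs ---
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of the soft-target KL divergence and the usual cross-entropy."""
    soft_targets = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_targets = F.cross_entropy(student_logits, labels)
    return alpha * soft_targets + (1 - alpha) * hard_targets
```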

## Trade-offs between accuracy, performance, and resource usage
A trade-off exists between accuracy, performance, and resource usage when deploying a model, so we have to decide which aspect to prioritize for the case at hand.
1. Accuracy is the model's ability to predict correctly. High accuracy usually comes from larger, more complex models, which also drives up compute and resource usage; such models typically require a lot of memory, so there will be limitations if they are deployed on resource-constrained devices.
2. Performance is the model's speed and efficiency (latency). This is important so the model can make predictions quickly, even in real time. However, optimizing for performance usually results in some loss of accuracy.
3. Resource usage is the computational resources needed to perform inference with the model, such as CPU, memory, and storage. Efficient resource usage is crucial if we want to deploy models on devices with certain limitations, such as smartphones or IoT devices.

The images below compare common computer vision models in terms of model size, accuracy, and latency. A bigger model tends to have higher accuracy but needs more time for inference and has a larger file size.

![Model Size VS Accuracy](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/model_size_vs_accuracy.png)

![Accuracy VS Latency](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/accuracy_vs_latency.png)

These are the three factors we must weigh: where should the focus be for the model we trained? For example, focusing on high accuracy will result in a slower model at inference time or require extensive resources. To address this, we apply one of the optimization methods explained above so that the resulting model maximizes, or balances, the trade-off between the three components mentioned above.

### Model Deployment Considerations
https://huggingface.co/learn/computer-vision-course/unit9/model_deployment.md

# Model Deployment Considerations

This chapter delves into the intricate considerations of deploying machine learning models. From diverse deployment platforms to crucial practices like serialization, packaging, serving, and best deployment strategies, we explore the multifaceted landscape of model deployment.   

## Different Deployment Platforms
- **Cloud**: Deploying models on cloud platforms like AWS, Google Cloud, or Azure offers a scalable and robust infrastructure for AI model deployment. These platforms provide managed services for hosting models, ensuring scalability, flexibility, and integration with other cloud services.
    - **Advantages**
        - Cloud deployment offers scalability through high computing power, abundant memory resources, and managed services.
        - Integration with the cloud ecosystem allows seamless interaction with various cloud services.
    
    - **Considerations** 
        - Cost implications need to be evaluated concerning infrastructure usage.
        - Data privacy concerns and managing network latency for real-time applications should be addressed.

- **Edge**: Exploring deployment on edge devices such as IoT devices, edge servers, or embedded systems allows models to run locally, reducing dependency on cloud services. This enables real-time processing and minimizes data transmission to the cloud.
    - **Advantages**
        - Low latency and real-time processing capabilities due to local deployment.
        - Reduced data transmission and offline capabilities enhance privacy and performance.

    - **Challenges**
        - Limited resources in terms of compute power and memory pose challenges.
        - Optimization for constrained environments, considering hardware limitations, is crucial.

- Deployment to the edge isn't limited to cloud-specific scenarios but emphasizes deploying models closer to users or areas with poor network connectivity.
- Edge deployments involve training models elsewhere (e.g., in the cloud) and optimizing them for edge devices, often by reducing model package sizes for smaller devices.

- **Mobile**: Optimizing models for performance and resource constraints. Frameworks like [Core ML](https://developer.apple.com/documentation/coreml) (for iOS) and [TensorFlow Mobile](https://www.tensorflow.org/mobile) (for Android and iOS) facilitate model deployment on mobile platforms.

## Model Serialization and Packaging

- **Serialization:** Serialization converts a complex object (a machine learning model) into a format that can be easily stored or transmitted. It's like flattening a three-dimensional puzzle into a two-dimensional image. This serialized representation can be saved to disk, sent over a network, or stored in a database.
    - **ONNX (Open Neural Network Exchange):** ONNX is like a universal translator for machine learning models. It's a format that allows different frameworks, like TensorFlow, PyTorch, and scikit-learn, to understand and work with each other's models. It's like having a common language that all frameworks can speak. 
        - PyTorch's `torch.onnx.export()` function converts a PyTorch model to the ONNX format, facilitating interoperability between frameworks (a short export example follows this list).
        - TensorFlow offers methods to freeze the graph and convert it to ONNX format using tools like `tf2onnx`.

- **Packaging:** Packaging, on the other hand, involves bundling all the necessary components and dependencies of a machine learning model. It's like putting all the puzzle pieces into a box, along with the instructions on assembling it. Packaging includes everything needed to run the model, such as the serialized model file, pre-processing or post-processing code, and required libraries or dependencies.
    
- Serialization is device-agnostic when packaging for cloud deployment. Serialized models are often packaged into containers (e.g., Docker) or deployed as web services (e.g., Flask or FastAPI). Cloud deployments also involve auto-scaling, load balancing, and integration with other cloud services.

- Another modern approach to deploying machine learning models is through dedicated and fully managed infrastructure provided by 🤗 [Inference Endpoints](https://huggingface.co/inference-endpoints). These endpoints facilitate easy deployment of Transformers, Diffusers, or any model without the need to handle containers and GPUs directly. The service offers a secure, compliant, and flexible production solution, enabling deployment with just a few clicks.
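To make the serialization step concrete, here is a minimal sketch of exporting a PyTorch model to ONNX with `torch.onnx.export()`; the torchvision model, file name, and tensor names are illustrative.

```python
import torch
import torchvision

# Any trained PyTorch model can be serialized; a pretrained ResNet-18 is used as an example
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.eval()

# A dummy input with the expected shape is needed to trace the computation graph
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",  # the serialized model file
    input_names=["pixel_values"],
    output_names=["logits"],
    dynamic_axes={"pixel_values": {0: "batch"}},  # allow a variable batch size
)
```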
## Model Serving and Inference

- **Model Serving:**  Involves making the trained and packaged model accessible for inference requests.
    - HTTP REST API: Serving models through HTTP endpoints allows clients to send requests with input data and receive predictions in return. Frameworks like Flask, FastAPI, or TensorFlow Serving facilitate this approach (a minimal FastAPI sketch follows this list).

    - gRPC (Remote Procedure Call): gRPC provides a high-performance, language-agnostic framework for serving machine learning models. It enables efficient communication between clients and servers.

    - Cloud-Based Services: Cloud platforms like AWS, Azure, and GCP offer managed services for deploying and serving machine learning models, simplifying scalability, and maintenance.
- **Inference:** Inference utilizes the deployed model to generate predictions or outputs based on incoming data. It relies on the serving infrastructure to execute the model and provide predictions.

    - Using the Model: Inference systems take input data received through serving, run it through the deployed model, and generate predictions or outputs.

    - Client Interaction: Clients interact with the serving system to send input data and receive predictions or inferences back, completing the cycle of model utilization.

- **Kubernetes**: [Kubernetes](https://kubernetes.io/docs/home/) is an open-source container orchestration platform widely used for deploying and managing applications. Understanding Kubernetes can help deploy models in a scalable and reliable manner.
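To illustrate the HTTP-serving approach mentioned above, here is a minimal FastAPI sketch that exposes an image-classification pipeline behind a REST endpoint. The checkpoint name and route are illustrative, and a production service would add batching, input validation, and monitoring.

```python
# serve.py - run with: uvicorn serve:app --host 0.0.0.0 --port 8000
# (file uploads additionally require the python-multipart package)
import io

from fastapi import FastAPI, UploadFile
from PIL import Image
from transformers import pipeline

app = FastAPI()
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")


@app.post("/predict")
async def predict(file: UploadFile):
    # Read the uploaded image and run it through the model
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    predictions = classifier(image)
    return {"predictions": predictions}
```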
## Best Practices for Deployment in Production
- **MLOps** is an emerging practice that applies DevOps principles to machine learning projects. It encompasses various best practices for deploying models in production, such as version control, continuous integration and deployment, monitoring, and automation.
- **Load Testing**: Simulate varying workloads to ensure the model's responsiveness under different conditions.
- **Anomaly Detection**: Implement systems to detect deviations in model behavior and performance.
    - Example: A *Distribution shift* occurs when the statistical properties of incoming data change significantly from the data the model was trained on. This change might lead to reduced model accuracy or performance, highlighting the importance of anomaly detection mechanisms to identify and mitigate such shifts in real-time.
- **Real-time Monitoring**: Utilize tools for immediate identification of issues in deployed models.
    - Real-time monitoring tools can flag sudden spikes in prediction errors or unusual patterns in input data, triggering alerts for further investigation and prompt action.

- **Security and Privacy:** Employ encryption methods for securing data during inference and transmission. Establish strict access controls to restrict model access and ensure data privacy.
- **A/B Testing**: Evaluate new model versions against the existing one through A/B testing before full deployment.
    - A/B testing involves deploying two versions of the model simultaneously, directing a fraction of traffic to each. Performance metrics, such as accuracy or user engagement, are compared to determine the superior model version.
- **Continuous Evaluation**: Continuously assess model performance post-deployment and prepare for rapid rollback if issues arise.
- **Documentation**: Maintain detailed records covering model architecture, dependencies, and performance metrics.

### Model optimization tools and frameworks
https://huggingface.co/learn/computer-vision-course/unit9/tools_and_frameworks.md

# Model optimization tools and frameworks

## TensorFlow Model Optimization Toolkit (TMO)

### Overview

The TensorFlow Model Optimization Toolkit is a suite of tools for optimizing machine learning models for deployment.
The TensorFlow Lite post-training quantization tool enables users to convert weights to 8-bit precision, which reduces the trained model size by about 4 times.
The toolkit also includes APIs for pruning and quantization during training, for cases where post-training quantization is insufficient.
These tools help users reduce latency and inference cost, deploy models to edge devices with restricted resources, and optimize execution for existing hardware or new special-purpose accelerators.

### Setup guide

The TensorFlow Model Optimization Toolkit is available as a pip package, `tensorflow-model-optimization`. To install the package, run the following command:
```
pip install -U tensorflow-model-optimization
```
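Once installed, a typical companion workflow is the post-training quantization mentioned in the overview, done through the TensorFlow Lite converter. The sketch below assumes a trained Keras model; a randomly initialized MobileNetV2 is used only as a placeholder.

```python
import tensorflow as tf

# Placeholder for a trained Keras model; in practice, load your own trained model
model = tf.keras.applications.MobileNetV2(weights=None)

# Post-training quantization with the TensorFlow Lite converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default weight quantization
tflite_model = converter.convert()

# Save the quantized model for deployment
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```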

### Hands-on guide

For a hands-on guide on how to use the TensorFlow Model Optimization Toolkit, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tmo.ipynb).
## PyTorch Quantization

### Overview

To optimize models, PyTorch supports INT8 quantization, which, compared to typical FP32 models, leads to a 4x reduction in model size and a 4x reduction in memory bandwidth requirements. 
PyTorch supports multiple approaches to quantizing a deep learning model:
1. Post-training quantization: the model is trained in FP32 and then converted to INT8. 
2. Quantization-aware training: quantization errors are modeled in both the forward and backward passes using fake-quantization modules. 
3. Quantized tensors and operations: representations that can be used to directly construct models that perform all or part of the computation in lower precision. 

For more details on quantization in PyTorch, see [here](https://pytorch.org/docs/stable/quantization.html)

### Setup guide

PyTorch quantization is available as an API in the PyTorch package. To use it, simply install PyTorch and import the quantization API as follows: 
```
pip install torch
```
Then, in Python, import the quantization API:
```python
import torch.quantization
```
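As a minimal illustration of the first (post-training) approach, the sketch below applies dynamic quantization to the `Linear` layers of a small example model; the architecture and shapes are placeholders.

```python
import torch
import torch.nn as nn

# An illustrative FP32 model
model_fp32 = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model_fp32.eval()

# Post-training dynamic quantization: Linear weights are stored in INT8
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

# Inference runs as usual, now with quantized weights
x = torch.randn(1, 256)
print(model_int8(x).shape)
```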
### Hands-on guide

For a hands-on guide on how to use PyTorch quantization, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/torch.ipynb).

## ONNX Runtime

### Overview

ONNX Runtime is a cross-platform machine-learning model accelerator, with a flexible interface to integrate hardware-specific libraries. 
ONNX Runtime can be used with models from PyTorch, TensorFlow/Keras, TFLite, scikit-learn, and other frameworks.
The benefits of using ONNX Runtime for Inferencing are as follows:
- Improve inference performance for a wide variety of ML models.
- Run on different hardware and operating systems.
- Train in Python but deploy into a C#/C++/Java app.
- Train and perform inference with models created in different frameworks.

For more details on ONNX Runtime, see [here](https://onnxruntime.ai/docs/).

### Setup guide

ONNX Runtime provides two Python packages, and only one of them should be installed in any given environment. 
Use the GPU package if you want to use ONNX Runtime with GPU support.
The Python packages for ONNX Runtime are available via pip. To install the CPU package, run the following command:
```
pip install onnxruntime
```

For the GPU version, run the following command:
```
pip install onnxruntime-gpu
```
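The sketch below shows a minimal inference run with ONNX Runtime, assuming you already have a serialized ONNX file (the file name and input shape are illustrative).

```python
import numpy as np
import onnxruntime as ort

# Load a serialized ONNX model (file name is illustrative)
session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])

# Build a dummy input matching the model's expected input name and shape
input_name = session.get_inputs()[0].name
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None returns all model outputs
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```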

### Hands-on guide

For a hands-on guide on how to use ONNX Runtime, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/onnx.ipynb).

## TensorRT

### Overview

NVIDIA® TensorRT™ is an SDK for optimizing trained deep learning models to enable high-performance inference. 
TensorRT contains a deep learning inference optimizer for trained models and a runtime for execution.
After users have trained their deep learning model in a framework of their choice, TensorRT enables them to run it with higher throughput and lower latency.

### Setup guide

TensorRT is available as a pip package, `tensorrt`. To install the package, run the following command:
```
pip install tensorrt
```
For other installation methods, see [here](https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#install).
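As a rough illustration of the TensorRT 8.x Python workflow, the sketch below parses an ONNX file and builds a serialized engine; the file names are illustrative, and a CUDA-capable GPU with a matching TensorRT installation is assumed.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse a previously exported ONNX model (file name is illustrative)
with open("resnet18.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX file")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 optimization when supported

# Build and save a serialized engine that the TensorRT runtime can execute
serialized_engine = builder.build_serialized_network(network, config)
with open("resnet18.engine", "wb") as f:
    f.write(serialized_engine)
```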

### Hands-on guide

For a hands-on guide on how to use TensorRT, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/tensorrt.ipynb).

## OpenVINO

### Overview

The OpenVINO™ toolkit enables users to optimize a deep learning model from almost any framework and deploy it with best-in-class performance on a range of Intel® processors and other hardware platforms.
The benefits of using OpenVINO include:
- Linking directly with the OpenVINO Runtime to run inference locally, or using the OpenVINO Model Server to serve model inference from a separate server or within a Kubernetes environment
- Writing an application once and deploying it anywhere on your preferred device, language, and OS
- Minimal external dependencies
- Reduced first-inference latency, achieved by using the CPU for initial inference and then switching to another device once the model has been compiled and loaded into memory

### Setup guide

OpenVINO is available as a pip package, `openvino`. To install the package, run the following command:
```
pip install openvino
```

For other installation methods, see [here](https://docs.openvino.ai/2023.2/openvino_docs_install_guides_overview.html?VERSION=v_2023_2_0&OP_SYSTEM=LINUX&DISTRIBUTION=ARCHIVE).
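As a minimal illustration, the sketch below reads an ONNX model, compiles it for the CPU with the OpenVINO Runtime Python API (2023.x-style), and runs a dummy inference; the file name and input shape are illustrative.

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Read a model (ONNX, TensorFlow, or OpenVINO IR) and compile it for a target device
model = core.read_model("resnet18.onnx")
compiled_model = core.compile_model(model, device_name="CPU")

# Run inference on a dummy input and fetch the first output
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)
result = compiled_model([dummy_input])[compiled_model.output(0)]
print(result.shape)
```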

### Hands-on guide

For a hands-on guide on how to use OpenVINO, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/openvino.ipynb).

## Optimum

### Overview

Optimum serves as an extension of [Transformers](https://huggingface.co/docs/transformers), offering a suite of tools designed for optimizing performance in training and 
running models on specific hardware, ensuring maximum efficiency. In the rapidly evolving AI landscape, specialized hardware and unique optimizations continue to emerge regularly. 
Optimum empowers developers to seamlessly leverage these diverse platforms, maintaining the ease of use inherent in Transformers. 
Platforms currently supported by Optimum are:
1. [Habana](https://huggingface.co/docs/optimum/habana/index) 
2. [Intel](https://huggingface.co/docs/optimum/intel/index)
3. [Nvidia](https://github.com/huggingface/optimum-nvidia)
4. [AWS Trainium and Inferentia](https://huggingface.co/docs/optimum-neuron/index)
5. [AMD](https://huggingface.co/docs/optimum/amd/index)
6. [FuriosaAI](https://huggingface.co/docs/optimum/furiosa/index)
7. [ONNX Runtime](https://huggingface.co/docs/optimum/onnxruntime/overview)
8. [BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview)

### Setup guide

Optimum is available as a pip package, `optimum`. To install the package, run the following command:
```
pip install optimum
``` 

For installation of accelerator-specific features, see [here](https://huggingface.co/docs/optimum/installation).
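As a minimal illustration, the sketch below uses Optimum's ONNX Runtime integration to export a Transformers image-classification checkpoint to ONNX and run it in a standard pipeline; the checkpoint name and the `export=True` conversion step follow recent Optimum releases and are shown only as an example.

```python
from optimum.onnxruntime import ORTModelForImageClassification
from transformers import AutoImageProcessor, pipeline

# Export a Transformers checkpoint to ONNX and load it with the ONNX Runtime backend
checkpoint = "google/vit-base-patch16-224"
ort_model = ORTModelForImageClassification.from_pretrained(checkpoint, export=True)
processor = AutoImageProcessor.from_pretrained(checkpoint)

# The ONNX-backed model is a drop-in replacement inside a standard pipeline
classifier = pipeline("image-classification", model=ort_model, image_processor=processor)
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg"))
```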

### Hands-on guide

For a hands-on guide on how to use Optimum for quantization, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/optimum.ipynb).

## EdgeTPU

### Overview

Edge TPU is Google’s purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.
The benefits of using the Edge TPU include:
- Complements Cloud TPU and Google Cloud services to provide an end-to-end, cloud-to-edge, hardware + software infrastructure for AI-based solutions deployment
- High performance in a small physical and power footprint
- Combines custom hardware, open software, and state-of-the-art AI algorithms to provide high-quality, easy-to-deploy AI solutions for the edge

For more details on EdgeTPU, see [here](https://cloud.google.com/edge-tpu)

For a guide on how to set up and use the Edge TPU, refer to this [notebook](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%209%20-%20Model%20Optimization/edge_tpu.ipynb).

### Supplementary Reading and Resources 🤗
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/supplementary-material.md

# Supplementary Reading and Resources 🤗

We hope that you found the unit on multimodal models exciting. If you'd like to learn and explore in detail about multimodal learning and models, here is a list of resources for your reference:

- [**Hugging Face Tasks**](https://huggingface.co/tasks) offers an overview of various tasks under domains like Computer Vision, Audio, NLP, Multimodal Learning and Reinforcement Learning. The tasks contain demos, use cases, models, datasets, etc.
- [**11-777 MMML**](https://cmu-multicomp-lab.github.io/mmml-course/fall2022/) course on multimodal machine learning by CMU. You can find the video lectures [**here**](https://www.youtube.com/@LPMorency/playlists).
- [**Blog on Multimodality and LLMs by Chip Huyen**](https://huyenchip.com/2023/10/10/multimodal.html) provides a comprehensive overview of multimodality, large multimodal models, systems like BLIP, CLIP, etc.
- [**Awesome Multimodal ML**](https://github.com/pliang279/awesome-multimodal-ml), a GitHub repository containing papers, courses, architectures, workshops, tutorials etc.
- [**Awesome Multimodal Large Language Models**](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models), a GitHub repository containing papers and datasets related to multimodal LLMs.
- [**EE/CS 148, Caltech**](https://gkioxari.github.io/teaching/cs148/) course on Large Language and Vision Models.

In the next unit, we will take a look at another kind of neural network model that has been revolutionized by multimodality in recent years: **Generative Neural Networks**.
Get your paint brush ready and join us on another exciting adventure in the realm of Computer Vision 🤠

### Multimodal Tasks and Models
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/tasks-models-part1.md

# Multimodal Tasks and Models

In this section, we will briefly look at the different multimodal tasks involving Image and Text modalities and their corresponding models. Before diving in, let's have a small recap on what is meant by "multimodal" which was covered in previous sections. The human world is a symphony of diverse sensory inputs. We perceive and understand through sight, sound, touch, and more. This multimodality is what separates our rich understanding from the limitations of traditional, unimodal AI models. Drawing inspiration from human cognition, multimodal models aim to bridge this gap by integrating information from multiple sources, like text, images, audio, and even sensor data. This fusion of modalities leads to a more comprehensive and nuanced understanding of the world, unlocking a vast range of tasks and applications.

## Examples of Tasks
Before looking into specific models, it's crucial to understand the diverse range of tasks involving image and text. These tasks include but are not limited to:

- **Visual Question Answering (VQA) and Visual Reasoning:** Imagine a machine that looks at a picture and understands your questions about it. Visual Question Answering (VQA) is just that! It trains computers to extract meaning from images and answer questions like "Who's driving the car?" while Visual Reasoning is the secret sauce, enabling the machine to go beyond simple recognition and infer relationships, compare objects, and understand scene context to give accurate answers. It's like asking a detective to read the clues in a picture, only much faster and better!

- **Document Visual Question Answering (DocVQA):** Imagine a computer understanding both the text and layout of a document, like a map or contract, and then answering questions about it directly from the image. That's Document Visual Question Answering (DocVQA) in a nutshell. It combines computer vision for processing image elements and natural language processing to interpret text, allowing machines to "read" and answer questions about documents just like humans do. Think of it as supercharging document search with AI to unlock all the information trapped within those images.

- **Image captioning:** Image captioning bridges the gap between vision and language. It analyzes an image like a detective, extracting details, understanding the scene, and then crafting a sentence or two that tells the story – a sunset over a calm sea, a child laughing on a swing, or even a bustling city street. It's a fascinating blend of computer vision and language, letting computers describe the world around them, one picture at a time.

- **Image-Text Retrieval:** Image-text retrieval is like a matchmaker for images and their descriptions. Think of it like searching for a specific book in a library, but instead of browsing titles, you can use either the picture on the cover or a brief summary to find it. It's like a super-powered search engine that understands both pictures and words, opening doors for exciting applications like image search, automatic captioning, and even helping visually impaired people "see" through text descriptions.

- **Visual grounding:** Visual grounding is like connecting the dots between what we see and say. It's about understanding how language references specific parts of an image, allowing AI models to pinpoint objects or regions based on natural language descriptions. Imagine asking "Where's the red apple in the fruit bowl?" and the AI instantly highlights it in the picture - that's visual grounding at work!

- **Text-to-Image generation:** Imagine a magical paintbrush that interprets your words and brings them to life! Text-to-image generation is like that; it transforms your written descriptions into unique images. It's a blend of language understanding and image creation, where your text unlocks a visual world from photorealistic landscapes to dreamlike abstractions, all born from the power of your words.

## Visual Question Answering (VQA) and Visual Reasoning
![VQA](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/vqa_visual_reasoning.png) *Example of Input (Image + Text) and Output (Text) for the VQA and Visual Reasoning Models [[1]](#pretraining-paper)*

**Visual Question Answering (VQA)**
- **Input:** An image-question pair (image and a question about it).
- **Output:** In a multiple-choice setting: a label corresponding to the correct answer among pre-defined choices.
In an open-ended setting: a free-form natural language answer based on the image and question.
- **Task:** Answer questions about images. (Most VQA models treat it as a classification problem with pre-defined answers.) See the above example for reference.

**Visual Reasoning**
- **Input:** Varies depending on the specific visual reasoning task:
    - VQA-style tasks: Image-question pairs.
    - Matching tasks: Images and text statements.
    - Entailment tasks: Image and text pair (potentially with multiple statements).
    - Sub-question tasks: Image and a primary question with additional perception-related sub-questions.
- **Output:** Varies depending on the task:
    - VQA: Answers to questions about the image.
    - Matching: True/False for whether the text is true about the image(s).
    - Entailment: Prediction of whether the image semantically entails the text.
    - Sub-question: Answers to the sub-questions related to perception.
- **Task:** Performs various reasoning tasks on images. See the above example for the reference.

In general, both VQA and Visual Reasoning are treated as *Visual Question Answering (VQA)* task. Some of the popular models for VQA tasks are:
- **BLIP-VQA**: It is a large pre-trained model for visual question answering (VQA) tasks developed by Salesforce AI. It uses a "Bootstrapping Language-Image Pre-training" (BLIP) approach, which leverages both noisy web data and caption generation to achieve state-of-the-art performance on various vision-language tasks. You can use BLIP in HuggingFace as follows:
```python
from PIL import Image
from transformers import pipeline

vqa_pipeline = pipeline(
    "visual-question-answering", model="Salesforce/blip-vqa-capfilt-large"
)

image = Image.open("elephant.jpeg")
question = "Is there an elephant?"

vqa_pipeline(image, question, top_k=1)
```
- **DePlot**: It is a one-shot visual language reasoning model trained on translating plots and charts to text summaries. This enables its integration with LLMs for answering complex questions about the data, even with novel human-written queries. DePlot achieves this by standardizing the plot-to-table translation task and leveraging the Pix2Struct architecture, surpassing the previous SOTA on chart QA with just a single example and LLM prompting. You can use DePlot in HuggingFace as follows:
```python
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
- **ViLT**: It is a Vision-and-Language Transformer (ViLT) model, utilizing a transformer architecture without convolutions or region supervision, fine-tuned on the VQAv2 dataset for answering natural language questions about images. The base ViLT model boasts a large architecture (B32 size) and leverages joint image and text training, making it effective for various vision-language tasks, particularly VQA, achieving competitive performance. You can use ViLT in HuggingFace as follows:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image

# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# prepare inputs
encoding = processor(image, text, return_tensors="pt")

# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
Learn more about how to train and use VQA models in HuggingFace `transformers` library [here](https://huggingface.co/docs/transformers/v4.36.1/tasks/visual_question_answering).

## Document Visual Question Answering (DocVQA)
![DocVQA](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/doc_vqa.jpg)
*Example of Input (Image + Text) and Output (Text) for the Doc VQA Model. [[2]](#doc-vqa-paper)*
- **Input:**
    - Document image: A scanned or digital image of a document, containing text, layout, and visual elements.
    - Question about the document: A natural language question posed in text format.
- **Task:**
    - Analyze and understand: The DocVQA model must process both the visual and textual information within the document to fully comprehend its content.
    - Reason and infer: The model needs to establish relationships between visual elements, text, and the question to draw relevant conclusions.
    - Generate a natural language answer: The model must produce a clear, concise, and accurate answer to the question in natural language text format. See the above example for the reference.

- **Output:** Answer to the question: A text response that directly addresses the query and accurately reflects the information found in the document.

Now, let's look at some of the popular DocVQA models in HuggingFace:
- **LayoutLM:** It is a pre-trained neural network that understands document images by jointly analyzing both the text and its layout. Unlike traditional NLP models, it considers factors like font size, position, and proximity to learn relationships between words and their meaning in the context of the document. This allows it to excel at tasks like form understanding, receipt analysis, and document classification, making it a powerful tool for extracting information from scanned documents. You can use LayoutLM in HuggingFace as follows:
```python
from transformers import pipeline
from PIL import Image

pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

question = "What is the purchase amount?"
image = Image.open("your-document.png")

pipe(image=image, question=question)

## [{'answer': '20,000$'}]
```
- **Donut:** Also known as the OCR-free Document Understanding Transformer, Donut is a state-of-the-art image processing model that bypasses traditional optical character recognition (OCR) and directly analyzes document images to understand their content. It combines a vision encoder (Swin Transformer) with a text decoder (BART) to extract information and generate textual descriptions, excelling in tasks like document classification, form understanding, and visual question answering. Its unique strength lies in its "end-to-end" nature, avoiding potential errors introduced by separate OCR steps and achieving impressive accuracy with efficient processing. You can use the Donut model in HuggingFace as follows:
```python
from transformers import pipeline
from PIL import Image

pipe = pipeline(
    "document-question-answering", model="naver-clova-ix/donut-base-finetuned-docvqa"
)

question = "What is the purchase amount?"
image = Image.open("your-document.png")

pipe(image=image, question=question)

## [{'answer': '20,000$'}]
```
- **Nougat:** It is a visual transformer model, trained on millions of academic papers, that can directly "read" scanned PDFs and output their content in a structured markup language, even understanding complex elements like math equations and tables. It bypasses traditional Optical Character Recognition, achieving high accuracy while preserving semantics, making scientific knowledge stored in PDFs more accessible and usable. Nougat uses the same architecture as Donut, meaning an image Transformer encoder and an autoregressive text Transformer decoder to translate scientific PDFs to markdown, enabling easier access to them. You can use Nougat model in HuggingFace as follows:
```python
from huggingface_hub import hf_hub_download
import re
from PIL import Image

from transformers import NougatProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# prepare PDF image for the model
filepath = hf_hub_download(
    repo_id="hf-internal-testing/fixtures_docvqa",
    filename="nougat_paper.png",
    repo_type="dataset",
)
image = Image.open(filepath)
pixel_values = processor(image, return_tensors="pt").pixel_values

# generate transcription (here we only generate 30 tokens)
outputs = model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=30,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = processor.post_process_generation(sequence, fix_markdown=False)
# note: we're using repr here for the sake of printing the \n characters; feel free to just print the sequence
print(repr(sequence))
```
Learn more about how to train and use DocVQA models in HuggingFace `transformers` library [here](https://huggingface.co/docs/transformers/tasks/document_question_answering).

## Image Captioning
![Image Captioning](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/image_captioning.png)
*Example of Input (Image) and Output (Text) for the Image Captioning Model. [[1]](#pretraining-paper)*
- **Inputs:**
    - Image: Image in various formats (e.g., JPEG, PNG).
    - Pre-trained image feature extractor (optional): A pre-trained neural network that can extract meaningful features from images, such as a convolutional neural network (CNN).
- **Outputs:** Textual captions: Single Sentence or Paragraph that accurately describe the content of the input images, capturing objects, actions, relationships, and overall context. See the above example for the reference.
- **Task:** To automatically generate natural language descriptions of images. This involves: (1) Understanding the visual content of the image (objects, actions, relationships). (2) Encoding this information into a meaningful representation. (3) Decoding this representation into a coherent, grammatically correct, and informative sentence or phrase.

Now, let's look at some of the popular Image Captioning models in HuggingFace:
- **ViT-GPT2:** It is a PyTorch model for generating image captions, built by combining Vision Transformer (ViT) for visual feature extraction and GPT-2 for text generation. Trained on the COCO dataset, it leverages ViT's ability to encode rich image details and GPT-2's fluency in language production to create accurate and descriptive captions. This open-source model offers an effective solution for image understanding and captioning tasks. You can use **ViT-GPT2** in HuggingFace as follows:
```python
from transformers import pipeline

image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png")

# [{'generated_text': 'a soccer game with a player jumping to catch the ball '}]
```
- **BLIP-Image-Captioning:** It is a state-of-the-art image captioning model based on BLIP, a framework pre-trained on both clean and noisy web data for unified vision-language understanding and generation. It utilizes a bootstrapping process to filter out noisy captions, achieving improved performance on tasks like image captioning, image-text retrieval, and VQA. This large version, built with a ViT-L backbone, excels in generating accurate and detailed captions from images. You can use the BLIP Image Captioning model in HuggingFace as follows:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-large"
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
- **git-base:** `microsoft/git-base` is a base-sized version of the GIT (GenerativeImage2Text) model, a Transformer decoder trained to generate text descriptions of images. It takes both image tokens and text tokens as input, predicting the next text token based on both the image and previous text. This makes it suitable for tasks like image and video captioning. Fine-tuned versions like `microsoft/git-base-coco` and `microsoft/git-base-textcaps` exist for specific datasets, while the base model offers a starting point for further customization. You can use git-base model in HuggingFace as follows:
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import requests
from PIL import Image

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```

Learn more about how to train and use Image Captioning models in HuggingFace `transformers` libraries [here.](https://huggingface.co/docs/transformers/v4.36.1/en/tasks/image_captioning)

## Image-Text Retrieval
![Image-Text Retrieval](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/image_text_retrieval.png)
*Example of Input (Text Query) and Output (Image) for the Text-to-Image Retrieval. [[1]](#pretraining-paper)*
- **Inputs:**
    - Images: Image in various formats (e.g., JPEG, PNG).
    - Text: Natural language text, usually in the form of captions, descriptions, or keywords associated with images.
- **Outputs:** 
    - Relevant images: When a text query is given, the system returns a ranked list of images most relevant to the text.
    - Relevant text: When an image query is given, the system returns a ranked list of text descriptions or captions that best describe the image.
- **Tasks:**
    - Image-to-text retrieval: Given an image as input, retrieve text descriptions or captions that accurately describe its content.
    - Text-to-image retrieval: Given a text query, retrieve images that visually match the concepts and entities mentioned in the text.

One of the most popular models for Image-Text Retrieval is CLIP.
- **CLIP (Contrastive Language-Image Pretraining):** CLIP excels at image-text retrieval by leveraging a shared embedding space. It is pre-trained with a contrastive objective on large-scale image and text datasets, learning to map images and text into a common space. This shared space enables direct comparison between the two modalities, allowing for efficient and accurate retrieval: images and text are encoded into the shared embedding space, and the similarity between an image and a textual query is measured by the proximity of their respective embeddings. The model's versatility arises from its ability to grasp semantic relationships without task-specific fine-tuning, making it efficient for applications ranging from content-based image retrieval to interpreting natural language queries for images. You can use CLIP in HuggingFace for image-text retrieval as follows:
```python
from PIL import Image
import requests

from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(
    dim=1
)  # we can take the softmax to get the label probabilities
```
Learn more about how to use CLIP for Image-Text retrieval in HuggingFace [here](https://huggingface.co/docs/transformers/model_doc/clip#resources).

## Visual Grounding
![Visual Grounding](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/visual_grounding.jpg)
*Example of Input (Image + Text) and Output (Bounding Boxes).(a) Phrase Grounding (b) Expression Comprehension. [[1]](#pretraining-paper)*
- **Inputs:**
    - Image: A visual representation of a scene or object.
    - Natural language query: A text description or question that refers to a specific part of the image.

- **Output:** Bounding box or segmentation mask: A spatial region within the image that corresponds to the object or area described in the query. This is typically represented as coordinates or a highlighted region.
- **Task:** Locating the relevant object or region: The model must correctly identify the part of the image that matches the query. This involves understanding both the visual content of the image and the linguistic meaning of the query.

Now, let's see some of the popular Visual Grounding (object detection) models in HuggingFace.
- **OWL-ViT:** OWL-ViT (Vision Transformer for Open-World Localization) is a powerful object detection model built on a standard Vision Transformer architecture and trained on large-scale image-text pairs. It excels at "open-vocabulary" detection, meaning it can identify objects not present in its training data based on textual descriptions. By leveraging contrastive pre-training and fine-tuning, OWL-ViT achieves impressive performance in both zero-shot (text-guided) and one-shot (image-guided) detection tasks, making it a versatile tool for flexible search and identification in images. You can use OWL-ViT in HuggingFace as follows:
```python
import requests
from PIL import Image
import torch

from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)

i = 0  # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

# Print detected objects and rescaled box coordinates
for box, score, label in zip(boxes, scores, labels):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"
    )
```
- **Grounding DINO[[3]](#grounding-dino):** It combines the Transformer-based object detector (DINO) with "grounded pre-training" to create a state-of-the-art, zero-shot object detection model. This means it can identify objects even if they weren't in its training data, thanks to its ability to understand both images and human language inputs (like category names or descriptions). Its architecture combines text and image backbones, a feature enhancer, language-guided query selection, and a cross-modality decoder, achieving impressive results on benchmarks like COCO and LVIS. Essentially, Grounding DINO takes visual information, links it to textual descriptions, and then uses that understanding to pinpoint objects even within completely new categories.
You can try out the Grounding DINO model in the Google Colab [here](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb).

## Text-to-Image Generation
![Text-Image Generation](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/text_image_generation.png)
*Illustration of Auto-regressive and Diffusion Models for Text-Image Generation.[[1]](#pretraining-paper)*
- **Auto-regressive Models:** These models treat the task like translating text descriptions into sequences of image tokens, similar to language models generating sentences. Like puzzle pieces, these tokens, created by image tokenizers like VQ-VAE, represent basic image features. The model uses an encoder-decoder architecture: the encoder extracts information from the text prompt, and the decoder, guided by this information, predicts one image token at a time, gradually building the final image pixel by pixel. This approach allows for high control and detail, but faces challenges in handling long, complex prompts and can be slower than alternative methods like diffusion models. The generation process is shown in the above figure (a).

- **Stable Diffusion Models:** Stable Diffusion models use the "Latent Diffusion" technique, building images from noise by progressively denoising it, guided by a text prompt and a frozen CLIP text encoder. The light architecture, with a UNet backbone and CLIP encoder, allows for GPU-powered image generation, while operating in latent space reduces memory consumption. This setup empowers diverse artistic expression, translating textual inputs into photorealistic and imaginative visuals. The generation process is shown in the above figure (b).

Now, let's see how we can use text-to-image generation models in HuggingFace.

Install `diffusers` library:
```bash
pip install diffusers --upgrade
```

In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark package:
```bash
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.to("cuda")

prompt = "An astronaut riding a unicorn"

images = pipe(prompt=prompt).images[0]
```

To learn more about text-image generation models, you can refer to the HuggingFace [Diffusers Course](https://huggingface.co/docs/diffusers/training/overview).

Now you know some of the popular tasks and models involving image and text modalities. But you might be wondering how to train or fine-tune models for the above-mentioned tasks. So, let's have a glimpse at training vision-language models.

## Glimpse of Vision-Language Pretrained Models
![VLP Framework](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/vlp_framework.png)
*General framework for Transformer based vision-language models. [[1]](#pretraining-paper)*

Given an image-text pair, a VL model first extracts text and visual features via a text encoder and a vision encoder, respectively. The text and visual features are then fed into a multimodal fusion module to produce cross-modal representations, which are then optionally fed into a decoder before generating the final outputs. An illustration of this general framework is shown in the above figure. In many cases, there are no clear boundaries among the image/text backbones, the multimodal fusion module, and the decoder.

Congratulations! You made it to the end. Now on to the next section for more on Vision-Language Pretrained Models.

## References
1. [Vision-Language Pre-training: Basics, Recent Advances, and Future Trends](https://arxiv.org/abs/2210.09263)
2. [Document Collection Visual Question Answering](https://arxiv.org/abs/2104.14336)
3. [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499)

### A Multimodal World
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/a_multimodal_world.md

# A Multimodal World

Welcome to the chapter on the fundamentals of multimodality. This chapter builds the foundation for the later sections of the unit. We will explore:

- The notion of multimodality, and different sensory inputs humans use for efficient decision making.
- Why multimodality is important for making innovative applications and services through which we can interact and make lives easier.
- Multimodality in context to Deep Learning, data, tasks, and models.
- Related applications like multimodal emotion recognition and multimodal search.

So let's begin 🤗

## What is Multimodality? 📸📝🎵

A modality means a medium or a way in which something exists or is done. In our daily lives, we come across many scenarios where we have to make decisions and perform tasks. For this, we use our 5 sense organs (eyes to see, ears to hear, nose to smell, tongue to taste, and skin to touch). Based on the information from all sense organs, we assess our environment, perform tasks, and make decisions for our survival. Each of these 5 sense organs is a different modality through which information comes to us and thus the word multimodality or multimodal.

Think about this scenario for a moment: on a windy night, you hear an eerie sound while you are in your bed 👻😨. You feel a bit scared, as you are unaware of the source of the sound. You try to gather some courage and check your surroundings, but you are unable to figure it out 😱. Daringly, you turn on the lights, and you find out that it was just your window, which was half open; the wind blowing through it was making the sound in the first place 😒.

So what just happened here? Initially, you had a restricted understanding of the situation due to your limited knowledge of the environment. This limited knowledge came from relying only on your ears (the eerie sound) to make sense of things. But as soon as you turned on the lights in the room and looked around with your eyes (adding another sense organ), you had a better understanding of the whole situation. As we kept adding modalities, our understanding of the same situation became better and clearer, which suggests that adding more modalities lets them assist each other and improves the information content.
Even while taking this course and moving ahead, wouldn't you prefer cool infographics, accompanied by video content explaining minute concepts, instead of just plain textual content? 😉
Here you go:

![Multimodality Notion](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/multimodal_elephant.png)

_An infographic on multimodality and why it is important to capture the overall sense of data through different modalities. The infographic is multimodal as well (image + text)._

Many times, communication between two people gets really awkward in a purely textual mode, improves slightly when voices are involved, and improves greatly when you can also see body language and facial expressions. This has been studied in detail by the American psychologist Albert Mehrabian, who stated it as the 7-38-55 rule of communication. The rule states:
"In communication, 7% of the overall meaning is conveyed through the verbal mode (spoken words), 38% through voice and tone, and 55% through body language and facial expressions."

To be more general, in the context of AI, 7% of the meaning conveyed is through textual modality, 38% through audio modality and 55% through vision modality.
Within the context of deep learning, we refer to each modality as a way data arrives to a deep learning model for processing and prediction. The most commonly used modalities in deep learning are vision, audio, and text. Other modalities can also be considered for specific use cases, like LIDAR, EEG data, eye-tracking data, etc.

Unimodal models and datasets are based purely on a single modality. They have been studied for a long time, with many tasks and benchmarks, but they are limited in their capabilities. Relying on a single modality might not give us the complete picture, and combining more modalities increases the information content and reduces the chance of missing cues that might be present in them.
For the machines around us to be more intelligent, better at communicating with us, and capable of enhanced interpretation and reasoning, it is important to build applications and services around models and datasets that are multimodal in nature. Multimodality can give us a clearer and more accurate representation of the world around us, enabling us to develop applications that are closer to real-world scenarios.

**Common combinations of modalities and real life examples:**

- Vision + Text : Infographics, Memes, Articles, Blogs.
- Vision + Audio: A Skype call with your friend, dyadic conversations.
- Vision + Audio + Text: Watching YouTube videos or movies with captions, social media content in general is multimodal.
- Audio + Text: Voice notes, music files with lyrics.

## Multimodal Datasets

A dataset consisting of multiple modalities is a multimodal dataset. Let us look at some examples for the common modality combinations:

- Vision + Text: [Visual Storytelling Dataset](https://visionandlanguage.net/VIST/), [Visual Question Answering Dataset](https://visualqa.org/download.html), [LAION-5B Dataset](https://laion.ai/blog/laion-5b/).
- Vision + Audio: [VGG-Sound Dataset](https://www.robots.ox.ac.uk/~vgg/data/vggsound/), [RAVDESS Dataset](https://zenodo.org/records/1188976), [Audio-Visual Identity Database (AVID)](https://www.avid.wiki/Main_Page).
- Vision + Audio + Text: [RECOLA Database](https://diuf.unifr.ch/main/diva/recola/), [IEMOCAP Dataset](https://sail.usc.edu/iemocap/).

Now, let us see what kind of tasks can be performed using a multimodal dataset. There are many examples, but we will generally focus on tasks that contain both visual and textual elements. A multimodal dataset requires a model that is able to process data from multiple modalities. Such a model is called a multimodal model.

## Multimodal Tasks and Models

Each modality has its own tasks, for example vision downstream tasks such as image classification, image segmentation, and object detection, and we would use models specifically designed for these tasks. Tasks and models therefore go hand in hand. If a task involves two or more modalities, it can be termed a multimodal task. In terms of inputs and outputs, a multimodal task can generally be thought of as a single input/output arrangement with different modalities at the input and output ends.

Hugging Face supports a wide variety of multimodal tasks. Let us look into some of them.

**Some multimodal tasks supported by 🤗 and their variants:**

1. Vision + Text:

- [Visual Question Answering or VQA](https://huggingface.co/tasks/visual-question-answering): Aiding visually impaired persons, efficient image retrieval, video search, Video Question Answering, Document VQA.
- [Image to Text](https://huggingface.co/tasks/image-to-text): Image Captioning, Optical Character Recognition (OCR), Pix2Struct.
- [Text to Image](https://huggingface.co/tasks/text-to-image): Image Generation.
- [Text to Video](https://huggingface.co/tasks/text-to-video): Text-to-video editing, Text-to-video search, Video Translation, Text-driven Video Prediction.

2. Audio + Text:

- [Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition) (or Speech to Text): Virtual Speech Assistants, Caption Generation.
- [Text to Speech](https://huggingface.co/tasks/text-to-speech): Voice assistants, Announcement Systems.

💡An amazing use case of multimodal tasks is Multimodal Emotion Recognition (MER). The MER task involves recognizing emotion from two or more modalities, such as audio+text, text+vision, audio+vision, or vision+text+audio. As we discussed in the earlier example, MER is more effective than unimodal emotion recognition and gives clearer insight into the emotion recognition task. Check out more on MER in [this repository](https://github.com/EvelynFan/AWESOME-MER).

![Multimodal model flow](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multimodal_fusion_text_vision/Multimodal.jpg)

A multimodal model is a model that can perform multimodal tasks by processing data coming from multiple modalities at the same time. These models combine the uniqueness and strengths of different modalities into a complete representation of the data, enhancing performance on multiple tasks. Multimodal models are trained to integrate and process data from sources like images, videos, text, and audio.
The process of combining these modalities typically begins with multiple unimodal models. The outputs of these unimodal models (encoded data) are then combined by a fusion module using a fusion strategy, which can be early fusion, late fusion, or hybrid fusion. The task of the fusion module is to produce a joint representation of the encoded data from the unimodal models. Finally, a classification network takes the fused representation and makes predictions.
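
A minimal, hypothetical PyTorch sketch of such a late-fusion setup is shown below: two unimodal encoders produce embeddings that are concatenated by a simple fusion module and passed to a classification head. The layer sizes and number of classes are made-up placeholders, not taken from any particular model.

```python
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Toy late-fusion model: encode each modality separately, then fuse."""

    def __init__(self, image_dim=512, text_dim=300, hidden_dim=256, num_classes=7):
        super().__init__()
        # Unimodal encoders (stand-ins for a real vision encoder and text encoder)
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        # Fusion module: concatenate the unimodal embeddings and project them
        self.fusion = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        # Classification head operating on the fused representation
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, image_feats, text_feats):
        fused = torch.cat(
            [self.image_encoder(image_feats), self.text_encoder(text_feats)], dim=-1
        )
        return self.classifier(self.fusion(fused))


# Example forward pass with random features standing in for real encoder outputs
model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 300))
print(logits.shape)  # torch.Size([4, 7])
```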

A detailed section on multimodal tasks and models with a focus on Vision and Text, will be discussed in the next chapter.

## An application of multimodality: Multimodal Search 🔎📲💻

Internet search was long the one key advantage Google had, but after OpenAI introduced ChatGPT, Microsoft started powering up its Bing search engine to compete. Initially this was restricted to LLMs looking into large corpora of text data, but the world around us, mainly social media content, web articles, and online content in general, is largely multimodal. When we search for an image, the image pops up with corresponding text that describes it. Wouldn't it be super cool to have a powerful multimodal model that handles both vision and text at the same time? This could reshape the search landscape, and the core technology involved is multimodal learning. Many companies also have large databases that are multimodal and mostly unstructured in nature. Multimodal models can help such companies with internal search, interactive documentation (chatbots), and many similar use cases. This is another domain of enterprise AI, where we leverage AI for organizational intelligence.

Vision Language Models (VLMs) are models that can understand and process both vision and text modalities. The joint understanding of both modalities leads VLMs to perform various tasks efficiently, like Visual Question Answering, text-to-image search, etc. VLMs can thus serve as one of the best candidates for multimodal search. Overall, a VLM needs some way to map texts and images into a joint embedding space in which each text or image is represented as an embedding. We can perform various downstream tasks using these embeddings, including search. The idea of such a joint space is that image and text embeddings that are similar in meaning will lie close together, enabling us to search for images based on text (text-to-image search) or vice versa.
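
As a rough sketch of how such a joint embedding space supports text-to-image search, the snippet below encodes a small, hypothetical local image collection and a text query with CLIP, then ranks the images by cosine similarity. The image file names are placeholders; this illustrates the idea rather than a production search system.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical local image collection to search over
image_paths = ["beach.jpg", "mountain.jpg", "city.jpg"]
images = [Image.open(p) for p in image_paths]

# Embed the images and the text query in the shared embedding space
image_inputs = processor(images=images, return_tensors="pt")
text_inputs = processor(text=["a sunny beach with palm trees"], return_tensors="pt", padding=True)
with torch.no_grad():
    image_embeds = model.get_image_features(**image_inputs)
    text_embeds = model.get_text_features(**text_inputs)

# Cosine similarity: normalize the embeddings, then take the dot product
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
scores = (text_embeds @ image_embeds.T).squeeze(0)

best = scores.argmax().item()
print(f"Best match: {image_paths[best]} (score {scores[best]:.3f})")
```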

💡Meta released the first multimodal AI model to bind information from 6 different modalities: images and videos, audio, text, depth, thermal, and inertial measurement units (IMUs). Learn more about it [here](https://imagebind.metademolab.com/).

After going through the fundamentals of multimodality, let's now take a look at the different multimodal tasks and models available in 🤗 and their applications via cool demos and Spaces.

### Exploring Multimodal Text and Vision Models: Uniting Senses in AI
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/pre-intro.md

# Exploring Multimodal Text and Vision Models: Uniting Senses in AI

Welcome to the Multimodal Text and Vision Models unit! 🌐📚👁️

In the last unit, we learned about the Transformer architecture, which revolutionized Natural Language Processing but did not stop at the text modality.
As we have seen, it has begun to conquer the field of vision (including image and video), bringing with it a wide array of new research and applications.

In this unit, we'll focus on the data fusion possibilities that the cross-modal use of Transformers has enabled, along with the tasks and models that benefit from it.

## Exploring Multimodality 🔎🤔💭

Our adventure begins with understanding why blending text and images is crucial, exploring the history of multimodal models, and discovering how self-supervised learning unlocks the power of multimodality. The unit discusses different modalities with a focus on text and vision. In this unit we will encounter three main topics:

**1. A Multimodal World + Introduction to Vision Language Models**
These chapters serve as a foundation, enabling learners to understand the significance of multimodal data, its representation, and its diverse applications, laying the groundwork for the fusion of text and vision within AI models.

In this chapter, you will:

- Understand the nature of real-world multimodal data coming from various sensory inputs that are important for human decision-making.
- Explore practical applications of multimodality in robotics, search, visual reasoning, etc., showcasing their functionality and diverse applications.
- Learn about diverse multimodal tasks and models focusing on Image to Text, Text to Image, VQA, Document VQA, Captioning, Visual Reasoning etc.
- Conclude with an introduction on Vision Language Models and cool applications including multimodal chatbots.

**2. CLIP and Relatives**
Moving ahead, this chapter talks about the popular CLIP model and similar vision language models.
In this chapter you will:

- Dive deep into CLIP's magic, from theory to practical applications, and explore its variations.
- Discover relatives like Image-bind, BLIP, and others, along with their real-world implications and challenges.
- Explore the functionality of CLIP, its applications in search, zero-shot classification, and generation models like DALL-E.
- Understand contrastive and non-contrastive losses and explore the self-supervised learning techniques.

**3. Transfer Learning: Multimodal Text and Vision**
In the final chapter of the unit you will:

- Explore diverse multimodal model applications in specific tasks, including one-shot, few-shot, training from scratch, and transfer learning, setting the stage for an exploration of transfer learning's advantages and practical applications in Jupyter notebooks.
- Engage in detailed practical implementations within Jupyter notebooks, covering tasks such as CLIP fine-tuning, Visual Question Answering, Image-to-Text, Open-set object detection, and GPT-4V-like Assistant models, focusing on task specifics, datasets, fine-tuning methods, and inference analyses.
- Conclude by comparing previous sections, discussing benefits, challenges, and offering insights into potential future advancements in multimodal learning.

## Your Journey Ahead 🏃🏻‍♂️🏃🏻‍♀️🏃🏻

Get ready for a captivating experience! We'll explore the mechanisms behind multimodal models like CLIP, explore their applications, and journey through transfer learning for text and vision.

By the end of this unit, you'll have a solid understanding of multimodal tasks, hands-on experience with multimodal models and building cool applications based on them, and a feel for the evolving landscape of multimodal learning.

Join us as we navigate the fascinating domain where text and vision converge, unlocking the possibilities of AI understanding the world in a more human-like manner.

Let's begin 🚀🤗✨

### Introduction to Vision Language Models
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/vlm-intro.md

# Introduction to Vision Language Models

What will you learn from this chapter:
- A brief introduction to multimodality
- Introduction to Vision Language Models
- Various learning strategies
- Common datasets used for VLMs
- Downstream tasks and evaluation

## Our World is Multimodal 
Humans explore the world through diverse senses: sight, sound, touch, and scent. A complete grasp of our surroundings emerges by harmonizing insights from these varied modalities.
The term modality was originally introduced in mathematics to describe distinct peaks (modes) of a distribution; here we think of it in a poetic way: "With each modality, a unique part to play, Together they form our understanding array. A symphony of senses, a harmonious blend, In perception's dance, our world transcends." In pursuit of making AI capable of understanding the world, the field of machine learning seeks to develop models capable of processing and integrating data across multiple modalities. However, several challenges, including representation and alignment, must be addressed. Representation explores techniques to effectively summarize multimodal data, capturing the intricate connections among individual modality elements. Alignment focuses on identifying connections and interactions across all elements.
Also, it is crucial to acknowledge the inherent difficulties in handling multimodality:
- One modality may dominate others.
- Additional modalities can introduce noise.
- Full coverage over all modalities is not guaranteed.
- Different modalities can have complicated relationships.

Despite these challenges, the machine learning community has significantly progressed in developing these systems. This chapter delves into the fusion of vision and language, giving rise to Vision Language Models (VLMs). For a deeper understanding of multimodality, please refer to the preceding section of this Unit.

## Introduction 
Processing images to generate text, such as image captioning and visual question answering, has been studied for many years, with applications including autonomous driving, remote sensing, and more. We have also seen a shift from the traditional ML/DL setup of training from scratch to a new learning paradigm of pre-training, fine-tuning, and prediction, which has shown great benefit, since the traditional way may require collecting huge amounts of task-specific data.

First proposed in the [ULMFiT paper](https://aclanthology.org/P18-1031.pdf), this paradigm involves:
- Pre-training the model with extensive training data.
- Fine-tuning the pre-trained model with task-specific data.
- Utilizing the trained model for downstream tasks such as classification.

In VLMs, we employ this paradigm and extend it to incorporate visual inputs, yielding the desired results. For instance, in 2021 OpenAI introduced a groundbreaking paper called [CLIP](https://openai.com/research/clip), which significantly influenced the adoption of this approach. CLIP uses an image-text contrastive objective, learning by bringing paired images and texts close while pushing others far away in the embedding space. This pre-training method allows VLMs to capture rich vision-language correspondence knowledge, enabling zero-shot predictions by matching the embeddings of any given images and texts. Notably, CLIP outperformed ImageNet-trained models on out-of-distribution tasks. Further exploration of CLIP and similar models is covered in the next section of this chapter!

## Mechanism
To enable the functionality of Vision Language Models (VLMs), a meaningful combination of both text and images is essential for joint learning. How can we do that? One simple/common way is given image-text pairs:
- Extract image and text features using text and image encoders. For images it can be **CNN** or **transformer** based architectures.
- Learn the vision-language correlation with certain pre-training objectives.
    - The pre-training objective can be divided into three groups:
        - **contrastive** objectives train VLMs to learn discriminative representations by pulling paired samples close and pushing others far apart in the embedding space.
        - **generative** objectives to make VLMs learn semantic features by training networks to generate image/text data.
        - **alignment** objectives align the image-text pair via global image-text matching or local region-word matching on embedding space.
- With the learned vision-language correlation, VLMs can be evaluated on unseen data in a zero-shot manner by matching the embeddings of any given images and texts.

![Basic Mechanism of CLIP model](https://huggingface.co/datasets/hf-vision/course-assets/resolve/99ac107ade7fb89aae792f3655341528e64e1fbb/clip_paper.png)

Existing research predominantly focuses on enhancing VLMs from three key perspectives:
- collecting large-scale informative image-text data. 
- designing effective models for learning from big data. 
- designing new pre-training methods/objective for learning effective vision-language correlation.

VLM pre-training aims to pre-train a VLM to learn image-text correlation, targeting effective zero-shot predictions on visual recognition tasks which can be segmentation, classification, etc. 

## Strategies
We can [group](https://lilianweng.github.io/posts/2022-06-09-vlm/#no-training) VLMs based on how the two modalities are combined during learning.

- Translating images into embedding features that can be jointly trained with token embeddings.
    - In this method we fuse visual information into language models by treating images as normal text tokens and train the model on a sequence of joint representations of both text and images. Precisely, images are divided into multiple smaller patches and each patch is treated as one "token" in the input sequence. e.g. [VisualBERT](https://arxiv.org/abs/1908.03557), [SimVLM](https://arxiv.org/abs/2108.10904).

- Learning good image embeddings that can work as a prefix for a frozen, pre-trained language model.
    - In this method we don't change the language model parameters when adapting it to handle visual signals. Instead, we learn an embedding space for images that is compatible with the language model's, e.g. [Frozen](https://arxiv.org/abs/2106.13884), [ClipCap](https://arxiv.org/abs/2111.09734). A minimal sketch of this idea is shown after this list.

- Using a specially designed cross-attention mechanism to fuse visual information into layers of the language model.
    - These methods employ a tailored cross-attention mechanism inside the language model layers to balance text generation against the visual input, e.g. [VisualGPT](https://arxiv.org/abs/2102.10407).
- Combine vision and language models without any training.
    - More recent methods like [MAGiC](https://arxiv.org/abs/2205.02655) perform guided decoding to sample the next token, without any fine-tuning.
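
Below is a minimal, hypothetical PyTorch sketch of the second strategy above, learning image embeddings that act as a prefix for a frozen language model, in the spirit of Frozen/ClipCap. The dimensions and prefix length are made-up placeholders; in a real system the resulting prefix embeddings would be prepended to the text token embeddings of the frozen language model.

```python
import torch
import torch.nn as nn


class PrefixMapper(nn.Module):
    """Toy prefix adapter: project an image embedding into a short sequence of
    embeddings that a frozen language model can consume as a prefix."""

    def __init__(self, image_dim=512, lm_dim=768, prefix_len=4):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.proj = nn.Linear(image_dim, prefix_len * lm_dim)

    def forward(self, image_embedding):
        # (batch, image_dim) -> (batch, prefix_len, lm_dim)
        prefix = self.proj(image_embedding)
        return prefix.view(-1, self.prefix_len, self.lm_dim)


# Only the mapper is trained; the language model itself stays frozen.
mapper = PrefixMapper()
image_embedding = torch.randn(2, 512)        # e.g. output of a frozen image encoder
prefix_embeddings = mapper(image_embedding)  # to be prepended to text token embeddings
print(prefix_embeddings.shape)               # torch.Size([2, 4, 768])
```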

## Common VLM Datasets
These are some of the common datasets used for VLMs that are available as [HF datasets](https://huggingface.co/docs/datasets/index).

**[MSCOCO](https://huggingface.co/datasets/HuggingFaceM4/COCO)** contains 328K images, each paired with 5 independent captions.

**[NoCaps](https://huggingface.co/datasets/HuggingFaceM4/NoCaps)** is designed to measure generalization to unseen classes and concepts, where the in-domain subset contains images portraying only COCO classes, the near-domain subset contains both COCO and novel classes, and the out-of-domain subset consists of only novel classes.

**[Conceptual Captions](https://huggingface.co/datasets/conceptual_captions)** contains 3 million pairs of images and captions, mined from the web and post-processed.

**[ALIGN](https://huggingface.co/blog/vit-align)** is a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used for the Conceptual Captions dataset.

**[LAION](https://huggingface.co/collections/laion/openclip-datacomp-64fcac9eb961d0d12cb30bc3)** dataset consists of image-text-pairs. The image-text-pairs have been extracted from the Common Crawl web data dump and are from random web pages crawled between 2014 and 2021. This dataset was used in training Stable-Diffusion Models.

## Downstream Tasks and Evaluation
VLMs are getting good at many downstream tasks, including image classification, object detection, semantic segmentation, image-text retrieval, and action recognition, often surpassing traditionally trained models.

The setups generally used for evaluating VLMs are zero-shot prediction and linear probing. Zero-shot prediction is the most common way to evaluate VLMs: we directly apply pre-trained VLMs to downstream tasks without any task-specific fine-tuning.

In linear probing, we freeze the pre-trained VLM and train a linear classifier on the VLM-encoded embeddings to measure the quality of its representations. How do we evaluate these models? We can check how they perform on benchmark datasets, e.g. given an image and a question, the task is to answer the question correctly! We can also check how well these models reason when answering questions about visual data. For this, the most commonly used dataset is [CLEVR](https://cs.stanford.edu/people/jcjohns/clevr/).
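
As a rough sketch of linear probing, suppose we have already computed image embeddings with a frozen VLM encoder; the tensors below are random placeholders standing in for those embeddings and their labels, and only the linear classifier on top is trained:

```python
import torch
import torch.nn as nn

# Placeholder: pre-computed embeddings from a frozen VLM image encoder
# (e.g. 512-dimensional CLIP image features), plus integer class labels.
embeddings = torch.randn(1000, 512)
labels = torch.randint(0, 10, (1000,))

# The probe is a single linear layer; the VLM itself is never updated.
probe = nn.Linear(512, 10)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(probe(embeddings), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```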

Standard datasets like MSCOCO might be relatively easy for a model to learn due to their distribution, and so may not adequately demonstrate a model's capacity to generalize to more challenging or diverse data. In response, datasets like [Hateful Memes](https://arxiv.org/abs/2005.04790) were created to probe a model's capability to the extreme by adding difficult examples ("benign confounders") to the dataset; the results showed that multimodal pre-training alone was not enough, and models had a huge gap to human performance.

![Winoground Idea](https://huggingface.co/datasets/hf-vision/course-assets/resolve/99ac107ade7fb89aae792f3655341528e64e1fbb/winogrand_paper.png)

Another such dataset, **Winoground** (illustrated in the figure above), was designed to figure out how good models like CLIP actually are. It challenges us to consider whether models, despite their impressive results, truly grasp compositional relationships like humans do, or whether they are merely generalizing from data. For example, earlier versions of Stable Diffusion and other text-to-image models were not able to reliably count fingers. So, there's still a lot of amazing work to be done to get VLMs to the next stage!

## What's Next?
The community is moving fast, and we can already see a lot of amazing work, like [FLAVA](https://arxiv.org/abs/2112.04482), which tries to build a single "foundational" model for all the target modalities at once. This is one possible scenario for the future: modality-agnostic foundation models that can read and generate many modalities! But maybe we will also see other alternatives developing; one thing we can say for sure is that there is an interesting future ahead.

To keep up with these recent advances, feel free to follow Hugging Face's [Transformers library](https://huggingface.co/docs/transformers/index) and [Diffusers library](https://huggingface.co/docs/diffusers/index), where we try to add recent advances and models as fast as possible! If you feel we are missing something important, you can also open an issue for these libraries and contribute code yourself.

### Transfer Learning of Multimodal Models
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/transfer_learning.md

# Transfer Learning of Multimodal Models

In the preceding sections, we've delved into the fundamental concepts of multimodal models such as CLIP and its related counterparts. In this chapter, we will try to find out how you can use different types of multimodal models for your tasks.

There are several approaches to how you can adapt multimodal models to your use case:

1. **Zero/few-shot learning**. Zero/few-shot learning involves leveraging large pretrained models capable of solving problems not present in their training data. These approaches can be useful when there is little labeled data for a task (5-10 examples) or none at all. [Unit 11](https://huggingface.co/learn/computer-vision-course/unit11/1) will delve deeper into this topic.

2. **Training the model from scratch**. When pre-trained model weights are unavailable or the model's dataset substantially differs from your own, this method becomes necessary. Here, we initialize model weights randomly (or via more sophisticated methods like [He initialization](https://arxiv.org/abs/1502.01852)) and proceed with the usual training. However, this approach demands substantial amounts of training data.

3. **Transfer learning**. Transfer learning, unlike training from scratch, uses the weights of the pretrained model as initial weights.

This chapter primarily focuses on the transfer learning aspect within multimodal models. It will recap what transfer learning entails, elucidate its advantages, and provide practical examples illustrating how you can apply transfer learning to your tasks!

## Transfer learning

More formally, transfer learning is the set of machine learning techniques in which knowledge, representations, or patterns obtained from solving one problem are reused to solve a different but related problem.

In the context of deep learning, transfer learning means that when training a model for a particular task, we use the weights of another model as the initial weights. The pretrained model has typically been trained on a huge amount of data and has useful knowledge about the nature of and relationships in that data. This knowledge is embedded in the weights of the model, and therefore if we use them as initial weights, we transfer the knowledge embedded in the pretrained model into the model we are training.
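
As a small illustration (using torchvision's ResNet-18 rather than a multimodal model, purely to keep the sketch short and self-contained), transfer learning in code usually amounts to loading pretrained weights, optionally freezing the backbone, and replacing the task-specific head:

```python
import torch.nn as nn
from torchvision import models

# Load a model with weights pretrained on ImageNet
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the pretrained backbone so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one matching our task (e.g. 5 classes)
model.fc = nn.Linear(model.fc.in_features, 5)

# From here, training proceeds as usual; the backbone weights carry over
# the knowledge learned during pre-training.
```

The same pattern applies to multimodal models: load the pretrained encoders, reuse most of their weights, and adapt only the parts needed for your task.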

    

This approach has several advantages:

- **Resource Efficiency:** Since the pretrained model was trained on a huge amount of data over a long period, transfer learning requires much less computing resources for model convergence.

- **Reducing the size of labeled data:** For the same reason, less data is required to achieve decent quality on the test sample.

- **Knowledge Transfer:** When fine-tuning to the new task, the model capitalizes on the pre-existing knowledge encoded within the pre-trained model's weights. This integration of prior knowledge often leads to enhanced performance on the new task.

However, despite its advantages, transfer learning has some challenges that should also be taken into account:

- **Domain Shift:** Adapting knowledge from the source domain to the target domain can be challenging if the data distributions differ substantially.

- **Catastrophic forgetting:** During the fine-tuning process, the model adjusts its parameters to adapt to the new task, often causing it to lose previously learned knowledge or representations related to the initial task.

## Transfer Learning Applications

We'll explore practical applications of transfer learning across various tasks. The table below provides a description of the tasks that can be solved using multimodal models, as well as examples of how you can fine-tune them on your data.

| Task        | Description                                                      | Model                                             |
| ----------- | ---------------------------------------------------------------- | ------------------------------------------------- |
| [Fine-tune CLIP](https://colab.research.google.com/github/fariddinar/computer-vision-course/blob/main/notebooks/Unit%204%20-%20Multimodal%20Models/Clip_finetune.ipynb)| Fine-tuning CLIP on a custom dataset | [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) |
| [VQA](https://huggingface.co/docs/transformers/main/en/tasks/visual_question_answering#train-the-model) | Answering a question in natural language based on an image | [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) |
| [Image-to-Text](https://huggingface.co/docs/transformers/main/en/tasks/image_captioning) | Describing an image in natural language | [microsoft/git-base](https://huggingface.co/microsoft/git-base) |
| [Open-set object detection](https://docs.ultralytics.com/models/yolo-world/) | Detect objects by natural language input |  [YOLO-World](https://huggingface.co/papers/2401.17270) |
| [Assistant (GPT-4V like)](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#train) | Instruction tuning in the multimodal field | [LLaVA](https://huggingface.co/docs/transformers/model_doc/llava) |

### Multimodal Text Generation (BLIP)
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/blip.md

# Multimodal Text Generation (BLIP)
## Introduction

After the release of CLIP, the AI community recognized the immense potential of larger datasets and contrastive learning in deep learning. One significant development in this area is [BLIP (Bootstrapping Language-Image Pre-training)](https://arxiv.org/abs/2201.12086), which extends the capabilities of multimodal models to include text generation.

## CapFilt: Caption and Filtering
As multimodal models require large datasets, these often have to be scraped from the internet as image and alternative-text (alt-text) pairs. However, the alt-texts often do not accurately describe the visual content of the images, making them a noisy signal that is suboptimal for learning vision-language alignment. Hence, the BLIP paper introduced a captioning and filtering mechanism (CapFilt). It consists of a deep learning model that filters out noisy pairs and another model that creates captions for images. Both of these models are first fine-tuned on a human-annotated dataset. The authors found that cleaning the dataset using CapFilt produced superior performance compared to using the web dataset alone. Further details on this process can be found in the [BLIP paper](https://arxiv.org/abs/2201.12086).

## BLIP Architecture and Training
The BLIP architecture combines a vision encoder and a Multimodal Mixture of Encoder-Decoder (MED), enabling versatile processing of both visual and textual data. It features the following components (in the BLIP architecture diagram, blocks with the same color share parameters):

- **Vision Transformer (ViT):** This is a plain vision transformer featuring self-attention, feed-forward blocks, and a [CLS] token for embedding representation.
- **Unimodal Text Encoder:** Resembling BERT's architecture, it uses a [CLS] token for embedding and employs contrastive loss like CLIP, for aligning image and text representations.
- **Image-Grounded Text Encoder:** This substitutes the [CLS] token with an [Encode] token. Cross-attention layers enable the integration of image and text embeddings, creating a multimodal representation. It employs a linear layer to assess the congruence of image-text pairs.
- **Image-Grounded Text Decoder:** Replacing the bidirectional self-attention with causal self-attention, this decoder is trained via cross-entropy loss in an autoregressive manner for tasks like caption generation or answering visual questions.

BLIP's architecture integrates a vision encoder with a multimodal mixture of encoder-decoder components, enabling advanced text and image processing. This design allows it to adeptly handle diverse tasks, from aligning image-text pairs to generating captions and answering visual questions.

## Example Use Case: BLIP-2
Following BLIP's success, its creator Salesforce introduced BLIP-2, an enhanced iteration. BLIP-2's advancements and capabilities are detailed in the [BLIP-2 paper](https://arxiv.org/abs/2301.12597) and the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/blip-2). Here, we utilize BLIP-2 for visual question answering.

```python
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

prompt = "Question: How many remotes are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    device, torch.float16
)
outputs = model.generate(**inputs)
text = processor.tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text)
```
This code snippet illustrates the application of BLIP-2 for visual question answering. Feel free to experiment with more complex queries or explore this functionality further.

## Conclusion

BLIP marks a significant milestone in multimodal text generation, leveraging CLIP's strengths to create a robust model. It underscores the importance of dataset quality over quantity, contributing to the advancement of multimodal learning. For more details, refer to the [BLIP paper](https://arxiv.org/abs/2201.12086), [BLIP-2 paper](https://arxiv.org/abs/2301.12597), and the Hugging Face documentation for [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) and [BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2).

### Losses
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/losses.md

# Losses

## Introduction

Before diving into the different losses used to train models like CLIP, it is important to have a clear understanding of what contrastive learning is. Contrastive Learning is an unsupervised deep learning method aimed at representation learning. Its objective is to develop a data representation where similar items are positioned closely in the representation space and dissimilar items are distinctly separated.

In the image below, we have an example where we want to keep the representations of dogs close to other dogs, but also far from cats.
![image info](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/contrastive_learning.png)

## Training objectives

### Contrastive Loss

Contrastive loss is one of the first training objectives used for contrastive learning. It takes a pair of samples as input that can be similar or dissimilar, and the objective is to map similar samples close in the embedding space and push dissimilar samples apart.

Technically speaking, imagine that we have a list of input samples \\(x_n\\) from multiple classes. We want a function where examples from the same class have their embeddings close in the embedding space, and examples from different classes are far apart. Translating this to a mathematical equation, what we have is:

$$L = \mathbb{1}[y_i = y_j]||x_i - x_j||^2 + \mathbb{1}[y_i \neq y_j]max(0, \epsilon - ||x_i - x_j||^2)$$

Explaining in simple terms:

- If the samples are similar \\((y_i = y_j)\\), then we minimize the term \\(||x_i - x_j||^2\\), their squared Euclidean distance, i.e., we want to make them closer;
- If the samples are dissimilar \\((y_i \neq y_j)\\), then we minimize the term \\(\max(0, \epsilon - ||x_i - x_j||^2)\\), which is equivalent to pushing them apart until their squared distance reaches the margin \\(\epsilon\\), i.e., we want to make them distant from each other.
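
Below is a minimal PyTorch sketch of this pairwise contrastive loss, assuming `x_i` and `x_j` are batches of embedding vectors and `same_class` is a binary indicator of whether each pair is similar; the names and margin value are illustrative:

```python
import torch


def contrastive_loss(x_i, x_j, same_class, margin=1.0):
    """Pairwise contrastive loss: pull similar pairs together and push
    dissimilar pairs apart until the margin (epsilon) is reached."""
    squared_dist = torch.sum((x_i - x_j) ** 2, dim=-1)
    positive_term = same_class.float() * squared_dist
    negative_term = (1 - same_class.float()) * torch.clamp(margin - squared_dist, min=0)
    return (positive_term + negative_term).mean()


# Toy usage: a batch of 4 embedding pairs with binary similarity labels
x_i, x_j = torch.randn(4, 128), torch.randn(4, 128)
same_class = torch.tensor([1, 0, 1, 0])
print(contrastive_loss(x_i, x_j, same_class))
```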

## References

- [An Introduction to Contrastive Learning](https://www.baeldung.com/cs/contrastive-learning)
- [Contrastive Representation Learning](https://lilianweng.github.io/posts/2021-05-31-contrastive/)

### CLIP and Relatives
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/Introduction.md

# CLIP and Relatives

So far we have learned about the fundamentals of multimodality, with a special spotlight on Vision Language Models. This chapter provides a short overview of CLIP and similar models, highlighting their unique features and applicability to various machine learning tasks.
It sets the stage for a high-level exploration of key multimodal models that have emerged before and after CLIP, showcasing their significant contributions to the advancement of multimodal AI.

## Pre-CLIP

In this part, we explore the innovative attempts in multimodal AI before CLIP.
The focus is on influential papers that used deep learning to make significant strides in the field:

1. **"Multimodal Deep Learning" by Ngiam et al. (2011):** This paper demonstrated the use of deep learning for multimodal inputs, emphasizing the potential of neural networks in integrating different data types. It laid the groundwork for future innovations in multimodal AI.

   - [Multimodal Deep Learning](https://people.csail.mit.edu/khosla/papers/icml2011_ngiam.pdf)

2. **"Deep Visual-Semantic Alignments for Generating Image Descriptions" by Karpathy and Fei-Fei (2015):** This study presented a method for aligning textual data with specific image regions, enhancing the interpretability of multimodal systems and advancing the understanding of complex visual-textual relationships.

   - [Deep Visual-Semantic Alignments for Generating Image Descriptions](https://cs.stanford.edu/people/karpathy/cvpr2015.pdf)

3. **"Show and Tell: A Neural Image Caption Generator" by Vinyals et al. (2015):** This paper marked a significant step in practical multimodal AI by showing how CNNs and RNNs could be combined to transform visual information into descriptive language.
   - [Show and Tell: A Neural Image Caption Generator](https://arxiv.org/abs/1411.4555)

## Post-CLIP

The emergence of CLIP brought new dimensions to multimodal models, as illustrated by the following developments:

1. **CLIP:** OpenAI's CLIP was a game-changer, learning from a vast array of internet text-image pairs and enabling zero-shot learning, contrasting with earlier models.

   - [CLIP](https://openai.com/blog/clip/)

2. **GroupViT:** Innovating in segmentation and semantic understanding, GroupViT combined these aspects with language, showing advanced integration of language and vision.

   - [GroupViT](https://arxiv.org/abs/2202.11094)

3. **BLIP:** BLIP introduced bidirectional learning between vision and language, pushing the boundaries for generating text from visual inputs.

   - [BLIP](https://arxiv.org/abs/2201.12086)

4. **OWL-VIT:** Focusing on object-centric representations, OWL-VIT advanced the understanding of objects within images in context with text.
   - [OWL-VIT](https://arxiv.org/abs/2205.06230)

## Conclusion

Hopefully, this section has provided a concise overview of pivotal works in multimodal AI before and after CLIP.
These developments highlight the evolving methods of processing multimodal data and their implications for AI applications.

The upcoming sections will delve into the "Losses" aspect, focusing on various loss functions and self-supervised learning crucial for training multimodal models.
The "Models" section will provide a deeper understanding of CLIP and its variants, exploring their designs and functionalities.
Finally, the "Practical Notebooks" section will offer hands-on experience, addressing challenges like data bias and applying these models in tasks such as image search engines and visual question answering systems.
These sections aim to deepen your knowledge and practical skills in the multifaceted world of multimodal AI.

### Contrastive Language-Image Pre-training (CLIP)
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/clip.md

# Contrastive Language-Image Pre-training (CLIP)

## Introduction

CLIP is a neural network adept at grasping visual concepts through natural language supervision. It operates by concurrently training a text encoder and an image encoder, focusing on a pretraining task that involves matching captions with corresponding images. This architecture allows CLIP to adapt to a variety of visual classification benchmarks seamlessly. It does so by simply receiving the names of the visual categories to be recognized, demonstrating "zero-shot" learning capabilities akin to those observed in GPT-2 and GPT-3 models.

## Contrastive pre-training

Given a batch of image-text pairs, CLIP computes the dense cosine similarity matrix between all possible (image, text) candidates within this batch. The core idea is to maximize the similarity between the correct pairs (shown in blue in the figure below) and minimize the similarity for incorrect pairs (shown in grey). To do this, the authors optimize a symmetric cross-entropy loss over these similarity scores.

![CLIP contrastive pre-training](https://images.openai.com/blob/fbc4f633-9ad4-4dc2-bd94-0b6f1feee22f/overview-a.svg)
_Image taken from OpenAI_

Explaining in simple terms, we want to make the similarity between the image and its corresponding caption as high as we can, while keeping the similarity between the image and the other captions small. We apply the same logic to the caption, maximizing its similarity with its corresponding image and minimizing it with respect to all other images.
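
Below is a hedged, simplified PyTorch sketch of this symmetric objective, loosely following the pseudocode in the CLIP paper; the batch size, embedding dimension, and temperature are placeholder values:

```python
import torch
import torch.nn.functional as F

# Placeholder batch of already-encoded and projected embeddings
image_embeds = F.normalize(torch.randn(8, 512), dim=-1)
text_embeds = F.normalize(torch.randn(8, 512), dim=-1)
temperature = 0.07

# Cosine similarity matrix between every image and every text in the batch
logits = image_embeds @ text_embeds.T / temperature

# The correct pair for each image (row) and each text (column) lies on the diagonal
targets = torch.arange(logits.shape[0])
loss_image_to_text = F.cross_entropy(logits, targets)
loss_text_to_image = F.cross_entropy(logits.T, targets)
loss = (loss_image_to_text + loss_text_to_image) / 2
print(loss)
```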

## Text Encoder and Image Encoder

CLIP's design features independent encoders for images and text, allowing flexibility in their choice. Users can switch the standard image encoder, like a Vision Transformer, for alternatives like ResNet, or opt for different text encoders, enhancing adaptability and experimentation. Of course, if you switch one of the encoders, you will need to train your model again, as your embedding distribution will be different.

## Use cases
CLIP can be leveraged for a variety of applications. Here are some notable use cases:

- Zero-shot image classification;
- Similarity search;
- Diffusion models conditioning.

## Usage

For practical applications, one typically uses an image, and pre-defined classes as input. The provided Python example demonstrates how to use the transformers library for running CLIP. In this example, we want to zero-shot classify the image below between `dog` or `cat`.

![A photo of cats](http://images.cocodataset.org/val2017/000000039769.jpg)

```python
from PIL import Image
import requests

from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
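# Inspect the predicted probability for each candidate caption
print(probs)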
```

After executing this code, we got the following probabilities:
- "a photo of a cat": 99.49%
- "a photo of a dog": 0.51%

## Limitations

Despite CLIP's proficiency in zero-shot classification, it is unlikely to outperform a specialized, fine-tuned model. Moreover, its generalization capabilities are somewhat limited, particularly in scenarios involving data or examples not encountered during training.
The paper also shows that CLIP's effectiveness and biases are impacted by the choice of categories, as demonstrated in tests using the Fairface dataset. Notable disparities were found in gender and racial classifications, with gender accuracy over 96% and racial accuracy around 93%.

## Conclusion

In conclusion, the CLIP model from OpenAI has revolutionized the multimodal field. What sets CLIP apart is its proficiency in zero-shot learning, allowing it to classify images into categories it wasn't explicitly trained on. This remarkable ability to generalize comes from its innovative training method, where it learns to match images with text captions.

## References

- [CLIP paper](https://arxiv.org/abs/2103.00020)
- [CLIP by Lilian Weng](https://lilianweng.github.io/posts/2021-05-31-contrastive/#clip)

### Multimodal Object Detection (OWL-ViT)
https://huggingface.co/learn/computer-vision-course/unit4/multimodal-models/clip-and-relatives/owl_vit.md

# Multimodal Object Detection (OWL-ViT)
### Introduction

Object detection, a critical task in computer vision, has seen significant advancements with models like YOLO ([original paper](https://arxiv.org/abs/1506.02640), [latest code version](https://github.com/ultralytics/ultralytics)). However, traditional models like YOLO are limited to detecting objects from the classes in their training datasets. To address this, the AI community has shifted towards developing models that can identify a wider range of objects, leading to the creation of models akin to CLIP but for object detection.

### OWL-ViT: Enhancements and Capabilities
OWL-ViT represents a leap forward in open-vocabulary object detection. It starts with a training stage similar to CLIP, focusing on a vision and language encoder using a contrastive loss. This foundational stage enables the model to learn a shared representation space for both visual and textual data.

#### Fine-tuning for Object Detection
The innovation in OWL-ViT comes during its fine-tuning stage for object detection. Here, instead of the token pooling and final projection layer used in CLIP, OWL-ViT employs a linear projection of each output token to obtain per-object image embeddings. These embeddings are then used for classification, while box coordinates are derived from token representations through a small MLP. This approach allows OWL-ViT to detect objects and their spatial locations within images, a significant advancement over traditional object detection models.

Below is a diagram of the Pre-training and Fine-tuning stages of OWL-ViT:

![OWL-ViT Pre-training and Fine-tuning](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/owlvit_architecture.jpg)

#### Open-Vocabulary Detection
After fine-tuning, OWL-ViT excels in open-vocabulary object detection. It can identify objects not explicitly present in the training dataset, thanks to the shared embedding space of the vision and text encoders. This capability makes it possible to use both images and textual queries for object detection, enhancing its versatility.

#### Example Use Case
For practical applications, one typically uses text as the query and the image as context. The provided Python example demonstrates how to use the transformers library for running inference with OWL-ViT. 

```python
import requests
from PIL import Image, ImageDraw
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog", "remote control", "cat tail"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

target_sizes = torch.Tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1
)
i = 0  # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

# Create a draw object
draw = ImageDraw.Draw(image)

# Draw each bounding box
for box, score, label in zip(boxes, scores, labels):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}"
    )
    # Draw the bounding box on the image
    draw.rectangle(box, outline="red")

# Display the image
image
```
The image should look like this:
![OWL-ViT Example](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/OWL_ViT_example.jpg)
This example shows a simple use case of OWL-ViT, where we specify objects likely present in common detection datasets, like "cat", alongside more unusual queries, like "cat tail", which most likely aren't in any object detection dataset. This demonstrates the power of OWL-ViT in open-vocabulary object detection.

Feel free to use the code to try more complex examples.

### Conclusion

OWL-ViT's approach to object detection represents a notable shift in how AI models understand and interact with the visual world. By integrating language understanding with visual perception, it pushes the boundaries of object detection, enabling more accurate and versatile models capable of identifying a broader range of objects. This evolution in model capabilities is crucial for applications requiring nuanced understanding of visual scenes, particularly in dynamic, real-world environments.

For more in-depth information and technical details, refer to the [OWL-ViT paper](https://arxiv.org/abs/2205.06230). You can also find information about the [OWLv2 model](https://huggingface.co/docs/transformers/model_doc/owlv2) in the Hugging Face documentation.

### Feature Matching
https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/feature-matching.md

# Feature Matching

How can we match detected features from one image to another? Feature matching involves comparing key attributes in different images to find similarities. Feature matching is useful in many computer vision applications, including scene understanding, image stitching, object tracking, and pattern recognition.

## Brute-Force Search

Imagine you have a giant box of puzzle pieces, and you're trying to find a specific piece that fits into your puzzle. This is similar to searching for matching features in images. Instead of having any special strategy, you decide to check every piece, one by one until you find the right one. This straightforward method is a brute-force search. The advantage of brute force is its simplicity. You don't need any special tricks – just patience. However, it can be time-consuming, especially if there are a lot of pieces to check. In the context of feature matching, this brute force approach is akin to comparing every pixel in one image to every pixel in another to see if they match. It's exhaustive and it might take a lot of time, especially for large images.

Now that we have an intuitive idea of how brute-force matches are found, let's dive into the algorithms. We are going to use the descriptors that we learned about in the previous chapter to find the matching features in two images.

First install and load libraries.

```bash
!pip install opencv-python
```

```python
import cv2
import numpy as np
```
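
The examples below assume two images, `img1` and `img2`, have already been loaded, for instance like this (the file names are placeholders for your own images):

```python
# Hypothetical image paths; replace them with your own images
img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)
```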

**Brute Force with SIFT**

Let's start by initializing the SIFT detector.

```python
sift = cv2.SIFT_create()
```

Find the keypoints and descriptors with SIFT.

```python
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
```

Find matches using k nearest neighbors.

```python
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
```

Apply the ratio test to keep only the best matches.

```python
good = []
for m, n in matches:
    # Lowe's ratio test: keep a match only if it is clearly better than
    # the second-best candidate (0.75 is the commonly used threshold)
    if m.distance < 0.75 * n.distance:
        good.append([m])
```

**Brute Force with ORB (binary) descriptors**

Initialize the ORB descriptor.

```python
orb = cv2.ORB_create()
```

Find keypoints and descriptors.

```python
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
```

Because ORB is a binary descriptor, we find matches using [Hamming Distance](https://www.geeksforgeeks.org/hamming-distance-two-strings/),
which is a measure of the difference between two strings of equal length.
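
For intuition, here is a tiny, illustrative example of the Hamming distance between two short binary strings (separate from the OpenCV pipeline below):

```python
import numpy as np

a = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
b = np.array([1, 1, 1, 0, 0, 1, 0, 1], dtype=np.uint8)

# Hamming distance = number of positions where the two bit strings differ
print(np.count_nonzero(a != b))  # 3
```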

```python
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
```

We will now find the matches.

```python
matches = bf.match(des1, des2)
```

We can sort them in the order of their distance like the following.

```python
matches = sorted(matches, key=lambda x: x.distance)
```

Draw the first n matches.

```python
n = 30  # number of matches to draw; choose any value you like
img3 = cv2.drawMatches(
    img1,
    kp1,
    img2,
    kp2,
    matches[:n],
    None,
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS,
)
```

**Fast Library for Approximate Nearest Neighbors (FLANN)**

FLANN was proposed in [Fast Approximate Nearest Neighbors With Automatic Algorithm Configuration](https://www.cs.ubc.ca/research/flann/uploads/FLANN/flann_visapp09.pdf) by Muja and Lowe. To explain FLANN, we will continue with our puzzle solving example. Visualize a giant puzzle with hundreds of pieces scattered around. Your goal is to organize these pieces based on how well they fit together. Instead of randomly trying to match pieces,
FLANN uses some clever tricks to quickly figure out which pieces are most likely to go together. Instead of trying every piece against every other piece, FLANN streamlines the process by finding pieces that are approximately similar. This means it can make educated guesses about which pieces might fit well together, even if they're not an exact match. Under the hood, FLANN uses something called k-D trees. Think of it as organizing the puzzle pieces in a special way. Instead of checking every piece against every other piece, FLANN arranges them in a tree-like structure that makes finding matches faster. In each node of the k-D tree, FLANN puts pieces with similar features together. It's like sorting puzzle pieces with similar shapes or colors into piles. This way, when you're looking for a match, you can quickly check the pile that's most likely to have similar pieces. Let's say you're looking for a "sky" piece. Instead of searching through all the pieces, FLANN guides you to the right spot in the k-D tree where the sky-colored pieces are sorted. FLANN also adjusts its strategy based on the features of the puzzle pieces. If you have a puzzle with lots of colors, it will focus on color features. Alternatively, if it's a puzzle with intricate shapes, it pays attention to those shapes. By balancing speed and accuracy when finding matching features, FLANN substantially improves query time.

First, we create a dictionary to specify the algorithm we will use, for SIFT or SURF it looks like the following.

```python
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
```

For ORB, we will use the parameters suggested in the paper.

```python
FLANN_INDEX_LSH = 6
index_params = dict(
    algorithm=FLANN_INDEX_LSH, table_number=12, key_size=20, multi_probe_level=2
)
```

We also create a dictionary to specify the maximum number of leaves to visit, as follows.

```python
search_params = dict(checks=50)
```

Initiate SIFT detector.

```python
sift = cv2.SIFT_create()
```

Find the keypoints and descriptors with SIFT.

```python
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
```

We will now define the FLANN parameters. Here, `trees` is the number of randomized k-D trees to build for the index.

```python
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1, des2, k=2)
```

We will only draw good matches, so create a mask.

```python
matchesMask = [[0, 0] for i in range(len(matches))]
```

We can perform a ratio test to determine good matches.

```python
for i, (m, n) in enumerate(matches):
    # Lowe's ratio test: mark the match as good if it is clearly better
    # than the second-best candidate
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
```

Finally, we can visualize the matches. The snippet below uses kornia's `draw_LAF_matches` utility and assumes matched keypoints (`mkpts0`, `mkpts1`) and an `inliers` mask produced by a matcher such as LoFTR.

```python
# Assumed imports for this visualization (exact module paths may vary
# across kornia / kornia_moons versions)
import torch
import kornia as K
import kornia.feature as KF
from kornia_moons.viz import draw_LAF_matches

draw_LAF_matches(
    KF.laf_from_center_scale_ori(
        torch.from_numpy(mkpts0).view(1, -1, 2),
        torch.ones(mkpts0.shape[0]).view(1, -1, 1, 1),
        torch.ones(mkpts0.shape[0]).view(1, -1, 1),
    ),
    KF.laf_from_center_scale_ori(
        torch.from_numpy(mkpts1).view(1, -1, 2),
        torch.ones(mkpts1.shape[0]).view(1, -1, 1, 1),
        torch.ones(mkpts1.shape[0]).view(1, -1, 1),
    ),
    torch.arange(mkpts0.shape[0]).view(-1, 1).repeat(1, 2),
    K.tensor_to_image(img1),
    K.tensor_to_image(img2),
    inliers,
    draw_dict={
        "inlier_color": (0.1, 1, 0.1, 0.5),
        "tentative_color": None,
        "feature_color": (0.2, 0.2, 1, 0.5),
        "vertical": False,
    },
)
```

The best matches are visualized in green, while less certain matches are in blue.

![LoFTR](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/feature-extraction-feature-matching/LoFTR.png)

## Resources and Further Reading

- [FLANN Github](https://github.com/flann-lib/flann)
- [Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images](https://arxiv.org/pdf/1710.02726.pdf)
- [ORB (Oriented FAST and Rotated BRIEF) tutorial](https://docs.opencv.org/4.x/d1/d89/tutorial_py_orb.html)
- [Kornia tutorial on Image Matching](https://kornia.github.io/tutorials/nbs/image_matching.html)
- [LoFTR Github](https://github.com/zju3dv/LoFTR)
- [OpenCV Github](https://github.com/opencv/opencv-python)
- [OpenCV Feature Matching Tutorial](https://docs.opencv.org/4.x/dc/dc3/tutorial_py_matcher.html)
- [OpenGlue: Open Source Graph Neural Net Based Pipeline for Image Matching](https://arxiv.org/abs/2204.08870)

### Feature Description
https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/feature_description.md

# Feature Description

Features are attributes of instances that are learned by a model and later used to recognize new instances.

## How Can We Represent Features In Data Structures?

Representing features in data is crucial for organizing and manipulating data effectively. Features, also called attributes or variables, can be diverse, ranging from numerical values and categories to more complex structures like images or text. Some ways to represent features for computer vision tasks are:

- **Numerical features**

  - Arrays/Lists: Simplest form to store numerical values. Each element in the array corresponds to a feature.
  - Tensors: Multidimensional arrays often used in machine learning frameworks to handle large sets of numerical data efficiently.

- **Categorical features**

  - Dictionaries/Lists: Mapping categories to numerical labels or directly storing categorical values.
  - One hot encoding: Transforming categorical variables into binary vectors where each bit represents a category.

- **Image features**

  - Pixel Values: Storing raw pixel values as matrices or multi-dimensional arrays.
  - Convolutional Neural Network (CNN) features: Extracting features using pre-trained CNN models (see the sketch below).
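
To make these representations concrete, here is a minimal sketch using NumPy; the category names, label list, and image size are illustrative placeholders.

```python
import numpy as np

# Numerical features: a plain array, one row per instance, one column per feature.
numerical = np.array([[0.5, 1.2, 3.3], [0.1, 0.7, 2.8]])

# Categorical features: map categories to integer indices, then one-hot encode them.
categories = ["cat", "dog", "bird"]
labels = ["dog", "cat", "bird", "dog"]
indices = np.array([categories.index(label) for label in labels])
one_hot = np.eye(len(categories))[indices]  # shape: (4, 3), one bit set per row

# Image features: raw pixel values stored as a (height, width, channels) array.
# In practice, CNN features would be obtained by running such an array through a
# pre-trained model and keeping an intermediate activation as the descriptor.
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
```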

## What Makes a Good Descriptor

A good descriptor in image processing or computer vision is a set of characteristics or features that effectively represent key information about an object or scene in an image. Here are some aspects that contribute to making a good descriptor:

- **Invariant to transformation:** Descriptors should ideally be robust to variations like rotation, translation, scaling and changes in illumination. This means that regardless of how an object is positioned or how the image is altered, the descriptor should remain relatively unchanged and contain the same description as the original one.
- **Distinctiveness:** A good descriptor captures unique information about the object. It should be able to discriminate between different objects or parts of an image and be distinct enough to differentiate them from similar elements.
- **Dimensionality:** Good descriptors often have a manageable size, conveying enough information without being excessively large. Balancing dimensionality is crucial for efficiency in processing and storage.
- **Locality:** Descriptors often identify local features within an image. Local descriptors focus on specific regions or keypoints and describe the characteristics of these areas, enabling matching and recognition of similar regions across different images.
- **Repeatability:** Descriptors should be consistent and reproducible across multiple instances of the same object or scene even in the presence of noise or minor variations.
- **Compatibility with Matching Algorithms:** Descriptors are often used in conjunction with matching algorithms to find correspondences between different images. A good descriptor should be suitable for the matching algorithm being used, whether it's based on distance metrics, machine learning models, or other techniques.
- **Computational Efficiency:** Efficiency is a crucial property of a descriptor, especially in real-time applications. Descriptors should be computationally feasible for quick processing, particularly in scenarios where speed is crucial, such as robotics or autonomous vehicles.
- **Adaptability:** Descriptors that can adapt or learn from the data they are processing can be highly effective, especially in situations where the characteristics of the objects or scenes may change over time. This can increase the usability of the descriptor.
- **Noise Robustness:** Descriptors should be able to handle noise in the image data without significantly compromising their ability to represent the underlying features accurately.

## Some of the Techniques Used in Feature Descriptors

### SIFT

![Basic Working of SIFT](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/feature-extraction-feature-matching/Original-SIFT-algorithm-flow.png)

SIFT stands for Scale-Invariant Feature Transform. It is a widely used algorithm in computer vision and image processing for detecting and describing local features in images.

The working of SIFT is given below:

- **Scale Space Extrema detection:** It starts by detecting potential interest points in an image across multiple scales. It looks for locations in the image where the difference of Gaussian function reaches a maximum or minimum over space and scale. These keypoint locations are considered stable under various scale changes.
- **Keypoint Localization:** Once potential keypoints are identified, SIFT refines their positions to sub-pixel accuracy and discards low-contrast keypoints and keypoints on edges to ensure accurate localization.
- **Orientation Assignment:** SIFT computes a dominant orientation for each keypoint based on local image gradient directions. This step makes the descriptor invariant to image rotation.
- **Descriptor Generation:** A descriptor is computed for each keypoint region, capturing information about the local image gradients near the keypoint. This descriptor is a compact representation that encapsulates the key characteristics of the image patch surrounding the keypoint.
- **Descriptor Matching:** Finally, these descriptors are used for matching keypoints between different images. The descriptors from one image are compared to those in another image to find correspondences.

SIFT's robustness to various image transformations and its ability to find distinctive features in an image makes it valuable in applications like object recognition, image stitching, and 3D reconstruction.

You can learn more about SIFT using the following references:

- [Introduction to SIFT (Scale-Invariant Feature Transform)](https://docs.opencv.org/4.x/da/df5/tutorial_py_sift_intro.html)

- [What is SIFT](https://www.educative.io/answers/what-is-sift)

- [SIFT](https://www.cse.iitb.ac.in/~ajitvr/CS763/SIFT.pdf)

### SURF

![Basic Working of SURF](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/feature-extraction-feature-matching/Flow-Chart-for-SURF-Feature-Detection.png)

SURF stands for Speeded-Up Robust Features. It is another popular algorithm in computer vision and image processing. It is particularly known for its speed and robustness in detecting and describing local image features.

The basic workflow of SURF is given below:

- **Integral images:** SURF utilizes integral images, precomputed representations of the original image. They allow fast calculations of rectangular area sums within an image, enabling a quicker feature computation.

- **Blob detection:** Similar to other feature detection algorithms, SURF starts by identifying potential interest points or keypoints in the image. It uses a Hessian matrix to detect blobs or regions that exhibit significant variations in intensity in multiple directions and scales. These regions are potential keypoints.

- **Scale Selection**: It determines the scale of the keypoints by identifying regions with significant changes in scale-space. It analyzes the determinant of the Hessian matrix across different scales to find robust keypoints at multiple scales.

- **Orientation Assignment:** For each detected keypoint, SURF assigns a dominant orientation. This is done by calculating Haar wavelet responses in different directions around the keypoint’s neighborhood. Unlike SIFT, SURF then builds its descriptor from a set of rectangular filters (Haar wavelets) applied to subregions of the keypoint's neighborhood; the responses of these filters form a feature vector representing the keypoint.

- **Descriptor Matching:** The generated descriptors are then used to match key points between different images. Matching involves comparing the feature vectors of keypoints in one image to those in another image to find correspondences.

The key strengths of SURF lie in its computational efficiency, which is achieved through the use of integral images and Haar wavelet approximations while maintaining robustness to scale, rotation, and illumination changes. This makes SURF suitable for real-time applications where speed plays a crucial part in object detection, tracking, and image stitching.
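
A quick sketch of extracting SURF keypoints and descriptors with OpenCV is shown below. Note that SURF is patented and lives in the non-free `xfeatures2d` contrib module, so this only runs on OpenCV builds compiled with the non-free algorithms enabled; the file name and Hessian threshold are illustrative.

```python
import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Requires an OpenCV build with the contrib "non-free" module enabled.
surf = cv2.xfeatures2d.SURF_create(400)  # Hessian threshold of 400

# Detect keypoints and compute descriptors (64-D by default, 128-D in extended mode).
keypoints, descriptors = surf.detectAndCompute(img, None)

# Draw rich keypoints (circles scaled by keypoint size, with orientation).
img_keypoints = cv2.drawKeypoints(img, keypoints, None, (255, 0, 0), 4)
```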

You can learn more about SURF using the following references:

- [OpenCV Tutorial - Introduction to SURF (Speeded-Up Robust Features)](https://docs.opencv.org/3.4/df/dd2/tutorial_py_surf_intro.html)

- [Journal Paper - Feature Extraction Using SURF Algorithm for Object Recognition](https://www.ijtra.com/view/feature-extraction-using-surf-algorithm-for-object-recognition.pdf)

### Real-world Applications of Feature Extraction in Computer Vision
https://huggingface.co/learn/computer-vision-course/unit1/feature-extraction/real-world-applications.md

# Real-world Applications of Feature Extraction in Computer Vision

## Introduction

Feature extraction is a cornerstone of computer vision, enabling machines to interpret and process visual data like humans. This vital process finds application in diverse fields, impacting our daily lives. We explore key areas where feature extraction significantly contributes: facial recognition, object tracking, and anomaly detection.

### Facial Recognition

**Overview and Techniques**: This technology relies on identifying unique facial features - distances between eyes, nose shape, jawline contours, etc. While traditional methods focus on geometric feature extraction, modern systems predominantly use deep learning, particularly CNNs, to analyze facial features more comprehensively.

**Applications**:

- **Security Systems**: Airports and public spaces often employ facial recognition for surveillance and security. For instance, the facial recognition system at Dubai International Airport provides swift and secure immigration checks.
- **Consumer Electronics**: Smartphones like the iPhone use facial recognition (Face ID) for secure unlocking and authentication of payments and app access.
- **Healthcare**: Facial recognition aids in diagnosing genetic conditions. Tools like Face2Gene assist clinicians in identifying syndromes by analyzing facial features.
- **Marketing and Retail**: Companies use facial recognition to gauge customer reactions to products or advertisements, adapting strategies based on emotional responses.

### Object Tracking

**Overview and Techniques**: In object tracking, key features of an object are continuously detected and followed across video frames. Techniques range from basic methods like color tracking to more sophisticated ones like Kalman filtering and CNN-based trackers.

**Applications**:

- **Automotive Safety**: Tesla's Autopilot system uses object tracking to identify and monitor surrounding vehicles, enhancing driving safety.
- **Sports Broadcasting**: Hawk-Eye technology in sports like tennis and cricket tracks ball movement, aiding in accurate decision-making.
- **Wildlife Conservation**: Camera traps equipped with object tracking algorithms help monitor animal populations and movements, aiding in conservation efforts. For example, systems like TrapTag facilitate tracking rare species in remote areas.

### Anomaly Detection

**Overview and Techniques**: Anomaly detection in visual data seeks to identify patterns that deviate from the norm. Techniques range from simple statistical methods to complex neural networks, like autoencoders, trained on 'normal' data to detect outliers.

**Applications**:

- **Public Safety**: In urban surveillance, anomaly detection algorithms help identify suspicious activities or left-behind objects, contributing to public safety. London's city-wide CCTV network employs such technologies.
- **Industrial Quality Control**: Manufacturing sectors use visual anomaly detection for quality assurance. For instance, BMW uses computer vision to detect minute defects in car parts during production.
- **Healthcare Diagnostics**: In medical imaging, anomaly detection aids in identifying tumors or other abnormalities. AI-driven platforms like Zebra Medical Vision assist radiologists in spotting unusual patterns in medical scans.

## Conclusion

Feature extraction in computer vision is not just a technical concept but a transformative tool impacting various facets of life. From enhancing security, aiding medical diagnostics, to revolutionizing industrial and environmental monitoring, its applications are vast and continually evolving. As technology advances, the scope of feature extraction is bound to expand, offering more sophisticated and impactful solutions across diverse sectors, making learning and understanding of this field both exciting and essential for future innovations.

### Image
https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/image.md

# Image

It might be weird that we will explain to you what an image is in a Computer Vision Course. Presumably, you got here in the first place because you wanted to know more about processing images and videos. It seems trivial, but you are in for a surprise! When it comes to images, there is much more than meets the eye (pun intended).

## Definition of Image

An image is a visual representation of an object, a scene, a person, or even a concept. They can be photographs, paintings, drawings, schemes, scans and more! One of the more surprising things is that an image is also a function. More precisely, an image is an n-dimensional function. We will first consider it to be two-dimensional \\(n=2\\). We will call it \\(F(X,Y)\\), where \\(X\\) and \\(Y\\) are spatial coordinates. Do not get distracted by the fancy name. Spatial coordinates are just the system that we use to describe the positions of objects in a physical space with the most common one being the 2D Cartesian system. The amplitude of F at a pair of coordinates \\(x_i, y_i\\) is the intensity or gray level of the image at that point. The intensity is what gives you the perception of light and dark. Typically, when we have the coordinate pair \\(x_1\\) and \\(y_1\\), we refer to them as pixels (picture elements).

Images are discrete, and the processes involved in assembling them are continuous. The image generation processes will be discussed in the next chapter. Right now, what matters is that the value of \\(F\\) at specific coordinates holds a physical meaning. The function \\(F(X,Y)\\) is characterized by two components: the amount of illumination from the source and the amount of illumination reflected by the object in the scene. Intensity images are also constrained in their intensity since the function is typically non-negative, and their values are finite. 

That is not the only way one can create an image.  Sometimes, they are created by computers with or without the help of AI.  We have a dedicated chapter for images that do have a little helping hand from AI. Most of the terminology we will introduce here will still be applicable.

A different type of image is volumetric or 3D images. The number of dimensions in 3D images is equal to three. As a result, we have a \\(F(X,Y,Z)\\) function. Most of our reasoning still applies, with the only difference being that the triplet \\(x_i,y_i,z_i\\) is called a voxel (volume element). These images can be acquired in 3D; that is, the images are acquired in a way that is reconstructed in a 3D space. Examples of such images include medical scans, magnetic resonance, and certain types of microscopes. It is also possible to reconstruct a 3D image from a 2D one. Reconstructing is a challenging task, and it also has its dedicated chapter.

Now that we have discussed space, we can talk about color. The good news is you have likely heard of image channels, too. You might not understand what they mean, but fear not! Image channels are simply the different color components that make an image. In reference to the \\(F(X,Y)\\), we will have \\(F\\) for each color component. Each color has its own intensity level. For a channel that picks up the color red, a high intensity means that the color is very red and a low intensity means that there is little to no red in there.

If you're only looking at the \\(F(x,y)\\) for one color, it ranges from 0 to 255, where 0 represents no intensity and 255 represents maximum intensity. In a different color system, these values might combine differently. So, it is important to understand where your data comes from when interpreting these values.

There are special types of images where the coordinates \\(F(x_i,y_i)\\) do not describe an intensity value, but instead label a pixel. The simplest example of an operation that results in such an image is separating foreground and background. Everything that is foreground receives the label 1, and everything that is background receives the label 0. These images are commonly referred to as labeled images. When there are only two labels, like our example, we call them binary images or masks. 

You may have heard of 4D or even 5D images. This terminology is mostly used by people in the biomedical field and microscopists. Again, fear not! This naming came to be from people who image volumetric data in time, with different channels, or different imaging modalities (i.e. photo and an x-ray). The idea is that each new source of information becomes an extra dimension. Thus, a 5D image is a volumetric image (3D) imaged in time (4D) and using different channels (5D).

But how are images represented in computers? Most commonly by matrices. It is easy to picture an image as a 2D numerical array. This is an advantage because computers handle arrays really well. Seeing matrices as images helps to understand some of the processes in convolutional neural networks and in image preprocessing. We will see more details later on.
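
As a small sketch, here is an image loaded as a NumPy array, a single color channel viewed as its own intensity function, and a binary mask obtained by thresholding; the file name and threshold value are illustrative.

```python
import numpy as np
from PIL import Image

# An image is just an array: (height, width, channels), with uint8 values in [0, 255].
img = np.array(Image.open("kitten.jpg"))
print(img.shape, img.dtype)

# Each channel is its own intensity function F(x, y).
red_channel = img[:, :, 0]

# A labeled (binary) image: foreground pixels get 1, background pixels get 0.
gray = img.mean(axis=2)
mask = (gray > 128).astype(np.uint8)
```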

Alternatively, images can be represented as graphs where each node is a coordinate, and the edges are the neighboring coordinates. Take a moment to let that sink in. It also means that the algorithms and models used for graphs can also be used for images! The inverse can also be true - you can transform a graph into an image and analyze it as if it were a picture.

So far, we proposed a rather flexible definition of an image. This definition can accommodate different ways of acquiring visual data, but they all highlight the same crucial aspect: images are data points that contain a lot of spatial information. The key differences are the spatial resolution (either 2D or 3D), their color systems (RGB or others), and whether they have a time component attached to them.

## Images vs Other Data Types

### Difference between Image and Video

If you have been paying attention, you may have caught on to the idea that videos are a visual representation of images with a time component attached. For 2D image acquisition, you can add a time dimension such that \\(F(X,Y,T)\\) becomes your imaging function.

Images can naturally have a hidden component in time. They are, after all, taken at a specific point in time, and different images may be related in time, too. However, images and videos differ in how they sample this temporal information. An image is a static representation at a single point in time, while a video is a sequence of images played at a rate that creates an illusion of motion. This rate is what we can call frames per second. 

This is so fundamental, that this course has a dedicated chapter to video. There, we will go over the adaptations required to deal with this added dimension.

### Images vs Tabular Data

In tabular data, dimensionality is usually defined by the number of features (columns) describing one data point. In visual data, dimensionality usually refers to the number of dimensions that describe your data. For a 2D image, we usually refer to the number of pixels along \\(x\\) and \\(y\\) as the image size.

Another aspect is the generation of features that describe visual data. They are generated by traditional preprocessing or learned through deep learning methods. We refer to this as feature extraction. It involves different algorithms discussed in more detail in the feature extraction chapter. It contrasts with feature engineering for tabular data, where new features are built upon existing ones.

Tabular data often require the handling of missing values, encoding categorical variables and re-scaling numerical features. The analogous process for image data is image resizing and intensity value normalization. We call these processes preprocessing and we will discuss them in greater detail in the chapter "preprocessing for computer vision".

### Key Differences

The table below summarizes the key aspects of different data types. 

|   | Feature                | Image                                                | Video                                                 | Audio                                                                     | Tabular Data                                                         |
|---|------------------------|------------------------------------------------------|-------------------------------------------------------|---------------------------------------------------------------------------|----------------------------------------------------------------------|
| 1 | Type                   | Single moment in time                                | Sequence of images over time                          | Single moment in time                                                     | Structured data organized in rows and columns                        |
| 2 | Data Representation    | Typically a 2D array of pixels                       | Typically a 3D array of frames                        | Typically a 1D array of audio samples                                     | Typically a 2D array with features as columns and individual data samples as rows (i.e. spreadsheets, database tables)              |
| 3 | File types             | JPEG, PNG, RAW, etc.                                 | MP4, AVI, MOV, etc.                                   | WAV, MP3, FLAC, etc.                                                      | CSV, Excel (.xlsx, .xls), Database formats, etc.                     |
| 4 | Data Augmentation	     | Flipping, rotating, cropping	                        | Temporal jittering, speed variations, occlusion       | Background noise addition, reverberation, spectral manipulation           |  ROSE, SMOTE, ADASYN                                |
| 5 | Feature Extraction	    | Edges, textures, colors	                             | Edges, textures, colors, optical flow, trajectories   | Spectrogram, Mel-Frequency Cepstral Coefficients (MFCCs), Chroma features | Statistical analysis, Feature engineering, Data aggregation          |
| 6 | Learning Models	       | CNNs                                                 | RNNs, 3D CNNs                                         | CNNs, RNNs                                                                | Linear Regression, Decision Trees, Random Forests, Gradient Boosting |
| 7 | Machine Learning Tasks | Image classification, Segmentation, Object Detection | Video action recognition, temporal modeling, tracking | Speech recognition, speaker identification, music genre classification    | Regression, Classification, Clustering                               |
| 8 | Computational Cost	    | Less expensive	                                      | More expensive                                        | Moderate to high                                                          | Generally less expensive compared to others                          |
| 9 | Applications           | Facial recognition for security access control	      | Sign language interpretation for live communication   | Voice assistants, Speech-to-text, Music genre classification              | Predictive modeling, Fraud detection, Weather forecasting            |

### Pre-processing for Computer Vision Tasks
https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/examples-preprocess.md

# Pre-processing for Computer Vision Tasks

Now that we have seen what are images, how they are acquired, and their impact, it is time to understand what operations we can perform and how they are used during the model-building process. 

## Operations in Digital Image Processing
In digital image processing, operations on images are diverse and can be categorized into:

- Logical
- Statistical
- Geometrical
- Mathematical
- Transform operations
 
Each category encompasses different techniques, such as morphological operations under logical operations or Fourier transforms and principal component analysis (PCA) under transforms. In this context, we refer to morphology as the group of operations that use structuring elements to generate images of the same size by looking into the values of the pixel neighborhood. Understanding the distinction between element-wise and matrix operations is important in image manipulation. Element-wise operations, such as raising an image to a power or dividing it by another image, involve processing each pixel individually. This pixel-based approach contrasts with matrix operations, which utilize matrix theory for image manipulation. Having said that, you can do whatever you want with images, as they are matrices containing numbers!
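
As a minimal sketch of these two families of operations, the snippet below applies a couple of element-wise operations and two morphological operations with a square structuring element; the file name and kernel size are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Element-wise operations: each pixel is processed individually.
squared = img**2
normalized = img / 255.0

# Morphological operations: a structuring element looks at each pixel's neighborhood.
kernel = np.ones((5, 5), np.uint8)  # square structuring element
binary = (img > 128).astype(np.uint8)
dilated = cv2.dilate(binary, kernel, iterations=1)
eroded = cv2.erode(binary, kernel, iterations=1)
```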

## Mathematical Tools in Image Processing
Mathematical tools are indispensable in digital image processing. Set theory, for instance, is crucial for understanding and performing operations on images, particularly binary images. In these images, pixels are typically categorized as either foreground (1) or background (0). In set theory, operations such as union and intersection determine relationships between features represented by pixel coordinates. Intensity transformations and spatial filtering are other mathematical tools. They focus on manipulating pixel values within an image, where operators are applied to single images or a set of images for various purposes, like noise reduction.

## Spatial Filtering Techniques and Image Enhancement
Spatial filtering encompasses a broad range of applications in image processing, primarily modifying images by altering each pixel's value based on its neighboring pixels' values. Techniques include linear spatial filters, which can blur (low pass filters) or sharpen (high pass filters) an image. The properties and applications of different filter kernels, such as the Gaussian and box filters, are contrasted. Sharpening filters emphasize transitions in intensity and are often implemented through digital differentiation techniques like the Laplacian, highlighting edges and discontinuities in an image.
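
The following sketch shows a low-pass (smoothing) Gaussian filter and a Laplacian-based sharpening step, as described above; the kernel sizes and input file are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Low-pass (smoothing) filters blur the image.
box_blur = cv2.blur(img, (5, 5))
gaussian_blur = cv2.GaussianBlur(img, (5, 5), 1.5)

# The Laplacian highlights intensity transitions; subtracting it sharpens edges.
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sharpened = np.clip(img.astype(np.float64) - laplacian, 0, 255).astype(np.uint8)
```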

## Data Augmentation
Data augmentation plays a crucial role in enhancing the performance and generalization of Convolutional Neural Networks (CNNs) used in image classification. This process involves artificially expanding a training dataset by creating modified versions of data points, either through minor alterations or by generating new data using deep learning techniques. 

Augmented data is created by applying modifications such as geometric and color space transformations to existing data, thereby enriching the original dataset with varied forms. Conversely, synthetic data is entirely new and generated from scratch using advanced techniques like Deep Neural Networks (DNNs) and Generative Adversarial Networks (GANs), adding further diversity and volume to the dataset. Both methods significantly expand the quantity and variety of data available for training machine learning models. Data augmentation is applicable not only to images but also to audio, video, text, and other data types. This is good for scenarios with limited training data. It enhances model accuracy, prevents overfitting, and reduces costs associated with data labelling and cleaning. However, challenges such as the persistence of original dataset biases and the high cost of quality assurance remain.

In practice, data augmentation techniques vary across data types. For audio, this includes noise injection and pitch adjustments; for text, methods like word shuffling and syntax-tree manipulation are used. Image augmentation involves transformations like flipping, cropping, and applying kernel filters. Advanced techniques like Neural Style Transfer and the use of GANs for new data point generation further extend its capabilities. These methods are instrumental in fields like healthcare for medical imaging, self-driving cars using synthetic data, and natural language processing, particularly in low-resource language scenarios. Specific image augmentation practices, such as random rotations, brightness adjustments, shifts, flips, and zoom, are implemented using tools like Pytorch, Augmentor, Albumentations, Imgaug, and OpenCV. These tools facilitate a range of augmentations, from Gaussian noise to perspective skewing, catering to diverse machine learning needs.
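
As a small example of the image augmentations mentioned above, here is a sketch using torchvision transforms; the specific transforms, parameters, and file name are illustrative choices.

```python
from PIL import Image
from torchvision import transforms

# A random pipeline of common augmentations: flips, rotations, color jitter, crops.
augment = transforms.Compose(
    [
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomRotation(degrees=15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    ]
)

img = Image.open("kitten.jpg")
augmented = augment(img)  # a new, randomly transformed image on every call
```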

The significance of data augmentation becomes particularly evident in the context of image classification with CNNs. Standardized datasets, often used in initial CNN training, set high expectations due to their ample sample sizes and resultant model accuracy. However, when these models are applied to real-world problems, a gap in performance is frequently observed, underscoring the need for more extensive and varied data. Data augmentation addresses this gap by multiplying the number of images in a dataset, potentially by significant factors, without the need for additional data collection. This not only increases dataset size but also introduces variability, enhancing the robustness of the training process. By implementing batch-wise augmentation during model training, it also conserves disk space, as there's no need to store transformed images.

Overall, data augmentation is not merely a method for dataset expansion: it's a vital component in developing effective and practical CNN models for image classification tasks. By improving model performance and its ability to generalize from training data to real-world applications, data augmentation stands as a cornerstone technique in the field of deep learning, addressing the perpetual demand for more comprehensive and diverse data.

### Imaging in Real-life
https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/extension-image.md

#  Imaging in Real-life

Have you ever tried to take a picture of a litter of kittens? If not, you are missing out on a beautiful, chaotic mess. Kittens are adorable creatures that move around in the most deranged ways. They will do the cutest thing possible, but it will only last half a second before they top it off with an even cuter event. Before you know it, you are bending yourself backward to get that one kitten in the frame while changing the zoom and the angle of the camera, all while another kitten climbs your leg. You get so immersed in their fluffiness that you do not have the time to check the photos. When you sit down to check on them. They. are. all. just. a. blur. There are only one or two pictures worth keeping on your phone. You are just left there thinking, I thought kittens were more photogenic.

The litter of kittens is a simple story, but it reflects why it is so hard to image things in real life. The samples (the scene containing the kittens) often change faster than the camera can adjust to them. Keeping the camera in a steady position without tracking the kitten is also difficult, since our object (the kitten) moves in space in ways that change the focus of the camera. Changing the lenses to capture a wide field might also cause distortions depending on the distance of the object to the camera (see the adorable example below). The event of interest (that one adorable pose of the kitten) is lost in hundreds of other rather uninteresting pictures. Our kitten example is a silly one, but these difficulties also happen in a variety of other scenarios. Imaging is hard. Yet, the internet is flooded with adorable cat pictures.

![Cat kisses showing distortion based on the distance from the object](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/cat_kiss.gif)

It is tempting to think that if we just had a better camera, one that responds more rapidly and has a higher resolution, then all would be solved. We would get the adorable pictures we want. Moreover, we will use the knowledge in this course to do more than just capture all of the adorable kittens: we will want to build a model on a nanny cam that checks if the kittens are still together with their mommy, so we know they are all safe and sound. Sounds perfect, right?

Before we go out to buy the newest, flashiest camera on the market, thinking we will have better data, that it will be super easy to train a model, that we will have a super-accurate model with out-of-this-world performance on the kitten tracking market: this paragraph is here to guide you in a more productive direction and possibly save you a lot of time and money. A higher resolution is not the answer to all your problems. For starters, a typical neural network model for dealing with images is a convolutional neural network (CNN). CNNs expect an image of a given size. A large image needs a large model. Training will take a longer time. Chances are that your computers are also limited in RAM. A larger image size will mean fewer images to train on because the RAM will be limited for each iteration.

The evident solution is to say that we just get a computer with more GPU and more RAM. This also means that besides buying the camera, you will have to pay more for whatever service you will use to train the kitten model. More generally, this does not reflect real-world scenarios. Sometimes, the real application of a computer model is a GPU and memory-poor application. Wait, isn't that our case in the first place? How are we going to fit our model into the hardware of the nanny cam? 
 
We have an idea: we will train a smaller model to have the same behavior as the big model! By the way, this is an actual thing you can do. But even so, collecting the highest quality data possible might not be a great idea, simply because it usually takes longer to acquire and transmit. 50 GB of kitten pictures is still 50 GB of data, no matter how adorable its contents are. Another argument is that computing resources are usually either paid for or shared. In the first case, this might not be a good use of money. As for the second, taking up an entire server is rarely a good way to make friends.

There is even a better reason not to go for the highest resolution possible. The higher resolution might have more noise compared to a low one. Resolution amplifies not only your capability to capture the signal you are interested in but also your capability to pick up noise.  Thus, it might be easier to learn something on a lower-resolution image. Lower resolutions might help to have faster training, higher accuracy, and a cheaper model, computationally and monetarily speaking. The takeaway here is to go for the highest resolution possible given the noise characteristics of the image and the infrastructure required both to train and deploy the model. And lastly, why are we using a high-quality camera in the first place? If we want to build a model on a nanny cam, we might as well get the pictures from the nanny cam.

## Imaging everything 

One thing that is quite impressive about imaging techniques is how much we push for them. We never know when to stop. This is not only true for the kitten picture, but for the world around us. We are curious by nature. As seen in the first chapter, we rely on vision to make decisions. When it is a difficult decision, we want to have a clear vision of it (no pun intended). 

It is not surprising that, as a species, we have developed new ways of seeing beyond the range of what our eyes could capture. We want to see what nature did not allow us to see in the first place. I can almost guarantee that if there is something out there that we are not sure of what it looks like, there is someone there trying to image it.

As a species, we only see a fraction of the electromagnetic spectrum. We call that the visible spectrum. The image below shows us just how narrow it is:

![Image showing the visible spectrum compared to the Electromagnetic Spectrum by 
https://open.lib.umn.edu/intropsyc/chapter/4-2-seeing/
](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/human_spectrum.jpg)

To see more than what Mother Nature has given us, we need sensors capturing beyond that spectrum. In other words, we need to detect things at different wavelengths.  Infrared (IR) is used in night vision devices and some astronomical observations. Magnetic resonance uses strong magnetic fields and radio waves to image soft human tissues. We created ways to see things that do not rely on light. For instance, electron microscopy uses electrons to zoom in at much higher resolution than traditional light.  Ultrasound is another great example. Ultrasound imaging harnesses sound waves to create detailed, real-time images of internal organs and tissues, offering a non-invasive and dynamic perspective that goes beyond what is achievable with standard light-based imaging methods.

We then directed our colossal lenses outwards toward the sky, using them to envision what was once unseen and unknown.  We also pointed them out to the minuscule realm by building images of the DNA structure and individual atoms. Both of these instruments operate on the idea of manipulating light. We use different types of mirrors or lenses, bend and focus light in the specific ways we are interested in.

We are so obsessive about seeing things that scientists have even changed the DNA sequence of certain animals so they can tag proteins of interest with a special type of protein (green fluorescence protein, GFP). As the name suggests, when a green wavelength of light illuminates the sample, the GFP emits a fluorescent signal back.  Now, it is easier to know where the protein of interest is being expressed because scientists can image it.

After that, it was a matter of improving this system to get more channels in place, in a longer timescale, in a better resolution. A great example of this is how microscopes now generate terabytes of data overnight. 

A great example of this combined effort is the video below. In it, you see a time lapse of the projection of the 3D image of a developing fish embryo tagged with a fluorescent protein. Each colored dot you see on the image represents an individual cell.

![Fish Embryo Image adapted from https://www.biorxiv.org/content/10.1101/2023.03.06.531398v2.supplementary-material ](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/fish.gif)

This diversity in imaging is quite phenomenal. These optical tools have become the eyes through which we perceive the universe. They provide us with insights that have revolutionized our understanding of the universe and life itself.  We use it on a daily basis to send pictures of our loved ones when they are away. We get an x-ray when the doctors need a closer look. Pregnant people have ultrasounds to check on their babies. It might sound a bit magical, even whimsical, that we managed to image things as massive as black holes and as small as electrons. And well, it kind of is. 

## Perspective on Imaging

As we have seen previously, we grew accustomed to different ways to image things. It is just a routine thing now, but it took a lot of time and effort. It does not look like we are slowing down. We are continuously finding new ways to see. New ways to image. As we continue to construct new instruments to see better, new stories and mysteries will be revealed. In this part, we will illustrate some mysteries that were already revealed to us in the past.

### Picture 51

![Picture 51 by Raymond Gosling/King's College London ](
https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Photo_51_x-ray_diffraction_image.jpg)

The first picture of DNA is also known as Photo 51. It is an X-ray fiber diffraction image of a crystalline gel composed of DNA fibers. It was taken by Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, in May 1952. It was a key piece of the double helix model constructed by Watson and Crick in 1953. There is a lot of controversy surrounding this photo. Part of it comes from the unrecognized contribution made by Rosalind Franklin's early work and the circumstances under which the photo was shared with Watson and Crick. Nevertheless, it has significantly contributed to our understanding of DNA's structure and the technologies that were developed thereafter.

### The pale blue dot

![The Pale Blue Dot By Voyager 1 ](
https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Pale_Blue_Dot.png)

The pale blue dot is a picture taken in 1990 by a space probe. Earth is so small that it occupies less than a pixel. The picture became famous for showing how tiny Earth is relative to the vastness of space. It inspired Carl Sagan to write the book "The Pale Blue Dot". This picture was taken by the 1500mm high-resolution narrow-angle camera on Voyager 1. The space probe is also responsible for taking the "Family Portrait of the Solar System".

### Black hole

![M87 by Event Horizon Telescope ](
https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Black_hole_-_Messier_87.jpg)
Another astronomically important event occurred in April 2019 when researchers captured the first image of a black hole! It was the image of the supermassive black hole at the center of the M87 galaxy in the constellation Virgo, about 55 million light-years away from Earth. The remarkable image was a product of the Event Horizon Telescope, a global network of synchronized radio observatories that worked together to create a virtual telescope as large as Earth. The data collected was enormous, over a petabyte, and had to be physically transported for processing due to its size. They needed to combine data coming from near-infrared, x-ray, millimeter wavelengths, and radio observations. This achievement was the culmination of years of effort by the Event Horizon Telescope Collaboration.

![Sag A by Event Horizon Telescope](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/sag-event-image.jpg)

Following the success with M87\*, astronomers aimed to image the supermassive black hole at the center of our galaxy, Sagittarius A\*. Imaging Sagittarius A\* posed unique challenges due to its smaller size and the rapid variability in its surrounding environment, which changes much faster than the environment around larger black holes like M87\*. This rapid movement made it difficult to capture a stable image that accurately represents the structure around Sagittarius A\*. Just like our kitten example! Despite these challenges, the images obtained are significant for testing Einstein's theory of general relativity under extreme gravitational conditions. While these observations are crucial, they are part of a broader array of methods used to test the predictions of general relativity.

### Images, images, images

![Video of a horse decoded from DNA from https://doi.org/10.1038/nature23017](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/horsegif_0.gif)

This one is a bit of a twist. It does not involve a new way to image, but rather a new way to read and archive images. The GIF you see above is an image that was stored in the DNA of living bacteria. This was first done in 2017 by a group of scientists as a proof of concept that a living organism can be an excellent way to archive data. To do this, they first translated the image values into nucleotide code (the famous ATCG). Then, they put this sequence into the DNA using a system called CRISPR, which is capable of editing DNA. Finally, they resequenced the DNA and reconstructed the GIF you see above.

That is already quite impressive, but buckle up. We can also see this in action! Well, not this precise example, but another group of scientists used high-speed atomic force microscopy to show how this works. This type of microscopy uses a sharp tip that mechanically scans the surface, and the tip's interaction with the surface generates a topographic description of the sample. All of this is at the nanoscale. The video below shows the CRISPR-Cas9 system, the DNA editor, doing its first step by chewing up the DNA. Yummy!

![CRISPR-Cas9 chewing up DNA adapted from https://doi.org/10.1038/s41467-017-01466-8](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/cas_9.gif)

There is more. Have you ever wondered how scientists read DNA? Believe it or not, that process also involves imaging. To know a DNA sequence, scientists need to make a copy of it first. These copies are created by labeling the nucleotides (the things we refer to as ATCG) with different fluorescent dyes. Nucleotides are matched to the sequence one at a time, and while they are added, a camera captures an image. The color that fluoresces indicates which nucleotide was added. By tracking individual locations, we can reconstruct the sequence of a DNA molecule. This sequencing technology goes beyond reconstructing images. It is used to understand different biological processes and it has many applications in clinical settings. Doctors can do all sorts of things with these sequences. For instance, a sample of a tumor can be sequenced and used to classify it as aggressive or not. This generates high-dimensional data. Making any conclusion in that high-dimensional setting is difficult, so the data are often reduced to 2D images. These 2D images can be processed just like any image. That means you can classify them using a CNN. Mind-boggling, right?

## Image characteristics depend on the acquisition

Regardless of the image type, all images share the same fundamental characteristics. They represent spatial components and are typically represented by matrices. However, it's crucial to recognize that images are not created equally. The distinct characteristics of an image come from both the subject matter and the method of image acquisition. In other words, we do not expect black holes and DNA to look alike, but we also do not expect a photograph and an X-ray of the same person to look the same.

Understanding image characteristics is a really good first step in building a computer vision model, not only because it will influence the performance of the model, but because it will dictate which models are more suitable for your problem. Notably, not every image type requires the development of a new neural network architecture. Sometimes, you can adapt a pre-existing model by fine-tuning it or manipulating the last layer to do a different task. Sometimes this manipulation is not needed; instead, preprocessing is employed to make your image more similar to the input that the network was trained on. Do not worry too much about the details of this right now; they will be addressed in later chapters of this course. They are mentioned here to help you understand why the context in which an image is acquired is relevant.

For images acquired in different wavelengths but in the same coordinates system, it can be as easy as seeing each acquisition as a different color channel. For instance, in an image acquired by both an X-ray and a near-infrared, you can treat them as if they were different color channels. In that way, each image is in its own grayscale. 

While it may seem straightforward, certain technologies, like radar and ultrasound, use a distinct coordinate system known as a polar grid. This grid originates from the center where the signal is emitted. Unlike the Cartesian system, the pixel size is not consistent: in practical terms, pixels represent larger areas as the distance from the center grows. There are two different approaches to handling this. The first one changes the coordinate system into one where the pixels are the same size. This will lead to a lot of missing information, which might not be very interesting, and it might result in a suboptimal storage system. The alternative approach is to leave the data as it is but to add the distance from the center as another input for the model, as sketched below.
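
A minimal NumPy sketch of that second approach, assuming the polar acquisition is stored with rows indexing the beam angle and columns indexing the radius (the shapes and normalization are illustrative):

```python
import numpy as np

# A polar-grid acquisition (e.g. ultrasound): rows index the beam angle,
# columns index the distance (radius) from the emission center.
polar_img = np.random.rand(256, 512).astype(np.float32)  # (angles, radii)
n_angles, n_radii = polar_img.shape

# Distance from the center for every pixel: constant along the angle axis.
radius_map = np.tile(np.linspace(0.0, 1.0, n_radii), (n_angles, 1)).astype(np.float32)

# Stack the radius map as an extra channel so the model can account for the
# growing physical area each pixel covers further away from the center.
model_input = np.stack([polar_img, radius_map], axis=-1)  # shape: (256, 512, 2)
```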

That is not the only scenario where the coordinates system comes into play. Another one is for satellite imaging. When there are multiple wavelengths captured under the same coordinates, you can treat them as different color channels,  as we have seen before. However, it is more complicated when the data are under different coordinate systems. Such as satellite images and an earth image being combined for a given task. In that case, the coordinates will need to be remapped into each other.

Lastly, image acquisition comes with its own set of biases. We can loosely define bias here as an undesired characteristic of the dataset, either because it is noise or because it changes the model behavior. There are many sources of bias, but a relevant one in image acquisition is measurement bias. Measurement bias happens when the dataset used to train your model varies too much from the dataset that your model actually sees, like our previous example of a high-resolution kitten image and the nanny cam. There can be other sources of measurement bias, such as the measurement coming from the labelers themselves (i.e different groups and different people label images differently), or from the context of the image (i.e. in trying to classify dogs and cats, if all the pictures of cats are on the sofa, the model might learn to distinguish sofa from non-sofa instead of cats and dogs). 

All of that is to say that recognizing and addressing the characteristics of images originating from different instruments is a good first step in building a computer vision model. Preprocessing techniques and strategies can then be used to mitigate the impact of the problems we identify on the model. The "Preprocessing for Computer Vision Tasks" chapter will go deeper into specific preprocessing methods used to enhance model performance.

### Image Acquisition Fundamentals in Digital Processing
https://huggingface.co/learn/computer-vision-course/unit1/image_and_imaging/imaging.md

# Image Acquisition Fundamentals in Digital Processing
Image acquisition in digital processing is the first step in turning physical phenomena (what we see in real life) into a digital representation (what we see in our computers). It starts with the interaction between an illumination source and the subject being imaged. This illumination can be of various types, from conventional light sources to more sophisticated forms like electromagnetic or ultrasound energy. The interaction results in energy either being reflected from or transmitted through the objects in the scene. This energy is captured by sensors, which transform one form of energy into another (i.e. transducers converting the incident energy into electrical voltage). The voltage signal is then digitized, resulting in a digital image. To do so, we need advanced technology and precise calibration to ensure that we have an accurate representation of the physical scene. In the next sections, we will explore some of these technologies.

![ The first photograph of Moon by Ranger 7 in 1964 (Courtesy of NASA)](https://huggingface.co/datasets/hf-vision/course-assets/resolve/4def8c412ee6b08f4522e818a0474d155363d87b/pic_3.png)

## Sensor Technologies and Their Role in Image Acquisition
As mentioned before, the first step in digital imaging is the sensors. To create a two-dimensional image, single sensing elements (such as photodiodes) are moved along the x and y axes. In contrast, the more common sensor strips capture images linearly in one direction; thus, to obtain a complete 2D image, these strips are moved perpendicularly. This technology is commonly found in devices like flatbed scanners and used in airborne imaging systems. In more specialized applications, like medical imaging (e.g., CAT scans), ring-configured sensor strips are used. These setups involve advanced reconstruction algorithms to transform the captured data into meaningful images.

Sensor arrays, like CCDs in digital cameras, consist of 2D arrays of sensing elements. They capture a complete image without motion, as each element detects part of the scene. These arrays are advantageous as they don't require movement to capture an image, unlike single sensing elements and sensor strips. The captured energy is focused onto the sensor array, converted into an analog signal, and then digitized to form a digital image.

## Digital Image Formation and Representation
The core of digital image formation is the function \\(f(x,y)\\), which is determined by the illumination source \\(i(x,y)\\), and the reflectance \\(r(x,y)\\) from the scene:

\\(f(x,y) = i(x,y) \cdot r(x,y)\\)

In transmission-based imaging, such as X-rays, transmissivity takes the place of reflectivity. The digital representation of an image is essentially a matrix or array of numerical values, each corresponding to a pixel. The process of transforming continuous image data into a digital format is twofold: 
- Sampling, which digitizes the coordinate values.
- Quantization, which converts amplitude values into discrete quantities. 
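
A tiny NumPy sketch of these two steps, using a synthetic image with continuous values in [0, 1] (the sampling stride and number of levels are illustrative):

```python
import numpy as np

# A "continuous" image: float amplitudes in [0, 1] on a fine spatial grid.
img = np.random.rand(512, 512)

# Sampling: digitize the coordinates by keeping every 4th sample in x and y.
sampled = img[::4, ::4]  # shape: (128, 128)

# Quantization: map amplitudes to 256 discrete intensity levels (8 bits).
levels = 256
quantized = np.round(sampled * (levels - 1)).astype(np.uint8)
```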

The resolution and quality of a digital image significantly depend on the following:
- The number of samples and discrete intensity levels used. 
- The dynamic range of the imaging system, which is the ratio of the maximum measurable intensity to the minimum detectable intensity. This also plays a crucial role in the appearance and contrast of the image.
  

    

## Understanding Resolution in Digital Imaging
Spatial resolution refers to the smallest distinguishable detail in an image and is often measured in line pairs per unit distance or pixels per unit distance. The meaningfulness of spatial resolution is context-dependent, varying according to the spatial units used. For example, a 20-megapixel camera typically offers higher detail resolution than an 8-megapixel camera. 
Intensity resolution relates to the smallest detectable change in intensity level and is often limited by the hardware's capabilities. It's quantized in binary increments, such as 8 bits or 256 levels. The perception of these intensity changes is influenced by various factors, including noise, saturation, and the capabilities of human vision.

    

## Image Restoration and Reconstruction Techniques
Image restoration focuses on recovering a degraded image using knowledge about the degradation phenomenon. It often involves modelling the degradation process and applying inverse processes to regain the original image.

    

In contrast, image enhancement is more subjective. It aims to improve the visual appearance of an image. Restoration techniques include dealing with issues like noise, which can originate from various sources during image acquisition or transmission. Advanced filters, both adaptive and non-adaptive, are used in this case because of their noise-reduction capabilities. In medical imaging, particularly in computed tomography (CT), image reconstruction from projections is a crucial application.

![The first photograph of a person, taken by Louis Daguerre in 1838 at the Boulevard du Temple, in Paris](https://huggingface.co/datasets/hf-vision/course-assets/resolve/4def8c412ee6b08f4522e818a0474d155363d87b/pic_2.png)

## Colour in Image Processing
Color is a powerful descriptor in image processing. It plays a role in object identification and recognition. Color image processing includes both pseudo-color and full-color processing. 

    

Pseudo-color processing assigns colors to grayscale intensities, while full-color processing uses actual color data from sensors. Understanding the fundamentals of color, including human color perception, the color spectrum, and the attributes of chromatic light, is key. The fundamentals of color involve the trichromatic nature of human vision, which perceives red, green, and blue; color perception arises from how the three types of cones in our eyes are stimulated. Finally, the color spectrum is the range of wavelengths in the electromagnetic spectrum that elicit distinct visual sensations.
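
As a small sketch of the two, the snippet below pseudo-colors a grayscale image with a colormap and splits a full-color image into its channels; the file names and chosen colormap are illustrative.

```python
import cv2

# Pseudo-color: assign colors to grayscale intensities via a colormap.
gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
pseudo = cv2.applyColorMap(gray, cv2.COLORMAP_JET)

# Full-color: work directly on the sensor's color channels (OpenCV loads BGR).
color = cv2.imread("photo.jpg", cv2.IMREAD_COLOR)
blue, green, red = cv2.split(color)
```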

Different color models, such as RGB for monitors and cameras and CMY/CMYK for printing, standardize color representation in digital imaging. In the RGB color model, images have three components (i.e., channels), each for red, green, and blue. The pixel depth in RGB images determines the number of possible colors, with a typical full-color image having a 24-bit depth (8 bits for each color component). This allows for over 16 million possible colors! The RGB color cube represents the range of colors achievable in this model, with the grayscale extending from black to white. 

    

## Image Compression
Data compression reduces the data needed to represent information. It distinguishes between data (the means of conveying information) and information itself. It targets redundancy, which is data that is either irrelevant or repetitive. For example, a 10:1 compression ratio indicates 90% data redundancy. 

In digital image compression, particularly with 2-D intensity arrays, the three main types of redundancies are:
- **Coding redundancy:** Coding redundancy is particularly prevalent in images where the distribution of intensity values does not spread evenly across all possible values, which is depicted as a non-uniform histogram. In such images, some intensity values occur more frequently than others, yet natural binary encoding assigns the same number of bits to represent each intensity value, regardless of its frequency. This means that common values are not encoded more efficiently than rare values, leading to inefficient use of bits and, thus coding redundancy. Ideally, more frequent values should be assigned shorter and less frequent values longer codes to minimize the number of bits used, which is not the case with natural binary encoding in non-uniform histograms.
- **Spatial and temporal redundancy:** Spatial and temporal redundancy appear in correlated pixel values within an image or across video frames.
- **Irrelevant information:** Irrelevant information includes data ignored by the human visual system or unnecessary for the image's purpose.

Efficient coding considers event probabilities, like intensity values in images. Techniques like run-length encoding reduce spatial redundancy in images with constant intensity lines, significantly compressing data. Similarly, temporal redundancy in video sequences can be addressed. Removing irrelevant information, though, leads to quantization, an irreversible loss of quantitative information. Information theory, with concepts like entropy, helps determine the minimum data needed for accurate image representation. Image quality post-compression is assessed using objective fidelity criteria (mathematical functions of input and output) and subjective fidelity criteria (human evaluations).
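As a hedged illustration of how run-length encoding exploits spatial redundancy, here is a minimal sketch; the example row of intensities is made up for demonstration.

```python
def run_length_encode(row):
    """Encode a 1-D sequence of intensities as (value, run_length) pairs."""
    runs = []
    current, count = row[0], 1
    for value in row[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

# A row with long constant-intensity runs compresses very well:
row = [255] * 200 + [0] * 56
print(run_length_encode(row))  # [(255, 200), (0, 56)] -- 2 pairs instead of 256 values
```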

Image compression systems use encoders and decoders. Encoders eliminate redundancies through mapping (to reduce spatial/temporal redundancy), quantization (to discard irrelevant information), and symbol coding (assigning codes to quantizer output). Decoders reverse these processes, except for quantization. Image file formats, containers, and standards like JPEG and MPEG are used for data organization and storage. Huffman coding is a notable method for removing coding redundancy, creating efficient representations by coding the least probable source symbols first.
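To make the Huffman idea concrete, here is a minimal, illustrative sketch that builds a Huffman code table for a made-up distribution of pixel intensities. It is not production code; it simply demonstrates that frequent symbols receive shorter codes than rare ones.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from a sequence of symbols (e.g. pixel intensities)."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tie-breaker, [(symbol, code), ...])
    heap = [(f, i, [(sym, "")]) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single distinct symbol
        return {heap[0][2][0][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prefix "0" to one subtree's codes and "1" to the other's.
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        counter += 1
        heapq.heappush(heap, (f1 + f2, counter, merged))
    return dict(heap[0][2])

# Frequent intensities get shorter codes than rare ones.
pixels = [128] * 60 + [64] * 25 + [255] * 10 + [0] * 5
print(huffman_codes(pixels))
```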

### Applications of Computer Vision
https://huggingface.co/learn/computer-vision-course/unit1/chapter1/applications.md

# Applications of Computer Vision

In today's world, computer vision systems perform ever more challenging tasks, some of which are even difficult for humans to do. Let's consider India. Did you know that India has the highest number of registered two-wheelers in the world? With that many drivers, some of them forget to wear a helmet. This is a dangerous practice and can cause severe injuries. To address this issue, the government of India, in collaboration with other institutions, developed a computer vision system that automatically detects riders without a helmet and records their license plates. The system imposes strict fines on them to discourage people from breaking the law.

Of course, fining people is not the only application of computer vision. There are uses across healthcare, retail, and many other industries. Here, we provide a few high-level examples of computer vision systems.

## A High Level Overview of Computer Vision System with Examples

### Autonomous Vehicles

    

Self-driving cars heavily rely on computer vision to perceive and interpret the environment. They use cameras and sensors to identify objects, pedestrians, traffic signs, lane markings, and other vehicles on the road. Based on the analyzed data, computer vision algorithms help these vehicles make real-time decisions, such as steering, accelerating, or braking. Companies like Tesla, Waymo, and Uber are actively working on this technology to make transportation safer and more efficient.

### Retail and E-commerce

Computer vision is revolutionizing the retail industry. Many online retailers and brick-and-mortar stores are using it for various purposes. One significant application is object recognition and recommendation systems. By analyzing images or videos of products, computer vision algorithms can identify items, understand their features, and recommend similar products to customers. For example, platforms like Amazon, eBay, and Walmart use computer vision to suggest related products based on users' viewing or purchase history. Additionally, in physical stores, computer vision-powered systems can track inventory levels, detect stock shortages, and even analyze customer behavior to optimize store layouts and marketing strategies, ultimately helping retailers enhance their stores.

### Quality Control in Assembly Lines

CV in quality control on assembly lines helps achieve higher accuracy, efficiency, and consistency in detecting and rectifying defects, reducing waste, improving product quality, and streamlining manufacturing processes. There are many domains in which this is used:

**Defect detection**:  CV systems can analyze products on assembly lines in real-time, identifying defects or irregularities that might not be immediately visible to the human eye. For instance, CV can inspect electronic components, automotive parts, or packaged goods in manufacturing to spot imperfections, scratches, dents, or incorrect assembly. These systems compare the product against a standard reference to determine if it meets quality standards.

**Automated Inspection**: Traditional quality control often involves manual inspection, which is time-consuming and prone to human error. CV systems automate this process by using cameras and machine learning algorithms to capture images or videos of products as they move along the assembly line. These images are then analyzed to detect deviations from the standard, ensuring consistency and high quality in mass production.

**Real-time Feedback and Maintenance System**: By integrating CV into assembly lines, manufacturers can receive real-time feedback about product quality. If a defect is detected, the system can trigger immediate actions, such as alerting a human operator, diverting the faulty product for rework, or even adjusting the machinery to correct the issue, minimizing the production of defective items and optimizing the overall production process.

### Medical Image Analysis

    

Medical image analysis applies computer vision and machine learning techniques to interpret and extract information from medical images like X-rays, CT scans, MRIs, ultrasounds, and histopathology slides.

- **Diagnostic Assistance**: Computer vision aids in diagnosing diseases and conditions by analyzing medical images. For instance, in radiology, algorithms can detect abnormalities such as tumors and fractures in X-rays or MRIs. These systems assist healthcare professionals by highlighting areas of concern or providing quantitative data that helps decision-making.

- **Segmentation and Detection**: Medical image analysis involves segmenting and detecting specific structures or anomalies within the images. This process helps isolate organs, tissues, or pathologies for closer examination. For example, in cancer detection, computer vision algorithms can segment and analyze tumors from MRI or CT scans, assisting in treatment planning and monitoring.

- **Treatment Planning and Monitoring**: Computer vision contributes to treatment planning by providing precise measurements, tracking changes over time, and assisting in surgical planning. It helps doctors understand the extent and progression of a disease, enabling them to plan and adjust treatment strategies accordingly. Doctors were already capable of doing most of these tasks, but they had to do them by hand. CV systems can do them automatically, which frees up doctors for other tasks.

- **AI-Assisted Radiology**: AI-powered systems in radiology assist radiologists by automating routine tasks, reducing workload, and improving accuracy. These systems can flag potentially abnormal findings, provide quantitative analysis, and even predict potential health issues based on patterns identified in medical images.

- **Drug Development and Research**:  In drug development and medical research, computer vision techniques assist in analyzing cellular structures, tissue samples, or genetic materials. This aids in understanding diseases at a microscopic level, contributing to the development of new drugs, therapies, or diagnostic tools.

## Challenges for CV Systems

Computer vision systems face a multitude of challenges that arise from the complexity of processing visual information in real-world scenarios, ranging from poor data quality to privacy and ethical concerns, as summarized in the table below:

| Factor                               | Challenges                                                                                                                                                                                                                                    |
|--------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Variability in Data	                 | The data collected from the real world is highly diverse, with variations in lighting, viewpoint, occlusions, and backgrounds, making it challenging for reliable computer vision systems to be developed.                                    |
| Scalability                          | Computer vision systems need to be scalable to manage large datasets and meet real-time processing requirements due to the continuous increase in visual data.                                                                                |
| Accuracy                             | Achieving high accuracy in object detection, scene interpretation, and tracking is a significant challenge, especially in complex or cluttered scenes, often due to noise, irrelevant features, and poor image quality.                       |
| Robustness to Noise	                 | Real-world data is noisy, containing defects, sensor artifacts, and distortions. Computer vision systems must be robust enough to handle and process such noisy data effectively.                                                             |
| Integration with Other Technologies	 | Integrating computer vision with technologies like natural language processing, robotics, or augmented reality poses challenges related to system interoperability, expanding the usability of machine learning and computer vision.          |
| Privacy and Ethical Concerns	        | Real-world applications of computer vision, especially in surveillance, facial recognition, and data gathering, raise concerns about privacy and ethics, necessitating proper handling of databases and personal information.                 |
| Real-time Processing	                | Applications like autonomous vehicles and augmented reality require real-time processing, posing challenges in achieving the necessary computational efficiency, often requiring substantial computational power and capable cloud platforms. |
| Long-term Reliability	               | Maintaining the reliability of computer vision systems over extended periods in real-life scenarios is challenging, as ensuring continued accuracy and flexibility can be difficult.                                                          |
| Generalization                       | Developing models with good generalization across diverse contexts and domains is a significant challenge, requiring the ability to adapt to changing circumstances without extensive retraining.                                             |
| Calibration and Maintenance	         | Calibrating and maintaining hardware, such as cameras and sensors, in real-world settings presents challenges, often due to logistical complications and the need to withstand extreme weather conditions.                                    |

## Ethical Considerations

    

Ethical considerations in computer vision are of paramount importance as this technology becomes increasingly integrated into various aspects of our lives. These considerations existed long before the spread of AI-based technologies; they are intrinsically tied to the field's birth.

The London Hospital survival predictor is a great example of that. Created in 1972, its job was to predict whether patients would recover from a coma. It had a dial that indicated either "survive" or "irreversible brain damage". It was one of the first applications of pattern recognition and of artificial neural networks. Even at this early stage, it raised concerns. Doctors were advised not to make decisions solely from the predictor's judgements, and the machine was never used to remove a patient's life support.

Life has changed between then and now. The world has become a more digital, interconnected place, and the ethical considerations for a model must reflect that. Now, we consider fairness and bias from a global perspective. Fairness, in this context, refers to the property of a model to behave in an equitable manner, without targeted discrimination or unfair bias against a group or individual. Bias refers to an inclination for or against a person or group. Considering fairness and bias can be tricky in practice.

In contrast to performance metrics, there is no single mathematical metric of fairness. To evaluate it, you must understand the problem at hand. Making things more complicated, bias can appear at any point in model development: in the data, the AI design, the deployment, and the model's applications.

    

There are several efforts to assist with this, including the systematic reporting of model risks, limitations, and biases in model cards (special files that accompany the model and provide important information about it). There is a lot more to be said on this topic, which is why there is a dedicated chapter on it in this course. However, there are some key concepts we will introduce here to provide a high-level overview of the ethical considerations involved. We summarize these in the table below.

| Ethical Considerations                | Challenges                                                                                                                                                                                                                                                                                    |
|---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Privacy Concerns	                     | Computer vision often involves collecting and analyzing visual data, raising concerns about individual privacy. Issues include unauthorized surveillance, facial recognition, and the potential for misuse of sensitive information.                                                          |
| Bias and Fairness	                    | Biases in data, algorithms, or the design of computer vision systems can lead to unfair outcomes, perpetuating social inequalities. Ensuring fairness in data collection, algorithm design, and decision-making is crucial to prevent discrimination based on race, gender, or other factors. |
| Accuracy and Accountability	          | Computer vision systems must be accurate and reliable. Accountability measures are necessary to address errors or failures, ensuring that those responsible for system development are held accountable for any unintended consequences.                                                      |
| Consent and Informed Decision Making	 | Obtaining informed consent from individuals whose data is being collected or used by computer vision systems is essential. Users should be informed about how their data will be used and have the right to make informed decisions about its usage.                                          |
| Dual Use Concerns	                    | Computer vision technology can have both beneficial and potentially harmful uses. Ensuring that the technology is not used for malicious purposes, such as surveillance or invasion of privacy, is crucial.                                                                                   |
| Transparency and Explainability	      | Computer vision systems should be transparent in their functioning and decisions. Users should be able to understand how these systems work and the reasons behind their decisions.                                                                                                           |
| Child Protection	                     | Special care must be taken when dealing with visual data involving children. Safeguards should be in place to protect minors from privacy violations or any other potential harm.                                                                                                             |
| Cultural and Contextual Sensitivity	  | Computer vision systems should be sensitive to cultural differences and diverse contexts to avoid misinterpretations or biases based on cultural or regional norms.                                                                                                                           |
| Human Oversight	                      | Human oversight and intervention are crucial in ensuring that computer vision systems operate ethically and make accurate decisions. Humans should have the ability to intervene in cases where the system's decisions might cause harm.                                                      |
| Environmental Impact                  | The development and deployment of computer vision systems should consider their environmental impact. This includes energy consumption, electronic waste, and other ecological factors.                                                                                                       |
| Educational and Ethical Training	     | Training programs and educational initiatives are essential to raise awareness about the ethical implications of computer vision technology among developers, users, and policymakers.                                                                                                        |

### What Is Computer Vision
https://huggingface.co/learn/computer-vision-course/unit1/chapter1/definition.md

# What Is Computer Vision

Let's revisit our example from the previous chapter: kicking a ball. As we have seen, this involves multiple tasks that our brains can do in a split second. The extraction of meaningful information from an image input is at the core of computer vision. But what is computer vision?

## Definition

Computer vision is the science and technology of making machines see. It involves the development of theoretical and algorithmic methods to acquire, process, analyze, and understand visual data, and to use this information to produce meaningful representations, descriptions, and interpretations of the world (*Forsyth & Ponce, Computer Vision: A Modern Approach*).

    

## Deep Learning and the Computer Vision Renaissance

The evolution of computer vision has been marked by a series of incremental advancements in and across its interdisciplinary fields, where each step forward gave rise to breakthrough algorithms, hardware, and data, giving it more power and flexibility. One such leap was the jump to the widespread use of deep learning methods.

Initially, to extract and learn information from an image, you extract features through image-preprocessing techniques (Pre-processing for Computer Vision Tasks). Once you have a group of features describing your image, you apply a classical machine learning algorithm to your dataset of features. This strategy already simplifies things compared to hard-coded rules, but it still relies on domain knowledge and exhaustive feature engineering. A more state-of-the-art approach arises when deep learning methods and large datasets meet. Deep learning (DL) allows machines to automatically learn complex features from raw data. This paradigm shift allowed us to build more adaptive and sophisticated models, causing a renaissance in the field.

The seeds of computer vision were sown long before the rise of deep learning models. During the 1960s, pioneers like David Marr and Hans Moravec wrestled with the fundamental question: can we get machines to see? Early breakthroughs like edge detection algorithms and object recognition were achieved with a mix of cleverness and brute force, which laid the groundwork for developing computer vision systems. Over time, as research and development advanced and hardware capabilities improved, the computer vision community expanded exponentially. This vibrant community is composed of researchers, engineers, data scientists, and passionate hobbyists across the globe, coming from a vast array of disciplines. With open-source and community-driven projects, we are witnessing democratized access to cutting-edge tools and technologies, helping to create a renaissance in this field.

## Interdisciplinary with other fields and Image Understanding

Just as it is hard to draw a line that separates artificial intelligence and computer vision, it is also hard to separate computer vision from its neighboring fields. Take image preprocessing and analysis as an example. A tentative separation is that the input and output of image analysis are always images. However, this is a shortsighted take: under it, even simple tasks, such as calculating the average value of an image, would be classified under computer vision. To clarify their differences, we must introduce a new concept: image understanding.

Image understanding is the process of making sense of the content of an image. It can be defined in three different levels:

**Low-level processes** are primitive operations on images (i.e. image sharpening, changing the contrast). The input and the output are images.

**Mid-level processes** include segmentation, description of objects, and object classification. The input is an image, but the output is a set of attributes associated with that image. This can be done with a combination of image preprocessing and ML algorithms.

**High-level processes** include making sense of the entirety of an image, i.e., recognition of a given object, scene reconstruction, and image-to-text. These are tasks typically associated with human cognition.
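Here is a minimal sketch of the first two levels, assuming Pillow and NumPy are available and using a placeholder image path: a contrast change (low-level, image in and image out) followed by a thresholding step whose output is an attribute of the image (mid-level).

```python
import numpy as np
from PIL import Image, ImageEnhance

image = Image.open("cells.png").convert("L")  # placeholder path

# Low-level process: input is an image, output is an image (contrast change).
enhanced = ImageEnhance.Contrast(image).enhance(1.8)

# Mid-level process: input is an image, output is an attribute of the image
# (here, the fraction of pixels assigned to the bright foreground after thresholding).
pixels = np.array(enhanced)
foreground = pixels > 128
print("foreground fraction:", foreground.mean())
```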

Image analysis is mainly concerned with low and mid-level processes. However, computer vision is interested in mid- and high-level processes. Thus, there is an overlap in the mid-level processes between image analysis and computer vision.

It is essential to remember this since allocating resources to develop a sophisticated model, such as a neural network for data-poor scenarios or simple images, might not be appropriate. From a business point of view, model development costs time and money; it is necessary to know when to use the right tools.

It is usual to combine a "preprocessing" step before moving on to a more robust model. Conversely, the layers of a neural network sometimes perform tasks like these automatically, eliminating the need for explicit preprocessing. For those familiar with data science, image analysis can act as a first exploratory data analysis step. Lastly, classical image analysis methods can also be used for data augmentation to improve the quality and diversity of training data for computer vision models.

## Computer Vision Tasks Overview

We have seen before that computer vision is really hard for computers because they have no previous knowledge of the world. In our example, we start knowing what a ball is, how to track its movement, how objects usually move in space, how to estimate when the ball will reach us, where your foot is, how a foot moves, and how to estimate how much force you need to hit the ball. If we were to break this down into specific computer vision tasks, we would have:
  - Scene Recognition 
  - Object Recognition
  - Object Detection
  - Segmentation (instance, semantic)
  - Tracking 
  - Dynamic Environment Adaptation
  - Path Planning

You will read more about the core tasks of computer vision in the Computer Vision Tasks chapter. But there are many more tasks that computer vision can do! Here is a non-exhaustive list:
  - Image Captioning
  - Image Classification
  - Image Description
  - Anomaly Detection
  - Image Generation
  - Image Restoration
  - Autonomous Exploration
  - Localization

## Task Complexity

The complexity of a given task in the realm of image analysis and computer vision is not solely determined by how noble or difficult a question or task may seem to an informed audience. Instead, it primarily hinges on the properties of the image or data being analyzed. Take, for example, the task of identifying a pedestrian in an image. To a human observer, this might appear straightforward and relatively simple, as we are adept at recognizing people. However, from a computational perspective, the complexity of this task can vary significantly based on factors such as lighting conditions, the presence of occlusions, the resolution of the image, and the quality of the camera. In low-light conditions or with pixelated images, even the seemingly basic task of pedestrian detection can become exceedingly complex for computer vision algorithms, requiring advanced image enhancement and machine learning techniques. Therefore, the challenge in image analysis and computer vision often lies not in the inherent nobility of a task, but in the intricacies of the visual data and the computational methods required to extract meaningful insights from it.

## Link to computer vision applications
As a field, computer vision has a growing importance in society. There are many ethical considerations regarding its applications. For example, a model that is deployed to detect cancer can have terrible consequences if it classifies a cancer sample as healthy. Surveillance technology, such as models that are capable of tracking people, also raises a lot of privacy concerns. This will be discussed in detail in "Unit 12 - Ethics and Biases". We will give you a taste of some of its cool applications in "Applications of Computer Vision".

### Vision
https://huggingface.co/learn/computer-vision-course/unit1/chapter1/motivation.md

# Vision

Most of us know that sunlight is responsible for sustaining life on our planet, but have you ever wondered how it shaped our lives? For starters, almost every being on Earth has some way to sense it (even some bacteria and single-celled organisms). Humans share this ability, but we have a much more complicated system for interacting with light. We capture light through a lens in our eyes; the retina converts it into an electrical signal that passes through a cable-like structure (our nervous system), and that signal gets reconstructed in our brains to tell us what our surroundings look like.

This process is what we call vision. It is a fundamental step in our evolution. It is so important that scientists have hypothesized that the development of centralized nervous systems (which ultimately led to our big brains) followed the advent of vision. It makes sense: without sensors capturing such vast amounts of information, why waste resources on building the machinery required to process it?

## Significance of Vision to Humans

    

If you have ever spontaneously kicked a ball, your brain performed a myriad of tasks unconsciously in a split second. It correctly identified the ball, tracked its movement, predicted its trajectory, calculated the speed at which the ball would arrive at your location, predicted your foot's trajectory, adjusted the strength and angle of impact, and sent the signal from your brain to your foot to change its position. Taking an image as input (in this case, the signal captured by our retina) and transforming it into information (kicking the ball) is the core of computer vision. We will go into more detail about this in the next chapter.

Shockingly, we don't need any formal education for this. We don't attend classes for most of the decisions we make daily. There is no Mental Math 101 for estimating the foot strength required to kick a ball. We learned that from trial and error growing up, and some of us might never have learned it at all. This is a striking contrast to the way we build programs. Programs are mostly rule-based.

Let’s try to replicate just the first task that our brain did: detecting that there is a ball. One way to do it is to define what a ball is and then exhaustively search for one in the image. Defining what a ball is is actually difficult. Balls can be as small as tennis balls but as big as Zorb balls, so size won’t help us much. We could try to describe its shape, but some balls, like rugby, are not always perfectly spherical. Not everything spherical is a ball either, otherwise bubbles, candies, and even our planet would all be considered balls.

    

## Pure Programming vs Machine Learning Approach
We could attempt a tentative definition and say "a ball is a sphere-like object used in sports or in play". It seems correct, but we run into another problem. How do you know they are playing a sport? What do you use to detect that they are doing so? What if it was a dog with a ball? Is that not a ball? What if it is a ball on its own, with no people and no sports? And what about something like a shuttlecock? It is something we play with that is not perfectly spherical, yet we do not consider it a ball. All those nuances add up, so a simple problem that humans solve unconsciously is already hard to break down into simple rules.

 We know these things ourselves. This implicit understanding comes from the mental images we construct over the years of what balls look like. While a shuttlecock does not fit into that mental image of a ball, it is hard to explain why. It is not just due to its size or due to the feathers. There are similar-sized balls and even if we cover a ball with feathers, we would still recognize it as a ball. 

    

All of this is to show you that our ability to distinguish objects extends beyond strict definitions; we often generalize from related concepts and rely on context clues. When a familiar concept takes on a different form, we can still identify it without significant discomfort--this ability is natural to us. However, it is not inherent in systems governed by rigid, hard-coded rules.

This underscores the necessity for more robust systems--ones capable of adapting to a variety of scenarios. This is why the field is so closely related to artificial intelligence. Vision is context-rich, and we need models that are capable of leveraging these clues similarly to what we do.

Let's take the example of Indiana Jones running from a boulder. There is a ball and there is running, but no one would really call that a sport! We know this because we rely on some context clues. The ball Indiana Jones is running away from looks heavy and twice his size. His face reflects his distress. The space is very narrow and it looks like a cave which is unusual for sports. Plus, we recognize his attire and that is not usually how players dress themselves. 

## The Motivation Behind Creating Artificial Systems Capable of Simulating Human Vision and Cognition

Although they have similar input and output, human vision and computer vision are different processes. Sometimes they overlap. However, computer vision is primarily concerned with developing and understanding algorithms and models in vision systems and their decisions. It is not constrained to the creation of systems that replicate human vision. It can be used for problems that would be too tedious, time-consuming, expensive, or error-prone for humans to do.
Our ball example is still a simple one, and you might not think it is super useful. However, a model capable of tracking a ball can be used in sports events to provide faster and fairer decisions during gameplay. With the popularization of image-to-text and text-to-speech models, we could also make live sports events more accessible for people with vision disabilities by automatically tracking the ball and its players and describing the action in real time. Thus, even simple use cases can have a positive impact on society. We will discuss this further in Section 3.

We are now on the cusp of an AI renaissance.  A moment in time when we can train, deploy, and share our model freely. A moment when our models can detect things in images that we would not be able to see ourselves. 

The limits of computer vision have been expanded, too. We can now generate images from text and generate descriptive text from images. And we can do that from our smartphones. Computer vision applications are everywhere. The possibilities are ours to explore and that is precisely what we will do in this course.  

We welcome you to the field of computer vision. Take a seat. Enjoy the ride. It is going to be amazing.

### Transfer Learning and Fine-tuning Vision Transformers for Image Classification
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformers-for-image-classification.md

# Transfer Learning and Fine-tuning Vision Transformers for Image Classification

## Introduction

As the Transformers architecture scaled well in Natural Language Processing, the same architecture was applied to images by creating small patches of the image and treating them as tokens. The result was a Vision Transformer (ViT). Before we get started with transfer learning / fine-tuning concepts, let's compare Convolutional Neural Networks (CNNs) with Vision Transformers.

## Vision Transformer (ViT): A Summary

To summarize, in a Vision Transformer, images are reorganized into 2D grids of patches, and the model is trained on those patches.

The main idea is illustrated in the picture below:
![Vision Transformer](https://huggingface.co/datasets/hf-vision/course-assets/blob/main/Screenshot%20from%202024-12-27%2014-25-49.png)
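Here is a minimal sketch of that reorganization, assuming a 224×224 input and 16×16 patches (illustrative values, not requirements of the architecture):

```python
import torch

# A dummy batch of one 224x224 RGB image.
images = torch.randn(1, 3, 224, 224)

patch_size = 16
# Cut the image into a 14x14 grid of 16x16 patches and flatten each patch
# into a vector -- these flattened patches are the "tokens" a ViT consumes.
patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * patch_size * patch_size)

print(patches.shape)  # torch.Size([1, 196, 768]) -> 196 tokens of dimension 768
```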

But there is a catch! Convolutional Neural Networks (CNNs) are designed with assumptions that are missing in the ViT. These assumptions are based on how we, as humans, perceive objects in images. They are described in the following section.

## What are the differences between CNNs and Vision Transformers?

### Inductive Bias

Inductive bias is a term used in machine learning to describe the set of assumptions that a learning algorithm uses to make predictions. In simpler terms, inductive bias is like a shortcut that helps a machine learning model make educated guesses based on the information it has seen so far.

Here are a couple of inductive biases we observe in CNNs:

- Translational Equivariance: an object can appear anywhere in the image, and CNNs can detect its features.
- Locality: pixels in an image interact mainly with their surrounding pixels to form features.

CNN models encode these two biases very well. ViTs do not have them built in, which is why, up to a certain dataset size, CNNs actually perform better than ViTs. But ViTs have another power: the transformer architecture, being composed (mostly) of different types of linear functions, is highly scalable. That, in turn, allows ViTs to overcome the lack of the above two inductive biases when trained on massive amounts of data!

### But how can everyone get access to massive datasets?

It's not feasible for everyone to train a Vision Transformer on millions of images to get good performance. Instead, one can use openly available model weights from places such as the [Hugging Face Hub](https://huggingface.co/models?sort=trending).

What do you do with the pre-trained model? You can apply transfer learning and fine-tune it!

## Transfer Learning & Fine-Tuning for Image Classification

The idea of transfer learning is that we can leverage the features learned by the Vision Transformers trained on a very large dataset and apply these features to our dataset. This can lead to significant improvements in model performance, especially when our dataset has limited data available for training.

Since we are taking advantage of the learned features, we do not need to update the entire model either. By freezing most of the weights, we can train only certain layers to get excellent performance with less training time and low GPU consumption.
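Here is a hedged sketch of what freezing the backbone looks like with 🤗 Transformers; the checkpoint and the number of labels are illustrative assumptions rather than part of the tutorial.

```python
from transformers import ViTForImageClassification

# Checkpoint and label count are illustrative assumptions.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=37
)

# Transfer learning: freeze the pre-trained backbone...
for param in model.vit.parameters():
    param.requires_grad = False

# ...and train only the newly initialized classification head.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable:,} of {total:,} parameters")
```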

### Multi-class Image Classification

You can go through the transfer learning tutorial using Vision Transformers for image classification in this notebook:

  

This is what we'll be building: an image classifier to tell apart dog and cat breeds:

---

It might be that the domain of your dataset is not very similar to the pre-trained model's dataset. Yet, instead of training a Vision Transformer from scratch, we can choose to update the weights of the entire pre-trained model albeit with a lower learning rate, which will "fine-tune" the model to perform well with our data.

However, in most scenarios, applying transfer learning is sufficient in the case of Vision Transformers.
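If you do opt for full fine-tuning, the setup differs from transfer learning mainly in that all weights stay trainable and the learning rate is kept deliberately small. Here is a minimal sketch with illustrative (not prescriptive) values:

```python
from transformers import TrainingArguments

# Fine-tuning the whole model: keep every weight trainable, but use a small
# learning rate so the pre-trained features are only gently adjusted.
# All values below are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="vit-finetuned",
    learning_rate=2e-5,                # much lower than a from-scratch run
    num_train_epochs=3,
    per_device_train_batch_size=16,
    remove_unused_columns=False,
)
```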

### Multi-label Image Classification

The tutorial above teaches multi-class image classification, where each image only has 1 class assigned to it. What about scenarios where each image has multiple labels in a multi-class dataset?

This notebook will walk you through a fine-tuning tutorial using Vision Transformer for multi-label image classification:

  

We'll also be learning how to use [Hugging Face Accelerate](https://huggingface.co/docs/accelerate/index) to write our custom training loops.
This is what you can expect to see as the outcome of the multi-label classification tutorial:

---

### Additional Resources

- Original Vision Transformers Paper: _An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [Paper](https://huggingface.co/papers/2010.11929)_
- Swin Transformers Paper: _Swin Transformer: Hierarchical Vision Transformer using Shifted Windows [Paper](https://huggingface.co/papers/2103.14030)_
- A systematic empirical study to better understand the interplay between the amount of training data, regularization, augmentation, model size, and compute budget for Vision Transformers: _How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers [Paper](https://huggingface.co/papers/2106.10270)_

### Vision Transformers for Object Detection
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformer-for-object-detection.md

# Vision Transformers for Object Detection 

This section will describe how object detection tasks are achieved using Vision Transformers. We will understand how to fine-tune existing pre-trained object detection models for our use case. Before starting, check out this HuggingFace Space, where you can play around with the final output.

## Introduction

![Object detection example](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/object_detection_wiki.png)

Object detection is a computer vision task that involves identifying and localizing objects within an image or video. It consists of two main steps:

- First, recognizing the types of objects present (such as cars, people, or animals).
- Second, determining their precise locations by drawing bounding boxes around them.

These models typically receive images (static or frames from videos) as their inputs, with multiple objects present in each image. For example, consider an image containing several objects such as cars, people, bicycles, and so on. Upon processing the input, these models produce a set of numbers that convey the following information:  

- Location of the object (XY coordinates of the bounding box).
- Class of the object.

There are a lot of applications of object detection. One of the most significant examples is in the field of autonomous driving, where object detection is used to detect different objects around the car (like pedestrians, road signs, traffic lights, etc.) that become inputs for decision-making.

To deepen your understanding of the ins-and-outs of object detection, check out our [dedicated chapter](https://huggingface.co/learn/computer-vision-course/unit6/basic-cv-tasks/object_detection) on Object Detection 🤗.

### The Need to Fine-tune Models in Object Detection 🤔

Should you build a new model, or alter an existing one? That is an awesome question. Training an object detection model from scratch means:

- Doing already done research over and over again.
- Writing repetitive model code, training them, and maintaining different repositories for different use cases.
- A lot of experimentation and waste of resources.

Rather than doing all this, take a well-performing pre-trained model (a model that does an awesome job of recognizing general features), and tweak or re-tune its weights (or some part of its weights) to adapt it to your use case. We believe, or assume, that the pre-trained model has already learned enough to extract significant features from an image in order to locate and classify objects. So, if new objects are introduced, the same model can be trained for a short time with little compute to start detecting those new objects with the help of the already-learned and new features.

By the end of this tutorial, you should be able to build a full pipeline (from loading datasets to fine-tuning a model and running inference) for an object detection use case.

## Installing Necessary Libraries

Let's start with installation. Just execute the below cells to install the necessary packages. For this tutorial, we will be using Hugging Face Transformers and PyTorch.

```bash
!pip install -U -q datasets transformers[torch] evaluate timm albumentations accelerate
```

## Scenario

To make this tutorial interesting, let's consider a real-world example. Consider this scenario: construction workers require the utmost safety when working in construction areas. Basic safety protocol requires wearing a helmet at all times. Since there are many construction workers, it is hard to keep an eye on everyone all the time.

But, if we can have a camera system that can detect persons and whether the person is wearing a helmet or not in real-time, that would be awesome, right?

So, we are going to fine-tune a lightweight object detection model for doing just that. Let's dive in.  

### Dataset

For the above scenario, we will use the [hardhat](https://huggingface.co/datasets/hf-vision/hardhat) dataset provided by [Northeastern University China](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/7CBGOS). We can download and load this dataset with  🤗 `datasets`.  

```python
from datasets import load_dataset

dataset = load_dataset("anindya64/hardhat")
dataset
```

This will give you the following data structure:

```
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'objects'],
        num_rows: 5297
    })
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'objects'],
        num_rows: 1766
    })
})
```

Above is a [DatasetDict](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict), which is an efficient dict-like structure containing the whole dataset in train and test splits. As you can see, under each split (train and test), we have `features` and `num_rows`. Under features, we have the `image`, a [Pillow Object](https://realpython.com/image-processing-with-the-python-pillow-library/), the id of the image, height and width, and objects. 
Now let's see what each datapoint (in train/test set) looks like. To do that, run the following line:

```python
dataset["train"][0]
```

And this will give you the following structure:

```
{'image': ,
 'image_id': 1,
 'width': 500,
 'height': 375,
 'objects': {'id': [1, 1],
  'area': [3068.0, 690.0],
  'bbox': [[178.0, 84.0, 52.0, 59.0], [111.0, 144.0, 23.0, 30.0]],
  'category': ['helmet', 'helmet']}}
```

As you can see, `objects` is another dict containing the object ids (which are the class ids here), the area of the objects, the bounding box coordinates (`bbox`), and the category (or the label). Here is a more detailed explanation of each of the keys and values of a data element.

- `image`: This is a Pillow Image object that helps to look into the image directly before even loading from the path.
- `image_id`: Denotes the image's index number within the train file.
- `width`: The width of the image.
- `height`: The height of the image.
- `objects`:  Another dictionary containing information about annotation. This contains the following:
    - `id`: A list, where the length of the list denotes the number of objects and the value of each denotes the class index.
    - `area`: The area of the object.
    - `bbox`: Denotes bounding box coordinates of the object.
    - `category`: The class (string) of the object.

Now let's properly extract the train and test samples. For this tutorial, we have around 5000 training samples and 1700 test samples. 

```python
# First, extract out the train and test set

train_dataset = dataset["train"]
test_dataset = dataset["test"]
```

Now that we know what a sample data point contains, let's start by plotting that sample. Here we are going to first draw the image and then also draw the corresponding bounding box.  

Here is what we are going to do:

1. Get the image and its corresponding height and width.
2. Make a draw object that can easily draw text and lines on image.
3. Get the annotations dict from the sample.
4. Iterate over it.
5. For each annotation, get the bounding box coordinates: x (where the bounding box starts horizontally), y (where the bounding box starts vertically), w (the width of the bounding box), and h (the height of the bounding box).
6. If the bounding box measures are normalized, scale them to pixel values; otherwise, leave them as they are.
7. Finally, draw the rectangle and the class category text.

```python 
import numpy as np
from PIL import Image, ImageDraw

def draw_image_from_idx(dataset, idx):
    sample = dataset[idx]
    image = sample["image"]
    annotations = sample["objects"]
    draw = ImageDraw.Draw(image)
    width, height = sample["width"], sample["height"]

    for i in range(len(annotations["id"])):
        box = annotations["bbox"][i]
        class_idx = annotations["id"][i]
        x, y, w, h = tuple(box)
        if max(box) > 1.0:
            x1, y1 = int(x), int(y)
            x2, y2 = int(x + w), int(y + h)
        else:
            x1 = int(x * width)
            y1 = int(y * height)
            x2 = int((x + w) * width)
            y2 = int((y + h) * height)
        draw.rectangle((x1, y1, x2, y2), outline="red", width=1)
        draw.text((x1, y1), annotations["category"][i], fill="white")
    return image

draw_image_from_idx(dataset=train_dataset, idx=10)
```

Now that we have a function to plot a single image, let's write a simple function that uses it to plot multiple images. This will help us with some analysis.

```python
import matplotlib.pyplot as plt

def plot_images(dataset, indices):
    """
    Plot images and their annotations.
    """
    num_rows = len(indices) // 3
    num_cols = 3
    fig, axes = plt.subplots(num_rows, num_cols, figsize=(15, 10))

    for i, idx in enumerate(indices):
        row = i // num_cols
        col = i % num_cols

        # Draw image
        image = draw_image_from_idx(dataset, idx)

        # Display image on the corresponding subplot
        axes[row, col].imshow(image)
        axes[row, col].axis("off")

    plt.tight_layout()
    plt.show()

# Now use the function to plot images

plot_images(train_dataset, range(9))
```
Running the function will give us a beautiful collage shown below.

![input-image-plot](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/object_detection_train_image_with_annotation_plots.png)

## AutoImageProcessor

Before fine-tuning the model, we must preprocess the data in such a way that it matches exactly with the approach used during the time of pre-training. HuggingFace [AutoImageProcessor](https://huggingface.co/docs/transformers/v4.36.0/en/model_doc/auto#transformers.AutoImageProcessor) takes care of processing the image data to create `pixel_values`, `pixel_mask`, and `labels` that a DETR model can train with.

Now, let us instantiate the image processor from the same checkpoint as the model we want to fine-tune.

```python
from transformers import AutoImageProcessor

checkpoint = "facebook/detr-resnet-50-dc5"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```

## Preprocessing the Dataset

Before passing the images to the `image_processor`, let's also apply different types of augmentations to the images along with their corresponding bounding boxes.

In simple terms, augmentations are a set of random transformations like rotations, resizing, etc. They are applied to get more samples and to make the vision model more robust to different image conditions. We will use the [albumentations](https://github.com/albumentations-team/albumentations) library to achieve this. It lets you create random transformations of the images so that your sample size increases for training.

```python
import albumentations
import numpy as np
import torch

transform = albumentations.Compose(
    [
        albumentations.Resize(480, 480),
        albumentations.HorizontalFlip(p=1.0),
        albumentations.RandomBrightnessContrast(p=1.0),
    ],
    bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]),
)
```

Once we initialize all the transformations, we need to make a function that formats the annotations and returns a list of annotations in a very specific format.

This is because the `image_processor` expects the annotations to be in the following format: `{'image_id': int, 'annotations': List[Dict]}`, where each dictionary is a COCO object annotation. 

```python
def formatted_anns(image_id, category, area, bbox):
    annotations = []
    for i in range(0, len(category)):
        new_ann = {
            "image_id": image_id,
            "category_id": category[i],
            "isCrowd": 0,
            "area": area[i],
            "bbox": list(bbox[i]),
        }
        annotations.append(new_ann)

    return annotations
```

Finally, we combine the image and annotation transformations to do transformations over the whole batch of dataset.

Here is the final code to do so:

```python
# transforming a batch

def transform_aug_ann(examples):
    image_ids = examples["image_id"]
    images, bboxes, area, categories = [], [], [], []
    for image, objects in zip(examples["image"], examples["objects"]):
        image = np.array(image.convert("RGB"))[:, :, ::-1]
        out = transform(image=image, bboxes=objects["bbox"], category=objects["id"])

        area.append(objects["area"])
        images.append(out["image"])
        bboxes.append(out["bboxes"])
        categories.append(out["category"])

    targets = [
        {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)}
        for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes)
    ]

    return image_processor(images=images, annotations=targets, return_tensors="pt")
```

Finally, all you have to do is apply this preprocessing function to the entire dataset. You can achieve this by using the HuggingFace 🤗 Datasets [with_transform](https://huggingface.co/docs/datasets/v2.15.0/en/package_reference/main_classes#datasets.Dataset.with_transform) method.

```python
# Apply transformations for both train and test dataset

train_dataset_transformed = train_dataset.with_transform(transform_aug_ann)
test_dataset_transformed = test_dataset.with_transform(transform_aug_ann)
```

Now let's see what a transformed train dataset sample looks like:

```python
train_dataset_transformed[0]
```

This will return a dictionary of tensors. What we mainly require here are the `pixel_values`, which represent the image, the `pixel_mask`, which is the attention mask, and the `labels`. Here is what one data point looks like:

```
{'pixel_values': tensor([[[-0.1657, -0.1657, -0.1657,  ..., -0.3369, -0.4739, -0.5767],
          [-0.1657, -0.1657, -0.1657,  ..., -0.3369, -0.4739, -0.5767],
          [-0.1657, -0.1657, -0.1828,  ..., -0.3541, -0.4911, -0.5938],
          ...,
          [-0.4911, -0.5596, -0.6623,  ..., -0.7137, -0.7650, -0.7993],
          [-0.4911, -0.5596, -0.6794,  ..., -0.7308, -0.7993, -0.8335],
          [-0.4911, -0.5596, -0.6794,  ..., -0.7479, -0.8164, -0.8507]],
 
         [[-0.0924, -0.0924, -0.0924,  ...,  0.0651, -0.0749, -0.1800],
          [-0.0924, -0.0924, -0.0924,  ...,  0.0651, -0.0924, -0.2150],
          [-0.0924, -0.0924, -0.1099,  ...,  0.0476, -0.1275, -0.2500],
          ...,
          [-0.0924, -0.1800, -0.3200,  ..., -0.4426, -0.4951, -0.5301],
          [-0.0924, -0.1800, -0.3200,  ..., -0.4601, -0.5126, -0.5651],
          [-0.0924, -0.1800, -0.3200,  ..., -0.4601, -0.5301, -0.5826]],
 
         [[ 0.1999,  0.1999,  0.1999,  ...,  0.6705,  0.5136,  0.4091],
          [ 0.1999,  0.1999,  0.1999,  ...,  0.6531,  0.4962,  0.3916],
          [ 0.1999,  0.1999,  0.1825,  ...,  0.6356,  0.4614,  0.3568],
          ...,
          [ 0.4788,  0.3916,  0.2696,  ...,  0.1825,  0.1302,  0.0953],
          [ 0.4788,  0.3916,  0.2696,  ...,  0.1651,  0.0953,  0.0605],
          [ 0.4788,  0.3916,  0.2696,  ...,  0.1476,  0.0779,  0.0431]]]),
 'pixel_mask': tensor([[1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         ...,
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1],
         [1, 1, 1,  ..., 1, 1, 1]]),
 'labels': {'size': tensor([800, 800]), 'image_id': tensor([1]), 'class_labels': tensor([1, 1]), 'boxes': tensor([[0.5920, 0.3027, 0.1040, 0.1573],
         [0.7550, 0.4240, 0.0460, 0.0800]]), 'area': tensor([8522.2217, 1916.6666]), 'iscrowd': tensor([0, 0]), 'orig_size': tensor([480, 480])}}
```

We are almost there 🚀. As a last preprocessing step, we need to write a custom `collate_fn`. Now what is a `collate_fn` ?

A `collate_fn` is responsible for taking a list of samples from a dataset and converting them into a batch that matches the model's input format.

In general, a `DataCollator` typically performs tasks such as padding, truncating, etc. In a custom collate function, we define what and how we want to group the data into batches, or simply, how to represent each batch.

The data collator mainly puts the data together and then preprocesses it. Let's make our collate function.

```python
def collate_fn(batch):
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad(pixel_values, return_tensors="pt")
    labels = [item["labels"] for item in batch]
    batch = {}
    batch["pixel_values"] = encoding["pixel_values"]
    batch["pixel_mask"] = encoding["pixel_mask"]
    batch["labels"] = labels
    return batch
```

## Training a DETR Model

All the heavy lifting is done. Now, all that is left is to assemble the pieces of the puzzle one by one. Let's go!

The training procedure involves the following steps:

1. Loading the base (pre-trained) model with [AutoModelForObjectDetection](https://huggingface.co/docs/transformers/v4.36.0/en/model_doc/auto#transformers.AutoModelForObjectDetection) using the same checkpoint as in the preprocessing.

2. Defining all the hyperparameters and additional arguments inside [TrainingArguments](https://huggingface.co/docs/transformers/v4.36.0/en/main_classes/trainer#transformers.TrainingArguments).

3. Pass the training arguments to the [HuggingFace Trainer](https://huggingface.co/docs/transformers/v4.36.0/en/main_classes/trainer#transformers.Trainer), along with the model, the datasets, and the image processor.

4. Call the `train()` method and fine-tune your model.

> When loading the model from the same checkpoint that you used for the preprocessing, remember to pass the `label2id` and `id2label` maps that you created earlier from the dataset’s metadata. Additionally, we specify `ignore_mismatched_sizes=True` to replace the existing classification head with a new one.

```python
from transformers import AutoModelForObjectDetection

id2label = {0: "head", 1: "helmet", 2: "person"}
label2id = {v: k for k, v in id2label.items()}

model = AutoModelForObjectDetection.from_pretrained(
    checkpoint,
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,
)
```

Before proceeding further, log in to Hugging Face Hub to upload your model on the fly while training. In this way, you do not need to handle the checkpoints and save them somewhere. 
```python
from huggingface_hub import notebook_login

notebook_login()
```

Once done, let's start training the model. We start by defining the training arguments and creating a trainer object that uses those arguments to do the training, as shown here:

```python
from transformers import TrainingArguments
from transformers import Trainer

# Define the training arguments

training_args = TrainingArguments(
    output_dir="detr-resnet-50-hardhat-finetuned",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    max_steps=1000,
    fp16=True,
    save_steps=10,
    logging_steps=30,
    learning_rate=1e-5,
    weight_decay=1e-4,
    save_total_limit=2,
    remove_unused_columns=False,
    push_to_hub=True,
)

# Define the trainer

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collate_fn,
    train_dataset=train_dataset_transformed,
    eval_dataset=test_dataset_transformed,
    tokenizer=image_processor,
)

trainer.train()
```

Once training is finished, you can delete the model, because the checkpoints are already uploaded to the Hugging Face Hub.

```python
del model
torch.cuda.synchronize()
```

### Testing and Inference

Now we will run inference with our newly fine-tuned model. For this tutorial, we will test on this image:

![input-test-image](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/test_input_for_od.png)

First, we write some very simple code for running object detection inference on new images. We start with inference on a single image, and after that we will put everything together into a function.

```python
import requests
from PIL import Image
from transformers import pipeline

# download a sample image

url = "https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/test-helmet-object-detection.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# make the object detection pipeline

obj_detector = pipeline(
    "object-detection", model="anindya64/detr-resnet-50-dc5-hardhat-finetuned"
)
results = obj_detector(image)

print(results)
```

Now let's write a very simple function to plot the results on our image. From the results we get the score, label, and corresponding bounding box coordinates, which we will use to draw on the image.

```python
import numpy as np
from PIL import ImageDraw


def plot_results(image, results, threshold=0.7):
    image = Image.fromarray(np.uint8(image))
    draw = ImageDraw.Draw(image)
    for result in results:
        score = result["score"]
        label = result["label"]
        box = list(result["box"].values())
        if score > threshold:
            x, y, x2, y2 = tuple(box)
            draw.rectangle((x, y, x2, y2), outline="red", width=1)
            draw.text((x, y), label, fill="white")
            draw.text(
                (x + 0.5, y - 0.5),
                text=str(score),
                fill="green" if score > 0.7 else "red",
            )
    return image
```

And finally use this function for the same test image we used.

```python
results = obj_detector(image)
plot_results(image, results)
```

And this will plot the output below:

![output-test-image-plot](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/test_output_for_od.png)

Now, let's club everything together into a simple function.

```python
def predict(image, pipeline, threshold=0.7):
    results = pipeline(image)
    return plot_results(image, results, threshold)

# Let's test for another test image

img = test_dataset[0]["image"]
predict(img, obj_detector)
```

Let's even plot multiple images using our inference function on a small test sample. 

```python 
import matplotlib.pyplot as plt
from tqdm.auto import tqdm


def plot_images(dataset, indices):
    """
    Plot images with the model's predicted bounding boxes.
    """
    num_rows = len(indices) // 3
    num_cols = 3
    fig, axes = plt.subplots(num_rows, num_cols, figsize=(15, 10))

    for i, idx in tqdm(enumerate(indices), total=len(indices)):
        row = i // num_cols
        col = i % num_cols

        # Draw image
        image = predict(dataset[idx]["image"], obj_detector)

        # Display image on the corresponding subplot
        axes[row, col].imshow(image)
        axes[row, col].axis("off")

    plt.tight_layout()
    plt.show()

plot_images(test_dataset, range(6))
```
Running this function will give us an output like this:

![test-sample-output-plot](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/sample_od_test_set_inference_output.png)

Well, that's not bad. We can improve the results further with more fine-tuning. You can find this fine-tuned checkpoint on the Hub [here](https://huggingface.co/hf-vision/detr-resnet-50-dc5-harhat-finetuned).

### Knowledge Distillation with Vision Transformers
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/knowledge-distillation.md

# Knowledge Distillation with Vision Transformers

We are going to learn about Knowledge Distillation, the method behind [DistilGPT2](https://huggingface.co/distilgpt2) and [DistilBERT](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english), two of _the most downloaded models on the Hugging Face Hub!_

Presumably, we've all had teachers who "teach" by simply providing us the correct answers and then testing us on questions we haven't seen before, analogous to supervised learning, where we provide a labeled dataset for the model to train on. Instead of having a model train only on labels, however, we can use [Knowledge Distillation](https://arxiv.org/abs/1503.02531) to arrive at a much smaller model that performs comparably to the larger model and runs much faster to boot.

## Intuition Behind Knowledge Distillation

Imagine you were given this multiple-choice question:

![Multiple Choice Question](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/multiple-choice-question.png)

If you had someone just tell you, "The answer is Draco Malfoy," that doesn't teach you a whole lot about each of the characters' relative relationships with Harry Potter.

On the other hand, if someone tells you, "I am very confident it is not Ron Weasley, I am somewhat confident it is not Neville Longbottom, and I am very confident that it _is_ Draco Malfoy," this gives you some information about these characters' relationships to Harry Potter! This is precisely the kind of information that gets passed down to our student model under the Knowledge Distillation paradigm.

## Distilling the Knowledge in a Neural Network

In the paper [_Distilling the Knowledge in a Neural Network_](https://arxiv.org/abs/1503.02531), Hinton et al. introduced the training methodology known as knowledge distillation, taking inspiration from _insects_, of all things. Just as insects transition from larval to adult forms that are optimized for different tasks, large-scale machine learning models can initially be cumbersome, like larvae, for extracting structure from data but can distill their knowledge into smaller, more efficient models for deployment.

The essence of Knowledge Distillation is using the predicted logits from a teacher network to pass information to a smaller, more efficient student model. We do this by re-writing the loss function to contain a _distillation loss_, which encourages the student model's distribution over the output space to approximate the teacher's.

The distillation loss is formulated as:

![Distillation Loss](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/KL-Loss.png)

The KL loss refers to the [Kullback-Leibler Divergence](https://en.wikipedia.org/wiki/Kullback–Leibler_divergence) between the teacher and the student's output distributions. The overall loss for the student model is then formulated as the sum of this distillation loss with the standard cross-entropy loss over the ground-truth labels.
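
To make this concrete, here is a minimal sketch of such a loss in PyTorch. The temperature and the weighting factor `alpha` are common knowledge-distillation hyperparameters that are not specified in the text above, so treat the values here as placeholders; the fully worked-out version lives in the notebook linked below.

```python
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soften both output distributions with a temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence between the teacher's and the student's softened distributions.
    kd_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature**2

    # Standard cross-entropy against the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Weighted combination of the two terms.
    return alpha * kd_loss + (1 - alpha) * ce_loss
```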

To see this loss function implemented and a fully worked-out example in Python, check out the [notebook for this section](https://github.com/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/KnowledgeDistillation.ipynb).

  

# Leveraging Knowledge Distillation for Edge Devices

Knowledge distillation has become increasingly important as AI models are deployed on edge devices. Deploying a large model, say one that is 1 GB in size with a latency of 1 second, is impractical for real-time applications due to high computational and memory requirements, which are primarily attributable to the model's size. As a result, the field has embraced knowledge distillation, a technique that can reduce model parameters by over 90% with minimal performance degradation.

## The Consequences (Good and Bad) of Knowledge Distillation

### 1. Entropy Gain

In the context of information theory, entropy is analogous to its counterpart in physics, where it measures the "chaos" or disorder within a system. In our scenario, it quantifies the amount of information a distribution contains. Consider the following example:

- Which is harder to remember: `[0, 1, 0, 0]` or `[0.2, 0.5, 0.2, 0.1]`?

The first vector, `[0, 1, 0, 0]`, is easier to remember and compress, as it contains less information: it can be summarized as "a 1 in the second position". On the other hand, `[0.2, 0.5, 0.2, 0.1]` contains more information.
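
As a quick illustration (the `entropy` helper below is ours, not part of the course notebook), we can compute the Shannon entropy of the two vectors above:

```python
import torch


def entropy(p):
    # Shannon entropy in bits; terms with zero probability contribute nothing.
    p = torch.tensor(p)
    p = p[p > 0]
    return float(-(p * torch.log2(p)).sum())


print(entropy([0.0, 1.0, 0.0, 0.0]))  # 0.0  -> fully certain, nothing extra to store
print(entropy([0.2, 0.5, 0.2, 0.1]))  # ~1.76 -> a more "chaotic" distribution
```
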
Building on that, suppose we trained an 80M-parameter network on ImageNet and then distilled it (as discussed earlier) into a 5M-parameter student model. We would find that the entropy of the teacher model's outputs is much lower than that of the student model's. This means that the output of the student model, even when correct, is more chaotic than the teacher's. This comes down to a simple fact: the teacher's additional parameters help it discern between classes more easily, as it extracts more features. This perspective on knowledge distillation is very interesting and is actively being researched, either by using the student's entropy as a loss term or by applying similar physics-inspired metrics (such as energy) to reduce it.

### 2. Coherent Gradient Updates

Models learn iteratively by minimizing a loss function and updating their parameters through gradient descent. Consider a set of parameters `P = {w1, w2, w3, ..., wn}`, whose role in the teacher model is to activate when detecting a sample of class A. If an ambiguous sample resembles class A but belongs to class B, the model's gradient update will be aggressive after the misclassification, leading to instability. In contrast, the distillation process, with the teacher model's soft targets, promotes more stable and coherent gradient updates during training, resulting in a smoother learning process for the student model.

### 3. Ability to Train on Unlabeled Data

The presence of a teacher model allows the student model to train on unlabeled data. The teacher model can generate pseudo-labels for these unlabeled samples, which the student model can then use for training. This approach significantly increases the amount of usable training data.
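
Here is a minimal, self-contained sketch of that idea. The tiny models and the random batch stand in for a real teacher, student, and unlabeled dataset:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder teacher and student; in practice the teacher is a large pre-trained model.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

unlabeled_images = torch.randn(8, 3, 32, 32)  # stands in for a batch of unlabeled images

# The teacher produces pseudo-labels for the unlabeled batch...
teacher.eval()
with torch.no_grad():
    pseudo_labels = teacher(unlabeled_images).argmax(dim=-1)

# ...and the student trains on them as if they were ground truth.
loss = F.cross_entropy(student(unlabeled_images), pseudo_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```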

### 4. A Shift in Perspective

Deep learning models are typically trained with the assumption that providing enough data will allow them to approximate a function `F` that accurately represents the underlying phenomenon. However, in many cases, data scarcity makes this assumption unrealistic. The traditional approach involves building larger models and fine-tuning them iteratively to achieve optimal results. In contrast, knowledge distillation shifts this perspective: given that we already have a well-trained teacher model `F`, the goal becomes approximating `F` using a smaller model `f`.

### Convolutional Vision Transformer (CvT)
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/cvt.md

# Convolutional Vision Transformer (CvT)

In this section, we will do a deep dive into the Convolutional Vision Transformer (CvT), a variant of the Vision Transformer (ViT) [[1]](#vision-transformer) that is extensively used for image classification tasks in computer vision.

## Recap

Before going into CvT, let's have a small recap of the ViT architecture covered in the previous sections, to better appreciate the CvT architecture. ViT decomposes each image into a sequence of fixed-length tokens (i.e., non-overlapping patches) and then applies multiple standard Transformer layers, consisting of Multi-head Self-Attention and a Position-wise Feed-Forward module (FFN), to model global relations for classification.

## Overview

The Convolutional Vision Transformer (CvT) model was proposed in CvT: Introducing Convolutions to Vision Transformers [[2]](#cvt) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. CvT employs all the benefits of CNNs (_local receptive fields_, _shared weights_, and _spatial subsampling_, along with _shift_, _scale_, and _distortion invariance_) while keeping the merits of Transformers (_dynamic attention_, _global context fusion_, and _better generalization_). CvT achieves superior performance while maintaining computational efficiency compared to ViT. Furthermore, due to the built-in local context structure introduced by convolutions, CvT no longer requires a position embedding, giving it a potential advantage for adaptation to a wide range of vision tasks requiring variable input resolution.

## Architecture

![CvT Architecture](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/cvt_architecture.png)
_(a) Overall architecture, showing the hierarchical multi-stage structure facilitated by the Convolutional Token Embedding layer. (b) Details of the Convolutional Transformer Block, which contains the convolutional projection as the first layer. [[2]](#cvt)_

The image of the CvT architecture above illustrates the main steps of its 3-stage pipeline. At its core, CvT blends two convolution-based operations into the Vision Transformer architecture:

- **Convolutional Token Embedding**: Imagine splitting the input image into overlapping patches, reshaping them into tokens, and then feeding them to a convolution layer. This reduces the number of tokens (like pixels in a downsampled image) while boosting their feature richness, similar to traditional CNNs. Unlike other Transformers, we skip adding pre-defined position information to the tokens, relying solely on convolutional operations to capture spatial relationships.

![Projection Layer](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/cvt_conv_proj.png)
_(a) Linear Projection in ViT. (b) Convolutional Projection. (c) Squeezed Convolutional Projection (Default in CvT). [[2]](#cvt)_

- **Convolutional Transformer Blocks**: Each stage in CvT contains a stack of these blocks. Here, instead of the usual linear projections in ViT, we use depth-wise separable convolutions (Convolutional Projection) to process the "query," "key," and "value" components of the self-attention module as shown in the above image. This maintains the benefits of Transformers while improving efficiency. Note that the "classification token" (used for final prediction) is only added in the last stage. Finally, a standard fully-connected layer analyzes the final classification token to predict the image class.

### Comparison of the CvT Architecture with Other Vision Transformers

The table below shows the key differences between the representative concurrent works above and CvT, in terms of the necessity of positional encodings, type of token embedding, type of projection, and Transformer structure in the backbone.

| Model                                            | Needs Position Encoding (PE) | Token Embedding                 | Projection for Attention | Hierarchical Transformers |
| ------------------------------------------------ | ---------------------------- | ------------------------------- | ------------------------ | ------------------------- |
| ViT[[1]](#vision-transformer), DeiT [[3]](#deit) | Yes                          | Non-overlapping                 | Linear                   | No                        |
| CPVT[[4]](#cpvt)                                 | No (w/PE Generator)          | Non-Overlapping                 | Linear                   | No                        |
| TNT[[5]](#tnt)                                   | Yes                          | Non-overlapping (Patch + Pixel) | Linear                   | No                        |
| T2T[[6]](#t2t)                                   | Yes                          | Overlapping (Concatenate)       | Linear                   | Partial (Tokenization)    |
| PVT[[7]](#pvt)                                   | Yes                          | Non-overlapping                 | Spatial Reduction        | Yes                       |
| _CvT_[[2]](#cvt)                                 | _No_                         | _Overlapping (Convolution)_     | _Convolution_            | _Yes_                     |

### Main Highlights

The four main highlights of CvT that helped achieve superior performance and computational efficiency are the following:

- **Hierarchy of Transformers** containing a new **Convolutional token embedding**.
- Convolutional Transformer block leveraging a **Convolutional Projection**.
- **Removal of Positional Encoding** due to built-in local context structure introduced by convolutions.
- Fewer **parameters** and lower **FLOPs** (floating-point operations) compared to other vision transformer architectures.

## PyTorch Implementation

Time to get hands-on! Let's explore how to code each major block of the CvT architecture in PyTorch, as shown in the official implementation [[8]](#cvt-imp).

1. Importing required libraries

```python
from collections import OrderedDict
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
from einops.layers.torch import Rearrange

# helper functions used by the later snippets (available from timm)
from timm.models.layers import to_2tuple, trunc_normal_
```

2. Implementation of **Convolutional Projection**

```python
def _build_projection(self, dim_in, dim_out, kernel_size, padding, stride, method):
    if method == "dw_bn":
        proj = nn.Sequential(
            OrderedDict(
                [
                    (
                        "conv",
                        nn.Conv2d(
                            dim_in,
                            dim_in,
                            kernel_size=kernel_size,
                            padding=padding,
                            stride=stride,
                            bias=False,
                            groups=dim_in,
                        ),
                    ),
                    ("bn", nn.BatchNorm2d(dim_in)),
                    ("rearrage", Rearrange("b c h w -> b (h w) c")),
                ]
            )
        )
    elif method == "avg":
        proj = nn.Sequential(
            OrderedDict(
                [
                    (
                        "avg",
                        nn.AvgPool2d(
                            kernel_size=kernel_size,
                            padding=padding,
                            stride=stride,
                            ceil_mode=True,
                        ),
                    ),
                    ("rearrage", Rearrange("b c h w -> b (h w) c")),
                ]
            )
        )
    elif method == "linear":
        proj = None
    else:
        raise ValueError("Unknown method ({})".format(method))

    return proj
```

The method takes several parameters related to a convolutional layer (such as input and output dimensions, kernel size, padding, stride, and method) and returns a projection block based on the specified method.

- If the method is `dw_bn` (depthwise separable with batch normalization), it creates a Sequential block consisting of a depthwise separable convolutional layer followed by batch normalization and rearranges the dimensions.

- If the method is `avg` (average pooling), it creates a Sequential block with an average pooling layer followed by rearranging the dimensions.

- If the method is `linear`, it returns None, indicating that no projection is applied.

The rearrangement of dimensions is performed using the `Rearrange` operation, which reshapes the input tensor. The resulting projection block is then returned.
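
As a quick sanity check, here is a standalone version of the `dw_bn` branch applied to a dummy feature map; the channel count and spatial size below are purely illustrative:

```python
import torch
import torch.nn as nn
from einops.layers.torch import Rearrange

dim_in = 64
dw_bn_proj = nn.Sequential(
    nn.Conv2d(dim_in, dim_in, kernel_size=3, padding=1, stride=1, bias=False, groups=dim_in),
    nn.BatchNorm2d(dim_in),
    Rearrange("b c h w -> b (h w) c"),
)

x = torch.randn(1, dim_in, 14, 14)
print(dw_bn_proj(x).shape)  # torch.Size([1, 196, 64]) -> a token sequence ready for attention
```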

3. Implementation of **Convolutional Token Embedding**

```python
class ConvEmbed(nn.Module):
    def __init__(
        self, patch_size=7, in_chans=3, embed_dim=64, stride=4, padding=2, norm_layer=None
    ):
        super().__init__()
        patch_size = to_2tuple(patch_size)
        self.patch_size = patch_size

        self.proj = nn.Conv2d(
            in_chans, embed_dim, kernel_size=patch_size, stride=stride, padding=padding
        )
        self.norm = norm_layer(embed_dim) if norm_layer else None

    def forward(self, x):
        x = self.proj(x)

        B, C, H, W = x.shape
        x = rearrange(x, "b c h w -> b (h w) c")
        if self.norm:
            x = self.norm(x)
        x = rearrange(x, "b (h w) c -> b c h w", h=H, w=W)

        return x
```

This code defines a `ConvEmbed` module that performs patch-wise embedding on an input image.

- The `__init__` method initializes the module with parameters such as `patch_size` (size of the image patches), `in_chans` (number of input channels), `embed_dim` (dimensionality of the embedded patches), `stride` (stride for the convolution operation), `padding` (padding for the convolution operation), and `norm_layer` (a normalization layer, which is optional).

- In the constructor, a 2D convolutional layer (`nn.Conv2d`) is created with specified parameters, including the patch size, input channels, embedding dimension, stride, and padding. This convolutional layer is assigned to `self.proj`.

- If a normalization layer is provided, an instance of the normalization layer is created with embed_dim channels, and it is assigned to `self.norm`.

- The forward method takes an input tensor x and applies the convolution operation using `self.proj`. The output is reshaped using the rearrange function to flatten the spatial dimensions. If a normalization layer is present, it is applied to the flattened representation. Finally, the tensor is reshaped back to the original spatial dimensions and returned.

In summary, this module is designed for patch-wise embedding of images, where each patch is processed independently through a convolutional layer, and optional normalization is applied to the embedded features.
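
Assuming the imports from step 1 (including `to_2tuple`) are available, we can sanity-check the module above with a dummy image; the shapes below follow from a 7×7 convolution with stride 4 and padding 2:

```python
# Quick shape check of the ConvEmbed module defined above.
embed = ConvEmbed(
    patch_size=7, in_chans=3, embed_dim=64, stride=4, padding=2, norm_layer=nn.LayerNorm
)
x = torch.randn(1, 3, 224, 224)  # a dummy RGB image
print(embed(x).shape)  # torch.Size([1, 64, 56, 56]) -> 224 / 4 = 56 along each spatial dimension
```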

4. Implementation of **Vision Transformer** Block

```python
class VisionTransformer(nn.Module):
    """Vision Transformer with support for patch or hybrid CNN input stage"""

    def __init__(
        self,
        patch_size=16,
        patch_stride=16,
        patch_padding=0,
        in_chans=3,
        embed_dim=768,
        depth=12,
        num_heads=12,
        mlp_ratio=4.0,
        qkv_bias=False,
        drop_rate=0.0,
        attn_drop_rate=0.0,
        drop_path_rate=0.0,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
        init="trunc_norm",
        **kwargs,
    ):
        super().__init__()
        self.num_features = self.embed_dim = embed_dim

        self.rearrage = None

        self.patch_embed = ConvEmbed(
            patch_size=patch_size,
            in_chans=in_chans,
            stride=patch_stride,
            padding=patch_padding,
            embed_dim=embed_dim,
            norm_layer=norm_layer,
        )

        with_cls_token = kwargs["with_cls_token"]
        if with_cls_token:
            self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        else:
            self.cls_token = None

        self.pos_drop = nn.Dropout(p=drop_rate)
        dpr = [
            x.item() for x in torch.linspace(0, drop_path_rate, depth)
        ]  # stochastic depth decay rule

        blocks = []
        for j in range(depth):
            blocks.append(
                Block(
                    dim_in=embed_dim,
                    dim_out=embed_dim,
                    num_heads=num_heads,
                    mlp_ratio=mlp_ratio,
                    qkv_bias=qkv_bias,
                    drop=drop_rate,
                    attn_drop=attn_drop_rate,
                    drop_path=dpr[j],
                    act_layer=act_layer,
                    norm_layer=norm_layer,
                    **kwargs,
                )
            )
        self.blocks = nn.ModuleList(blocks)

        if self.cls_token is not None:
            trunc_normal_(self.cls_token, std=0.02)

        if init == "xavier":
            self.apply(self._init_weights_xavier)
        else:
            self.apply(self._init_weights_trunc_normal)

    def forward(self, x):
        x = self.patch_embed(x)
        B, C, H, W = x.size()

        x = rearrange(x, "b c h w -> b (h w) c")

        cls_tokens = None
        if self.cls_token is not None:
            cls_tokens = self.cls_token.expand(B, -1, -1)
            x = torch.cat((cls_tokens, x), dim=1)

        x = self.pos_drop(x)

        for i, blk in enumerate(self.blocks):
            x = blk(x, H, W)

        if self.cls_token is not None:
            cls_tokens, x = torch.split(x, [1, H * W], 1)
        x = rearrange(x, "b (h w) c -> b c h w", h=H, w=W)

        return x, cls_tokens
```

This code defines a Vision Transformer module. Here's a brief overview of the code:

- **Initialization:** The `VisionTransformer` class is initialized with various parameters that define the model architecture, such as patch size, embedding dimensions, number of layers, number of attention heads, dropout rates, etc.

- **Patch Embedding:** The model includes a patch embedding layer (`patch_embed`), which processes the input image by dividing it into overlapping patches and embedding them using convolutions.

- **Transformer Blocks:** The model consists of a stack of transformer blocks (`Block`). The number of blocks is determined by the depth parameter. Each block contains multi-head self-attention mechanisms and a feedforward neural network.

- **Classification Token:** Optionally, the model can include a learnable classification token (`cls_token`) prepended to the input sequence. This token is used for classification tasks.

- **Stochastic Depth:** Stochastic depth is applied to the transformer blocks, where a random subset of blocks is skipped during training to improve regularization. This is controlled by the `drop_path_rate` parameter.

- **Initialization of Weights:** The model weights are initialized using either truncated normal distribution (`trunc_norm`) or Xavier initialization (`xavier`).

- **Forward Method:** The forward method processes the input through the patch embedding, rearranges the dimensions, adds the classification token if present, applies dropout, and then passes the data through the stack of transformer blocks. Finally, the output is rearranged back to the original shape, and the classification token (if present) is separated from the rest of the sequence before returning the output.

5. Implementation of Convolutional Vision Transformer Block (**Hierarchy of Transformers**)

```python
class ConvolutionalVisionTransformer(nn.Module):
    def __init__(
        self,
        in_chans=3,
        num_classes=1000,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
        init="trunc_norm",
        spec=None,
    ):
        super().__init__()
        self.num_classes = num_classes

        self.num_stages = spec["NUM_STAGES"]
        for i in range(self.num_stages):
            kwargs = {
                "patch_size": spec["PATCH_SIZE"][i],
                "patch_stride": spec["PATCH_STRIDE"][i],
                "patch_padding": spec["PATCH_PADDING"][i],
                "embed_dim": spec["DIM_EMBED"][i],
                "depth": spec["DEPTH"][i],
                "num_heads": spec["NUM_HEADS"][i],
                "mlp_ratio": spec["MLP_RATIO"][i],
                "qkv_bias": spec["QKV_BIAS"][i],
                "drop_rate": spec["DROP_RATE"][i],
                "attn_drop_rate": spec["ATTN_DROP_RATE"][i],
                "drop_path_rate": spec["DROP_PATH_RATE"][i],
                "with_cls_token": spec["CLS_TOKEN"][i],
                "method": spec["QKV_PROJ_METHOD"][i],
                "kernel_size": spec["KERNEL_QKV"][i],
                "padding_q": spec["PADDING_Q"][i],
                "padding_kv": spec["PADDING_KV"][i],
                "stride_kv": spec["STRIDE_KV"][i],
                "stride_q": spec["STRIDE_Q"][i],
            }

            stage = VisionTransformer(
                in_chans=in_chans,
                init=init,
                act_layer=act_layer,
                norm_layer=norm_layer,
                **kwargs,
            )
            setattr(self, f"stage{i}", stage)

            in_chans = spec["DIM_EMBED"][i]

        dim_embed = spec["DIM_EMBED"][-1]
        self.norm = norm_layer(dim_embed)
        self.cls_token = spec["CLS_TOKEN"][-1]

        # Classifier head
        self.head = (
            nn.Linear(dim_embed, num_classes) if num_classes > 0 else nn.Identity()
        )
        trunc_normal_(self.head.weight, std=0.02)

    def forward_features(self, x):
        for i in range(self.num_stages):
            x, cls_tokens = getattr(self, f"stage{i}")(x)

        if self.cls_token:
            x = self.norm(cls_tokens)
            x = torch.squeeze(x)
        else:
            x = rearrange(x, "b c h w -> b (h w) c")
            x = self.norm(x)
            x = torch.mean(x, dim=1)

        return x

    def forward(self, x):
        x = self.forward_features(x)
        x = self.head(x)

        return x
```

This code defines a PyTorch module called `ConvolutionalVisionTransformer`.

- The model consists of multiple stages, each represented by an instance of the `VisionTransformer` class.
- Each stage has different configurations such as patch size, stride, depth, number of heads, etc., specified in the `spec` dictionary.
- The `forward_features` method processes the input x through all the stages, and it aggregates the final representation.
- The class has a classifier head that performs a linear transformation to produce the final output.
- The `forward` method calls `forward_features` and then passes the result through the classifier head.
- The vision transformer stages are sequentially named as stage0, stage1, etc., and each stage is an instance of the `VisionTransformer` class forming a hierarchy of transformers.
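
To make the `spec` dictionary concrete, here is an illustrative three-stage configuration. The values loosely follow the CvT-13 setup described in the paper, but treat this as a sketch for illustration; the authoritative values live in the official configuration files.

```python
# Illustrative 3-stage configuration (approximating CvT-13).
spec = {
    "NUM_STAGES": 3,
    "PATCH_SIZE": [7, 3, 3],
    "PATCH_STRIDE": [4, 2, 2],
    "PATCH_PADDING": [2, 1, 1],
    "DIM_EMBED": [64, 192, 384],
    "DEPTH": [1, 2, 10],
    "NUM_HEADS": [1, 3, 6],
    "MLP_RATIO": [4.0, 4.0, 4.0],
    "QKV_BIAS": [True, True, True],
    "DROP_RATE": [0.0, 0.0, 0.0],
    "ATTN_DROP_RATE": [0.0, 0.0, 0.0],
    "DROP_PATH_RATE": [0.0, 0.0, 0.1],
    "CLS_TOKEN": [False, False, True],
    "QKV_PROJ_METHOD": ["dw_bn", "dw_bn", "dw_bn"],
    "KERNEL_QKV": [3, 3, 3],
    "PADDING_Q": [1, 1, 1],
    "PADDING_KV": [1, 1, 1],
    "STRIDE_KV": [2, 2, 2],
    "STRIDE_Q": [1, 1, 1],
}
```

Given the full official implementation (including the `Block` class, which is not shown above), such a dictionary would be passed as `ConvolutionalVisionTransformer(spec=spec)`.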

Congratulations! Now you know how to implement the CvT architecture in PyTorch. You can view the complete code of the CvT architecture [here](https://github.com/microsoft/CvT/blob/main/lib/models/cls_cvt.py).

## Try it out

If you're looking to use CvT without getting into the complex details of its PyTorch implementation, you can easily do so by leveraging the Hugging Face `transformers` library. Here's how:

```bash
pip install transformers
```

You can find the documentation for CvT model [here](https://huggingface.co/docs/transformers/model_doc/cvt#overview).

### Usage

Here is how to use the CvT model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

## References

- [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 
- [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 
- [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 
- [Conditional Positional Encodings for Vision Transformers](https://arxiv.org/abs/2102.10882) 
- [Transformer in Transformer](https://arxiv.org/abs/2103.00112v3)
- [Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet](https://arxiv.org/abs/2101.11986) 
- [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) 
- [Implementation of CvT](https://github.com/microsoft/CvT/tree/main)

### Swin Transformer
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/swin-transformer.md

# Swin Transformer
Introduced in the 2021 paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/pdf/2103.14030.pdf), the Swin Transformer architecture optimizes for latency and performance by using a shifted-window (as opposed to sliding-window) approach, which reduces the number of operations required. Swin is considered a **hierarchical backbone** for computer vision and can be used for tasks like image classification.

A backbone, in terms of deep learning, is the part of a neural network that does feature extraction. Additional layers can be added to the backbone to perform a variety of vision tasks. Hierarchical backbones have tiered structures, sometimes with varying resolutions. This is in contrast to the non-hierarchical **plain backbone** in the [ViTDet](https://arxiv.org/abs/2203.16527) model.

## Main Highlights
### Shifted windows
In the original ViT, attention is computed between each patch and all other patches, which gets computationally intensive. Swin optimizes this by reducing ViT's normally quadratic complexity (with respect to image size) to linear complexity. It achieves this using a technique familiar from CNNs: patches only attend to other patches in the same window, as opposed to all other patches, and patches are then gradually merged with their neighbors at deeper layers. This is what makes Swin a hierarchical model.

![Architecture Diagram of Swin vs Vit, taken from Swin transformer paper](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/swin_transformer_architecture.png)
_Image taken from Swin Transformer paper_
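
To get a feel for the savings, the Swin paper reports the operation counts of global multi-head self-attention (MSA) and window-based MSA as `4hwC² + 2(hw)²C` and `4hwC² + 2M²hwC`, respectively, where `h × w` is the feature-map size in patches, `C` the channel dimension, and `M` the window size. The quick calculation below plugs in illustrative numbers:

```python
# Illustrative values: a 56x56 patch grid, 96 channels, 7x7 windows.
h = w = 56
C = 96
M = 7

global_msa_ops = 4 * h * w * C**2 + 2 * (h * w) ** 2 * C
window_msa_ops = 4 * h * w * C**2 + 2 * M**2 * h * w * C

print(f"global MSA:   {global_msa_ops:,}")
print(f"windowed MSA: {window_msa_ops:,}")
print(f"ratio: ~{global_msa_ops / window_msa_ops:.0f}x fewer operations with windows")
```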

## Advantages
### Computational efficiency
Swin is more performant than completely patch-based approaches like ViT.
### Large datasets
SwinV2 is one of the first vision models with 3B parameters. As the amount of training data grows, Swin outperforms CNNs. The large number of parameters enables increased capacity for learning and more complex representations.

## Swin Transformer V2 [(paper)](https://arxiv.org/abs/2111.09883)
Swin Transformer V2 is a large vision model that supports up to 3B parameters and is capable of training with high-resolution images. It improves upon the original Swin Transformer by stabilizing training, transferring models pre-trained on low-resolution images to high-resolution tasks, and using [SimMIM](https://arxiv.org/abs/2111.09886), a self-supervised training approach that reduces the number of labeled images required for training.

## Applications in Image Restoration

### SwinIR [(paper)](https://arxiv.org/abs/2108.10257)
SwinIR is a Swin Transformer-based model for turning low-resolution images into high-resolution images.

### Swin2SR  [(paper)](https://arxiv.org/abs/2209.11345)
Swin2SR is another image restoration model. It is an improvement on SwinIR by incorporating Swin Transformer V2, applying the benefits of Swin V2 like training stability and higher image resolution capacity.

## Overview of PyTorch Implementation of Swin
Key parts of the [implementation of Swin from the original paper](https://github.com/microsoft/Swin-Transformer/blob/main/models/swin_transformer.py) are outlined below:

### Swin Transformer class
1. **Initialize Parameters**. Among various other dropout and normalization parameters, these parameters include:
    - `window_size`: Size of the windows for local self-attention.
    - `ape (bool)`: If True, add absolute position embedding to the patch embedding. 
    - `fused_window_process`: Optional hardware optimization.

2. **Apply Patch Embedding**: Similar to ViT, images are split into non-overlapping patches and linearly embedded using `Conv2D`.
    
3. **Apply Positional Embeddings**: `SwinTransformer` optionally uses absolute position embeddings (`ape`), added to the patch embeddings. Absolute positional embeddings often help the model learn to use positional information about each patch to make more informed predictions.

4. **Apply Depth Decay**: Depth decay helps with regularization and prevents overfitting. It is usually done by skipping layers during training. In this Swin implementation, **stochastic** depth decay is used, which means the deeper the layer, the higher the chance it will be skipped.
    
5. **Layer Construction**:
    - The model is composed of multiple layers (`BasicLayer`) of `SwinTransformerBlock`s, each downsampling the feature map for hierarchical processing using `PatchMerging`.
    - The dimensionality of features and resolution of feature maps change across layers.

6. **Classification Head**: Similar to ViT, it uses a Multi-Layer Perceptron (MLP) head for classification tasks, as defined in `self.head`, as the last step.

```python
class SwinTransformer(nn.Module):
    def __init__(
        self,
        img_size=224,
        patch_size=4,
        in_chans=3,
        num_classes=1000,
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.0,
        attn_drop_rate=0.0,
        drop_path_rate=0.1,
        norm_layer=nn.LayerNorm,
        ape=False,
        patch_norm=True,
        use_checkpoint=False,
        fused_window_process=False,
        **kwargs,
    ):
        super().__init__()

        self.num_classes = num_classes
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
        self.mlp_ratio = mlp_ratio

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=img_size,
            patch_size=patch_size,
            in_chans=in_chans,
            embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None,
        )
        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.patches_resolution
        self.patches_resolution = patches_resolution

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
            trunc_normal_(self.absolute_pos_embed, std=0.02)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [
            x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
        ]  # stochastic depth decay rule

        # build layers
        self.layers = nn.ModuleList()
        for i_layer in range(self.num_layers):
            layer = BasicLayer(
                dim=int(embed_dim * 2**i_layer),
                input_resolution=(
                    patches_resolution[0] // (2**i_layer),
                    patches_resolution[1] // (2**i_layer),
                ),
                depth=depths[i_layer],
                num_heads=num_heads[i_layer],
                window_size=window_size,
                mlp_ratio=self.mlp_ratio,
                qkv_bias=qkv_bias,
                qk_scale=qk_scale,
                drop=drop_rate,
                attn_drop=attn_drop_rate,
                drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
                norm_layer=norm_layer,
                downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
                use_checkpoint=use_checkpoint,
                fused_window_process=fused_window_process,
            )
            self.layers.append(layer)

        self.norm = norm_layer(self.num_features)
        self.avgpool = nn.AdaptiveAvgPool1d(1)
        self.head = (
            nn.Linear(self.num_features, num_classes)
            if num_classes > 0
            else nn.Identity()
        )

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=0.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {"absolute_pos_embed"}

    @torch.jit.ignore
    def no_weight_decay_keywords(self):
        return {"relative_position_bias_table"}

    def forward_features(self, x):
        x = self.patch_embed(x)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)

        for layer in self.layers:
            x = layer(x)

        x = self.norm(x)  # B L C
        x = self.avgpool(x.transpose(1, 2))  # B C 1
        x = torch.flatten(x, 1)
        return x

    def forward(self, x):
        x = self.forward_features(x)
        x = self.head(x)
        return x
```

### Swin Transformer Block
The `SwinTransformerBlock` encapsulates the core operations of the Swin Transformer: local windowed attention and subsequent MLP processing. It plays a key role in enabling the Swin Transformer to efficiently handle large images by focusing on local patches while maintaining the ability to learn global representations.

**Layer Components**:
- **Normalization Layer 1 (`self.norm1`)**: Applied before the attention mechanism.
- **Window Attention (`self.attn`)**: Computes self-attention within local windows.
- **Drop Path (`self.drop_path`)**: Implements stochastic depth for regularization.
- **Normalization Layer 2 (`self.norm2`)**: Applied before the MLP layer.
- **MLP (`mlp`)**: A multi-layer perceptron for processing features post-attention.
- **Attention Mask (`self.register_buffer`)**: The attention mask is used during the self-attention computation to control which elements in the windowed input are allowed to interact (i.e., attend to each other). The shifted window approach helps the model to capture broader contextual information by allowing some cross-window interaction.
#### Swin Transformer Block's Initialization
```python
class SwinTransformerBlock(nn.Module):
    r"""Swin Transformer Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
        fused_window_process (bool, optional): If True, use one kernel to fuse window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(
        self,
        dim,
        input_resolution,
        num_heads,
        window_size=7,
        shift_size=0,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop=0.0,
        attn_drop=0.0,
        drop_path=0.0,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
        fused_window_process=False,
    ):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert (
            0 <= self.shift_size < self.window_size
        ), "shift_size must be in [0, window_size)"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim,
            window_size=to_2tuple(self.window_size),
            num_heads=num_heads,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            attn_drop=attn_drop,
            proj_drop=drop,
        )

        self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(
            in_features=dim,
            hidden_features=mlp_hidden_dim,
            act_layer=act_layer,
            drop=drop,
        )

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (
                slice(0, -self.window_size),
                slice(-self.window_size, -self.shift_size),
                slice(-self.shift_size, None),
            )
            w_slices = (
                slice(0, -self.window_size),
                slice(-self.window_size, -self.shift_size),
                slice(-self.shift_size, None),
            )
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(
                img_mask, self.window_size
            )  # nW, window_size, window_size, 1
            mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
                attn_mask == 0, float(0.0)
            )
        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)
        self.fused_window_process = fused_window_process

    def forward(self, x):
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = torch.roll(
                    x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)
                )
                # partition windows
                x_windows = window_partition(
                    shifted_x, self.window_size
                )  # nW*B, window_size, window_size, C
            else:
                x_windows = WindowProcess.apply(
                    x, B, H, W, C, -self.shift_size, self.window_size
                )
        else:
            shifted_x = x
            # partition windows
            x_windows = window_partition(
                shifted_x, self.window_size
            )  # nW*B, window_size, window_size, C

        x_windows = x_windows.view(
            -1, self.window_size * self.window_size, C
        )  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(
            x_windows, mask=self.attn_mask
        )  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)

        # reverse cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = window_reverse(
                    attn_windows, self.window_size, H, W
                )  # B H' W' C
                x = torch.roll(
                    shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)
                )
            else:
                x = WindowProcessReverse.apply(
                    attn_windows, B, H, W, C, self.shift_size, self.window_size
                )
        else:
            shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
            x = shifted_x
        x = x.view(B, H * W, C)
        x = shortcut + self.drop_path(x)

        # Feed-forward network (FFN)
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x
```

#### Swin Transformer Block's Forward Pass
There are 4 key steps:

1. **Cyclic shift**:
A **cyclic shift** is applied to the feature map, which is then partitioned into windows via `window_partition`. Cyclic shift is done by moving elements (in this case, portions of the feature map) in a sequence to the left or right, and wrapping the elements that fall off one end back around to the other end. This changes the positions of the elements relative to each other but otherwise keeps the sequence intact. For example, if you cyclically shift the sequence `A, B, C, D` to the right by one position, it becomes `D, A, B, C` (see the small `torch.roll` sketch after this list).

Cyclic shift allows the model to capture relationships between adjacent windows, enhancing its ability to learn spatial contexts beyond the local scope of individual windows.

2. **Windowed attention**: Perform attention using window-based multi-head self attention (W-MSA) module.

3. **Merge Patches**: Patches are merged via `PatchMerging`.

4. **Reverse cyclic shift**: After attention is done, the window partitioning is undone via `window_reverse`, and the cyclic shift operation is reversed, so that the feature map retains its original form.
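
Here is a tiny sketch of the cyclic shift itself, using `torch.roll` just as the block's forward pass does; the tensors are toy examples:

```python
import torch

# 1-D intuition: rolling [A, B, C, D] to the right by one gives [D, A, B, C].
x = torch.tensor([1, 2, 3, 4])
print(torch.roll(x, shifts=1, dims=0))  # tensor([4, 1, 2, 3])

# Swin rolls the 2-D feature map by -shift_size along height and width before windowing,
# then rolls it back by +shift_size after attention.
feature_map = torch.arange(16).reshape(1, 4, 4)
shifted = torch.roll(feature_map, shifts=(-2, -2), dims=(1, 2))
restored = torch.roll(shifted, shifts=(2, 2), dims=(1, 2))
print(torch.equal(feature_map, restored))  # True
```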

```python
class WindowAttention(nn.Module):
    """
    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(
        self,
        dim,
        window_size,
        num_heads,
        qkv_bias=True,
        qk_scale=None,
        attn_drop=0.0,
        proj_drop=0.0,
    ):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim**-0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
        )  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = (
            coords_flatten[:, :, None] - coords_flatten[:, None, :]
        )  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=0.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = (
            self.qkv(x)
            .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
            .permute(2, 0, 3, 1, 4)
        )
        q, k, v = (
            qkv[0],
            qkv[1],
            qkv[2],
        )  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = q @ k.transpose(-2, -1)

        relative_position_bias = self.relative_position_bias_table[
            self.relative_position_index.view(-1)
        ].view(
            self.window_size[0] * self.window_size[1],
            self.window_size[0] * self.window_size[1],
            -1,
        )  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(
            2, 0, 1
        ).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(
                1
            ).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x
```

### Window Attention
`WindowAttention` is a window-based multi-head self attention (W-MSA) module with relative position bias. This can be used for both shifted and non-shifted windows.

```python
class PatchMerging(nn.Module):
    r"""Patch Merging Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."

        x = x.view(B, H, W, C)

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C

        x = self.norm(x)
        x = self.reduction(x)

        return x
```

### Patch Merging Layer
Patch merging method is used for downsampling. It is used to reduce the spatial dimensions of the feature map, similar to pooling in traditional convolutional neural networks (CNNs). It helps in building hierarchical feature representations by progressively increasing the receptive field and reducing the spatial resolution.


## Try it out
You can find the 🤗 documentation for Swin [here](https://huggingface.co/docs/transformers/model_doc/swin).

### Usage of pretrained Swin model for classification
Here is how to use Swin model to classify a cat image into one of the 1,000 ImageNet classes:

```py
from datasets import load_dataset
from transformers import AutoImageProcessor, SwinForImageClassification
import torch

model = SwinForImageClassification.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224"
)
image_processor = AutoImageProcessor.from_pretrained(
    "microsoft/swin-tiny-patch4-window7-224"
)

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_label_id = logits.argmax(-1).item()
predicted_label_text = model.config.id2label[predicted_label_id]

print(predicted_label_text)
```

### MobileViT v2
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/mobilevit.md

# MobileViT v2 

The previously discussed Vision Transformer architectures are computationally intensive and hard to run on mobile devices. Previous state-of-the-art architectures used CNNs for mobile vision tasks. However, CNNs struggle to learn global representations, and as a result they perform worse than their transformer counterparts.

The MobileViT architecture aims to meet the requirements of mobile vision tasks, such as low latency and a lightweight architecture, while providing the advantages of both transformers and CNNs. MobileViT was developed by Apple and builds on the MobileNet architecture from Google's research team, adding the MobileViT block and separable self-attention. These two features allow for very low latency, fewer parameters, reduced computational complexity, and deployment of vision ML models on resource-constrained devices.

## MobileViT Architecture

The architecture of MobileViT presented in the paper "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer" by Sachin Mehta and Mohammad Rastegari is as follows:
![MobileViT Architecture](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MobileViT-Architecture.png)

Some of this should look similar to the previous chapter: the MobileNet blocks, the n x n convolutions, downsampling, global pooling, and the final linear layer.

As seen by the global pooling layer and the linear layer, the model shown here is for classification. However, the same blocks introduced in this paper can be used for a variety of vision applications.

## MobileViT Block

The MobileViT block combines CNN's local processing and global processing, as seen in transformers. It uses a combination of convolutions and a transformer layer, allowing it to capture spatially local information and global dependencies in the data. 

A diagram of the MobileViT Block is shown below:
![MobileViT Block](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MobileViT-MobileViTBlock.png)

Okay, that's a lot to take in. Let's break that down.

- The block takes in an image with multiple channels. For an RGB image, that is three channels.
- It then performs an n x n convolution on these channels and appends the result to the existing channels.
- The block then creates a linear combination of these channels and adds it to the existing stack of channels.
- These channels are unfolded into flattened patches.
- The flattened patches are passed through a transformer, which projects them into new patches.
- The patches are then folded back together into a feature map with d dimensions.
- Afterwards, a pointwise convolution is applied to the folded feature map.
- Finally, the result is recombined with the original input channels.

This approach gives a receptive field of H x W (the entire input) while modeling both local and non-local dependencies and retaining patch positional information. This can be seen in the unfolding and refolding of the patches.

A receptive field is the size of a region in an input space that affects the features of a particular layer.
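To make the sequence of operations concrete, here is a heavily simplified toy sketch of a MobileViT-style block (hypothetical layer names and sizes, not Apple's actual implementation): local convolutions, unfolding into patches, a transformer over the patches, folding back, a pointwise projection, and fusion with the input.

```python
import torch
from torch import nn

class ToyMobileViTBlock(nn.Module):
    def __init__(self, channels=32, d_model=64, patch=2, depth=2):
        super().__init__()
        self.patch = patch
        self.local_conv = nn.Conv2d(channels, channels, 3, padding=1)   # n x n local representation
        self.to_d = nn.Conv2d(channels, d_model, 1)                     # linear combination of channels
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, depth)  # global processing of patches
        self.to_c = nn.Conv2d(d_model, channels, 1)                     # pointwise projection back
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)     # recombine with the input

    def forward(self, x):
        B, C, H, W = x.shape
        p = self.patch
        y = self.to_d(self.local_conv(x))                               # B, d, H, W
        # unfold into flattened patches: one token per patch location,
        # grouped by the pixel position inside each p x p patch
        y = y.view(B, -1, H // p, p, W // p, p).permute(0, 3, 5, 2, 4, 1)
        y = y.reshape(B * p * p, (H // p) * (W // p), -1)
        y = self.transformer(y)                                         # model global dependencies
        # fold the patches back into a feature map
        y = y.reshape(B, p, p, H // p, W // p, -1).permute(0, 5, 3, 1, 4, 2)
        y = y.reshape(B, -1, H, W)
        y = self.to_c(y)
        return self.fuse(torch.cat([x, y], dim=1))                      # concatenate and fuse

block = ToyMobileViTBlock()
print(block(torch.randn(1, 32, 32, 32)).shape)  # torch.Size([1, 32, 32, 32])
```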

This compound approach allows MobileViT to have fewer parameters than traditional CNNs and even better accuracy!
![MobileViT CNNPreformance](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MobileViT-CNNPreformance.png)  

The main efficiency bottleneck in the original MobileViT architecture is the multi-head self-attention in transformers, which has O(k^2) time complexity with respect to the number of input tokens k.

Multi-head self-attention also requires costly operations like batch-wise matrix multiplications, which can impact latency on resource-constrained devices.

The same authors wrote another paper on exactly how to make attention operate faster, introducing what they call separable self-attention.

# Separable Self-attention

In traditional multi-head attention, the complexity is quadratic in the number of input tokens (O(k^2)). Separable self-attention, introduced in this paper, has a complexity of O(k) with respect to the input tokens.

In addition, the attention method does not use any batch-wise matrix multiplications, which helps reduce latency on resource-constrained devices like mobile phones.

This is a massive improvement!

Many other forms of attention have been proposed, with complexities ranging from O(k) to O(k*sqrt(k)) and O(k*log(k)).

Separable self-attention was not the first method to achieve O(k) complexity: [Linformer](https://arxiv.org/abs/2006.04768) achieved O(k) attention before it.

However, Linformer still used costly operations like batch-wise matrix multiplications.

A comparison between the attention mechanisms in Transformer, Linformer, and MobileViT is shown below:
![Attention Comparison](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MobileViT-Attention.png)  

The image above gives a comparison of each of the individual types of attention between the Transformer, Linformer, and MobileViT v2 architectures.

For example, in both the transformer and Linformer architectures, the attention computations perform two batch-wise matrix multiplications.  

However, in separable self-attention, these two batch-wise multiplications are replaced by two separate linear computations, which further boosts inference speed. 
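As a rough illustration of this idea, the sketch below (a hedged reading of the paper, with made-up layer names and sizes, not Apple's implementation) replaces the k x k attention matrix with a single per-token context score, keeping the cost linear in the number of tokens:

```python
import torch
from torch import nn

class SeparableSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_score = nn.Linear(dim, 1)    # one scalar "query" score per token
        self.to_key = nn.Linear(dim, dim)
        self.to_value = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (batch, k, dim)
        scores = self.to_score(x).softmax(dim=1)           # (batch, k, 1), linear in k
        context = (scores * self.to_key(x)).sum(dim=1)     # (batch, dim), one global context vector
        values = torch.relu(self.to_value(x))              # (batch, k, dim)
        return self.out(values * context.unsqueeze(1))     # broadcast context back to every token

attn = SeparableSelfAttention(64)
print(attn(torch.randn(2, 196, 64)).shape)  # torch.Size([2, 196, 64])
```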

# Conclusion

MobileViT blocks retain spatially local information while developing global representations, combining the strengths of Transformers and CNNs. They provide a receptive field that encompasses the entire image.

The introduction of separable self-attention into this existing architecture even further boosted both accuracy and inference speed.
![Inference Tests](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MobileViT-Inference.png)

Tests performed with different architectures on the iPhone 12s exhibited a large jump in performance with the introduction of separable attention, as shown above!

Overall, the MobileViT Architecture is an extraordinarily powerful architecture for resource-limited vision tasks that provides fast inference and high accuracy.

# Transformers Library

If you want to try out MobileViTv2 locally, you can use it from Hugging Face's `transformers` library. Here's how:

```bash
pip install transformers
```
Below is a short snippet showing how to use a MobileViT model to classify an image.

```python
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained(
    "apple/mobilevitv2-1.0-imagenet1k-256"
)
model = MobileViTV2ForImageClassification.from_pretrained(
    "apple/mobilevitv2-1.0-imagenet1k-256"
)

inputs = image_processor(image, return_tensors="pt")

logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

# Inference API

For an even lighter computer vision setup, you can use the Hugging Face Inference API with MobileViTv2.
The Inference API lets you interact with many of the models available on the Hugging Face Hub.
We can query the Inference API from Python as follows.

```py
import json
import requests

headers = {"Authorization": f"Bearer {API_TOKEN}"}  # API_TOKEN is your Hugging Face access token
API_URL = (
    "https://api-inference.huggingface.co/models/apple/mobilevitv2-1.0-imagenet1k-256"
)

def query(filename):
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

data = query("cats.jpg")
```
We can do the same in JavaScript as follows.
```js
import fetch from "node-fetch";
import fs from "fs";
async function query(filename) {
    const data = fs.readFileSync(filename);
    const response = await fetch(
        "https://api-inference.huggingface.co/models/apple/mobilevitv2-1.0-imagenet1k-256",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "POST",
            body: data,
        }
    );
    const result = await response.json();
    return result;
}
query("cats.jpg").then((response) => {
    console.log(JSON.stringify(response));
});
```
Finally, we can query the Inference API with curl.
```bash
curl https://api-inference.huggingface.co/models/apple/mobilevitv2-1.0-imagenet1k-256 \
        -X POST \
        --data-binary '@cats.jpg' \
        -H "Authorization: Bearer ${HF_API_TOKEN}"
```

### Transformer-based image segmentation
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformers-for-image-segmentation.md

# Transformer-based image segmentation

In this section, we'll explore how Vision Transformers compare to Convolutional Neural Networks (CNNs) in image segmentation and detail the architecture of a vision transformer-based segmentation model as an example.

This section assumes familiarity with image segmentation, Convolutional Neural Networks (CNNs), and the basics of Vision Transformers. If you're new to these concepts, we recommend exploring related materials in the course before proceeding.

## CNNs vs Transformers for Segmentation

Before the emergence of Vision Transformers, CNNs (Convolutional Neural Networks) were the go-to choice for image segmentation. Models like [U-Net](https://arxiv.org/abs/1505.04597) and [Mask R-CNN](https://arxiv.org/abs/1703.06870) captured the details needed to distinguish different objects in an image, making them state-of-the-art for segmentation tasks.

Despite their excellent results over the past decade, CNN-based models have some limitations, which Transformers aim to solve:

- **Spatial limitations**: CNNs learn local patterns through small receptive fields. This local focus makes it hard for them to "link" features that are far apart but related within the image, affecting their ability to accurately segment complex scenes/objects. Unlike CNNs, ViTs are designed to capture global dependencies within an image, leveraging the attention mechanism. This means ViT-based models consider the entire image at once, allowing them to understand complex relationships between distant parts of an image. For segmentation, this global perspective can lead to a more accurate delineation of objects.
- **Task-Specific Components**: Methods like Mask R-CNN incorporate hand-designed components (e.g., non-maximum suppression, spatial anchors) to encode prior knowledge about segmentation tasks. These components add complexity and require manual tuning. In contrast, ViT-based segmentation methods simplify the segmentation process by eliminating the need for hand-designed components, making them more straightforward to optimize.
- **Segmentation Task Specialization**: CNN-based segmentation models approach semantic, instance, and panoptic segmentation tasks individually, leading to specialized architectures for each task and separate research efforts into each. Recent ViT-based models like [MaskFormer](https://arxiv.org/abs/2107.06278), [SegFormer](https://arxiv.org/abs/2105.15203) or [SAM](https://arxiv.org/abs/2304.02643) provide a unified approach to tackling semantic, instance, and panoptic segmentation tasks within a single framework.

## Spotlight on MaskFormer: Illustrating ViT for Image Segmentation

MaskFormer ([paper](https://arxiv.org/abs/2107.06278), [Hugging Face transformers documentation](https://huggingface.co/docs/transformers/en/model_doc/maskformer)), introduced in the paper "Per-Pixel Classification is Not All You Need for Semantic Segmentation", is a model that predicts segmentation masks for each class present in an image, unifying semantic and instance segmentation in one architecture.

### MaskFormer Architecture

The architecture, illustrated in the figure from the paper, is composed of three components:

**Pixel-level Module**: Uses a backbone to extract image features and a pixel decoder to generate per-pixel embeddings.

**Transformer Module**: Employs a standard Transformer decoder to compute per-segment embeddings from image features and learnable positional embeddings (queries), encoding global information about each segment.

**Segmentation Module**: Generates class probability predictions and mask embeddings for each segment using a linear classifier and a Multi-Layer Perceptron (MLP), respectively. The mask embeddings are used in combination with per-pixel embeddings to predict binary masks for each segment.
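As a toy illustration (not MaskFormer's actual code) of how the segmentation module works, the snippet below combines hypothetical per-segment mask embeddings with per-pixel embeddings via a dot product to obtain one mask per segment:

```python
import torch

# hypothetical numbers of queries, channels, and feature-map resolution
N, C, H, W = 100, 256, 64, 64
mask_embeddings = torch.randn(N, C)           # from the MLP on the Transformer decoder outputs
per_pixel_embeddings = torch.randn(C, H, W)   # from the pixel decoder

# a dot product per (segment, pixel) pair gives one H x W mask logit map per segment
mask_logits = torch.einsum("nc,chw->nhw", mask_embeddings, per_pixel_embeddings)
binary_masks = mask_logits.sigmoid() > 0.5
print(mask_logits.shape, binary_masks.shape)  # torch.Size([100, 64, 64]) twice
```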

The model is trained with a binary mask loss, the same one as [DETR](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/detr), and a cross-entropy classification loss per predicted segment.

### Panoptic Segmentation Inference Example with Hugging Face Transformers

Panoptic segmentation is the task of labeling every pixel in an image with its category and identifying distinct objects within those categories, combining both semantic and instance segmentation.

```python
from transformers import pipeline
from PIL import Image
import requests

segmentation = pipeline("image-segmentation", "facebook/maskformer-swin-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

results = segmentation(inputs=image, subtask="panoptic")
results
```

As you can see below, the results include multiple instances of the same classes, each with distinct masks.

```bash
[
  {
    "score": 0.993197,
    "label": "remote",
    "mask": 
  },
  {
    "score": 0.997852,
    "label": "cat",
    "mask": 
  },
  {
    "score": 0.998006,
    "label": "remote",
    "mask": 
  },
  {
    "score": 0.997469,
    "label": "cat",
    "mask": 
  }
]
```

## Fine-tuning Vision Transformer-based Segmentation Models

With many pre-trained segmentation models available, transfer learning and finetuning are commonly used to adapt these models to specific use cases, especially since transformer-based segmentation models, like MaskFormer, are data-hungry and challenging to train from scratch.
These techniques leverage pre-trained representations to adapt these models to new data efficiently. Typically, for MaskFormer, the backbone, the pixel decoder, and the transformer decoder are kept frozen to leverage their learned general features, while the transformer module is finetuned to adapt its class prediction and mask generation capabilities to new segmentation tasks.

[This notebook](https://colab.research.google.com/github/huggingface/computer-vision-course/blob/main/notebooks/Unit%203%20-%20Vision%20Transformers/transfer-learning-segmentation.ipynb) will walk you through a transfer learning tutorial on image segmentation using MaskFormer.

## References

- [MaskFormer Hugging Face documentation](https://huggingface.co/docs/transformers/en/model_doc/maskformer)
- [Image Segmentation Hugging Face Task Guide](https://huggingface.co/docs/transformers/en/tasks/semantic_segmentation)

### Dilated Neighborhood Attention Transformer (DINAT)
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/dinat.md

# Dilated Neighborhood Attention Transformer (DINAT)

![DINAT Architecture Diagram](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/dinat_images/dina_comparison.png)
## Overview of Architecture

The Dilated Neighborhood Attention Transformer (DiNAT) is an innovative hierarchical vision transformer designed to enhance the performance of deep learning models, particularly in visual recognition tasks. Unlike traditional transformers, which employ self-attention mechanisms that may become computationally expensive as models scale, DiNAT introduces Dilated Neighborhood Attention (DiNA). DiNA extends a local attention mechanism called Neighborhood Attention (NA) by incorporating sparse global attention without additional computational overhead. This extension allows DiNA to capture more global context, expand the receptive field exponentially, and model longer-range inter-dependencies efficiently. 

DiNAT combines both NA and DiNA in its architecture, resulting in a transformer model capable of preserving locality, maintaining translational equivariance, and achieving significant performance boosts in downstream vision tasks. The experiments conducted with DiNAT demonstrate its superiority over strong baseline models such as NAT, Swin, and ConvNeXt across various visual recognition tasks.

## Core of DiNAT: Neighborhood Attention

DiNAT is based on Neighborhood Attention (NA) architecture, an attention mechanism specifically designed for computer vision tasks, aiming to capture relationships between pixels in an image efficiently. In a simple analogy, imagine you have an image, and each pixel in that image needs to understand and focus on its nearby pixels to make sense of the entire picture. Let's examine the key features of NA:

- **Local Relationships**: NA captures local relationships, allowing each pixel to consider information from its immediate surroundings. This is similar to how we might understand a scene by looking at the objects closest to us first before considering the entire view.

- **Receptive Field**: NA allows pixels to grow their understanding of their surroundings without needing too much extra computation. It dynamically expands their scope or "attention span" to include more distant neighbors when necessary.

Essentially, Neighborhood Attention is a technique that enables pixels in an image to focus on their surroundings, helping them understand local relationships efficiently. This localized understanding contributes to building a detailed understanding of the entire image while managing computational resources efficiently.

![DiNAT Architecture Diagram](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/dinat_images/dinat_architecture.png)

## The Evolution of DINAT 

The development of the Dilated Neighborhood Attention Transformer represents a significant improvement in vision transformers. It addresses the limitations of existing attention mechanisms. Initially, Neighborhood Attention was introduced to provide locality and efficiency, but it fell short in capturing global context. To overcome this limitation, the concept of Dilated Neighborhood Attention (DiNA) was introduced. DiNA extends the NA by expanding neighborhoods into larger sparse regions. This allows for the capture of more global context and exponentially increases the receptive field without adding computational burden. The next development is DiNAT, which combines localized NA with the expanded global context of DiNA. DiNAT achieves this by gradually changing dilation throughout the model, optimizing receptive fields, and simplifying feature learning.
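A tiny, hypothetical 1D example can make the effect of dilation concrete: with the same number of attended neighbors, a larger dilation factor spreads the neighborhood over a wider span at no extra cost (the function below is purely illustrative, not the NATTEN implementation).

```python
import torch

def neighbor_indices(position, length, k=3, dilation=1):
    # indices of the k neighbors attended to by a query at `position`, spaced by `dilation`
    offsets = torch.arange(-(k // 2), k // 2 + 1) * dilation
    return (position + offsets).clamp(0, length - 1)

print(neighbor_indices(position=8, length=16, k=3, dilation=1))  # tensor([7, 8, 9])
print(neighbor_indices(position=8, length=16, k=3, dilation=4))  # tensor([ 4,  8, 12])
```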

## Image Classification with DiNAT
You can classify images into the ImageNet-1k classes using the [shi-labs/dinat-mini-in1k-224](https://huggingface.co/shi-labs/dinat-mini-in1k-224) model with 🤗 transformers. You can also fine-tune it for your own use case.

```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

## References
- [DiNAT Paper](https://arxiv.org/abs/2209.15001) [1] 
- [Hugging Face DiNAT Transformer](https://huggingface.co/docs/transformers/model_doc/dinat) [2]  
- [Neighborhood Attention(NA)](https://arxiv.org/abs/2204.07143) [3] 
- [SHI Labs](https://huggingface.co/shi-labs) [4]  
- [OneFormer Paper](https://arxiv.org/abs/2211.06220) [5]
- [Hugging Face OneFormer](https://huggingface.co/docs/transformers/main/en/model_doc/oneformer) [6]

### OneFormer: One Transformer to Rule Universal Image Segmentation
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/oneformer.md

# OneFormer: One Transformer to Rule Universal Image Segmentation

## Introduction

OneFormer is a groundbreaking approach to image segmentation, a computer vision task that involves dividing an image into meaningful segments. Traditional methods used separate models and architectures for different segmentation tasks, like identifying objects (instance segmentation) or labeling regions (semantic segmentation). More recent attempts aimed to unify these tasks using shared architectures but still required separate training for each task.

Enter OneFormer, a universal image segmentation framework designed to overcome these challenges. It introduces a unique multi-task approach, allowing a single model to handle semantic, instance, and panoptic segmentation tasks without the need for separate training on each. The key innovation lies in a task-conditioned joint training strategy, where the model is guided by a task input, making it dynamic and adaptive to different tasks during both training and inference.

This breakthrough not only simplifies the training process but also outperforms existing models across various datasets. OneFormer achieves this by using panoptic annotations, unifying the ground truth information needed for all tasks. Additionally, the framework introduces query-text contrastive learning to better distinguish between tasks and improve overall performance.

## Background of OneFormer

![Oneformer Method](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/oneformer/oneformer.svg)
_Image taken from OneFormer paper_

To understand OneFormer's significance, let's look into the background of image segmentation. In image processing, segmentation involves dividing an image into different parts, which is crucial for tasks like recognizing objects and understanding the content of a scene. Traditionally, two main types of segmentation tasks were semantic segmentation, where pixels are labeled with categories like "road" or "sky," and instance segmentation, which identifies objects with well-defined boundaries.

Over time, researchers proposed panoptic segmentation as a way to unify semantic and instance segmentation tasks. However, even with these advancements, there were challenges. Existing models designed for panoptic segmentation still required separate training for each task, making them semi-universal at best.

This is where OneFormer comes in as a game-changer. It introduces a novel approach – a multi-task universal architecture. The idea is to train this framework only once, using a single universal architecture, a lone model, and just one dataset. The magic happens as OneFormer outperforms specialized frameworks across semantic, instance, and panoptic segmentation tasks. This breakthrough is not just about improving accuracy; it's about making image segmentation more universal and efficient. With OneFormer, the need for extensive resources and separate training for different tasks becomes a thing of the past.

## Core Concepts of OneFormer

![Task Conditioned Joint Training](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/oneformer/text_gen.svg)

Now, let's break down the key features of OneFormer that make it stand out:

### Task-Dynamic Mask

OneFormer uses a clever trick called "Task-Dynamic Mask" to better understand and tackle different types of image segmentation tasks. So, when the model encounters an image, it uses this "Task-Dynamic Mask" to decide whether to pay attention to the overall scene, identify specific objects with clear boundaries, or do a combination of both.

### Task-Conditioned Joint Training
One of the groundbreaking features of OneFormer is its task-conditioned joint training strategy. Instead of training separately for semantic, instance, and panoptic segmentation, OneFormer uniformly samples the task during training. This strategy enables the model to learn and generalize across different segmentation tasks simultaneously. By conditioning the architecture on the specific task through the task token, OneFormer unifies the training process, reducing the need for task-specific architectures, models, and datasets. This innovative approach significantly streamlines the training pipeline and resource requirements.

### Query-Text Contrastive Loss

Lastly, let's talk about "Query-Text Contrastive Loss." Think of it as a way for OneFormer to teach itself about the differences between tasks and classes. In the training process, the model compares the features it extracts from the image (queries) with the corresponding text descriptions (like "a photo with a car"). This helps the model understand the unique characteristics of each task and reduces confusion between different classes. OneFormer's "Task-Dynamic Mask" allows it to be versatile like a multitasking assistant, and the "Query-Text Contrastive Loss" helps it learn the specifics of each task by comparing visual features with textual descriptions.
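The sketch below is a toy, InfoNCE-style illustration of such a query-text contrastive objective (hypothetical shapes and a made-up temperature, not OneFormer's exact loss): matching query-text pairs are pulled together, all other pairs pushed apart.

```python
import torch
import torch.nn.functional as F

num_queries, dim, temperature = 8, 64, 0.07
query_embeds = F.normalize(torch.randn(num_queries, dim), dim=-1)  # from the transformer decoder
text_embeds = F.normalize(torch.randn(num_queries, dim), dim=-1)   # from text like "a photo with a car"

logits = query_embeds @ text_embeds.t() / temperature              # pairwise similarities
targets = torch.arange(num_queries)                                # the i-th query matches the i-th text
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
print(loss.item())
```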

By combining these core concepts, OneFormer becomes a smart and efficient tool for image segmentation, making the process more universal and accessible. 

## Conclusion

![result comperison](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/oneformer/plots.svg)
_Image taken from OneFormer paper_

In conclusion, the OneFormer framework represents a groundbreaking approach in the field of image segmentation, aiming to simplify and unify the task across various domains. Unlike traditional methods that rely on specialized architectures for each segmentation task, OneFormer introduces a novel multi-task universal architecture that requires only a single model, trained once on a universal dataset, to outperform existing frameworks. Additionally, the incorporation of query-text contrastive loss during training enhances the model's ability to learn inter-task and inter-class differences. OneFormer utilizes transformer-based architectures, inspired by recent successes in computer vision, and introduces task-guided queries to improve task sensitivity. The results are impressive, as OneFormer surpasses state-of-the-art models across semantic, instance, and panoptic segmentation tasks on benchmark datasets like ADE20k, Cityscapes, and COCO. The framework's performance is further enhanced with new ConvNeXt and DiNAT backbones.

In summary, OneFormer represents a significant step towards universal and accessible image segmentation. By introducing a single model capable of handling diverse segmentation tasks, the framework streamlines the segmentation process and reduces resource requirements.

## Example use of model

Let's see an example use of the model. The DiNAT backbone requires the NATTEN library, which may take a while to install.
```bash
pip install -q natten
```
The inference code below shows how to run each of the different segmentation types.

```python
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt

def run_segmentation(image, task_type):
    """Performs image segmentation based on the given task type.

    Args:
        image (PIL.Image): The input image.
        task_type (str): The type of segmentation to perform ('semantic', 'instance', or 'panoptic').

    Returns:
        PIL.Image: The segmented image.

    Raises:
        ValueError: If the task type is invalid.
    """

    processor = OneFormerProcessor.from_pretrained(
        "shi-labs/oneformer_ade20k_dinat_large"
    )  # Load once here
    model = OneFormerForUniversalSegmentation.from_pretrained(
        "shi-labs/oneformer_ade20k_dinat_large"
    )

    if task_type == "semantic":
        inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
        outputs = model(**inputs)
        predicted_map = processor.post_process_semantic_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]

    elif task_type == "instance":
        inputs = processor(images=image, task_inputs=["instance"], return_tensors="pt")
        outputs = model(**inputs)
        predicted_map = processor.post_process_instance_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]["segmentation"]

    elif task_type == "panoptic":
        inputs = processor(images=image, task_inputs=["panoptic"], return_tensors="pt")
        outputs = model(**inputs)
        predicted_map = processor.post_process_panoptic_segmentation(
            outputs, target_sizes=[image.size[::-1]]
        )[0]["segmentation"]

    else:
        raise ValueError(
            "Invalid task type. Choose from 'semantic', 'instance', or 'panoptic'"
        )

    return predicted_map

def show_image_comparison(image, predicted_map, segmentation_title):
    """Displays the original image and the segmented image side-by-side.

    Args:
        image (PIL.Image): The original image.
        predicted_map (PIL.Image): The segmented image.
        segmentation_title (str): The title for the segmented image.
    """

    plt.figure(figsize=(12, 6))
    plt.subplot(1, 2, 1)
    plt.imshow(image)
    plt.title("Original Image")
    plt.axis("off")
    plt.subplot(1, 2, 2)
    plt.imshow(predicted_map)
    plt.title(segmentation_title + " Segmentation")
    plt.axis("off")
    plt.show()

url = "https://huggingface.co/datasets/shi-labs/oneformer_demo/resolve/main/ade20k.jpeg"
response = requests.get(url, stream=True)
response.raise_for_status()  # Check for HTTP errors
image = Image.open(response.raw)

task_to_run = "semantic"
predicted_map = run_segmentation(image, task_to_run)
show_image_comparison(image, predicted_map, task_to_run)
```
![semantic segmentation](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/oneformer/oneformer_semantic.png)

## References

- [OneFormer Paper](https://arxiv.org/pdf/2211.06220.pdf)  
- [HuggingFace OneFormer model](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)

### DEtection TRansformer (DETR)
https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/detr.md

# DEtection TRansformer (DETR)
## Overview of architecture
DETR is mainly used for object detection, the task of detecting objects in an image. For example, the input to the model could be an image of a road, and the output could be `[('car',X1,Y1,W1,H1),('pedestrian',X2,Y2,W2,H2)]`, in which X, Y denote the coordinates of the bounding box and W, H its width and height. 
A traditional object detection model like YOLO relies on hand-crafted components such as anchor box priors, which require initial guesses of object locations and shapes and affect downstream training. Post-processing steps are then used to remove overlapping bounding boxes, which require careful selection of filtering heuristics. 
DEtection TRansformer, DETR for short, simplifies the detector by using an encoder-decoder transformer after the feature extraction backbone to directly predict bounding boxes in parallel, requiring minimal post-processing. 

![Architecture Diagram of DETR](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/DETR.png)
The model architecture of DETR begins with a CNN backbone, similar to other image-based networks. Its output is processed and fed into a transformer encoder, resulting in N embeddings. A transformer decoder then takes N learned positional embeddings (called object queries) and attends to the encoder output, generating another N embeddings. As a final step, each of the N embeddings is put through individual feed-forward layers to predict the width, height, and coordinates of the bounding box, as well as the object class (or whether there is an object at all). 

## Key Features

### Encoder-Decoder
As with other transformers, the transformer encoder expects the output of the CNN backbone to be a sequence. Thus, the feature map of size `[dimension, height, width]` is downsized and then flattened to `[dimension, sequence length]`, where the sequence length is smaller than `height x width`.
![Feature Maps of Encoder](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/DETR_FeatureMaps.png)
_**Left**: 12 out of 256 dimensions in the feature maps are visualized. Each dimension extracts some features of the original cat image while downsizing the original image. Some dimensions have a higher focus on the patterns on the cat; some have a higher focus on the bed sheets._ 
_**Right**: Keeping the original feature dimension of size 256, the width and height are further downsized and flattened into size 850._  
Since transformers are permutation invariant, positional embeddings are added to both the encoder and decoder to remind the model where on the image the embeddings come from. In the encoder, fixed positional encodings are used, while in the decoder, learned positional encodings (object queries) are used. Fixed encodings are similar to the ones used in the original Transformer paper, in which the encodings are defined by sinusoidal functions of varying frequencies at different feature dimensions. It gives the sense of position without having any learned parameters, indexed by the position on the image. Learned encodings are also indexed by the positions, but each position has a separate encoding that is learned throughout training to denote the positions in a way that the model understands. 
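For intuition, here is a minimal sketch of such fixed sinusoidal encodings (a 1D version for illustration; DETR actually uses a 2D variant over image rows and columns). The sequence length and feature dimension match the numbers mentioned above, but the function itself is a generic textbook implementation, not DETR's code.

```python
import torch

def sinusoidal_encoding(num_positions, dim):
    # sine/cosine waves of varying frequencies, one pair of dimensions per frequency
    position = torch.arange(num_positions).unsqueeze(1)                                   # (L, 1)
    freq = torch.exp(torch.arange(0, dim, 2) * (-torch.log(torch.tensor(10000.0)) / dim)) # (dim/2,)
    enc = torch.zeros(num_positions, dim)
    enc[:, 0::2] = torch.sin(position * freq)  # even dims: sine
    enc[:, 1::2] = torch.cos(position * freq)  # odd dims: cosine
    return enc

print(sinusoidal_encoding(850, 256).shape)  # torch.Size([850, 256])
```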

### Set-based Global Loss Function
In YOLO, a popular object detection model, the loss function comprises the bounding box, objectness (i.e., the probability of an object existing in a region of interest), and class losses. The loss is calculated over a fixed number of bounding boxes per grid cell. In DETR, on the other hand, the architecture is expected to generate unique bounding boxes in a permutation-invariant manner (i.e., the order of the detections does not matter in the output, and the bounding boxes must vary and cannot all be the same). Thus, matching is required to assess how good the predictions are.

**Bipartite Matching**   
Bipartite matching is a way to compute a one-to-one matching between the ground-truth bounding boxes and the predicted boxes. It finds the matches with the highest similarity between ground-truth and predicted bounding boxes and classes. This ensures that the closest prediction is matched with the corresponding ground truth so that the boxes and classes can be properly adjusted in the loss function. Without matching, predictions not aligned with the order of the ground truth would be marked as incorrect even if they were correct.
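The toy snippet below (not DETR's full matcher, which also includes class and generalized IoU costs) shows the core idea: build a cost matrix between predictions and ground truth and solve the one-to-one assignment with the Hungarian algorithm.

```python
import torch
from scipy.optimize import linear_sum_assignment

pred_boxes = torch.rand(5, 4)   # 5 predicted boxes (cx, cy, w, h), random for illustration
gt_boxes = torch.rand(3, 4)     # 3 ground-truth boxes

cost = torch.cdist(pred_boxes, gt_boxes, p=1)           # (5, 3) pairwise L1 costs
pred_idx, gt_idx = linear_sum_assignment(cost.numpy())  # minimal-cost one-to-one matching
for p, g in zip(pred_idx, gt_idx):
    print(f"prediction {p} <-> ground truth {g} (cost {cost[p, g]:.3f})")
```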

## Using DETR to Detect Objects
To see an example of how you can perform inference with DETR using Hugging Face transformers, see `DETR.ipynb`.

## Evolution of DETR
### Deformable DETR
Two of the main problems of DETR are slow training convergence and suboptimal detection of small objects. 
**Deformable Attention**   
The first problem is addressed by deformable attention, which reduces the number of sampling points to attend to. Traditional attention is inefficient because it attends globally, which heavily limits the resolution an image can have. With deformable attention, the model only attends to a fixed number of sampling points around each reference point, and the reference points are learned by the model based on the input. For example, in an image of a dog, a reference point may be in the center of the dog, with sampling points near the ears, mouth, tail, etc. 

**Multi-scale Deformable Attention Module**   
The second problem is resolved similarly to YOLOv3, in which multi-scale feature maps are introduced. In convolutional neural networks, earlier layers extract smaller details (ex. lines) while later layers extract larger details (ex. wheels, ears). In a similar manner, different layers of deformable attention result in different levels of resolution. By connecting the outputs of some of these layers from the encoder to the decoder, it allows for the model to detect objects of a multitude of sizes. 

### Conditional DETR
Conditional DETR also sets out to resolve the problem of slow training convergence in the original DETR, resulting in convergence that is over 6.7 times faster. The authors found that the object queries are general and are not specific to the input image. Using **Conditional Cross-Attention** in the decoder, the queries can better localize the areas for bounding box regression.
![A decoder layer for Conditional DETR](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/DETR_DecoderLayer.png)
_Left: DETR Decoder Layer. Right: Conditional DETR Decoder Layer_   
The original DETR and Conditional DETR decoder layers are compared in the figure above, with the main difference being the query input of the cross-attention block. The authors make a distinction between the content query cq (the decoder self-attention output) and the spatial query pq. The original DETR simply adds them together. In Conditional DETR, they are concatenated, with cq focusing on the content of the object and pq focusing on the bounding box regions.   
The spatial query pq is the result of projecting both the decoder embeddings and the object queries into the same space (to become T and ps, respectively) and multiplying them together. The previous layer's decoder embeddings contain information about the bounding box regions, and the object queries contain information about the learned reference points for each bounding box. Thus, their projections combine into a representation that allows cross-attention to measure similarity with the encoder output and its sinusoidal positional embedding. This is more effective than DETR, which only uses object queries and fixed reference points.

## DETR Inference 

You can run inference with existing DETR models from the Hugging Face Hub as shown below:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# initialize the model
processor = DetrImageProcessor.from_pretrained(
    "facebook/detr-resnet-101", revision="no_timm"
)
model = DetrForObjectDetection.from_pretrained(
    "facebook/detr-resnet-101", revision="no_timm"
)

# preprocess the inputs and infer
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to the COCO API format,
# keeping only detections with a confidence score above 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

Outputs are below.

```bash 
Detected cat with confidence 0.998 at location [344.06, 24.85, 640.34, 373.74]
Detected remote with confidence 0.997 at location [328.13, 75.93, 372.81, 187.66]
Detected remote with confidence 0.997 at location [39.34, 70.13, 175.56, 118.78]
Detected cat with confidence 0.998 at location [15.36, 51.75, 316.89, 471.16]
Detected couch with confidence 0.995 at location [-0.19, 0.71, 639.73, 474.17]
```

## PyTorch Implementation of DETR
The implementation of DETR from the original paper is shown below:
```python
import torch
from torch import nn
from torchvision.models import resnet50

class DETR(nn.Module):
    def __init__(
        self, num_classes, hidden_dim, nheads, num_encoder_layers, num_decoder_layers
    ):
        super().__init__()
        self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
        self.conv = nn.Conv2d(2048, hidden_dim, 1)
        self.transformer = nn.Transformer(
            hidden_dim, nheads, num_encoder_layers, num_decoder_layers
        )
        self.linear_class = nn.Linear(hidden_dim, num_classes + 1)
        self.linear_bbox = nn.Linear(hidden_dim, 4)
        self.query_pos = nn.Parameter(torch.rand(100, hidden_dim))
        self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
        self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))

    def forward(self, inputs):
        x = self.backbone(inputs)
        h = self.conv(x)
        H, W = h.shape[-2:]
        pos = (
            torch.cat(
                [
                    self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
                    self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
                ],
                dim=-1,
            )
            .flatten(0, 1)
            .unsqueeze(1)
        )
        h = self.transformer(
            pos + h.flatten(2).permute(2, 0, 1), self.query_pos.unsqueeze(1)
        )
        return self.linear_class(h), self.linear_bbox(h).sigmoid()
```
### Going line by line in the `forward` function: 
**Backbone**   
The input image is first put through a ResNet backbone and then a convolution layer, which reduces the dimension to the `hidden_dim`.
```python
x = self.backbone(inputs)
h = self.conv(x)
```
They are declared in the `__init__` function.
```python
self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
self.conv = nn.Conv2d(2048, hidden_dim, 1)
```

**Positional Embeddings**

While in the paper fixed and trained embeddings are used in the encoder and decoder respectively, the authors used trained embeddings for both in the implementation for simplicity.    
```python
pos = (
    torch.cat(
        [
            self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
            self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
        ],
        dim=-1,
    )
    .flatten(0, 1)
    .unsqueeze(1)
)
```
They are declared here as `nn.Parameter`. The row and column embeddings combined denote the locations in the image.
```python
self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
```
**Resize**   
Before going into the transformer, the features with size `(batch size, hidden_dim, H, W)` are reshaped to `(hidden_dim, batch size, H*W)`. This makes them a sequential input for the transformer.
```python
h.flatten(2).permute(2, 0, 1)
```
**Transformer**   
The `nn.Transformer` module takes the first parameter as the input to the encoder and the second parameter as the input to the decoder. As you can see, the encoder takes in the resized features added to the positional embeddings, while the decoder takes in `query_pos`, the decoder positional embedding (the object queries).
```python
h = self.transformer(pos + h.flatten(2).permute(2, 0, 1), self.query_pos.unsqueeze(1))
```
**Feed-Forward Network**   
In the end, the output, which is a tensor of size `(query_pos_dim, batch size, hidden_dim)`, is fed through two linear layers. 
```python
return self.linear_class(h), self.linear_bbox(h).sigmoid()
```
The first linear layer predicts the class; an additional class is added for the `No Object` case.
```python
self.linear_class = nn.Linear(hidden_dim, num_classes + 1)
```
The second linear layer predicts the bounding box with an output size 4 for the xy coordinates, height and width.
```python
self.linear_bbox = nn.Linear(hidden_dim, 4)
```

## References
- [DETR](https://arxiv.org/abs/2005.12872) 
- [YOLO](https://arxiv.org/abs/1506.02640) 
- [YOLOv3](https://arxiv.org/abs/1804.02767) 
- [Conditional DETR](https://arxiv.org/abs/2108.06152) 
- [Deformable DETR](https://arxiv.org/abs/2010.04159) 
- [`facebook/detr-resnet-50`](https://huggingface.co/facebook/detr-resnet-50)

### Hyena
https://huggingface.co/learn/computer-vision-course/unit13/hyena.md

# Hyena 

## Overview

### What is Hyena 
While the Transformer is a well-established and very capable architecture, its quadratic computational cost is an expensive price to pay, especially at inference time. 

Hyena is a new type of operator that serves as a substitute for the attention mechanism. 
Developed by Hazy Research, it features subquadratic computational complexity and is constructed by interleaving implicitly parametrized long convolutions with data-controlled gating.

Long convolutions are similar to standard convolutions, except that the kernel is the size of the input. 
This is equivalent to having a global receptive field instead of a local one. 
Having an implicitly parametrized convolution means that the convolution filter values are not learned directly; instead, a function that can recover those values is learned. 

Gating mechanisms control the path through which information flows in the network. They help define how long a piece of information should be remembered, and usually consist of element-wise multiplications.

![transformer2hyena.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/transformer2hyena.png)
The Hyena operator consists of recursively computing convolutions and multiplicative element-wise gating operations with one projection at a time, until all projections are exhausted. This approach builds on top of the [Hungry Hungry Hippo (H3)](https://arxiv.org/abs/2212.14052) mechanism, also developed by the same researchers. The H3 mechanism is characterized by its data-controlled, parametric decomposition, acting as a surrogate attention mechanism.

Another way of understanding Hyena is to consider it as a generalization of the H3 layer for an arbitrary number of projections, where the Hyena layer extends recursively H3 with a different choice of parametrization for the long convolution.
![hyena_recurence.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/hyena_recurence.png)
### From Attention to Hyena operator

The attention mechanism is characterized by two fundamental properties:
1. It possesses a global contextual awareness, enabling it to assess interactions between pairs of visual tokens within a sequence.
2. It is data-dependent, meaning the operation of the attention equation varies based on the input data itself, specifically the input projections  \\(q\\), \\(k\\), \\(v\\).

![Alt text](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/self-attention-schema.png)

The attention mechanism is defined by three projections: query \\(q\\), key \\(k\\), and value \\(v\\), which are generated by multiplying the input visual tokens by three matrices \\(W_q\\), \\(W_k\\), and \\(W_v\\) that are learned during training. 

For a given visual token, we can compute an attention score using those projections. The attention score determines how much focus to place on other parts of the input image.  
For a nice, detailed explainer of attention, you can refer to this [illustrated blog article](https://jalammar.github.io/illustrated-transformer/).

In an attempt to replicate these characteristics, the Hyena operator incorporates two key elements:
1. It employs long convolution to provide a sense of global context, akin to the first property of the attention mechanism.
2. For data dependency, Hyena uses element-wise gating. This is essentially an element-wise multiplication of input projections, mirroring the data-dependent nature of traditional attention.

In terms of computational efficiency, the Hyena operator attains an evaluation time complexity of \\(O(L \log_2 L)\\), a noteworthy improvement in processing speed.

### Hyena operator

Let's delve into the second-order recursion of the Hyena operator, which simplifies its representation for illustrative purposes.
![hyena_mechanism.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/hyena-order2-schema.png)

In this order, we compute 3 projections analogous to \\(q\\), \\(k\\) and \\(v\\) attention vectors from the Attention mechanism. 

However, unlike the attention mechanism, which typically uses a single dense layer to project the input sequence into representations, Hyena uses both a dense layer and standard convolutions performed on each channel (referred to as \\(T_q\\), \\(T_k\\), and \\(T_v\\) in the schema, but implemented as explicit convolutions in practice). The softmax function is also discarded. 

The core idea is to repeatedly apply linear operators that are fast to evaluate to an input sequence \\(u \in \mathbb{R}^{L}\\)  with \\(L\\) the length of the sequence. 
Because global convolutions have a large number of parameters, they are expensive to train. A notable design choice is the use of **implicit convolutions**. 
Unlike standard convolutional layers, the convolution filter \\(h\\) is learned implicitly with a small neural network \\(\gamma_{\theta}\\) (also called the Hyena Filter). 
This network takes the positional index and potentially positional encodings as inputs. From the outputs of \\(\gamma_{\theta}\\) one can construct a Toeplitz matrix \\(T_h\\). 

This implies that instead of learning the values of the convolution filter directly, we learn a mapping from a temporal positional encoding to the values, which is more computationally efficient, especially for long sequences.

It's important to note that the mapping function can be conceptualized within various abstract models, such as Neural Fields or State Space Models (S4), as discussed in the H3 paper.

### Implicit convolutions

A linear convolution can be formulated as a matrix multiplication in which one of the inputs is reshaped into a [Toeplitz matrix](https://en.wikipedia.org/wiki/Toeplitz_matrix).

This transformation leads to greater parameter efficiency. 
Instead of directly learning fixed kernel weight values, a parametrized function is employed. 
This function intelligently deduces the values of the kernel weights and their dimensions during the network's forward pass, optimizing resource use.

One way to build intuition about implicit parametrization is to think of an affine function \\(y = f(x) = a \times x + b\\) we want to learn. Instead of learning the position of every single point, it is more efficient to learn \\(a\\) and \\(b\\) and compute the points when needed. 

In practice, convolutions are accelerated to subquadratic time complexity with the Cooley-Tukey fast Fourier transform (FFT) algorithm. 
Further work, such as FlashFFTConv based on a Monarch decomposition, has been conducted to speed this computation up even more. 
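The snippet below is a hedged toy sketch of these two ideas together: a tiny MLP implicitly parametrizes a long filter from positional inputs, and the convolution itself is evaluated with the FFT in \\(O(L \log L)\\). The network sizes and the untrained filter are purely illustrative.

```python
import torch
from torch import nn

L, d_hidden = 1024, 32
positions = torch.arange(L).float().unsqueeze(1) / L                # (L, 1) positional input
filter_mlp = nn.Sequential(nn.Linear(1, d_hidden), nn.GELU(), nn.Linear(d_hidden, 1))

h = filter_mlp(positions).squeeze(-1)   # implicit filter of length L, produced by the MLP
u = torch.randn(L)                      # input sequence

# circular convolution via the FFT (zero-pad to 2L for a strictly linear convolution)
y = torch.fft.irfft(torch.fft.rfft(u) * torch.fft.rfft(h), n=L)
print(y.shape)  # torch.Size([1024])
```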

### Wrapping Up Everything

![nd_hyena.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/nd_hyena.png)
In essence, Hyena can be performed in two steps (see the sketch below): 
1. Compute a set of N+1 linear projections of the input, similarly to attention (there can be more than 3 projections).
2. Mix the projections: the matrix \\(H(u)\\) is defined by a combination of matrix multiplications.
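Here is a toy sketch of that recurrence for order N = 2 (random projections and filters, purely illustrative, not the official implementation): alternate FFT-based long convolutions with element-wise gating until all projections are used.

```python
import torch

def fft_conv(u, h):
    # circular long convolution of two length-L signals in O(L log L)
    return torch.fft.irfft(torch.fft.rfft(u) * torch.fft.rfft(h), n=u.shape[-1])

L = 512
v, x1, x2 = torch.randn(3, L)   # projections of the input sequence (analogous to q, k, v)
h1, h2 = torch.randn(2, L)      # long filters (implicitly parametrized in the real model)

z = fft_conv(v, h1) * x1        # long convolution followed by element-wise gating
y = fft_conv(z, h2) * x2        # repeat until all projections are exhausted
print(y.shape)  # torch.Size([512])
```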

## Why Hyena Matters
 
The H3 mechanism came close to the perplexity of multi-headed attention, but a narrow gap in perplexity still had to be bridged. 

A variety of attention replacements have been proposed over the last few years, and evaluating the quality of a new architecture during the exploratory phase remains challenging. 
Creating a versatile layer that can effectively process N-Dimensional data within deep neural networks while maintaining good expressiveness is a significant area of ongoing research.

Empirically, Hyena operators are able to significantly shrink the quality gap with attention at scale, reaching similar perplexity and downstream performance with a smaller computational budget and without hybridization with attention. 
It has already achieved state-of-the-art status for [DNA sequence modeling](https://arxiv.org/abs/2306.15794) and shows great promise in the field of large language models with StripedHyena-7B. 

Similarly to Attention, Hyena can be used in computer vision tasks. In image classification, Hyena is able to match attention in accuracy when training on ImageNet-1k from scratch.

![hyena_vision_benchmarks.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/hyena_vision_benchmarks.png)
Hyena has been applied to N-dimensional data with the Hyena N-D layer and can be used as a direct drop-in replacement within ViT, Swin, and DeiT backbones. 

![vit_vs_hyenavit.png](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/outlook_hyena_images/vit_vs_hyenavit.png)
There is a noticeable enhancement in GPU memory efficiency as the number of image patches increases.

Hyena Hierarchy facilitates the development of larger, more efficient convolution models for long sequences. 
The potential of Hyena-type models for computer vision lies in more efficient GPU memory consumption over patches, which would allow: 
- the processing of larger, higher-resolution images
- the use of smaller patches, enabling fine-grained feature representations 

These qualities would be particularly beneficial in areas such as Medical Imaging and Remote Sensing.

## Towards Transformers Alternatives 
Building new layers from simple design principles is an emerging research field that is progressing very quickly. 

The H3 mechanism serves as the foundation for many State Space Model based (SSM) architectures, typically featuring a structure that alternates between a block inspired by linear attention and a multi-layer perceptron (MLP) block. 
Hyena, as an enhancement of this approach, has paved the way for even more efficient architectures such as Mamba and its derivatives for vision (Vision Mamba, VMamba etc...).

## Further Reading
- Hyena official repo: [Convolutions for Sequence Modeling](https://github.com/HazyResearch/safari)
- On the landscape of subquadratic models: [The Safari of Deep Signal Processing: Hyena and Beyond · Hazy Research (stanford.edu)](https://hazyresearch.stanford.edu/blog/2023-06-08-hyena-safari)
- On speeding up the FFT algorithm: [FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores · Hazy Research (stanford.edu)](https://hazyresearch.stanford.edu/blog/2023-11-13-flashfftconv)
- On the subquadratic model landscape: [Zoology (Blogpost 1): Measuring and Improving Recall in Efficient Language Models · Hazy Research (stanford.edu)](https://hazyresearch.stanford.edu/blog/2023-12-11-zoology1-analysis)
- Hyena applied to computer vision: [[2309.13600] Multi-Dimensional Hyena for Spatial Inductive Bias (arxiv.org)](https://arxiv.org/abs/2309.13600)
- An improved approach: [[2401.09417] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model (arxiv.org)](https://arxiv.org/abs/2401.09417)

### Image-based Joint-Embedding Predictive Architecture (I-JEPA)
https://huggingface.co/learn/computer-vision-course/unit13/i-jepa.md

# Image-based Joint-Embedding Predictive Architecture (I-JEPA)

## Overview

The Image-based Joint-Embedding Predictive Architecture (I-JEPA) is a groundbreaking self-supervised learning model [introduced by Meta AI in 2023](https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/). It tackles the challenge of understanding images without relying on traditional labels or hand-crafted data augmentations.
To get to know I-JEPA better, let’s first discuss a few concepts.

### Invariance-based vs. Generative Pretraining Methods

We can say that there are broadly two main approaches for self-supervised learning from images: invariance-based methods and generative methods. Both approaches have their strengths and weaknesses.

- **Invariance-based methods**: In these methods, the model tries to produce similar embeddings for different views of the same image. These different views are, of course, hand-crafted: the image augmentations we're all familiar with, such as rotating, scaling, and cropping. These methods are good at producing representations at a high semantic level, but the problem is that they introduce strong biases that may be detrimental to certain downstream tasks, for example tasks like image classification and instance segmentation that do not require data augmentations.

- **Generative methods**: The model tries to reconstruct the input image using these methods. That’s why these methods are sometimes called reconstruction-based self-supervised learning. Masks hide patches of the input image, and the model tries to reconstruct these corrupted patches at the pixel or token level (let’s keep this point in mind). This masked approach can easily generalize beyond image modality but doesn’t produce representations at the quality level of invariance-based methods. Also, these methods are computationally expensive and require large datasets for robust training.

Now let’s talk about Joint-Embedding Architectures.

### Joint-Embedding Architectures

This is a recent and popular approach for self-supervised learning from images in which two networks are trained to produce similar embeddings for different views of the same image. Basically, they train two networks to "speak the same language" about different views of the same picture. A common choice is the Siamese network architecture where the two networks share the same weights. But like everything else, it has its own problems:

- **Representation collapse**: A case in which the model produces the same representation regardless of the input.

- **Input compatibility criteria**: Finding good and appropriate compatibility measures between inputs can sometimes be challenging.

An example of a Joint-Embedding Architecture is [VICReg](https://arxiv.org/abs/2105.04906).

Different training methods can be employed to train Joint-Embedding Architectures, for example:

- Contrastive methods
- Non-Contrastive methods
- Clustering methods

So far so good, now to I-JEPA. As a start, the picture below from the I-JEPA paper shows the difference between Joint-Embedding methods, generative methods, and I-JEPA.

![I-JEPA Comparisons](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/i-jepa-1.png)

### Image-based Joint-Embedding Predictive Architecture (I-JEPA)

I-JEPA tries to improve on both generative and joint-embedding methods. Conceptually, it is similar to generative methods but with the following key differences:

1. **Abstract prediction**: This is, in my opinion, the most fascinating aspect of I-JEPA. Remember how generative methods try to reconstruct the corrupted input at the pixel level? Unlike them, I-JEPA predicts the missing information in representation space using its introduced predictor, which is why this is called abstract prediction. This leads to the model learning more powerful semantic features.

2. **Multi-block masking**: Another design choice that improves the semantic features produced by I-JEPA is masking sufficiently large blocks of the input image.

### I-JEPA Components

The previous diagrams show and compare the I-JEPA architecture. Below is a brief description of its main components:

1. **Target Encoder (y-encoder)**: Encodes the target image; the target blocks are produced by masking its output.

2. **Context Encoder (x-encoder)**: Encodes a randomly sampled context block from the image to obtain a corresponding patch-level representation.

3. **Predictor**: Takes as input the output of the context encoder and a mask token for each patch we wish to predict and tries to predict the masked target blocks.

The target encoder, context encoder, and predictor all use a Vision Transformer (ViT) architecture. You can find a refresher on ViTs in Unit 3 of this course.

The image below from the paper illustrates how I-JEPA works.

![I-JEPA method](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/i-jepa-2.png)
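
To make the idea of predicting in representation space more concrete, below is a minimal, hypothetical sketch: the shapes, the random tensors standing in for the encoders, and the choice of loss are illustrative assumptions, not the official implementation. The key point is that the loss compares predicted patch embeddings with target-encoder embeddings rather than raw pixels.

```python
import torch
import torch.nn.functional as F

num_patches, dim = 196, 768  # e.g. a 14x14 grid of patch embeddings from a ViT

# Stand-ins for the model outputs: in I-JEPA these would come from the
# target encoder (no gradients flow through it) and from the predictor (trainable)
with torch.no_grad():
    target_embeddings = torch.randn(num_patches, dim)
predicted_embeddings = torch.randn(num_patches, dim, requires_grad=True)

# Indices of the masked target block(s) the predictor must reconstruct
masked_idx = torch.arange(50, 90)

# Abstract prediction: the loss lives in representation space, not pixel space
loss = F.mse_loss(predicted_embeddings[masked_idx], target_embeddings[masked_idx])
loss.backward()
print(loss.item())
```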

## Why It Matters

So, why I-JEPA? I-JEPA introduced many new design features while still being a simple and efficient method for learning semantic image representations without relying on hand-crafted data augmentations. Briefly,

1. I-JEPA outperforms pixel-reconstruction methods such as Masked autoencoders (MAE) on ImageNet-1K linear probing, semi-supervised 1% ImageNet-1K, and semantic transfer tasks.

2. I-JEPA is competitive with view-invariant pretraining approaches on semantic tasks and achieves better performance on low-level vision tasks such as object counting and depth prediction.

3. By using a simpler model with less rigid inductive bias, I-JEPA is applicable to a wider set of tasks.

4. I-JEPA is also scalable and efficient. Pretraining on ImageNet requires *less than 1200 GPU hours*.

## References

- [I-JEPA paper](https://arxiv.org/abs/2301.08243)

- [Meta's blog post about I-JEPA](https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/)

- [I-JEPA official GitHub repository](https://github.com/facebookresearch/ijepa)

### Overview
https://huggingface.co/learn/computer-vision-course/unit13/hiera.md

## Overview

### What is Hiera?

[Hiera](https://arxiv.org/abs/2306.00989) (Hierarchical Vision Transformer) is an architecture that achieves high accuracy without the need for specialized components found in other vision models. The authors propose pretraining Hiera with a strong visual pretext task to remove unnecessary complexity and create a faster and more accurate model.

![Hiera Architecture](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/hiera_images/hiera_architecture.png)

### From CNNs to ViTs

CNNs and hierarchical models are well-suited for computer vision tasks because they can effectively capture the hierarchical and spatial structure of visual data. These models use fewer channels but higher spatial resolution in the early stages to extract simpler features and more channels but lower spatial resolution in the later stages to extract more complex features.

![CNNs](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/hiera_images/CNN_architecture.webp)

On the other hand, Vision Transformers (ViTs) are more accurate, scalable, and architecturally simple models that took computer vision by storm when they were introduced. However, this simplicity comes at a cost: they lack this “vision inductive bias” (their architecture is not designed to work specifically with visual data).

Many efforts have been made to adapt ViTs, generally by adding hierarchical components to compensate for this lack of inductive bias in their architecture. Unfortunately, all of the resulting models turned out to be slower, bigger and more difficult to scale.

### Hiera's Approach: Pretraining Task is All You Need

The authors of the Hiera paper argue that a ViT model can learn spatial reasoning and perform well on computer vision tasks when pretrained with a strong visual pretext task called MAE. Thus, they can remove unnecessary components and complexity from state-of-the-art multi-stage vision transformers while achieving greater accuracy and speed.

What components are the paper authors actually removing? To understand this we first have to introduce [MViTv2](https://arxiv.org/abs/2112.01526) which is the base hierarchical architecture from which Hiera is derived. MViTv2 learns multi-scale representations over its four stages: it starts by modeling low level features with a small channel capacity but high spatial resolution, and then in each stage trades channel capacity for spatial resolution to model more complex high-level features in deeper layers.

![MViTv2](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/hiera_images/mvitv2.png)

Instead of digging deep into MViTv2's key features (since it's not our main topic), we will briefly explain them in the next section to illustrate how the researchers created Hiera by simplifying this base architecture.

### Simplifying MViTv2

![Simplifying MViTv2](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/hiera_images/hiera_changes.png)

This table lists all the changes that authors made to MViTv2 in order to create Hiera and how each change affects accuracy and speed on images and video.

- **Replacing relative with absolute position embeddings**: MViTv2 swaps the absolute position embeddings from the original [ViT](https://arxiv.org/abs/2010.11929) paper with relative ones added to attention in each block. The authors undo this change because it adds complexity to the model and, as can be seen in the table, relative position embeddings are not necessary when training with MAE (both accuracy and speed improve with this change).
- **Removing convolutions**: Since the key idea of the paper is that a model can learn spatial biases by pretraining with a strong visual pretext task, removing convolutions, which are vision-specific modules that add potentially unnecessary overhead, is an important change. The authors first replace every convolution with a max pooling layer, which initially decreases accuracy because of the large impact this has on the image features. However, they realize that they can remove some of these extra max pooling layers, specifically the ones with a stride of 1, since these are basically applying a ReLU to every feature map. By doing so, the authors nearly recover the earlier accuracy while speeding up the model by 22% for images and 27% for video.

### Masked Autoencoder
Masked Autoencoder (MAE) is an unsupervised training paradigm. As with any other autoencoder, it encodes high-dimensional data (images) into a lower-dimensional representation (embeddings) in such a way that the data can be decoded back into the original high-dimensional form. The visual MAE technique, however, drops a large fraction of the patches (around 75%), encodes the remaining ones, and then tries to predict the missing patches. This idea has been used extensively in recent years as a pretraining task for image encoders. A toy illustration of the masking step is shown after the figure below.

![MAE](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/hiera_images/mae.png)
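
Here is a toy sketch of the masking step only, assuming patch embeddings have already been computed; the 75% ratio follows the description above, and everything else (shapes, names) is illustrative.

```python
import torch

num_patches, dim, mask_ratio = 196, 768, 0.75
patches = torch.randn(1, num_patches, dim)  # patch embeddings of a single image

# Randomly keep ~25% of the patches; the rest are dropped and must be predicted
num_keep = int(num_patches * (1 - mask_ratio))
shuffled = torch.randperm(num_patches)
keep_idx, masked_idx = shuffled[:num_keep], shuffled[num_keep:]

visible = patches[:, keep_idx]  # only these visible patches go through the encoder
print(visible.shape)            # torch.Size([1, 49, 768])
```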

## Why Does Hiera Matter?

In an era dominated by transformer models, there are still many attempts to improve this simple architecture by adding CNN-style complexity to turn it back into a hierarchical model. Although hierarchical models excel in computer vision, this study demonstrates that building hierarchical transformers doesn't necessitate intricate architectural modifications. Instead, focusing on the training task alone can yield simple, fast, and accurate models.

### Retention In Vision
https://huggingface.co/learn/computer-vision-course/unit13/retention.md

# Retention In Vision

## What are Retention Networks
Retentive Network (RetNet) is a foundational architecture proposed for large language models in the paper [Retentive Network: A Successor to Transformer for Large Language Models](https://arxiv.org/abs/2307.08621). This architecture is designed to address key challenges in the realm of large-scale language modeling: training parallelism, low-cost inference, and good performance.

![LLM Challenges](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/LLM%20Challenges.png)
RetNet is able to tackle these challenges by introducing the Multi-Scale Retention (MSR) mechanism, which is an alternative to the multi-head attention mechanism commonly used in Transformer models. 
MSR has a dual form of recurrence and parallelism, so models can be trained in parallel while conducting inference recurrently. We will explore RetNet in detail later in this chapter.

The Multi-Scale Retention mechanism operates under three computation paradigms:
- **Parallel Representation:** This aspect of RetNet is designed similarly to self-attention in Transformers, enabling us to train models efficiently on GPUs.

- **Recurrent Representation:** This representation facilitates efficient inference with O(1) complexity in terms of memory and computational requirements. It significantly reduces deployment costs and latency, and simplifies implementation by eliminating the need for key-value cache strategies often used in traditional models.

- **Chunkwise Recurrent Representation:** This third paradigm addresses the challenge of long-sequence modeling. It achieves this by encoding each local block in parallel for computational speed while simultaneously and recurrently encoding global blocks to optimize GPU memory usage.

During the training phase, the approach incorporates both parallel and chunkwise recurrent representations, optimizing GPU usage for fast computation and being particularly effective for long sequences in terms of computational efficiency and memory use. 
For the inference phase, the recurrent representation is used, favoring autoregressive decoding. This method efficiently reduces memory usage and latency, maintaining equivalent performance outcomes.
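
To illustrate this duality, here is a minimal sketch of single-head retention with a scalar decay; it omits the xPos rotation, the multi-scale (per-head) decay values, and the normalization used in the paper, and the names and shapes are illustrative. The parallel form computes all positions at once for training, while the recurrent form carries a fixed-size state for O(1)-per-token inference, and the two agree.

```python
import torch

def parallel_retention(q, k, v, gamma):
    # Parallel form: (Q K^T ⊙ D) V with decay matrix D[n, m] = gamma**(n - m) for m <= n
    seq_len = q.shape[0]
    n = torch.arange(seq_len).unsqueeze(1)
    m = torch.arange(seq_len).unsqueeze(0)
    decay = (gamma ** (n - m).float()) * (n >= m)
    return (q @ k.T * decay) @ v

def recurrent_retention(q, k, v, gamma):
    # Recurrent form: a single d x d state is updated per step (O(1) memory per token)
    d = q.shape[1]
    state = torch.zeros(d, d)
    outputs = []
    for t in range(q.shape[0]):
        state = gamma * state + k[t].unsqueeze(1) @ v[t].unsqueeze(0)
        outputs.append(q[t].unsqueeze(0) @ state)
    return torch.cat(outputs, dim=0)

torch.manual_seed(0)
q, k, v = torch.randn(3, 8, 16).unbind(0)   # (seq_len=8, d=16) each
out_parallel = parallel_retention(q, k, v, gamma=0.9)
out_recurrent = recurrent_retention(q, k, v, gamma=0.9)
print(torch.allclose(out_parallel, out_recurrent, atol=1e-5))  # True: both forms agree
```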

## From Language to Image
### RMT
The paper [RMT: Retentive Networks Meet Vision Transformers](https://arxiv.org/abs/2309.11523) proposes a new vision backbone inspired by the RetNet architecture. The authors propose RMT to enhance the Vision Transformer (ViT) by introducing explicit spatial priors and reducing computational complexity, drawing inspiration from the RetNet's parallel representation. 
This includes adapting the RetNet’s temporal decay to spatial domains and using a [Manhattan distance-based](https://en.wikipedia.org/wiki/Taxicab_geometry) spatial decay matrix, along with an attention decomposition form, to improve efficiency and scalability in vision tasks.

- **Manhattan Self-Attention (MaSA)**
![Attention Comparison](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Attention%20Comparison.png)
MaSA combines the self-attention mechanism with a two-dimensional bidirectional spatial decay matrix based on the Manhattan distance between tokens. This matrix decreases attention scores for tokens farther away from a target token, allowing the model to perceive global information while varying attention based on distance (a minimal sketch of such a decay matrix is given after this list).

- **Decomposed Manhattan Self-Attention (MaSAD)**
![MaSAD](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MaSAD.png)
This mechanism decomposes Self-Attention in images along horizontal and vertical axes of the image, maintaining the spatial decay matrix without losing prior information. This decomposition allows the Manhattan Self-Attention (MaSA) to model global information efficiently with linear complexity, while preserving the original MaSA's receptive field shape.
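
Below is a minimal sketch of a Manhattan-distance spatial decay matrix; the grid size, decay rate, and how it would modulate the attention map are illustrative assumptions rather than the exact RMT implementation.

```python
import torch

def manhattan_decay_matrix(height, width, gamma=0.9):
    # Grid coordinates of every token in an H x W feature map
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (H*W, 2)
    # Pairwise Manhattan (L1) distance between token positions
    dist = torch.cdist(coords, coords, p=1)                            # (H*W, H*W)
    # Bidirectional decay: attention shrinks as tokens get farther apart
    return gamma ** dist

decay = manhattan_decay_matrix(4, 4)
print(decay.shape)   # torch.Size([16, 16])
print(decay[0, :5])  # decay of token 0 toward its first few neighbors
```

In MaSA, a matrix of this kind modulates the attention scores elementwise, so nearby tokens contribute more than distant ones.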

However, unlike the original RetNet, which trains with the parallel representation and runs inference with the recurrent representation, RMT performs both with the MaSA mechanism. The authors compare MaSA against RetNet's other representations and show that MaSA achieves the best throughput with the highest accuracy.
![MaSA vs Retention](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/MaSA%20vs%20Retention.png)

### ViR
![ViR](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/ViR.png)

Another work inspired by the RetNet architecture is the ViR, as discussed in the paper [ViR: Vision Retention Networks](http://arxiv.org/abs/2310.19731). In this architecture, the authors propose a general vision backbone with a redesigned retention mechanism. They demonstrate that ViR can scale favorably to larger image resolutions in terms of image throughput and memory consumption by leveraging the dual parallel and recurrent properties of the retentive network.

The overall architecture of ViR is quite similar to that of ViT, except that it replaces the Multi-Head Attention (MHA) with Multi-Head Retention (MHR). This MHR mechanism is free of any gating function and can be switched between parallel, recurrent, or chunkwise (a hybrid between parallel and recurrent) modes. Another difference in ViR is that the positional embedding is first added to the patch embedding, and then the [class] token is appended.

## Further Reading

- [RetNet's official repo](https://github.com/microsoft/torchscale/blob/main/torchscale/architecture/retnet.py)
- [RetNet's Multi-Scale Retention official repo](https://github.com/microsoft/torchscale/blob/main/torchscale/component/multiscale_retention.py)
- [Retentive Networks (RetNet) Explained: The much-awaited Transformers-killer is here](https://medium.com/ai-fusion-labs/retentive-networks-retnet-explained-the-much-awaited-transformers-killer-is-here-6c17e3e8add8)
- [Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)](https://www.youtube.com/watch?v=ec56a8wmfRk)
- [RMT's official repo](https://github.com/qhfan/RMT)
- [ViR's official repo](https://github.com/NVlabs/ViR)

### MobileNet
https://huggingface.co/learn/computer-vision-course/unit2/cnns/mobilenet.md

# MobileNet

MobileNet is a type of neural network architecture designed for mobile devices. It was developed by Google's research team and first introduced in 2017. The primary goal of MobileNet is to provide high-performance, low-latency image classification and object detection on smartphones, tablets, and other resource-constrained devices.  
  
MobileNet achieves this by using depthwise separable convolutions, which are a more efficient alternative to standard convolutions. Depthwise separable convolutions break down the computation into two separate steps: depthwise convolution followed by pointwise convolution. This results in a significant reduction of parameters and computational complexity, allowing MobileNet to run efficiently on mobile devices.  

## Convolution Types on MobileNet

By replacing regular convolutional layers with these depthwise separable convolutions and pointwise convolutions, MobileNet achieves high accuracy while minimizing computational overhead, making it well-suited for mobile devices and other resource-limited platforms. There are two types of convolutions used in MobileNet:

### Depthwise Separable Convolutions

In traditional convolutional layers, each filter applies its weight across all input channels simultaneously. Depthwise separable convolutions break this down into two steps: depthwise convolution followed by pointwise convolution.  

The depthwise step performs a convolution separately for each channel (a single color or feature) in the input using a small filter (usually 3x3). The output of this step has the same spatial size and the same number of channels as the input, since each channel is filtered independently.

### Pointwise Convolutions

This type of convolution applies a single filter (usually 1x1) across all channels in both input and output layers. It has fewer parameters than regular convolution and can be seen as an alternative to fully connected layers, making it suitable for mobile devices that have limited computational resources.

After depthwise convolution, this step combines the filtered outputs from previous steps using another 1x1 convolutional layer. This operation effectively aggregates the features learned by the depthwise convolutions into a smaller set of features, reducing the overall complexity while retaining important information.

### Why Do We Use These Convolutions Instead of Regular Convolutions?
To understand this better, let's simplify things:

#### Regular Convolutions: A Big, All-in-One Filter

Imagine you have a big, thick filter (like a sponge with many layers). This filter is applied over the entire image. It processes all parts of the image and all its features (like colors) at once. This requires a lot of work (computation) and a big filter (memory).

#### Depthwise Separable Convolutions - Two-Step, Lighter Process:
MobileNet simplifies this process. It splits the big filter into two smaller, simpler steps:

 - **Step 1 - Depthwise Convolution:** First, it uses a thin filter (like a single layer of a sponge) for each feature of the image separately (like processing each color individually). This is less work because each filter is small and simple.

 - **Step 2 - Pointwise Convolution:** Then, it uses another small filter (just a tiny dot) to mix these features back together. This step is like taking a summary of what the first filters found.

#### What are these steps about?

MobileNet uses these two smaller steps instead of one big step; it's like doing a lighter version of the work needed in regular convolutions. This is more efficient, especially on devices that aren't very powerful, like smartphones.

With smaller filters, MobileNet doesn't need as much memory. It's like needing a smaller box to store all your tools, making it easier to fit into smaller devices.
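
As a rough sanity check of the savings, the short sketch below compares the parameter count of a regular 3x3 convolution with a depthwise-plus-pointwise pair for the same input and output channels; the channel sizes are arbitrary example values.

```python
import torch.nn as nn

in_ch, out_ch = 128, 256

# Regular 3x3 convolution: one big filter that mixes space and channels at once
regular = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

# Depthwise separable version: per-channel 3x3 filtering, then 1x1 channel mixing
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(regular))                       # ~295K parameters
print(count(depthwise) + count(pointwise))  # ~34K parameters, roughly 8-9x fewer
```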

### How Do 1x1 Convolutions Work Compared to Normal Convolutions?

#### Normal Convolutions

-   Normal convolutions use a larger kernel (like 3x3 or 5x5) to look at a group of pixels in the image at once. It's like observing a small patch of a picture to understand a part of the scene.
-   These convolutions are good at understanding features like edges, corners, and textures by analyzing how pixels are arranged next to each other.

#### 1x1 Convolutions

-  A 1x1 convolution looks at one pixel at a time. It doesn't try to understand the arrangement of neighboring pixels.
-  Even though it's looking at one pixel, it considers all the information from different channels (like the RGB channels in a color image). It combines these channels to create a new version of that pixel.
-  The 1x1 convolution can either increase or decrease the number of channels. This means it can simplify the information (by reducing channels) or create more complex information (by increasing channels).

#### Key Differences

-   **Area of Focus:** Normal convolutions analyze a set of pixels together to understand patterns, whereas 1x1 convolutions focus on individual pixels, combining the information from different channels.
-   **Purpose:** Normal convolutions are used for detecting patterns and features in an image. In contrast, 1x1 convolutions are mainly used to alter the channel depth, adjusting the complexity of the information for subsequent layers and keeping the network efficient on resource-constrained devices.
 
MobileNet also employs techniques like channel-wise linear bottleneck layers, which improve model accuracy while reducing the number of parameters. The architecture is designed with optimizations for various hardware platforms, including CPUs, GPUs, and even specialized hardware such as Google's Tensor Processing Units (TPUs).

### Channel-wise Linear Bottleneck Layers
Channel-wise linear bottleneck layers help to further reduce the number of parameters and computational cost while maintaining high accuracy in image classification tasks.

A channel-wise linear bottleneck layer consists of three main operations applied sequentially:  
1. **Depthwise convolution:** This step performs a convolution separately for each channel (a single color or feature) in the input using a small filter (usually 3x3). The output has the same spatial size and the same number of channels as the input.  
2. **Batch normalization:** This operation normalizes the activation values across each channel, helping to stabilize the training process and improve generalization performance.  
3. **Activation function:** Typically, a ReLU (Rectified Linear Unit) activation function is used after batch normalization to introduce non-linearity in the network.

### What does ReLU do?

Some problems may occur during training. We will explain them first, and then explain how ReLU helps with these problems.

#### Vanishing Gradient Problem

In deep neural networks, especially during backpropagation, the vanishing gradient problem can occur. This happens when gradients (which are used to update the network's weights) become very small as they are passed back through the network's layers. If these gradients become too small, they "vanish," making it hard for the network to learn and adjust its weights effectively.

Because ReLU has a linear, non-saturating form for positive values (it simply outputs the input if it's positive), it ensures that the gradients do not become too small, allowing for faster learning and more effective weight adjustments in the network.

#### Non-Linearity

Without non-linearity, a neural network, no matter how many layers it has, would function as a linear model, incapable of learning complex patterns. 

Non-linear functions like ReLU enable neural networks to capture and learn complex relationships in the data.

### Inference

You can run inference with a pretrained MobileNet checkpoint using the Hugging Face transformers library as shown below:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# initialize processor and model
preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

# preprocess the inputs
inputs = preprocessor(images=image, return_tensors="pt")

# get the output and the class labels
outputs = model(**inputs)
logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

```
Predicted class: tabby, tabby cat
```

### Example Implementation using PyTorch

You can find an example implementation of MobileNet for PyTorch below:

```python  
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels, stride):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_channels,
            in_channels,
            kernel_size=3,
            stride=stride,
            padding=1,
            groups=in_channels,
        )
        self.pointwise = nn.Conv2d(
            in_channels, out_channels, kernel_size=1, stride=1, padding=0
        )

    def forward(self, x):
        x = self.depthwise(x)
        x = self.pointwise(x)
        return x

class MobileNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)

        # MobileNet body
        self.dw_conv2 = DepthwiseSeparableConv(32, 64, 1)
        self.dw_conv3 = DepthwiseSeparableConv(64, 128, 2)
        self.dw_conv4 = DepthwiseSeparableConv(128, 128, 1)
        self.dw_conv5 = DepthwiseSeparableConv(128, 256, 2)
        self.dw_conv6 = DepthwiseSeparableConv(256, 256, 1)
        self.dw_conv7 = DepthwiseSeparableConv(256, 512, 2)

        # 5 depthwise separable convolutions with stride 1
        self.dw_conv8 = DepthwiseSeparableConv(512, 512, 1)
        self.dw_conv9 = DepthwiseSeparableConv(512, 512, 1)
        self.dw_conv10 = DepthwiseSeparableConv(512, 512, 1)
        self.dw_conv11 = DepthwiseSeparableConv(512, 512, 1)
        self.dw_conv12 = DepthwiseSeparableConv(512, 512, 1)

        self.dw_conv13 = DepthwiseSeparableConv(512, 1024, 2)
        self.dw_conv14 = DepthwiseSeparableConv(1024, 1024, 1)

        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)

        x = self.dw_conv2(x)
        x = F.relu(x)
        x = self.dw_conv3(x)
        x = F.relu(x)
        x = self.dw_conv4(x)
        x = F.relu(x)
        x = self.dw_conv5(x)
        x = F.relu(x)
        x = self.dw_conv6(x)
        x = F.relu(x)
        x = self.dw_conv7(x)
        x = F.relu(x)

        x = self.dw_conv8(x)
        x = F.relu(x)
        x = self.dw_conv9(x)
        x = F.relu(x)
        x = self.dw_conv10(x)
        x = F.relu(x)
        x = self.dw_conv11(x)
        x = F.relu(x)
        x = self.dw_conv12(x)
        x = F.relu(x)

        x = self.dw_conv13(x)
        x = F.relu(x)
        x = self.dw_conv14(x)
        x = F.relu(x)

        x = self.avg_pool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x

# Create the model
mobilenet = MobileNet(num_classes=1000)
print(mobilenet)
```

You can also find a pretrained MobileNet model checkpoint on this HuggingFace [link](https://huggingface.co/google/mobilenet_v2_1.0_224).

### GoogLeNet
https://huggingface.co/learn/computer-vision-course/unit2/cnns/googlenet.md

# GoogLeNet

In this chapter, we will go through a convolutional architecture called GoogLeNet.

## Overview

The Inception architecture, a convolutional neural network (CNN) designed for tasks in computer vision such as classification and detection, stands out due to its efficiency. It contains fewer than 7 million parameters and is significantly more compact than its predecessors, being 9 times smaller than AlexNet and 22 times smaller than VGG16. This architecture gained recognition in the ImageNet 2014 challenge, where Google's adaptation, named GoogLeNet (a tribute to LeNet), set new benchmarks in performance while utilizing fewer parameters compared to previous leading methods.

### Architectural Innovations

Before the advent of the Inception architecture, models like AlexNet and VGG demonstrated the benefits of deeper network structures. However, deeper networks typically entail more computational steps and can lead to issues such as overfitting and the vanishing gradient problem. The Inception architecture offers a solution, enabling the training of complex CNNs with a reduced count of floating-point parameters.

#### The Inception "Network In Network" Module 

In prior networks, such as AlexNet or VGG, the fundamental block is the convolution layer itself. However, Lin et al. (2013) introduced the concept of Network In Network, arguing that a single convolution is not necessarily the right fundamental building block and that it ought to be more complex. Inspired by that, the Inception authors designed a more complex building block called the Inception Module, aptly named after the famous movie "Inception" (a dream within a dream).

The Inception Module applies convolution filters of different kernel sizes to extract features at multiple scales. For any input feature map, it applies a \\(1 \times 1\\) convolution, a \\(3 \times 3\\) convolution, and a \\(5 \times 5\\) convolution in parallel. In addition to the convolutions, a max pooling operation is also applied. All four operations use padding and stride such that their outputs have the same spatial dimensions. These features are concatenated and form the input to the next stage. See Figure 1.

![inception_naive](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/inception_naive.png)

Figure 1: Naive Inception Module

As we can see, applying multiple convolutions at multiple scales with bigger kernel sizes, like \\(5 \times 5\\), can increase the number of parameters drastically. This problem becomes more pronounced as the input feature size (channel size) increases. So as we go deeper in the network, stacking these Inception Modules, the computation increases drastically. The simple solution is to reduce the number of features wherever the computational requirements grow, and the major pain point is the convolution layers. The feature dimension is therefore reduced by a computationally inexpensive \\(1 \times 1\\) convolution just before the \\(3 \times 3\\) and \\(5 \times 5\\) convolutions. Let's see this with an example.

We want to convert a feature map of \\( S \times S \times 128 \\) to \\( S \times S \times 256 \\) via a 5x5 convolution. The number of parameters (excluding biases) is 5\*5\*128\*256 = 819,200. However, if we reduce the feature dimension first by a \\(1 \times 1\\) convolution to 64, then the number of parameters(excluding biases) is \\( 1\times 1\times 128\times 64 + 5\times 5\times 64\times 256 = 8,192 + 409,600 = 417,792 \\). That means the number of parameters was reduced by almost half!
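
You can verify these numbers with a couple of `nn.Conv2d` layers (biases excluded, as in the calculation above):

```python
import torch.nn as nn

count = lambda m: sum(p.numel() for p in m.parameters())

# Direct 5x5 convolution: 128 -> 256 channels
direct = nn.Conv2d(128, 256, kernel_size=5, padding=2, bias=False)

# 1x1 reduction to 64 channels, then 5x5 convolution up to 256 channels
reduce = nn.Conv2d(128, 64, kernel_size=1, bias=False)
conv5x5 = nn.Conv2d(64, 256, kernel_size=5, padding=2, bias=False)

print(count(direct))                   # 819200
print(count(reduce) + count(conv5x5))  # 417792
```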

We would also want to reduce the output features of max pooling before concatenating with the output feature map. So, we add one more \\( 1\times 1 \\) convolution after the max-pooling layer. We also add a ReLU activation after each \\( 1\times 1 \\) convolution increasing non-linearity and complexity of the module. See Figure 2.

![inception_reduced](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/inception_reduced.png)

Figure 2: Inception Module

Also, because the convolutions at multiple scales operate in parallel, the module performs more operations without making the network deeper, which helps mitigate the vanishing gradient problem.

#### Average Pooling

In prior networks, like AlexNet or VGG, the final layers would be a few fully connected layers. These fully connected layers, due to their large number of units would contribute to most of the parameters in a network. For example, 89% of the parameters of VGG16 are in the final three fully connected layers. 95% of parameters in AlexNet are in the final fully connected layers. This need can be attributed to the premise that a Convolutional layer is not necessarily complex enough. 

However, with an Inception block at our disposal, we do not need fully connected layers and a simple average pooling along the spatial dimensions should be enough. This was also derived from the Network in Network paper. However, GoogLeNet included one fully connected layer. They reported an increase of 0.6% in top-1 accuracy. 

GoogLeNet has only 15% of the parameters in the fully connected layers.

#### Auxiliary Classifiers

With the introduction of compute saving \\( 1 \times 1 \\) convolution and the replacement of multiple fully connected layers with average pooling, the parameters of this network are reduced significantly, which means we can add more layers and go deeper into the network. However, stacking layers can cause the problem of vanishing gradient, where the gradients get smaller and close to zero while propagating back to the initial layers of the network.

The paper introduces auxiliary classifiers: a few small classifiers branch off intermediate layers, and their losses are added to the total loss (with a smaller weight). This ensures that the layers close to the input also receive gradients of decent magnitude.

The auxiliary classifier consists of 
- An average pooling layer with \\( 5 \times 5 \\) filter size and stride 3.
- A \\( 1 \times 1 \\) convolution with 128 filters for dimension reduction and rectified linear activation.
- A fully connected layer with 1024 units and rectified linear activation.
- A dropout layer with 70% ratio of dropped outputs.
- A linear layer with softmax loss as the classifier.

These auxiliary classifiers are removed at inference time. However, minimal gains are achieved from using auxiliary classifiers (0.5%).

![googlenet_aux_clf](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/googlenet_auxiliary_classifier.jpg)

Figure 3: An Auxiliary Classifier

### Architecture - GoogLeNet

The complete architecture of GoogLeNet is shown in the figure below. All convolutions, including those inside the inception modules, use ReLU activation. The network starts with two convolution and max-pooling blocks. This is followed by a block of two inception modules (3a and 3b) and a max pooling, then a block of five inception modules (4a, 4b, 4c, 4d, 4e) followed by another max pooling. The auxiliary classifiers branch off the outputs of 4a and 4d. Two more inception modules follow (5a and 5b). After this, average pooling, dropout, and a final fully connected classification layer are used.

![googlenet_arch](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/googlenet_architecture.png)
Figure 4: Complete GoogLeNet Architecture

### Code 

```python
import torch
import torch.nn as nn

class BaseConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(BaseConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, **kwargs)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.relu(x)
        return x

class InceptionModule(nn.Module):
    def __init__(self, in_channels, n1x1, n3x3red, n3x3, n5x5red, n5x5, pool_proj):
        super(InceptionModule, self).__init__()

        self.b1 = BaseConv2d(in_channels, n1x1, kernel_size=1)

        self.b2 = nn.Sequential(
            BaseConv2d(in_channels, n3x3red, kernel_size=1),
            BaseConv2d(n3x3red, n3x3, kernel_size=3, padding=1),
        )

        self.b3 = nn.Sequential(
            BaseConv2d(in_channels, n5x5red, kernel_size=1),
            BaseConv2d(n5x5red, n5x5, kernel_size=5, padding=2),
        )

        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            BaseConv2d(in_channels, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        y1 = self.b1(x)
        y2 = self.b2(x)
        y3 = self.b3(x)
        y4 = self.b4(x)
        return torch.cat([y1, y2, y3, y4], 1)

class AuxiliaryClassifier(nn.Module):
    def __init__(self, in_channels, num_classes, dropout=0.7):
        super(AuxiliaryClassifier, self).__init__()
        self.pool = nn.AvgPool2d(5, stride=3)
        self.conv = BaseConv2d(in_channels, 128, kernel_size=1)
        self.relu = nn.ReLU(True)
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(2048, 1024)
        self.dropout = nn.Dropout(dropout)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.pool(x)
        x = self.conv(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        return x

class GoogLeNet(nn.Module):
    def __init__(self, use_aux=True):
        super(GoogLeNet, self).__init__()

        self.use_aux = use_aux
        ## block 1
        self.conv1 = BaseConv2d(3, 64, kernel_size=7, stride=2, padding=3)
        self.lrn1 = nn.LocalResponseNorm(5, alpha=0.0001, beta=0.75)
        self.maxpool1 = nn.MaxPool2d(3, stride=2, padding=1)

        ## block 2
        self.conv2 = BaseConv2d(64, 64, kernel_size=1)
        self.conv3 = BaseConv2d(64, 192, kernel_size=3, padding=1)
        self.lrn2 = nn.LocalResponseNorm(5, alpha=0.0001, beta=0.75)
        self.maxpool2 = nn.MaxPool2d(3, stride=2, padding=1)

        ## block 3
        self.inception3a = InceptionModule(192, 64, 96, 128, 16, 32, 32)
        self.inception3b = InceptionModule(256, 128, 128, 192, 32, 96, 64)
        self.maxpool3 = nn.MaxPool2d(3, stride=2, padding=1)

        ## block 4
        self.inception4a = InceptionModule(480, 192, 96, 208, 16, 48, 64)
        self.inception4b = InceptionModule(512, 160, 112, 224, 24, 64, 64)
        self.inception4c = InceptionModule(512, 128, 128, 256, 24, 64, 64)
        self.inception4d = InceptionModule(512, 112, 144, 288, 32, 64, 64)
        self.inception4e = InceptionModule(528, 256, 160, 320, 32, 128, 128)
        self.maxpool4 = nn.MaxPool2d(3, stride=2, padding=1)

        ## block 5
        self.inception5a = InceptionModule(832, 256, 160, 320, 32, 128, 128)
        self.inception5b = InceptionModule(832, 384, 192, 384, 48, 128, 128)

        ## auxiliary classifier
        if self.use_aux:
            self.aux1 = AuxiliaryClassifier(512, 1000)
            self.aux2 = AuxiliaryClassifier(528, 1000)

        ## block 6
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.dropout = nn.Dropout(0.4)
        self.fc = nn.Linear(1024, 1000)

    def forward(self, x):
        ## block 1
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.lrn1(x)

        ## block 2
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.lrn2(x)
        x = self.maxpool2(x)

        ## block 3
        x = self.inception3a(x)
        x = self.inception3b(x)
        x = self.maxpool3(x)

        ## block 4
        x = self.inception4a(x)
        if self.use_aux:
            aux1 = self.aux1(x)
        x = self.inception4b(x)
        x = self.inception4c(x)
        x = self.inception4d(x)
        if self.use_aux:
            aux2 = self.aux2(x)
        x = self.inception4e(x)
        x = self.maxpool4(x)

        ## block 5
        x = self.inception5a(x)
        x = self.inception5b(x)

        ## block 6
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.dropout(x)
        x = self.fc(x)

        if self.use_aux:
            return x, aux1, aux2
        else:
            return x
```

### Transfer Learning
https://huggingface.co/learn/computer-vision-course/unit2/cnns/intro-transfer-learning.md

# Transfer Learning
Before we can dig into the details of what transfer learning and fine-tuning mean for neural networks, let’s take musical instruments as an example. The theremin is an electronic musical instrument that makes an eerie sound, commonly associated with thrillers and horror movies. It is very hard to play because it requires you to move both your hands in the air between two antennae to control the pitch and volume. So hard that someone invented an instrument called the Tannerin (also known as the slide Theremin or Electric Theremin) that makes a similar sound but is easier to play. The player moves a slide at the side of the box to the desired frequency to create a pitch. There is still a learning curve to play it... Well, except if you play the trombone. When you play the trombone, you already know how to use the tannerin slide because it works like the telescoping slide mechanism on the trombone. Below, you see from left to right: the theremin, the tannerin, and the trombone.

![Theremin, Tannerin, and the Trombone](
https://huggingface.co/datasets/hf-vision/course-assets/resolve/d0096005da7fe2eb3bfbd3d0047e9ac7bd499cf0/transfer_learning.png)

In this case, the trombone player has effectively used what he learned playing the trombone to play the tannerin. He transfers what he learned from one instrument to another. We can use this concept in neural networks as well. What a neural network learns while classifying dogs or cats can be used to recognize other animals. This works because of the way networks learn features: the learned features used to classify a dog can also help classify a horse. We exploit what the model already knows to do different tasks.

Transfer learning requires that the previous knowledge is “useful” for the other task. Thus, the features we are trying to explore need to be general enough for the new application. If we go back to our musical instrument example, playing the saxophone instead of the trombone is not as helpful in learning how to play the tannerin. The main skill that gives the trombone player its head start is the intuitive understanding of where the slide should be. 

Yet, the saxophone player is not starting from zero. He is familiar with things like music theory, rhythm, and timing. These general skills give them an edge over someone who never played any instrument at all.  The act of playing an instrument gives all players a general set of skills that are useful across instruments. This generalization across domains (in our example, musical instruments) is what makes the model learn much faster as opposed to training from zero.

## Transfer Learning and Fine-tuning

Let's make a distinction between the concepts we are talking about. The trombone player needs no training to play the tannerin. He already knows how to do it unbeknownst to him. The saxophone player needs some training to fine-tune his skills to play the tannerin. In deep learning terms, the trombone player uses a model off-the-shelf. This is called transfer learning. The training of a model that needs more time to learn, like our saxophone player, is called fine-tuning. 

When fine-tuning a model, we do not need to train all parts. We can train just the underperforming ones. Let’s take the example of a computer vision model that has three parts: [feature extraction, feature enhancement, and a final task](https://huggingface.co/docs/transformers/main/main_classes/backbones). In this case, you can use the same feature extraction and feature enhancement without any retraining.  So, we focus on retraining only the final task. 

If the results after fine-tuning the final task are not satisfactory, we still do not need to retrain the entire feature extraction part.  A good compromise is to retrain only the weights of the top layers. In convolutional networks, the higher up a layer is, the more characteristic its features are to the task and dataset. In other words, the features in the first convolutional layers are more generic, while the last layers are more specific. With our player example, this is the equivalent of not wasting time trying to explain music theory to a seasoned saxophonist, but instead just teaching him how to change pitch in the tannerin.
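
As a concrete sketch of this idea with the transformers library (the checkpoint name and the number of labels are just illustrative choices), we can reuse a pretrained backbone, replace the classification head for our new task, and freeze everything except that head:

```python
from transformers import AutoModelForImageClassification

# Load a pretrained backbone and attach a fresh head for a hypothetical 10-class task
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/resnet-50",
    num_labels=10,
    ignore_mismatched_sizes=True,  # the original 1000-way ImageNet head is replaced
)

# Freeze the feature-extraction backbone; only the new head will be trained
for param in model.base_model.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the classifier parameters remain trainable
```

If the results are still not good enough, you can unfreeze the top layers of the backbone as well, which corresponds to retraining the more task-specific features discussed above.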

## Considerations on Transfer Learning

Our example also gives us an interesting nuance. The theremin was too hard to play, so they invented an easier instrument that produced the same sound. The output is nearly the same but needs a lot less training time. For computer vision, we might first do object detection to see where a dog is within an image, and then build a classifier to tell us which breed of dog instead of trying to build a classifier right away.

Finally, transfer learning is not a universal performance enhancer. In our example, playing an instrument might help us learn another one, but it might also hinder progress. There are patterns and vices from one instrument that can slow down progress on another. If these vices are deeply entrenched, a complete novice might surpass the experienced player given the same amount of training. If your players are stuck in their vices, it might be time to hire new ones.

## Transfer learning and Self-training

Transfer learning shines especially when there is not enough labeled data to retrain a model from scratch. Using our example, we can think that given enough time, a player who attends just a few lessons can learn on their own by playing the instrument without the constant supervision of their professor. Learning, partially or entirely, on your own in deep learning is called self-training. It allows us to train the model using both labeled (the lessons) and unlabeled (the players on their own) data to learn the task.  

Although we will not discuss the concept of self-training in this section, we mention it here as a resource to you because when transfer learning does not work and labeled data is scarce, [self-training can be incredibly helpful](https://doi.org/10.48550/arXiv.2006.06882). These concepts are also not mutually exclusive, a seasoned player might need just a couple of lessons to become autonomous in a new instrument training without supervision and, as it turns out, so do our deep learning models.

## Resources

- To understand why transfer learning is cheaper, faster, and greener, [you can go check part one of the NLP on the course.](https://huggingface.co/learn/nlp-course/chapter1/4?fw=pt#transfer-learning)

- [To check out the list of available pre-trained models.](https://huggingface.co/models)

### ConvNext - A ConvNet for the 2020s (2022)
https://huggingface.co/learn/computer-vision-course/unit2/cnns/convnext.md

# ConvNext - A ConvNet for the 2020s (2022)

## Introduction
Recently, Vision Transformers (ViTs) quickly superseded pure CNN models as the new state of the art for image recognition.
Intriguingly, research has found that CNNs could adopt a significant portion of the design choices in Vision Transformers.
ConvNext represents a significant improvement to pure convolution models by incorporating techniques inspired by ViTs and achieving results comparable to ViTs in accuracy and scalability.

## Key Improvements
The authors of the ConvNeXt paper start building the model with a regular ResNet (ResNet-50), then modernize and improve the architecture step by step to imitate the hierarchical structure of Vision Transformers.
The key improvements are:
- Training techniques
- Macro design
- ResNeXt-ify
- Inverted Bottleneck
- Large Kernel Sizes
- Micro Design

We will go through each of the key improvements.
These designs are not novel in themselves. However, you can learn how researchers adapt and modify designs systematically to improve existing models.
To show the effectiveness of each improvement, we will compare the model's accuracy before and after the modification on ImageNet-1K.

![Block Comparison](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/block_comparison.png)

## Training Techniques
The researchers first discerned that, while architectural design choices are crucial, the quality of the training procedure also plays a pivotal role in influencing performance outcomes.
Inspired by DeiT and Swin Transformers, ConvNext closely adapts their training techniques. Some of the notable changes are:
- Epochs: Extending the epochs from the original 90 epochs to 300 epochs.
- Optimizer: Using AdamW optimizer instead of Adam optimizer, which differs in how it handles weight decay.
- Data augmentation: Mixup (generates a weighted combination of random image pairs), CutMix (cuts part of an image and replaces it with a patch from another image), RandAugment (applies a series of random augmentations such as rotation, translation, and shear), and Random Erasing (randomly selects a rectangular region in an image and erases its pixels with random values) are used to increase the training data.
- Regularization: Using Stochastic Depth and Label Smoothing as regularization techniques.

Modifying these training procedures has improved ResNet-50's accuracy from 76.1% to 78.8%.

## Macro Design
Macro design refers to the high-level structural decisions and considerations made in a system or model, such as the arrangement of layers, the distribution of computational load across different stages, and the overall structure.
Examining Swin Transformers' macro network, the authors have identified two noteworthy design considerations that benefit the performance of ConvNext.

### The stage compute ratio
The stage compute ratio refers to the distribution of computational load among the stages of a neural network model.
ResNet-50 has four main stages with (3, 4, 6, 3) blocks, which means it has a compute ratio of 3:4:6:3.
To follow Swin Transformer's compute ratio of 1:1:3:1, the researchers adjusted the number of blocks on each stage of the ResNet from (3, 4, 6, 3) to (3, 3, 9, 3) instead. 
Changing the stage compute ratio improves the model accuracy from 78.8% to 79.4%.

### Changing stem to Patchify
Typically, at the start of ResNet's architecture, the input is fed to a stem 7×7 convolution layer with stride 2, followed by a max pool, used to downsample the image by a factor of 4.
However, the authors discovered that substituting the stem with a convolutional layer featuring a 4×4 kernel size and a stride of 4 is more effective, effectively convolving over non-overlapping 4×4 patches.
Patchify serves the same purpose of downsampling the image by a factor of 4 while reducing the number of layers.
This Patchifying step slightly improves the model accuracy from 79.4% to 79.5%.
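
The sketch below contrasts the two stems; the channel counts (64 for the ResNet stem, 96 for the patchify stem) follow the ConvNeXt-T configuration from the paper, and both downsample a 224x224 input by a factor of 4.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)

# ResNet-style stem: 7x7 conv with stride 2 followed by a 3x3 max pool with stride 2
resnet_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# ConvNeXt-style "patchify" stem: a single conv over non-overlapping 4x4 patches
patchify_stem = nn.Conv2d(3, 96, kernel_size=4, stride=4)

print(resnet_stem(x).shape)    # torch.Size([1, 64, 56, 56])
print(patchify_stem(x).shape)  # torch.Size([1, 96, 56, 56])
```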

## ResNeXt-ify
ConvNext also adopts the idea of ResNeXt, explained in the previous section. 
ResNeXt demonstrates an improved trade-off between the number of floating-point operations (FLOPs) and accuracy compared to a standard ResNet.
By using depthwise convolution and 1 × 1 convolutions, we would have the separation of spatial and channel mixing - a characteristic also found in vision Transformers.
On its own, using depthwise convolutions reduces both the number of FLOPs and the accuracy.
However, by increasing the channels from 64 to 96, the accuracy is higher than the original ResNet-50 while maintaining a similar number of FLOPs.
This modification improves the model accuracy from 79.5% to 80.5%.

## Inverted Bottleneck
One common idea in every Transformer block is the usage of an inverted bottleneck, where the hidden layers are much bigger than the input dimension. 
This idea has also been used and popularized in Computer Vision by MobileNetV2.
ConvNext adopts this idea, having input layers with 96 channels and increasing the hidden layers to 384 channels.
By using this technique, it improves the model accuracy from 80.5% to 80.6%.

![Inverted Bottleneck Comparison](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/inverted_bottleneck.png)

## Large Kernel Sizes
A key factor contributing to the exceptional performance of Vision Transformer is its non-local self-attention, allowing for a broader receptive field of image features.
In Swin Transformers, the attention block window size is set to at least 7×7, surpassing the 3x3 kernel size of ResNeXt. 
However, before adjusting the kernel size, it is necessary to reposition the depthwise convolution layer, as shown in the image below. 
This repositioning enables the 1x1 layers to efficiently handle computational tasks, while the depthwise convolution layer functions as a more non-local receptor.
With this, the network can harness the advantages of incorporating bigger kernel-sized convolutions. 
Implementing a 7x7 kernel size maintains the accuracy at 80.6% but reduces the overall FLOPs efficiency of the model.

![Moving up the Depth Conv Layer](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/depthwise_moveup.png)

## Micro Design
In addition to the modifications stated, the author adds some micro-design changes to the model.
Micro design refers to low-level structural decisions such as the choice of activation functions and layer details.
Some of the notable micro changes are:
- Activation: Replacing ReLU activation with GELU (Gaussian Error Linear Unit) and eliminating all GELU layers from the residual block except for one between two 1×1 layers.
- Normalization: Fewer normalization layers by removing two BatchNorm layers and substituting BatchNorm with LayerNorm, leaving only one LayerNorm layer before the conv 1 × 1 layers.
- Downsampling Layer: Adding a separate downsampling layer between ResNet stages.
These final modifications improve the ConvNext accuracy from 80.6% to 82.0%.
The final ConvNext model exceeds Swin Transformer's accuracy of 81.3%.
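
Putting the pieces together, here is a simplified sketch of the resulting ConvNeXt block (7x7 depthwise convolution, a single LayerNorm, an inverted 1x1 bottleneck with one GELU, and a residual connection). It omits LayerScale and stochastic depth, and the channel width of 96 is just the first-stage example used above.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    # Simplified ConvNeXt block: depthwise 7x7 conv -> LayerNorm ->
    # inverted bottleneck (dim -> 4*dim -> dim) with one GELU -> residual add
    def __init__(self, dim=96):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)            # normalizes over the channel dimension
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # 1x1 conv implemented as a linear layer
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                        # x: (N, C, H, W)
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # (N, H, W, C) so norm/linear act on channels
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)                # back to (N, C, H, W)
        return shortcut + x

block = ConvNeXtBlock(96)
print(block(torch.randn(1, 96, 56, 56)).shape)   # torch.Size([1, 96, 56, 56])
```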

## Model Code
You can go to [this HuggingFace documentation](https://huggingface.co/docs/transformers/model_doc/convnext) to learn how to integrate the ConvNext pipeline into your code.

## References
The paper "A ConvNet for the 2020s" proposed the ConvNext architecture in 2022 by a team of researchers from Facebook AI Research, including Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie.
The paper can be found [here](https://arxiv.org/abs/2201.03545) and the GitHub repo can be found [here](https://github.com/facebookresearch/ConvNeXt).

### ResNet (Residual Network)
https://huggingface.co/learn/computer-vision-course/unit2/cnns/resnet.md

# ResNet (Residual Network)

Neural networks with more layers were long assumed to be more effective, since adding layers was expected to improve model performance.

As the networks became deeper, the extracted features could be further enriched, such as seen with VGG16 and VGG19.

A question arose: "Is learning better networks as easy as stacking more layers?"
An obstacle to answering this question, the vanishing gradient problem, was largely addressed by normalized initialization and intermediate normalization layers.

However, a new issue emerged: the degradation problem. As the neural networks became deeper, accuracy saturated and degraded rapidly. An experiment comparing shallow and deep plain networks revealed that deeper models exhibited higher training and test errors, suggesting a fundamental challenge in training deeper architectures effectively. This degradation was not because of overfitting but because the training error increased when the network became deeper. The added layers did not approximate the identity function.

ResNet’s residual connections unlocked the potential of the extreme depth, propelling the accuracy upwards compared to the previous architectures.

## ResNet Architecture

- A Residual Block. Source: ResNet Paper
![residual](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/ResnetBlock.png)

ResNets introduce a concept called residual learning, which allows the network to learn the residuals (i.e., the difference between the learned representation and the input), instead of trying to directly map inputs to outputs. This is achieved through skip connections (or shortcut connections). 

Let's break this down:

#### 1. Basic Building Block: Residual Block

In a typical neural network layer, we aim to learn a mapping H(x), where x is the input and H is the transformation we want the layer to apply, so the output is y = H(x). In ResNets, instead of learning H(x) directly, the stacked layers are designed to learn the residual F(x) = H(x) − x. The transformation in a residual block is then written as: y = F(x) + x.

Here, x is the input to the residual block, and F(x) is the output of the block's stacked layers (usually convolutions, batch normalization, and ReLU). The identity mapping x is added directly to F(x) through the skip connection, so the final output is F(x) + x. This residual function is easier to optimize than the original mapping. If the optimal transformation is close to the identity (i.e., no transformation is needed), the network can simply drive F(x) ≈ 0 to pass the input through unchanged.

* The building block of ResNet can be shown in the picture, source ResNet paper. 

![resnet_building_block](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/ResnetBlock.png)

#### 2. Learning Residuals and Gradient Flow

The primary reason ResNets work well for very deep networks is due to improved gradient flow during backpropagation. Let’s analyze the backward pass mathematically. Let's say you have the residual block y = F(x) + x. When calculating gradients via the chain rule, we compute the derivative of the loss 𝐿 with respect to the input x. For a standard block (without residuals), the gradient is:

`∂L/∂x = ∂L/∂F(x) * ∂F(x)/∂x`

However, for the residual block, where y = F(x) + x, the gradient becomes:

`∂L/∂x = ∂L/∂F(x) * ∂F(x)/∂x + ∂L/∂F(x) · 1`

Notice that we now have an additional term: the gradient ∂L/∂F(x) flows back to x directly through the skip connection. This means that the gradient at each layer has a direct path back to earlier layers, improving gradient flow and reducing the chance of vanishing gradients. The gradients flow more easily through the network, allowing deeper networks to train without degradation.

#### 3. Why Deeper Networks are Now Possible:

ResNets made it feasible to train networks with hundreds or even thousands of layers. Here’s why deeper networks benefit from this:

* Identity Shortcut Connections: Shortcut connections perform identity mapping, and their output is added to the output of the stacked layers. Identity shortcuts add neither extra parameters nor computational complexity; they bypass layers, creating direct paths for information flow, and they enable the network to learn the residual function F.
* Better Gradient Flow: As explained earlier, the residuals help gradients propagate better during backpropagation, addressing the vanishing gradient problem in very deep networks.
* Easier to Optimize: By learning residuals, the network is essentially breaking down the learning process into easier, incremental steps. It’s easier for the network to learn the residual F(x) = H(x) − x than it is to learn H(x) directly, especially in very deep networks.

### Summarizing:

We can conclude: ResNet = plain network + shortcut connections!

For the operation \(F(x) + x\), \(F(x)\) and \(x\) must have identical dimensions.
ResNet employs two techniques to achieve this:

- Zero-padding shortcuts that add channels with zero values, maintaining dimensions without introducing extra parameters to be learned.
- Projection shortcuts that use 1x1 convolutions to adjust dimensions when necessary, involving some additional learnable parameters.

In deeper ResNet architectures like ResNet 50, 101, and 152, a specialized "bottleneck building block" is employed to manage parameter complexity and maintain efficiency while enabling even deeper learning.

## ResNet Code

### Deep Residual Networks Pre-trained on ImageNet
Below you can see how to load a pre-trained ResNet with an image classification head using the Transformers library.
```python
from transformers import ResNetForImageClassification

model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

model.eval()
```
All pre-trained models expect input images normalized similarly, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
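As a minimal sketch of that preprocessing with torchvision (assuming the common resize-then-center-crop recipe; the Hugging Face feature extractor used below handles this for you, and the image path is a hypothetical local file):

```python
from PIL import Image
from torchvision import transforms

# ImageNet normalization constants mentioned above
preprocess = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),  # scales pixel values to [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]
)

image = Image.open("cat.png").convert("RGB")  # hypothetical local image
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
```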

Here's a sample execution. This example is available in the [Hugging Face documentation](https://huggingface.co/docs/transformers/v4.18.0/en/model_doc/resnet).

```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

inputs = feature_extractor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

## References

- [PyTorch docs](https://pytorch.org/hub/pytorch_vision_resnet/)
- [ResNet paper: Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
- [Hugging Face Documentation on ResNet](https://huggingface.co/docs/transformers/en/model_doc/resnet)

### YOLO
https://huggingface.co/learn/computer-vision-course/unit2/cnns/yolo.md

# YOLO 

## A short introduction to Object Detection

Convolutional Neural Networks have taken a big step towards solving the image classification problem.
But there remains another big task to solve: object detection. Object detection not only requires categorizing the object from the image but also accurately predicting its location (in this case, the coordinates of the bounding boxes of the object) in the image. This is where the big breakthrough of YOLO came in. Before delving deeper into YOLO, let's go through the history of object detection algorithms using CNNs.

### RCNN, Fast RCNN, Faster RCNN

#### R-CNN (Region-based Convolutional Neural Networks)
R-CNN is one of the simplest possible ways to use convolutional neural networks for object detection. In simple terms, the basic idea is to
detect a "region" and then use a CNN to classify that region, making this a multi-step process. Based on this idea, the R-CNN paper was published in 2014[1].

R-CNN uses the following steps:

1. Use the Selective Search algorithm to propose a region.
2. Use a CNN-based classifier to classify the object in the region.

For training, the paper proposed the following steps:

1. Build a dataset of regions extracted from the object detection dataset.
2. Fine-tune an AlexNet model on this regions dataset.
3. Then use the fine-tuned model on the object detection dataset.

The following is the basic R-CNN pipeline:
![rcnn](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/RCNN.png)

#### Fast RCNN 

Fast R-CNN[2] focuses on improving upon the original R-CNN. It added these four improvements:

- Training in a single stage instead of the multi-stage pipeline of R-CNN, using a multi-task loss.
- No disk storage required for feature caching.
- Introduces the ROI pooling layer to extract features only from the Regions of Interest.
- Trains an end-to-end model, in contrast to the multi-step R-CNN / SPPnet models, using a multi-task loss.

![fast_rcnn](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/Fast%20R-CNN.png)

#### Faster RCNN 

Faster R-CNN[3] completely removes the need for the Selective Search algorithm! Its changes resulted in an improvement of the inference time by 90% compared to that of Fast R-CNN:
- It introduces the RPN (Region Proposal Network), an attention-like mechanism that trains the model to give "attention" to the regions of the image containing objects.
- It merges the RPN with Fast R-CNN, making it an end-to-end object detection model.

![Faster RCNN](https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/n8eDqnlEvDS5SIKGoSUpz.png)

#### Feature Pyramid Network (FPN)
- A Feature Pyramid Network (FPN) is a multi-scale feature extractor for object detection.
- It first downscales the image into lower-dimensional feature maps.
- Then it upscales them again.
- From every upscaled feature map, it tries to predict the output (in this case, the categories).
- There are also skip connections between features of similar dimensions!

Please refer to the following images, which are taken from the paper[4].

![FPN](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/FPN.png)
![FPN2](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/FPN_2.png)

## YOLO architecture

YOLO was a groundbreaking innovation of its time: a real-time object detector, end-to-end trainable with a single network.

### Before YOLO

Earlier detection systems applied image classifiers to patches of an image. Systems like deformable parts models (DPM) used a sliding-window approach, where the classifier is run at evenly spaced locations over the entire image.

Other works like R-CNN used two-step detection. First, they detected many possible regions of interest, generated as bounding boxes by a region proposal method. Then a classifier is run over all proposed regions to make final predictions. Post-processing, such as refining the bounding boxes, eliminating duplicate detections, and re-scoring the boxes based on other objects in the scene, also needs to be done.

These complex pipelines are slow and hard to optimize because each individual component must be trained separately.

### YOLO
YOLO is a single-stage detector where the bounding boxes and the object classes are predicted in the same pass, simultaneously. This makes the system super fast - 45-frames-per-second fast.

#### Reframing Object Detection
YOLO reframes the Object Detection task as a single regression problem, which predicts bounding box coordinates and class probabilities. 

In this design, we divide the image into an $S \times S$ grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object. We can define $B$ as the maximum number of objects to be detected in each cell. So each grid cell predicts $B$ bounding boxes, along with a confidence score for each box.

#### Confidence
The confidence score of a bounding box should reflect how accurately the box was predicted. It should be close to the IOU (Intersection over Union) between the predicted box and the ground truth box. If the grid cell is not supposed to predict a box, the confidence should be zero. So the confidence encodes both the probability that an object's center lies in the grid cell and the correctness of the predicted bounding box.

Formally, 

$$\text{confidence} := P(\text{Object}) \times \text{IOU}_{\text{pred}}^{\text{truth}}$$

#### Coordinates
The coordinates of a bounding box are encoded in 4 numbers $(x, y, w, h)$. The $(x, y)$ coordinates represent the center of the box relative to the bounds of the grid cell. The width and height are normalized to image dimensions.

#### Class
The class prediction is a $C$-long vector representing the conditional class probabilities of each class, given that an object exists in the cell. Each grid cell only predicts one such vector, i.e., a single class will be assigned to each grid cell, and so all the $B$ bounding boxes predicted by that grid cell will have the same class.

Formally,
$$C_i = P(\text{class}_i \mid \text{Object})$$

At test time, we multiply the conditional class probabilities and the individual box confidence predictions, which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.

$$\begin{align}
C_i \times \text{confidence} &= P(\text{class}_i \mid \text{Object}) \times P(\text{Object}) \times \text{IOU}_{\text{pred}}^{\text{truth}}\\
&=P(\text{class}_i) \times \text{IOU}_{\text{pred}}^{\text{truth}}
\end{align}
$$

To recap, we have an image divided into an $S \times S$ grid. Each grid cell predicts $B$ bounding boxes, each consisting of 5 values (confidence + 4 coordinates), plus a $C$-long vector containing the conditional probabilities of each class. So each grid cell is a $(B \times 5 + C)$-long vector, and the whole grid is $S \times S \times (B \times 5 + C)$.

So if we have a learnable system which converts an image to an $S \times S \times (B \times 5 + C)$ feature map, we are one step closer to the task.

#### Network Architecture
In the original YOLOv1 design, the input is an RGB image of size $448 \times 448$. The image is divided into an $S \times S = 7 \times 7$ grid, where each grid cell predicts $B=2$ bounding boxes and class probabilities over $C=20$ classes.

The network architecture is a simple convolutional neural network. The input image is passed through a series of convolutional layers, followed by a fully connected layer. The final layer outputs are reshaped to $7 \times 7 \times (2 \times 5 + 20) = 7 \times 7 \times 30$.
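As a rough sketch of how a network can end in such an output (illustrative only: a tiny stand-in head instead of the real 24-layer backbone, with made-up channel sizes):

```python
import torch
import torch.nn as nn

S, B, C = 7, 2, 20  # grid size, boxes per cell, classes

# Hypothetical tiny head standing in for the real backbone + fully connected layers
head = nn.Sequential(
    nn.AdaptiveAvgPool2d((S, S)),  # pretend the backbone left us an S x S feature map
    nn.Flatten(),
    nn.Linear(S * S * 512, 4096),
    nn.LeakyReLU(0.1),
    nn.Linear(4096, S * S * (B * 5 + C)),
)

features = torch.randn(1, 512, 14, 14)  # dummy backbone features
out = head(features).view(-1, S, S, B * 5 + C)
print(out.shape)  # torch.Size([1, 7, 7, 30])
```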

The YOLOv1 design took inspiration from GoogLeNet, which used 1x1 convolutions to reduce the depth of the feature maps. This was done to reduce the number of parameters and the amount of computation in the network. The network has 24 convolutional layers followed by 2 fully connected layers. It uses a linear activation function for the final layer, and all other layers use the leaky rectified linear activation:

$$\text{LeakyReLU}(x) = \begin{cases}
x & \text{if } x > 0\\
0.1x & \text{otherwise}
\end{cases}$$

See figure below for the architecture of YOLOv1.

![v1_arch](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/yolov1_arch.png)

#### Training
The network is trained end-to-end on the image and the ground truth bounding boxes. The loss function is a sum of squared error loss. The loss function is designed to penalize the network for incorrect predictions of bounding box coordinates, confidence and class probabilities. We will discuss the loss function in the next section.

YOLO predicts multiple bounding boxes per grid cell. At training time, we only want one bounding box predictor to be responsible for each object. We assign one predictor to be “responsible” for predicting an object based on which prediction has the highest current IOU with the ground truth. This leads to specialization between the bounding box predictors. Each predictor gets better at predicting certain sizes, aspect ratios, or classes of objects, improving overall recall. We will encode this information in the loss function for grid cell $i$ and bounding box $b$ using $\mathbb{1}_{ib}^{\text{obj}}$. $\mathbb{1}_{ib}^{\text{noobj}}$ is the opposite of $\mathbb{1}_{ib}^{\text{obj}}$.

##### Loss Function
Now that we have a learnable system which converts an image to an $S \times S \times (B\times5 + C)$ feature map, we need to train it.

A simple function to train such a system is to use a sum of squared error loss. We can use the squared error between the predicted values and the true values, i.e for bounding box coordinates, confidence and class probabilities. 

The loss for each grid cell $(i)$ can look like this:

$$
\mathcal{L}^{i} = \mathcal{L}^{i}_{\text{coord}} + \mathcal{L}^{i}_{\text{conf}} + \mathcal{L}^{i}_{\text{class}}\\
$$

$$\begin{align*}
\mathcal{L}^{i}_{\text{coord}} &= \sum_{b=0}^{B} \mathbb{1}_{ib}^{\text{obj}} \left[ \left( \hat{x}_{ib} - x_{ib} \right)^2 + \left( \hat{y}_{ib} - y_{ib} \right)^2 + 
\left( 
    \hat{w}_{ib} - w_{ib}
\right)^2 + 
\left( 
    \hat{h}_{ib} - h_{ib}
    \right)^2
\right]\\
\mathcal{L}^{i}_{\text{conf}} &= \sum_{b=0}^{B} (\hat{\text{conf}}_{i} - \text{conf}_{i})^2\\
\mathcal{L}^{i}_{\text{class}} &= \mathbb{1}_i^\text{obj}\sum_{c=0}^{C} (\hat{P}_{i,c} - P_{i,c})^2
\end{align*}
$$

where
- $\mathbb{1}_{ib}^{\text{obj}}$ is 1 if the $b$-th bounding box in the $i$-th grid cell is responsible for detecting the object, 0 otherwise.
- $\mathbb{1}_i^\text{obj}$ is 1 if the $i$-th grid cell contains an object, 0 otherwise.

But this loss function does not necessarily align well with the task of object detection. Simply adding the losses of both tasks (classification and localization) weights them equally.

To rectify this, YOLOv1 uses a weighted sum of squared error loss. First, we assign a separate weight to the localization error, called $\lambda_{\text{coord}}$. It is usually set to 5.

So the loss for each grid cell $(i)$ can look like this:
$$
\mathcal{L}^{i} = \lambda_{\text{coord}}
    \mathcal{L}^{i}_{\text{coord}} + \mathcal{L}^{i}_{\text{conf}} + \mathcal{L}^{i}_{\text{class}}\\
$$

In addition, many grid cells do not contain objects. This pushes their confidence scores toward zero, and the gradients from these empty cells often overpower the gradients from the cells that do contain objects. This makes the network unstable during training.

To rectify this, we also weigh the confidence loss from grid cells that do not contain objects lower than from grid cells that do contain objects. We use a separate weight for this confidence loss, called $\lambda_{\text{noobj}}$, which is usually set to 0.5.

So the confidence loss for each grid cell $(i)$ can look like this:
$$
\mathcal{L}^{i}_{\text{conf}} = \sum_{b=0}^{B} \left[
    \mathbb{1}_{ib}^{\text{obj}} \left( \hat{\text{conf}}_{i} - \text{conf}_{i} \right)^2 +
    \lambda_{\text{noobj}} \mathbb{1}_{ib}^{\text{noobj}} \left( \hat{\text{conf}}_{i} - \text{conf}_{i} \right)^2
\right]
$$

The sum of squared error for bounding box coordinates can also be problematic. It weighs errors in large boxes and small boxes equally, but a small deviation in a large box should not be penalized as much as the same deviation in a small box.

To rectify this, YOLOv1 uses a sum of squared error loss on the **square root** of the bounding box width and height, which makes the loss less sensitive to box scale.

So the localization loss for each grid cell $(i)$ can look like this:

$$
\mathcal{L}^{i}_{\text{coord}} = \sum_{b=0}^{B} \mathbb{1}_{ib}^{\text{obj}} \left[ \left( \hat{x}_{ib} - x_{ib} \right)^2 + \left( \hat{y}_{ib} - y_{ib} \right)^2 +
\left(
    \sqrt{\hat{w}_{ib}} - \sqrt{w_{ib}}
\right)^2 +
\left(
    \sqrt{\hat{h}_{ib}} - \sqrt{h_{ib}}
\right)^2
\right]
$$
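Putting the pieces together, the sketch below implements this weighted loss for a single image, under simplifying assumptions: the responsible-predictor mask is assumed to be precomputed from the IOUs, the targets are assumed to be laid out exactly like the predictions in an $(S, S, B \times 5 + C)$ tensor with the $B$ boxes first, and no batching is handled.

```python
import torch

S, B, C = 7, 2, 20
lambda_coord, lambda_noobj = 5.0, 0.5


def yolo_v1_loss(pred, target, responsible):
    """pred, target: (S, S, B*5 + C); responsible: (S, S, B) 0/1 mask of responsible predictors."""
    # Split boxes (x, y, w, h, conf) and class probabilities
    pred_boxes = pred[..., : B * 5].reshape(S, S, B, 5)
    tgt_boxes = target[..., : B * 5].reshape(S, S, B, 5)
    pred_cls, tgt_cls = pred[..., B * 5 :], target[..., B * 5 :]

    obj = responsible                               # 1_ib^obj
    noobj = 1.0 - responsible                       # 1_ib^noobj
    cell_has_obj = responsible.max(dim=-1).values   # 1_i^obj

    # Localization loss: squared error on x, y and on sqrt(w), sqrt(h)
    xy_err = ((pred_boxes[..., :2] - tgt_boxes[..., :2]) ** 2).sum(-1)
    wh_err = (
        (pred_boxes[..., 2:4].clamp(min=0).sqrt() - tgt_boxes[..., 2:4].sqrt()) ** 2
    ).sum(-1)  # clamp avoids sqrt of negative raw predictions
    loss_coord = (obj * (xy_err + wh_err)).sum()

    # Confidence loss: cells with and without objects weighted differently
    conf_err = (pred_boxes[..., 4] - tgt_boxes[..., 4]) ** 2
    loss_conf = (obj * conf_err).sum() + lambda_noobj * (noobj * conf_err).sum()

    # Classification loss: only for cells that contain an object
    loss_cls = (cell_has_obj.unsqueeze(-1) * (pred_cls - tgt_cls) ** 2).sum()

    return lambda_coord * loss_coord + loss_conf + loss_cls


# Dummy usage
pred = torch.rand(S, S, B * 5 + C)
target = torch.rand(S, S, B * 5 + C)
responsible = torch.zeros(S, S, B)
responsible[3, 3, 0] = 1.0  # pretend one object whose center falls in cell (3, 3)
print(yolo_v1_loss(pred, target, responsible))
```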

#### Inference
Inference is simple. We pass the image through the network and get the $S \times S \times (B \times 5 + C)$ feature map. We then filter out the boxes which have confidence scores less than a threshold. 

##### Non-Maximum Suppression
In rare cases, for large objects, the network tends to predict multiple boxes from multiple grid cells. To eliminate duplicate detections, we use a technique called Non-Maximum Suppression (NMS). NMS works by selecting the box with the highest confidence score and eliminating all other boxes with an IOU greater than a threshold. This is done iteratively until no overlapping boxes remain.
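Here is a small, framework-free sketch of the NMS procedure described above, assuming boxes in (x1, y1, x2, y2) format; in practice an optimized implementation such as `torchvision.ops.nms` is typically used.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes, dropping overlapping lower-scoring ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep


boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]: the overlapping second box is suppressed
```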

The end to end flow looks like this:
![nms](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/object-detection-gif.gif)

## Evolution of YOLO
So far, we have seen the basic characteristics of YOLO and how it allows for highly accurate and fast predictions. This was actually the first version of YOLO, called YOLOv1. YOLOv1 was released in 2015, and since then, multiple versions have been released. It was groundbreaking in terms of accuracy and speed, since it introduced the concept of using a single convolutional neural network (CNN) that processes the entire image at once, dividing it into an S × S grid. Each grid cell predicts bounding boxes and class probabilities directly. However, it suffered from poor localization and detection of multiple objects in small image areas. In the following years, many new versions were released by different teams to gradually improve the accuracy, speed, and robustness.

### **YOLOv2 (2016)** 
A year after releasing the first version, YOLOv2[5] came out. The improvements focused on both accuracy and speed, and also dealt with the localization problem. First, YOLOv2 replaced YOLOv1’s backbone with Darknet-19, a variant of the Darknet architecture. Darknet-19 is lighter than the previous version’s backbone and consists of 19 convolutional layers followed by max-pooling layers, which gave YOLOv2 the ability to capture more information while staying fast. It also applied batch normalization to all convolutional layers, which allowed removing the dropout layers, dealt with the overfitting problem, and increased the mAP. It also introduced the concept of anchor boxes, which add prior knowledge about the width and height of the detected boxes. Additionally, to deal with the poor localization, YOLOv2 predicted the classes and objects for every anchor box and grid cell (now 13x13). So we have a maximum (for 5 anchor boxes) of 13x13x5 = 845 boxes.

### **YOLOv3 (2018)**
YOLOv3[6] again significantly improved the detection speed and accuracy by replacing the Darknet-19 architecture with the more complex but efficient Darknet-53. Also, it dealt with the localization problem better by using three different scales for object detection (13x13, 26x26, and 52x52 grids). This helped find objects of different sizes in the same area. It increased the bounding boxes to: 13 x 13 x 3 + 26 x 26 x 3 + 52 x 52 x 3 = 10,647. Non-Maximum Suppression (NMS) was still used to filter out redundant overlapping boxes.

### **YOLOv4 (2020)**
Back in 2020, YOLOv4[7] became one of the best detection models in terms of speed and accuracy, achieving state-of-the-art results on object detection benchmarks. The authors changed the backbone architecture again, opting for the faster and more accurate CSPDarknet53[8]. An important improvement of this version was the optimization for efficient resource utilization, making it suitable for deployment on various hardware platforms, including edge devices. Also, it included a number of augmentations before training that further improved the model's generalization. The authors included this improvement in a set of methodologies called bag-of-freebies. Bag-of-freebies are optimization methods that have a cost to the training process but aim to increase the model's accuracy in real-time detection without increasing the inference time.

### **YOLOv5 (2020)**
YOLOv5[9] translated the Darknet framework (written in C) to the more flexible and easy-to-use PyTorch framework. This version automated the previous anchor selection mechanism by introducing auto-anchors. Auto-anchors train the model's anchors automatically to match your data: during training, YOLO initially uses k-means clustering and then genetic methods to evolve better-matched anchors, which are placed back into the YOLO model. It also offers different model sizes depending on hardware constraints, with names similar to today’s YOLOv8 models: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x.

### **YOLOv6 (2022)**
The next version, YOLOv6[10][11], was released by the Meituan Vision AI Department under the article title: "YOLOv6: A Single-Stage Object Detection Framework for Industry." This team made further improvements in terms of speed and accuracy by focusing on five aspects:
1) Reparameterization using the RepVGG technique, which is a modified version of VGG with skip connections. During inference, these connections are fused to improve the speed.
2) Quantization of reparameterization-based detectors, which added blocks called Rep-PANs.
3) Recognition of the importance of considering different hardware costs and capabilities for model deployment. Specifically, the authors tested latency on low-power GPUs (like the Tesla T4), in contrast to previous works that mostly used high-cost machines (like the V100).
4) Introduction of new types of loss functions, such as Varifocal Loss for classification, IoU-series losses for bounding box regression, and Distribution Focal Loss.
5) Accuracy improvements during training using knowledge distillation.
In 2023, YOLOv6 v3[12] was released under the title "YOLOv6 v3.0: A Full-Scale Reloading", which introduced enhancements to the network architecture and training scheme, once again advancing speed and accuracy (evaluated on the COCO dataset) compared to previously released versions.

### **YOLOv7 (2022)**
YOLOv7 was released with the paper titled “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors”[13][14] by the authors of YOLOv4. Specifically, this version of bag-of-freebies includes a new label assignment method called coarse-to-fine lead-guided label assignment and uses gradient flow propagation paths to analyze how re-parameterized convolution should be combined with different networks. They also proposed “extend” and “compound scaling” methods for the real-time object detector that can effectively utilize parameters and computation. Again, all these improvements took real-time object detection to a new state-of-the-art, outperforming previous releases.

### **YOLOv8 (2023)**
YOLOv8[15], developed by Ultralytics in 2023, became again the new SOTA. It introduced improvements on the backbone and neck alongside an anchor-free approach, which eliminates the need for predefined anchor boxes. Instead, predictions are made directly. This version supports a wide range of vision tasks, including classification, segmentation, and pose estimation. Additionally, YOLOv8 has scaling capabilities with pre-trained models available in various sizes: nano, small, medium, large, and extra-large, and can be easily fine-tuned on custom datasets.

### **YOLOv9 (2024)**
YOLOv9 was released with the paper titled “YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information”[16][17] by the same authors as YOLOv7 and YOLOv4. This paper highlights the issue of information loss that existing methods and architectures have during layer-by-layer feature extraction and spatial transformation. To address this issue, the authors proposed:

* Concept of programmable gradient information (PGI) to cope with the various changes required by deep networks to achieve multiple objectives.
* Generalized Efficient Layer Aggregation Network (GELAN), a new lightweight network architecture that achieves better parameter utilization than the current methods without sacrificing computational efficiency.

With these changes, YOLOv9 set new benchmarks on the MS COCO challenge.

Taking into consideration the timeline and the different licensing of the models we can create the following figure:
![yolo_evolution](https://huggingface.co/datasets/hf-vision/course-assets/resolve/main/yolo_evolution.png)

### **Note about the different versions**
This short chapter presented the history/evolution of YOLO in a linear way. However, this is not the case — many other versions of YOLO were released in parallel. Notice the release of YOLOv4 and YOLOv5 in the same year. Other versions that we did not cover include YOLOvX (2021), which was based on YOLOv3 (2018), and YOLOR (2021), which was based on YOLOv4 (2020), and many others.
Also, it is important to understand that the selection of the ‘best’ model version depends on the user requirements, such as speed, accuracy, hardware limitations, and user-friendliness. For example, YOLOv2 is very good at speed. YOLOv3 provides a balance between accuracy and speed. YOLOv4 has the best ability for adapting or being compatible across different hardware.

## References
[1] [Rich feature hierarchies for accurate object detection and semantic segmentation](https://arxiv.org/abs/1311.2524v5) 
[2] [Fast R-CNN](https://arxiv.org/abs/1504.08083) 
[3] [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) 
[4] [Feature Pyramid Network](https://arxiv.org/pdf/1612.03144.pdf) 
[5] [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) 
[6] [YOLOv3: An Incremental Improvement](https://arxiv.org/abs/1804.02767) 
[7] [YOLOv4: Optimal Speed and Accuracy of Object Detection](https://arxiv.org/abs/2004.10934) 
[8] [YOLOv4 GitHub repo](https://github.com/AlexeyAB/darknet) 
[9] [Ultralytics YOLOv5](https://docs.ultralytics.com/models/yolov5/) 
[10] [YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications](https://arxiv.org/abs/2209.02976) 
[11] [YOLOv6 GitHub repo](https://github.com/meituan/YOLOv6) 
[12] [YOLOv6 v3.0: A Full-Scale Reloading](https://arxiv.org/abs/2301.05586) 
[13] [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696) 
[14] [YOLOv7 GitHub repo](https://github.com/WongKinYiu/yolov7) 
[15] [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) 
[16] [YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information](https://arxiv.org/abs/2402.13616) 
[17] [YOLOv9 GitHub repo](https://github.com/WongKinYiu/yolov9) 
[18] [YOLOvX](https://yolovx.com/) 
[19] [You Only Learn One Representation: Unified Network for Multiple Tasks](https://arxiv.org/abs/2105.04206)

### Very Deep Convolutional Networks for Large Scale Image Recognition (2014)
https://huggingface.co/learn/computer-vision-course/unit2/cnns/vgg.md

# Very Deep Convolutional Networks for Large Scale Image Recognition (2014)

## Introduction

The VGG architecture was developed in 2014 by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group (hence the name VGG) at Oxford University. The model demonstrated significant improvements over previous models in the 2014 ImageNet challenge, also known as ILSVRC.

## VGG Network Architecture
  
- Inputs are 224x224 images. 
- Convolution kernel shape is (3,3) and max pooling window shape is (2,2).
- The number of channels in each convolutional block grows as 64 -> 128 -> 256 -> 512 -> 512.
- VGG16 has 16 hidden layers (13 convolutional layers and 3 fully connected layers).
- VGG19 has 19 hidden layers (16 convolutional layers and 3 fully connected layers).

## Key Comparisons 

- VGG (16 or 19 layers) was relatively deeper than other SOTA networks at the time. AlexNet, the winning model of ILSVRC 2012, has only 8 layers.
- Multiple small (3x3) receptive field filters with ReLU activation, instead of one large (7x7 or 11x11) filter, lead to better learning of complex features. Smaller filters also mean fewer parameters per layer, with additional nonlinearity introduced in between.
- Multi-scale training and inference. Each image was trained on over multiple rounds with varying scales to ensure similar characteristics were captured at different sizes.
- Consistency and simplicity of the VGG network make it easier to scale or modify for future improvements.

## PyTorch Example

Below you can find a PyTorch implementation of VGG19.

```python
import torch.nn as nn

class VGG19(nn.Module):
    def __init__(self, num_classes=1000):
        super(VGG19, self).__init__()

        # Feature extraction layers: Convolutional and pooling layers
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(
                3, 64, kernel_size=3, padding=1
            ),  # 3 input channels, 64 output channels, 3x3 kernel, 1 padding
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(
                kernel_size=2, stride=2
            ),  # Max pooling with 2x2 kernel and stride 2
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(256, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )

        # Pooling Layer
        self.avgpool = nn.AdaptiveAvgPool2d(output_size=(7, 7))

        # Fully connected layers for classification
        self.classifier = nn.Sequential(
            nn.Linear(
                512 * 7 * 7, 4096
            ),  # 512 channels, 7x7 spatial dimensions after max pooling
            nn.ReLU(),
            nn.Dropout(0.5),  # Dropout layer with 0.5 dropout probability
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(4096, num_classes),  # Output layer with 'num_classes' output units
        )

    def forward(self, x):
        x = self.feature_extractor(x)  # Pass input through the feature extractor layers
        x = self.avgpool(x)  # Pass Data through a pooling layer
        x = x.view(x.size(0), -1)  # Flatten the output for the fully connected layers
        x = self.classifier(x)  # Pass flattened output through the classifier layers
        return x
```
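A quick sanity check of the class above with a random input (the weights are randomly initialized here, so the outputs are meaningless until the model is trained):

```python
import torch

model = VGG19(num_classes=1000)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # one RGB image of size 224x224
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000])
```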

### Introduction to Convolutional Neural Networks
https://huggingface.co/learn/computer-vision-course/unit2/cnns/introduction.md

# Introduction to Convolutional Neural Networks

In the last unit we learned about the fundamentals of vision, images and Computer Vision. We also explored visual features as a crucial part of analyzing images with the help of computers.

The approaches we discussed are today often referred to as "classical" Computer Vision. While working fine on many small and restrained datasets and settings, classical methods have their limits that come to light when looking at bigger scale real-world datasets. 

In this unit, we will learn about Convolutional Neural Networks, an important step forward in terms of scale and performance of Computer Vision.

## Convolution: Basic Ideas

Convolution is an operation used to extract features from data. The data can be 1D, 2D, or 3D. We'll explain the operation with a solid example. All you need to know for now is that the operation simply takes a matrix made of numbers, moves it across the data, and takes the sum of products between the data and that matrix. This matrix is called a kernel or filter. You might say, "What does it have to do with feature extraction, and how am I supposed to apply it?"
Don’t panic! We’re getting to it.

To illustrate the intuition, let's take a look at this example. We have this 1D data, and we visualize it. Visualizing it will help us understand the effect of the convolution operation.

    

We have the kernel [-1, 1]. We start from the left-most element, place the kernel over the data, multiply the overlapping numbers, and sum them up. Kernels have a center element; here we pick the center as 1 (the element on the right). The kernel’s center has to touch every single element of the data, so we add an imaginary zero to the left of the data; this is called padding, and you will see it again later. If we didn’t pad, -1 would have to start on the left-most element and the center 1 would never touch it. Let’s see what it looks like.

    

We multiply the left-most element (currently the pad) by -1 and the first data element by 1, sum them up, get 0, and note it down. Now we move the kernel by one position and do the same, noting the result again. This movement is called striding; it is usually done by moving the kernel by one element (or pixel), though you can also move it by more. The result (the convolved data) is currently the array [0, 0].

    

We will repeat it until the right element of the kernel touches every element, which yields the below result.

    

Notice anything? The filter gives the rate of change in the data (the derivatives!). This is one characteristic we could extract from our data. Let’s visualize it.

    

The convolved data (the result of the convolution) is called a feature map. And it makes sense, as it shows the features we can extract, the characteristics related to the data, and the rate of change.
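Here is a small NumPy sketch of this 1D convolution; the signal values are made up for illustration, while the kernel and the zero padding follow the description above.

```python
import numpy as np

signal = np.array([0, 0, 1, 3, 3, 2, 0])  # made-up 1D data
kernel = np.array([-1, 1])                # center is the right-hand element

padded = np.concatenate(([0], signal))    # pad one zero on the left
feature_map = np.array(
    [padded[i] * kernel[0] + padded[i + 1] * kernel[1] for i in range(len(signal))]
)
print(feature_map)  # [0 0 1 2 0 -1 -2] -- the rate of change of the signal
```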

This is exactly what edge detection filters do! Let’s see it in 2-dimensional data. This time, our kernel will be different. It will be a 3x3 kernel (just so you know it could’ve been a 2x2 too).

    

This filter is actually quite famous, but we won’t spoil it for you now :). The previous filter was [-1, 1]; this one has rows of [-1, 0, 1] stacked into a 3x3 shape, and it captures increases and decreases along the horizontal axis. Let’s see an example and apply convolution. Below is our 2D data.

    

Think of this as an image, and we want to extract the horizontal changes. Now, the center of the filter has to touch every single pixel, so we pad the image.

    

The feature map will be the same size as the original data. The result of each convolution step is written to the position the center of the kernel touches in the original matrix; for the first step, that is the top-left position.

    

If we keep applying the convolution, we get the following feature map.

    

This shows us the horizontal changes (the edges). This filter is actually called the Prewitt filter.

    

You can flip [the Prewitt filter](https://en.wikipedia.org/wiki/Prewitt_operator) to get the changes in vertical direction. [The Sobel filter](https://en.wikipedia.org/wiki/Sobel_operator) is another famous filter for edge detection.
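Here is a minimal NumPy sketch of applying the horizontal Prewitt filter with zero padding; the tiny image is made up for illustration.

```python
import numpy as np

image = np.array(
    [
        [0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 0, 255, 255],
        [0, 0, 255, 255],
    ],
    dtype=float,
)  # a made-up image with a vertical edge in the middle

prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # detects horizontal changes

padded = np.pad(image, 1)  # zero padding so the kernel center touches every pixel
feature_map = np.zeros_like(image)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        window = padded[i : i + 3, j : j + 3]
        feature_map[i, j] = (window * prewitt_x).sum()

print(feature_map)  # large values where intensity changes along the horizontal axis
```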

## Convolutional Neural Networks

Fine, but what does it have to do with deep learning? Well, brute forcing filters to extract features does not work well with every image. Imagine if we could somehow find the optimal filters to extract important information or even detect objects in the images. That’s where convolutional neural networks come into play. We convolve images with various filters, and the filter values themselves become the parameters that we optimize; in the end, we find the best filters for our problem.

The idea is that we will use filters to extract information. We will randomly initialize multiple filters, create our feature maps, feed them to a classifier, and do backpropagation. Before diving into it, I’d like to introduce you to something we call “pooling”.

As you can see above, there are many pixels that show the change in the feature map. To know that there’s an edge, we only need to see that there’s a change (an edge, a corner, anything), and that’s it. 

    

In the above example, we could have kept only one of the two values, and that would be enough. This way, we would store fewer parameters and still have the features. This operation of keeping only the most important element in a region of the feature map is called pooling. With pooling, we lose the exact pixel location of where there’s an edge, but we store fewer parameters. It also makes our feature extraction mechanism more robust to small changes: for example, to know there’s a face in an image, we only need to know that there are two eyes, a nose, and a mouth, while the distance between those elements and their size change from face to face, and pooling makes the model more robust against these changes. Another good thing about pooling is that it helps us handle varying input sizes. Below is the max pooling operation, where from every group of four pixels we keep only the maximum pixel. There are various types of pooling, e.g., average pooling, weighted pooling, or L2 pooling.

Let’s build a simple CNN architecture. We will use a Keras example (for the sake of illustration) and walk you through what’s happening. Below is our model (again, don’t panic).

If you don’t know what the Keras Sequential API is doing: it stacks layers like lego bricks and connects them. Each layer has different hyperparameters: the Conv2D layer takes the number of convolution filters, the kernel size, and the activation function; MaxPooling2D takes the pooling size; and the Dense layer takes the number of output units (again, don’t panic).

Most convnet implementations don’t pad, unlike classical image processing, where we pad so the kernel can touch every pixel. Padding with zeros assumes we might have features at the borders, and it adds computation on top. That’s why you see that the first Conv2D output size is (26, 26): we lose information along the borders.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input shape and class count consistent with the summary below (28x28 grayscale images, 10 classes)
input_shape = (28, 28, 1)
num_classes = 10

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.summary()
```
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1600)              0         
_________________________________________________________________
dropout (Dropout)            (None, 1600)              0         
_________________________________________________________________
dense (Dense)                (None, 10)                16010     
=================================================================
Total params: 34,826
Trainable params: 34,826
Non-trainable params: 0
_________________________________________________________________
```
Convolutional neural networks start with an input layer and a convolutional layer. Keras Conv2D layers take the number of kernels and the size of the kernel as parameters. What’s happening is illustrated below. Here, we convolve the image with 32 kernels and end up with 32 feature maps, each nearly the size of the image (slightly smaller, since we don’t pad).

    

After convolutional layers, we put a max pooling layer to reduce the number of parameters stored and make the model robust to the changes, as discussed above. This will reduce the number of parameters calculated.

    

Then, these feature maps are concatenated together and flattened.

    

Later on, we use something called dropout, which randomly drops a portion of the units during training to avoid overfitting. Finally, the flattened features go through a dense layer to be classified, and backpropagation takes place.

### Backpropagation in Convolutional Neural Networks in Theory

How does backpropagation work here? We want to optimize for the best kernel values, so they are our weights. In the end, we expect the classifier to figure out the relationship between pixel values, kernels, and classes. Thus, we have a very long flattened array consisting of elements that are pooled and activated versions of pixels convolved with the initial weights (the kernel elements). We update those weights to answer the question “which kernels should I apply to distinguish between a cat and a dog photo?”. The point of training CNNs is to come up with the optimal kernels, and these are found using backpropagation. Prior to CNNs, people would try many filters on an image to extract features themselves, yet most generic filters (as we’ve seen above, e.g., Prewitt or Sobel) do not necessarily work for all images, given that images can be very different even within the same dataset. This is why CNNs outperform traditional image processing techniques.

Using convolutional neural networks also brings a couple of advantages in terms of parameter storage.

### Parameter sharing

In convolutional neural networks, we convolve with the same filter across all pixels, all channels, and all images, which saves on stored parameters; this is much more efficient than processing an image with a dense neural network. This is called “weight tying”, and those weights are called “tied weights”. This is also seen in autoencoders.

### Sparse Interactions

In densely connected neural networks, we input the whole piece of data at once (which is overwhelming, since images have hundreds or thousands of pixels), while in convnets we use smaller kernels to extract local features. This is called sparse interaction, and it helps us use less memory.

### Let's Dive Further with MobileNet
https://huggingface.co/learn/computer-vision-course/unit2/cnns/mobilenetextra.md

# Let's Dive Further with MobileNet
## Can We Use Vision Transformers with MobileNet?
### Not directly, but we can!
MobileNet can be integrated with transformer models in various ways to enhance image processing tasks. 

One approach is to use MobileNet as a feature extractor, where its convolutional layers process images and the resultant features are fed into a transformer model for further analysis.

Another approach is training MobileNet and a Vision Transformer separately and then combining their predictions through ensemble techniques, potentially boosting performance as each model may capture distinct facets of the data. This multifaceted integration showcases the flexibility and potential of combining convolutional and transformer architectures in image processing.
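Here is a rough sketch of the first approach, using `timm` to pull feature maps out of a MobileNet and feeding them as tokens to a small transformer encoder. This is an illustrative combination, not the Mobile-Former architecture discussed below; the projection size, encoder settings, and 10-class head are arbitrary choices.

```python
import timm
import torch
import torch.nn as nn

# MobileNet as a feature extractor: features_only returns intermediate feature maps
backbone = timm.create_model("mobilenetv3_large_100", pretrained=True, features_only=True)
num_channels = backbone.feature_info.channels()[-1]  # channels of the last feature map

d_model = 256
project = nn.Linear(num_channels, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)
classifier = nn.Linear(d_model, 10)  # hypothetical 10-class head

image = torch.rand(1, 3, 224, 224)
feature_map = backbone(image)[-1]                # (1, C, H, W), e.g. 7x7 spatially
tokens = feature_map.flatten(2).transpose(1, 2)  # (1, H*W, C): one token per location
tokens = project(tokens)                         # (1, H*W, d_model)
encoded = encoder(tokens)                        # transformer refines the tokens
logits = classifier(encoded.mean(dim=1))         # pool the tokens and classify
print(logits.shape)  # torch.Size([1, 10])
```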

There is an implementation of this concept, called Mobile-Former.

### Mobile-Former
Mobile-Former is a neural network architecture that aims to combine both MobileNet and Transformers for effective image processing tasks. It's designed to leverage MobileNet for local feature extraction, and Transformers for context understanding.

![Mobile-Former Architecture](https://www.researchgate.net/publication/370058769/figure/fig1/AS:11431281148324026@1681702186116/The-overall-architecture-of-Dynamic-Mobile-FormerDMF-and-details-of-DMF-block.png)

You can find a more detailed explanation in [Mobile-Former's paper](https://arxiv.org/abs/2108.05895).

## MobileNet with Timm
### What is Timm?
`timm` (or Py**T**orch **Im**age **M**odels) is a Python library that provides a collection of pre-trained deep learning models, primarily focused on computer vision tasks, along with utilities for training, fine-tuning, and inference. 

Using MobileNet through the `timm` library in PyTorch is straightforward, as `timm` provides an easy way to access a wide range of pre-trained models, including various versions of MobileNet.
Here's a basic example of how to use MobileNet with `timm`.

You must install `timm` with `pip` first:
```bash
pip install timm
```
Here is the basic code:
```python
import timm
import torch

# Load a pre-trained MobileNet model
model_name = "mobilenetv3_large_100"

model = timm.create_model(model_name, pretrained=True)

# If you want to use the model for inference
model.eval()

# Forward pass with a dummy input
# Batch size 1, 3 color channels, 224x224 image
input_tensor = torch.rand(1, 3, 224, 224)

output = model(input_tensor)
print(output)
```
You can go to [Timm's Hugging Face Page](https://huggingface.co/timm) and find other pretrained models and datasets for various tasks.

### Video Processing Basics
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/video-processing-basics.md

# Video Processing Basics
With the rise of transformers, vision transformers have become essential tools for various computer vision tasks. Vision transformers are great at computer vision tasks involving both images and videos.

However, understanding how these models handle images and videos differently is crucial for optimal performance and accurate results.

## Understanding Image Processing for Vision Transformers
When it comes to image data, vision transformers typically process individual still images by splitting them into non-overlapping patches and then transforming these patches individually.
Suppose we have a 224x224 image; we split it into non-overlapping 16x16-pixel patches, giving a 14x14 grid of patches. This patch-based processing not only reduces computation, it also allows the model to capture local features in the image effectively.
Each patch is then fed through a series of self-attention layers and feed-forward neural networks to extract semantic information. Thanks to this hierarchical processing technique, vision transformers are able to capture both high-level and low-level features in the image.
Since vision transformers process patches individually, and transformers by default have no mechanism to track the position of their inputs, the spatial context of the image can be lost.
To address this, vision transformers often include positional encodings that capture the relative position of each patch within the image. By incorporating positional information, the model can better understand the spatial relationships between different patches and enhance its ability to recognize objects and patterns.
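As a minimal sketch of this patch-plus-position embedding (the embedding dimension is an arbitrary choice, and real ViT implementations typically also prepend a class token):

```python
import torch
import torch.nn as nn

patch_size, embed_dim = 16, 192
num_patches = (224 // patch_size) ** 2  # 14 x 14 = 196 patches

# A strided convolution splits the image into patches and embeds them in one step
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))  # learnable positions

image = torch.rand(1, 3, 224, 224)
patches = patch_embed(image)                 # (1, embed_dim, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, embed_dim)
tokens = tokens + pos_embed                  # add positional information
print(tokens.shape)  # torch.Size([1, 196, 192])
```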

*Note:* CNNs are designed to learn spatial features, while vision transformers are designed to learn both spatial and contextual features.

## Key Differences Between Image and Video Processing
Videos are essentially a sequence of frames, and processing them requires techniques to capture and incorporate motion information. In image processing, the transformer ignores the temporal (time) relations between frames, i.e., it only focuses on a frame's spatial (space) information.

Temporal relations are the main factor in developing a strong understanding of the content of a video, so we require a separate approach for videos. One of the main differences between image and video processing is the inclusion of an additional axis, time, in the input.
There are two main approaches for extracting tokens from a video or embedding a video clip.

### Uniform Frame Sampling

It is a straightforward method of tokenizing the input video in which we uniformly sample \\(n_t\\) frames from the input video clip, embed each 2D frame independently using the same method as used in image processing, and concatenate all these tokens together.

If \\(n_h*n_w\\) non-overlapping image patches are extracted from each frame, then a total of \\(n_t*n_h*n_w\\) tokens will be forwarded through the transformer encoder. Uniform frame sampling is a tokenization scheme in which we sample frames from the video clip and perform simple ViT tokenization.
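A minimal sketch of uniform frame sampling, where the clip length, number of sampled frames, patch size, and embedding dimension are illustrative choices:

```python
import torch
import torch.nn as nn

n_t, patch_size, embed_dim = 8, 16, 192

video = torch.rand(1, 32, 3, 224, 224)  # (batch, frames, channels, height, width)

# Uniformly sample n_t frames from the clip
idx = torch.linspace(0, video.shape[1] - 1, n_t).long()
frames = video[:, idx]  # (1, n_t, 3, 224, 224)

# Embed each sampled frame exactly as in image processing, then concatenate the tokens
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
tokens = [
    patch_embed(frames[:, t]).flatten(2).transpose(1, 2)  # (1, 14*14, embed_dim) per frame
    for t in range(n_t)
]
tokens = torch.cat(tokens, dim=1)  # (1, n_t * 14 * 14, embed_dim)
print(tokens.shape)  # torch.Size([1, 1568, 192])
```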

### Tubelet Embedding

This method extends the vision transformer's image embedding to 3D and corresponds to a 3D convolution. It is an alternative method in which non-overlapping, spatiotemporal "tubes" are extracted from the input volume and linearly projected.

First, we extract tubes from the video. These tubes contain patches of the frame and the temporal information as well. The tubes are then flattened to build video tokens. Intuitively, this method fuses spatiotemporal information during tokenization, in contrast to "uniform frame sampling", where temporal information from different frames is fused by the transformer.
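A minimal sketch of tubelet embedding with a strided 3D convolution; the tubelet size and embedding dimension are illustrative choices.

```python
import torch
import torch.nn as nn

tubelet = (2, 16, 16)  # (frames, height, width) covered by each tube
embed_dim = 192

# A strided 3D convolution extracts non-overlapping spatio-temporal tubes and embeds them
tubelet_embed = nn.Conv3d(3, embed_dim, kernel_size=tubelet, stride=tubelet)

video = torch.rand(1, 3, 8, 224, 224)      # (batch, channels, frames, height, width)
tubes = tubelet_embed(video)               # (1, embed_dim, 4, 14, 14)
tokens = tubes.flatten(2).transpose(1, 2)  # (1, 4*14*14, embed_dim) video tokens
print(tokens.shape)  # torch.Size([1, 784, 192])
```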

## Importance of Temporal Information in Video Processing
The inclusion of temporal information in video processing is crucial for several computer vision tasks. One such task is action recognition, which aims to classify the action in a video. Temporal information is also essential for tasks like video captioning, where the goal is to generate a textual description of the content in a video.

By considering the temporal relationships between frames, vision transformers can generate more contextually relevant captions. For example, if a person is shown running in one frame and then jumping in the next, the model can generate a caption that reflects this sequence of action. Furthermore, temporal processing is important for tasks like video object detection and tracking. 

In conclusion, the presence of temporal information and the particular difficulties posed by video data, such as higher memory and storage needs, are the main processing distinctions between video and image. The choice between image and video processing depends on the specific computer vision task and the characteristics of the data.

### Introduction
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/introduction-to-video.md

# Introduction

Welcome to the Video and Video Processing unit. Maybe you have realized that in our course content so far, we have mainly focused on standard, static 2D images.
Of course, the real world of Computer Vision has a lot more to offer. Videos are definitely one of the most used mediums in our world due to applications like Social Media, broadcasts, or surveillance cameras.

Given their importance in our society and research, we also want to talk about them here in our course. In this introduction chapter, you will learn some very basic theory behind videos before going on to have a closer look at video processing.

## What is a Video?

An image is a digital, two-dimensional (2D) representation of visual data. A video is a multimedia format that sequentially displays these frames or images.

Technically speaking, the frames are separate pictures. As a result, storing and playing these frames sequentially at a conventional speed results in the creation of a video, thus giving the illusion of motion (just like a flipbook).
It is a popular and widely used medium for communicating information, entertainment, and conversation. Videos and photos are obtained via image-acquisition equipment such as video cameras, smartphones, and so on.

### Aspects of a Video

- **Resolution:**
The resolution of a video refers to the number of pixels in each frame or we can also refer to it as the size of each frame in the video. It doesn't need to be a standard size, but there are common sizes for video. Common video resolutions include HD (1280x720 pixels), Full HD (1920x1080 pixels), Ultra HD or 4K (3840x2160 pixels), and so on.
When a video is said to have a resolution of 1920x1080 pixels, it essentially means the video has a width of 1920 pixels and a height of 1080 pixels.
Higher resolution videos have more detail but also require more storage space and processing power.

- **Frame Rate:**
A video is composed of multiple separate frames, or images. In order to give the impression of motion, these frames are displayed quickly one after the other.
 The number of frames displayed per second is called the "frame rate." Common frame rates include 24, 30, and 60 frames per second (fps) or hertz (general unit for frequency). Higher frame rates result in smoother motion.

- **Bitrate:**
The quantity of data needed to describe audio and video is called bitrate. Better quality is achieved at higher bitrates, but streaming requires more storage and bandwidth.

Bitrates for videos are commonly expressed in megabits per second (Mbps) or kilobits per second (kbps).

- **Codecs:**
Codecs, short for “compressor-decompressor”, are software or hardware components that compress and decompress digital media to reduce the size of media files, making them more manageable for storage and transmission while maintaining an acceptable level of quality.
There are two main types of codecs: "lossless codecs" and "lossy codecs". Lossless codecs are designed to compress data without any loss of quality, while lossy codecs compress by removing some of the data, resulting in a loss of quality.

In summary, a video is a dynamic multimedia format that combines a series of individual frames, audio, and often additional metadata. It is used in a wide range of applications and can be tailored for different purposes, whether for entertainment, education, communication, or analysis.

## What is Video Processing?

In the research field of Computer Vision (CV) and Artificial Intelligence (AI), video processing involves automatically analyzing video data to understand and interpret both temporal and spatial features. Video data is simply a sequence of time-varying images, where the information is digitized both spatially and temporally. This allows us to perform detailed analysis and manipulation of the content within each frame of the video.

Video processing has become increasingly important in today's technology-driven world, thanks to the rapid advancements in Deep Learning (DL) and AI. Traditionally, DL research has focused on images, speech, and text, but video data offers a unique and valuable opportunity for research due to its extensive size and complexity. With millions of videos uploaded daily on platforms like YouTube, video data has become a rich resource, driving AI research and enabling groundbreaking applications.

### Applications of Video Processing

- **Surveillance Systems:**
Video processing plays a critical role in public safety, crime prevention, and traffic monitoring. It enables the automated detection of suspicious activities, helps identify individuals, and enhances the efficiency of surveillance systems.  
  
- **Autonomous Driving:**
In the realm of autonomous driving, video processing is essential for navigation, obstacle detection, and decision-making processes. It allows self-driving cars to understand their surroundings, recognize road signs, and react to changing environments, ensuring safe and efficient transportation. 

- **Healthcare:**
Video processing has significant applications in healthcare, including medical diagnostics, surgery, and patient monitoring. It helps analyze medical images, provides real-time feedback during surgical procedures, and continuously monitors patients to detect any abnormalities or emergencies.  

### Challenges in Video Processing

- **Computational Demands:**
Real-time video analysis requires substantial processing power, which poses a significant challenge in developing and deploying efficient video processing systems. High-performance computing resources are essential to meet these demands.

- **Storage Requirements:**
High-resolution videos generate large volumes of data, leading to storage challenges. Efficient data compression and management techniques are necessary to handle the vast amounts of video data.

- **Privacy and Ethical Concerns:**
Video processing, especially in surveillance and healthcare, involves handling sensitive information. Ensuring privacy and addressing ethical concerns related to the misuse of video data are crucial considerations that must be carefully managed.

## Conclusion

Video processing is a dynamic and vital area within AI and CV, offering numerous applications and presenting unique challenges. Its importance in modern technology continues to grow, fueled by advancements in deep learning and the increasing availability of video data. In the following sections, we will dive deeper into deep learning for video processing. You'll explore state-of-the-art models including 3D CNNs and Transformers.  

Additionally, we'll cover various tasks such as object tracking, action recognition, video stabilization, captioning, summarization, and background subtraction. These topics will provide you with a comprehensive understanding of how deep learning models are applied to different video processing challenges and applications.

Let's go! 🤓

### CNN Based Video Models
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/cnn-based-video-model.md

# CNN Based Video Models

### General Trend:

The success of Deep Learning, particularly CNNs trained on massive datasets like ImageNet, revolutionized image recognition. This trend continues in video processing. However, video data introduces another dimension compared to static images: time. This seemingly simple change introduced a new set of challenges that CNNs trained on static images were not built to handle.

# Previous SOTA Models in Video Processing

## Two-Stream Network (2014)

This paper extended deep Convolutional Networks (ConvNets) to perform action recognition on video data.

The proposed architecture is called Two-Stream Network. It uses two separate pathways within a neural network:

- **Spatial Stream:** A standard 2D CNN processes individual frames to capture appearance information.
- **Temporal Stream:** A 2D CNN that processes stacked optical flow fields computed between consecutive frames to capture motion information.
- **Fusion:** The outputs from both streams are then combined to leverage both appearance and motion cues for tasks like action recognition.
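As a rough illustration of the idea (a toy sketch, not the original architecture or training setup), the two streams and their late fusion could be wired up like this:

```python
import torch
import torch.nn as nn

def make_stream(in_channels: int, num_classes: int) -> nn.Module:
    """A tiny 2D CNN standing in for the spatial or temporal stream."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

num_classes = 10
spatial_stream = make_stream(in_channels=3, num_classes=num_classes)        # single RGB frame
temporal_stream = make_stream(in_channels=2 * 10, num_classes=num_classes)  # 10 stacked flow fields (x, y)

rgb = torch.randn(4, 3, 224, 224)    # batch of RGB frames
flow = torch.randn(4, 20, 224, 224)  # batch of stacked optical-flow maps

# Late fusion: average the per-stream class scores.
scores = (spatial_stream(rgb).softmax(-1) + temporal_stream(flow).softmax(-1)) / 2
print(scores.shape)  # torch.Size([4, 10])
```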

## 3D ResNets (2017)

Standard 3D CNNs extend the 2D concept to capture spatial and temporal information simultaneously using 3D kernels (2D spatial dimensions + a temporal dimension). A drawback of this approach is that the large number of parameters makes training more computationally intensive, and hence slower, than the 2D version. As a result, 3D ConvNets typically had fewer layers than the deeper 2D CNN architectures.

In this paper, the authors applied the ResNet architecture to the 3D CNNs. This approach introduces deeper models for 3D CNNs and achieves higher accuracy.

Experiments showed that the 3D ResNets (especially deeper ones like the ResNet-34) outperform models like the [C3D](https://arxiv.org/abs/1412.0767), particularly on larger datasets. Pretrained models like Sports-1M C3D can help mitigate overfitting on smaller datasets. Overall, 3D ResNets effectively leverage deeper architectures to capture complex spatiotemporal patterns in the video data.

| Method | Val Top-1 | Val Top-5 | Val Average | Test Top-1 | Test Top-5 | Test Average |
| --- | --- | --- | --- | --- | --- | --- |
| 3D ResNet-34 | 58.0 | 81.3 | **69.7** | - | - | **68.9** |
| C3D* | 55.6 | 79.1 | 67.4 | 56.1 | 79.5 | 67.8 |
| C3D w/ BN | 56.1 | 79.5 | 67.8 | - | - | - |
| RGB-I3D w/o ImageNet | - | - | - | 68.4 | 88.0 | **78.2** |
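If you want to experiment with a pretrained 3D ResNet, `torchvision` ships small video models such as `r3d_18` trained on Kinetics-400. A minimal sketch, assuming a recent torchvision version:

```python
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

# Load an 18-layer 3D ResNet pretrained on Kinetics-400.
weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()

# Dummy clip: (batch, channels, frames, height, width).
clip = torch.randn(1, 3, 16, 112, 112)

with torch.no_grad():
    logits = model(clip)

top_class = logits.argmax(-1).item()
print(weights.meta["categories"][top_class])  # predicted Kinetics-400 action label
```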

## (2+1)D ResNets (2017)

(2+1)D ResNets are inspired by the 3D ResNets. However, a key difference lies in how the layers are structured. This architecture introduces a combination of 2D convolution and 1D convolution:

- The 2D convolution captures the spatial features within a frame.
- The 1D convolution captures the motion information across the consecutive frames.

This model can learn spatiotemporal features directly from video data, potentially leading to better performance in video analysis tasks like action recognition.

- Benefits:
    - The addition of nonlinear rectification (ReLU) between two operations doubles the number of non-linearities compared to a network using full 3D convolution for the same number of parameters, thus rendering the model capable of representing more complex functions.
    - Decomposition facilitates optimization, yielding lower training and testing losses in practice.
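To make the decomposition concrete, here is a minimal sketch of a (2+1)D block: a spatial convolution (1 x k x k), a ReLU providing the extra non-linearity mentioned above, and a temporal convolution (t x 1 x 1). The channel sizes are illustrative, not the paper's exact choices.

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    """Factorizes a 3D convolution into a spatial (1xkxk) and a temporal (tx1x1) convolution."""

    def __init__(self, in_ch, out_ch, mid_ch, k=3, t=3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k), padding=(0, k // 2, k // 2))
        self.relu = nn.ReLU(inplace=True)  # the extra non-linearity between the two convolutions
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(t, 1, 1), padding=(t // 2, 0, 0))

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.temporal(self.relu(self.spatial(x)))

block = Conv2Plus1D(in_ch=3, out_ch=64, mid_ch=45)
out = block(torch.randn(2, 3, 8, 112, 112))
print(out.shape)  # torch.Size([2, 64, 8, 112, 112])
```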

| Method | Clip@1 Accuracy | Video@1 Accuracy | Video@5 Accuracy |
| --- | --- | --- | --- |
| DeepVideo | 41.9 | 60.9 | 80.2 |
| C3D | 46.1 | 61.1 | 85.2 |
| 2D ResNet-152 | 46.5 | 64.6 | 86.4 |
| Conv pooling | - | 71.7 | 90.4 |
| P3D | 47.9 | 66.4 | 87.4 |
| R3D-RGB-8frame | 53.8 | - | - |
| R(2+1)D-RGB-8frame | 56.1 | 72.0 | 91.2 |
| R(2+1)D-Flow-8frame | 44.5 | 65.5 | 87.2 |
| R(2+1)D-Two-Stream-8frame | - | 72.2 | 91.4 |
| R(2+1)D-RGB-32frame | **57.0** | **73.0** | **91.5** |
| R(2+1)D-Flow-32frame | 46.4 | 68.4 | 88.7 |
| R(2+1)D-Two-Stream-32frame | - | **73.3** | **91.9** |

# Current Research

Currently, researchers are exploring deeper 3D CNN architectures. Another promising approach is combining 3D CNNs with other techniques like attention mechanisms. Alongside that, there is a push for developing larger video datasets like [Kinetics](https://github.com/google-deepmind/kinetics-i3d).
The Kinetics dataset is a large-scale high-quality video dataset commonly used for human action recognition research. It contains hundreds of thousands of video clips that cover a wide range of human activities.


### Self-Supervised Learning: **MoCo (Momentum Contrast)**

**Overview**

[MoCo](https://arxiv.org/abs/1911.05722) is a prominent model in the Self-Supervised Learning domain, using a contrastive learning approach to extract features from unlabeled video clips. By utilizing a momentum-based queue, it effectively learns from large-scale video datasets, making it ideal for tasks such as action recognition and event detection.

**Key Features**

- **Momentum Encoder**: Uses a momentum-updated encoder to maintain consistency in the representation space, enhancing training stability.
- **Dynamic Dictionary**: Employs a queue-based dictionary that provides a large and consistent set of negative samples for contrastive learning.
- **Contrastive Loss Function**: Leverages contrastive loss to learn invariant features by comparing positive and negative pairs.
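As a rough illustration of the contrastive objective (leaving out MoCo's momentum update and queue maintenance), the snippet below computes an InfoNCE-style loss in which each query must match its positive key against a bank of negatives:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss: q and k_pos are (N, D) L2-normalized embeddings,
    queue is a (D, K) bank of negative keys (the dynamic dictionary)."""
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(-1)  # (N, 1) similarity to the positive
    l_neg = torch.einsum("nd,dk->nk", q, queue)               # (N, K) similarities to negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)         # the positive is always index 0
    return F.cross_entropy(logits, labels)

q = F.normalize(torch.randn(8, 128), dim=1)          # query clip embeddings
k = F.normalize(torch.randn(8, 128), dim=1)          # momentum-encoder embeddings of the same clips
queue = F.normalize(torch.randn(128, 4096), dim=0)   # queue of negative keys
print(info_nce_loss(q, k, queue))
```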

### Efficient Video Models: **X3D (Expanded 3D Networks)**

**Overview**

[X3D](https://arxiv.org/abs/2004.04730) is a lightweight 3D ConvNet model designed for video recognition tasks. It builds on the concept of 3D CNNs but optimizes for fewer parameters and lower computational cost while maintaining high performance. This makes it suitable for real-time video analysis and deployment on mobile or edge devices.

**Key Features**

- **Efficiency**: Achieves high accuracy with significantly fewer parameters and reduced computational cost.
- **Progressive Expansion**: Utilizes a systematic approach to expand network dimensions (e.g., depth, width) for optimal performance.
- **Deployment-Friendly**: Designed for easy deployment on devices with limited computational resources.

### Real-time Video Processing: **ST-GCN (Spatial-Temporal Graph Convolutional Networks)**

**Overview**

[ST-GCN](https://arxiv.org/abs/1801.07455) is a model tailored for real-time action recognition, particularly in analyzing human movements in video sequences. It models spatio-temporal data using a graph structure, effectively capturing human joint positions and movements. This model is widely used in applications like surveillance and sports analysis for real-time action detection.

**Key Features**

- **Graph-Based Modeling**: Represents human skeletal data as graphs, allowing for natural modeling of joint connections.
- **Spatio-Temporal Convolutions**: Integrates spatial and temporal graph convolutions to capture dynamic movement patterns.
- **Real-Time Performance**: Optimized for fast computation, making it suitable for real-time applications.

These cutting-edge models are playing a crucial role in advancing video processing, excelling in areas such as video classification, action recognition, and real-time processing.

# Conclusion

The evolution of video analysis models has been fascinating to witness. These models were heavily influenced by earlier SOTA models: for example, the Two-Stream Network was motivated by image ConvNets, and (2+1)D ResNets were inspired by 3D ResNets. As research progresses, one can expect even more advanced architectures and techniques to emerge.

### Multimodal Based Video Models
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/multimodal-based-video-models.md

# Multimodal Based Video Models

As discussed in previous chapters, a video can be simply defined as a sequence of images. However, unlike simple images, videos contain various modalities such as sound, text, and movement. From this perspective, to properly understand a video, we must consider multiple modalities at the same time. In this chapter, we first briefly explain what modalities can exist in a video. Then, we introduce architectures that can learn by aligning videos with different modalities.

## What Modalities Are Present in Video?

Videos encompass a variety of modalities beyond just sequences of images. Understanding these different modalities is crucial for comprehensive video analysis and processing. The primary modalities present in videos include:

1. Visual Modality (Frames/Images): The most common modality, consisting of a sequence of images that provides the visual information for the video.
2. Audio Modality (Sound): Includes dialogue, background music, and environmental sounds that can convey contextual information about the video.
3. Text Modality (Captions/Subtitles): Appears as subtitles, captions, or on-screen text, offering explicit information related to the video’s context.
4. Motion Modality (Movement Dynamics): Captures temporal changes between video frames, reflecting movement and transitions.
5. Depth Modality: Represents the 3D spatial information of the video.
6. Sensor Modality: In some applications, videos may include modalities like temperature or biometric data.

Beyond the modalities mentioned above, videos can incorporate even more diverse types of modalities. Be sure to consider which modalities are necessary for your specific work or project. In the next section, we will explore video architectures that can align and represent these modalities jointly.

## Video and Text

### VideoBERT

**Overview**

[VideoBERT](https://arxiv.org/abs/1904.01766) is an attempt to apply the BERT architecture directly to video data. Just like BERT in language modeling, the goal is to learn good visual-linguistic representations without any supervision. For the text modality, VideoBERT uses ASR (Automatic Speech Recognition) to convert audio into text and then obtains BERT token embeddings. For the video, it uses S3D to get token embeddings for each frame.

**Key Features**

1. **Linguistic-visual alignment**: Classifies whether a given text and video frames are aligned or not.
2. **Masked Language Modeling**: Predicts masked tokens in the text (just like in BERT).
3. **Masked Frame Modeling**: Predicts the masked video frames (like MLM predicts masked tokens in text).

**Why It Matters**

VideoBERT was one of the first models to effectively integrate video-language understanding by learning joint representations.
Unlike previous methods, VideoBERT does not use a detection model for image-text labeling. Instead, it uses a *clustering algorithm* to enable Masked Frame modeling, allowing the model to predict masked frames without needing explicit labeled data.

### MERLOT

**Overview**

[MERLOT](https://arxiv.org/abs/2106.02636) is designed to improve multimodal reasoning by learning from large-scale video-text datasets. It focuses on understanding interactions between visual and textual information using no labeled data. By leveraging the large-scale unlabeled dataset **YT-Temporal-180M**, **MERLOT** demonstrates strong performance in visual commonsense reasoning without relying on heavy visual supervision.

**Key Features**

1. Temporal Reordering Task (from [HERO](https://aclanthology.org/2020.emnlp-main.161.pdf))
2. Frame-Caption Matching Task (from [CBT](https://arxiv.org/pdf/1906.05743), [HAMMER](https://aclanthology.org/2020.emnlp-main.161.pdf))
3. Masked Language Modeling

**Why It Matters**

While the model architecture and training method are not entirely new, MERLOT achieves performance improvements by training on **YT-Temporal-180M**, a large-scale visual-text dataset. This extensive dataset enables the model to better understand temporal dynamics and multimodal interactions, leading to enhanced reasoning and prediction capabilities in video-language tasks.

Note: If you're looking to understand the detailed training process of MERLOT, make sure to refer to the MERLOT paper as well as earlier works like [HERO](https://aclanthology.org/2020.emnlp-main.161.pdf), [CBT](https://arxiv.org/pdf/1906.05743) and [HAMMER](https://aclanthology.org/2020.emnlp-main.161.pdf).

## Video and Audio, Text

### VATT(Visual-Audio-Text Transformer)

**Overview**

[VATT](https://arxiv.org/abs/2104.11178) is a model designed for self-supervised learning from raw video, audio, and text. Different tokenization and positional encoding methods were applied to each modality, and VATT used the Transformer Encoder to effectively integrate the representations from the raw multimodal data. As a result, it achieved strong performance in various downstream tasks such as action recognition and text-to-video retrieval.

**Key Features**

1. Modality-Specific & Modality-Agnostic: The **modality-specific** version uses separate Transformer encoders for each modality, while the modality-agnostic version integrates all modalities with a single Transformer encoder. While modality-specific demonstrated better performance, the **modality-agnostic** still showed strong performance in downstream tasks with fewer parameters.
2. Droptoken: Due to the redundancies in video (with audio and text data), sampling only a subset of tokens allows for more efficient training.
3. Multimodal Contrastive Learning: Noise Contrastive Estimation (NCE) was used for video-audio pairs, while Multiple Instance Learning NCE (MIL-NCE) was applied to video-text pairs.
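A minimal sketch of the Droptoken idea, as a simplification rather than the paper's exact implementation: randomly keep only a fraction of the input tokens before feeding them to the Transformer.

```python
import torch

def drop_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """tokens: (batch, num_tokens, dim). Randomly keeps `keep_ratio` of the tokens per sample."""
    batch, num_tokens, _ = tokens.shape
    num_keep = max(1, int(num_tokens * keep_ratio))
    # Random permutation of token indices per sample; keep the first `num_keep`.
    keep_idx = torch.rand(batch, num_tokens, device=tokens.device).argsort(dim=1)[:, :num_keep]
    return torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))

video_tokens = torch.randn(2, 1024, 768)     # e.g. patch tokens extracted from a video clip
print(drop_tokens(video_tokens, 0.5).shape)  # torch.Size([2, 512, 768])
```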

**Why It Matters**

Previous models using transformers for video multimodal tasks tended to rely heavily on visual data and required extensive training time and computational complexity. In contrast, VATT utilizes **Droptoken** and **weight sharing** to learn powerful multimodal representations from raw visual, audio, and text data with relatively lower computational complexity.

### Video-LLaMA

**Overview**

[Video-LLaMA](https://arxiv.org/abs/2306.02858) is a multimodal framework designed to extend Large Language Models (LLMs) to understand both visual and auditory content in videos. It integrates video, audio and text, allowing the model to process and generate meaningful responses grounded in audiovisual information. Video-LLaMA addresses two key challenges: capturing temporal changes in visual scenes and integrating audio-visual signals into a unified system.

**Key Features**

Video-LLaMA has two branches:

1. Vision-Language branch for processing video frames 
2. Audio-Language branch for handling audio signals. 

These branches are trained separately, undergoing both pre-training and fine-tuning phases. In the pre-training phase, the model learns to integrate different modalities, while in the fine-tuning phase, it focuses on improving its ability to follow instructions accurately.

In the case of the vision-language branch, there is an abundance of visual-text data available. However, for the audio-language branch, there is a lack of sufficient audio-text data. To address this, the model utilizes **ImageBind**, allowing the audio-language branch to be trained using visual-text data instead.

**Why It Matters**

Previous models struggled to handle both visual and auditory content together. Video-LLaMA addresses this by integrating these modalities in a single framework, capturing temporal changes in video and aligning audio-visual signals. It overcomes the limitations of earlier research by using cross-modal pre-training and instruction fine-tuning, achieving strong performance in multimodal tasks like video-based conversations without relying on separate models.

## Video and Multiple Modalities

### ImageBind

**Overview**

ImageBind utilizes paired data between images and other modalities to integrate diverse modality representations, centering around image data.

**Key Features**

ImageBind unifies many kinds of modalities by utilizing pairs of images and other modalities. By leveraging *InfoNCE* as the loss function, the model aligns representations between the various inputs. Even in cases where paired data between non-image modalities are absent, ImageBind can effectively perform cross-modal retrieval and zero-shot tasks.
Additionally, the training process of ImageBind is relatively simple compared to other models and can be implemented in various ways.

**Why It Matters**

ImageBind's key contribution is its ability to integrate various modalities without the need for specific modality-paired datasets. Using images as a reference, it aligns and combines up to six different modalities — such as audio, text, depth, and more — into a unified representation space. The significance lies in its capacity to achieve this alignment across multiple modalities simultaneously, without requiring direct pairing for each combination, making it highly efficient for multimodal learning.

## Conclusion

We have briefly examined the different modalities present in videos and then explored models that integrate visual information with various other modalities. 
As time goes on, there is a growing body of research focused on integrating a wide range of modalities all at once. 

I'm excited to see what future models will emerge, integrating even more diverse modalities within the video content. The potential for advancing multimodal representation learning through videos feels limitless!

### Introduction
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/rnn-based-video-models.md

# Introduction

## Videos as Sequence Data

Videos are made up of a series of images called frames that are played one after another to create motion. Each frame captures spatial information — the objects and scenes in the image. When these frames are shown in sequence, they also provide temporal information — how things change and move over time. 
Because of this combination of space and time, videos contain more complex information than single images. To analyze videos effectively, we need models that can understand both the spatial and temporal aspects.

## The Role and Need for RNNs in Video Processing

Convolutional Neural Networks (CNNs) are excellent at analyzing spatial features in images.
However, they aren't designed to handle sequences where temporal relationships matter. This is where Recurrent Neural Networks (RNNs) come in. 
RNNs are specialized for processing sequential data because they have a "memory" that captures information from previous steps. This makes them well-suited for understanding how video frames relate to each other over time.

## Understanding Spatio-Temporal Modeling

In video analysis, it's important to consider both spatial (space) and temporal (time) features together—this is called spatio-temporal modeling. Spatial modeling looks at what's in each frame, like objects or people, while temporal modeling looks at how these things change from frame to frame. 
By combining these two, we can understand the full context of a video. Techniques like combining CNNs and RNNs or using special types of convolutions that capture both space and time are ways researchers achieve this.

# RNN-Based Video Modeling Architectures

## Long-term Recurrent Convolutional Networks (LRCN)

**Overview**
Long-term Recurrent Convolutional Networks (LRCN) are models introduced by researchers Donahue et al. in 2015. 
They combine CNNs and Long Short-Term Memory networks (LSTMs), a type of RNN, to learn from both the spatial and temporal features in videos. 
The CNN processes each frame to extract spatial features, and the LSTM takes these features in sequence to learn how they change over time.

**Key Features**
- **Combining CNN and LSTM:** Spatial features from each frame are fed into the LSTM to model the temporal relationships.
- **Versatile Applications:** LRCNs have been used successfully in tasks like action recognition (identifying actions in videos) and video captioning (generating descriptions of videos).

**Why It Matters**
LRCN was one of the first models to effectively handle both spatial and temporal aspects of video data. It paved the way for future research by showing that combining CNNs and RNNs can be powerful for video analysis.
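A toy LRCN-style model might look like the sketch below (a simplification of the original architecture): a small CNN encodes each frame, and an LSTM aggregates the per-frame features over time before classification.

```python
import torch
import torch.nn as nn

class TinyLRCN(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        # Per-frame CNN encoder (spatial features).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # LSTM over the sequence of frame features (temporal modeling).
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video):  # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)  # (batch, frames, feat_dim)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # classify from the last hidden state

model = TinyLRCN()
print(model(torch.randn(2, 16, 3, 112, 112)).shape)  # torch.Size([2, 10])
```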

## Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting (ConvLSTM)

**Overview**

The Convolutional LSTM Network (ConvLSTM) was proposed by Shi et al. in 2015. It modifies the traditional LSTM by incorporating convolutional operations within the LSTM's structure. This means that instead of processing one-dimensional sequences, ConvLSTM can handle two-dimensional spatial data (like images) over time.

**Key Features**
- **Spatial Structure Preservation:** By using convolutions, ConvLSTM maintains the spatial layout of the data while processing temporal sequences.
- **Effective for Spatio-Temporal Prediction:** It's particularly useful for tasks that require predicting how spatial data changes over time, such as weather forecasting or video frame prediction.

**Why It Matters**
ConvLSTM introduced a new way to process spatio-temporal data by integrating convolution directly into the LSTM architecture. This has been influential in fields that need to predict future states based on spatial and temporal patterns.
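The core trick is to replace the matrix multiplications inside the LSTM gates with convolutions so the hidden state keeps its spatial layout. A minimal cell (simplified, without peephole connections) could look like this:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        # One convolution produces all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # hidden and cell states: (batch, hidden_ch, H, W)
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

cell = ConvLSTMCell(in_ch=1, hidden_ch=16)
h = c = torch.zeros(2, 16, 64, 64)
for frame in torch.randn(10, 2, 1, 64, 64):  # 10 time steps of a spatial input
    h, c = cell(frame, (h, c))
print(h.shape)  # torch.Size([2, 16, 64, 64]) -- the hidden state keeps its spatial layout
```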

## Unsupervised Learning of Video Representations using LSTMs

**Overview**
In 2015, Srivastava et al. introduced a method for learning video representations without labeled data, known as unsupervised learning. This paper utilizes a multi-layer LSTM model to learn video representations. The model consists of two main components: an Encoder LSTM and a Decoder LSTM. The Encoder maps video sequences of arbitrary length (in the time dimension) to a fixed-size representation. The Decoder then uses this representation to either reconstruct the input video sequence or predict the subsequent video sequence.

**Key Features**
- **Unsupervised Learning:** The model doesn't require labeled data, making it easier to work with large amounts of video.

**Why It Matters**
This approach showed that it's possible to learn useful video representations without the need for extensive labeling, which is time-consuming and expensive. It opened up new possibilities for video analysis and generation using unsupervised methods.

## Describing Videos by Exploiting Temporal Structure

**Overview**
In 2015, Yao et al. introduced attention mechanisms in video models, specifically for video captioning tasks. This approach leverages attention to selectively focus on important temporal and spatial features within the video, allowing the model to generate more accurate and contextually relevant descriptions.

**Key Features**
- **Temporal and Spatial Attention:** The attention mechanism dynamically identifies the most relevant frames and regions in a video, ensuring that both local actions (e.g., specific movements) and global context (e.g., overall activity) are considered.
- **Enhanced Representation:** By focusing on significant features, the model combines local and global temporal structures, leading to improved video representations and more precise caption generation.

**Why It Matters**
Incorporating attention mechanisms into video models has transformed how temporal data is processed. This method enhances the model’s capacity to handle the complex interactions in video sequences, making it an essential component in modern neural network architectures for video analysis and generation.

# Limitations of RNN-Based Models
- **Challenges with Long-Term Dependencies**
    
    RNNs, including LSTMs, can struggle to maintain information over long sequences. This means they might "forget" important details from earlier frames when processing long videos. This limitation can affect the model's ability to understand the full context of a video.

- **Computational Complexity and Processing Time**
    
    Because RNNs process data sequentially—one step at a time—they can be slow, especially with long sequences like videos. This sequential processing makes it difficult to take advantage of parallel computing resources, leading to longer training and inference times.

- **Emergence of Alternative Models**
    
    Newer models like Transformers have been developed to address some of the limitations of RNNs. Transformers use attention mechanisms to handle sequences and can process data in parallel, making them faster and more effective at capturing long-term dependencies.

# Conclusion

RNN-based models have significantly advanced the field of video analysis by providing tools to handle temporal sequences effectively. Models like LRCN, ConvLSTM, and those incorporating attention mechanisms have demonstrated the potential of combining spatial and temporal processing. However, limitations such as difficulty with long sequences, computational inefficiency, and high data requirements highlight the need for continued innovation.

Future research is likely to focus on overcoming these challenges, possibly by adopting newer architectures like Transformers, improving training efficiency, and enhancing model interpretability. These efforts aim to create models that are both powerful and practical for real-world video applications.

### References
1. [Long-term Recurrent Convolutional Networks paper](https://arxiv.org/pdf/1411.4389)
2. [Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting paper](https://proceedings.neurips.cc/paper_files/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf)
3. [Unsupervised Learning of Video Representations using LSTMs paper](https://arxiv.org/pdf/1502.04681)
4. [Describing Videos by Exploiting Temporal Structure paper](https://arxiv.org/pdf/1502.08029)

### Transformers in Video Processing (Part 1)
https://huggingface.co/learn/computer-vision-course/unit7/video-processing/transformers-based-models.md

# Transformers in Video Processing (Part 1)

## Introduction

In this chapter, we will cover how the Transformer architecture is utilized in video processing. In particular, we will introduce the Vision Transformer (ViT), a successful application of Transformers in the vision domain. We will then explain the additional considerations made for the Video Vision Transformer (ViViT) model used on video, as opposed to the Vision Transformer used on images. Finally, we will briefly discuss the TimeSformer model.

**Materials that would be helpful to review before reading this document**:

- [computer vision course / unit3 / vision transformers for image classification](https://huggingface.co/learn/computer-vision-course/unit3/vision-transformers/vision-transformers-for-image-classification)
- [transformers / model documentation: ViT](https://huggingface.co/docs/transformers/main/en/model_doc/vit)

## Recap about ViT

First, let's take a quick look at Vision Transformers: [An image is worth 16x16 words: Transformers for image recognition at scale](https://arxiv.org/abs/2010.11929), the most basic of the successful applications of Transformers to vision.

The paper summarizes its approach as follows:

*Inspired by the Transformer scaling successes in NLP, we experiment with applying a standard Transformer directly to images, with the fewest possible modifications. To do so, we split an image into patches and provide the sequence of linear embeddings of these patches as an input to a Transformer. Image patches are treated the same way as tokens (words) in an NLP application. We train the model on image classification in supervised fashion.*

ViT architecture. Taken from the original paper.

The key techniques proposed in the ViT paper are as follows:

- Images are divided into small patches, and each patch is used as input to a Transformer model, replacing CNNs with a Transformer-based approach.  

- Each image patch is linearly mapped, and positional embeddings are added to allow the Transformer to recognize the order of the patches.  

- The model is pre-trained on large-scale datasets and fine-tuned for downstream vision tasks, achieving high performance.
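The patch embedding step can be written in a few lines; the sketch below uses the common trick of a `Conv2d` whose kernel and stride equal the patch size to patchify and linearly project in one operation.

```python
import torch
import torch.nn as nn

image_size, patch_size, dim = 224, 16, 768
num_patches = (image_size // patch_size) ** 2  # 14 * 14 = 196 patches

# Patchify + linear projection in one operation.
to_patch_embeddings = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
pos_embedding = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

images = torch.randn(2, 3, image_size, image_size)
patches = to_patch_embeddings(images).flatten(2).transpose(1, 2)  # (2, 196, 768)
tokens = torch.cat([cls_token.expand(2, -1, -1), patches], dim=1) + pos_embedding
print(tokens.shape)  # torch.Size([2, 197, 768]) -- ready for the Transformer encoder
```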

### Performance & Limitation

Comparison with SOTA models. Taken from the original paper.

Although ViT outperformed other state-of-the-art models, training it required a large amount of computational power: pre-training the model took roughly 2,500 TPU-v3 core-days. Assuming a TPU-v3 core costs approximately $2 per hour (you can find more detailed pricing information [here](https://cloud.google.com/tpu/pricing)), it would cost $2 x 24 hours x 2,500 core-days = $120,000 to train the model once.

## Video Vision Transformer (ViViT)

As mentioned earlier, an important issue for ViViT, which extends ViT from image processing to the video classification task, is how to train the model more quickly and efficiently. In addition, unlike images, video contains not only spatial information but also temporal information, and how to handle this temporal information is a key design consideration.

The abstract from the [paper](https://arxiv.org/abs/2103.15691) is as follows:

*We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatiotemporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks. To facilitate further research, we release code at https://github.com/google-research/scenic.*

ViViT architecture. Taken from the original paper.

### Embedding Video Clips

#### What is embedding?
Before diving into specific techniques, it's important to understand what embeddings are. In machine learning, embeddings are dense vector representations that capture meaningful features of input data in a format that neural networks can process. For videos, we need to convert the raw pixel data into these mathematical representations while preserving both spatial information (what's in each frame) and temporal information (how things change over time).

#### Why Video Embeddings Matter
Processing videos is computationally intensive due to their size and complexity. Good embedding techniques help by:

- Reducing dimensionality while preserving important features
- Capturing temporal relationships between frames
- Making it feasible for neural networks to process video data efficiently

#### Why Focus on Uniform Frame Sampling and Tubelet Embeddings?
These two techniques represent fundamental approaches in video processing that have become building blocks for more advanced methods:

1. They balance computational efficiency with information preservation, offering a range of options for different video processing tasks.
2. They serve as baseline methods, providing a comparison point against which newer techniques can demonstrate improvement.
3. Learning these approaches establishes a strong foundation in spatio-temporal processing, which is crucial for grasping more advanced video embedding methods.

#### Uniform Frame Sampling

Uniform Frame Sampling. Taken from the original paper.

In this method, the model uniformly samples a fixed number of frames across the time dimension, e.g., one frame out of every two, and tokenizes each sampled frame independently, just as ViT does for images.
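For example, picking a fixed number of evenly spaced frames from a clip can be done as follows (a simple sketch; each sampled frame is then tokenized as in ViT):

```python
import torch

def uniform_frame_sampling(video: torch.Tensor, num_frames: int) -> torch.Tensor:
    """video: (total_frames, channels, height, width). Returns evenly spaced frames."""
    total = video.shape[0]
    indices = torch.linspace(0, total - 1, num_frames).long()
    return video[indices]

clip = torch.randn(64, 3, 224, 224)        # 64 raw frames
sampled = uniform_frame_sampling(clip, 8)  # keep 8 evenly spaced frames
print(sampled.shape)  # torch.Size([8, 3, 224, 224])
```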

#### Tubelet Embedding

Tubelet embedding. Taken from the original paper.

An alternative method extracts spatio-temporal "tubes" from the input volume and linearly projects them. This fuses spatio-temporal information during tokenization.
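In practice, a tubelet embedding can be implemented as a 3D convolution whose kernel and stride equal the tubelet size. The sketch below assumes 2-frame x 16x16-pixel tubelets, so each non-overlapping tube of pixels becomes one token:

```python
import torch
import torch.nn as nn

embed_dim, t, h, w = 768, 2, 16, 16

# Each non-overlapping (2 x 16 x 16) tube of pixels becomes one token.
tubelet_embed = nn.Conv3d(3, embed_dim, kernel_size=(t, h, w), stride=(t, h, w))

video = torch.randn(1, 3, 32, 224, 224)                    # (batch, channels, frames, H, W)
tokens = tubelet_embed(video).flatten(2).transpose(1, 2)   # (1, 16 * 14 * 14, 768)
print(tokens.shape)  # torch.Size([1, 3136, 768])
```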

The previously introduced methods, such as Uniform Frame Sampling and Tubelet Embedding, are effective but relatively simple approaches. The upcoming methods to be introduced are more advanced.

### Transformer Models for Video in ViViT

The original ViViT paper proposes multiple transformer-based architectures, which we will now explore sequentially.

#### Model 1 : Spatio-Temporal Attention

The first model naturally extends the idea of ViT to the video classification task. Each frame in the video is split into n_w (number of columns) x n_h (number of rows) image patches, resulting in a total of n_t (number of frames) x n_w x n_h patches. Each of these patches is then embedded as a "spatio-temporal token", essentially a small unit representing both spatial (image) and temporal (video sequence) information. The model forwards all spatio-temporal tokens extracted from the video through the transformer encoder. This means each patch, or token, is processed to understand not only its individual features but also its relationship with other patches across time and space. Through this process, called "contextualizing", the encoder learns how each patch relates to others by capturing patterns in position, color, and movement, thus building a rich, comprehensive understanding of the video's overall context.

**Complexity:** O(n_h^2 x n_w^2 x n_t^2)

However, using attention on all spatio-temporal tokens can lead to heavy computational costs. To make this process more efficient, methods like Uniform Frame Sampling and Tubelet Embedding, as explained earlier, are used to help reduce these costs.

#### Model 2 : Factorised Encoder

The approach in Model 1 was somewhat inefficient, as it contextualized all patches simultaneously. To improve upon this, Model 2 applies spatial and temporal encoders sequentially.

Factorised encoder (Model 2). Taken from the original paper.

First, only spatial interactions are contextualized through a Spatial Transformer Encoder (the same as in ViT). Then, each frame is encoded into a single embedding and fed into the Temporal Transformer Encoder (a standard Transformer).

**Complexity:** O(n_h^2 x n_w^2 + n_t^2)

#### Model 3 : Factorised Self-Attention

Factorised Self-Attention (Model 3). Taken from the original paper.

In Model 3, instead of computing multi-headed self-attention across all pairs of tokens, we first compute self-attention only spatially (among all tokens extracted from the same temporal index). Next, we compute self-attention temporally (among all tokens extracted from the same spatial index). To avoid ambiguities when tokens are reshaped between the spatial and temporal attention blocks, no CLS (classification) token is used.

**Complexity:** same as Model 2

#### Model 4 : Factorised Dot-Product Attention

Factorised Dot-Product Attention (Model 4). Taken from the original paper.

In Model 4, half of the attention heads operate with keys and values from the same temporal index (attending spatially), while the other half operate with keys and values from the same spatial index (attending temporally).

**Complexity:** same as Models 2 and 3

### Experiments and Discussion

Comparison of model architectures (Top 1 accuracy). Taken from the original paper.

After comparing Models 1, 2, 3, and 4, it is evident that Model 1 achieved the best performance but required the longest training time. In contrast, Model 2 demonstrated relatively high performance with shorter training times compared to Models 3 and 4, making it the most efficient model overall.

The ViViT model fundamentally faces the issue of dataset scarcity. Like the Vision Transformer (ViT), ViViT requires an extremely large dataset to achieve good performance, but datasets of that scale are often unavailable for video. Given that the learning task is more complex, the approach is to first pre-train a ViT on a large image dataset and use it to initialize the model.
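If you want to try ViViT without training it yourself, the 🤗 Transformers library provides an implementation. A minimal sketch, assuming the `google/vivit-b-16x2-kinetics400` checkpoint (which expects 32 RGB frames) is available on the Hub:

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

# Dummy clip of 32 RGB frames; replace with frames sampled from a real video.
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])  # predicted Kinetics-400 action label
```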

## TimeSformer

TimeSformer is concurrent work with ViViT that applies Transformers to video classification. The following sections explain each type of attention.

Visualization of the five space-time self-attention schemes. Taken from the original paper.

- **Space Attention** is the same as in ViT; the blue patch is the query and contextualizes other patches within one frame.
- **Joint Space-Time Attention** is the same as ViViT Model 1; the blue patch is the query and contextualizes other patches across multiple frames.
- **Divided Space-Time Attention** is similar to ViViT Model 3; the blue patch first contextualizes temporally with the green patches at the same position, and then spatially contextualizes with other image patches at the same time index.
- **Sparse Local Global Attention**: selectively combines local and global information.
- **Axial Attention**: processes spatial and temporal dimensions separately along their axes.

### Performance Discussion

The **Divided Space-Time Attention** mechanism shows the most effective performance, providing the best balance of parameter efficiency and accuracy on both K400 and SSv2 datasets.
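Transformers also implements TimeSformer with divided space-time attention. A minimal sketch, assuming the `facebook/timesformer-base-finetuned-k400` checkpoint (which expects 8 RGB frames):

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

# Dummy clip of 8 RGB frames; replace with frames sampled from a real video.
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])  # predicted Kinetics-400 action label
```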

## Conclusion

ViViT expanded upon the ViT model to handle video data more effectively by introducing various models such as the Factorised Encoder, Factorised Self-Attention, and Factorised Dot-Product Attention, all aimed at managing the space-time dimensions efficiently. Similarly, TimeSformer evolved from the ViT architecture and utilized diverse attention mechanisms to handle space-time dimensions, much like ViViT. A key takeaway from this progression is the focus on reducing the significant computational costs associated with applying transformer architectures to video analysis. By leveraging different optimization techniques, these models improve efficiency and enable learning with fewer computational resources.

## Additional Resources

- [Video Transformers: A Survey](https://arxiv.org/abs/2201.05991)
