packages/tasks/src/tasks/depth-estimation/about.md (10 additions, 1 deletion)
@@ -1,4 +1,5 @@
-## Use Cases
+## Use Cases
+
 Depth estimation models can be used to estimate the depth of different objects present in an image.
 
 ### Estimation of Volumetric Information
@@ -8,6 +9,14 @@ Depth estimation models are widely used to study volumetric formation of objects
 
 Depth estimation models can also be used to develop a 3D representation from a 2D image.
 
+
+## Depth Estimation Subtasks
+
+There are two depth estimation subtasks.
+
+- **Absolute depth estimation**: Absolute (or metric) depth estimation aims to provide exact depth measurements from the camera. Absolute depth estimation models output depth maps with real-world distances in meters or feet.
+
+- **Relative depth estimation**: Relative depth estimation aims to predict the depth order of objects or points in a scene without providing precise measurements.
 
 ## Inference
 
 With the `transformers` library, you can use the `depth-estimation` pipeline to infer with depth estimation models. You can initialize the pipeline with a model id from the Hub. If you do not provide a model id, it will initialize with [Intel/dpt-large](https://huggingface.co/Intel/dpt-large) by default. When calling the pipeline, you just need to specify a path, an HTTP link, or an image loaded in PIL. Additionally, you can find a comprehensive list of various depth estimation models at [this link](https://huggingface.co/models?pipeline_tag=depth-estimation).
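For reference, a minimal sketch of that pipeline call; the checkpoint and output keys follow the `transformers` depth-estimation pipeline, and `image.jpg` is a placeholder for any local path, URL, or PIL image:

```python
from transformers import pipeline

# Sketch only: "image.jpg" is a placeholder input; a URL or a PIL.Image works as well.
pipe = pipeline(task="depth-estimation", model="Intel/dpt-large")
result = pipe("image.jpg")

# The pipeline returns the raw depth tensor plus a rendered depth map as a PIL image.
predicted_depth = result["predicted_depth"]  # torch.Tensor of per-pixel depth
result["depth"].save("depth_map.png")        # grayscale visualization of the depth map
```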
packages/tasks/src/tasks/feature-extraction/about.md (46 additions, 1 deletion)
@@ -1,9 +1,21 @@
 ## Use Cases
 
+### Transfer Learning
+
 Models trained on a specific dataset can learn features about the data. For instance, a model trained on an English poetry dataset learns English grammar at a very high level. This information can be transferred to a new model that is going to be trained on tweets. This process of extracting features and transferring them to another model is called transfer learning. One can pass their dataset through a feature extraction pipeline and feed the result to a classifier.
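As an illustration of that recipe, a minimal sketch using `sentence-transformers` for the feature extraction step and scikit-learn for the downstream classifier; the model id and the toy dataset are placeholders, not part of this change:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for "your dataset".
texts = ["I love this!", "This is terrible.", "Absolutely wonderful.", "Not good at all."]
labels = [1, 0, 1, 0]

# Step 1: extract fixed-size sentence features with a pretrained embedding model.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
features = encoder.encode(texts)  # shape: (num_texts, embedding_dim)

# Step 2: feed the extracted features to a lightweight downstream classifier.
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict(encoder.encode(["What a great experience."])))
```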
 
+### Retrieval and Reranking
+
+Retrieval is the process of obtaining relevant documents or information based on a user's search query. In the context of NLP, retrieval systems aim to find relevant text passages or documents from a large corpus of data that match the user's query. The goal is to return a set of results that are likely to be useful to the user. On the other hand, reranking is a technique used to improve the quality of retrieval results by reordering them based on their relevance to the query.
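A compact sketch of that two-stage retrieve-then-rerank setup with `sentence-transformers`; the model ids and the toy corpus are illustrative assumptions, not prescribed by this change:

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

query = "How do I fine-tune a transformer model?"
corpus = [
    "A guide to fine-tuning transformer models on custom datasets.",
    "Recipes for baking sourdough bread at home.",
    "Tips for deploying machine learning models to production.",
]

# Retrieval: embed the query and the corpus, then keep the closest passages.
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
scores = util.cos_sim(retriever.encode(query), retriever.encode(corpus))[0]
candidates = [corpus[int(i)] for i in scores.argsort(descending=True)[:2]]

# Reranking: a cross-encoder rescores each (query, passage) pair for a finer ordering.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(query, passage) for passage in candidates])
print([passage for _, passage in sorted(zip(rerank_scores, candidates), reverse=True)])
```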
+
+### Retrieval Augmented Generation
+
+Retrieval-augmented generation (RAG) is a technique in which user inputs to generative models are first queried against a knowledge base, and the most relevant information from the knowledge base is used to augment the prompt to reduce hallucinations during generation. Feature extraction models (primarily retrieval and reranking models) can be used in RAG to ground the model and reduce hallucinations.
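A small sketch of the augmentation step itself; the passages below are hard-coded stand-ins for retriever output, and the generator checkpoint is only an example:

```python
from transformers import pipeline

# In a real system these passages would come from a retriever (e.g. an embedding search),
# not from a hard-coded list.
retrieved_passages = [
    "LoRA fine-tunes a small number of additional low-rank parameters.",
    "Full fine-tuning updates every weight of the base model.",
]
question = "What is the difference between LoRA and full fine-tuning?"

# Augment the prompt with the retrieved context before generation.
prompt = "Context:\n" + "\n".join(retrieved_passages) + f"\n\nQuestion: {question}\nAnswer:"
generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```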
+
 ## Inference
 
+You can infer feature extraction models using the `pipeline` function of the `transformers` library.
 [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference) is a toolkit to easily serve feature extraction models using a few lines of code.
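For example, a minimal sketch of the `pipeline` call; the checkpoint and the example sentence are placeholders:

```python
from transformers import pipeline

# Any checkpoint with a compatible architecture can back the feature-extraction pipeline.
extractor = pipeline(task="feature-extraction", model="facebook/bart-base")
features = extractor("Today is a sunny day and I will get some ice cream.", return_tensors=True)

# One embedding per input token: shape (1, sequence_length, hidden_size).
print(features.shape)
```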
+
 ## Useful resources
 
-- [Documentation for feature extractor of 🤗 Transformers](https://huggingface.co/docs/transformers/main_classes/feature_extractor)
+- [Documentation for feature extraction task in 🤗 Transformers](https://huggingface.co/docs/transformers/main_classes/feature_extractor)
+- [Introduction to MTEB Benchmark](https://huggingface.co/blog/mteb)
+- [Cookbook: Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain](https://huggingface.co/learn/cookbook/rag_zephyr_langchain)
+- [sentence-transformers organization on Hugging Face Hub](https://huggingface.co/sentence-transformers)
description: "An object tracking, segmentation and inpainting application.",
70
73
id: "VIPLab/Track-Anything",
71
74
},
75
+
{
76
+
description: "Very fast object tracking application based on object detection.",
77
+
id: "merve/RT-DETR-tracking-coco",
78
+
},
72
79
],
73
80
summary:
74
81
"Object Detection models allow users to identify objects of certain defined classes. Object detection models receive an image as input and output the images with bounding boxes and labels on detected objects.",
packages/tasks/src/tasks/zero-shot-image-classification/about.md (2 additions, 3 deletions)
@@ -68,9 +68,8 @@ The highest probability is 0.995 for the label cat and dog
 
 ## Useful Resources
 
-You can contribute useful resources about this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md).
-
-Check out [Zero-shot image classification task guide](https://huggingface.co/docs/transformers/tasks/zero_shot_image_classification).
description: "A demo to try the state-of-the-art zero-shot object detection model, OWLv2.",
53
53
id: "merve/owlv2",
54
54
},
55
+
{
56
+
description:
57
+
"A demo that combines a zero-shot object detection and mask generation model for zero-shot segmentation.",
58
+
id: "merve/OWLSAM",
59
+
},
55
60
],
56
61
summary:
57
62
"Zero-shot object detection is a computer vision task to detect objects and their classes in images, without any prior training or knowledge of the classes. Zero-shot object detection models receive an image as input, as well as a list of candidate classes, and output the bounding boxes and labels where the objects have been detected.",