From 7c8be1e999a44c71cea7b3924ba0a293a778e340 Mon Sep 17 00:00:00 2001
From: ptresson <paul.tresson@ird.fr>
Date: Thu, 5 Dec 2024 18:44:24 +0100
Subject: [PATCH] document huggingface and sklearn support

---
 docs/source/faq.md    | 31 +++++++++++++++++++++++++++++++
 docs/source/issues.md |  2 +-
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/docs/source/faq.md b/docs/source/faq.md
index 261bcbd..fbc0474 100644
--- a/docs/source/faq.md
+++ b/docs/source/faq.md
@@ -17,3 +17,34 @@ Using a model with smaller patch size will also in the end lead to a better reso
 
 We've selected some state of the art models that seem to work well on our usecases so far. If you are short in RAM, prefer using the ``ViT Tiny model``, that is almost ten times smaller than the others (but can provide a less nuanced map).
 
+## Can I use any model from HuggingFace?
+
+[HuggingFace](https://huggingface.co/) is one of the largest hubs where pre-trained deep learning models are published (at the time of writing, over 1 million models are available).
+It is an essential tool for following the state of the art in deep learning and one of the *de facto* standards for sharing models.
+HuggingFace has now partnered with the `timm` library, one of the libraries most used by the community to initialize models and load pre-trained weights.
+
+While this opens up a very wide range of models usable with the plugin, not all of them work out of the box.
+Indeed, our code is optimized for ViT-like models that output patch-level features; another type of architecture may work, but most likely will not.
+It is best to use the models provided directly by `timm`; you can find the list [here](https://huggingface.co/timm).
+
+However, you can also use other pre-trained models available on HuggingFace: [as stated in their docs](https://huggingface.co/docs/hub/timm), `timm` can recognise some of them.
+In that case, when you input the name of the model in the plugin, add the `hf-hub:` prefix before the model card name:
+
+```
+'hf-hub:nateraw/resnet50-oxford-iiit-pet'
+```
+
+If a model is unavailable, you can try updating `timm` to a more recent version.
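+
+Outside of QGIS, you can check whether `timm` is able to build a given model with a short script like the one below (the model name is only an example; downloading the weights requires an internet connection):
+
+```python
+import timm
+
+# The 'hf-hub:' prefix tells timm to fetch the model configuration
+# and weights from the HuggingFace Hub instead of its built-in registry.
+model = timm.create_model('hf-hub:nateraw/resnet50-oxford-iiit-pet', pretrained=True)
+model.eval()
+```
+
+If this script fails, the model will not work in the plugin either.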
+
+
+## Why doesn't this projection, clustering or classification algorithm work?
+
+Not all of the `sklearn` algorithms used in this plugin have been extensively tested yet.
+We select algorithms that share a common API and should, a priori, work correctly.
+However, some algorithms may expect a different data format and might be unusable out of the box.
+In that case, your feedback is more than welcome; do not hesitate to [file an issue on GitHub](https://github.com/umr-amap/iamap/issues).
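+
+The common API in question is scikit-learn's `fit`/`transform` (or `fit`/`predict`) pattern. As a rough illustration of why such algorithms are interchangeable (random data standing in for real features):
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA
+from sklearn.cluster import KMeans
+
+# Dummy features standing in for patch-level embeddings.
+X = np.random.rand(100, 64)
+
+# Any estimator exposing fit_transform can reduce dimensions...
+embedding = PCA(n_components=2).fit_transform(X)
+# ...and any estimator exposing fit_predict can cluster.
+labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
+
+print(embedding.shape, labels.shape)  # (100, 2) (100,)
+```
+
+Algorithms deviating from this pattern (e.g. expecting sparse input or a precomputed distance matrix) are the ones most likely to fail.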
+
+The algorithms that are properly tested for now are PCA, UMAP, KMeans, HDBSCAN and RandomForestClassifier.
+However, testing more of them extensively is on our roadmap.
+
+The similarity tool, however, is written in pure PyTorch and is tested automatically.
diff --git a/docs/source/issues.md b/docs/source/issues.md
index b5a6430..fa9b2fe 100644
--- a/docs/source/issues.md
+++ b/docs/source/issues.md
@@ -86,5 +86,5 @@ gdal_edit.py -stats your_file.tif
 ## Conflicting requirements for UMAP
 
 `umap` has `numba` as a dependency, which may require `numpy < 2.0` and conflict with other libraries depending on how you installed them. According to `numba` developers, this
-(should be resolved in comming numba releases)[https://github.com/numba/numba/issues/9708].
+[should be resolved in coming numba releases](https://github.com/numba/numba/issues/9708).
 In the meantime, you can use a conda environment or uninstall numpy and reinstall it at a previous version.
-- 
GitLab