There are also client libs and sample code for pretty much any language you might want to use it in. You can also run any of the models directly in a browser with WebGL, in a native mobile app, or on an edge device.
I see references to API keys, so I'm a little confused when you say that it's self-hosted. If it makes calls to your servers then it's not really self-hosted. Navigating to the pricing page on their site seems to indicate that this is definitely not something you can run in a self-contained manner, but I would love to be proven wrong.
I also really think that you should've at least put a disclaimer that you're affiliated with them.
EDIT: All the deployment options (post model training) seem to indicate that there is pricing involved.
Usually this is done in three steps: first, use a neural network to create a bounding box around the object; second, generate a vector embedding of the detected object; and third, run a similarity search over those embeddings.
The first step is accomplished by training a detection model to generate the bounding box around your object; this can usually be done by fine-tuning an already-trained detection model. For this step, the data you need is all the images of the object you have, each annotated with a bounding box. The version of the object doesn't matter here.
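To make that concrete, here's a rough sketch of the fine-tuning step using torchvision's Faster R-CNN. The `data_loader` of annotated images is assumed (its dataset class is left out), and Faster R-CNN is just one of several detectors you could start from:

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Start from a detector pretrained on COCO and swap in a new box head
    # for background + 1 custom object class.
    num_classes = 2
    detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = detector.roi_heads.box_predictor.cls_score.in_features
    detector.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # `data_loader` is assumed to yield (images, targets), where each target
    # dict has "boxes" (N x 4 tensor) and "labels" (N tensor) for one image.
    optimizer = torch.optim.SGD(detector.parameters(), lr=0.005, momentum=0.9)
    detector.train()
    for epoch in range(10):
        for images, targets in data_loader:
            loss_dict = detector(images, targets)   # dict of losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()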
The second step involves a general-purpose image classification model that's been pretrained on a large dataset (VGG, etc.) and a vector search engine/vector database. You would start by using the classification model to generate vector embeddings (https://frankzliu.com/blog/understanding-neural-network-embe...) of all the different versions of the object. The more ground-truth images you have, the better, but it doesn't require the same amount as training a classifier model. Once you have your versions of the object as embeddings, you would store them in a vector database (for example Milvus: https://github.com/milvus-io/milvus).
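A minimal sketch of that step might look like the following, assuming torchvision's VGG16 as the embedding model and pymilvus 2.x as the client for Milvus; the collection name, field names, and image paths are made up for illustration:

    import torch
    import torchvision.transforms as T
    from torchvision.models import vgg16
    from PIL import Image
    from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType

    # VGG16 with the last classification layer removed, so the output is the
    # 4096-dim penultimate activation instead of ImageNet class scores.
    embedder = vgg16(weights="DEFAULT").eval()
    embedder.classifier = embedder.classifier[:-1]

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(image):
        # Return a 4096-dim embedding (as a plain list) for a PIL image.
        with torch.no_grad():
            return embedder(preprocess(image).unsqueeze(0))[0].tolist()

    # One row per known version of the object, stored in a Milvus collection.
    connections.connect(host="localhost", port="19530")
    schema = CollectionSchema([
        FieldSchema("id", DataType.INT64, is_primary=True, auto_id=True),
        FieldSchema("version", DataType.VARCHAR, max_length=64),
        FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=4096),
    ])
    collection = Collection("object_versions", schema)
    collection.create_index("embedding", {"index_type": "IVF_FLAT",
                                          "metric_type": "L2",
                                          "params": {"nlist": 128}})

    versions, embeddings = [], []
    for version_name, path in [("v1", "v1.jpg"), ("v2", "v2.jpg")]:
        versions.append(version_name)
        embeddings.append(embed(Image.open(path).convert("RGB")))
    collection.insert([versions, embeddings])
    collection.flush()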
Now, whenever you want to detect the object in an image, you run the image through the detection model to find the object, then run the cropped-out region through the embedding model. With that embedding you can perform a search in the vector database, and the closest results will most likely be the correct version of the object.
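Continuing the sketches above (reusing the hypothetical `detector`, `embed`, and `collection` names), the query path could look roughly like this:

    import torch
    from PIL import Image

    def identify_version(image_path):
        image = Image.open(image_path).convert("RGB")

        # 1. Find the object: run the fine-tuned detector and take the
        #    highest-scoring box (torchvision returns boxes sorted by score).
        detector.eval()
        with torch.no_grad():
            pred = detector([T.ToTensor()(image)])[0]
        if len(pred["boxes"]) == 0:
            return None
        x1, y1, x2, y2 = pred["boxes"][0].tolist()

        # 2. Crop the detected region out of the image and embed the crop.
        crop_embedding = embed(image.crop((x1, y1, x2, y2)))

        # 3. Nearest-neighbour search against the stored version embeddings.
        collection.load()
        hits = collection.search(
            data=[crop_embedding],
            anns_field="embedding",
            param={"metric_type": "L2", "params": {"nprobe": 16}},
            limit=3,
            output_fields=["version"],
        )
        return [(hit.entity.get("version"), hit.distance) for hit in hits[0]]

    print(identify_version("query.jpg"))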
Use that site to capture images from your webcam, gather examples of each class of object, and see if this tool can work for you.
Liner.ai and Lobe.ai are both solid GUI-based platforms that run well on desktops. Your hardware will of course play a role in training speed, but it still runs well (albeit a bit slow) on a 2014-era HP tower I use for Windows. Heads up that Lobe is a Microsoft project and doesn't support the newer M-series Apple chips.
I've been looking for tools that make it possible to overlay graphics onto someone's face in real time. Basically Facebook's AR suite, but something I can a) use at 1080p and b) actually record from or pipe into OBS.