# SketchTune

CLIP (Contrastive Language-Image Pre-training) is a model developed by OpenAI that learns visual concepts from natural language supervision. This project fine-tunes the pre-trained CLIP model on a custom dataset of sketches, each labeled with a category or text prompt, so that the model better recognizes and understands sketch-style visual data.
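Below is a minimal sketch of what such a fine-tuning loop could look like, using the Hugging Face `transformers` implementation of CLIP. The dataset layout (`sketches/<category>/*.png`), the prompt template, and the hyperparameters are illustrative assumptions, not the project's actual configuration.

```python
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"
DATA_DIR = Path("sketches")  # assumed layout: sketches/<category>/*.png


class SketchDataset(Dataset):
    """Pairs each sketch image with a text prompt built from its category."""

    def __init__(self, root: Path):
        self.items = [
            (path, f"a sketch of a {path.parent.name}")
            for path in sorted(root.glob("*/*.png"))
        ]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, prompt = self.items[idx]
        return Image.open(path).convert("RGB"), prompt


def collate(batch, processor):
    # Tokenize the prompts and preprocess the images in one call.
    images, prompts = zip(*batch)
    return processor(
        text=list(prompts), images=list(images),
        return_tensors="pt", padding=True,
    )


def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    processor = CLIPProcessor.from_pretrained(MODEL_NAME)
    model = CLIPModel.from_pretrained(MODEL_NAME).to(device)

    loader = DataLoader(
        SketchDataset(DATA_DIR), batch_size=32, shuffle=True,
        collate_fn=lambda b: collate(b, processor),
    )
    # A small learning rate is typical when fine-tuning CLIP end to end.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

    model.train()
    for epoch in range(3):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            # return_loss=True makes CLIPModel compute the symmetric
            # image-text contrastive loss over the batch.
            loss = model(**batch, return_loss=True).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

    model.save_pretrained("clip-sketch-finetuned")
    processor.save_pretrained("clip-sketch-finetuned")


if __name__ == "__main__":
    main()
```

The loop reuses CLIP's own contrastive objective: each batch of (sketch, prompt) pairs is embedded, and the model is trained to align matching pairs while pushing apart mismatched ones. Freezing most of the backbone and training only the projection layers is a common lighter-weight alternative.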