In this paper, the researchers examine the ability of CLIP features to support text-driven image retrieval. Traditional image-based queries sometimes misalign with user intentions because they emphasize irrelevant components of the query image. To overcome this, the researchers explore text-based image retrieval using Contrastive Language-Image Pretraining (CLIP) models. Trained on large datasets of image-caption pairs, CLIP models allow natural language descriptions to serve as more targeted queries. The authors evaluate text-image similarity for progressively more detailed queries and find a sweet spot of textual detail that yields the best results; they also find that words describing the "tone" of a scene (such as "messy" or "dingy") are important for maximizing text-image similarity.
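The retrieval setup described here amounts to ranking images by the similarity between a CLIP text embedding of the query and CLIP image embeddings of the candidates. The following is a minimal sketch of that idea using the Hugging Face transformers CLIP interface, not the authors' exact pipeline; the model checkpoint, file names, and example queries are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the paper's specific CLIP variant is not stated here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_images(query: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Score each image against a text query by cosine similarity of CLIP embeddings."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
        # Normalize the projected embeddings so the dot product is a cosine similarity.
        text_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
        img_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
        sims = (img_emb @ text_emb.T).squeeze(-1)
    # Highest-similarity images first.
    return sorted(zip(image_paths, sims.tolist()), key=lambda x: x[1], reverse=True)

# Progressively more detailed queries, including "tone" words such as "messy" or "dingy"
# (hypothetical queries and file names, for illustration only).
for q in ["a kitchen", "a messy kitchen", "a messy, dingy kitchen with dishes in the sink"]:
    print(q, rank_images(q, ["img1.jpg", "img2.jpg"]))
```

Under this kind of setup, the paper's observation corresponds to how the ranking scores change as the query text becomes more detailed.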