πŸ·πŸ€– AI-powered wine recommendation system using vector embeddings, Qdrant, and GPT-4 Turbo. Provides personalized wine suggestions based on semantic search of tasting notes.

scouring/RAG-LLM-GenAI

Wine Recommendation with Embeddings and LLMs πŸ·πŸ€–

This project demonstrates a modern AI-powered wine recommendation system using vector embeddings, Qdrant vector database, and a large language model (LLM) to deliver personalized suggestions based on user queries.

Problem Statement

This project demonstrates how an LLM can make suggestions like a sommelier. Deployed on an edge device, such as a digital kiosk on a restaurant dining table, it could recommend wines directly to consumers. A system like this could increase sales through personalization, enhance the customer experience, differentiate a brand, and move inventory efficiently: the sommelier's expertise is captured once in vector embeddings and made available 24/7 without requiring human staff.


πŸš€ Features

  • Vector Embeddings: Uses Sentence-Transformers all-MiniLM-L6-v2 to encode wine tasting notes into numerical vectors.
  • Vector Database: Leverages Qdrant as an in-memory vector database to store and search embeddings efficiently.
  • Semantic Search: Finds wines most relevant to a user query, e.g., "Suggest an amazing Malbec from Argentina."
  • LLM Integration: Connects search results to an OpenAI GPT-4 Turbo model to generate natural language recommendations.
  • Data Handling: Cleans and samples wine dataset to ensure smooth embedding and indexing.

πŸ“‚ Dataset

  • Uses a CSV of top-rated wines.
  • Filters out entries with missing varieties (NaN) to avoid errors in embeddings.
  • Samples 700 records for efficient indexing and search.
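The cleaning and sampling steps above can be sketched with pandas. This is a minimal illustration, not the project's actual script: the column names (`name`, `variety`, `notes`) and the tiny inline CSV are assumptions, since the real dataset's schema isn't shown here.

```python
# Sketch of the dataset preparation: drop rows with a missing variety,
# then sample a fixed-size subset. Column names are illustrative.
import io
import pandas as pd

csv_data = io.StringIO(
    "name,variety,notes\n"
    "Wine A,Malbec,dark fruit and spice\n"
    "Wine B,,notes without a variety\n"
    "Wine C,Chardonnay,citrus and oak\n"
)

df = pd.read_csv(csv_data)
df = df[df["variety"].notna()]            # filter out NaN varieties
sample = df.sample(n=2, random_state=42)  # deterministic subset (the project uses 700)
print(len(sample))                        # 2
```

Sampling keeps embedding and indexing fast; the `random_state` just makes the subset reproducible.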

πŸ›  Tech Stack

  • Python
  • Sentence-Transformers (all-MiniLM-L6-v2 embedding model)
  • Qdrant (in-memory vector database)
  • OpenAI GPT-4 Turbo

🧠 System Architecture

RAG architecture diagram

⚑ How It Works

  1. Load and clean the wine dataset.
  2. Sample a subset for efficient processing.
  3. Encode wine tasting notes into embeddings using SentenceTransformer.
  4. Create an in-memory Qdrant collection to store vectors and metadata.
  5. Perform semantic search to find wines matching the user prompt.
  6. Pass the search results to an LLM to generate user-friendly recommendations.

πŸ“ Example Usage

user_prompt = "Suggest an amazing Malbec wine from Argentina"

# Perform vector search
hits = qdrant.search(
    collection_name="top_wines",
    query_vector=encoder.encode(user_prompt).tolist(),
    limit=3
)

# Collect search results
search_results = [hit.payload for hit in hits]

# Generate natural language recommendation using GPT-4
completion = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a wine specialist chatbot."},
        {"role": "user", "content": user_prompt},
        {"role": "assistant", "content": str(search_results)}
    ]
)
print(completion.choices[0].message.content)
