LLM-Based Recommendation Systems: From Embeddings to Real Personalization

Özge Çinko

Track: Machine Learning & Deep Learning & Statistics
Python Skill: Intermediate
Domain Expertise: Intermediate

Recommendation systems are a core component of many data-driven products, yet most practitioners are still navigating how and when to incorporate Large Language Models into these systems effectively.

This talk presents a practical, end-to-end view of LLM-based recommendation systems. We start by revisiting classical recommendation architectures and then move into modern approaches built around embeddings, vector similarity search, and retrieval-augmented generation (RAG).
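To make the embedding-based approach concrete, here is a minimal sketch of similarity-based retrieval: items and a user profile live in the same embedding space, and recommendations are the nearest items by cosine similarity. The embeddings here are random stand-ins for real LLM embeddings, and all names are illustrative.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, items: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of item vectors."""
    query_norm = query / np.linalg.norm(query)
    items_norm = items / np.linalg.norm(items, axis=1, keepdims=True)
    return items_norm @ query_norm

# Stand-in for precomputed LLM embeddings: 5 items, 4-dimensional vectors.
rng = np.random.default_rng(42)
item_embeddings = rng.normal(size=(5, 4))

# A simple user representation: mean of the embeddings of recently viewed items.
user_vector = item_embeddings[:2].mean(axis=0)

# Recommend the items most similar to the user's profile.
scores = cosine_similarity(user_vector, item_embeddings)
top_k = np.argsort(scores)[::-1][:3]
```

In production this exact-similarity scan would typically be replaced by an approximate nearest-neighbor index, but the scoring idea is the same.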

Topics covered include:
- Using LLM embeddings for user and item representation
- Hybrid retrieval pipelines combining vector search and traditional ranking models
- Prompt-driven personalization and context-aware recommendations
- Offline and online evaluation strategies for LLM-based recommenders
- Trade-offs around latency, cost, and system complexity
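One way to read the hybrid-retrieval topic: a candidate set returned by vector search is re-scored by blending semantic similarity with a traditional ranking signal such as popularity. A minimal sketch, where the items, scores, and blending weight are all hypothetical:

```python
import numpy as np

def hybrid_score(vector_sim: np.ndarray, popularity: np.ndarray,
                 alpha: float = 0.7) -> np.ndarray:
    """Blend vector similarity with a traditional ranking signal.

    alpha weights the semantic match; (1 - alpha) weights the
    popularity feature. Both inputs are assumed normalized to [0, 1].
    """
    return alpha * vector_sim + (1.0 - alpha) * popularity

# Hypothetical candidates retrieved by a vector search stage.
candidates = ["item_a", "item_b", "item_c"]
vector_sim = np.array([0.92, 0.85, 0.60])   # similarity to the user profile
popularity = np.array([0.10, 0.80, 0.95])   # e.g. min-max normalized clicks

scores = hybrid_score(vector_sim, popularity)
ranking = [candidates[i] for i in np.argsort(scores)[::-1]]
# item_a: 0.674, item_b: 0.835, item_c: 0.705 → item_b ranks first
```

A linear blend is the simplest option; the same two-stage structure also supports feeding both signals into a learned ranking model instead.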

The focus is on real-world applicability rather than theoretical novelty. Examples and design patterns are drawn from production-like systems and practical experimentation. This session is aimed at data scientists, ML engineers, and practitioners who want to move beyond hype and build recommendation systems that deliver meaningful personalization using LLMs.

Özge Çinko

I’m Özge Çinko, a curious soul with a computer engineering degree and a heart full of ideas.

I’m currently shaping the future as an AI Research Engineer at Huawei. I work in AI research, but I’m just as passionate about blending creativity with code.

Whether it’s turning emotions into visuals, building fictional chatbots, or crafting data stories, I love making tech feel personal.

I write, build, explore, and sometimes get beautifully lost in too many ideas, but always with Python by my side.