Many investment strategies look great because they performed well in the past. However, it is often unclear why they work or whether they would still work in the future. Strong backtest results are frequently driven by hidden assumptions, unclear data handling, or unrealistic rules rather than real skill or insight.
In this talk, I show how Tidy Finance principles help readers understand what actually happens inside a financial backtest. Tidy Finance has become a popular open-source platform for teaching and learning empirical financial research. Its core idea is simple: financial analyses should be built from clear, well-structured data that makes assumptions easy to see and results easy to reproduce.
Using examples from Tidy Finance with Python, I walk through a realistic backtesting workflow and show how it changes when assumptions are written down explicitly instead of being hidden inside the code. I demonstrate how small, often overlooked choices can have a large impact on results, and how these effects become visible when the analysis is structured cleanly. The focus is on learning how to read and question backtests, not on presenting new models or strategies.
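To make the idea concrete, here is a minimal sketch (not taken from the talk) of the kind of overlooked choice meant above: whether a trading signal is lagged before it is used. The data, signal, and parameters are all hypothetical; the point is only that writing the assumption down as one explicit line makes its impact visible.

```python
# Hypothetical illustration: a tiny momentum backtest on synthetic returns,
# showing how one hidden assumption -- lagging the signal or not -- changes
# the result. All names and parameters here are made up for the sketch.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
returns = pd.Series(rng.normal(0.0005, 0.01, 500))  # synthetic daily returns

# Simple momentum signal: sign of the trailing 20-day mean return.
signal = np.sign(returns.rolling(20).mean())

# Hidden-assumption version: trade on the same day the signal is computed.
# The rolling mean already contains today's return, so this has lookahead bias.
same_day = (signal * returns).sum()

# Explicit version: lag the signal by one day, so trades only use
# information available before the return is realized.
lagged = (signal.shift(1) * returns).sum()

print(f"same-day signal: {same_day:.4f}")
print(f"lagged signal:   {lagged:.4f}")
```

The single `shift(1)` call is the whole difference between the two results; in a cleanly structured analysis that choice sits in plain sight rather than buried inside an indexing expression.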