Every week, AI brings us another groundbreaking release, another model version, another must-have integration. Among these developments, agentic systems have emerged as a key component. Introduced at the end of 2024, the Model Context Protocol (MCP) has become an important enabler of this change and has established itself as the standard for connecting AI agents with external data sources and tools. In such a rapidly shifting landscape, how do you build production systems that aren't obsolete by the time you deploy them?
This talk shares practical lessons from building two real-world MCP applications with FastMCP and PydanticAI: JobmonitorMCP, which leverages the jobmonitor.de API to create intelligent regional labor market reports, and a tool for an international non-profit that combines multiple agents into a powerful question-and-answer application.
During development, we faced multiple challenges: MCP clients and models that interpret the same protocol differently, emerging features with limited documentation, and the difficulty of evaluating non-deterministic outputs. Stakeholders repeatedly asked "Why does it behave differently today?" and "Are we using the newest model yet?"
What we learned: the antidote to AI hype isn't avoiding new technology; it's anchoring development in trusted engineering principles. Separation of concerns and focused components helped us design for the protocol rather than for specific clients. A rigorous evaluation approach combined LLM-as-Judge scoring with manual review and user feedback. Transparent communication helped us manage expectations around AI capabilities without undermining confidence.
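The combined evaluation approach can be sketched in plain Python. This is a minimal illustration, not the talk's actual implementation: the rubric, the `judge_call` parameter, and the routing threshold are all assumptions, and the judge is stubbed here rather than calling a real model.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    score: int       # 1-5 rubric score assigned by the judge model
    rationale: str   # one-line justification for the score


def judge_answer(question: str, answer: str, judge_call) -> Verdict:
    """Ask a judge LLM to grade an answer against a fixed rubric.

    `judge_call` is any callable that sends a prompt to an LLM and returns
    its text response; keeping it abstract lets the judge model be swapped
    out as newer models appear, without touching the evaluation logic.
    """
    prompt = (
        "Rate the answer to the question on a 1-5 scale for factual "
        "accuracy and completeness. Reply as '<score>|<one-line reason>'.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    raw = judge_call(prompt)
    score_text, _, rationale = raw.partition("|")
    return Verdict(score=int(score_text.strip()), rationale=rationale.strip())


def needs_manual_review(verdict: Verdict, threshold: int = 3) -> bool:
    # Low-scoring answers are routed to a human reviewer instead of being
    # trusted automatically -- LLM-as-Judge narrows the review queue, it
    # does not replace it.
    return verdict.score <= threshold


# Stubbed judge for demonstration; in production this would call a model.
fake_judge = lambda prompt: "2|Answer omits the regional breakdown."
verdict = judge_answer("What changed in Q3?", "Numbers went up.", fake_judge)
print(verdict.score, needs_manual_review(verdict))  # → 2 True
```

The point of the design is that the non-deterministic part (the judge model) sits behind a plain callable, while the deterministic routing logic around it stays testable.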
This session targets intermediate Python developers building or planning to build AI-powered applications. You'll leave with concrete strategies for building AI systems that adapt to new models while maintaining production stability, reflection questions for your own projects, and perhaps a little more confidence in your existing knowledge.