Free with Any Paid ODSC AI East 2026 Conference Pass; find more details about the conference at https://odsc.ai/east
$2,500+ in credits to multiple AI platforms. Full access to all 3 weeks of expert-led training, interactive workshops, and business talks. Live and on-demand access to all sessions. Participation in community channels (Slack/Discord). Certification or digital badge upon completion.
Free access to invited speaker sessions, talks & office hours. No access to training or hands-on sessions & recordings.
Week 1: Foundations of Agentic AI – Learn the building blocks of autonomous agents, including core architectures, planning methods, memory systems, and leading development frameworks.
Week 2: Advanced Techniques and Industry Solutions – Dive into advanced reasoning, multi-agent coordination, tool chaining, self-healing workflows, and emerging security challenges.
Week 3: Production Deployment and Business Transformation – Focus on real-world applications with sessions on agent evaluation, reliability, deployment strategies, and cumulative demo showcases.
Learn step-by-step, guided by experts.
Interactive Workshops: Participate in live, expert-led sessions where you’ll build real agentic systems.
Cutting-Edge Tools: Experiment with the latest agent frameworks, memory systems, and orchestration platforms.
Exclusive Builder Credits: Receive free credits from partners to immediately start building and scaling your own autonomous AI agents.
Dive into live, expert-led sessions where you’ll work on real-world AI challenges and learn by doing.
Expert-Led Sessions: Learn directly from pioneers in autonomous agents, multi-agent coordination, and agentic workflows.
Real-World Insights: Engage in live Q&A, hands-on demos, and hear from innovators behind today’s leading agent technologies.
From Research to Deployment: Understand both cutting-edge research and practical deployment strategies for agentic AI.
Immediate Impact: Build, Deploy & Scale AI Agents with Credits
✔ Deploy real autonomous agents with no upfront cost
✔ Access premium APIs, frameworks, and cloud tools
✔ Experiment, build, and test in real-world environments
✔ Compete in build challenges and mini-hackathons
✔ Start building with industry-grade platforms — immediately and for free
Featured Topics
TECHNICAL TRACK
AI Agent Architecture Fundamentals
Popular AI Agent Frameworks
Agent Memory Systems
Multi-Agent Systems
Advanced RAG for Agents
Agent Evaluation & Testing
Tool Use & API Integration
Multimodal Agents
AI Agent Observability
Production-Ready Agent Systems
Real-Time Agents & Live Context Pipelines
Autonomous Coding Agents & AI Engineering Assistants
BUSINESS TRACK
Getting Started with Agentic AI in the Enterprise
Governance, Risk & Responsible Agent Deployment
AI Agents in Operations
Designing Agentic Workflows for Operational Efficiency
Enterprise Adoption Roadmaps for Agentic AI
Measuring Business Impact & ROI of Agentic AI
Building AI-Augmented Teams & Redefining Roles in the Agent Era
Scaling AI Agent Adoption
Future Trends & Strategy
We are accepting hands-on workshop submissions for a limited time. Alternatively, if you are interested in sponsorship, please contact partners@odsc.com
Who Should Attend
✔ AI Engineer / Developer
✔ Machine Learning Engineer
✔ Data Scientist
✔ Tech Lead / Architect
✔ or any technical professional building AI-powered systems
✔ Startup Founder / Product Leader
✔ Business Process Analyst
✔ Executive / Non-Technical Founder
✔ AI Enthusiast
✔ or any leader focused on driving business impact with intelligent automation
Previous Partners



Previous Media Partners





Reach Thousands of AI Professionals: Extend your influence to our vast network through our global conferences, newsletter, meetups, webinars, and more.
Alternatively, if you are interested in sponsorship, please contact partners@odsc.com
✔ Expanded Training: 40+ hands-on tutorials, workshops, and training sessions
✔ Leading Experts: Taught by top instructors and experienced practitioners in AI and ML
✔ Breadth and Depth: Sessions from beginner to expert ensure all levels are covered
Discover a dynamic space where connections fuel innovation. The AI Builders Summit offers unparalleled networking opportunities with a global community of engineers, data scientists, and AI leaders. Engage in insightful conversations, exchange expertise, and collaborate with professionals who share your vision for advancing AI. This is more than networking—it’s your chance to join a thriving space of creators and problem-solvers driving the future of technology.
Sessions
Theme: Build Your First Agent – Tools, Prompts, and Use Cases
Theme: Agent Collaboration and Real-World Task Automation
Shane Murray | Field CTO & Head of AI Strategy | Monte Carlo
Abstract
Nearly every data organization is under pressure to deploy Agentic AI initiatives, but where do they start? What do data teams actually need to know before they begin? The engineering team at Monte Carlo, a data + AI observability company, not only works with hundreds of enterprise customers to help them build reliable AI products every day but has also developed its own Agentic AI applications within the product.
After a year of deploying GenAI in production, including the world's first-ever Observability Agents, the team has learned 10 key lessons that can serve as guidance for others currently in the process of or planning for their Agentic AI investment. Join Shane Murray, Field CTO & Head of AI at Monte Carlo, as he shares the takeaways his team has learned in the Agentic AI development process.
Lessons include:
- 1: Separate AI research from product research
- 2: Apply observability to model output
- 3: Validate results with data or human interactions
- 4: Divide and aggregate small agentic tasks
- 5: Split tasks horizontally to reduce runtime
- 6: Some tasks will have to wait for better models
- 7: AI solutions can be valuable for fast development
- 8: Establish cooperative workflow between product engineers and data scientists
- 9: Use LLM clone models to keep data secure
- 10: Leverage structured output response formats
Attendees will walk away with a deeper understanding of best practices for Agentic AI product research, validations, task splitting, workflows, security, and perhaps the most difficult, knowing when to be patient.
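To make lesson 10 (structured output response formats) concrete, here is a minimal sketch in plain Python. The `TriageResult` schema, its fields, and the mocked model response are hypothetical illustrations, not Monte Carlo's actual implementation.

```python
# Illustrative sketch: constrain agent output to a fixed JSON schema and
# validate it before use, instead of parsing free-form text.
import json
from dataclasses import dataclass

@dataclass
class TriageResult:
    root_cause: str
    severity: str
    confidence: float

def parse_agent_output(raw: str) -> TriageResult:
    """Parse a model response that was requested as a JSON object."""
    data = json.loads(raw)
    result = TriageResult(
        root_cause=str(data["root_cause"]),
        severity=str(data["severity"]),
        confidence=float(data["confidence"]),
    )
    if not 0.0 <= result.confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return result

# A (mocked) model response in the requested format.
raw_response = '{"root_cause": "schema drift", "severity": "high", "confidence": 0.82}'
print(parse_agent_output(raw_response))
```

Requesting a fixed schema makes downstream validation and aggregation of agent output far more reliable than scraping prose.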
Session Outline:
Lesson 1: Set the right foundations for the early stages of Agentic AI development. This means establishing the right research workflows and monitoring output quality from the jump. Decide how your team will validate responses.
Lesson 2: Learn how to split Agentic tasks correctly. This means dividing and aggregating small tasks, as well as splitting tasks horizontally to effectively reduce runtime.
Lesson 3: Establish a cooperative workflow between internal teams, from product engineers to data scientists, for continued development and optimization. Learn how to maintain data security with LLM clone models, as well as how to leverage structured output response formats.
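The horizontal task splitting in Lesson 2 can be sketched with Python's standard library. The subtask here is a hypothetical stand-in for a real agent or LLM call (which is I/O-bound, so threads help).

```python
# Sketch of horizontal task splitting: run independent agentic subtasks
# concurrently, then aggregate the results.
from concurrent.futures import ThreadPoolExecutor

def run_subtask(table: str) -> dict:
    # Placeholder for an agent call that inspects one table.
    return {"table": table, "status": "ok"}

def aggregate(results: list[dict]) -> dict:
    return {"checked": len(results),
            "healthy": sum(r["status"] == "ok" for r in results)}

tables = ["orders", "users", "events", "payments"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_subtask, tables))
print(aggregate(results))  # {'checked': 4, 'healthy': 4}
```

Because each subtask is independent, wall-clock runtime approaches the slowest single call rather than the sum of all calls.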
The session will focus on lessons from building the Troubleshooting Agent at Monte Carlo, including:
- The problem definition (and why this problem was suited for AI)
- Getting to production / value
- How it's built – a review of the multi-agent architecture built with Claude and LangGraph
- What it does – a demo or overview of the user interaction
- How we evaluate model quality
- Tips for how to get started building agents
Background Knowledge:
Basic understanding of data engineering, AI development, and data + AI infrastructure.
David Parry, Developer Advocate | Qodo
In this hands-on lab, we will explore the process of creating intelligent agents that integrate with your IDE’s AI-powered plugin. These agents will assist you during your normal development workflow, solving everyday problems that are specific to your projects and teams.
By the end of this session, you will have a customized AI assistant embedded in your IDE, capable of providing context-aware suggestions, automating repetitive tasks, and enhancing your overall coding experience.
Theme: Designing Scalable and Intelligent Agent Systems
Ivan Nardini, Developer Relations Engineer | Google Cloud; Annie Wang, Developer Relations Engineer | Google Cloud
Description Coming Soon!
Josh Reini, Developer Advocate at Snowflake
As enterprise AI adoption accelerates, data agents that can plan, retrieve, reason, and act across structured and unstructured sources are becoming foundational. But building agents that work is no longer enough; you need to build agents you can trust.
This 60-minute workshop walks through how to design, deploy, and evaluate multi-agent systems using Snowflake Cortex. You’ll build agents that connect to enterprise data sources (structured and unstructured) and perform intelligent, multi-step operations with Cortex Analyst and Cortex Search.
Then we’ll go beyond functionality and focus on reliability. You’ll learn how to instrument your agent with inline, reference-free evaluation to measure goal progress, detect failure modes, and adapt plans dynamically. Using trace-based observability tools like TruLens and Cortex eval APIs, we’ll show how to identify inefficiencies and refine agent behavior iteratively.
By the end of this workshop, you’ll:
Build a data agent capable of answering complex queries across multiple data sources
Integrate inline evaluation to guide and assess agent behavior in real time
Debug and optimize execution flows using trace-level observability
Leave with a repeatable framework for deploying trustworthy agentic systems in production
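The idea of inline, reference-free evaluation described above can be sketched in a few lines of plain Python. The keyword-coverage heuristic is purely illustrative; in the workshop this role is played by TruLens feedback functions and Cortex eval APIs, whose interfaces are not shown here.

```python
# Conceptual sketch of reference-free evaluation: after each agent step,
# score progress toward the goal without ground-truth labels, using a
# simple keyword-coverage heuristic over the step trace.
def goal_progress(goal_keywords: set[str], trace: list[str]) -> float:
    """Fraction of goal keywords mentioned anywhere in the step trace."""
    text = " ".join(trace).lower()
    hits = sum(1 for kw in goal_keywords if kw.lower() in text)
    return hits / len(goal_keywords) if goal_keywords else 1.0

trace = [
    "Plan: query revenue by region from the sales table",
    "Cortex Analyst returned 4 rows of revenue by region",
]
print(goal_progress({"revenue", "region"}, trace))  # 1.0
```

Scoring each step inline lets the agent detect a stalled plan (score not improving) and re-plan before wasting further tool calls.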
Micheal Lanham, Principal AI Engineer | Brilliant Harvest
In this two-hour course, participants will explore the design and implementation of behavior-driven conversational agents using Agent Flows—a flexible and powerful framework for modeling intelligent, context-aware, and autonomous interactions. Agent Flows extend traditional behavior modeling by embedding agency, enabling agents to reason, act, and communicate dynamically within multi-agent systems.
Through conceptual deep dives and practical hands-on activities, attendees will gain the skills to architect conversational agents that not only manage dialogue but also coordinate tools, access Model Context Protocol (MCP) servers, and engage in agent-to-agent communication.
The course will also cover debugging, tracing, and evaluation techniques critical for building reliable and adaptable agentic systems.
David Hughes, Principal Data & AI Solution Architect | Enterprise Knowledge
You've built your Graph RAG system with agentic workflows in the last session—now comes the critical question: how do you know it's actually working? Traditional LLM evaluation metrics fall short when measuring Graph RAG performance in production environments where agents make autonomous decisions based on retrieved knowledge.
This advanced session tackles the unique challenges of evaluating agentic Graph RAG systems. Unlike simple question-answering scenarios, agentic workflows require evaluation frameworks that measure not just accuracy, but decision quality, reasoning consistency, and operational reliability under real-world conditions.
Lesson 1: What Makes Graph RAG Evaluation Different
We'll explore why standard retrieval metrics miss the mark for agentic systems. When agents use Graph RAG to make decisions, you need to evaluate retrieval quality, reasoning coherence, and downstream task performance simultaneously. We'll define evaluation frameworks that capture how well your Graph RAG system enables agents to achieve their goals, focusing on metrics that directly impact business outcomes rather than traditional accuracy measures.
Lesson 2: Practical Benchmarking with OPIK
You'll get hands-on experience setting up comprehensive evaluation pipelines using OPIK, an open-source platform designed for production LLM evaluation. We'll demonstrate how to create evaluation datasets that reflect real agentic scenarios, implement automated benchmarking workflows, and interpret results to identify system weaknesses. You'll learn to benchmark not just individual queries, but entire agentic decision sequences that reveal how well your Graph RAG supports agent reasoning.
Lesson 3: Production Monitoring and Continuous Improvement
We'll conclude by looking forward and investigating strategies for ongoing system health monitoring in production environments. You'll learn techniques for detecting performance degradation, handling dynamic data environments, and implementing feedback loops that help your workflows improve over time. We'll cover essential aspects like monitoring data freshness, query performance, and agent decision quality.
We'll also explore multi-agent coordination strategies and how frameworks like Google ADK and A2A can enhance your agentic workflows to ensure your Graph RAG system maintains reliability as it scales. By the end of this session, you'll have the tools and methodologies to confidently deploy and maintain Graph RAG systems that your agents can rely on for critical decision-making.
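One evaluation dimension from Lesson 1 can be sketched without any framework: measuring whether the subgraph an agent retrieved actually supports its final decision. The node IDs and the notion of "supporting nodes" below are illustrative assumptions, not part of any particular Graph RAG stack.

```python
# Minimal sketch: precision/recall of retrieved graph nodes against the
# set of nodes that genuinely support the agent's final decision.
def retrieval_support(retrieved: set[str], supporting: set[str]) -> dict:
    tp = len(retrieved & supporting)  # nodes both retrieved and needed
    return {
        "precision": tp / len(retrieved) if retrieved else 0.0,
        "recall": tp / len(supporting) if supporting else 0.0,
    }

metrics = retrieval_support(
    retrieved={"incident:42", "service:api", "owner:data-eng"},
    supporting={"incident:42", "service:api"},
)
print(metrics)  # precision 2/3, recall 1.0
```

High recall with low precision suggests the agent is reasoning over noise; low recall means it is deciding on incomplete evidence, which is often the more dangerous failure mode.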
Philipp Schmid , AI Developer Experience | Google DeepMind
As AI continues to evolve, we are seeing a shift from static chatbots to dynamic agentic AI systems capable of autonomous reasoning, tool integration, and multi-step problem-solving.
This hands-on workshop teaches you how to design and build agentic systems that go from basic chatbots to leveraging structured outputs, function calling, and workflow orchestration to solve enterprise-scale challenges. It bridges theoretical concepts with hands-on implementation strategies using models like Gemini 2.5 Pro.
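The function-calling pattern covered in this workshop can be sketched generically: the model emits a structured call, and application code dispatches it to a registered tool. The tool registry, the `get_weather` tool, and the call format below are generic illustrations, not the Gemini API's actual interface.

```python
# Sketch of function calling: dispatch a model-emitted structured call
# like {"name": ..., "args": {...}} to a registered Python function.
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Execute a tool call described by the model's JSON output."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["args"])

print(dispatch('{"name": "get_weather", "args": {"city": "Boston"}}'))
# prints "Sunny in Boston"
```

In a real loop, the tool's return value is fed back to the model so it can continue reasoning with the result.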
Chi Wang, Founder of AutoGen (Now AG2) and Senior Staff Research Scientist | Google DeepMind
In this workshop, you’ll learn about AG2, a powerful open-source AgentOS for building AI agents capable of reaching a high level of agentic freedom in terms of communicating, acting, and adapting in dynamic environments. Participants will gain hands-on experience using AG2 and Waldiez, a drag-and-drop UI for rapid agent workflow design.
From automating research tasks at the University of Cambridge to deploying agents in real-world production environments, this workshop equips you with the skills to harness agentic AI for rapid prototyping, research, and business use cases. By the end of the session, you’ll understand the core principles of agentic systems, how to create and control AI agents, and how organizations like BetterFutureLabs use them for deep analysis, decision-making, and beyond.
Chris Alexiuk, Co-Founder & CTO | AI Makerspace and DL | NVIDIA
In this event, we’ll explore the latest on agent evaluation from the leading LLM application evaluation framework: RAG Assessment (RAGAS). Join us live to learn about best practices for evaluating agentic workflows, including Topic Adherence, Tool Call Accuracy, and Agent Goal Accuracy.
📚 You’ll learn:
- How to think about assessing your agent applications quantitatively, with leading best-practice metrics
- How agentic workflows are being assessed at the LLM edge
🤓 Who should attend the event:
- Aspiring AI Engineers who want to build and evaluate production-grade agent applications
- AI Engineering leaders who want to instrument their agent deployments with leading evaluators
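The Tool Call Accuracy idea mentioned above can be illustrated from scratch: compare the tool calls an agent actually made against the expected sequence for a test case. RAGAS provides richer built-in metrics; this sketch only conveys the concept, and the tool names are hypothetical.

```python
# Illustrative from-scratch version of a tool-call accuracy metric:
# fraction of expected tool calls matched, position by position.
def tool_call_accuracy(expected: list[str], actual: list[str]) -> float:
    matches = sum(e == a for e, a in zip(expected, actual))
    return matches / len(expected) if expected else 1.0

expected = ["search_docs", "summarize", "send_email"]
actual = ["search_docs", "summarize", "send_slack"]
print(tool_call_accuracy(expected, actual))  # 2 of 3 calls matched
```

Running a suite of such test cases after every prompt or model change turns agent regressions into a measurable, trackable number.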

