---
layout: single
title: "Agentic AI for Oncology — Team Series"
permalink: /agentic-ai/
author_profile: false
---

We've started a new series on Agentic AI for Oncology, where our team shares paper summaries, practical insights, and lessons for building safe and equitable multi-agent AI systems in healthcare.
Episode 1 — AutoGen Framework 🚀
Slides (PDF):
📑 Download presentation
LinkedIn post:
🔗 View the full post on LinkedIn
Summary
This week our team dug into AutoGen, a framework for building multi-agent LLM systems where “conversable” agents (LLMs, tools, and humans) solve tasks together via conversation programming.
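To make the "conversable agent" idea concrete, here is a minimal sketch of a two-agent AutoGen setup. It assumes the pyautogen 0.2-style API; the model name, key, and task message are placeholders of ours, not code from the paper or slides.

```python
# Minimal AutoGen "conversation programming" sketch (assumes pyautogen ~0.2 API).
# Model name, API key, and task message are illustrative placeholders.
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# LLM-backed assistant: writes answers and code inside the conversation.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# Proxy for the human/tools: relays messages and can execute returned code locally.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # switch to "ALWAYS" to approve every step
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# The task is solved as a chat between the two agents.
user_proxy.initiate_chat(assistant, message="Plot a toy survival curve from synthetic data.")
```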
- Plain terms: Assign roles — researcher, coder, validator — and let them collaborate (with humans in the loop) to reach safer, more reliable outcomes. 🧠🤝
- Oncology angle — why we care 🧬
  - Novel biomarkers: role-split retrieval, coding, and safety checks speed exploratory analyses without risking PHI. 🔍
  - Reproducible hypothesis validation: “writer → safeguard → executor” loops test hypotheses, track provenance, and cut error cascades. 📊
  - Human-in-the-loop by design: clinicians and researchers step in at decision points for oversight and accountability (a rough sketch of this pattern follows the list). 👩‍⚕️👨‍💻
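As a rough illustration of the role-split and human-in-the-loop points above, the sketch below wires a researcher, coder, and validator into an AutoGen GroupChat with a clinician proxy that can pause every round for review. Agent names, system messages, and the task are our own illustrative choices (again assuming the pyautogen 0.2-style API), not code from the paper or slides.

```python
# Illustrative "role-split with human oversight" sketch (assumes pyautogen ~0.2 API).
# Names, prompts, and the task are placeholders, not from the paper or slides.
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="Retrieve and summarize relevant biomarker literature.",
    llm_config=llm_config,
)
coder = autogen.AssistantAgent(
    name="coder",
    system_message="Write analysis code for the current hypothesis.",
    llm_config=llm_config,
)
validator = autogen.AssistantAgent(
    name="validator",
    system_message="Check results for statistical and safety issues; flag anything needing human review.",
    llm_config=llm_config,
)

# Human-in-the-loop: the clinician proxy asks for input every round and runs code locally.
clinician = autogen.UserProxyAgent(
    name="clinician",
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "analysis", "use_docker": False},
)

group = autogen.GroupChat(
    agents=[clinician, researcher, coder, validator], messages=[], max_round=8
)
manager = autogen.GroupChatManager(groupchat=group, llm_config=llm_config)

clinician.initiate_chat(
    manager, message="Explore candidate biomarkers in the de-identified toy cohort."
)
```

The "writer → safeguard → executor" loop from the slides is the same idea with a stricter turn order: the safeguard agent reviews proposed code before the executing proxy runs it.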
Open Questions for the Community
- Privacy: on-prem deployments, differential privacy, synthetic data (one on-prem config sketch follows this list). 🔒
- Safety & evaluation: benchmark datasets/metrics for agent pipelines. 🛡️
- Alternatives / complements: what’s worked for you (e.g., LangGraph, CrewAI, Transformers Agents) in regulated settings? ⚙️
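On the privacy question, one commonly discussed option is to keep everything on-prem by pointing AutoGen's llm_config at a locally hosted, OpenAI-compatible endpoint. The sketch below is a hedged illustration under that assumption; the endpoint URL, model name, and serving stack are placeholders, not a recommendation.

```python
# Sketch of an on-prem setup: AutoGen talking to a locally hosted, OpenAI-compatible
# endpoint (e.g., served by vLLM) so prompts and data never leave the network.
# URL, model name, and key are placeholders. Assumes pyautogen ~0.2.
import autogen

local_llm_config = {
    "config_list": [
        {
            "model": "local-model-name",              # whatever the local server hosts
            "base_url": "http://localhost:8000/v1",   # on-prem OpenAI-compatible API
            "api_key": "placeholder-not-used",        # local servers often ignore this
        }
    ]
}

assistant = autogen.AssistantAgent(name="onprem_assistant", llm_config=local_llm_config)
```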
Gratitude & Shout-outs 🙌
- Huge thanks to Shikhar Shiromani for the walkthrough and slides. 📎
- Appreciation to the team for great questions and discussion: Twisha Shah, Zenghan Wang, Mohammad Tanvir Hasan, Suman Ghosh, Juan Francisco Pesantez Borja, Liu Jencheng, Rohan Dalal ✨
Paper Link
📄 AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation (arXiv:2308.08155)
Next Episodes
Stay tuned for upcoming deep dives into other frameworks, safety mechanisms, and oncology-specific applications of Agentic AI.