The rise of Large Language Models (LLMs) and agentic AI systems is transforming how complex workflows are designed and managed. Whereas traditional orchestration is centralized, modern workflows must support distributed, autonomous agents operating across cloud, edge, and on-premises environments. These agents collaborate with humans and other systems, adapt to evolving goals, and cross organizational and trust boundaries. This paradigm shift is especially relevant in domains such as cybersecurity and healthcare emergency response, where workflows must be constructed and executed dynamically under uncertainty. In such settings, rigid automation falls short: agentic workflows require decentralized, secure, and auditable orchestration.
This thesis explores how to enable such systems, asking: how can we achieve secure, distributed orchestration in environments where agentic AI operates autonomously? To answer this, it proposes a formal modeling framework for distributed agentic workflows, protocols for auditable, privacy-preserving coordination, and a reference architecture validated through real-world proofs of concept in cybersecurity and healthcare.