AI Trends in DevOps 2025: Agents, Open Models, and New Architectures
AI in DevOps is moving beyond simple copilots toward agentic workflows. Modern agents can reflect on their own actions, call APIs, plan multi-step tasks, and collaborate with other agents – not just autocomplete commands. Teams are already using these capabilities in production tools: AI-driven task management, documentation flows, and even financial operations, all orchestrating work across multiple systems. The key for DevOps leaders is to decide which workflows are safe to hand over to agents, define clear error budgets and interaction rules, and keep evaluation loops in place so agents don’t quietly drift.
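As a rough illustration of that guardrail idea, the sketch below combines an action allowlist with an error budget that gates whether an agent step may run autonomously. Everything here is hypothetical – the AgentGate class, ALLOWED_ACTIONS set, and ERROR_BUDGET value are placeholders, not the API of any particular agent framework.

```python
# Minimal sketch of an approval gate plus error budget for agent actions.
# All names and thresholds are illustrative assumptions, not a product API.
from dataclasses import dataclass, field

ALLOWED_ACTIONS = {"create_ticket", "update_doc", "read_metrics"}  # judged safe to automate
ERROR_BUDGET = 5  # failed or rejected actions tolerated per evaluation window


@dataclass
class AgentGate:
    failures: int = 0
    audit_log: list = field(default_factory=list)

    def review(self, action: str, args: dict) -> bool:
        """Return True if the agent may execute this action without a human."""
        approved = action in ALLOWED_ACTIONS and self.failures < ERROR_BUDGET
        self.audit_log.append({"action": action, "args": args, "approved": approved})
        return approved

    def record_failure(self) -> None:
        """Burn error budget; once exhausted, review() stops approving actions."""
        self.failures += 1


# Usage: the orchestrator consults the gate before every agent step.
gate = AgentGate()
if gate.review("create_ticket", {"title": "Rotate expiring TLS cert"}):
    pass  # hand off to the ticketing integration
```

The point is less the code than the pattern: every agent action passes through an explicit policy check, and the audit log feeds the evaluation loop that catches drift.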
At the same time, the balance between closed and open-source LLMs is shifting. New open models such as DeepSeek R1 and specialised tools like Mistral OCR are narrowing the gap with proprietary offerings while bringing better cost control and customisation options. However, “open-source” remains a fuzzy label – licences, datasets, and access patterns vary widely – so due diligence is becoming part of standard platform work. Teams need to match models to use cases based on cost, latency, and governance requirements, and revisit those choices as the landscape changes.
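One way to make that matching explicit is a small routing table the platform team owns. The sketch below is an assumption-laden example: the catalogue entries, cost and latency figures, and the pick_model helper are all placeholders, chosen only to show cost, latency, and governance as first-class routing criteria.

```python
# Illustrative model routing: pick the cheapest model that meets latency and
# governance constraints. Figures and model names are placeholder assumptions.
from typing import NamedTuple


class ModelProfile(NamedTuple):
    name: str
    cost_per_1k_tokens: float  # assumed USD figures, for comparison only
    p95_latency_ms: int
    self_hostable: bool        # rough proxy for data-governance control


CATALOG = [
    ModelProfile("proprietary-frontier", 0.0100, 1200, False),
    ModelProfile("open-reasoning-self-hosted", 0.0020, 2000, True),
    ModelProfile("small-open-model", 0.0004, 300, True),
]


def pick_model(max_cost: float, max_latency_ms: int, needs_self_hosting: bool) -> ModelProfile:
    """Return the cheapest catalogue entry satisfying the given constraints."""
    candidates = [
        m for m in CATALOG
        if m.cost_per_1k_tokens <= max_cost
        and m.p95_latency_ms <= max_latency_ms
        and (m.self_hostable or not needs_self_hosting)
    ]
    if not candidates:
        raise ValueError("No model satisfies the constraints; revisit the requirements.")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)


# Example: a latency-sensitive internal assistant handling sensitive data.
print(pick_model(max_cost=0.005, max_latency_ms=500, needs_self_hosting=True))
```

Encoding the criteria like this keeps the “which model for which workload” decision reviewable and easy to update as prices, licences, and models change.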
Under the hood, research is also challenging the dominance of vanilla transformers. Hybrid approaches that mix transformers with newer architectures such as Mamba or linear-recurrence layers promise faster inference and more efficient resource usage. For DevOps and platform engineers, this doesn’t mean rewriting everything, but it does mean planning for more diversity in model architectures and hardware. Platform stacks that can host a mix of models – closed and open, transformer and hybrid – will be in a stronger position than those optimised for a single vendor or approach.
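What “hosting a mix” can look like in practice is a registry that records architecture, runtime, and hardware per model behind a single entry point, so callers never care which backend answers. The registry entries, field values, and generate stub below are assumptions made up for the sketch, not a real serving stack.

```python
# Sketch of an architecture-agnostic model registry. Entries and field values
# are illustrative assumptions; dispatch is stubbed to keep it self-contained.
from typing import TypedDict


class ModelEntry(TypedDict):
    architecture: str  # "transformer", "mamba-hybrid", ...
    runtime: str       # serving stack the platform deploys for it
    hardware: str      # target accelerator pool


REGISTRY: dict[str, ModelEntry] = {
    "closed-frontier":    {"architecture": "transformer",  "runtime": "vendor-api",  "hardware": "n/a"},
    "open-reasoning":     {"architecture": "transformer",  "runtime": "self-hosted", "hardware": "gpu-pool-a"},
    "hybrid-ssm-preview": {"architecture": "mamba-hybrid", "runtime": "self-hosted", "hardware": "gpu-pool-b"},
}


def generate(model_id: str, prompt: str) -> str:
    """Single entry point; the registry decides which backend serves the request."""
    entry = REGISTRY[model_id]
    # Real dispatch would call the vendor API or an internal inference service;
    # a stubbed response keeps the example runnable.
    return f"[{entry['runtime']}/{entry['architecture']}] response to: {prompt}"


print(generate("hybrid-ssm-preview", "Summarise last night's deploy failures."))
```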