
AWS Kiro Powers: On‑Demand AI Expertise for Developers


ThinkTools Team

AI Research Lead

Introduction

In the rapidly evolving landscape of AI‑assisted software development, the promise of intelligent coding assistants has moved from novelty to necessity. Developers now rely on large language models (LLMs) to draft code, debug, and even design architecture. Yet the practical deployment of these assistants has revealed a stubborn bottleneck: the sheer volume of context that must be loaded before the model can begin a task. AWS’s recent unveiling of Kiro powers at re:Invent addresses this issue head‑on by shifting the paradigm from “load everything at once” to “load only what you need, when you need it.” By partnering with industry leaders such as Stripe, Figma, and Datadog, Kiro powers offers a modular, on‑demand approach that promises to reduce token consumption, lower costs, and accelerate development cycles. This post delves into the mechanics of Kiro powers, its economic advantages over traditional fine‑tuning, and its place within AWS’s broader vision of autonomous AI agents.

Main Content

The Bottleneck of Context Overload

Modern AI coding assistants rely on the Model Context Protocol (MCP) to interface with external services. Each MCP server registers a suite of tool definitions that the assistant can invoke. When a developer connects multiple services—say, Stripe for payments, Figma for design, and Supabase for data storage—the assistant must ingest dozens of tool definitions into its working memory. AWS reports that merely five MCP servers can consume over 50,000 tokens, roughly 40% of a typical model’s context window, before the developer types a single request. This phenomenon, often dubbed “context rot,” forces assistants to sift through irrelevant information, leading to slower responses, degraded output quality, and higher token‑based costs. Developers have voiced frustration that the very act of configuring an assistant can drain their token budget, undermining the efficiency gains promised by AI coding tools.
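A back-of-the-envelope sketch makes the overhead concrete. The tool definitions below are hypothetical stand-ins (not the partners’ real APIs), and the token estimate uses the common rough heuristic of about four characters of JSON per token, not Kiro’s actual accounting:

```python
import json

# Illustrative MCP-style tool definitions; names and schemas are
# hypothetical stand-ins, not Stripe's or Figma's real APIs.
def tool_def(name, description, params):
    return {
        "name": name,
        "description": description,
        "inputSchema": {"type": "object", "properties": params},
    }

SERVERS = {
    "stripe": [
        tool_def("create_payment_intent",
                 "Create a PaymentIntent for a one-time checkout.",
                 {"amount": {"type": "integer"}, "currency": {"type": "string"}}),
        tool_def("list_customers",
                 "List customers, optionally filtered by email.",
                 {"email": {"type": "string"}, "limit": {"type": "integer"}}),
    ],
    "figma": [
        tool_def("get_file_nodes",
                 "Fetch nodes from a Figma file by key.",
                 {"file_key": {"type": "string"}, "ids": {"type": "array"}}),
    ],
}

def estimate_tokens(obj):
    # Rough heuristic: ~4 characters of serialized JSON per token.
    return len(json.dumps(obj)) // 4

# With every server loaded up front, this cost is paid before the
# developer types a single request.
upfront = sum(estimate_tokens(t) for tools in SERVERS.values() for t in tools)
print(f"Tokens consumed by tool definitions alone: {upfront}")
```

Scale this handful of toy definitions up to five full MCP servers with dozens of tools each, and the 50,000-token figure AWS cites becomes easy to believe.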

Dynamic Loading with Kiro Powers

Kiro powers tackles context rot by packaging three core components into a single, dynamically‑loaded bundle. The first is a steering file, POWER.md, which serves as a concise onboarding manual for the assistant. It enumerates available tools and, crucially, defines the conditions under which each tool should be activated. The second component is the MCP server configuration itself, establishing the actual connection to external services. The third comprises optional hooks and automation scripts that trigger specific actions when a power is engaged.
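As a mental model of that bundle (not Kiro’s actual on-disk format; the field names, POWER.md text, and hook strings below are all illustrative assumptions), the three components might be pictured like this:

```python
from dataclasses import dataclass, field

@dataclass
class Power:
    """Illustrative model of a power bundle; field names are assumptions."""
    name: str
    steering: str        # POWER.md: which tools exist and when to use them
    mcp_config: dict     # the actual connection to the external MCP server
    hooks: list = field(default_factory=list)  # optional automation scripts

stripe_power = Power(
    name="stripe",
    steering=(
        "# POWER.md (hypothetical example)\n"
        "Activate when the conversation involves payments, checkout, or refunds.\n"
        "- Use create_payment_intent for one-time charges.\n"
        "- Never log raw card data.\n"
    ),
    mcp_config={"command": "stripe-mcp", "env": ["STRIPE_API_KEY"]},
    hooks=["on_activate: verify the API key is configured"],
)
print(stripe_power.steering)
```

The key design point is that the steering file travels with the connection details and automation, so installing a power delivers the vendor’s usage guidance and the wiring in one unit.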

When a developer mentions “payment” or “checkout” in a conversation with Kiro, the system automatically activates the Stripe power, loading its tools and best practices into context. If the developer shifts focus to database operations, Supabase is activated while Stripe is deactivated. In practice, the baseline context usage when no powers are active approaches zero, ensuring that token consumption remains tightly coupled to the developer’s immediate needs. According to AWS, this on‑demand loading model can reduce token usage by more than 70 % compared to traditional MCP‑based approaches.
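The activation flow can be sketched as simple keyword routing. The trigger words and per-power token costs below are invented for illustration; Kiro’s real matching is presumably more sophisticated than bag-of-words lookup:

```python
# Hypothetical keyword triggers per power; real matching is likely semantic.
TRIGGERS = {
    "stripe": {"payment", "checkout", "refund"},
    "supabase": {"database", "table", "query"},
}

# Assumed per-power context cost in tokens, for illustration only.
POWER_TOKENS = {"stripe": 4000, "supabase": 3500}

def active_powers(message):
    """Return the set of powers whose trigger words appear in the message."""
    words = set(message.lower().split())
    return {p for p, kw in TRIGGERS.items() if words & kw}

def context_cost(message):
    """Tokens loaded into context for this message under on-demand loading."""
    return sum(POWER_TOKENS[p] for p in active_powers(message))

# Baseline with no powers active is (near) zero...
assert context_cost("refactor this function") == 0
# ...and only the relevant power loads when the topic shifts.
assert active_powers("add a checkout button") == {"stripe"}
assert active_powers("create a database table") == {"supabase"}
```

Under always-on MCP loading, every message would pay for both powers (7,500 tokens in this toy setup); here each message pays only for what it actually touches.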

Democratizing Advanced Development Practices

Before Kiro powers, only a handful of seasoned developers could craft sophisticated steering files, fine‑tune prompts, and manually manage tool activation. AWS’s Vice President of Developer Agents, Deepak Singh, notes that many developers wanted to give their assistants “special powers” for specific domains—such as making a front‑end assistant an expert at backend‑as‑a‑service platforms. By formalizing these practices into reusable powers, AWS enables any developer to tap into the expertise that previously required deep knowledge of agent configuration. Partners like Stripe and Supabase can now publish a single power that automatically configures the assistant for their service, allowing developers worldwide to benefit from best‑in‑class integration without the overhead of manual setup.

Economics: Powers vs Fine‑Tuning

Fine‑tuning—retraining an LLM on domain‑specific data—has long been the go‑to method for achieving high performance in niche tasks. However, fine‑tuning is prohibitively expensive for most developers and is often infeasible with closed‑source models from Anthropic, OpenAI, or Google. Kiro powers sidesteps this limitation by providing a lightweight, runtime augmentation that does not alter the underlying model. Singh emphasizes that the dynamic loading mechanism is “much cheaper” because it eliminates the need for costly model updates and reduces ongoing token usage. In an era where AI services charge per token, the ability to activate only the relevant subset of tools translates directly into tangible cost savings.
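Per-token pricing makes the saving easy to quantify. With purely illustrative numbers (the 50,000-token always-loaded figure from AWS, an assumed 4,000 tokens for a single on-demand power, and a made-up price of $3 per million input tokens), the arithmetic looks like:

```python
PRICE_PER_M_INPUT = 3.00   # illustrative $ per 1M input tokens, not a real quote
ALWAYS_LOADED = 50_000     # tool definitions loaded on every request (AWS figure)
ON_DEMAND = 4_000          # assumed cost of one relevant power per request
REQUESTS_PER_DAY = 200     # assumed volume for a busy developer or small team

def daily_cost(tokens_per_request):
    """Daily spend on context overhead alone, before any actual work tokens."""
    return tokens_per_request * REQUESTS_PER_DAY * PRICE_PER_M_INPUT / 1_000_000

before, after = daily_cost(ALWAYS_LOADED), daily_cost(ON_DEMAND)
print(f"Context overhead: ${before:.2f}/day -> ${after:.2f}/day "
      f"({1 - after / before:.0%} saved)")
```

Even with these rough assumptions the overhead drops by an order of magnitude, and unlike fine-tuning there is no up-front training bill to amortize.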

Positioning within AWS’s Agentic AI Vision

Kiro powers is one facet of AWS’s broader push into “agentic AI,” a term the company uses to describe systems capable of autonomous operation over extended periods. Earlier at re:Invent, AWS announced frontier agents such as the Kiro autonomous agent for software development, the AWS security agent, and the AWS DevOps agent. These frontier agents tackle large, multi‑day projects that require sustained decision‑making across multiple codebases. Kiro powers, by contrast, focuses on everyday development tasks where speed and token efficiency are paramount. Together, they form a complementary spectrum: frontier agents for complex, long‑term projects and Kiro powers for precise, on‑demand assistance.

Implications for the Future of AI‑Assisted Development

The launch of Kiro powers signals a maturation of the AI development tool market. Early entrants like GitHub Copilot introduced millions of developers to AI assistance, but the subsequent proliferation of tools—Cursor, Cline, Claude Code—has amplified complexity. While the Model Context Protocol standardized external service integration, it inadvertently created the very context overload Kiro powers now resolves. By enabling developers to “build a power once, use it anywhere,” AWS is positioning itself as the bridge between production‑grade software engineering and cutting‑edge AI. As more AI coding assistants enter the market, the efficiency gains from dynamic context loading will become increasingly valuable, potentially redefining best practices for AI‑augmented development.

Conclusion

AWS’s Kiro powers represents a decisive step toward practical, cost‑effective AI‑assisted coding. By shifting from a monolithic context model to a modular, on‑demand loading system, the platform addresses the core pain points of token waste, slow response times, and high operational costs. The partnership model further democratizes advanced integration, allowing third‑party services to publish reusable powers that any developer can deploy with a single click. In the broader context of AWS’s agentic AI strategy, Kiro powers complements the frontier agents that handle complex, long‑term projects, together offering a full spectrum of autonomous assistance. As the industry moves beyond proof‑of‑concept prototypes toward production‑grade applications, solutions that intelligently manage context will likely become the new standard for AI‑enhanced software development.

Call to Action

If you’re a developer looking to streamline your workflow, experiment with Kiro powers today by installing the latest Kiro IDE (v0.7 or newer). Explore the pre‑built powers from Stripe, Figma, Datadog, and more, and consider creating your own custom power to share with the community. For teams seeking to reduce token costs and accelerate delivery, integrating Kiro powers into your CI/CD pipeline could deliver measurable ROI. Finally, keep an eye on AWS’s evolving agentic AI roadmap—future releases promise deeper cross‑compatibility with other IDEs and command‑line tools, expanding the reach of on‑demand AI expertise across the developer ecosystem.
