Welcome back to Agentic Coding Weekly. Here are the updates on agentic coding tools, models, and workflows for the week of Feb 22-28, 2026.

Executive Summary:

  • Claude Code gets remote control to continue local sessions from any browser or phone.

  • Claude Code also gets two new skills, /simplify and /batch, plus a /copy command for copying Claude's last response.

  • Cursor cloud agents now run in full VMs with dev environments; they can test their changes and produce artifacts like screenshots and videos.

  • Codex CLI gets voice dictation support.

  • Worth reading: A dog that generates 3D games via random keystrokes, Cloudflare rebuilds Next.js from scratch, and a survey of what tools Claude Code actually picks.

1. Tooling and Model Updates

Claude Code Remote Control

A new feature lets you continue a local Claude Code session from your phone, tablet, or any browser. Claude keeps running on your machine the entire time; nothing moves to the cloud. More details in the docs.

This is Anthropic's native alternative to the Tailscale + SSH setups people have been using to access Claude from their phones via Termius or Termux. Check the demo.

Cursor Cloud Agents

Cloud agents now get their own virtual machines with full development environments. They can test their changes and produce artifacts including videos, screenshots, and logs. A nice detail: you can also take control of the agent's remote desktop and make edits yourself, without checking out the branch locally. Check the announcement.

Quick Updates

  • Claude Code added two built-in skills: /simplify for improving code quality and /batch for automating code migrations.

  • A new /copy command in Claude Code copies Claude's last message to clipboard. Useful for grabbing output without the terminal line breaks and escape characters that we get when copying directly from the TUI.

  • Codex CLI introduced built-in dictation support. Enable it by setting features.voice_transcription = true in the config, then hold the spacebar to record and transcribe voice input.
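For reference, the dictation flag would look like this in Codex's config file (assuming the standard ~/.codex/config.toml location; in TOML, the dotted key features.voice_transcription maps to a [features] table):

```toml
# ~/.codex/config.toml
[features]
voice_transcription = true
```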

2. Workflow of the Week

RTK is a CLI proxy that filters command output before it reaches the LLM, cutting token usage by 60-90% on common operations like git status, cargo test, and file reads.

RTK sits between Claude Code and the shell. When Claude runs git status, RTK intercepts it, runs the real command, then filters the output down to essentials before sending it back.

Filtering strategies include removing noise (comments, boilerplate), grouping similar items, truncating redundancy, and deduplicating repeated log lines.
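The deduplication strategy, for instance, can be sketched in a few lines. This is a toy illustration of the idea, not RTK's actual implementation:

```python
def dedupe_log_lines(output: str) -> str:
    """Collapse consecutive duplicate lines into one line plus a repeat
    count, mirroring RTK's dedup strategy in miniature."""
    result: list[str] = []
    prev, count = None, 0
    for line in output.splitlines():
        if line == prev:
            count += 1          # same as the previous line: just count it
            continue
        if count > 1:
            result[-1] += f"  (x{count})"  # annotate the finished run
        result.append(line)
        prev, count = line, 1
    if count > 1:
        result[-1] += f"  (x{count})"      # annotate a trailing run
    return "\n".join(result)
```

Run on three identical warnings followed by an "ok" line, this returns the warning once with an "(x3)" suffix, so repeated log spam costs a handful of tokens instead of hundreds.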

To set it up at the project level, first install RTK, then run rtk init from your project root. This adds RTK instructions to ./CLAUDE.md, and Claude Code will start using rtk-prefixed commands automatically.

brew install rtk
# or
curl -fsSL https://raw.githubusercontent.com/rtk-ai/rtk/refs/heads/master/install.sh | sh

rtk init  # in project root

Running rtk gain should now confirm that it was installed correctly. Try running git status and rtk git status side by side to see the difference between the raw and the filtered output.

rtk init adds a lot of instructions to CLAUDE.md. A cleaner approach is to use hooks instead: add the following to ./.claude/settings.json, and Bash commands will automatically be proxied through RTK.

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/rtk-rewrite.sh"
          }
        ]
      }
    ]
  }
}
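The rtk-rewrite.sh script referenced above ships with RTK. To illustrate what such a hook can do, here is a hypothetical Python equivalent. It relies on documented Claude Code behavior: a PreToolUse hook that exits with code 2 blocks the tool call, and its stderr is fed back to Claude, which nudges it to rerun the command through rtk. The command list and message here are assumptions, not RTK's actual logic:

```python
#!/usr/bin/env python3
"""Hypothetical stand-in for rtk-rewrite.sh (illustration only)."""
import json
import sys

# Commands RTK knows how to filter -- an illustrative subset.
RTK_COMMANDS = {"git", "cargo", "pytest", "eslint", "docker", "kubectl"}

def check(command: str) -> tuple[int, str]:
    """Return (exit_code, message). Exit code 2 tells Claude Code to
    block the Bash call and show the message to Claude; 0 lets it run."""
    parts = command.split()
    if parts and parts[0] in RTK_COMMANDS:
        return 2, f"Run 'rtk {command}' instead to get filtered output."
    return 0, ""

if __name__ == "__main__":
    payload = json.load(sys.stdin)  # PreToolUse payload from Claude Code
    code, msg = check(payload.get("tool_input", {}).get("command", ""))
    if msg:
        print(msg, file=sys.stderr)
    sys.exit(code)
```

With this in place, a bare git status gets bounced back with a hint, while rtk git status (and anything RTK doesn't handle) passes through untouched.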

RTK typically filters output for git operations, test runners (cargo, pytest, vitest), linters (eslint, ruff), file reads, directory listings, and docker/kubectl output. You can check the token savings with rtk gain.

Check more details about RTK on GitHub.

3. Community Picks

I Taught My Dog to Vibe Code Games

The author created a setup to convert random keystrokes from their dog into 3D games using Claude Code. The setup includes a Raspberry Pi proxy for keystroke filtering, automated feedback loops with screenshot-based visual QA, and custom linters for Godot's text-based scene files.

The author uses simple prompt engineering, asking the LLM to interpret the random input as secret cryptic commands full of genius game ideas. Here's part of the prompt:

Hello! I am an eccentric video game designer (a very creative one) who communicates in an unusual way. Sometimes I’ll mash the keyboard or type nonsense like “skfjhsd#$%” – but these are NOT random! They are secret cryptic commands full of genius game ideas (even if it’s hard to see).

Your job: You are a brilliant AI game developer who can understand my cryptic language. No matter what odd or nonsensical input I provide, you will interpret it as a meaningful instruction or idea for our video game. You will then build or update the game based on that interpretation.

What Claude Code Actually Chooses

A survey of the tools Claude Code picked across 2,340 prompts. It frequently builds custom solutions rather than recommending third-party tools.

When it does pick third-party tools, it converges to a default stack: Vercel, PostgreSQL, Stripe, Tailwind CSS, shadcn/ui, pnpm, GitHub Actions, Sentry, Resend, Zustand, plus stack-specific picks like Drizzle (JS) or SQLModel (Python) for ORMs, NextAuth.js for auth, and Vitest (JS) or pytest (Python) for testing.

How We Rebuilt Next.js with AI in One Week

Following Cursor's experiment building a browser and Anthropic building a C compiler, Cloudflare used AI to build Next.js from scratch. They implemented the Next.js API surface directly on Vite, creating a drop-in replacement called vinext available on GitHub. Read the post and discussion on HN.

That’s it for this week. I write this weekly on Mondays. If this was useful, subscribe below:
