Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. ultraworkers/claw-code
149,346 stars this week · Rust
A viral, likely astroturfed repo claiming to be a Rust-based harness runtime for Claude (Anthropic's API), but its record-breaking star velocity is a red flag for fake engagement — treat with extreme skepticism.
Use case
This repo claims to solve the problem of building a faster, memory-safe harness layer around Claude's API — think session management, tool orchestration, MCP (Model Context Protocol) support, and streaming — so you can wire Claude into complex agentic workflows without writing boilerplate. In theory, if the Rust crates (api-client, runtime, tools) are real, you could use them as a backend runtime for an AI agent that manages multi-turn sessions with compaction and tool-use. In practice, 100K stars in under 2 hours with no real documentation, an ownership transfer in progress, and a redirect to a parity repo is a classic star-farming / hype play.
Why it's trending
It's trending purely because of manufactured virality — 100K stars in ~2 hours is statistically impossible via organic discovery and has been flagged across dev communities on X and Hacker News this week as a GitHub star-buying stunt riding the wave of Claude hype and MCP interest. The real signal here is the underlying MCP/Claude harness concept, not this repo.
How to use it
1. Do NOT add this as a dependency or clone it for production use — the ownership transfer and parity-repo redirect mean the codebase is unstable and potentially unsafe.
2. If you want to explore the concept (Claude harness + MCP + Rust), go directly to Anthropic's official MCP spec at modelcontextprotocol.io and the official Claude API docs instead.
3. For a legitimate TypeScript/Node equivalent of what this claims to do, use the official `@anthropic-ai/sdk` with streaming:

```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();
const stream = await client.messages.stream({
  model: 'claude-opus-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello' }],
});
for await (const chunk of stream) {
  if (chunk.type === 'content_block_delta') process.stdout.write(chunk.delta.text);
}
```

4. If you genuinely need a Rust API client for Claude, look at the community crate `anthropic-rs` on crates.io instead — it has real maintainers and a real commit history.
5. Use this moment as a lesson: always check star-velocity graphs on star-history.com and commit frequency before trusting a repo.
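The star-velocity check in step 5 can also be done programmatically. The sketch below is our own illustration: the `starred_at` field and the `application/vnd.github.star+json` Accept header are documented GitHub REST API features, but the velocity computation and thresholds are ours, not part of any repo above.

```typescript
// Estimate star velocity from GitHub's stargazer timestamps.
// Fetch them (paginated) from:
//   GET https://api.github.com/repos/{owner}/{repo}/stargazers
//   with header  Accept: application/vnd.github.star+json
// which returns [{ starred_at, user }, ...].

function starsPerHour(starredAt: string[]): number {
  if (starredAt.length < 2) return 0;
  const times = starredAt.map((t) => Date.parse(t)).sort((a, b) => a - b);
  const spanHours = (times[times.length - 1] - times[0]) / 3_600_000;
  return spanHours === 0 ? Infinity : starredAt.length / spanHours;
}

// 4 stars spread over 2 hours is ~2 stars/hour; an astroturf spike on a
// brand-new repo shows thousands per hour from accounts with no history.
const sample = [
  '2026-04-01T00:00:00Z',
  '2026-04-01T00:30:00Z',
  '2026-04-01T01:15:00Z',
  '2026-04-01T02:00:00Z',
];
console.log(starsPerHour(sample)); // 2
```

Sampling the first and most recent pages of stargazers is usually enough to spot a spike without fetching all of them.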
How I could use this
- Write a blog post titled 'How to spot a fake viral GitHub repo' using claw-code as the case study — show the star-history graph spike, explain star-buying mechanics, and contrast it with legitimate repos like the official Anthropic SDK. This is genuinely useful content for your audience and will rank well given the current discourse.
- Build a real Claude-powered MCP tool server for your blog using the official Anthropic SDK and MCP spec — something like a 'blog post idea validator' tool that Claude can call during a multi-turn session. This is what claw-code claims to enable, and building it yourself with documented code is far more credible than depending on a sketchy repo.
- Add an 'AI session harness' feature to your blog's admin panel: use Anthropic's official streaming API to build a multi-turn Claude session that remembers context across your drafting session, suggests edits, and can call a custom tool (e.g., a Supabase query tool) to pull in your past post performance data — this is legitimate MCP-style agentic usage without any dependency on this repo.
2. sanbuphy/learn-coding-agent
10,998 stars this week · various
A reverse-engineered architectural breakdown of Claude Code's CLI agent internals — permission systems, tool loops, and harness mechanisms — distilled from public sources into a structured reference.
Use case
If you're building your own AI coding agent or CLI tool and don't know how production agents actually handle tool permissions, sub-agent spawning, or state management, this repo maps out the exact patterns Claude Code uses. Concrete example: you're building a Next.js AI assistant that can read/write files and call APIs — this gives you a blueprint for how to structure the permission flow and tool registry instead of guessing.
Why it's trending
Claude Code just hit mainstream adoption after Anthropic made it free for Claude.ai subscribers, so developers are rushing to understand its internals to build compatible tools or replicate its patterns in their own agents. Nearly 11k stars in a week signals the 'how does this actually work' demand is peaking right now.
How to use it
1. Clone the repo and start with `docs/en/` — read the Architecture Overview section first to understand the Entry → Query Engine → Tools/Services/State pipeline before diving into specifics.
2. Study the Tool System & Permissions section to understand how Claude Code registers 40+ tools behind a permission gate — model this pattern in your own agent: each tool declares its required permissions, and a central dispatcher checks them before execution.
3. Map the 12 Progressive Harness Mechanisms to your own agent loop. Each mechanism (e.g., undercover mode, remote control, telemetry) is a production feature layered on a basic ReAct loop — identify which ones your use case actually needs.
4. Use the Directory Reference tree as a scaffold for structuring your own TypeScript agent project — the separation of Tools, Services, and State is directly portable to a Next.js API route-based agent.
5. Cross-reference with the actual Claude Code npm package (`@anthropic-ai/claude-code`) in your terminal to validate your understanding of the architecture against real outputs.
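To make the permission-gate idea in step 2 concrete, here is a minimal sketch. The types and names are our own illustration of the pattern, not Claude Code's actual API: each tool declares what it needs, and one dispatcher checks grants before any tool runs.

```typescript
// Permission-gate pattern: tools declare required permissions,
// a central dispatcher checks them before execution.

type Permission = 'read_fs' | 'write_fs' | 'network';

interface Tool {
  name: string;
  requires: Permission[];
  run: (input: string) => string;
}

class Dispatcher {
  private tools = new Map<string, Tool>();
  constructor(private granted: Set<Permission>) {}

  register(tool: Tool) {
    this.tools.set(tool.name, tool);
  }

  dispatch(name: string, input: string): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    const missing = tool.requires.filter((p) => !this.granted.has(p));
    if (missing.length > 0) {
      return `DENIED ${name}: missing ${missing.join(', ')}`;
    }
    return tool.run(input);
  }
}

// Grant only read access; writes get refused at the gate.
const d = new Dispatcher(new Set<Permission>(['read_fs']));
d.register({ name: 'readFile', requires: ['read_fs'], run: (p) => `contents of ${p}` });
d.register({ name: 'writeFile', requires: ['write_fs'], run: (p) => `wrote ${p}` });

console.log(d.dispatch('readFile', 'notes.md'));  // contents of notes.md
console.log(d.dispatch('writeFile', 'notes.md')); // DENIED writeFile: missing write_fs
```

In a real agent, the denial branch is where you would prompt the user for confirmation instead of returning a string.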
How I could use this
- Build a transparent 'AI Writing Agent' for your blog that shows readers exactly which tools it invoked (web search, summarizer, outline generator) and why — use the permission-gate pattern from this repo to make each tool call explicit and auditable in the UI, turning the agent internals into a trust-building feature.
- Create a career tool that acts as a 'Job Application Agent' — it reads a job description, queries your resume Supabase table, runs a gap analysis, and drafts a cover letter. Model the tool registry pattern here: `readResume`, `fetchJobDescription`, `scoreMatch`, and `draftLetter` as discrete permissioned tools with a central dispatcher, so you can swap LLM providers without rewriting logic.
- Add a Claude Code-style sub-agent system to your blog's AI features: when a reader asks a complex question, your primary agent spawns specialized sub-agents (a 'code explainer', a 'source finder', a 'summarizer') and aggregates their outputs — the sub-agent spawning pattern is documented in the Tool System section and maps cleanly to parallel Supabase Edge Function calls.
3. openai/codex-plugin-cc
10,587 stars this week · JavaScript
An OpenAI-official Claude Code plugin that lets you run Codex code reviews and delegate background coding tasks without leaving your Claude Code workflow.
Use case
Developers using Claude Code as their primary AI coding environment previously had no clean way to leverage OpenAI's Codex for a second-opinion review or to offload long-running tasks. Now you can run /codex:adversarial-review to get a steerable challenge review of a PR diff, or use /codex:rescue to hand off a stuck refactor to Codex as a background job while Claude keeps working on something else — two AI agents collaborating in one terminal session.
Why it's trending
This dropped directly from OpenAI the same week Codex CLI hit general availability with free-tier access, making it zero-friction to try a two-model review workflow — Claude writes, Codex challenges — which is a genuinely new pattern that's going viral in AI-dev circles.
How to use it
1. Inside Claude Code, add the marketplace and install: `/plugin marketplace add openai/codex-plugin-cc`, then `/plugin install codex@openai-codex`, then `/reload-plugins`.
2. Run `/codex:setup` — it will detect if Codex CLI is missing and offer to `npm install -g @openai/codex` for you.
3. Authenticate: `!codex login` (uses your ChatGPT account or `OPENAI_API_KEY`).
4. Kick off a background review of your uncommitted changes: `/codex:review --background`, then poll with `/codex:status` and retrieve with `/codex:result`.
5. For a harder review, use `/codex:adversarial-review` and steer it with a prompt like 'focus on security edge cases in the auth middleware'.
How I could use this
- Wire `/codex:adversarial-review` into Henry's blog post drafting pipeline: before publishing a code-heavy tutorial, run an adversarial review on the snippet files and surface any gotchas as a 'Codex says...' callout block in the post — it adds credibility and catches errors Claude missed.
- Build a GitHub Action for Henry's portfolio repos that runs `/codex:review --background` on every PR via the Codex CLI directly (`npx @openai/codex review --diff origin/main`), then posts the structured review as a PR comment — a concrete portfolio piece demonstrating multi-model CI.
- Use the `/codex:rescue` subagent pattern as inspiration for a Supabase Edge Function that accepts a 'stuck task' payload from Henry's blog's AI assistant, spins up a Codex CLI process server-side, and streams the result back — letting readers paste broken code into the blog and get an async Codex fix without leaving the page.
4. claude-code-best/claude-code
10,403 stars this week · TypeScript
A reverse-engineered, buildable TypeScript reconstruction of Anthropic's Claude Code CLI — with type fixes, Bing web search added, and anti-distillation code removed.
Use case
Anthropic ships Claude Code as an obfuscated, minified binary with no public source. This repo decompiles and restores it to readable, hackable TypeScript so you can self-host it, swap in alternative LLM providers (it already supports GLM), add custom tools, or embed Claude Code's agentic loop directly into your own Next.js tooling pipeline — without being locked to Anthropic's CLI distribution.
Why it's trending
It hit 10k stars in under 48 hours after launch, driven by developer frustration with Claude Code being a black-box binary and the community desire to run it with non-Anthropic API endpoints. The removal of Anthropic's anti-distillation guards and the addition of Bing search made it immediately practical for self-hosted setups.
How to use it
1. Install Bun >= 1.3.11: `curl -fsSL https://bun.sh/install | bash && bun upgrade`
2. Clone and install deps: `git clone https://github.com/claude-code-best/claude-code && cd claude-code && bun install`
3. Set your API key and optional base URL in env (supports OpenAI-compatible endpoints): `export ANTHROPIC_API_KEY=sk-...` (or point to your proxy).
4. Run in dev mode: `bun run dev` — you should see version number 888 in the TUI, confirming you're on CCB.
5. To debug: run `bun run dev:inspect` in a terminal, then attach the VS Code debugger via F5 → 'Attach to Bun (TUI debug)' and set breakpoints in `src/`.
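Driving a CLI agent like CCB from a Next.js API route boils down to spawning it as a child process and piping text through it. The helper below is a generic sketch: how CCB itself accepts input is an assumption on our part, so the demo uses `cat` as a stand-in binary that simply echoes what it receives.

```typescript
// Spawn a CLI tool, write a document to its stdin, collect its stdout.
import { spawn } from 'node:child_process';

function runAgent(cmd: string, args: string[], input: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    let out = '';
    child.stdout.on('data', (chunk) => (out += chunk));
    child.on('error', reject);
    child.on('close', (code) =>
      code === 0 ? resolve(out) : reject(new Error(`exit code ${code}`))
    );
    child.stdin.write(input);
    child.stdin.end();
  });
}

// In an API route you would swap 'cat' for the real agent binary and
// pass your draft MDX as the input string.
runAgent('cat', [], '# Draft post\n').then((out) => console.log(out));
```

Keeping the helper promise-based makes it trivial to `await` inside a route handler and stream the result back to the admin UI.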
How I could use this
- Wire this into your blog's admin panel as a local AI writing assistant: spin up the CCB CLI as a child process from a Next.js API route, pipe your draft MDX file to it, and get inline suggestions or auto-generated 'related posts' metadata without paying Anthropic's API markup — swap the provider to GLM when credits run low.
- Build a resume/cover-letter agent tool by forking CCB's agentic loop: give it a SYSTEM prompt with Henry's resume context, expose it via a Supabase Edge Function, and let it autonomously read a job description URL (using the built-in Bing search), diff it against the resume, and output a tailored cover letter — the web search capability is already implemented, no extra integration needed.
- Use CCB as a self-hosted code review bot for your blog's GitHub PRs: since the source is now readable TypeScript, you can add a custom tool definition that calls your Supabase DB to pull recent post analytics, then have the agent automatically annotate PRs touching content-related components with performance context — something impossible with the opaque official binary.
5. ChinaSiro/claude-code-sourcemap
7,985 stars this week · TypeScript
Reconstructed TypeScript source code of Anthropic's official Claude Code CLI tool (v2.1.88), recovered by extracting sourcesContent fields from the public npm package's source maps.
Use case
Claude Code ships as a minified CLI bundle on npm, so developers couldn't study how Anthropic actually architected a production AI coding agent — things like multi-agent coordination, tool orchestration (Bash, FileEdit, Grep), plugin systems, and voice interaction. This repo reverse-engineers those 1,884 TypeScript source files so engineers can read real patterns for building AI agents, not toy demos. Example: you want to understand how Claude Code handles streaming tool calls with cancellation — you can now read the actual production implementation in services/ and tools/.
Why it's trending
Claude Code just shipped agentic features (sub-agents, KAIROS assistant mode, MCP integrations) that are being heavily discussed but whose internals were opaque — developers are racing to understand the architecture before building compatible tooling or competitors. The leak of 4,756 files from a source map oversight is also a cautionary-tale moment that's drawing attention across the security and AI dev communities.
How to use it
1. Clone the repo: `git clone https://github.com/ChinaSiro/claude-code-sourcemap && cd claude-code-sourcemap/restored-src/src`
2. Browse the coordinator pattern for multi-agent orchestration: `cat coordinator/index.ts` — this shows how Claude Code fans out sub-tasks to parallel agents.
3. Study tool implementation contracts by reading any file in `tools/` (e.g., `tools/BashTool.ts`) to see the exact input schema, permission model, and streaming response handling Anthropic uses.
4. Cross-reference the MCP service (`services/mcp*.ts`) to understand how Claude Code integrates Model Context Protocol for external tool calls — directly applicable if you're building MCP-compatible agents.
5. Use the `commands/` directory as a reference for CLI command structure if you're building your own AI CLI with tools like `commander` or `ink`.
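The "streaming tool calls with cancellation" pattern mentioned in the use case is worth sketching, since it recurs in every agent in this list. The snippet below illustrates the general technique with the standard `AbortController` API and a fake token stream; it is not Claude Code's actual implementation.

```typescript
// A producer that stops yielding as soon as the consumer cancels.
async function* fakeTokenStream(signal: AbortSignal): AsyncGenerator<string> {
  const tokens = ['const', ' x', ' =', ' 42', ';'];
  for (const t of tokens) {
    if (signal.aborted) return; // stop producing immediately on cancel
    yield t;
  }
}

// Consume tokens, cancelling mid-stream after `limit` of them.
async function collect(limit: number): Promise<string> {
  const controller = new AbortController();
  let out = '';
  let count = 0;
  for await (const tok of fakeTokenStream(controller.signal)) {
    out += tok;
    if (++count >= limit) controller.abort(); // user hit Esc, for example
  }
  return out;
}

collect(3).then((s) => console.log(s)); // const x =
```

With a real LLM stream you would additionally pass the same `signal` into the HTTP request so the network call is torn down too, not just the local loop.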
How I could use this
- Build a blog post series called 'Inside Claude Code' — pick one module per post (e.g., the KAIROS assistant mode in `assistant/`, or the multi-agent coordinator) and write an annotated walkthrough. This would rank well on Google since almost nobody has explained the internals in plain English yet, and it positions Henry as an AI-systems thinker rather than a tutorial blogger.
- Extract the tool permission model from `tools/` and adapt it for Henry's own AI blog assistant — implement the same 'allow/deny per tool call' pattern so his blog's AI writing helper asks for confirmation before running code snippets or modifying draft posts, rather than acting autonomously.
- Study the `plugins/` and `skills/` directories to design a modular AI feature system for his blog: each AI capability (SEO suggestions, auto-tagging, related posts, grammar checks) is a registered skill with its own schema, enabled or disabled per post. This mirrors production architecture and makes a compelling portfolio piece to show engineering depth to potential employers.
6. Kuberwastaken/claurst
7,349 stars this week · Rust
A clean-room Rust reimplementation of Claude Code (Anthropic's CLI coding agent) built from behavioral specs reverse-engineered after the original TypeScript source leaked via an npm sourcemap.
Use case
If you want a self-hostable, auditable terminal coding agent without depending on Anthropic's proprietary Claude Code binary, this gives you that — in Rust with no licensing ambiguity. Concrete example: run it locally on your own machine, point it at your Next.js blog repo, and let it execute multi-file edits via Claude's API without touching Anthropic's closed toolchain.
Why it's trending
The npm sourcemap leak of Claude Code's entire TypeScript source on March 31st went viral in dev circles, and this repo surfaced immediately as both a technical breakdown of what was found AND a working clean-room alternative — two reasons to click in one.
How to use it
1. Clone the repo: `git clone https://github.com/kuberwastaken/claurst && cd claurst`
2. Read `spec/` first — the behavioral specs are the real gold here. They document exactly how Claude Code structures tool calls, context windows, and multi-step agent loops.
3. Build the Rust implementation: `cd src-rust && cargo build --release` (requires stable Rust; set `ANTHROPIC_API_KEY` in your env).
4. Run against a target directory: `./target/release/claurst --dir /path/to/your/project 'refactor all API routes to use async/await'`
5. Study the spec files alongside the src to understand the agent loop — this is more valuable than the binary itself for building your own AI tooling.
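To give a feel for what a behavioral spec's "tool-call contract" amounts to in code, here is a small sketch. The field names and contract shape are our own illustration, not copied from claurst's `spec/` directory: the point is that every tool call is validated against a declared contract before execution.

```typescript
// A tool call as the model emits it, and the contract it must satisfy.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

interface ToolContract {
  name: string;
  requiredArgs: string[];
}

// Return a list of violations; an empty list means the call is well-formed.
function validateCall(call: ToolCall, contracts: ToolContract[]): string[] {
  const contract = contracts.find((c) => c.name === call.tool);
  if (!contract) return [`unknown tool: ${call.tool}`];
  return contract.requiredArgs
    .filter((a) => !(a in call.args))
    .map((a) => `missing arg: ${a}`);
}

const contracts: ToolContract[] = [
  { name: 'file_edit', requiredArgs: ['path', 'patch'] },
];

console.log(validateCall({ tool: 'file_edit', args: { path: 'a.ts' } }, contracts));
// [ 'missing arg: patch' ]
```

Feeding violations back to the model as an observation, rather than crashing, is what lets the agent loop self-correct a malformed call on the next turn.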
How I could use this
- Write a technical post on your blog dissecting the `spec/` directory — specifically the tool-call contracts and context management strategies. This is rare documented insight into how a production AI coding agent is architected, and a breakdown aimed at Next.js/TS developers would get significant traction right now while the leak is hot.
- Fork the spec layer and adapt the agent loop design to build a focused 'PR review bot' for your own projects — a CLI tool that takes a git diff, feeds it to Claude with the structured tool contracts from the spec, and outputs a structured code-review JSON you can pipe into a GitHub Action comment.
- Embed a lightweight version of the agent loop concept into your blog's admin panel in Supabase Edge Functions — paste a blog draft, trigger a Claude call using the multi-step tool pattern from the spec (read file → analyze → suggest edits), and get structured revision suggestions back as a JSON object you render in the UI.
7. Gitlawb/openclaude
7,342 stars this week · TypeScript
OpenClaude decouples Claude Code's agentic coding tools (bash, file edit, grep, MCP agents) from Anthropic's API so you can run the same terminal-native coding agent against any OpenAI-compatible model.
Use case
Claude Code's agentic loop is genuinely powerful — multi-step file edits, bash execution, glob/grep, task planning — but it's locked to Anthropic billing. OpenClaude solves the vendor lock-in: you can point the same toolchain at DeepSeek-R2 for cheap bulk refactors, Gemini 2.5 Pro for long-context codebase analysis, or a local Llama model via Ollama for offline/private work. Concrete example: run a repo-wide TypeScript migration task overnight using DeepSeek at ~10x lower cost than Claude Sonnet.
Why it's trending
It dropped immediately after the Claude Code source leak via npm source maps on March 31, 2026 — the timing made it instantly viral among developers who wanted Claude Code's UX without Anthropic's pricing or API rate limits. The addition of Ollama/local inference and Apple Silicon support via Atomic Chat widened the appeal to privacy-focused and offline-first developers this week.
How to use it
1. Install globally: `npm install -g @gitlawb/openclaude`
2. Set your provider env vars — for DeepSeek: `export CLAUDE_CODE_USE_OPENAI=1 && export OPENAI_BASE_URL=https://api.deepseek.com/v1 && export OPENAI_API_KEY=your-deepseek-key && export OPENAI_MODEL=deepseek-coder`
3. Navigate to your project root and run `openclaude`.
4. Use it exactly like Claude Code — give it a natural-language task like 'Refactor all API route handlers in /app/api to use the new Supabase SSR client' and let the agent plan, edit, and verify.
5. For local/offline use, install Ollama, pull a model (`ollama pull qwen2.5-coder:32b`), set `OPENAI_BASE_URL=http://localhost:11434/v1` and `OPENAI_MODEL=qwen2.5-coder:32b`, then run `openclaude`.
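The env-var switching in the steps above is the whole trick behind OpenAI-compatible provider swapping: only the base URL, key, and model name change. Here is a small sketch of that selection logic; the variable names mirror the ones above, but the routing function itself is our illustration, not OpenClaude's source.

```typescript
// Pick an OpenAI-compatible provider from environment-style config.
interface Provider {
  baseUrl: string;
  model: string;
}

function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.CLAUDE_CODE_USE_OPENAI === '1') {
    return {
      baseUrl: env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
      model: env.OPENAI_MODEL ?? 'gpt-4o',
    };
  }
  // Default: Anthropic's native endpoint (model name illustrative).
  return { baseUrl: 'https://api.anthropic.com', model: 'claude-sonnet' };
}

// Same code path serves DeepSeek, Ollama, or any other compatible endpoint.
const p = pickProvider({
  CLAUDE_CODE_USE_OPENAI: '1',
  OPENAI_BASE_URL: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible port
  OPENAI_MODEL: 'qwen2.5-coder:32b',
});
console.log(p.baseUrl); // http://localhost:11434/v1
```

This is also why the cost-arbitrage idea below works: escalating from a cheap model to an expensive one is just calling the same function with different config.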
How I could use this
- Wire OpenClaude to a local Ollama model and set it loose on Henry's Next.js blog repo with a task like 'audit every page component for missing OpenGraph meta tags and add them' — fully offline, no API costs, and it produces a git-diffable result he can review before committing.
- Build a cost-arbitrage CI script: use OpenClaude with DeepSeek for first-pass code review on PRs (cheap, fast) and only escalate to GPT-4o or Claude when the DeepSeek agent flags high-complexity changes — gives Henry a tiered AI review pipeline for his career tools projects at a fraction of the cost.
- Use OpenClaude's MCP agent support to build a 'blog post to working demo' pipeline: give it a markdown post about a coding concept, have it scaffold a runnable CodeSandbox-ready example in `/public/demos/[slug]/`, then auto-update the MDX frontmatter with a `demoPath` field — turning every technical post into an interactive demo without manual work.
8. titanwings/colleague-skill
5,395 stars this week · Python
colleague.skill converts a departing coworker's chat logs, docs, and emails into a Claude-powered AI agent that can write code in their style, answer questions with their knowledge, and replicate their working patterns.
Use case
When a key engineer leaves, they take years of undocumented context with them — their naming conventions, their code review opinions, which edge cases they always caught. This repo ingests raw materials (Slack exports, emails, Markdown docs, screenshots) and synthesizes a persistent Claude skill that can be queried as a stand-in. Example: your Slack-integrated backend lead quits Friday; by Monday you've got a Claude agent that knows their API design patterns and can answer 'how would [Name] have handled auth here?'
Why it's trending
Viral due to its darkly funny premise hitting a collective nerve — the README quote calling AI devs 'code traitors who killed frontend, backend, and testing engineers' is sardonic self-aware commentary on layoffs and AI displacement, shared heavily in Chinese developer communities on WeChat and X this week. The companion ex-skill repo (for modeling ex-partners) amplified its spread as a meme.
How to use it
1. Clone and install: `git clone https://github.com/titanwings/colleague-skill && cd colleague-skill && pip install -r requirements.txt`
2. Export your source data — for Slack, run the built-in API collector with your bot token: `python collect/slack.py --token xoxb-xxx --user @username --output ./data/`
3. Add a subjective description in `config/colleague.yaml` — writing things like 'always blamed infra when deploys failed, wrote defensive comments in PRs, preferred composition over inheritance'.
4. Run the skill builder: `python build_skill.py --sources ./data/ --config ./config/colleague.yaml --output ./skills/colleague_v1.skill`
5. Load the generated skill into Claude Code or the AgentSkills runtime and query it: 'How would [Name] structure this Redis caching layer?'
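Conceptually, the export and build steps above fold exported messages plus the subjective description into a persona prompt. The real repo produces a richer `.skill` artifact; the sketch below is a deliberately minimal illustration with made-up field names, not its actual format.

```typescript
// Minimal persona-prompt assembly: subjective notes + sample messages.
interface ColleagueProfile {
  name: string;
  description: string;      // the subjective notes from the YAML config
  sampleMessages: string[]; // exported Slack/email snippets
}

function buildSystemPrompt(p: ColleagueProfile): string {
  return [
    `You are an AI stand-in for ${p.name}.`,
    `Working style: ${p.description}`,
    'Representative past messages:',
    ...p.sampleMessages.map((m) => `- ${m}`),
  ].join('\n');
}

const prompt = buildSystemPrompt({
  name: 'Alex',
  description: 'prefers composition over inheritance, defensive PR comments',
  sampleMessages: ["Let's keep this handler pure.", 'What if the token expires here?'],
});
console.log(prompt.split('\n').length); // 5
```

For anything beyond a toy, you would retrieve the most relevant past messages per query (RAG) instead of inlining them all, which is exactly the 'blog voice skill' idea below.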
How I could use this
- Build a 'blog voice skill' from Henry's own past writing — feed all existing blog posts as Markdown into the skill builder to create a Claude agent trained on his specific tone, technical depth, and sentence structure, then use it as a first-draft generator that sounds like him rather than generic ChatGPT prose.
- Create a 'past-Henry code reviewer' skill from old GitHub PR comments and commit messages, then wire it into a VS Code extension or pre-commit hook that flags code Henry-himself would have rejected — a personalized linting layer beyond ESLint rules.
- Build a 'project memory' feature for the blog where readers can ask questions about any past article and get answers in the author's voice — ingest all blog posts + comment thread responses into a skill, expose it via a Supabase Edge Function, and render a chat UI component on each post page using the skill as the RAG backbone.
9. tvytlx/ai-agent-deep-dive
4,389 stars this week · Python
A Chinese-language deep-dive PDF report + minimal teaching Python agent that strips away framework magic to show you exactly how an LLM agent main loop, skill discovery, and CLI wiring actually work.
Use case
Most developers using LangChain or CrewAI have no idea what's happening under the hood — when something breaks or behaves unexpectedly, they're lost. This repo gives you a ~200-line reference implementation of the agent core loop (plan → tool call → observe → repeat) with a swappable fake LLM, so you can study the skeleton before bolting on OpenAI or Anthropic. Concrete scenario: you want to build a blog-writing agent that researches a topic, drafts sections, and self-edits — but you need to understand the loop structure before wiring up real tools.
Why it's trending
The AI agent space just hit an inflection point where every team is building agents but almost no one understands the internals — this repo fills that gap in Chinese (huge underserved audience) and went viral on Chinese tech Twitter/WeChat this week. The v2 PDF drop is what spiked the star count.
How to use it
1. Clone the repo and install deps: `git clone https://github.com/tvytlx/ai-agent-deep-dive && cd ai-agent-deep-dive && poetry install`
2. Run the minimal agent CLI to see the main loop in action with the fake LLM: `poetry run agt '写一篇关于React hooks的博客'` (the prompt means 'write a blog post about React hooks').
3. Open `src/agt/agent.py` and trace the main loop — find where the LLM interface is called (the fake LLM stub) and swap it with your real OpenAI call: `response = openai.chat.completions.create(model='gpt-4o', messages=messages)`
4. Add a custom skill by dropping a Python file into `./skills/` and verifying it's discovered: `poetry run agt --skills-dir ./skills --list-skills`
5. Read the PDF (`ai-agent-deep-dive-v2.pdf`) alongside the code — it maps architectural decisions to specific line numbers, which is the actual learning payoff.
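The plan → tool call → observe → repeat loop the repo teaches fits in a few dozen lines once the LLM is faked. The sketch below mirrors that idea (a scripted fake LLM so the control flow is visible without an API key); it is our own TypeScript rendering of the concept, not a port of the repo's `agent.py`.

```typescript
// The agent either requests a tool call or produces a final answer.
type Step = { tool: string; input: string } | { answer: string };

// Fake LLM: returns a canned plan step per turn instead of calling a model.
function fakeLlm(history: string[]): Step {
  if (history.length === 0) return { tool: 'search', input: 'React hooks' };
  if (history.length === 1) return { tool: 'outline', input: history[0] };
  return { answer: `draft based on: ${history[history.length - 1]}` };
}

// Two toy skills standing in for real tool implementations.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `notes about ${q}`,
  outline: (notes) => `outline from ${notes}`,
};

function runAgent(maxTurns = 5): string {
  const observations: string[] = [];
  for (let i = 0; i < maxTurns; i++) {
    const step = fakeLlm(observations);          // plan
    if ('answer' in step) return step.answer;    // done
    const result = tools[step.tool](step.input); // tool call
    observations.push(result);                   // observe, then repeat
  }
  return 'gave up';
}

console.log(runAgent());
// draft based on: outline from notes about React hooks
```

Swapping `fakeLlm` for a real chat-completions call (and the toy skills for real ones) is exactly the upgrade path steps 3 and 4 describe.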
How I could use this
- Blog post generator agent: fork the teaching agent, replace the fake LLM with GPT-4o, and add three skills — `search_web(query)` via the Tavily API, `fetch_url(url)` for scraping references, and `write_to_supabase(slug, content)` to draft posts directly into your blog's CMS. Document the build process as a meta-blog post showing readers the exact agent loop that wrote it.
- Career tools — resume gap analyzer agent: wire the agent loop to accept a raw job description as input, add a `parse_resume(pdf_path)` skill using PyMuPDF, and a `compare_skills(resume_skills, jd_skills)` skill that calls GPT-4o with a structured prompt. The agent iterates until it produces a ranked gap list with suggested courses — expose this as a Supabase Edge Function your portfolio site calls.
- AI blog feature — 'autonomous post researcher': build a Next.js UI where Henry inputs a topic and a Supabase Edge Function spins up this agent with skills for the Hacker News API, arXiv abstract fetching, and GitHub trending scraping. The agent runs its loop server-side, streams intermediate 'thinking' steps back via SSE to a `<Suspense>`-wrapped component, and saves the final research brief to Supabase for Henry to edit and publish — making the agent's reasoning process itself part of the reading experience.
10. emdash-cms/emdash
4,207 stars this week · TypeScript
EmDash is a TypeScript-native CMS on Astro + Cloudflare that replicates WordPress's extensibility (plugins, admin UI, content types) without PHP, shared hosting, or unsandboxed plugin security holes.
Use case
If you're running a Next.js blog but want a proper admin panel, plugin ecosystem, and structured content management without paying for Contentful or wrestling with WordPress on a VPS, EmDash gives you a self-hostable CMS where plugins run in isolated Worker sandboxes — so a rogue third-party plugin can't read your database. Concrete example: you want an AI-tagging plugin that auto-categorizes posts on publish, but you don't want it to have unrestricted DB access.
Why it's trending
It dropped this week as a direct 'WordPress replacement' narrative at a moment when WordPress's governance drama is still fresh and developers are actively looking for TypeScript-native alternatives with modern deployment targets. The Cloudflare-first approach also aligns with the current serverless cost-optimization wave.
How to use it
1. Scaffold a new project: `npm create emdash@latest` — choose the Blog template when prompted.
2. Configure your Cloudflare credentials in `wrangler.jsonc` (D1 database name, R2 bucket, Worker name). If you're on a free plan, comment out the `worker_loaders` block to disable sandboxed plugins.
3. Run locally with `npx wrangler dev` — EmDash uses D1's local SQLite emulation, so no cloud round-trips during development.
4. Deploy: `npx wrangler deploy` pushes the Astro site + Workers to Cloudflare's edge in one command.
5. Access `/admin` to create content types, write posts, and install plugins from the registry — all in a typed admin UI.
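A plugin system like this is usually built on lifecycle hooks: plugins register callbacks for named events such as post-publish, and the host fires them in order. The hook names and API below are our own illustration of the pattern, not EmDash's actual plugin interface (which additionally runs each plugin in an isolated Worker sandbox).

```typescript
// Minimal lifecycle-hook host: plugins subscribe to named events.
type Hook = 'post-save' | 'post-publish';
type Post = { slug: string; body: string };
type Handler = (post: Post) => void;

class PluginHost {
  private handlers: Record<Hook, Handler[]> = {
    'post-save': [],
    'post-publish': [],
  };

  on(hook: Hook, handler: Handler) {
    this.handlers[hook].push(handler);
  }

  emit(hook: Hook, post: Post) {
    for (const h of this.handlers[hook]) h(post);
  }
}

const host = new PluginHost();
const published: string[] = [];
// A plugin might kick off embeddings or an AI summary here.
host.on('post-publish', (post) => published.push(post.slug));
host.emit('post-publish', { slug: 'hello-world', body: '...' });
console.log(published); // [ 'hello-world' ]
```

The sandbox part is what this sketch omits: in EmDash's model each handler would run in its own Worker, so a misbehaving plugin can only see what the host explicitly passes it.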
How I could use this
- Migrate Henry's existing Next.js blog content into EmDash's Blog template and use its plugin API to build a custom 'AI Summary' plugin — a sandboxed Worker that calls OpenAI on post-save and writes a 2-sentence TL;DR back to a custom `ai_summary` field, rendered in the post header.
- Use EmDash's Marketing template to build a personal career landing page with a live 'Open to Work' toggle stored in D1 — flip it from the admin panel and it instantly updates a banner on the public site, no redeployment needed.
- Write an EmDash plugin that hooks into the post-publish lifecycle, sends the new post body to an embeddings API (e.g., Supabase pgvector or Cloudflare Vectorize), and powers a semantic 'Related Posts' widget — all within the sandboxed Worker so the embedding API key never touches the main app runtime.