Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. ultraworkers/claw-code
161,962 stars this week · Rust
A Rust-based reverse-engineered harness runtime for Claude (Anthropic's 'Claw'), providing low-level API client, session management, and MCP orchestration outside of official SDKs.
Use case
This repo solves the problem of having no programmatic access to Claude's internal session/compaction mechanics and tool orchestration beyond what Anthropic exposes officially. For example, if you want to build a long-running agentic loop that manages context compaction yourself, streams tool calls with custom middleware, or orchestrates multiple Claude sessions with shared state — the official SDK doesn't give you that control. This harness runtime exposes those internals.
Why it's trending
This repo gamed GitHub's star system — it almost certainly used coordinated star-bombing (161K stars in days is statistically impossible organically) and is likely a viral stunt or scam repo riding the Claude hype wave. The 'leaked Claw Code' framing is clickbait; there is no actual Anthropic source leak here. Treat with extreme skepticism before running any code from it.
How to use it
- Do NOT run this code blindly — the repo's provenance is suspect (ownership transfer mid-viral-surge, star manipulation signals, vague 'leaked' branding).
- If you want to audit it safely, clone it in an isolated VM: `git clone https://github.com/ultraworkers/claw-code-parity && cd claw-code-parity`
- Read `crates/api-client/src/lib.rs` specifically to understand what API surface it's actually hitting — check whether it's proxying through unofficial endpoints or just wrapping `api.anthropic.com`.
- Cross-reference against the official Anthropic TypeScript SDK (github.com/anthropics/anthropic-sdk-typescript) to see if this adds anything real.
- If the MCP orchestration in `crates/runtime` is legitimate, the only safe extraction is copy-pasting the session compaction logic as a reference implementation — never install it as a dependency.
How I could use this
- Use the session compaction logic (if legitimate) as a reference to implement your own context-window management in your blog's AI chat feature — when a reader's conversation with your AI assistant hits ~80% of Claude's context limit, summarize and compact older turns in Supabase before continuing, instead of hard-cutting the conversation.
- Study the `crates/tools` tool manifest definitions as a schema reference for building your own MCP-compatible tool registry for your blog — define tools like 'search_posts', 'get_resume_section', or 'fetch_project_details' in a structured manifest so Claude can call them deterministically in your career assistant.
- If the OAuth flow in `crates/api-client` reveals how Claude.ai's web client authenticates (not the API key flow), you could prototype a blog feature that lets readers 'bring their own Claude account' via OAuth rather than burning your API credits — though verify this is within Anthropic's ToS before shipping.
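The compaction idea in the first bullet can be sketched without touching the repo at all. A minimal TypeScript sketch, assuming a rough 4-characters-per-token estimate (not Anthropic's real tokenizer) and a stubbed `summarize` step that would be a Claude call in practice:

```typescript
type Turn = { role: "user" | "assistant"; content: string };

// Rough token estimate: ~4 characters per token (an assumption, not a real tokenizer).
const estimateTokens = (turns: Turn[]) =>
  Math.ceil(turns.reduce((n, t) => n + t.content.length, 0) / 4);

// Compact older turns into one summary turn once usage crosses the threshold.
// `summarize` is a stand-in for a real summarization call (e.g. to Claude).
function compactIfNeeded(
  turns: Turn[],
  contextLimit: number,
  summarize: (old: Turn[]) => string,
  threshold = 0.8,
  keepRecent = 4,
): Turn[] {
  if (estimateTokens(turns) < contextLimit * threshold) return turns;
  const old = turns.slice(0, -keepRecent);
  const recent = turns.slice(-keepRecent);
  if (old.length === 0) return turns;
  return [
    { role: "assistant", content: `Summary of earlier conversation: ${summarize(old)}` },
    ...recent,
  ];
}
```

In a real chat feature the compacted turns would be persisted to Supabase and passed back as the conversation history on the next request.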
2. claude-code-best/claude-code
12,983 stars this week · TypeScript
A reverse-engineered, fully buildable TypeScript source of Anthropic's Claude Code CLI — with type fixes, enterprise monitoring, and restrictions removed.
Use case
Anthropic ships Claude Code as a minified, obfuscated binary with no public source. This repo reverse-engineers it into readable, debuggable TypeScript so you can actually understand how agentic coding loops, tool calls, and context management work under the hood — and fork it to build your own Claude-powered CLI agent without the black box. Concrete example: you want to add a custom 'blog post generator' tool that hooks into Claude's existing file-editing pipeline, which you can't do with the official binary.
Why it's trending
It hit 12k+ stars this week riding the wave of Claude Code's explosive growth as developers' primary AI coding tool, combined with frustration that Anthropic keeps the source closed. The V5 release specifically removes Anthropic's anti-distillation code and adds Bing web search — two controversial features that triggered massive discussion on dev Twitter and Hacker News.
How to use it
- Install Bun if you haven't: `curl -fsSL https://bun.sh/install | bash`
- Clone and install: `git clone https://github.com/claude-code-best/claude-code && cd claude-code && bun i`
- Set your API key: `export ANTHROPIC_API_KEY=sk-ant-...`
- Run in dev mode: `bun run dev` — this launches the interactive CLI with full source maps and debuggable stack traces
- Add a custom tool by finding the tool registration pattern in `src/tools/` and injecting your own (e.g., a Supabase query tool that fetches blog metadata) — then step through it with `bun --inspect run dev`
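The custom-tool step above can be sketched as a minimal registry pattern. This is illustrative only; the repo's actual registration API in `src/tools/` may differ, and the `blog_post_meta` tool is a hypothetical stub standing in for a real Supabase query:

```typescript
// Minimal tool-registry pattern (illustrative; the real registration API may differ).
type Tool = {
  name: string;
  description: string;
  run: (input: Record<string, unknown>) => Promise<string>;
};

const registry = new Map<string, Tool>();
const registerTool = (t: Tool) => registry.set(t.name, t);

// Hypothetical custom tool: fetch blog metadata (stubbed; a real version would query Supabase).
registerTool({
  name: "blog_post_meta",
  description: "Fetch title and tags for a blog post by slug",
  run: async (input) => JSON.stringify({ slug: input.slug, title: "stub", tags: [] }),
});

async function callTool(name: string, input: Record<string, unknown>): Promise<string> {
  const tool = registry.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.run(input);
}
```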
How I could use this
- Fork the tool-calling layer to build a 'blog maintenance agent' — point it at Henry's Next.js repo and let it autonomously fix broken MDX frontmatter, update stale links, or regenerate OG images by adding a custom tool that calls his Supabase posts table directly.
- Study how CCB implements its agentic loop (plan → tool call → observe → repeat) and replicate the same pattern for a career tool: a resume-tailoring agent that reads a job description URL via the web search tool, diffs it against Henry's stored resume in Supabase, and outputs a specific list of lines to change — not a generic rewrite.
- Use the now-readable source to understand Claude's context window management and build a 'long blog post co-author' feature: when a draft exceeds ~4k tokens, automatically summarize earlier sections into the system prompt using the same summarization strategy CCB uses to keep agentic sessions alive across long coding tasks.
3. Gitlawb/openclaude
11,328 stars this week · TypeScript
OpenClaude is a provider-agnostic coding agent CLI that gives you Claude Code's terminal-first workflow (bash tools, file editing, MCP, agents) without being locked to Anthropic — swap in GPT-4o, Gemini, DeepSeek, or a local Ollama model with one env var.
Use case
Developers who want Claude Code's agentic coding experience but need flexibility — cost control via DeepSeek, air-gapped local runs via Ollama, or free quota via GitHub Models — without relearning a new tool. Concrete example: Henry is iterating on his blog's Supabase schema at 2am, hits his Anthropic rate limit, and switches to a local qwen2.5-coder model mid-session by changing one env var instead of context-switching to a different UI entirely.
Why it's trending
It's riding the exact wave of Claude Code's viral growth while directly solving its main pain point — vendor lock-in and cost. Shipping a drop-in OSS alternative the week Claude Code goes mainstream is textbook timing.
How to use it
- Install globally: `npm install -g @gitlawb/openclaude` and confirm `rg --version` works (ripgrep is required for grep tooling).
- Set your provider via env vars — for OpenAI: `export CLAUDE_CODE_USE_OPENAI=1 OPENAI_API_KEY=sk-xxx OPENAI_MODEL=gpt-4o`; for local Ollama: `export OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_MODEL=qwen2.5-coder:7b`
- Run `openclaude` in your Next.js project root — it reads the file tree the same way Claude Code does.
- Use `/provider` inside the session to save named profiles (e.g., 'ollama-local', 'openai-prod') so you can switch without re-exporting env vars each time.
- Use slash commands like `/task` or MCP integrations to wire it into multi-step agentic workflows, e.g., 'refactor this API route then write a Vitest test for it'.
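The env-var switching described above boils down to a small resolver. A sketch under the assumption that the variables behave as listed in the steps (OpenClaude's real precedence rules may differ, and the fallback model id here is a placeholder):

```typescript
type ProviderConfig = { baseUrl: string; model: string; apiKey?: string };

// Resolve a provider from env vars, mirroring the switching described above.
// Precedence here (base URL > OpenAI flag > Anthropic default) is an assumption.
function resolveProvider(env: Record<string, string | undefined>): ProviderConfig {
  if (env.OPENAI_BASE_URL) {
    // OpenAI-compatible endpoint, e.g. Ollama at http://localhost:11434/v1
    return { baseUrl: env.OPENAI_BASE_URL, model: env.OPENAI_MODEL ?? "qwen2.5-coder:7b" };
  }
  if (env.CLAUDE_CODE_USE_OPENAI === "1") {
    return {
      baseUrl: "https://api.openai.com/v1",
      model: env.OPENAI_MODEL ?? "gpt-4o",
      apiKey: env.OPENAI_API_KEY,
    };
  }
  // Placeholder model id; substitute a real Anthropic model name.
  return { baseUrl: "https://api.anthropic.com", model: "claude-default", apiKey: env.ANTHROPIC_API_KEY };
}
```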
How I could use this
- Wire OpenClaude into a blog post 'draft-to-publish' pipeline: point it at your `/content` directory with an Ollama model, give it a rough outline, and have it generate MDX drafts with proper frontmatter — fully local, no API costs, no data leaving your machine.
- Build a career tool that runs OpenClaude as a sub-process (using its MCP or bash tool) against your resume and a scraped job description to auto-generate a tailored cover letter and a gap-analysis diff — route expensive jobs to GPT-4o and quick drafts to DeepSeek to control cost per run.
- Use OpenClaude's multi-provider profile system to build a 'model benchmarking' feature for your blog: run the same coding prompt against GPT-4o, Gemini Flash, and a local Ollama model, capture streamed outputs, and publish side-by-side comparison posts with real latency and cost data — a repeatable content format that's inherently SEO-useful.
4. openai/codex-plugin-cc
11,275 stars this week · JavaScript
A Claude Code plugin that lets you invoke OpenAI Codex for code reviews and background task delegation without leaving your Claude Code workflow.
Use case
Developers already using Claude Code as their primary agentic coding environment now have a way to get a second AI opinion without context-switching. For example, you're mid-feature on your Supabase RLS policies — you can run /codex:review --background to get an OpenAI Codex review of your uncommitted changes while Claude Code continues working on the next task, then pull the result when it's done.
Why it's trending
This dropped right as the Claude Code vs. Codex rivalry is peaking — developers are actively benchmarking both models on real codebases, and this plugin makes 'dual-AI review' a one-liner instead of a workflow rebuild. The 11k stars in one week signals it hit a nerve with teams already deep in Claude Code.
How to use it
- Inside Claude Code, add the marketplace and install: `/plugin marketplace add openai/codex-plugin-cc`, then `/plugin install codex@openai-codex`, then `/reload-plugins`.
- Run `/codex:setup` — it will detect if Codex CLI is missing and offer to install it via npm automatically.
- Authenticate: `!codex login` (uses your ChatGPT account or OpenAI API key).
- Kick off a background review of your current uncommitted changes: `/codex:review --background`
- Poll with `/codex:status`, then retrieve with `/codex:result` — the output is a structured review you can paste into a PR description or act on inline.
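The status-then-result flow is a classic poll-until-done pattern. A generic sketch with an injected `checkStatus`, since the plugin's real transport is internal to Claude Code:

```typescript
type JobStatus<T> = { done: false } | { done: true; result: T };

// Poll a background job until it completes or the deadline passes.
async function pollUntilDone<T>(
  checkStatus: () => Promise<JobStatus<T>>,
  intervalMs = 50,
  timeoutMs = 5000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const status = await checkStatus();
    if (status.done) return status.result;
    if (Date.now() >= deadline) throw new Error("timed out waiting for background job");
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```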
How I could use this
- Wire `/codex:adversarial-review` into Henry's blog post drafting workflow: before publishing a 'how I built X' post, run an adversarial review on the code samples in the post to catch naive implementations or security issues (e.g., exposed Supabase service keys in client components) — then document the AI's critique as a 'gotchas' section in the post itself.
- Build a lightweight GitHub Action for Henry's portfolio repos that triggers `/codex:rescue` on any PR that's been stalled (no commits in 48h), delegates the blocking problem to Codex as a background job, and posts the result as a PR comment — making it look like Henry has an AI pair programmer actively maintaining his open-source work.
- Use the dual-model review pattern as a blog feature: create a 'Code Review Duel' series where Henry submits the same Next.js or Supabase code snippet to both Claude and Codex via this plugin, captures both structured reviews, and renders them side-by-side using a React diff-viewer component — turning the AI rivalry into evergreen, high-engagement content.
5. sanbuphy/learn-coding-agent
11,177 stars this week · various
A reverse-engineered architectural breakdown of Claude Code's CLI agent internals — tool systems, permission flows, and the 12 harness mechanisms that make it production-grade.
Use case
If you're building your own AI coding agent or CLI tool and wondering why your agent feels brittle compared to Claude Code, this repo reverse-engineers exactly how Anthropic layers reliability on top of the raw agent loop. Concrete example: you're building an AI blog post generator that can also edit files and run shell commands — this repo shows you how to structure tool permissions, sub-agent delegation, and state management so it doesn't go off the rails.
Why it's trending
Claude Code just hit mainstream adoption this month and developers are obsessed with replicating its 'magic' — this repo is the closest thing to a teardown manual that exists publicly, which is why it exploded to 11k stars in a week.
How to use it
- Clone the repo and start with `docs/en/` — read the Architecture Overview doc first to understand the Entry → Query Engine → Tools/Services/State pipeline before diving into specifics.
- Study the Tool System & Permissions section to understand how Claude Code sandboxes 40+ tools with a permission flow — map this to your own tool list (file I/O, Supabase queries, shell commands).
- Implement the core agent loop pattern from the architecture: `while (!done) { const action = await llm.decide(state); state = await tools[action.tool].execute(action.params, permissions); }` — note the explicit permissions object passed to every tool.
- Apply the '12 Progressive Harness Mechanisms' doc to layer on production features one at a time: start with tool-call retries, then add state rollback, then add user confirmation gates for destructive actions.
- Use the telemetry and 'undercover mode' analysis docs to understand what observability hooks to add so you can debug agent failures in production — critical before shipping any agent feature to real users.
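The agent-loop snippet above can be fleshed out into a runnable sketch with the permission gate made explicit. Every name here (the `Decision` shape, the allow-list check) is illustrative, not Claude Code's actual internals:

```typescript
type Decision = { tool: string; params: Record<string, unknown>; done?: boolean };
type ToolFn = (params: Record<string, unknown>) => Promise<string>;

async function runAgent(
  decide: (history: string[]) => Promise<Decision>, // stands in for the LLM call
  tools: Record<string, ToolFn>,
  permissions: Set<string>,                         // explicit allow-list, checked on every call
  maxSteps = 10,
): Promise<string[]> {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = await decide(history);
    if (action.done) break;
    if (!permissions.has(action.tool)) {
      history.push(`denied: ${action.tool}`);       // permission gate, not a silent failure
      continue;
    }
    history.push(await tools[action.tool](action.params));
  }
  return history;
}
```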
How I could use this
- Build a 'Blog Post Agent' for your Next.js blog that can autonomously draft, edit, and publish posts to Supabase — use the permission flow architecture to require explicit confirmation before any INSERT/UPDATE, mirroring how Claude Code gates destructive shell commands.
- Create a CLI career tool (resume-agent) that reads a job description, diffs it against your stored resume in Supabase, and proposes targeted edits — model the sub-agent delegation pattern from this repo so the 'analysis' agent and the 'rewrite' agent are separate with scoped tool access.
- Add an AI coding assistant sidebar to your blog that can answer questions about your own published code snippets — use the Query Engine → Tool architecture to let it read your Supabase `posts` table as a tool, with the harness mechanisms ensuring it never hallucinates code that contradicts what you've actually written.
6. ChinaSiro/claude-code-sourcemap
8,267 stars this week · TypeScript
Reconstructed TypeScript source of Anthropic's official Claude Code CLI (v2.1.88), reverse-engineered from the public npm package's embedded source maps — exposing 4,756 files of a production AI coding agent.
Use case
Developers wanting to understand how a production-grade AI coding agent is actually architected — multi-agent coordination, tool dispatch, plugin systems, voice, vim mode — without access to the private repo. For example, if you're building your own AI coding assistant and want to see how Anthropic implements a Bash tool executor or MCP service layer, you can study the restored source directly rather than guessing from docs.
Why it's trending
Claude Code just hit mainstream adoption as a serious competitor to GitHub Copilot and Cursor, and developers are desperate to understand its internals — this repo dropped the entire source tree this week, triggering massive curiosity from AI tooling builders. It also surfaced a security/opsec lesson: shipping source maps in npm packages leaks your full TypeScript source to anyone who knows to look.
How to use it
- Clone the repo: `git clone https://github.com/ChinaSiro/claude-code-sourcemap` and navigate to `restored-src/src/`.
- Browse `tools/` to study concrete implementations — e.g., how the Bash tool sanitizes and executes shell commands with timeout/kill logic, or how FileEdit implements surgical diff-based edits.
- Study `coordinator/` for the multi-agent orchestration pattern — this shows how Claude Code spawns sub-agents, tracks context windows, and merges results.
- Read `services/` (especially the MCP service) to understand how Model Context Protocol is wired in a real CLI tool.
- Cross-reference with `commands/` (e.g., `review.ts`, `commit.ts`) to see how user-facing commands map to tool calls — useful for replicating similar command flows in your own Next.js API routes.
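The timeout/kill behavior described for the Bash tool can be approximated in a few lines with Node's `child_process`. This is a generic sketch of the pattern, not the restored source's implementation:

```typescript
import { execFile } from "node:child_process";

// Run a command with a hard timeout; on expiry Node kills the child (SIGTERM by default).
// Note: execFile does not go through a shell, which avoids injection via arguments.
function runWithTimeout(cmd: string, args: string[], timeoutMs: number): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(cmd, args, { timeout: timeoutMs }, (err, stdout) => {
      if (err) return reject(err.killed ? new Error(`timed out after ${timeoutMs}ms`) : err);
      resolve(stdout.trim());
    });
  });
}
```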
How I could use this
- Clone the `commands/review.ts` pattern to build a 'Review My Post' button on Henry's blog — wire a Next.js API route that sends a draft post to Claude with a structured prompt mimicking Claude Code's review command, then stream back editorial suggestions (clarity, SEO, technical accuracy) inline in the editor.
- Study the `coordinator/` multi-agent pattern and implement a career tool that spawns parallel Claude calls: one agent scores a resume against a job description, another rewrites bullet points, and a third generates a cover letter — then a coordinator merges the outputs into a final package, mirroring how Claude Code aggregates sub-agent results.
- Replicate the `plugins/` and `skills/` architecture to build a modular AI feature system for the blog — each 'skill' (e.g., auto-tagging, reading-time estimation, related-post suggestion) is a self-contained module registered at startup, letting Henry add or remove AI features without touching core blog logic, exactly how Claude Code extends its own capabilities.
7. Kuberwastaken/claurst
7,770 stars this week · Rust
A clean-room Rust reimplementation of Claude Code's terminal coding agent — faster, no telemetry, with locked experimental features unlocked.
Use case
Claude Code's official TypeScript binary phones home and hides experimental features behind flags you can't touch. Claurst solves this by reimplementing the same agentic loop in Rust — lower memory footprint, no tracking, and full access to experimental capabilities. Concrete scenario: you want an always-on terminal agent that autonomously edits your Next.js codebase without burning RAM or sending usage data to Anthropic.
Why it's trending
The Claude Code source leak happened this week, and this repo surfaced as the most credible clean-room reverse-engineering response — it went viral because it shows the internals of how Claude Code's agentic loop actually works, which every AI tooling developer wants to understand right now.
How to use it
- Clone and build: `git clone https://github.com/Kuberwastaken/claurst && cd claurst && cargo build --release` — requires the stable Rust toolchain via rustup.
- Set your Anthropic API key: `export ANTHROPIC_API_KEY=sk-ant-...`
- Run against your project: `./target/release/claurst --project /path/to/your/nextjs-blog`
- Issue natural-language tasks at the terminal prompt, e.g. 'Refactor the /api/posts route to use Supabase edge functions and add error handling'
- Read the `spec/` directory in the repo first — it's a goldmine documenting exactly how the tool-use loop, file diffing, and context windowing work, which lets you customize or extend the agent.
How I could use this
- Run Claurst locally against your blog's codebase as a background refactoring agent — pipe its output to a git branch and auto-open a PR via GitHub CLI. Blog post: 'I let a Rust-based terminal agent refactor my Next.js blog for a week — here's what it changed.'
- Study the `spec/` directory's breakdown of the agentic tool-use loop and build a lightweight TypeScript version of just the file-edit + context-window logic as a reusable npm package for your own AI-powered blog features (e.g. auto-generating MDX from raw notes).
- Use Claurst's unlocked experimental features to build a 'code review as a blog post' pipeline: point the agent at a PR diff, have it generate a structured critique, then auto-publish that critique as a post on your blog via the Supabase content API — turning your real dev work into passive content.
8. titanwings/colleague-skill
6,456 stars this week · Python
colleague.skill turns a departing coworker's chat logs, docs, and emails into a functional AI agent that codes in their style, answers questions in their voice, and knows their working patterns.
Use case
When a key engineer leaves, institutional knowledge walks out the door with them — 3 pages of handoff docs can't capture 3 years of accumulated context. This tool ingests raw artifacts (Slack messages, emails, Feishu/DingTalk threads, Markdown files) and generates a Claude-powered skill that can answer 'how would [person] have handled this edge case?' or even write code in their established style. Concrete example: your backend lead leaves mid-sprint; you feed in 6 months of Slack threads and their design doc archive, and the resulting skill can answer architecture questions and flag decisions they'd have pushed back on.
Why it's trending
It went viral in Chinese tech Twitter/X this week riding the wave of mass layoffs and AI-displacement anxiety — the opening quote ('you AI devs already killed frontend, now you're killing backend, devops, security...') hit a raw nerve, and the darkly humorous framing of 'digital immortality for colleagues' resonated broadly. The companion ex-skill repo for ex-partners amplified the cultural moment.
How to use it
- Clone the repo and install dependencies: `git clone https://github.com/titanwings/colleague-skill && cd colleague-skill && pip install -r requirements.txt`
- Set your Anthropic API key: `export ANTHROPIC_API_KEY=sk-...` (Claude is the backbone model)
- Gather source material — export Slack history via the API, drop .eml files or Markdown docs into a `./data/` folder, or paste text directly when prompted
- Run the skill builder: `python build_skill.py --name 'Alex' --input ./data/ --description 'Senior backend engineer, Go specialist, allergic to ORM abstractions, always pushes back on deadlines'`
- The output is a `.skill` file loadable into Claude Code or the AgentSkills runtime — query it with `python run_skill.py --skill alex.skill --query 'How would Alex structure the caching layer for this service?'`
How I could use this
- Build a 'Blog Voice Preservation' tool for Henry's blog: feed in all past posts and comments, generate a personal writing-style skill so that when drafting new AI-assisted posts, the output matches his actual tone — not generic GPT prose. Expose it as a Claude system prompt that auto-loads in his Supabase-backed draft editor.
- Create a 'Past Henry' career agent: ingest old performance reviews, LinkedIn messages, cover letters, and GitHub commit messages to build a skill that can answer recruiter questions, draft cover letters in his voice, or explain project decisions — essentially a living, queryable CV that goes far beyond a static resume site.
- Add a 'Knowledge Continuity' feature to his blog's Supabase backend: when he writes a new post, the skill built from his historical content automatically flags contradictions with past positions, suggests callbacks to related older posts, and generates a 'what would past-Henry think?' sidebar — useful both as a genuine editorial tool and as a compelling blog feature to write about.
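The writing-style-skill idea largely reduces to assembling a persona system prompt from past writing samples. A toy TypeScript sketch (the repo itself is Python, and all names here are hypothetical):

```typescript
type Doc = { title: string; body: string };

// Build a persona system prompt from writing samples, trimming each to a budget
// so the prompt stays inside the model's context window.
function buildStylePrompt(name: string, samples: Doc[], perSampleChars = 500): string {
  const excerpts = samples
    .map((d) => `### ${d.title}\n${d.body.slice(0, perSampleChars)}`)
    .join("\n\n");
  return [
    `You are a writing assistant that imitates ${name}'s voice.`,
    `Match the tone, sentence rhythm, and vocabulary of the samples below.`,
    excerpts,
  ].join("\n\n");
}
```

A draft editor could generate this prompt once from the posts table and cache it, regenerating only when new posts are published.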
9. emdash-cms/emdash
6,039 stars this week · TypeScript
EmDash is a type-safe, serverless CMS built on Astro + Cloudflare that replicates WordPress's plugin/admin model without PHP, shared hosting, or plugin-level security vulnerabilities.
Use case
If you're running a content-heavy blog and hate managing WordPress security patches or paying for WP Engine, EmDash gives you a familiar admin UI and plugin ecosystem but deployed entirely to Cloudflare Workers + D1 + R2 — no server to babysit. Concrete example: you get categories, tags, full-text search, RSS, and a comment system out of the box, but your 'plugins' run in sandboxed Worker isolates so a rogue plugin can't read your entire database.
Why it's trending
It dropped this week as a direct 'WordPress killer' narrative at a time when WordPress's governance drama and ACF licensing issues have pushed developers to actively seek alternatives — the Cloudflare-native architecture and sandboxed plugin model hit a raw nerve in the community right now.
How to use it
- Scaffold a new project: `npm create emdash@latest` and select the Blog template when prompted.
- Configure your Cloudflare account in `wrangler.jsonc` — point D1 (database), R2 (media), and Workers bindings to resources you create via `wrangler d1 create emdash-db` and `wrangler r2 bucket create emdash-media`.
- Run locally with `wrangler dev` — the admin panel is available at `/admin` with the credentials set during scaffolding.
- Deploy with `wrangler deploy`, or click the one-click Cloudflare deploy button in the README to get a live URL in under 2 minutes.
- If you want plugins, ensure you're on a paid Cloudflare plan ($5/mo) and add plugin entries to the `worker_loaders` block in `wrangler.jsonc`; otherwise comment it out for a plugin-free, static-friendly build.
How I could use this
- Migrate Henry's existing Next.js/Supabase blog's content layer to EmDash as the headless CMS backend — use EmDash's REST/content API to serve posts while keeping the Next.js frontend, giving him a proper admin UI for drafts, categories, and media without building one from scratch.
- Use EmDash's Marketing template as the foundation for a standalone portfolio/career landing page (separate from the blog) — deploy it to a subdomain like `hire.henryblog.com`, populate it with case studies via the admin panel, and wire the contact form to trigger a Cloudflare Worker that logs leads to a Supabase table.
- Write a custom EmDash plugin (sandboxed Worker isolate) that intercepts post save events, calls an OpenAI endpoint to auto-generate a TL;DR summary and suggested tags, then writes them back to the D1 database — giving Henry AI-assisted metadata on every new post without any manual prompt engineering in the editor.
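The save-hook plugin idea is an intercept-enrich-persist pattern. A sketch with the LLM call and the database write injected as stubs, so the hook logic itself stays testable (a real plugin would call an LLM API and write to D1):

```typescript
type Post = { id: string; title: string; body: string; tldr?: string; tags?: string[] };

// Intercept a post save, enrich it with AI-generated metadata, then persist.
// `generateMeta` and `save` are injected so the hook itself stays testable.
async function onPostSave(
  post: Post,
  generateMeta: (body: string) => Promise<{ tldr: string; tags: string[] }>,
  save: (post: Post) => Promise<void>,
): Promise<Post> {
  const meta = await generateMeta(post.body);
  const enriched = { ...post, tldr: meta.tldr, tags: meta.tags };
  await save(enriched);
  return enriched;
}
```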
10. ultraworkers/claw-code-parity
5,034 stars this week · Rust
This repo is almost certainly a viral stunt or honeypot — it claims to be a Rust port of leaked Claude Code source, but the 'fastest repo to 50K stars in 2 hours' claim and the dramatic backstory are major red flags for an astroturfed or fake repo.
Use case
There is no credible, actionable use case here. The repo appears to be riding hype around alleged Claude Code source leaks to farm stars and Discord members. Any 'harness runtime' code it contains is unverified, potentially malicious, and legally risky to run — do not clone or execute anything from it.
Why it's trending
It's trending purely because of manufactured viral momentum — '50K stars in 2 hours' is a well-known star-farming tactic, and the Claude Code leak narrative is engineered to trigger FOMO in the AI dev community right now.
How to use it
Do not use this repo. Specifically:
- Do not clone it or run any binaries.
- Do not join the Discord — it's likely a lead-gen or social-engineering funnel.
- Verify any 'leaked source' claims via Anthropic's official channels before engaging.
- Report it to GitHub if you believe it violates ToS around misleading star manipulation or redistribution of proprietary code.
How I could use this
- Write a blog post on 'How to spot astroturfed GitHub repos' — analyze the star velocity chart, the vague README, the Discord-first CTA, and the legal drama narrative as a checklist Henry's readers can apply to any trending repo.
- Build a small Next.js tool that pulls GitHub star history via the Star History API and flags repos with suspicious velocity spikes (e.g., >10K stars/hour) — a practical 'repo credibility checker' that would genuinely help developers avoid wasting time on fake hype.
- Create a blog series on AI tool supply chain security — covering risks like running unverified 'leaked' LLM tooling, prompt injection via community repos, and how to sandbox unknown code. Directly relevant to Henry's AI blog audience and a strong SEO angle given Claude/Anthropic's profile.
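The velocity check described above reduces to a sliding one-hour window over star timestamps. GitHub's stargazers endpoint returns `starred_at` timestamps when requested with the `application/vnd.github.star+json` media type; the sketch below works on plain ISO strings so the flagging logic stands alone:

```typescript
// Flag suspicious star velocity: returns true if any 1-hour window contains
// more than `maxPerHour` stars. `starredAt` is a list of ISO 8601 timestamps.
function isSuspiciousVelocity(starredAt: string[], maxPerHour: number): boolean {
  const times = starredAt.map((t) => Date.parse(t)).sort((a, b) => a - b);
  let start = 0;
  for (let end = 0; end < times.length; end++) {
    while (times[end] - times[start] > 3_600_000) start++; // slide the 1h window forward
    if (end - start + 1 > maxPerHour) return true;
  }
  return false;
}
```

The threshold would be tuned against known-organic repos; the 10K stars/hour figure mentioned above is one possible cutoff for extreme cases.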