Gradland

GitHub Hot — 1 April 2026

1 April 2026 · 22 min read · GitHub · Open Source · Tools

Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.


1. instructkr/claw-code

118,221 stars this week · Rust

This repo is almost certainly a star-farming scam or viral stunt — it has no real substance, and its 'fastest repo to 100K stars' claim is a red flag, not a feature.

Use case

There is no credible real-world use case here. The README is circular and self-referential, the 'Rust port' is perpetually 'in progress', and the primary call to action is to sponsor the author. The repo appears designed to exploit GitHub's trending algorithm and social proof mechanics rather than solve any engineering problem. No one should build anything on top of this.

Why it's trending

It's trending purely because of manufactured social proof — bots or coordinated starring pushed it to GitHub Trending, which then triggered organic curiosity clicks and genuine stars from developers who assumed trending = legitimate. This is a known GitHub gaming pattern that has appeared before with repos like 'acmesh-official' forks and other viral stunts.

How to use it

Do not use this repo. Specifically:

  1. The codebase has no stable API, no versioned releases, and no documentation beyond vague crate descriptions.
  2. The 'oh-my-codex' build tool referenced in the description is itself unverified.
  3. There is no test suite, no CI config, and no license clearly permitting use.
  4. If you need a real Rust-based Claude/LLM harness tool, use the official Anthropic Rust SDK or the async-openai crate instead.
  5. If you need MCP orchestration specifically, look at the official Model Context Protocol SDK repos from Anthropic.

How I could use this

  1. Use this as a cautionary blog post: write 'How to spot a GitHub star farming repo' — analyze the telltale signs (circular README, no real code, sponsor CTA as primary content, suspiciously fast star velocity) with screenshots. This kind of critical technical writing builds credibility and gets shared heavily on Hacker News.
  2. Build a small career tool side feature: a 'GitHub repo credibility scorer' that queries the GitHub API and flags repos with star/commit ratio anomalies, first-day star spikes, or empty release histories — useful for dev hiring managers vetting candidates who list open source contributions (a minimal sketch follows this list).
  3. For your AI blog features, use this moment to implement a real alternative: integrate the legitimate @anthropic-ai/sdk in your Next.js blog to power a 'ask questions about this post' widget, and write about what a real, production-grade Anthropic integration looks like — directly contrasting it with vaporware repos like this one.
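
A rough cut of the credibility scorer from point 2, in TypeScript against the public GitHub REST API (unauthenticated calls are rate-limited, so add a token for real use); the thresholds here are placeholder heuristics, not calibrated signals.

// repo-credibility.ts: rough credibility check for a GitHub repo (sketch)
type RepoMeta = {
  stargazers_count: number;
  created_at: string;
  license: { spdx_id: string } | null;
};

async function ghJson<T>(path: string): Promise<T> {
  const res = await fetch(`https://api.github.com${path}`, {
    headers: { Accept: 'application/vnd.github+json' },
  });
  if (!res.ok) throw new Error(`GitHub API ${res.status} for ${path}`);
  return res.json() as Promise<T>;
}

export async function credibilityFlags(owner: string, repo: string): Promise<string[]> {
  const meta = await ghJson<RepoMeta>(`/repos/${owner}/${repo}`);
  const releases = await ghJson<unknown[]>(`/repos/${owner}/${repo}/releases?per_page=5`);

  const ageDays = Math.max(1, (Date.now() - Date.parse(meta.created_at)) / 86_400_000);
  const starsPerDay = meta.stargazers_count / ageDays;

  const flags: string[] = [];
  if (starsPerDay > 1_000) flags.push(`suspicious star velocity: ~${Math.round(starsPerDay)}/day`);
  if (releases.length === 0) flags.push('no tagged releases');
  if (!meta.license) flags.push('no license');
  return flags;
}

// Usage: credibilityFlags('instructkr', 'claw-code').then(console.log);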

2. sanbuphy/learn-coding-agent

10,491 stars this week · various

A reverse-engineered architectural breakdown of Claude Code's CLI agent internals — tools, permission flows, sub-agents, and the 12-layer harness system — compiled entirely from public sources.

Use case

If you're building your own coding agent or AI-powered CLI tool, you're essentially flying blind without understanding how a production-grade agent handles tool orchestration, permission escalation, and state management. This repo reverse-engineers Claude Code's architecture so you can steal the patterns — for example, understanding how Claude Code gates destructive file operations behind a progressive permission harness before you implement your own 'AI writes to my blog's database' feature.

Why it's trending

Claude Code just had a major release wave and is dominating developer Twitter/X as the go-to agentic coding tool — developers want to understand what's happening under the hood before building on top of it or replicating its patterns in their own agents. 10k stars in a week signals the community is hungry for architectural clarity that Anthropic hasn't officially published.

How to use it

  1. Clone and read docs/en/ top-to-bottom — start with the Architecture Overview section to understand the Entry → Query Engine → Tools/Services/State pipeline before diving into specifics.
  2. Map the 40+ tools taxonomy to your own use case. Ask: which of these tool categories (file read, file write, shell exec, web fetch) do I need for my blog's AI agent, and what permission gates should mirror Claude Code's model?
  3. Study the '12 Progressive Harness Mechanisms' section specifically — this is the production pattern for safely layering features onto a raw agent loop without it going rogue. Implement a stripped-down 3-5 layer version for your own Next.js API route that calls an LLM and takes actions (a minimal sketch follows these steps).
  4. Use the directory reference tree as a scaffold. When building your own agent, mirror this structure: separate concerns into tool definitions, permission resolvers, state managers, and the core agent loop.
  5. Cross-reference the telemetry and 'undercover mode' docs to understand what observability a production agent needs — then instrument your own agent with equivalent logging before deploying anything that touches real data.
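
To make step 3 concrete, here is a minimal sketch of a single harness layer in TypeScript. The read/write/destructive risk split and the function names are my own framing, not the repo's taxonomy.

// Minimal permission harness; names are illustrative, not the repo's taxonomy
type Risk = 'read' | 'write' | 'destructive';

interface Tool {
  name: string;
  risk: Risk;
  run(input: string): Promise<string>;
}

// Layer 1 classifies risk, layer 2 gates on approval, layer 3 executes with error capture
async function runGated(
  tool: Tool,
  input: string,
  approve: (msg: string) => Promise<boolean>, // a CLI prompt or UI approval step
): Promise<string> {
  if (tool.risk !== 'read') {
    const ok = await approve(`Allow ${tool.name} (${tool.risk}) with input: ${input}?`);
    if (!ok) return `Denied: ${tool.name}`;
  }
  try {
    return await tool.run(input);
  } catch (err) {
    return `Tool ${tool.name} failed: ${(err as Error).message}`; // surfaced to the model, not thrown
  }
}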

How I could use this

  1. Build a 'Blog Post Agent' for your Next.js blog that follows Claude Code's progressive harness pattern: it can draft posts freely, but requires explicit approval before calling your Supabase write API — implement the permission escalation flow as a Supabase Edge Function with a UI approval step, modeled on the tool permission architecture documented here.
  2. Create a career tool that acts as a local coding agent for interview prep — given a LeetCode-style problem, it uses the sub-agent pattern from this repo to spawn a 'test runner' sub-agent and a 'solution writer' sub-agent that collaborate, then surfaces the best solution with explanations. Use the tool system architecture to define discrete tools: write_code, run_tests, explain_solution.
  3. Implement a 'safe AI content pipeline' for your blog using the harness mechanism as inspiration: an AI agent that fetches trending dev topics (web fetch tool), drafts a post outline (write tool), checks it against your existing posts in Supabase for duplication (read tool), and only publishes after passing a similarity threshold check — each step gated like Claude Code gates destructive operations.

3. openai/codex-plugin-cc

8,351 stars this week · JavaScript

An OpenAI-official Claude Code plugin that lets you run Codex code reviews and delegate background tasks without leaving your Claude Code workflow.

Use case

Developers using Claude Code as their primary AI coding environment previously had to context-switch to a separate Codex session to get a second-opinion review or run long-running refactor tasks. This plugin bridges that gap — for example, you can fire off /codex:adversarial-review --background on a complex Supabase RLS policy change, keep coding with Claude, then pull the result when ready instead of babysitting a separate terminal.

Why it's trending

It dropped the same week OpenAI clarified Codex pricing (including Free tier access), so developers who dismissed it as expensive are now trying it for the first time. The adversarial review command is also genuinely novel — it's not just a second LLM opinion, it actively tries to poke holes in your logic, which has sparked a lot of 'this caught a bug Claude missed' posts this week.

How to use it

  1. Inside Claude Code, run /plugin marketplace add openai/codex-plugin-cc then /plugin install codex@openai-codex and /reload-plugins.
  2. Run /codex:setup — if Codex CLI is missing it will offer to npm install -g @openai/codex for you, then authenticate with !codex login.
  3. Stage your current changes in git, then kick off a background review: /codex:review --background. Check progress with /codex:status and retrieve output with /codex:result.
  4. For a harder test, run /codex:adversarial-review on any non-trivial function — pass --focus security or --focus edge-cases to steer the challenge.
  5. For long tasks (e.g. 'migrate these 10 API routes to use Supabase edge functions'), use /codex:rescue to hand off the task and reclaim your Claude context for other work.

How I could use this

  1. Wire /codex:adversarial-review into your blog's pre-commit hook via a thin Node script that calls the Codex CLI directly — every time you push a new AI feature (e.g. a server action calling OpenAI), Codex automatically stress-tests the error handling and posts a summary as a GitHub PR comment before you merge.
  2. Build a 'code portfolio explainer' feature: run /codex:review on each project in your portfolio repo, capture the structured JSON output, and feed it into a Next.js page that auto-generates plain-English summaries of your code quality and architectural decisions for recruiters who can't read code.
  3. Use /codex:rescue as a background agent for your blog's AI features — when a reader submits a 'suggest improvements to this code snippet' request, delegate the refactor task to Codex asynchronously, store the job ID in Supabase, and webhook the result back to the reader's session when it completes instead of blocking on a synchronous OpenAI call.
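
A sketch of that async hand-off, assuming a hypothetical codex_jobs table in Supabase and a runCodexReview() stub standing in for the actual Codex CLI invocation (neither is part of the plugin):

// app/api/refactor/route.ts: async hand-off sketch. The codex_jobs table and
// the runCodexReview() stub are hypothetical, not part of the plugin.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Placeholder worker: shell out to the Codex CLI here, then write the result back.
async function runCodexReview(jobId: string, snippet: string): Promise<void> {
  // ...run the review on `snippet`, then:
  await supabase.from('codex_jobs').update({ status: 'done' }).eq('id', jobId);
}

export async function POST(req: Request) {
  const { snippet } = await req.json();

  // Record the job so the reader's session can poll (or receive a webhook) later
  const { data: job, error } = await supabase
    .from('codex_jobs')
    .insert({ status: 'pending', input: snippet })
    .select()
    .single();
  if (error) return Response.json({ error: error.message }, { status: 500 });

  // Fire and forget so the HTTP response doesn't block on the long-running review
  void runCodexReview(job.id, snippet).catch((err) =>
    supabase.from('codex_jobs').update({ status: 'failed', error: String(err) }).eq('id', job.id),
  );

  return Response.json({ jobId: job.id, status: 'pending' });
}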

4. claude-code-best/claude-code

7,423 stars this week · TypeScript

A decompiled/reverse-engineered TypeScript source restoration of Anthropic's Claude Code CLI, made runnable via Bun with full type fixes and an enterprise-grade build pipeline.

Use case

Anthropic ships Claude Code as a minified/obfuscated CLI bundle, making it impossible to audit internals, fork behavior, or extend functionality. This repo reverse-engineers that bundle back into readable TypeScript so developers can study how Claude Code orchestrates tool calls, manages context windows, handles streaming, and structures its agentic loop — giving you a blueprint to build your own Claude-powered CLI or agent without paying $800 in API costs to figure it out yourself.

Why it's trending

Claude Code launched to massive hype as a terminal-native agentic coding tool, but its closed-source nature frustrated developers who wanted to understand or customize it. This dropped on April Fools' Day, hit 6k stars in 24 hours, and survived without a DMCA — making it the hottest reverse-engineering project of the week.

How to use it

  1. Install Bun >= 1.3.11: curl -fsSL https://bun.sh/install | bash && bun upgrade
  2. Clone and install: git clone https://github.com/claude-code-best/claude-code && cd claude-code && bun install
  3. Run in dev mode and verify you see version '888': bun run dev
  4. Study the source — focus on the tool call orchestration loop and context management in the core agent files to understand how Claude Code structures multi-step agentic tasks
  5. Build a distributable: bun run build — outputs to dist/cli.js runnable by both Bun and Node, which you can publish to a private npm registry

How I could use this

  1. Rip out the agentic loop logic to build a 'blog post generator' CLI that takes a topic, autonomously researches it via web tool calls, writes sections iteratively, then pushes a draft directly to your Supabase posts table — all from one terminal command.
  2. Study how Claude Code manages its system prompt and tool definitions, then clone that pattern to build a resume-tailoring agent: given a job description file and your base resume markdown, it runs multi-step edits (keyword insertion, reordering, gap analysis) and outputs a final PDF via a Puppeteer tool call.
  3. Extract the streaming response handler and context-window management code to power a 'persistent AI writing assistant' feature on your blog — where readers can highlight any paragraph and open a chat sidebar that maintains full article context without hitting token limits, using the same chunking strategy Claude Code uses for large codebases.
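
For the third idea, a minimal Next.js route handler that streams an answer scoped to a highlighted passage using the official @anthropic-ai/sdk. It skips the chunking strategy entirely and narrows the context by hand, so treat it as a starting point rather than a faithful port of Claude Code's approach.

// app/api/ask/route.ts: stream an answer about the highlighted passage
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function POST(req: Request) {
  const { article, highlighted, question } = await req.json();

  const stream = anthropic.messages.stream({
    model: 'claude-sonnet-4-20250514', // use whatever model you're actually on
    max_tokens: 1024,
    system: 'You answer reader questions about a blog article. Focus on the highlighted passage.',
    messages: [
      {
        role: 'user',
        content: `Article:\n${article}\n\nHighlighted passage:\n${highlighted}\n\nQuestion: ${question}`,
      },
    ],
  });

  // Pipe the text deltas straight to the browser as they arrive
  return new Response(stream.toReadableStream(), {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}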

5. ChinaSiro/claude-code-sourcemap

7,287 stars this week · TypeScript

Reconstructed TypeScript source of Anthropic's official Claude Code CLI (v2.1.88), extracted from public npm sourcemaps — giving developers rare visibility into how a production AI coding agent is actually architected.

Use case

Developers building AI coding assistants or agentic tools have no reference implementation to study — they're guessing at architecture. This repo exposes the real internal structure of Claude Code: how it handles multi-agent coordination, tool dispatch (Bash, FileEdit, Grep, MCP), plugin systems, and voice/vim modes. For example, if you're building a blog feature that lets AI autonomously edit your MDX files, you can study how Claude Code's FileEdit tool handles safe writes and conflict detection rather than reinventing it.

Why it's trending

Claude Code just hit mainstream adoption as a terminal-based AI dev tool, and developers are hungry to understand how Anthropic built it internally. The sourcemap extraction technique itself is a viral 'I can't believe this worked' moment — 4,756 files recovered from a single .map file.

How to use it

  1. Clone the repo and navigate to restored-src/src/ — focus on tools/ for concrete agent tool implementations and coordinator/ for multi-agent patterns.
  2. Study tools/FileEdit.ts and tools/Bash.ts to understand how tool-call schemas, safety guards, and output formatting are structured for LLM consumption.
  3. Examine commands/ to see how 40+ slash commands are registered and dispatched — this is a clean pattern for building your own CLI or chat command system.
  4. Look at coordinator/ for the multi-agent orchestration pattern — how sub-agents are spawned, given context, and have results merged back.
  5. Use services/ as a reference for structuring your own Supabase + Anthropic API service layer, particularly auth flows and streaming response handling.
// Pattern from coordinator — spawn a sub-task with scoped context
const subAgent = await coordinator.spawn({
  task: 'Refactor this function',
  context: fileContents,
  tools: ['FileEdit', 'Bash'],
  maxTurns: 5
});
const result = await subAgent.run();

How I could use this

  1. Build a blog post auto-editor: study tools/FileEdit.ts to implement a safe AI-driven MDX file editor that rewrites sections of your blog posts based on a prompt, with proper diff preview before committing — similar to how Claude Code stages edits.
  2. Career tool — replicate the commands/review.ts pattern to build a resume/cover letter reviewer that treats each document section as a 'file' and uses structured tool calls to suggest targeted rewrites, outputting a diff-style before/after view in your portfolio site.
  3. Use the coordinator/ multi-agent pattern to build a blog content pipeline: one agent researches a topic via web search, a second writes the draft MDX, a third checks for SEO and internal link opportunities — all coordinated through a lightweight orchestrator you model after Claude Code's coordinator module.
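
A bare-bones version of that orchestrator: each agent is just an async function with a narrow contract, and the orchestrator owns sequencing and the publish gate. This is loosely inspired by the coordinator pattern rather than lifted from the restored source.

// Sequential three-stage pipeline; each "agent" is just an async function
type Agent = (input: string) => Promise<string>;

async function runContentPipeline(
  topic: string,
  agents: { research: Agent; draft: Agent; seoCheck: Agent },
  publish: (mdx: string) => Promise<void>,
) {
  const notes = await agents.research(topic);   // web-search agent: topic to raw notes
  const mdx = await agents.draft(notes);        // writer agent: notes to MDX draft
  const review = await agents.seoCheck(mdx);    // checker agent: SEO and internal-link issues

  // Gate publishing on the review stage coming back clean
  if (review.trim().toLowerCase() === 'ok') {
    await publish(mdx);
  } else {
    console.log('Held for manual review:', review);
  }
}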

6. Kuberwastaken/claurst

6,465 stars this week · Rust

A clean-room Rust reimplementation of Claude Code's CLI agent, born from reverse-engineering specs extracted after Anthropic accidentally leaked their entire source via an npm sourcemap.

Use case

If you want a self-hostable, auditable terminal coding agent without paying Anthropic's CLI subscription or trusting a closed binary, this gives you the same behavioral contract in idiomatic Rust. Concrete example: run it locally in a CI pipeline to automate code review or scaffolding tasks without shipping your codebase to Anthropic's servers.

Why it's trending

The repo exploded because Anthropic accidentally shipped their full TypeScript source in a public npm sourcemap on March 31st 2026 — one of the biggest accidental open-source moments in recent memory — and this was the first clean-room reimplementation to capitalize on the behavioral specs extracted from that leak.

How to use it

  1. Clone the repo and navigate to src-rust/: git clone https://github.com/kuberwastaken/claurst && cd claurst/src-rust
  2. Build with Cargo (requires Rust 1.75+): cargo build --release
  3. Set your Anthropic API key: export ANTHROPIC_API_KEY=sk-ant-...
  4. Run the agent against a file or directory: ./target/release/claurst --task 'refactor this function to use async/await' --path ./src/lib.rs
  5. Read the spec/ directory before customizing — it documents exact tool contracts (file read/write, shell exec, search) so you can safely fork behavior without breaking the agent loop.
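
One way to write those tool contracts down in TypeScript before forking anything; the field names are illustrative, not copied from spec/.

// Sketch of the tool contracts in TypeScript; field names are illustrative
interface ToolCall {
  tool: 'file_read' | 'file_write' | 'shell_exec' | 'search';
  args: Record<string, string>;
  requiresApproval: boolean; // destructive tools (write, exec) gate on the user
}

interface ToolResult {
  tool: ToolCall['tool'];
  ok: boolean;
  output: string;      // fed back into the next model turn
  truncated?: boolean; // large outputs get clipped before re-entering context
}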

How I could use this

  1. Wire claurst into your blog's GitHub Actions workflow to auto-generate a 'code changelog' post whenever you merge a PR — have it diff the changes and produce a plain-English summary you can publish as a dev log entry, keeping your blog content pipeline nearly zero-effort.
  2. Build a local resume-tailoring CLI tool on top of claurst's agent loop: give it a job description file and your resume markdown, and let the agent iteratively rewrite bullet points to match keywords — all running locally so your resume never leaves your machine.
  3. Use the spec/ directory's tool contract definitions as a blueprint to build a sandboxed AI code-execution feature for your blog, where readers can paste a snippet and your Next.js API route spins up a claurst-style tool loop (read, execute, respond) in a Deno/WASM sandbox — turning blog posts into interactive coding tutorials.

7. titanwings/colleague-skill

4,606 stars this week · Python

colleague.skill converts departing coworkers' messages, docs, and emails into a Claude-powered AI agent that codes in their style, answers in their tone, and knows their quirks — basically digitally preserving a colleague after they leave.

Use case

When a key engineer quits and leaves 3 pages of docs to cover 3 years of undocumented tribal knowledge, this tool ingests their Slack history, emails, markdown docs, and screenshots to build a Claude skill that can answer 'how would Zhang Wei have handled this auth edge case?' It solves the brutal knowledge-drain problem that happens at every company during offboarding — not just documentation gaps, but the implicit judgment, preferences, and shortcuts only that person knew.

Why it's trending

It's going viral because it hits a raw nerve — the README quote calling LLM developers 'code traitors' who are killing frontend, backend, QA, and DevOps jobs is dark humor that resonates deeply with Chinese tech workers facing mass layoffs right now. The companion 'ex-girlfriend.skill' repo adds viral gossip fuel that's driving cross-sharing across WeChat and tech communities.

How to use it

  1. Clone the repo and install deps: pip install -r requirements.txt, then copy .env.example to .env and add your Anthropic API key.
  2. Gather your colleague's raw materials — export Slack history via the API collector (python collect_slack.py --user @departing-colleague), or manually drop .eml files, markdown docs, and screenshots into ./inputs/.
  3. Run the skill builder: python build_skill.py --name 'Alex Chen' --description 'Senior backend, owns auth service, passive-aggressive in PRs, never merges on Fridays' — the description shapes the personality layer.
  4. This generates an alex_chen.skill file (a structured Claude system prompt + RAG context bundle) you can load into Claude Code or any Anthropic API call as the system prompt (see the sketch after these steps).
  5. Test it: python chat.py --skill alex_chen.skill and ask it things like 'how would you structure the new payments webhook?' — it should respond with Alex's actual code patterns and communication style.
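
Assuming the generated .skill file is a plain-text bundle you can drop straight into a system prompt (that's an inference from step 4, not something I've verified), loading it outside Claude Code takes a few lines with the official @anthropic-ai/sdk:

// Load a generated .skill bundle as the system prompt (assumes it's plain text)
import { readFile } from 'node:fs/promises';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function askColleague(skillPath: string, question: string): Promise<string> {
  const skill = await readFile(skillPath, 'utf8');
  const msg = await anthropic.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    system: skill,
    messages: [{ role: 'user', content: question }],
  });
  // Concatenate any text blocks in the response
  return msg.content.map((block) => (block.type === 'text' ? block.text : '')).join('');
}

// askColleague('./alex_chen.skill', 'How would you structure the new payments webhook?')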

How I could use this

  1. Build a 'blog voice preservation' tool for your own blog — feed it 6 months of your own posts and generate a Henry.skill that can draft new posts matching your exact tone, code style preferences, and opinion patterns. Useful when you want AI-assisted writing that doesn't sound generic.
  2. Create a 'past-me career advisor' skill by feeding it your old PRs, commit messages, Slack threads, and README files — then use it as a reference point in a portfolio feature that shows visitors how your technical communication and decision-making has evolved over time.
  3. Implement a Supabase-backed 'project memory' feature for your blog's AI assistant: when you finish a project post, run the colleague.skill pipeline on all your notes and drafts to generate a persistent skill file stored in Supabase Storage, so future AI features on your site can answer contextual questions about past projects in your actual voice.

8. Gitlawb/openclaude

3,150 stars this week · TypeScript

SAFETY ALERT: Do not install this package — it exhibits multiple indicators of a malicious supply chain attack disguised as a Claude Code alternative.

Use case

This repo should NOT be used. It claims to unlock Claude Code for any LLM, but a lookalike domain (gitlawb.com vs github.com), opaque clone URLs, a global npm install requesting bash/filesystem access, fabricated model names (GPT-5.4), and a dubious 'source leak' narrative are textbook supply chain attack vectors. A real-world harm: installing and running this globally could silently exfiltrate your API keys, SSH keys, and source code.

Why it's trending

Likely trending due to botted or purchased GitHub stars — a known manipulation tactic used to make malicious repos appear legitimate and get picked up by trending aggregators. The 'leaked source' narrative is engineered to go viral in developer communities.

How to use it

DO NOT USE. Steps to stay safe instead:

  1. Never install global npm packages from unknown orgs without auditing source on a verified domain.
  2. Check the npm registry page for publish history, maintainer accounts, and download counts.
  3. Run suspicious packages only in an isolated VM with no real credentials.
  4. Report the repo to GitHub's trust & safety team at github.com/contact/report-abuse.
  5. If you already ran it, rotate all API keys and audit your shell history immediately.

How I could use this

  1. Write a blog post for Henry's site titled 'How to Spot a Supply Chain Attack in the Wild' using this exact repo as the case study — it's a perfect real-world teaching example with concrete red flags readers can learn to recognize.
  2. Build a small CLI tool or browser extension for Henry's portfolio that checks an npm package name against known red-flag patterns (lookalike org names, star velocity anomalies, no listed topics) — a genuinely useful open-source safety tool (a minimal sketch follows this list).
  3. Create an AI-assisted 'dependency auditor' feature for Henry's blog where readers can paste an npm install command and get an automated red-flag analysis using GPT-4o with a structured prompt — demonstrates real AI utility while building trust with the developer audience.
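
A starting point for the checker in point 2, hitting the public npm registry and download-stats endpoints; the thresholds are rough heuristics to tune, not vetted signals.

// npm-red-flags.ts: quick registry sanity check; thresholds are rough heuristics
type RegistryDoc = {
  time: Record<string, string>;            // includes 'created' and 'modified'
  maintainers?: { name: string }[];
  versions: Record<string, unknown>;
};

export async function npmRedFlags(pkg: string): Promise<string[]> {
  const doc = (await (await fetch(`https://registry.npmjs.org/${pkg}`)).json()) as RegistryDoc;
  const weekly = (await (
    await fetch(`https://api.npmjs.org/downloads/point/last-week/${pkg}`)
  ).json()) as { downloads?: number };

  const flags: string[] = [];
  const ageDays = (Date.now() - Date.parse(doc.time.created)) / 86_400_000;
  if (ageDays < 30) flags.push(`package is only ${Math.round(ageDays)} days old`);
  if ((doc.maintainers?.length ?? 0) < 2) flags.push('single (or unknown) maintainer');
  if (Object.keys(doc.versions).length < 3) flags.push('very short publish history');
  if ((weekly.downloads ?? 0) < 1_000) flags.push('low weekly downloads for a "trending" tool');
  return flags;
}

// Usage: npmRedFlags('some-suspicious-package').then(console.log);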

9. tvytlx/ai-agent-deep-dive

2,956 stars this week · various

A deep-dive PDF research report dissecting AI agent architectures (like AutoGPT, LangChain agents, CrewAI) at the source-code level — useful for understanding what's actually happening under the hood.

Use case

Most AI agent tutorials show you how to call an API, not how the orchestration loop, memory management, or tool-calling actually works internally. This report reverse-engineers real agent codebases so you can make informed architecture decisions — e.g., understanding why LangChain's AgentExecutor retries differ from CrewAI's task delegation before you build a multi-agent blog content pipeline.

Why it's trending

Multi-agent frameworks exploded in early 2025 and developers are now hitting production-level bugs they can't debug without understanding internals. A source-level teardown in Chinese fills a gap for a massive developer audience that English docs underserve.

How to use it

  1. Download ai-agent-deep-dive-v2.pdf from the repo and skim the table of contents to identify which agent framework matches your stack (LangChain, AutoGPT, CrewAI, etc.).
  2. Cross-reference the report's call-graph diagrams against the actual framework source on GitHub — e.g., for LangChain open langchain/agents/agent.py and trace the _take_next_step loop the report describes.
  3. Pick one architectural pattern the report highlights (e.g., ReAct loop, tool-use retry logic, memory retrieval) and replicate a minimal version: const agent = await createReactAgent({ llm, tools, prompt }) in LangChain.js, then instrument it with console logs to verify the loop matches the report.
  4. Use the report's failure-mode analysis section to add defensive handling in your own agents — e.g., max iteration guards, tool error catching, and token budget checks before deploying to production (a minimal sketch follows these steps).
  5. Treat the report as a vocabulary reference: when reading framework changelogs or opening issues, the internals terminology (executor, scratchpad, observation, thought) will now map to concrete code paths you've seen.
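
The defensive guards from step 4 fit in a very small loop. Here llmStep() and the tool table are placeholders for whatever model client and tools you actually wire in:

// Minimal agent loop with max-iteration guard and tool error capture
type Action = { tool: string; input: string } | { finish: string };

async function runAgent(
  task: string,
  llmStep: (scratchpad: string) => Promise<Action>,
  tools: Record<string, (input: string) => Promise<string>>,
  maxIterations = 8,
): Promise<string> {
  let scratchpad = `Task: ${task}`;
  for (let i = 0; i < maxIterations; i++) {
    const action = await llmStep(scratchpad);
    if ('finish' in action) return action.finish;

    const tool = tools[action.tool];
    let observation: string;
    try {
      observation = tool ? await tool(action.input) : `Unknown tool: ${action.tool}`;
    } catch (err) {
      observation = `Tool error: ${(err as Error).message}`; // fed back to the model, not thrown
    }
    scratchpad += `\nAction: ${action.tool}(${action.input})\nObservation: ${observation}`;
  }
  return 'Stopped: hit max iteration guard';
}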

How I could use this

  1. Write a blog post series called 'AI Agent Internals' where each post takes one concept from the PDF (e.g., the ReAct loop, tool-calling retry logic) and shows a TypeScript/Next.js implementation — this positions Henry as someone who understands agents beyond tutorial-level and drives organic SEO traffic from developers hitting the same production issues.
  2. Build a 'Which AI Agent Framework Should I Use?' interactive quiz on the blog — backed by a Supabase table mapping project requirements (team size, latency needs, memory complexity) to framework recommendations, sourced directly from the architectural tradeoffs the report identifies.
  3. Use the report's memory management teardown to implement a proper long-term memory layer for Henry's blog AI assistant — specifically, store embeddings of past user interactions in Supabase pgvector and replicate the retrieval pattern the report shows for how production agents fetch relevant context, rather than naively stuffing the full history into the prompt.

10. NanmiCoder/claude-code-haha

2,815 stars this week · TypeScript

A patched, locally-runnable version of the leaked Claude Code source that lets you point its full TUI agent at any OpenAI-compatible API endpoint instead of Anthropic's servers.

Use case

Claude Code's official CLI is gated behind Anthropic accounts and usage limits. This repo fixes the broken startup chain in the leaked source so you can run the exact same Ink-based TUI — tool calls, MCP plugins, multi-agent loops — against OpenRouter, MiniMax, or your own proxy. Concretely: you get a self-hosted agentic coding assistant with file-system access and shell execution, zero subscription required.

Why it's trending

The Claude Code source leaked this week and the raw dump wouldn't even start — this repo is the first working patch that makes it actually runnable, so every developer who saw the leak is landing here to try it. It's also the clearest public reference for how Anthropic built the tool-call/permission/state-machine architecture that powers Claude Code.

How to use it

  1. Install Bun: curl -fsSL https://bun.sh/install | bash
  2. Clone and install deps: git clone https://github.com/NanmiCoder/claude-code-haha && cd claude-code-haha && bun install
  3. Configure your API: cp .env.example .env then set ANTHROPIC_BASE_URL=https://openrouter.ai/api/v1 and ANTHROPIC_API_KEY=<your-openrouter-key> and CLAUDE_MODEL=anthropic/claude-3.5-sonnet (or any compatible model)
  4. Run interactively: bun run start — you get the full Ink TUI
  5. Run headless/CI mode: bun run start --print 'Refactor src/utils.ts to use async/await' — pipe output into scripts or GitHub Actions steps

How I could use this

  1. Wire the --print headless mode into a GitHub Actions workflow on Henry's blog repo: on every PR, run bun run start --print 'Review this diff for TypeScript type safety issues and suggest fixes' against the changed files and post the output as a PR comment — a free AI code reviewer using OpenRouter's free tier models.
  2. Use the MCP server support to expose Henry's Supabase database schema as a tool, then run the agent locally to auto-generate typed query helpers or Zod validation schemas from his actual tables — point ANTHROPIC_BASE_URL at a cheap model and let it churn through schema introspection without burning Claude API credits.
  3. Study the docs/08-state-data-flow.png architecture diagram to replicate the conversation-state + tool-permission model in Henry's blog's AI chat widget — specifically the pattern where tool calls are queued, user-approved, then executed, which solves the 'AI did something destructive' UX problem he'll hit when adding any write-capable AI features to his site.
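
The queue, approve, then execute pattern from idea 3, reduced to a sketch; the tool names and shapes are illustrative, not taken from the leaked source.

// Queue, approve, then execute for a write-capable chat widget
type QueuedCall = {
  id: string;
  tool: 'updatePost' | 'deleteComment' | 'sendEmail';
  args: Record<string, unknown>;
  state: 'queued' | 'approved' | 'rejected' | 'executed';
};

const queue = new Map<string, QueuedCall>();

export function enqueue(call: Omit<QueuedCall, 'state'>) {
  queue.set(call.id, { ...call, state: 'queued' }); // surfaced in the UI for review
}

export async function approve(id: string, execute: (c: QueuedCall) => Promise<void>) {
  const call = queue.get(id);
  if (!call || call.state !== 'queued') throw new Error('nothing to approve');
  call.state = 'approved';
  await execute(call); // only runs after explicit user approval
  call.state = 'executed';
}

export function reject(id: string) {
  const call = queue.get(id);
  if (call && call.state === 'queued') call.state = 'rejected';
}
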
Go build something