Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. AgentSeal/codeburn
2,560 stars this week · TypeScript · ai-coding claude-code cli codex
CodeBurn is a zero-config TUI dashboard that reads AI coding session logs directly from disk and shows you exactly which tasks, models, and projects are burning your token budget.
Use case
Developers using Claude Code, Cursor, or Codex on paid plans have no native way to see which types of work (debugging vs. scaffolding vs. refactoring) cost the most tokens. For example, Henry might discover that asking Claude to write Supabase RLS policies burns 3x more tokens with 40% first-try success vs. asking it to write React components — letting him decide when to write code manually vs. lean on AI.
Why it's trending
Claude Code's usage-based billing went mainstream in the last few weeks and developers are getting surprise invoices with no granular breakdown. CodeBurn fills that observability gap without requiring any API key or proxy setup.
How to use it
- Install globally: `npm install -g codeburn` (or just `npx codeburn` to try it instantly — no config needed).
- Run `codeburn` in your terminal — it auto-discovers session logs from Claude Code, Cursor, and Codex on disk.
- Navigate the TUI with arrow keys: drill into the 'By Task Type' panel to see which activity categories have low one-shot success rates (those are your token sinkholes).
- Export a snapshot for analysis: `codeburn --export json > ai-costs.json`, then pipe it into any script.
- Optionally pin the macOS menu bar widget via SwiftBar for a live token-burn counter while you code.
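The export step above can feed a quick analysis script. A minimal TypeScript sketch, assuming a hypothetical record shape for the JSON export (`taskType`, `tokens`, `firstTrySuccess` are guesses; inspect one real export and adjust the field names):

```typescript
// Hypothetical shape of one `codeburn --export json` record.
// The real schema may differ; this is an illustrative assumption.
interface SessionRecord {
  taskType: string;        // e.g. "debugging", "scaffolding", "refactoring"
  tokens: number;          // tokens consumed by the session
  firstTrySuccess: boolean;
}

// Aggregate token spend and one-shot success rate per task type,
// to surface which categories are the token sinkholes.
function summarizeByTask(records: SessionRecord[]) {
  const acc: Record<string, { tokens: number; sessions: number; oneShot: number }> = {};
  for (const r of records) {
    const bucket = (acc[r.taskType] ??= { tokens: 0, sessions: 0, oneShot: 0 });
    bucket.tokens += r.tokens;
    bucket.sessions += 1;
    if (r.firstTrySuccess) bucket.oneShot += 1;
  }
  return Object.fromEntries(
    Object.entries(acc).map(([task, b]) => [
      task,
      { tokens: b.tokens, oneShotRate: b.oneShot / b.sessions },
    ]),
  );
}
```

Piping `ai-costs.json` through a function like this gives you the same "cost per task type" view as the TUI panel, but in a form you can chart or diff week over week.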
How I could use this
- Build a public 'AI Dev Cost Transparency' page on the blog: pipe `codeburn --export json` into a weekly cron job, push the data to a Supabase table, and render a live chart (Recharts) showing Henry's rolling 30-day token spend by task category — a genuinely rare public artifact that signals engineering maturity to hiring managers.
- Create a 'Claude Code ROI Calculator' career tool: use the exported CSV to cross-reference token cost per feature with actual lines-of-code shipped (pulled from git log), then display a cost-per-PR metric on a portfolio dashboard — concrete proof of efficient AI usage that goes on a resume or LinkedIn instead of vague 'AI-assisted development' claims.
- Add a smart prompt-routing feature to the blog's AI assistant: feed one week of codeburn JSON into an LLM to identify which prompt patterns have low one-shot success rates, then use those patterns as a blocklist to rewrite user queries before sending them to the API — automatically steering expensive retry-prone requests toward cheaper, more reliable formulations.
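The rolling 30-day aggregation behind that transparency chart can be sketched as a pure function. The `SpendPoint` shape is an assumption about the cron job's flattened export rows, not codeburn's actual schema:

```typescript
// One flattened export row, as the weekly cron job might store it in
// Supabase. The field names are illustrative assumptions.
interface SpendPoint {
  date: string;     // ISO date, e.g. "2026-04-01"
  taskType: string;
  tokens: number;
}

// Sum token spend per task type over the trailing 30 days: the data
// shape a Recharts chart on the transparency page would consume.
function rolling30DaySpend(points: SpendPoint[], today: Date): Record<string, number> {
  const cutoff = today.getTime() - 30 * 24 * 60 * 60 * 1000;
  const totals: Record<string, number> = {};
  for (const p of points) {
    if (new Date(p.date).getTime() >= cutoff) {
      totals[p.taskType] = (totals[p.taskType] ?? 0) + p.tokens;
    }
  }
  return totals;
}
```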
2. Nightmare-Eclipse/RedSun
1,333 stars this week · C++
RedSun is a Windows Defender privilege escalation PoC that exploits Defender's own file-restoration behavior to overwrite system files and gain admin access.
Use case
This exposes a logic flaw where Windows Defender, upon detecting a cloud-tagged malicious file, rewrites the file back to its original location instead of quarantining or deleting it — effectively becoming a primitive for arbitrary file write as SYSTEM. A real-world attacker could craft a payload, let Defender 'find' it, and watch Defender itself plant the file in a privileged path like System32.
Why it's trending
This is trending because it's a rare and embarrassing class of vulnerability — the security tool itself is the attack vector — and the author is deliberately withholding the full PoC, which is driving curiosity and community reverse-engineering. It also highlights a broader 2024/2025 pattern of AV/EDR products being weaponized against themselves (see also: PPLdump, EDRSandblast).
How to use it
1. Do not run untested exploit code on production systems — set up an isolated Windows VM with Defender enabled and cloud protection turned on.
2. Read the README carefully: the core mechanic is that Defender's cloud-tag rewrite behavior restores a file you control to a privileged path, so your research goal is to identify which Defender component performs the write and under what SYSTEM context.
3. Use Process Monitor (Sysinternals) filtered on 'WriteFile' events from MsMpEng.exe to observe Defender's file write behavior when it detects a tagged sample.
4. Cross-reference with public Windows Defender internals research (e.g., Tavis Ormandy's prior work) to understand the cloud submission and remediation pipeline.
5. Follow the repo for the eventual PoC drop — the author has signaled they'll release it, and tracking the commit history will show incremental clues.
How I could use this
- Write a deep-dive blog post titled 'When Your AV Is the Attacker' explaining the RedSun class of vulnerability for a general developer audience — cover the Windows Defender remediation pipeline, why cloud-tagging creates this footgun, and what the patch surface looks like. This kind of security explainer drives serious SEO traffic and establishes technical credibility without you needing to run the exploit yourself.
- Build a 'Security Vulnerability Tracker' career tool for your portfolio: a Next.js + Supabase dashboard that pulls trending CVEs and high-signal GitHub security repos (via GitHub's trending API and NVD feed), lets you tag and annotate them, and generates a weekly digest. Hiring managers at security-adjacent companies notice candidates who actively track the threat landscape.
- Create an AI-powered 'Vulnerability Explainer' feature on your blog using GPT-4o: readers paste a CVE ID or a GitHub repo link, and the tool fetches the description/README, then generates a plain-English breakdown (what it does, who is affected, severity, mitigation). Pair it with Supabase to cache explanations and track which CVEs your readers look up most — that's a built-in content calendar for future posts.
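For the severity field of that explainer, you don't even need the LLM: NVD publishes a CVSS base score alongside each CVE, and CVSS v3.x defines a fixed qualitative rating scale. A small sketch of that mapping:

```typescript
// Map a CVSS v3.x base score to its qualitative severity rating,
// per the standard scale (0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium,
// 7.0-8.9 High, 9.0-10.0 Critical). Useful as the deterministic
// "severity" field in the generated plain-English breakdown.
function cvssSeverity(score: number): string {
  if (score < 0 || score > 10) throw new RangeError("CVSS base scores run 0.0-10.0");
  if (score === 0) return "None";
  if (score < 4) return "Low";
  if (score < 7) return "Medium";
  if (score < 9) return "High";
  return "Critical";
}
```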
3. vercel-labs/wterm
1,317 stars this week · TypeScript
A terminal emulator for the web
4. Mouseww/anything-analyzer
1,270 stars this week · TypeScript · 2api ai-tools analysis-cli api-analysis
A unified MITM proxy + embedded browser + AI analysis toolkit that captures traffic from any source (web, desktop, CLI, mobile) and auto-generates reverse-engineering reports — replacing the Fiddler + DevTools + manual analysis workflow with one tool.
Use case
The real problem: when building AI-powered features that call third-party APIs (no SDK, undocumented endpoints, encrypted payloads), you normally juggle Fiddler for proxying, DevTools for browser traffic, and manual inspection of hundreds of requests. Anything Analyzer intercepts all of it in one session and lets an AI agent explain the authentication scheme, request signatures, or encryption logic. Concrete example: you want to replicate a proprietary website's search API for a blog feature — point Anything Analyzer at the site, trigger the search, and get an AI-generated breakdown of the request structure, headers, and any JS-side signing logic without reading minified source code.
Why it's trending
MCP (Model Context Protocol) server integration is the hot topic right now — this tool exposes captured traffic directly to AI agents and IDEs like Cursor, letting devs reverse-engineer APIs inside their coding environment without context-switching. That MCP angle plus the all-in-one positioning hit the zeitgeist of 'agentic dev tooling' hard this week.
How to use it
- Clone and install: `git clone https://github.com/Mouseww/anything-analyzer && cd anything-analyzer && npm install && npm run dev` — launches the Electron app with the embedded browser and MITM proxy on port 8888.
- For browser traffic: use the built-in browser tab, navigate to your target site, and all requests auto-populate in the unified session view.
- For CLI/script traffic: set `HTTP_PROXY=http://127.0.0.1:8888 HTTPS_PROXY=http://127.0.0.1:8888` as env vars before running `curl` or a Python `requests` script — the tool handles cert installation for HTTPS decryption.
- Trigger the AI analysis: select a session or individual request group, click 'AI Analysis', and point it at your OpenAI/local LLM endpoint — it returns a structured report covering auth patterns, payload structure, and any detected signing/encryption.
- For MCP/agent integration: enable the MCP Server mode (exposed on a local port), then add it as a tool in your Cursor or Claude Desktop config so your coding agent can query live captured traffic as context while you write integration code.
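Before handing a captured session to the AI analysis step, it can help to pre-group requests by host and auth scheme so the prompt focuses on one pattern at a time. A rough sketch, assuming a simplified capture shape (`CapturedRequest` is illustrative, not the tool's actual export format):

```typescript
// Simplified captured-request shape (an assumption for this sketch).
interface CapturedRequest {
  url: string;
  headers: Record<string, string>; // header names assumed lowercase
}

// Count requests per "host (auth scheme)" bucket, e.g.
// "api.example.com (Bearer)", so the AI-analysis prompt can be
// scoped to one authentication pattern at a time.
function groupByHostAndAuth(reqs: CapturedRequest[]): Record<string, number> {
  const groups: Record<string, number> = {};
  for (const r of reqs) {
    const host = new URL(r.url).host;
    const scheme = r.headers["authorization"]?.split(" ")[0] ?? "none";
    const key = `${host} (${scheme})`;
    groups[key] = (groups[key] ?? 0) + 1;
  }
  return groups;
}
```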
How I could use this
- Reverse-engineer Supabase's realtime WebSocket protocol or any undocumented third-party API you want to integrate into your blog (e.g., a social platform's embed API), then use the AI-generated protocol report to write a typed TypeScript client — document the entire process as a blog post showing the MITM → AI report → typed SDK pipeline.
- Build a 'API compatibility checker' career tool: use Anything Analyzer to capture the exact HTTP traffic a job board or LinkedIn sends when you update a profile, then write a Next.js route that programmatically replicates those calls to auto-sync your resume data across platforms — useful for a portfolio piece demonstrating practical reverse engineering skills.
- Pipe the MCP server output into your blog's AI writing assistant: capture your own blog's Supabase API calls during content editing sessions, feed that traffic context to an LLM via MCP, and build a feature that auto-suggests related posts or detects missing content gaps based on what API endpoints are actually being hit during reader sessions.
5. alchaincyf/darwin-skill
1,087 stars this week · HTML
Darwin-skill is an autonomous optimization loop for Claude Code SKILL.md files — it evaluates, improves, tests, and either keeps or reverts changes using an 8-dimension scoring rubric, so your AI agent skills compound over time instead of drifting.
Use case
If you're building Claude Code skills (SKILL.md files) for tasks like blog post drafting, SEO analysis, or code review, you have no systematic way to know if a skill actually performs better after you edit it. Darwin-skill solves this by running a structured evaluate→improve→test→keep/revert cycle: for example, you tweak your 'generate-post-outline' skill, and the system scores it across 8 dimensions (including live test runs), only committing the change if it measurably improves. Without this, skill degradation is invisible.
Why it's trending
Karpathy's autoresearch repo dropped recently and sparked immediate interest in applying self-improving loops beyond model training — darwin-skill is a direct, practical port of that idea to the Claude Code skill ecosystem, which is itself exploding in adoption across Codex, Trae, and CodeBuddy toolchains.
How to use it
1. Install the skill into your Claude Code environment: run `npx skills add alchaincyf/darwin-skill` in your project root.
2. Create or point to an existing SKILL.md you want to optimize — for example, a skill that generates blog post metadata from a draft.
3. Create a `test-prompts.json` file with 3-5 representative inputs that reflect real usage of that skill (e.g., actual blog post excerpts you'd pass to it).
4. Invoke darwin-skill via Claude Code with a prompt like: 'Run darwin-skill optimization on /skills/generate-metadata.md using test-prompts.json' — it will score the current skill, attempt an improvement, re-run tests, and either commit or revert.
5. Review the structured diff and scores it outputs, confirm you're happy, then let it proceed to the next skill in your queue.
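The keep/revert decision at the heart of that cycle amounts to a ratchet. A sketch of the idea; the rubric dimensions and the per-dimension regression threshold here are assumptions, not darwin-skill's actual scoring code:

```typescript
// Hypothetical score report: rubric dimensions mapped to points.
type Scores = Record<string, number>;

// The "ratchet": keep a candidate skill revision only if its total
// strictly beats the current version AND no single dimension drops
// by more than `maxRegression` points. Otherwise revert.
function keepOrRevert(
  current: Scores,
  candidate: Scores,
  maxRegression = 2,
): "keep" | "revert" {
  const total = (s: Scores) => Object.values(s).reduce((a, b) => a + b, 0);
  for (const dim of Object.keys(current)) {
    if ((current[dim] ?? 0) - (candidate[dim] ?? 0) > maxRegression) {
      return "revert"; // a big regression on any dimension loses
    }
  }
  return total(candidate) > total(current) ? "keep" : "revert";
}
```

Ties revert by design: a change that doesn't measurably improve the skill isn't worth the churn.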
How I could use this
- Build a 'blog-writing' SKILL.md for Claude Code that handles your specific post format (frontmatter, SEO slug, reading time estimate, internal link suggestions) — then run darwin-skill against a test set of your 10 most-read posts to iteratively improve the skill until it consistently matches your editorial style without manual cleanup.
- Create a career-tool skill for cover letter generation tuned to specific job description patterns, then use darwin-skill's 8-dimension rubric (especially the 25-point live performance score) to automatically identify which prompt structures produce letters that pass your own checklist — giving you a measurable, version-controlled skill instead of a prompt you paste into ChatGPT each time.
- Wire darwin-skill into a nightly GitHub Action that runs optimization passes on your Supabase query-generation skill — the one that translates natural language blog analytics questions into SQL — using last week's failed queries as the test-prompts.json, so the skill self-improves against real production failures while the ratchet mechanism prevents regressions.
6. browser-use/video-use
919 stars this week · Python
video-use lets Claude Code autonomously edit raw video footage into polished final cuts using ffmpeg, ElevenLabs, and Manim — all via chat prompts with no video editing GUI.
Use case
Developers and content creators who record tutorials, talks, or demos hate the manual grind of cutting filler words, syncing subtitles, and color grading in Premiere or DaVinci. With video-use, you drop a folder of raw .mp4 takes, type 'turn these into a 3-minute tutorial', and get back a clean final.mp4 with burnt subtitles, audio fades, and color grading — no timeline scrubbing required.
Why it's trending
Claude Code's agentic skill system just matured enough to support multi-step media pipelines, and this is one of the first real-world demos showing it can orchestrate parallel sub-agents (one per animation) against binary artifacts like video — a major jump beyond text-only Claude workflows.
How to use it
- Clone and symlink into Claude Code skills: `git clone https://github.com/browser-use/video-use && ln -s "$(pwd)/video-use" ~/.claude/skills/video-use`
- Install deps: `pip install -e . && brew install ffmpeg yt-dlp`
- Add your ElevenLabs key to `.env` for AI voiceover/subtitle generation
- Navigate to your raw footage folder and launch Claude: `cd ~/Videos/my-tutorial && claude`
- Prompt Claude: 'cut filler words, add 2-word uppercase subtitles, warm color grade, export final.mp4' — approve the proposed strategy and let it render
How I could use this
- Record a raw 10-minute screen capture of building a Next.js feature, drop it into video-use, and let Claude auto-edit it into a tight 3-minute blog embed — then ship the video alongside your written post with zero manual editing, making your blog the only dev blog with polished video walkthroughs for every article.
- Record yourself doing a live coding interview or system design walkthrough, run it through video-use to cut dead air and add burnt subtitles, then embed the final clip on your portfolio as a 'see me think' section — far more compelling to hiring managers than a resume bullet point.
- Build a Next.js API route that accepts a Supabase storage path to raw video, shells out to video-use via a Python subprocess with a predefined prompt template (e.g. 'make this an AI explainer with Manim overlays'), polls for `edit/final.mp4`, and auto-publishes it to your blog — giving you a one-click 'publish video post' workflow from raw footage.
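The "poll for the output file" step in that route can be factored into a generic helper. A sketch with the check injected as a callback for testability; in the real route, `check` might wrap `fs.promises.access` on the `edit/final.mp4` path:

```typescript
// Poll until `check` resolves true or `timeoutMs` elapses.
// Returns true on success, false on timeout. The interval/timeout
// defaults are arbitrary placeholders.
async function waitFor(
  check: () => Promise<boolean>,
  { intervalMs = 1000, timeoutMs = 60_000 } = {},
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```

Injecting the check also keeps the route honest about failure: a render that never produces `final.mp4` resolves `false` instead of hanging the request forever.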
7. lewislulu/html-ppt-skill
891 stars this week · HTML
A zero-build-step HTML/CSS/JS presentation engine with 36 themes, 31 layouts, and a BroadcastChannel-powered presenter mode — essentially a self-contained Reveal.js alternative you can deploy as a static file.
Use case
The real problem: generating polished slide decks programmatically from AI output is painful because PowerPoint/Google Slides have no good headless API, and Reveal.js requires a Node build pipeline. This repo solves it by giving an LLM agent a single HTML template it can clone and populate with pure string manipulation — no npm, no webpack, no server. Concrete example: an AI writing assistant generates a '5 slides on RAG architecture' deck by forking a template file, injecting slide content into pre-built layout divs, and serving it instantly as a static asset from Supabase Storage.
Why it's trending
AgentSkill tooling is spiking right now as developers build autonomous coding agents (Cursor, Claude Code, GPT-4o) that need artifact generation beyond markdown — HTML presentations are the next natural output format. This repo is essentially a ready-made skill plugin for those agents, which is a workflow pattern developers are actively wiring up this week.
How to use it
- Clone the repo and open any file in `decks/` directly in a browser — no build step, just `open decks/theme-aurora.html`.
- Pick a layout by copying an existing `<section data-layout='two-col'>` block and swapping in your content — layout classes map 1:1 to the 31 documented layouts in `references/layouts.md`.
- Press `S` during presentation to open presenter mode; a second BroadcastChannel-synced window appears with speaker notes, timer, and next-slide preview.
- To generate decks programmatically, treat the HTML as a template string: `const deck = fs.readFileSync('template.html','utf8').replace(/{{SLIDE_N_TITLE}}/g, aiOutput.title)` — no DOM parsing needed.
- Deploy the output file to Supabase Storage with public access and return the URL — the entire deck is one self-contained HTML file with no external dependencies.
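The template-string approach generalizes to a small placeholder-filling helper. The `{{NAME}}` convention below follows the snippet above, but the exact placeholder names in the repo's templates are an assumption:

```typescript
// Fill {{PLACEHOLDER}} tokens in a deck template with generated
// content: pure string manipulation, no DOM parsing. Unknown
// placeholders are left intact so a second pass can fill them.
function fillTemplate(template: string, slots: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => slots[name] ?? match);
}
```

An LLM agent can then produce just the `slots` object (title, bullet text, speaker notes) and never needs to touch HTML at all.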
How I could use this
- Build a 'Post to Deck' button on Henry's blog: when a post is published, an Edge Function calls GPT-4o to extract 5-7 key points, slots them into this template's `two-col` layout sections, uploads the resulting HTML to Supabase Storage, and surfaces a shareable /slides/[slug] URL — turning every blog post into a conference-ready talk asset automatically.
- Create a 'Portfolio Pitch Deck' generator for Henry's career tools: given a job description pasted into a form, an AI agent maps Henry's resume data (stored in Supabase) to the `case-study` layout template, producing a role-specific 8-slide narrative deck (problem → Henry's solution → measurable impact) that downloads as a single HTML file — far more memorable than a PDF resume.
- Wire this as a literal AgentSkill in a Next.js AI chat route: when the user's message contains intent like 'make me slides about X', the streaming response calls a tool `generate_deck(topic, slide_count, theme)` that forks the aurora theme, injects Claude-generated speaker notes into the `<aside data-notes>` tags, saves to Supabase Storage, and streams back an iframe embed — giving Henry's blog an AI feature competitors won't have.
8. BuilderPulse/BuilderPulse
876 stars this week · various · ai builders indiehackers
BuilderPulse is a daily AI-curated digest that scans 300+ signals across HN, GitHub, Reddit, and Product Hunt to surface actionable build ideas for indie hackers every morning.
Use case
Indie hackers waste hours manually monitoring trend sources just to find a viable gap to build into. BuilderPulse solves the discovery bottleneck — for example, it surfaced the €54K Firebase billing horror story and immediately framed it as a concrete product idea (a cross-cloud billing tripwire daemon), compressing hours of trend research into a two-hour build opportunity.
Why it's trending
It's gaining traction this week because the AI tooling landscape is moving so fast that manual trend-following is genuinely untenable — new model launches, viral cost horror stories, and emerging GitHub repos are creating real product gaps daily, and builders want a single filtered signal rather than five open tabs.
How to use it
1. Star and Watch the repo on GitHub, then subscribe to the RSS feed at `https://github.com/BuilderPulse/BuilderPulse/commits/main.atom` — new daily reports commit to main every morning, so your RSS reader (Inoreader, Feedly, etc.) fires automatically.
2. Each day's report lives at `en/YYYY/YYYY-MM-DD.md` — parse these markdown files programmatically if you want to pipe the content into your own tooling: `curl https://raw.githubusercontent.com/BuilderPulse/BuilderPulse/main/en/2026/2026-04-17.md`
3. Use the '💡 If you had 2 hours, build...' prompt at the top of each report as a daily writing or project seed — it's already validated against trending signals so you're not guessing at relevance.
4. Cross-reference the '20 questions' structure in each report against your own backlog: if a pain point surfaces in the report AND you've seen it in your own Supabase logs or blog comments, that's a high-confidence signal worth acting on.
5. For automation, set up a GitHub Action that fetches the latest report markdown on a cron schedule, extracts the top 3 build ideas via an OpenAI call, and posts them as a draft in your CMS or Notion database.
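The URL construction and seed extraction from steps 1-3 can be sketched in a few lines. The `en/YYYY/YYYY-MM-DD.md` path follows the layout described above; the 'If you had 2 hours' marker text is an assumption based on that description, so verify it against a real report:

```typescript
// Build the raw-file URL for a given ISO date, following the
// en/YYYY/YYYY-MM-DD.md layout of the repo.
function reportUrl(date: string): string {
  const year = date.slice(0, 4);
  return `https://raw.githubusercontent.com/BuilderPulse/BuilderPulse/main/en/${year}/${date}.md`;
}

// Pull the daily build seed out of the report markdown. The marker
// string is assumed; adjust after inspecting an actual report.
function extractBuildSeed(markdown: string): string | null {
  const line = markdown.split("\n").find((l) => l.includes("If you had 2 hours"));
  return line ? line.replace(/^[#>\-\s💡]+/u, "").trim() : null;
}
```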
How I could use this
- Add a 'Builder Pulse' widget to Henry's blog sidebar that fetches today's top 3 build ideas from the raw GitHub markdown via a Next.js API route with 1-hour ISR caching — giving readers a live 'what to build today' panel that costs $0 and drives daily return visits.
- Build a personal 'opportunity radar' tool for Henry's portfolio: a Supabase Edge Function that runs nightly, fetches the latest BuilderPulse report, uses GPT-4o to score each build idea against Henry's stated skills (Next.js, Supabase, TypeScript), and emails him only the top match — a concrete career-differentiation artifact he can show in interviews as a shipped AI agent.
- Use the BuilderPulse archive (`en/` folder, 7+ days of reports) as a fine-tuning or RAG dataset for a blog post idea generator: embed each daily report into a Supabase pgvector table, then when Henry starts drafting a post, query the nearest trending signals to auto-suggest an angle that's timely — turning his blog into a trend-reactive content engine rather than a static portfolio.
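The "query the nearest trending signals" step boils down to cosine similarity between embeddings, the same ranking pgvector's cosine-distance operator computes server-side (distance = 1 - similarity). A reference implementation for prototyping before wiring up the database:

```typescript
// Cosine similarity between two embedding vectors. Higher is more
// similar; pgvector's cosine distance is 1 minus this value.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```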
9. Manavarya09/design-extract
868 stars this week · JavaScript · agent-skill agent-skills ai claude-code-plugin
A CLI tool that reverse-engineers any website's complete design system into 8 ready-to-use output formats (Tailwind config, shadcn theme, W3C tokens, etc.) in a single command.
Use case
When you want to match or be inspired by a site's visual design (say, Linear or Vercel), you'd normally spend hours manually inspecting colors, font stacks, spacing scales, and shadow values in DevTools. designlang automates that entire audit — run it against any URL and get a Tailwind config, React theme, and WCAG score you can drop straight into your project without transcription errors.
Why it's trending
The Claude Code plugin ecosystem is exploding right now, and this repo hits that wave perfectly — it positions itself as an agent skill that lets Claude autonomously extract and apply design systems, which is a concrete, immediately useful workflow that developers are actively looking for.
How to use it
- Run extraction against any site you want to reference: `npx designlang https://linear.app --full`
- Check the generated `linear-tailwind.config.js` and copy the `theme.extend` block directly into your project's Tailwind config.
- Import the generated `linear-react-theme.js` into your app's ThemeProvider to get typed CSS variables matching Linear's exact design language.
- Feed the `linear-design-language.md` file to Claude or GPT-4 with a prompt like 'Build a card component that matches this design system' — the markdown is structured specifically for LLM consumption.
- Use the `*-preview.html` file to QA the extracted tokens visually and check the WCAG accessibility score before committing anything.
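Copying the `theme.extend` block can be automated as a small merge so extracted tokens layer over your existing config instead of replacing it. The nested-record shape below is an assumption about what the generated config contains (Tailwind's `extend` is in general more deeply nested):

```typescript
// Simplified view of a theme.extend section: token groups (colors,
// boxShadow, fontFamily...) mapping names to CSS values. Real
// Tailwind configs can nest deeper; this sketch covers one level.
type Extend = Record<string, Record<string, string>>;

// Layer extracted tokens over the current config: extracted values
// win on conflicts, untouched groups and tokens are preserved.
function mergeThemeExtend(current: Extend, extracted: Extend): Extend {
  const merged: Extend = { ...current };
  for (const [group, tokens] of Object.entries(extracted)) {
    merged[group] = { ...(merged[group] ?? {}), ...tokens };
  }
  return merged;
}
```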
How I could use this
- Run designlang against 5 top dev-focused blogs (Lee Robinson, Josh Comeau, Dan Abramov's overreacted.io, etc.), extract their design tokens, and build a 'blog theme switcher' feature on Henry's blog that lets visitors toggle between those design languages — with the extracted Tailwind configs powering each theme dynamically.
- Build a 'company design audit' tool for career prep: before interviewing at a company, run designlang on their public marketing site, feed the markdown to Claude, and auto-generate a portfolio piece or code sample that visually matches their brand — showing interviewers you already understand their design system before day one.
- Integrate designlang into an AI post-generation pipeline: when Henry writes a new blog post that references an external product or site, a Supabase Edge Function triggers designlang on that URL, extracts the brand colors and logo treatment, and uses them to auto-style the embedded link preview card for that post — making every external reference visually on-brand for its source.
10. sogonov/anubis
867 stars this week · Kotlin
Anubis is an Android app manager that uses system-level app disabling (not sandboxing) to enforce strict VPN-in/VPN-out network policies per app group, preventing apps from ever detecting or bypassing your VPN.
Use case
The real problem: privacy-sensitive apps (banking, location-based, tracking-heavy) can still detect VPN presence through the shared network stack even inside work profile sandboxes like Island or Shelter. Anubis solves this by completely disabling apps via `pm disable-user` when VPN conditions aren't met — a disabled app runs zero code, period. Concrete example: you route a social media app through a VPN-only group, so it literally cannot launch or phone home unless your VPN tunnel is active.
Why it's trending
Surging interest in mobile privacy tooling post-data-broker exposure cycles, plus Shizuku's growing adoption making root-free system-level Android control mainstream — Anubis hits both trends simultaneously without requiring a rooted device.
How to use it
1. Install Shizuku on your Android device (requires ADB pairing once via `adb tcpip 5555 && adb connect <device-ip>`) and start the Shizuku service.
2. Install Anubis APK from the releases page and grant it Shizuku permission when prompted.
3. Open Anubis, tap 'New Group', select 'VPN Only' policy, and add apps like your browser or social apps to the group.
4. Set your VPN client (WireGuard, Mullvad, etc.) in Anubis settings — it will auto-start/stop the VPN client and freeze/unfreeze group apps based on VPN state.
5. Long-press app icons in the Anubis launcher to pin home screen shortcuts that handle the full freeze→VPN→launch orchestration in one tap.
How I could use this
- Write a deep-dive blog post titled 'Why Work Profiles Don't Actually Hide Your VPN (And What Does)' — benchmark Anubis vs Island/Shelter with Wireshark captures showing network interface detection, positioning yourself as a serious privacy/Android internals writer rather than a surface-level blogger.
- Build a companion web tool hosted on your blog: a 'Android Privacy Audit' form where users input their app list and get back recommended Anubis group policies (Local/VPN Only/Launch with VPN) based on app category data from an open API like AppBrain — pure client-side TypeScript, no backend needed.
- Create an AI-powered 'Privacy Risk Scorer' Supabase Edge Function that accepts an Android package name, fetches its Play Store permissions via a scraper, and uses GPT-4o to classify which Anubis group policy it should belong to — surface this as a searchable database page on your blog with cached results stored in Supabase.
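The category-to-policy recommendation at the core of that audit form could be sketched as a simple lookup. Both the category names and the policy assignments below are illustrative defaults, not data from Anubis, AppBrain, or the Play Store:

```typescript
// The three Anubis group policies mentioned above.
type AnubisPolicy = "Local" | "VPN Only" | "Launch with VPN";

// Hypothetical mapping from app category to recommended policy.
// Tracking-heavy categories get hard-gated behind the VPN; apps you
// use interactively get the softer launch-with-VPN policy; everything
// else (banking, offline tools) stays off-tunnel by default.
function recommendPolicy(category: string): AnubisPolicy {
  const vpnOnly = new Set(["social", "news", "tracking-heavy"]);
  const launchWithVpn = new Set(["browser", "messaging"]);
  const c = category.toLowerCase();
  if (vpnOnly.has(c)) return "VPN Only";
  if (launchWithVpn.has(c)) return "Launch with VPN";
  return "Local";
}
```

Because it's a pure function over the category string, this runs entirely client-side, matching the no-backend constraint of the audit-form idea above.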