Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. nexu-io/open-design
12,595 stars this week · TypeScript · agent-skills ai-agents ai-design byok
A local-first, open-source UI prototyping engine that turns any coding agent CLI (Claude Code, Gemini, Cursor, etc.) into a design tool with 31 composable skills and 72 pre-built design systems — no Figma license required.
Use case
The real problem: design handoff between 'I have an idea' and 'I have working HTML/CSS' requires either a Figma subscription or manual iteration. Open Design solves this by making your existing Claude Code CLI the rendering engine — you describe a UI in natural language, a Skill handles layout/system selection, and you get sandboxed HTML with export to PDF/PPTX/MP4. Concrete example: you need a polished career dashboard wireframe to show a stakeholder — instead of spending 3 hours in Figma, you run a single agent prompt and export a PDF deck in minutes.
Why it's trending
It dropped the week Anthropic launched Claude Design as a closed product, positioning itself as the immediate BYOK alternative — 12k stars in one week is pure timing arbitrage. Engineers who already use Claude Code daily can slot it in with zero new credentials.
How to use it
- Clone and install: git clone https://github.com/nexu-io/open-design && cd open-design && npm install
- Start the local server: npm run dev — it auto-detects whichever agent CLIs are on your PATH (Claude Code, Gemini CLI, etc.)
- Open the browser UI, pick a Design System (e.g. 'Editorial Serif' or 'Brutalist Grid'), then choose a Skill like ui-generator or slide-deck
- Type a natural-language prompt: 'Build a mobile job-match dashboard card showing match score, salary range, and visa eligibility badge' — the agent CLI executes the Skill and renders a sandboxed preview
- Export as HTML (drop into your Next.js project), PDF (stakeholder deck), or MP4 (social proof video) — no extra tooling needed
How I could use this
- Use the slide-deck Skill to auto-generate a weekly 'Australian Tech Jobs Report' slide deck from your scraped job data — export as PDF and embed it on the blog as a downloadable lead magnet for international graduates.
- Wire the ui-generator Skill into a 'Resume Section Builder' tool: the user describes their role and experience in plain text, Open Design renders a polished resume card component (HTML), and they download it — complements your existing resume analyser without you hand-coding every layout variant.
- Use the sandboxed HTML export to build a 'Visa Pathway Visualiser' prototype: prompt the agent to generate an interactive flowchart UI (485 → 482 → ENS) using one of the timeline-focused Design Systems, then embed the static HTML output directly into your visa-news pages as a rich visual explainer.
2. cursor/cookbook
2,993 stars this week · TypeScript
The Cursor SDK lets you programmatically spawn, prompt, and stream Cursor's coding agent from your own TypeScript apps — treating the agent as an API call rather than an IDE UI.
Use case
When you need a coding agent embedded inside your own product rather than inside an IDE, this SDK is the missing piece. Concrete example: a SaaS that lets users describe a feature, then spins up a Cursor cloud agent to scaffold the code, streams the output live, and returns downloadable artifacts — all without the user ever opening Cursor themselves.
Why it's trending
Cursor just shipped a public API and cloud agent runtime this week, making programmatic agent orchestration possible for the first time — this cookbook is the official example repo that dropped alongside that launch, which explains the 3k stars in a single week.
How to use it
- Get a Cursor API key from cursor.com/dashboard/integrations and set CURSOR_API_KEY in your env.
- Install the SDK: npm install @cursor/sdk
- Create a local agent and stream its output:
import { CursorClient } from '@cursor/sdk';
const client = new CursorClient({ apiKey: process.env.CURSOR_API_KEY });
const run = await client.agent.create({ prompt: 'Add a /health route to this Express app' });
for await (const event of run.stream()) {
  if (event.type === 'delta') process.stdout.write(event.content);
}
- For cloud runs (no local workspace needed), swap agent.create for a cloud agent call and pass a repository ref.
- Poll run.artifacts() after completion to retrieve generated files.
How I could use this
- Build a 'Generate this post's code sample' button on any blog post — readers paste a description, a Cursor cloud agent scaffolds a working repo, and the page streams the agent's progress live via SSE into a terminal-style UI using your existing comic-panel card design.
- Wire a Cursor agent into the resume analyser flow: after Claude identifies skill gaps, auto-generate a personalised coding exercise repo tailored to those gaps (e.g. 'here's a small Next.js project that practises the React patterns your resume is weak on'), stream the scaffold, and let the user download it as interview prep.
- Use the DAG task runner pattern to break down a user's learning path (from your /learn feature) into parallel sub-tasks — each node in the DAG is a Cursor agent that generates a code example for one concept — then stream all of them concurrently and stitch the results into a single interactive study guide page.
3. theori-io/copy-fail-CVE-2026-31431
2,595 stars this week · Python
Proof-of-concept and technical writeup for CVE-2026-31431, a Linux kernel copy-on-write race condition affecting Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16 — including the kernel version shipping on AWS EC2 instances right now.
Use case
Security teams and DevOps engineers running any of these distros need to audit their fleet immediately — especially AWS users where Amazon Linux 2023 kernel 6.18.8-9.213 is the default. The writeup gives defenders the technical detail needed to understand the attack surface, write detection rules, and justify emergency patching to management.
Why it's trending
The CVE was just published with a full technical writeup and working PoC, and it hits the AWS default kernel — that combination creates immediate urgency for a large percentage of production cloud infrastructure. 2,595 stars in a week signals the security community treating this as high severity.
How to use it
- Check your kernel version: uname -r — compare against the affected versions in the table (6.17.0-1007-aws, 6.18.8-9.213.amzn2023, 6.12.0-124.45.1.el10_1, 6.12.0-160000.9-default).
- Read the technical writeup at xint.io/blog/copy-fail-linux-distributions to understand the vulnerability class (CoW race condition) before touching anything else.
- Apply patches via your distro's package manager (apt upgrade, dnf update, zypper update) and reboot — do not defer kernel patches for this class of bug.
- If you cannot patch immediately, review your exposure: is the affected system multi-tenant or accessible by untrusted local users? That is the primary risk vector.
- Subscribe to your distro's security mailing list (ubuntu-security-announce, alas.aws.amazon.com) so you get patch availability notices before PoCs drop.
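If you're auditing more than a handful of hosts, the version check is worth scripting. A minimal sketch, assuming an exact match against the build strings from the writeup's table is sufficient (re-check the advisory before trusting a "not affected" result — the list below is a snapshot copied from this post, not a live feed):

```typescript
// Fleet-audit helper: compare a host's `uname -r` output against the
// affected kernel builds listed in the CVE-2026-31431 writeup.
const AFFECTED_KERNELS = [
  "6.17.0-1007-aws",         // Ubuntu 24.04 LTS (AWS kernel)
  "6.18.8-9.213.amzn2023",   // Amazon Linux 2023 default
  "6.12.0-124.45.1.el10_1",  // RHEL 10.1
  "6.12.0-160000.9-default", // SUSE 16
];

function isAffected(unameR: string): boolean {
  // Strip an architecture suffix (e.g. `.x86_64`) so the raw
  // `uname -r` string matches the table entries.
  const release = unameR.trim().replace(/\.(x86_64|aarch64)$/, "");
  return AFFECTED_KERNELS.includes(release);
}
```

Feed it the output of uname -r collected over SSH and you have a one-file triage report for the whole fleet.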
How I could use this
- Write a 'How to read a CVE writeup' explainer post for junior devs — walk through CVE-2026-31431's structure (affected versions table, race condition explanation, patch diff) as a template for how to triage any future CVE in your infrastructure. This kind of post ranks well and is genuinely scarce in plain English.
- Not a strong fit for career tools (resume analyser, interview prep) — but you could add a 'Security awareness' section to the interview prep tool with a rotating CVE-of-the-week question like 'How would you assess whether your EC2 fleet is exposed to a newly published kernel CVE?' — a real signal question in SRE/DevOps interviews.
- Build a lightweight AI tool that takes a CVE number or NVD URL, calls the Anthropic API (Haiku is cheap enough), and returns a plain-English triage summary: affected versions, attack vector (local vs remote), patch availability, and a one-line severity verdict — useful as a blog widget that demonstrates practical AI use without being a toy example.
4. denuitt1/mhr-cfw
1,349 stars this week · Python
A Domain-Fronting Relay that routes traffic through GAS (Google Apps Script) and forwards it to Cloudflare Workers. Designed to bypass DPI.
5. willchen96/mike
1,140 stars this week · TypeScript
A production-quality OSS AI legal platform that shows you exactly how to build async document ingestion, multi-model AI analysis, and Supabase-backed storage in a Next.js + Express monorepo.
Use case
The real problem: most AI document tools are either toy demos or locked SaaS products — there's no reference implementation showing how to wire async PDF processing, S3 storage, Supabase auth, and swappable LLM providers together in a production codebase. Concrete example: a law firm uploads a 40-page employment contract, the backend converts it via LibreOffice, chunks it, stores it in R2, runs it through whichever model is cheapest/fastest for that document type, and surfaces clause-level analysis in the Next.js UI. The architecture handles the hard parts — large file uploads, format conversion, streaming responses — not just the prompt engineering.
Why it's trending
Legal AI is one of the last document-heavy verticals where OSS tooling is still thin — most repos stop at 'send PDF to GPT', not full ingestion pipelines. Dropping 1,140 stars in a week suggests the community has been waiting for a complete, copy-paste-able reference implementation that uses the actual production stack (Next.js + Supabase + S3 + multi-provider LLM) rather than LangChain abstractions.
How to use it
- Clone and scaffold: git clone https://github.com/willchen96/mike && npm install --prefix backend && npm install --prefix frontend
- Provision services: create a Supabase project, a Cloudflare R2 bucket, and grab at least one LLM API key (OpenAI or Anthropic), then copy both .env.example files and fill them in.
- Seed the schema: paste backend/migrations/000_one_shot_schema.sql into the Supabase SQL editor — this is a one-shot schema, no migration runner needed.
- Run both servers: npm run dev --prefix backend (Express on :8080) + npm run dev --prefix frontend (Next.js on :3000) — study how the backend streams document processing events back to the frontend.
- Port the document ingestion pattern: the key file to read is the backend's document processing module — extract the LibreOffice → PDF → chunk → embed → store pipeline and adapt it for your own document types (resumes, visa letters, employment contracts).
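The chunk stage of that pipeline is worth understanding before you port it. A sketch of the usual shape — my own simplification, not Mike's actual module (real pipelines often split on sentence or heading boundaries; fixed windows with overlap keep the idea short while preserving cross-chunk context):

```typescript
// Sliding-window chunker: the "chunk" stage in a
// convert → chunk → embed → store pipeline.
interface Chunk {
  index: number;
  start: number; // character offset into the source document
  text: string;
}

function chunkText(doc: string, size = 1000, overlap = 200): Chunk[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: Chunk[] = [];
  for (let start = 0; start < doc.length; start += size - overlap) {
    chunks.push({ index: chunks.length, start, text: doc.slice(start, start + size) });
    if (start + size >= doc.length) break; // final window reached the end
  }
  return chunks;
}
```

Keeping the start offset on every chunk is the detail that matters later: it lets the UI highlight the exact clause in the original document that an analysis result came from.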
How I could use this
- Visa document analyzer for the blog: wire Mike's PDF ingestion pipeline into a /api/visa/analyze route where users upload their 485/482 grant letters or CoE documents — the backend extracts work condition clauses, visa expiry, and streams a plain-English summary. Same Supabase + R2 stack you already run, and it's a genuine pain point for your audience that no other career tool covers.
- Employment contract checker for career tools: adapt Mike's multi-model document pipeline to let users upload Australian job offer letters — run clause extraction against Australian employment law benchmarks (probation periods, superannuation, restraint of trade) and flag anything non-standard. This directly serves the 482 visa holders on your platform who are signing contracts they don't fully understand and can't afford a lawyer to review.
- Resume-to-job-description gap analyzer: use Mike's chunking + embedding architecture (not just raw GPT calls) to build a semantic diff between a user's resume and a job description — chunk both documents, embed them with the same model, and surface the specific skills/phrases in the JD that are absent or weak in the resume. The structured document processing approach means you handle multi-page resumes and lengthy JDs without hitting context limits, which naive implementations fail on.
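The gap-analyzer idea reduces to a small core: embed every JD chunk, find its nearest resume chunk, and flag JD chunks whose best match falls below a threshold. A sketch with an injected embedding function — the function names and the 0.75 threshold are my assumptions, not Mike's code; any real model (OpenAI, a local sentence-transformer) slots in where the stub goes:

```typescript
// Semantic gap detection: a JD chunk with no sufficiently-similar
// resume chunk is a candidate "missing skill".
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard zero vectors
}

function findGaps(
  jdChunks: string[],
  resumeChunks: string[],
  embed: (s: string) => number[],
  threshold = 0.75,
): string[] {
  const resumeVecs = resumeChunks.map(embed);
  return jdChunks.filter((chunk) => {
    const v = embed(chunk);
    const best = Math.max(...resumeVecs.map((r) => cosine(v, r)));
    return best < threshold; // nothing in the resume covers this JD chunk
  });
}
```

Injecting embed as a parameter also makes the logic testable with a cheap stub, which matters once this sits behind a paid API.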
6. DanOps-1/Gpt-Agreement-Payment
885 stars this week · Python · adversarial-ml anti-fraud bug-bounty captcha-solver
End-to-end protocol replay toolkit for ChatGPT Plus/Team/Pro subscriptions, with a from-scratch hCaptcha visual solver and empirical anti-fraud research.
7. darrylmorley/whatcable
872 stars this week · Swift · apple-silicon hardware-info iokit mac-app
WhatCable is a macOS menu bar app that reads IOKit/USB Power Delivery data to tell you exactly what each USB-C cable is capable of — speed, wattage, and the specific bottleneck when your Mac charges slowly.
Use case
USB-C cables are physically identical but span a 40x performance range: a USB 2.0 charge-only cable and a Thunderbolt 4 240W cable use the same connector. A developer plugs in a 96W charger and gets 30W — WhatCable pinpoints whether the cable's e-marker chip, the charger's PDO negotiation, or the Mac itself is the limiting factor. Without this, you're guessing by swapping cables one at a time.
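The negotiation WhatCable is diagnosing can be modelled in a few lines. A deliberately simplified sketch of USB Power Delivery (real PD adds PPS and EPR ranges, and the PDO values below are illustrative): the charger advertises voltage/current pairs, the cable's e-marker caps current, and the sink takes the best wattage it can accept — which is exactly why a high-wattage charger behind a 3 A cable quietly delivers less.

```typescript
// Simplified USB-PD negotiation: best wattage given the charger's
// fixed PDOs, the cable's e-marker current limit, and the sink's
// maximum accepted voltage.
interface Pdo { volts: number; amps: number }

function negotiatedWatts(chargerPdos: Pdo[], cableAmpLimit: number, sinkMaxVolts: number): number {
  let best = 0;
  for (const pdo of chargerPdos) {
    if (pdo.volts > sinkMaxVolts) continue;         // sink rejects this voltage
    const amps = Math.min(pdo.amps, cableAmpLimit); // e-marker caps the current
    best = Math.max(best, pdo.volts * amps);
  }
  return best;
}
```

Run it with a 20 V / 4.8 A top PDO and you can see the cable's 3 A limit turn a ~96 W charger into a 60 W one — the kind of bottleneck WhatCable surfaces per port.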
Why it's trending
Apple Silicon M3/M4 Macs now support USB4 80Gbps and 140W+ charging, making cable capability gaps more costly than ever — a wrong cable can halve your GPU-over-Thunderbolt bandwidth or cut charging to a trickle. With USB-C now the only port on MacBooks, the 'why is this slow?' question hits every developer daily.
How to use it
- Download the latest .dmg from GitHub Releases and drag WhatCable.app to /Applications — no Homebrew, no install script.
- Launch it; a cable icon appears in the menu bar. Click it to see a popover with per-port diagnostics.
- To explore the underlying IOKit data yourself, open Terminal and run: system_profiler SPThunderboltDataType SPUSBDataType — WhatCable parses the same tree via IOServiceGetMatchingServices.
- To build from source: clone the repo, open WhatCable.xcodeproj in Xcode 15+, select your Mac as target, and hit Run — no dependencies, pure SwiftUI.
- The key IOKit entitlement you need for a signed distribution build is com.apple.security.device.usb in the entitlements file — worth noting if you fork it.
How I could use this
- Write a deep-dive post titled 'What macOS knows about your USB-C cables (and how to read it)' — walk through the IOKit calls WhatCable uses (IOServiceGetMatchingServices, IORegistryEntryCreateCFProperties) with Swift snippets. This is a niche developer topic with almost no good English-language coverage and will rank well for 'macOS IOKit USB Swift' searches.
- Build a 'Dev Workstation Checker' page for your career tools: a checklist-style guide where international IT graduates verify their setup (cable speeds, monitor bandwidth, docking station compatibility) before starting an AU tech job. The WhatCable source code gives you the exact IOKit keys to reference — kUSBCurrentVbus, kUSBVendorID — so the advice is technically grounded, not generic.
- Use the WhatCable concept as a framing device for an AI explainer feature: a Claude-powered 'decode my system info' tool where a user pastes their system_profiler SPUSBDataType output and gets a plain-English breakdown of every device, speed negotiated, and potential bottleneck — directly analogous to what WhatCable does in the GUI, but web-based and accessible to non-Mac users reading your blog.
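For that web-based decoder, the first step is parsing the pasted output. A minimal sketch of a parser for the indented Key: Value text that system_profiler prints — the exact layout varies by macOS version, so treat these rules as a starting point to verify against real output, not a spec:

```typescript
// Parse `system_profiler SPUSBDataType`-style text into a flat list of
// entries. Header lines ending in ":" (e.g. "USB 3.1 Bus:",
// "Magic Keyboard:") open a new entry; "Key: Value" lines fill it.
function parseProfilerEntries(text: string): Record<string, string>[] {
  const entries: Record<string, string>[] = [];
  let current: Record<string, string> | null = null;
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (!line) continue;
    if (line.endsWith(":")) {
      current = { name: line.slice(0, -1) };
      entries.push(current);
    } else if (current) {
      const sep = line.indexOf(":");
      if (sep > 0) current[line.slice(0, sep).trim()] = line.slice(sep + 1).trim();
    }
  }
  return entries;
}
```

Once the text is structured, the Claude prompt gets clean per-device JSON instead of raw terminal output, which makes the plain-English breakdown far more reliable.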
8. b-nnett/codex-plusplus
779 stars this week · TypeScript
Codex++ is a plugin runtime that patches OpenAI's Codex desktop app so you can inject custom ESM tweaks — persistent UI changes, workflow fixes, and new features — without touching the app bundle.
Use case
Codex ships as a locked Electron binary. If the default UI misses something you need — a keyboard shortcut, a persistent system prompt, a sidebar panel — you're stuck waiting for OpenAI to ship it. Codex++ patches app.asar once at install time, then loads tweaks from your home directory on every launch. Concrete example: you write a 30-line ESM module that pre-populates every new Codex session with your team's coding standards; no forks, no rebuilds, just save and reload.
Why it's trending
Codex launched its desktop app in late April 2026 and immediately attracted power users who hit its UI limitations within days. Codex++ appeared within the same week, hitting that exact frustration window when demand for customisation is highest and the official plugin API doesn't exist yet.
How to use it
- Install globally:
bun install -g github:b-nnett/codex-plusplus && codexplusplus install— this patches your localapp.asarand backs up the original. - Navigate to
~/.codex-plusplus/tweaks/(created by the installer) and create a new directory for your tweak, e.g.my-tweak/. - Add a manifest and entry point — the minimal shape is
{ name, version, start(), stop() }as a named ESM export. - Restart Codex and open Settings → Tweaks; enable your tweak from the injected UI tab.
- Iterate: edit the ESM file, hit reload in the Tweaks tab — no reinstall needed since the runtime lives outside the app bundle.
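Based on that { name, version, start(), stop() } shape, a minimal tweak might look like the sketch below — the 'pre-populate every session with team standards' example from the use case, reduced to its skeleton. The hooks registry and the draft-hook signature are stand-ins of mine; the real Codex++ runtime API beyond the manifest shape isn't documented here:

```typescript
// my-tweak/index.ts — a minimal Codex++-style tweak. start() registers
// a hook that prepends a preamble to new session prompts; stop() must
// undo everything start() set up so reloads are clean.
type SessionHook = (draft: { prompt: string }) => void;
const hooks: SessionHook[] = []; // stand-in for the runtime's hook registry

const PREAMBLE = "Follow team standards: strict TypeScript, no default exports.";

export const tweak = {
  name: "session-preamble",
  version: "0.1.0",
  hook: undefined as SessionHook | undefined,

  start() {
    // Called when the tweak is enabled in Settings → Tweaks.
    this.hook = (draft) => { draft.prompt = `${PREAMBLE}\n\n${draft.prompt}`; };
    hooks.push(this.hook);
  },

  stop() {
    // Called on disable or reload — deregister the hook.
    if (this.hook) {
      const i = hooks.indexOf(this.hook);
      if (i >= 0) hooks.splice(i, 1);
    }
  },
};
```

The symmetric start/stop pair is what makes the edit-and-reload loop in the last step safe: the runtime can tear a tweak down without restarting the app.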
How I could use this
- Write a githot post benchmarking three Codex++ tweaks side-by-side — a system-prompt injector, a task-history sidebar, and a token-counter overlay — with the actual tweak source included in a GitHub Gist. This is exactly the kind of concrete, runnable content that ranks for 'Codex desktop customisation' searches and drives repeat visits from power users.
- Build a Codex++ tweak that hits your /api/interview endpoint: when Codex opens a file matching *interview* or *resume*, the tweak silently fetches a relevant interview question from your pool and injects it as a pinned comment at the top of the session context. Dogfoods your own API and makes a compelling demo for the TechPath AU career-tools pitch.
- Use Codex++ as a reference architecture for building your own runtime injection pattern in the blog's AI features — specifically, the start()/stop() lifecycle with a manifest is a clean model for registering per-page AI assistants in a Next.js app: each page declares what tools it exposes, the root layout boots only the relevant ones, and navigation triggers stop() on cleanup. Worth a dedicated post contrasting it with React Context for AI state.
9. GENEXIS-AI/chromex
741 stars this week · TypeScript
A Chrome MV3 side-panel extension that pipes live browser context (active tab, screenshots, PDFs, open tabs) into Codex via a local native bridge — keeping API keys off the extension entirely.
Use case
The core problem: browser extensions can't securely hold API keys, and LLM chat tools have no awareness of what's actually on your screen. Chromex solves both by running a local native host process that holds credentials and proxies requests, while the extension sends rich page context (DOM text, screenshots, tab list, uploaded files) alongside each prompt. Concrete example: a dev has 12 tabs open researching a Supabase RLS bug — Chromex lets them ask 'compare what these 4 tabs say about RLS policies' without copy-pasting anything.
Why it's trending
Codex's new model capabilities (particularly image and file handling) landed recently, and developers are racing to build browser-native wrappers around it — Chromex is one of the first clean MV3 implementations with a native bridge architecture that passes Chrome's security model. The 741-star week suggests it caught the wave of the Codex API launch hype.
How to use it
- Clone the repo and run npm install && npm run build in the root to produce the chromex-extension/ folder.
- Install the native bridge: run node scripts/install-native-host.js — this registers a local process that holds your Codex API key and proxies requests from the extension.
- In Chrome, go to chrome://extensions, enable Developer Mode, click 'Load unpacked', and select the chromex-extension/ folder.
- Open the side panel (Ctrl+Shift+Y or via the extension icon), paste your OpenAI/Codex API key into the native host config (not the extension UI — that's the point), and start chatting.
- To send page context: click 'Use current tab' before submitting — the extension extracts readable text from the DOM and appends it to your prompt automatically.
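Under the hood, the extension and the native host talk over stdin/stdout using Chrome's native messaging framing: a 32-bit little-endian byte length followed by UTF-8 JSON. That framing is Chrome's documented wire format; what Chromex actually puts inside the JSON payload is an implementation detail not shown here. A sketch of both directions:

```typescript
// Chrome native messaging frames: 4-byte LE length prefix + UTF-8 JSON.
// This is what the native host reads from stdin and writes to stdout.
function encodeNativeMessage(msg: unknown): Buffer {
  const body = Buffer.from(JSON.stringify(msg), "utf8");
  const header = Buffer.alloc(4);
  header.writeUInt32LE(body.length, 0); // length prefix comes first
  return Buffer.concat([header, body]);
}

function decodeNativeMessage(frame: Buffer): unknown {
  const len = frame.readUInt32LE(0);
  return JSON.parse(frame.subarray(4, 4 + len).toString("utf8"));
}
```

Note the length is a byte count, not a character count — multi-byte UTF-8 in page text is the classic bug here, which is why the encoder measures the serialized Buffer rather than the string.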
How I could use this
- Build a 'research assistant' mode for Henry's blog posts: a Chromex-style side-panel that reads the current draft in the CMS editor (or a local markdown file via a Chrome extension), then suggests SEO improvements, missing sections, or related internal links from the existing content/posts/ directory — all without leaving the browser.
- For TechPath AU's resume analyser: create a companion Chrome extension that lets users highlight a job listing on Seek or LinkedIn, click 'Analyse for my resume', and have it POST the selected DOM text directly to /api/resume/match — eliminating the copy-paste step that kills conversion. The native bridge pattern means the Supabase session cookie can be read from the browser and forwarded, so the API route already has the authenticated user context.
- Port the voice transcription + page-context pattern to an interview prep tool: a side-panel extension that listens via the Web Speech API while the user reads a job description, then auto-populates the interview question form at /interview with role, company, and JD text extracted from the active tab — turning a 5-field form fill into a one-click action.
10. t8y2/dbx
621 stars this week · Vue · clickhouse database database-client database-management
A Tauri + Vue desktop database client that replaces TablePlus/DBeaver with a single free, open-source app supporting 11 databases including DuckDB and ClickHouse.
Use case
Developers paying $79+ for TablePlus or tolerating DBeaver's sluggish Java UI now have a native-speed free alternative. Concrete scenario: a fullstack dev juggling Supabase (Postgres) for auth/data, Redis for session caching, and DuckDB for local analytics can manage all three from one app without tab-switching or license juggling. Especially relevant for teams onboarding junior devs who can't expense paid tooling.
Why it's trending
Tauri is eating Electron's lunch — the binary is a fraction of the size with native performance, and devs are actively hunting Electron replacements. DuckDB hitting mainstream adoption also means people need a GUI for it beyond the CLI, and DBX is one of the few multi-DB clients that includes it.
How to use it
- Download the latest release binary for your OS from github.com/t8y2/dbx/releases — no install wizard, just run it.
- Click 'New Connection', select your DB type (e.g. PostgreSQL), and paste in your connection string or fill host/port/credentials.
- Browse tables and schemas in the left panel, open the SQL editor with Ctrl+T, run queries directly.
- For Supabase: use the direct Postgres connection string from your Supabase project settings (not the pooler URL) — connection string format: postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres
- For local DuckDB: point it at a .duckdb file path — useful for offline analytics against exported datasets.
How I could use this
- Write a hands-on post: 'I audited my Supabase schema with a free TablePlus alternative' — walk through DBX's interface using your actual TechPath AU tables (resume_analyses, interview_questions, visa_tracker), annotate RLS policies visually, and publish the schema diagram. This hits a high-intent search query from Next.js + Supabase devs.
- Use DBX as a debugging companion while building the resume analyser or visa tracker — connect to your local Supabase instance and write the raw SQL for your career tools' queries first in DBX (where you get autocomplete and instant feedback), then port them to your Supabase query builder. Document this workflow in a dev blog post showing the query → RLS policy → Next.js API route pipeline.
- Connect DBX to a local DuckDB file and load your scraped AU job data into it for offline analysis — run analytical queries to find salary distribution by visa type (482 vs 485) or in-demand skills by city, then pipe those aggregated insights into your salary checker or visa tracker features as pre-computed statistics rather than hitting Postgres on every request.
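The 'pre-computed statistics' idea reduces to a single aggregate. In DuckDB it's one GROUP BY with median(); the TypeScript sketch below shows the same computation over exported rows and, more importantly, the shape of the cached output your salary checker would read. Field names here are illustrative, not your actual schema:

```typescript
// Offline aggregation: median salary per visa subclass over exported
// job rows — the kind of result you'd pre-compute nightly and serve
// as static JSON instead of hitting Postgres per request.
interface JobRow { visa: "482" | "485"; city: string; salary: number }

function medianSalaryByVisa(rows: JobRow[]): Record<string, number> {
  const buckets = new Map<string, number[]>();
  for (const r of rows) {
    const arr = buckets.get(r.visa) ?? [];
    arr.push(r.salary);
    buckets.set(r.visa, arr);
  }
  const out: Record<string, number> = {};
  for (const [visa, salaries] of buckets) {
    salaries.sort((a, b) => a - b);
    const mid = Math.floor(salaries.length / 2);
    out[visa] = salaries.length % 2 === 1
      ? salaries[mid]
      : (salaries[mid - 1] + salaries[mid]) / 2; // even count: average the middle pair
  }
  return out;
}
```

Prototyping the equivalent SQL in DBX first (where you get autocomplete and instant feedback) and then freezing the result as a cached aggregate is exactly the workflow the post above describes.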