Gradland

GitHub Hot — 1 May 2026

1 May 2026 · 21 min read · GitHub · Open Source · Tools

Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.


1. nexu-io/open-design

8,913 stars this week · TypeScript · ai-agents ai-design anthropic byok

Open Design is a local-first, BYOK alternative to Anthropic's closed Claude Design tool — it wraps any coding agent CLI (Claude Code, Cursor, Codex, Gemini CLI) into a design engine backed by 72 brand-grade design systems and 31 composable Skills, outputting sandboxed HTML/PDF/PPTX.

Use case

The real problem is that AI-generated UI consistently breaks your design system — you end up re-describing your tokens, palette, and typography rules on every prompt. Open Design solves this by letting you define your brand system once (or pick one of 72 presets) and then drive all component generation through that context automatically. Concrete scenario: you need a new InterviewPrepCard component that matches your Eastern Ink palette — instead of pasting your CSS variables into each Claude prompt, you run Open Design with a TechPath brand definition and every generated component already uses --vermilion, --panel-shadow, and Lora headings correctly.

Why it's trending

Anthropic's own Claude Design launched this week as an invite-only closed tool, and Open Design is the immediate open-source counterpunch — 8.9k stars in one week is almost entirely reaction to that launch. The multi-CLI auto-detection (it finds whatever agent you have on PATH) also removes the vendor lock-in concern that made Claude Design controversial.

How to use it

  1. Clone and install: git clone https://github.com/nexu-io/open-design && cd open-design && npm install && npm run dev — runs locally on localhost:3000.
  2. Select or define a Design System — pick from 72 presets or create brand/techpath-au.json with your CSS tokens (--vermilion, --ink, --panel-shadow, font stacks).
  3. Open Design auto-detects your agent CLI (it finds claude on PATH if you have Claude Code installed — your CLAUDE_CODE_OAUTH_TOKEN works here).
  4. Use a Skill — e.g. the component skill: describe a JobCard and it generates sandboxed HTML preview + copy-ready TSX styled to your brand definition.
  5. Export: download as HTML snippet to drop into your Next.js component tree, or PDF/PPTX for design reviews.
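For step 2, the brand definition is just a JSON file of design tokens. A hypothetical sketch of what brand/techpath-au.json might contain — field names and values here are illustrative, not the repo's actual schema; check the 72 preset files for the real format:

```json
{
  "name": "TechPath AU",
  "tokens": {
    "--vermilion": "#e34234",
    "--ink": "#1a1a1a",
    "--panel-shadow": "4px 4px 0 rgba(0, 0, 0, 0.85)"
  },
  "fonts": {
    "heading": "Lora, serif",
    "body": "Inter, sans-serif"
  }
}
```

Once a file like this exists, every Skill invocation pulls from it instead of you re-describing tokens per prompt.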

How I could use this

  1. Define an eastern-ink.json brand system with all your CSS custom properties and comic-panel shadow rules, then use Open Design's page Skill to rapidly prototype new blog page layouts (e.g. /learn/diagrams, /visa-news) — get sandboxed HTML previews before writing a single TSX file, cutting your design iteration time from hours to minutes.
  2. Use Open Design's form and card Skills with a TechPath AU brand definition to generate first-pass TSX for new career tool UI (resume upload form, salary checker result card, interview question deck) — the generated components will already reference your design tokens, so the only manual work is wiring in your Supabase data fetching and API calls.
  3. Treat Open Design's Skills architecture as a blueprint for a /api/design/component endpoint in TechPath AU: accept a natural-language component spec, inject your Eastern Ink token context into the Claude Haiku system prompt, and return styled HTML — essentially building your own private version of Open Design scoped to your brand, usable from Claude Code sessions to scaffold new features on demand.

2. cursor/cookbook

2,706 stars this week · TypeScript

The Cursor SDK lets you programmatically spawn and control Cursor's coding agent from your own apps, scripts, and CI pipelines — not just from inside the IDE.

Use case

When you want to automate code generation tasks without a human sitting in the Cursor IDE — for example, a GitHub Action that receives a bug report issue, spins up a Cursor cloud agent with the repo and issue text as context, and opens a PR with a draft fix. The SDK handles streaming agent events, managing conversation state, and retrieving generated artifacts, so you can wire Cursor's agent into any Node.js backend or CLI tool.

Why it's trending

Cursor just shipped a public API and SDK this week, making it the first major AI IDE to expose its coding agent programmatically — developers are racing to build automations and integrations before the obvious use cases get productised. 2,700 stars in a single week signals this is the "oh damn, Cursor is now an API" moment.

How to use it

  1. Generate a Cursor API key at cursor.com/dashboard/integrations and set CURSOR_API_KEY in your env.
  2. Install the SDK: npm install @cursor-so/sdk
  3. Spawn a local agent and stream its output:
import { CursorAgent } from '@cursor-so/sdk';

const agent = new CursorAgent({ apiKey: process.env.CURSOR_API_KEY });
const run = await agent.run({
  prompt: 'Add input validation to the signup form in app/auth/page.tsx',
  workspace: '/path/to/local/repo'
});
for await (const event of run.stream()) {
  if (event.type === 'message') console.log(event.content);
}
  4. For cloud runs (no local workspace needed), swap workspace for repository: 'org/repo' and the agent runs in a sandboxed cloud environment.
  5. Retrieve artifacts (generated files, diffs) from run.artifacts() after the stream closes.
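The for await loop in step 3 consumes an async iterator of agent events. A self-contained sketch of that consumption pattern, with a stubbed stream standing in for the real SDK call — the AgentEvent shape here is illustrative, not the SDK's actual event type:

```typescript
// The for-await consumption pattern from step 3, with a stubbed event
// stream standing in for the real network-backed SDK call.
type AgentEvent = { type: 'message' | 'tool'; content: string };

async function* fakeStream(): AsyncGenerator<AgentEvent> {
  // In the real SDK, events arrive incrementally as the agent works.
  yield { type: 'message', content: 'Reading app/auth/page.tsx' };
  yield { type: 'tool', content: 'edit_file' };
  yield { type: 'message', content: 'Added validation to the signup form' };
}

async function collectMessages(stream: AsyncIterable<AgentEvent>): Promise<string[]> {
  const messages: string[] = [];
  for await (const event of stream) {
    // Filter to user-visible messages; tool events would be logged elsewhere.
    if (event.type === 'message') messages.push(event.content);
  }
  return messages;
}
```

The same loop works unchanged whether events come from a local run or a cloud run — only the construction of the stream differs.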

How I could use this

  1. Wire a Cursor cloud agent into your daily GitHub Actions workflow: when the daily ai-news or visa-news scripts produce a markdown file, trigger a Cursor agent to auto-generate a matching social media thread draft and commit it to content/threads/ — zero manual effort to repurpose content across formats.
  2. Build a '/dashboard/code-review' route that lets TechPath AU users paste a GitHub repo URL and a job description, then spawns a Cursor cloud agent to audit their public code against the role's requirements — returning specific, actionable feedback like 'your auth.ts lacks input sanitisation, which is a red flag for this security-focused role'.
  3. Replace the current static diagram generation pipeline (fetch-diagrams.ts) with a Cursor SDK agent that not only generates Mermaid diagrams but also iterates on them: stream the agent's reasoning as it refines the diagram based on a rubric, then surface the agent's 'thinking steps' in the /learn/diagrams UI as an expandable 'how this was built' panel — turns a static asset into an educational artifact.

3. freestylefly/awesome-gpt-image-2

2,666 stars this week · various

A reverse-engineered library of 367 structured GPT-Image-2 prompts organised as composable schemas — treating image prompts like typed function signatures rather than free-form text.

Use case

The core problem is prompt fragility: a prompt that produced a great result once is nearly impossible to reproduce or parameterise for batch use. This repo solves it by decomposing successful outputs back into atomic components (subject, lighting, material, typography, layout hierarchy) so you can slot variables in and call them from code. Concrete example: instead of a 400-word prose prompt for an infographic, you call a structured template with { topic: 'visa subclass 482', style: 'editorial', palette: 'warm' } and get consistent, brandable output every run.

Why it's trending

OpenAI's gpt-image-1 API only became available to all paid API tiers in April 2025, so developers are in the first sprint of figuring out reliable production patterns for it. This repo landed at exactly the right moment — it's the first large-scale attempt to treat image prompts as versioned, reusable code assets rather than one-off incantations.

How to use it

  1. Browse docs/templates.md to find the template category closest to your use case (UI mockups, infoviz, poster/typography are the most production-ready).
  2. Copy the atomic schema for that category — it defines required fields like [SUBJECT], [LIGHTING], [COLOR_PALETTE], [LAYOUT].
  3. Wrap it in a TypeScript helper that interpolates your variables:
const buildPrompt = (vars: ImageVars) =>
  POSTER_TEMPLATE
    .replace('[SUBJECT]', vars.subject)
    .replace('[COLOR_PALETTE]', vars.palette)
    .replace('[LAYOUT]', vars.layout);

const image = await openai.images.generate({
  model: 'gpt-image-1',
  prompt: buildPrompt({ subject: 'Visa 485 Timeline', palette: 'warm parchment', layout: 'vertical editorial' }),
  size: '1024x1024'
});
  4. Pin the template version in your repo so a prompt update doesn't silently break generated outputs.
  5. For batch pipelines (e.g. auto-generating cover images for daily posts), log the template name + variable hash alongside the image URL so you can reproduce any output exactly.
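The template-name + variable-hash logging from step 5 might look like this — a sketch, where variableHash and the 12-character truncation are my choices, not something the repo ships:

```typescript
import { createHash } from 'crypto';

// Pin the template name and hash the interpolated variables, so any
// generated image can be traced back to the exact inputs that produced it.
interface ImageVars { subject: string; palette: string; layout: string; }

function variableHash(templateName: string, vars: ImageVars): string {
  // Sort keys so the hash is stable regardless of property order.
  const canonical = JSON.stringify(vars, Object.keys(vars).sort());
  return createHash('sha256')
    .update(`${templateName}:${canonical}`)
    .digest('hex')
    .slice(0, 12); // a short prefix is enough for log lookup
}
```

Log `{ template, hash: variableHash(template, vars), imageUrl }` per generation and any output becomes exactly reproducible.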

How I could use this

  1. Auto-generate branded Open Graph cover images for every githot/digest/AI-news post at build time — use the 'poster & typography' template category with your Eastern Ink palette tokens (--vermilion, --ink, --parchment) hard-wired into the prompt schema, so every post image looks like it came from the same designer rather than a random generation.
  2. Add a 'Career Card Generator' to the dashboard: user fills in their target role, current stack, and visa status, and you call the 'UI & interface' template category to produce a shareable 1080×1080 career summary card — same visual language as a LinkedIn banner but personalised. Store the image URL in Supabase against the user's profile and surface it on /dashboard/profile next to their ReadinessScore.
  3. Build a scripts/fetch-githot-images.ts pipeline (mirroring your existing fetch-diagrams.ts pattern) that reads each new githot entry's repo name + description and generates a thumbnail using the 'infoviz / chart' template — so the /githot listing page gets unique visual thumbnails instead of generic avatars, without any manual design work per entry.

4. theori-io/copy-fail-CVE-2026-31431

1,788 stars this week · Python

Proof-of-concept demonstrating CVE-2026-31431, a Linux kernel memory-copy vulnerability affecting Ubuntu 24.04, Amazon Linux 2023, RHEL 10.1, and SUSE 16 — relevant to anyone running cloud workloads on these distros.

Use case

This repo documents a kernel-level flaw in how certain copy operations are handled, sufficient to affect major production Linux distributions as of early 2026. The real value for most developers is the technical writeup (xint.io/blog/copy-fail-linux-distributions) explaining the root cause, affected kernel versions, and how to verify whether a system is patched. Concrete scenario: a DevOps engineer at a Sydney startup running Ubuntu 24.04 on AWS needs to know whether their kernel build is in the vulnerable range before the next patch window.

Why it's trending

Freshly disclosed CVE hitting four major enterprise Linux distributions simultaneously — Ubuntu LTS, Amazon Linux, RHEL, and SUSE — means every cloud-ops and platform-engineering team is checking their kernel versions right now. Cross-distro CVEs always spike GitHub traffic because they affect the widest possible audience.

How to use it

  1. Check your kernel version: uname -r — compare against the affected versions in the README table.
  2. Ubuntu: apt-get update && apt-get upgrade linux-image-$(uname -r) to pull the patched kernel.
  3. Amazon Linux 2023: dnf update kernel, then reboot.
  4. RHEL/SUSE: follow the vendor errata pages for CVE-2026-31431 — both have published patches.
  5. After rebooting, re-run uname -r and cross-reference the fixed-version range in the linked technical writeup to confirm you're patched.
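The version comparison in step 5 is easy to fumble by eye across four distros. A small helper that compares uname -r output against a fixed-version threshold — the threshold in the example is a placeholder; take the real fixed versions from the repo's README table:

```typescript
// '6.8.0-49-generic' -> [6, 8, 0, 49]; non-numeric suffixes are dropped.
function parseKernel(release: string): number[] {
  return release.split(/[.-]/).filter(p => /^\d+$/.test(p)).map(Number);
}

// True if `release` is at or above the `fixed` version, segment by segment.
function isAtLeast(release: string, fixed: string): boolean {
  const a = parseKernel(release), b = parseKernel(fixed);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0, y = b[i] ?? 0;
    if (x !== y) return x > y;
  }
  return true;
}

// Placeholder threshold — substitute the real fixed version per distro.
console.log(isAtLeast('6.8.0-49-generic', '6.8.0-48')); // true => patched
```

Wiring this into a CI check turns "I think we rebooted onto the new kernel" into an assertion.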

How I could use this

  1. Write a 'CVE Watch for AU Cloud Teams' post series — each week summarise the one kernel/container CVE most likely to affect AWS Sydney workloads, with the exact uname -r range and one-liner patch command. This fills a gap: most CVE content is US-centric and misses the AEST patch-window timing.
  2. Add a 'Stack Health' widget to the TechPath AU dashboard that checks whether a user's listed tech stack (e.g. 'Ubuntu 24.04 on AWS') appears in recently disclosed CVEs — surface it as a talking point for interviews: 'I track CVEs affecting my team's infra' is a strong signal for senior SRE/DevOps roles.
  3. Build a Claude-powered 'CVE Plain English' micro-tool: paste a CVE ID or NVD URL, get back a 3-sentence plain-English summary of impact, affected versions, and fix status — aimed at junior devs who see CVE numbers in standup but don't know how to quickly assess whether they're affected.

5. willchen96/mike

822 stars this week · TypeScript

A fully open-source AI legal platform (Next.js + Express + Supabase) that lets you upload documents and query them with LLMs — think a self-hostable NotebookLM for contracts and legal filings.

Use case

The core problem: legal documents (employment contracts, visa decisions, lease agreements) are dense, expensive to have a lawyer review, and time-consuming to parse manually. Mike gives you a structured pipeline to ingest PDFs/DOCXs, convert them to processable text via LibreOffice, store them in Supabase, and run multi-turn AI Q&A over them. Concrete example: an international grad on a 482 visa uploads their employment contract and asks 'does this require 6 months notice or 3?' — Mike retrieves the relevant clause and answers with the source passage.

Why it's trending

Document-AI repos are flooding GitHub right now, but nearly all are Python/LangChain demos or closed SaaS. Mike is rare: a production-grade TypeScript stack (same as the mainstream Next.js ecosystem) with real auth, real storage, and a one-shot Supabase migration — meaning developers can fork and ship a working product in hours, not days. The AGPL-3.0 license also protects against silent enterprise forks, which signals the maintainer is serious about it as a product.

How to use it

  1. Clone the repo, copy both .env.example files, and add your Supabase URL/key + an S3-compatible bucket (Cloudflare R2 has a generous free tier) + one model provider key (OpenAI or Anthropic).
  2. Run the one-shot migration in the Supabase SQL editor: backend/migrations/000_one_shot_schema.sql — this sets up all tables, RLS policies, and storage buckets in one paste.
  3. npm run dev --prefix backend starts the Express API on port 3001; npm run dev --prefix frontend starts Next.js on 3000.
  4. Upload a PDF through the UI — the backend uses LibreOffice (install locally: sudo apt install libreoffice) to convert DOCX to PDF, then chunks and embeds it into Supabase pgvector.
  5. Query the document via the chat interface — the backend retrieves the top-N chunks via cosine similarity, then sends them as context to your configured model.
// The core retrieval pattern Mike uses (adapt for your own routes):
const { data: chunks } = await supabase.rpc('match_documents', {
  query_embedding: embedding,
  match_threshold: 0.78,
  match_count: 5,
  document_id: docId
});
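Upstream of that match_documents call, documents must be chunked before embedding (step 4). A naive fixed-size chunker with overlap shows the shape of that step — sizes are illustrative, and Mike's actual chunking strategy may differ:

```typescript
// Naive fixed-size chunker with overlap — the step that feeds pgvector
// before `match_documents` has anything to retrieve. The overlap keeps
// clauses that straddle a boundary recoverable from at least one chunk.
// Chunk size and overlap values here are illustrative, not Mike's.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk is then embedded and inserted with its document_id, which is what the `document_id: docId` filter in the retrieval call scopes against.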

How I could use this

  1. Build a 'Visa Document Checker' feature on TechPath AU: let 482/485 applicants upload their visa grant letter or employment contract and ask natural-language questions like 'what work conditions apply to me?' or 'can I change employers?'. Fork Mike's document ingestion pipeline (LibreOffice → PDF → pgvector) and wire it into your existing Supabase instance — the migration is a single SQL file so it drops straight into your supabase/ migrations folder as 018_document_qa.sql.
  2. Add an 'Employment Contract Analyser' to the resume/job tools section: users paste or upload a job offer PDF and Mike's chunk-retrieval pattern surfaces red flags (probation periods, IP assignment clauses, restraint of trade). Differentiate from generic AI chat by pre-seeding system prompts with Australian employment law context (Fair Work Act minimums, NES entitlements) so answers are jurisdiction-specific — exactly the gap international grads fall into.
  3. Use Mike's architecture as a reference implementation for a 'Learning Path RAG' feature: embed your existing markdown content from content/posts/ and content/digest/ into pgvector, then expose a chat interface at /learn/ask that answers questions like 'what visa do I need to stay after my 485 expires?' with citations pointing back to your own articles. This turns your content moat into an interactive product differentiator and keeps users on-site instead of going to ChatGPT.

6. DanOps-1/Gpt-Agreement-Payment

811 stars this week · Python · adversarial-ml anti-fraud bug-bounty captcha-solver

End-to-end protocol replay toolkit for ChatGPT Plus/Team/Pro subscriptions, with a from-scratch hCaptcha visual solver and empirical anti-fraud research.

Use case


Why it's trending

How to use it

How I could use this


7. GENEXIS-AI/chromex

703 stars this week · TypeScript

A Chrome MV3 side-panel extension that pipes live browser context (current page, tabs, PDFs, screenshots) into Codex via a local native bridge — no credentials stored in the extension.

Use case

The core problem is that browser AI assistants (Arc, Copilot in Edge, etc.) either lock you into a specific model or require you to paste content manually. Chromex solves this by injecting the actual DOM + selected tabs into your own Codex instance automatically. Concrete example: you're researching 482 visa processing times across 6 IMMI tabs — instead of copy-pasting each, Chromex sends all selected tabs as context and you ask 'what's the current median processing time per stream?'

Why it's trending

OpenAI just repositioned Codex as an agentic coding/reasoning API (not just completions), and Chromex is one of the first MV3-compliant extensions that exposes Codex's full context window to live browser state. The native bridge design (credentials stay off the extension, live in a local process) also sidesteps Chrome Web Store policy friction that killed earlier AI extensions.

How to use it

  1. Clone the repo and run npm install && npm run build in the root — this produces the extension bundle in dist/.
  2. Load dist/ as an unpacked extension in chrome://extensions (Developer mode on).
  3. Start the local native bridge: node bridge/index.js — this is what holds your OpenAI/Codex API key and proxies requests from the extension.
  4. Open the Chrome side panel (toolbar icon) and configure your Codex model + endpoint in Settings.
  5. Navigate to any page, hit 'Use current page' in the panel, and start chatting — the extension sends the DOM text + any selected tabs as context via the bridge.
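If the bridge in step 3 speaks Chrome's standard native messaging protocol over stdio (an assumption — Chromex's actual transport may differ), every message is framed as a 4-byte little-endian length prefix followed by UTF-8 JSON. A minimal encoder/decoder sketch:

```typescript
// Chrome native-messaging framing: 4-byte little-endian length prefix,
// then the UTF-8 JSON payload. A bridge process reads and writes this
// on stdin/stdout; the extension side is handled by Chrome itself.
function encodeMessage(msg: unknown): Buffer {
  const payload = Buffer.from(JSON.stringify(msg), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32LE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

function decodeMessage(buf: Buffer): unknown {
  const len = buf.readUInt32LE(0);
  return JSON.parse(buf.subarray(4, 4 + len).toString('utf8'));
}
```

This framing is why the API key can stay in the bridge process: the extension only ever exchanges framed JSON with a local binary Chrome launches for it.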

How I could use this

  1. Build a 'TechPath Research Mode' bookmarklet: when Henry visits a job listing on Seek or LinkedIn, a Chromex-style side panel auto-extracts the role requirements and cross-references them against the user's stored resume in Supabase — surfacing gap analysis without leaving the job board.
  2. Visa tracker browser assistant: a stripped-down Chromex fork that monitors the user's IMMI account page and DOHA processing times page, summarises any delta since last visit, and pushes a digest to the existing visa-news content pipeline via a local webhook — turning manual monitoring into automated alerts.
  3. Interview prep context injector: when a user is on a company's LinkedIn/About page or Glassdoor reviews page, the side panel sends that page content to the existing /api/interview route as extra context, so Claude can generate company-specific STAR questions rather than generic ones — one API route change + a small Chrome extension.

8. b-nnett/codex-plusplus

589 stars this week · TypeScript

codex-plusplus is a BetterDiscord-style tweak loader for OpenAI's Codex desktop app — it patches the Electron asar bundle once at install time, then loads custom ESM modules at runtime, so you can change tweaks without re-patching the app bundle.

Use case

The Codex desktop app ships with no extension API, so any UI annoyance or missing workflow feature requires waiting on OpenAI. codex-plusplus solves this by patching app.asar once, then loading tweaks from your home directory at runtime — meaning you can ship a tweak that, for example, auto-prepends your system prompt to every session, adds keyboard shortcuts Codex doesn't have, or redirects the 'New Task' button to a custom template picker, all without rebuilding or re-patching the app.

Why it's trending

OpenAI shipped the Codex desktop app in late April 2026 and it landed in the hands of a large developer audience almost immediately — codex-plusplus appeared within days of that launch, riding the same 'I want to fix this thing right now' energy that made BetterDiscord and Tampermonkey popular. 589 stars in a week on an alpha repo means the community found it exactly when they needed it.

How to use it

  1. Install the patcher: brew tap b-nnett/codex-plusplus https://github.com/b-nnett/codex-plusplus && brew install codexplusplus && codexplusplus install — this backs up your Codex.app, patches app.asar, and re-signs the binary.
  2. Create a tweak directory: mkdir -p ~/.codex-plusplus/tweaks/my-tweak.
  3. Write a manifest + ESM module:
// ~/.codex-plusplus/tweaks/my-tweak/index.ts
export const manifest = { id: 'my-tweak', name: 'My Tweak', version: '0.1.0' };
export function start() { console.log('tweak loaded'); }
export function stop() {}
  4. Reload Codex — the runtime discovers the tweak and shows it in the injected 'Tweaks' settings tab.
  5. Iterate: edit the file, hit reload in the Tweaks tab — no re-patching needed.
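The start/stop lifecycle in step 3 implies a small registry on the loader side. A hypothetical sketch of that pattern — TweakRegistry and its hot-reload behaviour are my illustration of the idea, not codex-plusplus internals:

```typescript
// Minimal start/stop tweak lifecycle. Discovery of tweak directories
// under ~/.codex-plusplus/tweaks is omitted; a Tweak here is just an
// object matching the manifest + start/stop shape from step 3.
interface Tweak {
  manifest: { id: string; name: string; version: string };
  start: () => void;
  stop: () => void;
}

class TweakRegistry {
  private active = new Map<string, Tweak>();

  load(tweak: Tweak): void {
    const { id } = tweak.manifest;
    this.active.get(id)?.stop(); // hot-reload: stop the old instance first
    tweak.start();
    this.active.set(id, tweak);
  }

  unload(id: string): void {
    this.active.get(id)?.stop();
    this.active.delete(id);
  }

  list(): string[] {
    return [...this.active.keys()];
  }
}
```

The stop-before-start ordering is what makes the "edit the file, hit reload" loop in step 5 safe: a tweak never has two live instances.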

How I could use this

  1. Write a technical deep-dive post on the Electron asar patching technique (backup → patch → SHA-256 recompute → re-sign) — this is the same mechanism used by BetterDiscord and is genuinely non-obvious; it would rank well for 'how to patch electron app asar' and signals strong systems knowledge to hiring managers reading your blog.
  2. Build a codex-plusplus tweak that injects a 'TechPath AU mode' system prompt into every Codex session — when activated it prepends context like 'You are helping an international IT graduate on a 485 visa in Australia; default all job advice to AU market norms, reference ANZSCO codes, and flag any US-specific advice' — then blog the tweak as a live demo of your platform's value proposition.
  3. Use the tweak architecture as inspiration for a Claude-powered browser extension pattern on your site: the start/stop ESM lifecycle with an in-app manifest UI is a clean pattern for letting logged-in users toggle AI features (auto-summarise job descriptions, highlight visa-relevant keywords in job ads) without you shipping new pages — document the pattern and open-source a minimal version as a GitHub repo to drive backlinks.

9. denuitt1/mhr-cfw

558 stars this week · Python

A domain-fronting relay that routes traffic through GAS (Google Apps Script) and forwards it to Cloudflare Workers. Designed to bypass DPI.

Use case


Why it's trending

How to use it

How I could use this


10. Fokkyp/SoftwareCopyright-Skill

496 stars this week · Python

An OpenAI Codex skill that reads your local source code and generates the full document pack needed to file a Chinese software copyright (软著) registration — for free, locally, without handing code to a third-party agency.

Use case

Filing a 软著 (software copyright) in China requires a specific set of documents: an application form with exact field values, an operations manual written for reviewers, and a code exhibit following the '30 pages from the front, 30 from the back' extraction rule. Most developers outsource this to paid agencies (¥500–2000) who just do the document formatting. This skill automates exactly that formatting step — you run it against your real project, confirm key decisions interactively, and get submission-ready .docx and .txt files without touching any external service.

Why it's trending

Chinese indie developers and small teams are increasingly filing software copyrights for SaaS and AI tools as IP protection, and the pain of document prep is a known tax on solo founders. The repo gained traction because it directly undercuts a cottage industry of paid agents by open-sourcing the entire workflow as a Codex skill — timing aligns with the broader 'AI coding agent as automation layer' moment.

How to use it

  1. Copy the software-copyright-materials/ directory into your Codex skills folder (e.g. ~/.codex/skills/software-copyright-materials/).
  2. Open your project in Codex and invoke the skill: describe your software name, version, copyright holder, and development environment when prompted.
  3. The skill reads your source files, extracts the first 30 and last 30 pages of code per the 鉴别材料 (code exhibit) rule, and drafts an operations manual based on actual UI/feature analysis — confirm each checkpoint.
  4. Review the Markdown drafts it pauses on (business description, field values, code selection) and correct anything domain-specific before it finalises.
  5. Output lands in 软件著作权申请资料/正式资料/ — three files: 申请表信息.txt (application form fields), *_操作手册.docx (operations manual), and *_代码材料.docx (code exhibit) — ready to upload to the CNIPA online portal.
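The '30 pages from the front, 30 from the back' extraction in step 3 reduces to a simple slice. A sketch assuming 50 lines per page — the real page size comes from CNIPA formatting rules, which the skill handles for you:

```typescript
// Code-exhibit extraction per the 鉴别材料 rule: first 30 and last 30
// pages of source. The 50-lines-per-page figure is an assumption for
// illustration; CNIPA's formatting requirements fix the real value.
function codeExhibit(lines: string[], pages = 30, linesPerPage = 50): string[] {
  const n = pages * linesPerPage;
  // Short projects: the whole source fits, so submit everything.
  if (lines.length <= 2 * n) return lines;
  return [...lines.slice(0, n), ...lines.slice(-n)];
}
```

Everything else the agencies charge for — form field values, the operations manual — is document generation layered on top of this kind of mechanical rule.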

How I could use this

  1. TechPath AU already helps international IT grads build careers — add a '软著 filing checklist' page under /learn for users who built side projects in China and want to IP-protect them before migrating; integrate this skill's field schema as a guided form that exports a pre-filled 申请表信息.txt.
  2. Build a 'portfolio artifact generator' career tool: given a GitHub repo URL, use Claude to produce the same structured documentation this skill generates (ops manual, feature summary, code exhibit) but formatted for an Australian skills assessment (ACS RPL or VETASSESS) instead of CNIPA — same document-automation pattern, different output schema, directly relevant to your 485/482 visa audience.
  3. Use the skill's interactive confirmation loop as a UX pattern for your AI features: instead of one-shot Claude responses, implement a multi-step 'confirm before finalise' flow in your resume analyser or cover letter tool — show the user a draft at each stage (job match summary → tailored bullet points → final letter) and let them correct before Claude finalises, reducing hallucination risk and increasing trust.
Go build something