Gradland

GitHub Hot — 29 March 2026

29 March 2026 · 22 min read · GitHub · Open Source · Tools

Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.


1. slavingia/skills

5,365 stars this week · various

A Claude Code plugin that installs 10 business-strategy slash commands based on Sahil Lavingia's Minimalist Entrepreneur framework, turning entrepreneurship advice into actionable AI-guided workflows.

Use case

Developers building side projects waste time second-guessing scope, pricing, and go-to-market strategy — this gives Claude Code structured, opinionated commands to run against your actual project context. For example, you run /mvp inside your blog repo and Claude analyzes your codebase + goals to tell you what to cut, or /pricing to get a defensible pricing rationale for a paid newsletter tier based on your audience size.

Why it's trending

Claude Code's plugin/skills marketplace just launched and this is one of the first high-profile real-world skill packs from a recognizable founder (Gumroad's CEO), making it a reference implementation everyone in the indie hacker space is studying and forking. It's also riding the wave of 'AI as business co-founder' tooling that spiked this week.

How to use it

  1. Open Claude Code in your project directory.
  2. Run /plugin marketplace add slavingia/skills then /plugin install minimalist-entrepreneur — no npm install, no config files.
  3. Run /validate-idea with a one-line description of your feature idea and Claude will interrogate it against the book's criteria (is the problem real, is the community reachable, will they pay).
  4. If the idea passes, run /mvp inside your actual repo — Claude reads your existing code structure and outputs a scoped feature list ruthlessly cut to the smallest shippable thing.
  5. Before launch, run /minimalist-review on any major decision (adding a feature, changing pricing, hiring) to get a gut-check against the minimalist principles.

How I could use this

  1. Run /processize on Henry's blog itself — treat 'AI-powered blog' as the product and have Claude design the manual version first (e.g., Henry manually curates and emails 5 readers a weekly AI-generated summary of his posts before building the automated newsletter feature), surfacing real user value before writing any Supabase edge functions.
  2. Build a lightweight '/validate-my-post' custom skill for the blog: fork this repo, add a skill that takes a draft blog post and runs it through an audience-fit check — does the topic match Henry's stated niche, does it have a clear call to action, is there a community that would share it — essentially a pre-publish editorial checklist powered by Claude Code running against the post's markdown file.
  3. Use /first-customers as a prompt template to generate a structured outreach strategy for Henry's AI tools (resume matcher, cover letter generator) — pipe the output into a Supabase table that tracks which communities to post in, what angle to use per community, and which Reddit/Discord/HN threads to target, creating a repeatable launch playbook stored alongside the codebase.

2. larksuite/cli

2,900 stars this week · Go

A CLI tool with 200+ commands that lets both humans and AI agents programmatically control Lark/Feishu (messaging, docs, calendar, sheets) — essentially an MCP-style interface for the Lark ecosystem.

Use case

When you want an AI agent to autonomously send messages, create docs, update spreadsheets, or schedule meetings in Lark without manually wiring up OAuth flows and API calls. For example: a CI/CD pipeline that posts deployment summaries to a Lark channel, or a Claude/GPT agent that reads a Lark Base (their Airtable equivalent) and schedules follow-up tasks — all via shell commands with zero custom API integration code.

Why it's trending

Agent-native tooling is the hottest category right now — this dropped at exactly the moment teams are building LLM agents that need to 'do things' in workplace tools, and it ships 19 pre-built Agent Skills that plug directly into tool-calling frameworks like Claude's tool use or OpenAI function calling.

How to use it

  1. Install: npm install -g @larksuite/cli
  2. Authenticate interactively (creates a Lark app and stores credentials in OS keychain): lark login
  3. Send a test message to verify: lark message send --chat-id <CHAT_ID> --text 'Hello from CLI'
  4. For AI agent use, point your agent at the Skills directory — each skill is a structured JSON schema describing inputs/outputs. Load a skill like: lark skill show messenger-send-message to get the exact tool definition to paste into your agent's tool config.
  5. Chain commands in scripts: lark sheet read --spreadsheet-token <TOKEN> --sheet-id <ID> | jq '.rows' | your-ai-script.ts

How I could use this

  1. Auto-publish blog post notifications: when Henry merges a new post to main, a GitHub Action runs lark message send to post a formatted card to his team/friends Lark group with the post title, summary, and URL — no webhook config required.
  2. AI writing assistant loop: build a Next.js API route that calls a Lark Base (used as a structured content database) via lark base record list, feeds the rows into GPT to generate a weekly newsletter draft, then uses lark doc create to drop the draft into a Lark Doc for review — entire editorial pipeline automated.
  3. Agent skill integration for the blog's AI chat: expose the 19 pre-built Lark skills as tools in a Vercel AI SDK tool-calling setup, so Henry's blog AI assistant can answer questions like 'schedule a 30-min intro call' by actually calling lark calendar event create on his behalf — a live demo of agentic capability directly on the portfolio site.
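The notification card in idea 1 reduces to assembling a message string from post metadata and shelling out to the CLI. A minimal sketch — the helper name, the message layout, and the announce.js wiring are all assumptions, not part of the lark CLI itself:

```typescript
// Sketch of the GitHub Action step from idea 1: build the text for
// `lark message send` from post metadata. The message layout is
// illustrative — see Lark's docs for their interactive-card schema.
interface PostMeta {
  title: string;
  summary: string;
  url: string;
}

export function buildPostAnnouncement(post: PostMeta): string {
  // Truncate long summaries so the chat message stays scannable.
  const summary =
    post.summary.length > 140 ? post.summary.slice(0, 137) + "..." : post.summary;
  return `New post: ${post.title}\n${summary}\n${post.url}`;
}

// The Action step would then run something like:
//   lark message send --chat-id "$CHAT_ID" --text "$(node announce.js)"
```

Keeping the formatting in a pure function makes it trivially unit-testable before any Lark credentials enter the picture.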

3. HKUDS/OpenSpace

2,432 stars this week · Python

OpenSpace is a shared memory and skill-evolution layer that sits on top of existing AI agents (Claude Code, Codex, Cursor, etc.) so they learn from past runs instead of re-reasoning from scratch every time.

Use case

Every time you run an AI coding agent on a task, it burns tokens rediscovering solutions it (or another agent) already found. OpenSpace intercepts agent runs, stores successful task→solution patterns in a shared vector store, and injects relevant past experiences as compressed context on future runs — cutting token usage by ~46%. Concrete example: your Claude Code agent fixes a Supabase RLS policy bug, OpenSpace logs the pattern; next time Codex hits the same RLS issue, it retrieves the cached solution instead of spending 3k tokens exploring.

Why it's trending

This is spiking right now because Claude Code and OpenAI Codex CLI both launched or went viral in the last few weeks, and developers are immediately hitting the token cost wall when running these agents repeatedly — OpenSpace offers a drop-in cost fix with a single CLI command.

How to use it

  1. Install: pip install openspace-agent (Python 3.12+ required), then run openspace init to scaffold a local experience store (SQLite + embeddings by default).
  2. Wrap your existing agent invocation: instead of claude-code --query 'fix auth bug', run openspace --query 'fix auth bug' --agent claude-code — OpenSpace retrieves relevant past experiences and prepends them as compressed context.
  3. After the task completes, OpenSpace automatically indexes the successful solution: openspace store --session <session-id> (or auto-stores if configured).
  4. To share your experience pool with a team or across machines, point OPENSPACE_STORE_URL to a shared Postgres/Supabase instance in your .env — all agents in your org pull from the same knowledge base.
  5. Inspect what was learned: openspace list --top 10 shows the highest-reuse stored patterns with token-savings stats.
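At its core, retrieving "relevant past experiences" is nearest-neighbour search over stored task embeddings. A toy sketch of that idea — this is not OpenSpace's actual internals, which use a proper vector store, just the concept in miniature:

```typescript
// Toy experience retrieval: given an embedding of the new task,
// return the stored pattern whose embedding is most similar by
// cosine similarity. Real systems index these in a vector store.
interface Experience {
  task: string;
  solution: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

export function retrieveBest(query: number[], store: Experience[]): Experience | null {
  let best: Experience | null = null;
  let bestScore = -Infinity;
  for (const exp of store) {
    const score = cosine(query, exp.embedding);
    if (score > bestScore) {
      bestScore = score;
      best = exp;
    }
  }
  return best;
}
```

The retrieved solution is then prepended to the agent's context, which is where the token savings come from.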

How I could use this

  1. Wire OpenSpace into your blog's AI writing assistant: every time the agent helps you draft or edit a post, store the successful prompt→output pattern tagged with topic (e.g., 'TypeScript', 'career'). After 20 posts, your agent will auto-retrieve your personal writing style and structural preferences without you re-prompting them — essentially a self-building style guide.
  2. Build a career-tools agent loop where OpenSpace accumulates successful resume-tailoring patterns per job category. Run Claude Code or Codex against job descriptions, let OpenSpace learn which keyword substitutions and reframings got past ATS filters, then surface the top-3 reusable transformations as a 'what worked before' panel in your cover letter UI — no extra LLM call needed.
  3. Use OpenSpace's shared experience store (backed by Supabase pgvector) as the memory layer for a public 'AI debugging companion' feature on your blog: readers paste their Next.js or Supabase errors, your agent solves them, and OpenSpace indexes every solution. Over time the agent gets faster and cheaper on common errors, and you can expose a leaderboard of 'most-reused fixes' as a genuinely useful SEO content page.

4. magnum6actual/flipoff

2,398 stars this week · JavaScript

A zero-dependency, single-file web app that renders a pixel-perfect split-flap airport board animation in any browser — free alternative to $3,500 physical hardware.

Use case

Anyone who wants a retro aesthetic display (office ambiance, conference lobby, event signage, portfolio hero section) without buying proprietary hardware or paying for SaaS. Concrete example: you're presenting at a meetup and want your talk title to dramatically flip into view on a big screen — clone this, swap the quotes array with your content, and it's done in 10 minutes.

Why it's trending

The 'retro hardware aesthetic on cheap screens' trend is peaking alongside the vibe-coding/no-framework movement — this scratches both itches at once with a single HTML file and zero npm. It also went viral on Twitter/X from dev accounts sharing office TV setups, which spiked the star count this week.

How to use it

  1. Clone and open: git clone https://github.com/magnum6actual/flipoff && open flipoff/index.html — it works immediately with no build step.
  2. Edit the messages array inside index.html to replace the default quotes with your own content (blog post titles, status updates, etc.).
  3. For a live data feed, add a setInterval fetch call that hits your API and calls the internal showMessage(text) function every N seconds — the file is self-contained so just inject a <script> block at the bottom.
  4. Deploy as a static asset to Vercel or Netlify with vercel --prod — no config needed since it's a single HTML file.
  5. Point a Raspberry Pi browser in kiosk mode at the deployed URL for a permanent always-on display.
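The polling loop in step 3 splits cleanly into a pure message rotator plus a timer. A sketch — showMessage is the internal function the project exposes per its README, while the /api/headlines endpoint is a stand-in for whatever feed you wire up:

```typescript
// Pure rotator: cycles through the fetched headlines, one per tick.
// Keeping it pure makes the cycling logic testable without a browser.
export function makeRotator(messages: string[]): () => string {
  let i = -1;
  return () => {
    i = (i + 1) % messages.length;
    return messages[i];
  };
}

// Wiring sketch for the single-file app (assumes showMessage is in
// scope and /api/headlines returns a JSON array of strings):
//
//   const res = await fetch("/api/headlines");
//   const next = makeRotator(await res.json());
//   setInterval(() => showMessage(next()), 30_000);
```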

How I could use this

  1. Use it as the hero section of your blog's homepage: embed the FlipOff canvas in a Next.js <iframe> or port the core CSS/JS animation logic into a React component, then feed it your 5 most recent blog post titles fetched from Supabase on page load — gives an instant 'arrivals board' feel that makes the site memorable and showcases your posts dynamically.
  2. Build a 'career status board' page on your portfolio site that pulls real-time data from a Supabase table (current role, availability for freelance, tech stack of the month) and displays it as a flip board — you update one row in Supabase, the display auto-refreshes via a Supabase Realtime subscription, and recruiters see a living résumé rather than a static PDF.
  3. Wire it to your AI blog pipeline: after your AI writing assistant generates a new post draft, trigger a Supabase Edge Function that pushes the post title + a one-line AI-generated teaser into a pending_posts table, then have the FlipOff embed on your 'what's coming next' page poll that table every 30 seconds — creates a public 'content pipeline ticker' that builds anticipation and demonstrates your Supabase + AI workflow end-to-end.

5. elder-plinius/G0DM0D3

1,977 stars this week · TypeScript

A single-file, open-source multi-model chat interface with red-teaming tools, parallel model racing, and input perturbation — essentially a power-user's ChatGPT that runs 55+ models simultaneously via OpenRouter.

Use case

Developers and researchers who need to compare LLM outputs across models without paying for 10 separate subscriptions or building their own orchestration layer. Concrete example: you're writing a blog post about AI model differences — instead of copy-pasting the same prompt into Claude, GPT-5, and Gemini separately, G0DM0D3's ULTRAPLINIAN engine runs all three in parallel and scores the outputs, giving you a defensible comparison in one session.

Why it's trending

GPT-5 just dropped and the AI community is immediately stress-testing it against Claude 4 and Gemini 2.5 — a multi-model parallel evaluator is exactly what people need right now. The 'jailbreak/red-team' framing plus the Plinius brand (known for prompt injection research) is also driving curiosity from the security research crowd.

How to use it

  1. Clone the repo: git clone https://github.com/elder-plinius/G0DM0D3 && cd G0DM0D3
  2. Get a free OpenRouter API key at openrouter.ai — this single key gives you access to all 55+ models.
  3. Open index.html directly in your browser (no server needed — it's a single-file app). Paste your OpenRouter key in the settings panel; it stays in localStorage, never sent anywhere.
  4. Try GODMODE CLASSIC first: type a prompt and watch 5 curated model+prompt combos race in parallel. Note which models nail your use case.
  5. For blog content research, switch to ULTRAPLINIAN Tier 1 (10 models), ask a contested question, and use the composite scores as a citation-worthy data point in your post.

How I could use this

  1. Build a 'Model Showdown' recurring blog series where you use ULTRAPLINIAN to benchmark 10+ models on the same coding or writing task each week — embed the raw scores as a table and let readers vote on which output they prefer. The differential data is genuinely novel content no one else is publishing.
  2. Feed your resume and a job description into GODMODE CLASSIC's 5-model parallel mode and have each model score the ATS match and suggest specific edits — then diff the suggestions across models to find consensus improvements vs. model-specific quirks. Way more signal than a single-model resume checker.
  3. Integrate the Parseltongue input perturbation concept into your blog's AI writing assistant: before sending a prompt to your Supabase Edge Function, apply light lexical substitution (swap synonyms, reorder clauses) to get more diverse draft variations from a single model call — cheaper than calling 5 models but breaks prompt-response monotony.
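The lexical perturbation in idea 3 can be as simple as a seeded synonym swap applied before the prompt leaves the client. A sketch — the synonym table and function are hand-rolled illustrations, not code from the actual Parseltongue implementation:

```typescript
// Light lexical perturbation: swap known words for synonyms using a
// deterministic seed, so one base prompt yields reproducible variants.
const SYNONYMS: Record<string, string[]> = {
  write: ["draft", "compose"],
  short: ["brief", "concise"],
  post: ["article", "piece"],
};

export function perturb(prompt: string, seed: number): string {
  return prompt
    .split(/\s+/)
    .map((word, idx) => {
      const alts = SYNONYMS[word.toLowerCase()];
      if (!alts) return word;
      // Pick deterministically so the same seed gives the same variant.
      return alts[(seed + idx) % alts.length];
    })
    .join(" ");
}
```

Generating three variants with seeds 0–2 and sending each to the same model gives cheap output diversity from a single provider.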

6. alvinunreal/awesome-opensource-ai

1,919 stars this week · various · topics: agents, ai, artificial-intelligence, awesome

A curated, opinionated index of genuinely open-source AI projects (not just 'available weights' but truly open), covering models, RAG tools, agents, MLOps, and infra — saving you hours of vetting licensing and availability.

Use case

When building an AI-powered blog, you constantly hit the question: 'Can I self-host this without a licensing headache?' This list cuts through the noise by pre-filtering for truly open-source projects. For example, instead of defaulting to OpenAI, Henry can find open-weight LLMs, open RAG pipelines, and open embedding models he can run on Supabase + a VPS without vendor lock-in or API cost blow-ups.

Why it's trending

The 'open-source AI' label has become marketing noise — Meta's Llama, Mistral, and others use non-OSI licenses — and developers are increasingly burned by bait-and-switch licensing. This repo hit 1,900+ stars this week because it draws a hard line on what 'truly open' means, filling a real gap as more devs want self-hostable AI stacks.

How to use it

  1. Browse the repo by category (Models, RAG, Agents, MLOps) to find alternatives to whatever paid API you're currently using — e.g., swap OpenAI embeddings for nomic-embed-text via Ollama.
  2. For each candidate, check the linked license directly — this list only includes OSI-approved or equivalent, but always verify for your use case.
  3. Cross-reference a model or tool you find here against Hugging Face or the project's GitHub to confirm active maintenance (look for commits in the last 90 days).
  4. Spin up a local test using Ollama for LLMs or LangChain/LlamaIndex for RAG:
# Example: run an open model locally via Ollama
ollama pull mistral
curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Summarize this blog post: ..."}'
  5. Once validated locally, wire it into your Next.js API route using the OpenAI-compatible Ollama endpoint so you can swap providers with zero code changes.
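The zero-code-change provider swap works because Ollama serves an OpenAI-compatible API under /v1. A sketch of what that looks like in practice — the config shape and helper are hypothetical, but the endpoint path and request body follow the OpenAI chat-completions format that Ollama mirrors:

```typescript
// Build one OpenAI-style chat request that works against either
// provider — switching to Ollama is just a different base URL.
interface ChatConfig {
  baseUrl: string; // e.g. "http://localhost:11434/v1" for Ollama
  apiKey: string;  // Ollama ignores the key, but clients expect one
  model: string;   // e.g. "mistral" locally
}

export function chatRequest(cfg: ChatConfig, prompt: string) {
  return {
    url: `${cfg.baseUrl}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${cfg.apiKey}`,
      },
      body: JSON.stringify({
        model: cfg.model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```

In the API route you'd pass the result straight to fetch(req.url, req.init), and swapping providers becomes a one-line env change.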

How I could use this

  1. Build a 'Recommended Reading' sidebar on each blog post using a self-hosted open embedding model (e.g., nomic-embed-text via Ollama) + Supabase pgvector for semantic similarity search — zero OpenAI API costs, full data ownership, and a concrete blog post series: 'How I built RAG on my blog for $0/month.'
  2. Create an open-source stack career tool: use an open LLM from this list (e.g., Mistral via Ollama) to power a resume-to-job-description gap analyzer that runs entirely in Henry's own infra — pitch it as 'privacy-first, no data leaves your machine,' which is a strong differentiator vs. ChatGPT-based tools.
  3. Add an 'AI Newsletter Digest' feature to the blog that weekly ingests RSS feeds from AI sources, runs summarization through a self-hosted open LLM, and auto-drafts a post — use this repo as the source list for which open tools to feature, making the newsletter itself a living proof-of-concept of the open-source AI stack.

7. opa334/darksword-kexploit

1,034 stars this week · Objective-C

A reimplemented iOS kernel exploit (CVE-based, affects iOS 15–26.0.1) written in Objective-C, enabling kernel-level privilege escalation on unpatched Apple devices — irrelevant to Henry's blog stack but trending due to security community attention.

Use case

This repo is strictly in the iOS security research / jailbreak community space. It solves the problem of needing a clean, readable Objective-C reference implementation of the DarkSword kernel exploit for researchers studying iOS kernel internals, not something a Next.js/Supabase blog developer would ever integrate. A concrete scenario: a security researcher wants to understand the exploit chain on an EOL iPhone 8 running iOS 15.x without parsing the original obfuscated C code.

Why it's trending

It's trending because a kernel exploit affecting iOS all the way up to 26.0.1 (a beta) is a rare, high-impact disclosure, and opa334 is a well-known jailbreak developer (creator of TrollStore/Dopamine), giving this repo immediate credibility and attention in the iOS sec community.

How to use it

  1. This is low-level iOS kernel exploit code — do not run it on a device you care about or on any production hardware.
  2. Clone the repo and open it in Xcode on a macOS machine with the iOS SDK installed.
  3. Study the Objective-C reimplementation alongside the original C source at htimesnine/DarkSword-RCE to understand the exploit primitive.
  4. If doing legitimate research, run only on a jailbreak-research device (e.g., an EOL iPhone with iOS 15.x) in a controlled environment.
  5. Note: offsets are hardcoded for iOS 15.x only — porting to other versions requires kernel offset research via tools like iometa or kernelcache diffing.

How I could use this

  1. Write a technical blog post titled 'What iOS Kernel Exploits Teach Us About Memory Safety' — use this repo as a case study to explain kernel privilege escalation concepts to a developer audience without needing to endorse or run the exploit. It's a high-traffic SEO topic right now given the disclosure timing.
  2. This has zero relevant application to resume/career tooling — do not force a connection. Instead, use the trending topic as content marketing: write a 'security news digest' AI feature for your blog that summarizes weekly CVEs and exploit disclosures (pull from NVD API + GitHub trending) and auto-generates a plain-English explainer post via GPT-4.
  3. Build an AI-powered 'threat relevance scorer' widget for your blog that ingests GitHub trending repos, detects security-related ones using keyword/topic classification, and flags whether they're relevant to web developers (e.g., Next.js XSS) vs. out-of-scope (e.g., iOS kernel exploits) — useful for a dev audience that wants security awareness without noise.
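The relevance scorer in idea 3 is, at its core, keyword classification. A toy sketch — the keyword lists, categories, and tie-breaking rule are all assumptions for illustration, not a real security taxonomy:

```typescript
// Classify a repo description as web-dev-relevant or out-of-scope
// security content via naive substring matching. A production version
// would use topic labels or an LLM classifier instead.
const WEB_KEYWORDS = ["xss", "csrf", "next.js", "npm", "cors", "jwt"];
const OUT_OF_SCOPE = ["kernel", "jailbreak", "firmware", "baseband"];

export function classifyRepo(
  description: string
): "web-relevant" | "out-of-scope" | "unknown" {
  const text = description.toLowerCase();
  const webHits = WEB_KEYWORDS.filter((k) => text.includes(k)).length;
  const oosHits = OUT_OF_SCOPE.filter((k) => text.includes(k)).length;
  if (webHits === 0 && oosHits === 0) return "unknown";
  // Ties go to web-relevant so borderline repos surface for review.
  return webHits >= oosHits ? "web-relevant" : "out-of-scope";
}
```

Run over a trending feed, this filters a repo like darksword-kexploit into the out-of-scope bucket while letting an XSS sanitizer through.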

8. nashsu/opencli-rs

979 stars this week · Rust

Opencli-rs is a blazing-fast, memory-safe command-line tool that fetches information from any website with a single command. It covers Twitter/X, Reddit, YouTube, HackerNews, Bilibili, Zhihu, Xiaohongshu, and 55+ other sites, and can also control Electron desktop apps and integrate local CLI tools (gh, docker, kubectl).

Use case

When you want a script or AI agent to pull structured data — a HackerNews front page, a Reddit thread, a YouTube listing — with one shell command instead of writing a custom scraper per site. Because the same interface also drives Electron desktop apps and wraps local tools like gh, docker, and kubectl, a single CLI becomes the glue layer for an agent's entire toolbelt.

Why it's trending

How to use it

How I could use this


9. facebookresearch/tribev2

878 stars this week · Jupyter Notebook

TRIBE v2 is Meta's multimodal foundation model that predicts fMRI brain activity from video, audio, and text — letting you do neuroscience experiments in software without a scanner.

Use case

Researchers and developers can now simulate how the human brain cortex responds to any media content without running expensive fMRI studies. For example, you could feed a product demo video into the model and get a heatmap of predicted cortical activation across ~20k brain surface vertices — useful for understanding which sensory modalities are driving engagement, or for studying AI-brain alignment without human subjects.

Why it's trending

This dropped alongside Meta's broader multimodal push (LLaMA 3.2, V-JEPA2) and is one of the first publicly available brain encoding models that handles video+audio+text simultaneously at this scale — the HuggingFace weights release this week is what triggered the star spike.

How to use it

  1. Install dependencies: pip install tribev2, then grab weights via TribeModel.from_pretrained('facebook/tribev2', cache_folder='./cache') — weights download automatically from HuggingFace.
  2. Prepare a stimulus: point it at any .mp4 file — df = model.get_events_dataframe(video_path='clip.mp4') — this auto-extracts word-level text timings, audio frames, and video segments into a single events DataFrame.
  3. Run prediction: preds, segments = model.predict(events=df) — output shape is (n_timesteps, ~20480) vertices on the fsaverage5 cortical mesh.
  4. Visualize: use nilearn or nibabel to project preds onto a 3D brain surface — plotting.plot_surf_stat_map(fsaverage['infl_left'], preds[t]) gives you a frame-by-frame cortical activation map.
  5. Experiment in Colab first: the official tribe_demo.ipynb notebook on their GitHub runs end-to-end on a free T4 GPU — start there before integrating into any app.

How I could use this

  1. Build a 'Brain Response Analyzer' blog post series where you feed your own blog article (converted to TTS audio) into TRIBE v2 and visualize which sections light up language vs. auditory cortex — a genuinely novel demo that differentiates your technical writing from generic AI posts.
  2. Create a 'Content Engagement Predictor' side tool: take a 60-second video pitch (e.g. a portfolio intro video) and compare the cortical activation profile against a known high-engagement baseline clip — surface this as a Next.js API route that returns a 'brain engagement score' as a fun, science-backed portfolio differentiator.
  3. Build an AI feature for your blog where readers can submit a YouTube URL, your backend runs TRIBE v2 inference on a 30-second clip, and returns a cortical surface heatmap image (rendered server-side with nilearn) showing predicted visual vs. language vs. audio cortex activation — stored in Supabase and displayed as an interactive blog annotation.

10. jxnxts/mcp-brasil

869 stars this week · Python · topics: ai-agents, apis-publicas, brazil, claude

A plug-and-play MCP server that connects AI agents (Claude, GPT, Copilot) to 326 tools across 41 Brazilian public APIs — no API keys required for 38 of them.

Use case

Brazilian developers building AI assistants constantly hit a wall: LLMs have no live access to government data like CNPJ lookups, congressional voting records, public procurement contracts, or environmental licensing. This repo solves that by wrapping BrasilAPI, Transparência Pública, TSE, IBGE, CNJ and 36 other sources into a single MCP server. Concrete example: ask Claude 'Show me all public contracts awarded to company X in 2024 plus any active lawsuits against them' — and it cross-references Compras.gov.br + DataJud in one shot.

Why it's trending

MCP (Model Context Protocol) adoption exploded in Q1 2025 as the de-facto standard for giving LLMs tool access, and Brazilian devs have been underserved — this is the first comprehensive MCP server covering the entire Brazilian government data ecosystem, dropping at exactly the right moment.

How to use it

  1. Install with pip install mcp-brasil or uv add mcp-brasil — no mandatory API keys needed to start.
  2. Add the server to Claude Desktop's config at ~/Library/Application Support/Claude/claude_desktop_config.json using the JSON snippet from the README. Restart Claude Desktop.
  3. Test with a free API first: ask Claude 'Busque informações do CNPJ 00.000.000/0001-91' ("Look up information for CNPJ 00.000.000/0001-91") — it will call BrasilAPI under the hood and return structured company data.
  4. For cross-referencing, use the built-in planejar_consulta tool: prompt Claude with 'Crie um plano para buscar gastos, votações e proposições do deputado X' ("Create a plan to fetch spending, votes, and bills for representative X") and it will orchestrate multiple API calls in parallel via executar_lote.
  5. To integrate into your Next.js app, run the MCP server as a sidecar process and call it via the MCP TypeScript SDK (@modelcontextprotocol/sdk) from a Next.js API route, passing user queries to Claude with the MCP tools attached.
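The Claude Desktop config entry mentioned above typically follows the standard mcpServers shape. Treat this as a template — the exact command and args depend on how you installed the package, so check the repo's README for the canonical snippet:

```json
{
  "mcpServers": {
    "mcp-brasil": {
      "command": "uvx",
      "args": ["mcp-brasil"]
    }
  }
}
```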

How I could use this

  1. Build a 'Brazilian Tech Job Market' widget on your blog: use the IBGE employment API + Compras.gov.br to show real-time data on public sector tech hiring and government IT contracts, auto-summarized weekly by Claude — unique data angle no other dev blog has.
  2. Create a CV enrichment tool for Brazilian developers: given a candidate's LinkedIn URL, use the MCP server to cross-reference their listed companies against CNPJ data (size, sector, revenue tier) and public procurement wins, then have Claude rewrite the resume bullet points with verified company context that ATS systems reward.
  3. Add a 'Ask about Brazilian AI regulation' chat widget to your blog powered by Claude + this MCP server: wire up the Câmara Federal API (bills/propositions) and Senado API so readers can ask questions like 'What AI-related bills are currently in committee?' and get answers grounded in live legislative data rather than Claude's training cutoff.
Go build something