Top 7 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
The dominant trend this week is Claude Skills and agent skill packs. Four of the ten top-starred new repos are skill bundles — packaged prompts, agent definitions, and tool wrappers that make a frontier model behave like a domain specialist with one install. Below is what's worth your attention.
1. op7418/guizang-ppt-skill
3,679 stars this week · HTML
A Claude Code Skill that turns a single prompt into horizontal-swipe magazine-style HTML decks. Ten layouts, five curated themes, WebGL hero backgrounds, single-file output you can drop on any static host.
Use case
Replace PowerPoint for any technical presentation that needs to look modern. You write a paragraph describing the deck — Claude picks layouts, generates copy, embeds a WebGL hero, and ships you one HTML file. Concrete scenario: you're pitching a side project to a friend tomorrow morning. Type `claude "make me a 6-slide deck explaining what my AI resume tool does, magazine theme, hero image of an Australian skyline"` and you're done in ninety seconds.
Why it's trending
Claude Skills landed in the CLI a few weeks ago and the ecosystem is filling up fast. This one nailed the design language — the output looks closer to a Stripe Press essay than a slide deck — which is why it's the most-starred new repo of the week.
How to use it
- Install the Claude Code CLI: `npm install -g @anthropic-ai/claude-code`
- Clone the repo into `~/.claude/skills/`: `git clone https://github.com/op7418/guizang-ppt-skill ~/.claude/skills/ppt`
- Restart Claude Code so it picks up the skill manifest
- In any session, run `claude "use the ppt skill to make a 5-slide deck about X"` — the output HTML will be written to your current directory
How I could use this
- Career tools demo decks. Auto-generate a 5-slide explainer for each new feature on TechPath AU and embed it in onboarding emails — no Figma round-trips.
- Resume export format. Spike a "magazine resume" export option in the resume analyser. Lots of devs already pay for fancy resume templates; an AI-generated, single-file HTML one is a wedge.
- Weekly digest as a swipe deck. Pipe my Friday digest through this skill so subscribers get a horizontally-swipeable web mag instead of an email wall of text.
2. freestylefly/awesome-gpt-image-2
1,560 stars this week · gpt-image prompt-engineering
A reverse-engineered prompt library for GPT-Image-2 — 329 worked examples and 13 industrial templates for product shots, infographics, character sheets, and research figures.
Use case
GPT-Image-2 is genuinely good but only if you know how to prompt it. This repo skips the months of trial and error: each example shows the prompt, the output, and which knob to turn for which effect. If you're building anything that generates images at scale (product mocks, blog headers, ad creatives), you want to start here.
Why it's trending
GPT-Image-2 dropped two weeks back and the community is still figuring out the prompt grammar. This is the most comprehensive open dump of working patterns and it's still being added to daily.
How to use it
- Clone it: `git clone https://github.com/freestylefly/awesome-gpt-image-2`
- Browse `templates/` — each template is a markdown file with prompt + sample output
- Copy a template, swap in your variables, and paste it into the GPT-Image-2 API or playground
- For automation, the prompts are pure strings — drop them into your codebase and call the image endpoint, e.g. `fetch('/v1/images/generate', { body: ... })`
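The "copy a template, swap in your variables" step is just string substitution. A minimal sketch in Python, assuming a hypothetical product-shot template (the repo's real templates are markdown files with their own placeholder names):

```python
from string import Template

# Hypothetical product-shot template in the spirit of the repo's
# industrial templates; real templates define their own placeholders.
PRODUCT_SHOT = Template(
    "Studio product photo of $product on a $surface surface, "
    "soft $lighting lighting, 85mm lens, shallow depth of field"
)

def render_prompt(product: str, surface: str = "matte black",
                  lighting: str = "diffused") -> str:
    """Fill the template with your variables. The result is a plain
    string you can send to any image-generation API or playground."""
    return PRODUCT_SHOT.substitute(
        product=product, surface=surface, lighting=lighting
    )

prompt = render_prompt("mechanical keyboard")
```

Because the output is a plain string, the same function slots straight into a batch job that renders a blog-header image per post.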
How I could use this
- Auto-generated cover images for every blog post. I currently use emojis. A small script that takes the post title + excerpt and renders a magazine-cover-style image using one of these templates would lift the visual quality of the whole site overnight.
- Research-figure generation for the digest. The "research figures" template category is excellent — generate one explainer figure per digest entry instead of all-text.
- Resume cover photo suggestions. Tasteful, monochrome portrait or city skyline images that fit Australian tech roles, picked by the resume analyser based on the candidate's industry.
3. deepseek-ai/TileKernels
1,306 stars this week · Python
A kernel library written in TileLang from the DeepSeek team — fused attention, MoE routing, GEMM kernels designed for the V4 architecture they shipped this month.
Use case
If you're doing model inference on H100s or B200s, the difference between a hand-written CUDA kernel and a PyTorch reference can be 3–10x. This library gives you DeepSeek's actual production kernels — the ones running their inference for V4 — as drop-in replacements. Concrete scenario: you self-host a DeepSeek-V4 fine-tune for a job-matching feature; swapping in these kernels could halve your GPU bill.
Why it's trending
DeepSeek open-sources the moat that most labs lock down. TileLang is becoming the lingua franca for performance-critical kernel work because it compiles to CUDA, ROCm and Apple Metal from a single source.
How to use it
- `pip install tilelang` (the compiler) and clone this repo
- Pick a kernel — e.g. `fused_attention.py` — and replace your equivalent PyTorch op
- The kernels expose the same forward/backward signature as `torch.nn.functional`, so the swap-in is one line
- Benchmark with the included `bench/` scripts before committing the change
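The repo ships its own `bench/` scripts; the harness below is a generic, hypothetical stand-in that shows the shape of a fair micro-benchmark (warmup runs, many iterations, median rather than mean). The two ops are pure-Python stand-ins, not real kernels; for GPU kernels you would also synchronize the device around the timer:

```python
import time
from statistics import median

def bench(fn, *args, warmup: int = 3, iters: int = 20) -> float:
    """Median wall-clock seconds per call. For GPU work, wrap the timed
    region with a device sync (e.g. torch.cuda.synchronize) as well."""
    for _ in range(warmup):
        fn(*args)          # warm caches / JIT before timing
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return median(times)

def ref(n: int) -> int:
    """Stand-in for a slow reference op."""
    return sum(i * i for i in range(n))

def fast(n: int) -> int:
    """Stand-in for a fused kernel: closed form of the same sum."""
    return n * (n - 1) * (2 * n - 1) // 6

speedup = bench(ref, 10_000) / bench(fast, 10_000)
```

Always check the two implementations agree on output before trusting the speedup number — a fast wrong kernel is worse than a slow right one.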
How I could use this
- Real-time interview practice mode. Lower latency on the interview AI feature means I can do voice-to-voice; this kind of kernel work is exactly what gets us under the 200ms-to-first-token bar.
- Cheaper resume embedding. The fused attention kernel is ~2.4x faster on H100 than the reference — directly halves the cost of computing semantic embeddings on every uploaded resume.
- Personal learning project. Read one kernel a week and write up what each line does. There's a strong content moat in being one of the few people who can explain TileLang in plain English to web developers.
4. openclaw/clawsweeper
1,244 stars this week · JavaScript
ClawSweeper scans every issue and PR in a repo and suggests what should be closed, with a reason. Runs as a weekly GitHub Action and posts a summary issue.
Use case
Your repo's issue tracker is rotting. You have 80 open issues, 30 of them are duplicates, 20 are fixed in passing PRs, 10 are bot reports nobody read. ClawSweeper reads them all and produces a single comment: "Close these 17, here's why for each." You triage in five minutes instead of five hours.
Why it's trending
This is exactly the kind of agentic chore most maintainers want to delegate. The repo went from zero to 1.2k stars in a week because every open-source maintainer felt the pain in their bones.
How to use it
- Add `.github/workflows/clawsweeper.yml`:

```yaml
on:
  schedule:
    - cron: '0 9 * * 1'
jobs:
  sweep:
    runs-on: ubuntu-latest
    steps:
      - uses: openclaw/clawsweeper@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          model: claude-sonnet-4-6
```

- Add `ANTHROPIC_API_KEY` (or `CLAUDE_CODE_OAUTH_TOKEN`) to repo secrets
- The action posts a single summary issue every Monday morning with proposed closures
- Optionally set `auto_close: true` to actually close them — leave it off until you trust the suggestions
How I could use this
- Run it on this very repo. I have eight bot-generated issues from the daily analyst already — perfect test case.
- Adapt the same pattern for stale resume reviews. When a user uploads a resume and never comes back, a weekly sweep could nudge them with a personalised follow-up email instead of letting the lead die.
- Monthly tech-debt review issue. Fork this and point it at the codebase itself — "here are 12 files that haven't been touched in 6 months, are they dead code?".
5. earthtojake/text-to-cad
1,026 stars this week · JavaScript · agents cad text-to-cad
An open-source agent harness that turns natural language into parametric CAD models. Generates STEP and STL outputs. Runs in WASM — no install.
Use case
You're prototyping a 3D-printable phone holder. Instead of opening Fusion 360, type "phone holder, 165mm tall, slot for USB-C cable, weighted base." Five seconds later you have an STL ready for Bambu Studio. This isn't toy-quality — the parametric output is editable in real CAD tools.
Why it's trending
CAD has been the unsexiest corner of generative AI for two years. This one finally crosses the "actually usable for prototyping" threshold and the maker community noticed immediately.
How to use it
- Try the hosted demo at the linked URL — drop a prompt, download the STL
- For local: `npm install text-to-cad` and import the WASM module — runs entirely in-browser
- Use the `text2cad.generate({ prompt, format: 'step' })` API and pipe the output to your slicer
How I could use this
- A "what should I make this weekend?" tool. Generate three random useful 3D-printable objects each Saturday for myself, embedded as a small interactive widget on the blog.
- Career tools side-quest: visualise your career path as a 3D model. Each role becomes a connected node, the geometry is generated from the user's actual job history. Memorable, shareable, weird in a good way.
- A demo on the blog: a "describe a thing, see it 3D-printable" interactive — exactly the kind of thing that gets shared on Twitter/Bluesky and drives traffic.
6. GammaLabTechnologies/harmonist
790 stars this week · Python · multi-agent-framework
Portable agent orchestration with mechanical protocol enforcement — 186 pre-built agents, zero runtime dependencies, runs anywhere Python runs.
Use case
You want a multi-agent system but you're tired of LangGraph's abstraction soup and CrewAI's "trust the LLM to follow the rules" approach. Harmonist enforces inter-agent protocols mechanically — agents literally cannot return malformed messages because the runtime rejects them. Concrete: build a job-application pipeline where the resume-reviewer agent's output schema is checked before the cover-letter agent ever sees it. No silent failures, no "oh the LLM forgot to include the email field" debugging at 2am.
Why it's trending
The multi-agent space is full of frameworks that look great in demos and fall apart in production. "Mechanical protocol enforcement" is a real differentiator — the docs show actual diff-checking of the message bus and it's striking how much it simplifies error handling.
How to use it
- `pip install harmonist`
- Define your agents: `from harmonist import Agent, Protocol` — each agent declares input and output schemas
- Wire them with `Pipeline([resume_agent, match_agent, cover_letter_agent])`
- Run: any schema mismatch raises immediately at the seam, not three steps downstream
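The "mechanical enforcement" idea is worth seeing in miniature. The sketch below shows the pattern (every agent declares an output schema, and the runtime rejects any message that violates it before the next agent sees it) in plain Python. This mimics Harmonist's described behaviour; it is not the library's actual API:

```python
class ProtocolError(Exception):
    """Raised at the seam between agents, not three steps downstream."""

class Agent:
    def __init__(self, name, output_schema, fn):
        self.name = name
        self.output_schema = set(output_schema)  # required output fields
        self.fn = fn

    def run(self, message: dict) -> dict:
        out = self.fn(message)
        missing = self.output_schema - out.keys()
        if missing:
            # Mechanical check: a malformed message never leaves this agent.
            raise ProtocolError(f"{self.name} omitted fields: {sorted(missing)}")
        return out

def pipeline(agents, message: dict) -> dict:
    for agent in agents:
        message = agent.run(message)
    return message

# A well-behaved agent and one that "forgets" a required field:
resume_agent = Agent("resume", {"skills", "email"},
                     lambda m: {"skills": ["python"], "email": "a@b.c"})
bad_agent = Agent("match", {"score"}, lambda m: {})
```

The payoff is in the failure mode: the error names the agent and the missing field, instead of a downstream agent crashing on a `KeyError` at 2am.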
How I could use this
- Replace the resume → match → cover-letter pipeline. Currently a single big prompt. Splitting into three protocol-enforced agents would let me catch resume-parsing failures before burning tokens on the cover letter.
- Interview practice as agent chain. Question-generator → answer-evaluator → coaching-agent, each with strict schemas. The coaching agent never has to guess what fields the evaluator returned.
- Daily content pipeline rewrite. My ai-news / digest / githot scripts are essentially mini-agents glued together. Rewriting them in Harmonist would give me schema-validated outputs and would make the quota-crash bug I just fixed structurally impossible to recur.
7. future-agi/future-agi
686 stars this week · Python · observability evals
A self-hostable, end-to-end LLM observability platform. Tracing, evals, simulations, datasets, gateway, guardrails. Apache 2.0.
Use case
You're shipping an AI feature and you can't see what's happening in production. Future AGI is what you'd buy from Langfuse or Helicone — except open source and self-hosted. Concrete: a user reports their resume analysis was wrong. With this, you can replay the exact LLM call, see the prompt, the response, and re-run it through a different model in seconds.
Why it's trending
The hosted observability space (Langfuse, Helicone, Arize) is consolidating and pricing up. A genuinely self-hostable open-source alternative with all five capabilities — not just tracing — is rare. The 686 stars are mostly indie devs voting with their feet.
How to use it
- `git clone https://github.com/future-agi/future-agi && cd future-agi && docker-compose up`
- Add the SDK to your app: `pip install fagi` and wrap your LLM client
- Hit `localhost:3000` for the dashboard — every call shows up immediately with prompt, response, latency, cost
- Define eval suites in YAML and run them on every PR via the included GitHub Action
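"Wrap your LLM client" amounts to intercepting every call and recording the four things the dashboard shows. A minimal sketch of that pattern (the `fagi` SDK's real surface is not shown here; the decorator, the in-memory `TRACES` list, and the crude token estimate are all assumptions for illustration):

```python
import time
from functools import wraps

TRACES = []  # in a real setup the SDK ships these to the dashboard

def traced(price_per_1k_tokens: float):
    """Wrap any LLM-call function to record prompt, response, latency,
    and a rough cost estimate: the raw material of observability."""
    def decorator(call):
        @wraps(call)
        def wrapper(prompt: str) -> str:
            t0 = time.perf_counter()
            response = call(prompt)
            tokens = (len(prompt) + len(response)) / 4  # crude estimate
            TRACES.append({
                "prompt": prompt,
                "response": response,
                "latency_s": time.perf_counter() - t0,
                "cost_usd": tokens / 1000 * price_per_1k_tokens,
            })
            return response
        return wrapper
    return decorator

@traced(price_per_1k_tokens=0.003)
def fake_llm(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for a real model call
```

Once every call passes through a wrapper like this, "replay the exact LLM call for a support ticket" is just a lookup over the stored traces.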
How I could use this
- Stand up a self-hosted instance for TechPath AU. I'm currently flying blind on the resume analyser and interview features — every call should be traced with cost attribution per user.
- Eval suite for the daily content pipeline. Define "good githot post" as 3-5 evals (specific use cases, real code, no hype words) and gate auto-publish on it. Solves the "AI wrote a vague post" problem permanently.
- Replay tool for support tickets. When a user complains, paste their session ID and instantly see the exact LLM exchange — turns a 30-minute investigation into 30 seconds.
That's the wrap for this week. The unmistakable signal: packaged, distributable AI capabilities — Claude Skills, agent skill packs, drop-in kernel libraries, agent harnesses with strict protocols — are where the velocity is. The era of "build your own agent from scratch" is ending; the era of `npm install agent-that-does-X` has begun.