Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. farzaa/clicky
3,937 stars this week · Swift
Clicky is an open-source macOS AI tutor that floats next to your cursor, sees your screen, speaks to you, and can point at UI elements — like a pair-programming buddy that watches what you're doing in real time.
Use case
Developers and learners often get stuck and have to context-switch to ChatGPT, paste screenshots, describe their problem, and wait — losing flow entirely. Clicky eliminates that by giving you a persistent AI overlay that already sees your screen, so you can ask 'why is this TypeScript error happening?' and it answers in context, pointing at the exact line, without you lifting a finger to copy-paste anything.
Why it's trending
The demo tweet went viral this week because it nails a viscerally satisfying UX — an AI that literally points at your screen like a human tutor — hitting at the same moment developers are hungry for ambient AI tools that don't require leaving the editor. The open-source release lets the community fork and extend it immediately, which accelerated the star count.
How to use it
1. Clone and set up the Cloudflare Worker proxy (never exposes keys in the binary): `cd worker && npm install`, then run `npx wrangler secret put ANTHROPIC_API_KEY`, `npx wrangler secret put ASSEMBLYAI_API_KEY`, `npx wrangler secret put ELEVENLABS_API_KEY`, and finally `npx wrangler deploy` to get your worker URL.
2. Open the Xcode project, paste your Cloudflare Worker URL into the config constant (look for `WORKER_BASE_URL` in the Swift source), then build and run on macOS 14.2+.
3. Grant screen recording and microphone permissions when prompted — ScreenCaptureKit requires explicit user approval in System Settings > Privacy.
4. Fastest path: use Claude Code by pasting the provided prompt into it — it will clone the repo, read CLAUDE.md, configure the worker, and walk you through Xcode setup interactively.
5. To add a custom feature (e.g., a 'review my code' button), open `CLAUDE.md` to understand the architecture, then ask Claude Code to scaffold it — the worker pattern means you can add new API routes (e.g., a GPT-4o vision call) without touching the Swift binary (see the sketch after this list).
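To make that worker pattern concrete, here is a minimal sketch of what an added proxy route could look like. This is not Clicky's actual worker code; the `/vision` route, the request shape, and the `OPENAI_API_KEY` binding are assumptions for illustration.

```ts
// Minimal Cloudflare Worker sketch of the proxy pattern (not Clicky's code).
// The /vision route and its request shape are hypothetical.
export default {
  async fetch(request: Request, env: { OPENAI_API_KEY: string }): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/vision" && request.method === "POST") {
      // Hypothetical new route: forward a screenshot to a vision model.
      const { imageBase64, question } = (await request.json()) as {
        imageBase64: string;
        question: string;
      };
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Secret stays server-side, set via `npx wrangler secret put OPENAI_API_KEY`
          Authorization: `Bearer ${env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          messages: [
            {
              role: "user",
              content: [
                { type: "text", text: question },
                { type: "image_url", image_url: { url: `data:image/png;base64,${imageBase64}` } },
              ],
            },
          ],
        }),
      });
      return new Response(upstream.body, { headers: { "Content-Type": "application/json" } });
    }
    return new Response("Not found", { status: 404 });
  },
};
```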
How I could use this
- Build a 'blog post reviewer' fork: a macOS overlay that watches you write in your Next.js blog's CMS or markdown editor, and whispers real-time suggestions — 'this paragraph is too dense, split it' or 'you used passive voice 3 times here' — without switching apps.
- Fork Clicky into a 'portfolio interview coach': it watches you fill out job application forms or write cover letters in the browser, detects the company name from the screen, pulls job description context via a Supabase-backed lookup, and gives live coaching on tailoring your response to that specific role.
- Use Clicky's Cloudflare Worker + ScreenCaptureKit architecture as the backend pattern for a 'live coding screencaster AI' feature on your blog — a tool that records your screen while you build a tutorial, then auto-generates timestamped blog post sections by sending frame snapshots to Claude Vision, turning your dev session into a structured post draft automatically.
2. xixu-me/awesome-persona-distill-skills
3,404 stars this week · JavaScript · agent-skills awesome awesome-list persona-distill
A curated list of Agent Skills (for agentskills.io) that distill real people's communication styles, decision frameworks, and relationship dynamics into reusable AI persona prompts.
Use case
The core problem: you want an AI agent that doesn't just answer generically but responds with a specific person's reasoning style, vocabulary, and emotional register. For example, instead of 'ask ChatGPT for career advice,' you load a distilled skill of a specific mentor archetype — ex-colleague, academic advisor, or a public figure's methodology — and the agent reasons as that persona would. This is the missing layer between raw LLMs and truly personalized AI companions.
Why it's trending
The agentskills.io ecosystem is gaining traction as a structured alternative to ad-hoc system prompts, and the 'persona distillation' framing (extracting style and logic from digital traces rather than cloning a person) hits a cultural nerve around digital memory and AI relationships in the wake of ChatGPT's memory features. 3,400 stars in one week suggests a viral moment in the Chinese developer community around preserving and simulating meaningful relationships via AI.
How to use it
1. Browse the repo categories (self-distillation, workplace relationships, public figures) and pick a `.skill` file from a linked GitHub repo — e.g., a 'colleague persona' skill.
2. Pull the skill's system prompt or structured config from the linked repo (most are markdown or JSON system prompt templates).
3. Load it into your agent runtime — if using agentskills.io directly, import via their skill loader; if DIY, paste the system prompt into your OpenAI/Anthropic API call's system role (see the sketch after this list).
4. Test with relationship-specific prompts: 'How would this person respond to me missing a deadline?' or 'Give me feedback on this code in their style.'
5. To build your own, follow the self-distillation pattern: collect ~50 samples of the person's writing/decisions, run them through an extraction prompt that identifies vocabulary patterns, decision heuristics, and emotional defaults, then encode the result as a reusable system prompt block.
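For the DIY path in step 3, here is a minimal sketch using the Anthropic TypeScript SDK. The local skill file path and its plain-markdown format are assumptions; swap in whichever model you actually use.

```ts
// Load a distilled persona file and carry it in the system role (step 3's DIY path).
import fs from "node:fs";
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical local copy of a downloaded persona skill, as plain markdown.
const personaPrompt = fs.readFileSync("./skills/colleague-persona.md", "utf8");

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const reply = await client.messages.create({
  model: "claude-sonnet-4-5", // use whichever model your account has access to
  max_tokens: 1024,
  system: personaPrompt, // the persona's vocabulary, heuristics, and register
  messages: [{ role: "user", content: "How would you respond to me missing a deadline?" }],
});

console.log(reply.content);
```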
How I could use this
- Build a 'Henry's Writing Voice' skill by feeding your existing blog posts through a distillation prompt — then use it as a system prompt when drafting new posts with AI assistance, so AI suggestions actually sound like you instead of generic GPT prose. Store the skill config in Supabase and version it as your writing evolves.
- Create a 'Senior Reviewer' persona skill distilled from 3-4 real engineering mentors' public writing (blog posts, conference talks, code review comments) — use it in a career tool that auto-generates personalized code review feedback or interview prep questions in the style of the kind of senior engineer you want to impress.
- Add a 'Blog Persona Chat' feature where readers can have a conversation with a distilled version of your past writing — pull all your blog post content into a vector store in Supabase, use a persona skill as the system prompt, and surface it as a Next.js API route + chat widget. The persona answers questions in your voice based only on things you've actually written.
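A sketch of that chat endpoint's retrieval core, assuming the pgvector setup from Supabase's own guides (an embedded documents table plus a `match_documents` RPC). Those names are conventions from their docs, not something this repo provides.

```ts
// Persona chat over your own posts: retrieve similar passages, answer in-voice.
import { createClient } from "@supabase/supabase-js";
import Anthropic from "@anthropic-ai/sdk";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);
const openai = new OpenAI();
const anthropic = new Anthropic();

export async function answerAsMe(question: string, personaPrompt: string) {
  // Embed the question, then fetch the nearest blog passages via pgvector.
  const emb = await openai.embeddings.create({ model: "text-embedding-3-small", input: question });
  const { data: passages } = await supabase.rpc("match_documents", {
    query_embedding: emb.data[0].embedding, // RPC name/args follow Supabase's pgvector guide
    match_count: 5,
  });

  const context = (passages ?? []).map((p: { content: string }) => p.content).join("\n---\n");
  const reply = await anthropic.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 800,
    // Ground the persona: answer only from what was actually written.
    system: `${personaPrompt}\n\nAnswer only from these excerpts:\n${context}`,
    messages: [{ role: "user", content: question }],
  });
  return reply.content;
}
```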
3. alchaincyf/hermes-agent-orange-book
2,089 stars this week · various
A free, structured PDF guide (17 chapters) to Nous Research's Hermes Agent framework — the first open-source AI agent with a built-in self-improving learning loop and three-layer memory system.
Use case
Most developers trying to build production AI agents hit a wall when their agent loses context, can't learn from past interactions, or requires manual prompt engineering every time behavior changes. Hermes Agent solves this with automatic Skill creation and a persistent memory architecture — this guide walks you through implementing it end-to-end, from setup to multi-agent orchestration. Concrete example: instead of re-prompting your blog assistant every session, Hermes remembers your writing style preferences and evolves its behavior automatically.
Why it's trending
Hermes Agent dropped in February 2026 and is being positioned as the open-source answer to Claude Code and OpenClaw, so this guide hit at exactly the right moment when developers are evaluating which agentic framework to bet on. The bilingual PDF (Chinese + English) also pulled a large audience from the Chinese dev community simultaneously.
How to use it
1. Download the English PDF from the repo and skim the Part 2 chapters (§03-06) first — these cover the learning loop and memory system, which are the architectural differentiators you need to understand before touching code.
2. Install Hermes Agent from Nous Research: `git clone https://github.com/NousResearch/hermes-agent && cd hermes-agent && pip install -e .` — requires Python 3.11+ and an API key for your preferred LLM backend.
3. Follow §08 ('First Conversation') to run the default agent and observe how it creates its first Skill automatically after a repeated task pattern: `hermes run --profile default`
4. Use §12 ('Knowledge Assistant') as your template — it maps directly to a blog use case where the agent indexes your markdown posts and answers reader questions with memory of past queries.
5. Read §16-17 before building anything production-critical — the 'boundaries of self-improving agents' chapter gives concrete guardrails on when the learning loop can drift and how to constrain it.
How I could use this
- Build a 'Blog Writing Copilot' Supabase Edge Function backed by Hermes Agent: it ingests all of Henry's existing posts as its knowledge base, learns his tone and topic preferences over time via the Skill evolution system, and surfaces a 'continue this draft' endpoint in the Next.js editor — no re-prompting needed between sessions because Hermes persists the learned style profile.
- Create a self-improving job application agent for career tools: feed it Henry's resume + a stream of job descriptions via a Supabase table, let Hermes auto-create Skills for 'JD similarity scoring' and 'cover letter tone matching' as it processes more examples — the agent gets measurably better at tailoring applications the more jobs Henry runs through it, without manual prompt updates.
- Implement a multi-agent content pipeline for the blog using §15 ('Multi-Agent') as the blueprint: one Hermes agent handles research (web search + summarization), a second handles SEO optimization using learned keyword patterns from post performance data stored in Supabase, and a coordinator agent orchestrates the handoff — expose this as a single `/api/generate-post` route in Next.js (sketched below).
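Hermes Agent's programmatic interface isn't shown in this guide summary, so the following is a shape-only sketch of the coordinator route. The localhost gateway, the `/agents/:name/run` path, and the JSON fields are entirely hypothetical; check the guide's §15 for the real orchestration API.

```ts
// app/api/generate-post/route.ts: shape-only sketch; every Hermes endpoint
// below is hypothetical and stands in for whatever the framework exposes.
import { NextResponse } from "next/server";

const HERMES_URL = "http://localhost:8700"; // hypothetical local gateway

async function runAgent(agent: string, task: string): Promise<string> {
  const res = await fetch(`${HERMES_URL}/agents/${agent}/run`, { // hypothetical route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ task }),
  });
  const { output } = (await res.json()) as { output: string }; // hypothetical shape
  return output;
}

export async function POST(req: Request) {
  const { topic } = (await req.json()) as { topic: string };
  // Research agent first, then hand its summary to the SEO writer.
  const research = await runAgent("researcher", `Research and summarize: ${topic}`);
  const draft = await runAgent("seo-writer", `Write an SEO-optimized draft from:\n${research}`);
  return NextResponse.json({ draft });
}
```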
4. KKKKhazix/khazix-skills
1,709 stars this week · Python
A curated collection of battle-tested AI prompts and installable Agent Skills (following the agentskills.io open standard) for deep research and long-form writing, designed to drop into Claude Code, Codex, or OpenClaw.
Use case
Most AI prompt libraries are either too generic or too fragile — they work once and fall apart in edge cases. This repo solves the problem of reproducible, structured AI workflows: the hv-analysis skill auto-crawls the web and outputs a formatted PDF research report, while khazix-writer encodes a complete editorial style guide with a 4-layer self-review system. Concrete example: instead of prompting ChatGPT from scratch every time you need a competitor analysis, you install the hv-analysis skill once and get a consistent, structured 10,000-word report on demand.
Why it's trending
The Agent Skills open standard (agentskills.io) is gaining traction this week as Claude Code and Codex usage spikes post-launch, and developers are realizing that raw prompts don't survive context switching between sessions — installable, persistent skill files solve that exactly. This repo is one of the first real-world skill collections published under that standard.
How to use it
1. Clone the repo and inspect the `.skill` files to understand the format: each skill is a structured instruction set with metadata, rules, and examples — not just a raw prompt string.
2. Install the writing skill into Claude Code by dropping the file into `~/.claude/skills/` or by telling Claude Code: 'Install this skill: https://github.com/KKKKhazix/khazix-skills'.
3. For the lightweight prompt use case, open `prompts/横纵分析法.md`, copy the prompt, replace the `[研究对象]` (research subject) placeholder with your target (e.g. 'Next.js 15 vs Remix'), and paste it into any model with web search enabled.
4. To build your own skill, mirror the repo's file structure: a `skill.json` manifest with name/version/description, plus an `instructions.md` with your system prompt logic, then package as a `.skill` zip (see the sketch after this list).
5. Use the khazix-writer skill as a template to encode YOUR own writing voice — swap out the style examples section with 3-5 of your own blog posts so the agent mimics your tone specifically.
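A small sketch of step 4's packaging flow. The `skill.json` field names follow the name/version/description structure described above, but treat the exact schema as an assumption and copy from the repo's real files.

```ts
// Assemble a skill package: manifest + instructions, then zip it by hand.
import fs from "node:fs";
import path from "node:path";

const skillDir = "my-writing-skill";
fs.mkdirSync(skillDir, { recursive: true });

// Manifest fields mirror the name/version/description trio described above.
const manifest = {
  name: "my-writing-voice",
  version: "0.1.0",
  description: "Drafts prose in my personal blog voice",
};
fs.writeFileSync(path.join(skillDir, "skill.json"), JSON.stringify(manifest, null, 2));

// System prompt logic plus a few style examples pulled from your own posts.
const instructions = [
  "# Writing voice rules",
  "- Short declarative sentences; no filler adverbs.",
  "- Open every post with a concrete example.",
].join("\n");
fs.writeFileSync(path.join(skillDir, "instructions.md"), instructions);

// Then package it, e.g.: zip -r my-writing-voice.skill my-writing-skill/
```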
How I could use this
- Fork khazix-writer and replace the style examples with Henry's own best blog posts, then expose it as a `/api/draft` endpoint in the Next.js blog — Henry pastes a rough outline and gets a full draft in his own voice, stored as a Supabase draft with status='ai-generated' for review.
- Use the hv-analysis prompt directly to generate deep-research posts about a niche tech topic (e.g. 'edge runtime tradeoffs in 2025'), then auto-publish the structured output as a blog post with sections mapped to a predefined MDX template — an instant SEO-optimized long-form content pipeline.
- Build a personal 'skill registry' page on the blog where Henry publishes his own `.skill` files (e.g. a resume-tailoring skill, a PR review skill) — visitors can one-click install them into their own Claude Code setup, turning the blog into a tool distribution platform and driving return visits.
5. yizhiyanhua-ai/fireworks-tech-graph
1,530 stars this week · Python
A Claude Code skill that converts natural language descriptions into publication-ready SVG+PNG technical diagrams across 14 types and 7 visual styles — no diagramming tools or manual drawing required.
Use case
Engineers waste hours in Lucidchart or draw.io manually placing boxes and arrows for architecture docs. This repo lets you describe a system in plain English (e.g., 'Generate a RAG pipeline with a vector store, reranker, and streaming LLM response — blueprint style') and get a 1920px PNG back in seconds. Especially useful for AI/Agent-heavy architectures where domain-specific shapes (memory layers, tool call flows, multi-agent graphs) would normally require custom templates.
Why it's trending
Claude Code's skill/plugin ecosystem is exploding right now as developers discover it can act as an agentic code environment, and this is one of the first polished, domain-specific skills targeting the AI architecture documentation gap that every LLM project team hits immediately.
How to use it
- Clone the repo and install dependencies: `pip install -r requirements.txt`, plus ensure `rsvg-convert` is available (`brew install librsvg` on Mac).
- Add the skill to your Claude Code environment by pointing it at the skill manifest file per the repo's Claude Code integration instructions.
- In a Claude Code session, invoke it with a plain English prompt: "Generate a multi-agent orchestration diagram with a planner agent, two tool-use subagents, and a shared memory store — dark terminal style".
- The skill returns `diagram.svg` and `diagram.png` (1920px wide) in your working directory — drop the PNG directly into your blog post, README, or slide deck.
- Iterate by re-prompting with style or layout changes: "Same diagram, switch to blueprint style and add a critique/reflection loop between the planner and subagents".
How I could use this
- Auto-generate a fresh architecture diagram for every technical blog post at build time: store a `.diagram-prompt` file alongside each MDX post (e.g., 'RAG pipeline with Supabase pgvector, OpenAI embeddings, and streaming response'), run this skill in a GitHub Actions step on merge, and embed the output PNG directly into the post — no more stale diagrams that don't match the text (a build-step sketch follows this list).
- Build a 'System Design Explainer' page on the blog where readers can type a system description into a textarea, hit generate, and see the SVG rendered inline via a Next.js API route that shells out to this skill — it turns a static blog into an interactive learning tool and doubles as a portfolio demo of your AI integration skills.
- Use the multi-agent and tool-call flow diagram types to auto-document the agentic workflows in your own AI projects: wire it into your dev workflow so that whenever you update an agent's tool list or memory architecture in code, a CI step regenerates the architecture diagram and commits it to `/docs` — keeping technical documentation in sync with the actual implementation without manual effort.
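A sketch of that build step, assuming a posts layout with one directory per slug. The `python generate.py` invocation is a placeholder for whatever entry point the repo actually documents.

```ts
// build-diagrams.ts: run in CI on merge to regenerate post diagrams.
import { execFileSync } from "node:child_process";
import fs from "node:fs";
import path from "node:path";

const postsDir = "content/posts"; // assumed layout: one directory per post slug

for (const slug of fs.readdirSync(postsDir)) {
  const promptFile = path.join(postsDir, slug, ".diagram-prompt");
  if (!fs.existsSync(promptFile)) continue;

  const prompt = fs.readFileSync(promptFile, "utf8").trim();
  // Placeholder invocation: swap in the skill's documented CLI entry point.
  execFileSync("python", ["generate.py", "--prompt", prompt, "--out", path.join(postsDir, slug)]);
  console.log(`regenerated diagram for ${slug}`);
}
```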
6. mattmireles/gemma-tuner-multimodal
1,229 stars this week · Python
Fine-tune Gemma 4/3n on text, images, and audio locally on Apple Silicon via LoRA — no NVIDIA GPU or cloud compute required.
Use case
Developers with MacBooks who want custom fine-tuned models can't use Unsloth or standard axolotl (CUDA-only), and renting an H100 for small experiments is wasteful. This lets Henry fine-tune a Gemma model on his own blog post corpus (text CSVs) or even audio transcripts locally — producing a personalized writing-style model without sending data to a third party or paying for GPU time.
Why it's trending
Gemma 4 and 3n dropped recently with strong multimodal benchmarks, and the Mac developer community has been starved for a unified fine-tuning tool that handles all three modalities natively on MPS — this fills that gap exactly when appetite for local Gemma fine-tuning is peaking.
How to use it
- Clone and install: `git clone https://github.com/mattmireles/gemma-tuner-multimodal && cd gemma-tuner-multimodal && pip install -r requirements.txt`
- Run the wizard CLI to check your Apple Silicon setup and configure LoRA hyperparams: `python tune.py --wizard`
- Prepare a CSV with `prompt` and `completion` columns from your blog posts, then point the config at it: set `modality: text`, `dataset_path: ./blog_posts.csv`, `model: gemma-4` in `config.yaml`
- Start training with the live visualizer: `python tune.py --config config.yaml --visualizer` — open the URL printed in the terminal to watch loss curves and attention heatmaps in real time
- Export the LoRA adapter and serve it locally with `ollama`, or integrate via LangChain for inference in your Next.js API routes (a serving sketch follows this list)
How I could use this
- Fine-tune Gemma on all of Henry's published blog posts (export from Supabase as CSV) to create a 'write in Henry's voice' assistant — wire it into a Next.js /api/draft route so the blog's editor can autocomplete paragraphs that actually sound like him, not generic GPT-4.
- Build a personalized cover letter generator by fine-tuning on (job_description, tailored_cover_letter) pairs Henry curates — the LoRA adapter runs locally so sensitive resume data never leaves his machine, and the model learns his specific framing of skills rather than generic templates.
- Use the audio+text LoRA to fine-tune on transcripts of podcasts or talks in Henry's niche (e.g., AI/web dev conferences), then build a 'ask the episode' feature on the blog where readers query specific podcast content — differentiating the blog from sites that just embed a raw player.
7. nashsu/llm_wiki
907 stars this week · TypeScript
A desktop app that uses LLMs to incrementally build a persistent, interlinked wiki from your documents — replacing one-shot RAG with a growing structured knowledge graph.
Use case
Standard RAG pipelines answer questions by re-retrieving and re-synthesizing from raw docs every time, which is stateless and slow. LLM Wiki instead processes your documents once into structured wiki pages with typed relationships and a knowledge graph, so queries hit organized knowledge rather than raw chunks. Concrete example: drop in 200 research papers on AI alignment and get a browsable wiki with topic clusters, surprising cross-paper connections flagged automatically, and gap analysis — not just a chatbot over PDFs.
Why it's trending
The AI tooling community is pushing back against naive RAG implementations in 2025 — building persistent, structured knowledge graphs from LLM-generated wikis is the next architectural pattern gaining traction. The Chrome clipper + Deep Research combo also competes directly with Notion AI and Obsidian Copilot plugins, landing it in front of a large PKM audience.
How to use it
1. Clone and install: `git clone https://github.com/nashsu/llm_wiki && cd llm_wiki && npm install && npm run build` (Electron app — produces a cross-platform desktop binary).
2. Configure your LLM endpoint in settings — it supports any OpenAI-compatible API (OpenAI, Ollama, LM Studio), so point it at a local Ollama instance to keep costs zero.
3. Drop a folder of markdown files, PDFs, or clipped web pages into the import queue. The two-step chain-of-thought ingest will analyze structure first, then generate linked wiki pages — watch progress in the persistent queue UI.
4. Explore the 4-signal knowledge graph: navigate to Graph view to see Louvain-detected clusters, click 'Graph Insights' to surface unexpected connections between documents.
5. Install the Chrome Web Clipper extension (included in `/extension`) to one-click capture blog posts or docs directly into your wiki for auto-ingest.
How I could use this
- Feed all of Henry's published blog posts into LLM Wiki to auto-generate a 'Topics' knowledge graph — then expose the wiki pages as a `/knowledge` route in the Next.js blog, giving readers a Wikipedia-style way to explore interconnected concepts across all posts without Henry manually tagging anything.
- Point LLM Wiki at a folder of job descriptions, company engineering blogs, and Henry's own resume/projects. Use the gap analysis feature to identify skills Henry writes about least but that appear densely in target JD clusters — producing a prioritized learning roadmap rather than generic 'improve your resume' advice.
- Use LLM Wiki's persistent ingest queue + LanceDB vector search as the knowledge layer for an AI writing assistant feature on the blog: when Henry drafts a new post, call the wiki's search API to surface related past posts and flagged knowledge gaps, then display inline suggestions in a custom Tiptap/ProseMirror editor component backed by Supabase for storing drafts (sketched below).
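A shape-only sketch of that editor integration. LLM Wiki is a desktop app and may not expose an HTTP search API at all; the port, path, and response shape below are pure assumptions to show where the call would sit.

```ts
// Surface related wiki pages while drafting (endpoint entirely hypothetical).
type WikiHit = { title: string; url: string; snippet: string };

export async function relatedNotes(draftText: string): Promise<WikiHit[]> {
  const res = await fetch("http://localhost:4317/api/search", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: draftText.slice(0, 2000), limit: 5 }),
  });
  return (await res.json()) as WikiHit[];
}

// In the Tiptap editor: debounce on typing, call relatedNotes, and render
// hits as inline hint cards next to the paragraph being edited.
```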
8. phuryn/claude-usage
878 stars this week · Python · claude-code
A zero-dependency local dashboard that parses Claude Code's JSONL logs to give you actual token counts, cost breakdowns, and session history that Anthropic's own UI hides from you.
Use case
Claude Code writes detailed usage logs to ~/.claude/ regardless of your plan, but Anthropic only surfaces a vague progress bar for Pro/Max users. If you're building AI features heavily with Claude Code and want to know whether you're burning $50/day on a specific project or which model is costing the most, this reads those local JSONL files and gives you real charts and cost estimates. Concrete example: you're iterating on a Supabase schema generator with Claude Code all week — this tells you exactly how many tokens that cost per session vs. your blog post drafting sessions.
Why it's trending
Claude Code adoption exploded in the last few months as Anthropic opened it beyond API-only users, meaning thousands of Pro/Max subscribers are now running it daily with zero visibility into actual consumption. Developers are hitting usage limits unexpectedly and this is the first clean, no-install-needed tool that surfaces what's actually happening.
How to use it
- Clone the repo: `git clone https://github.com/phuryn/claude-usage && cd claude-usage`
- Run a scan to parse your local JSONL logs into a SQLite DB: `python3 cli.py scan`
- Launch the dashboard in your browser: `python3 cli.py dashboard` — it spins up a local HTTP server, no deps needed
- Check today's breakdown by model in the terminal without the browser: `python3 cli.py today`
- Query the generated `~/.claude/usage.db` directly with SQLite if you want to build your own reporting on top: `sqlite3 ~/.claude/usage.db 'SELECT project, SUM(tokens_input + tokens_output) as total FROM sessions GROUP BY project ORDER BY total DESC'` (a Node wrapper is sketched below)
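If you'd rather query from Node than the sqlite3 shell, here is a sketch with better-sqlite3. The `sessions` columns come from the query above; inspect usage.db first in case the real schema differs.

```ts
// Per-project token totals straight from claude-usage's SQLite database.
import Database from "better-sqlite3";
import os from "node:os";
import path from "node:path";

const db = new Database(path.join(os.homedir(), ".claude", "usage.db"), { readonly: true });

const rows = db
  .prepare(
    `SELECT project, SUM(tokens_input + tokens_output) AS total
     FROM sessions GROUP BY project ORDER BY total DESC`
  )
  .all() as { project: string; total: number }[];

for (const { project, total } of rows) {
  console.log(`${project}: ${total.toLocaleString()} tokens`);
}
```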
How I could use this
- Build a 'Claude Cost per Blog Post' tracking page in your blog's admin panel — after each drafting session with Claude Code, run the scan and query the SQLite DB via a small Python script that tags sessions by git branch name, then surface the cost-per-post in your Supabase dashboard so you can see the actual AI cost of your content operation.
- Create a weekly 'AI Dev Spending Report' CLI tool for your portfolio that wraps claude-usage's SQLite output and cross-references it with your GitHub commit log — showing cost-per-feature rather than cost-per-session, which is a genuinely useful metric you could write about and open-source to get visibility.
- Use the session history data to train a personal productivity model — export your Claude Code sessions with timestamps and token counts, correlate with git commit frequency or Supabase schema changes, and build a Next.js page that visualizes your most 'expensive' coding patterns, positioning yourself as someone who thinks seriously about AI cost efficiency.
9. wxtsky/CodeIsland
869 stars this week · Swift
A macOS notch-native status panel that shows real-time activity from 9 AI coding agents (Claude Code, Codex, Cursor, etc.) so you never have to context-switch to check if an agent is stuck waiting for input.
Use case
When you're running Claude Code or Codex CLI in a background terminal while writing in another app, you have no idea if the agent finished, errored out, or is waiting for your permission to delete a file. CodeIsland surfaces that status — plus permission prompts and agent questions — directly in the MacBook notch, so you can approve a tool call or read an agent response without touching your terminal. Concrete example: Claude Code is mid-refactor and needs approval to run rm -rf dist/ — CodeIsland pops a permission dialog in the notch without you ever switching windows.
Why it's trending
Agentic coding tools (Claude Code, Codex CLI, Gemini CLI) just crossed mainstream adoption in the last 60 days, and the core UX friction — agents silently blocking on approvals in background terminals — is something every developer using these tools hits daily. This repo directly patches that pain point with a native macOS UI that feels first-party.
How to use it
- Download the latest `.dmg` from the GitHub Releases page, drag CodeIsland.app to /Applications, and launch it — it requests Accessibility permissions on first run.
- CodeIsland auto-detects installed CLI tools (Claude Code, Codex, Gemini CLI, etc.) and runs its auto-hook installer, which injects IPC hooks into each tool's config (e.g., `~/.claude/settings.json`) — check the Settings tab to confirm hooks are active.
- Start a normal agent session in your terminal: `claude` or `codex` — within seconds the notch panel expands showing session name, current tool call, and status badge.
- When the agent hits a permission gate, a prompt appears in the notch — click Allow or Deny without switching apps.
- To build from source instead: `git clone https://github.com/wxtsky/CodeIsland && open CodeIsland.xcodeproj`, then Cmd+R in Xcode (requires macOS 14+, MacBook with notch).
How I could use this
- Build a 'blog writing with AI' screencasting workflow post: use CodeIsland alongside Claude Code to generate MDX blog post drafts, screenshot the notch panel mid-session, and publish a transparent behind-the-scenes article showing exactly which tool calls the agent made to scaffold the post — this kind of process transparency is rare and highly shareable.
- Since Henry likely uses AI tools to generate resume/cover letter content, he could write a custom Claude Code hook script that pipes job description text in and generates tailored output, then use CodeIsland's live session view as a demo GIF in his portfolio to show recruiters he's fluent with agentic dev tooling — a tangible signal that goes beyond 'I use ChatGPT'.
- Extend Henry's blog's AI features by building a local Claude Code agent that watches his `/posts` directory for new MDX drafts and auto-generates SEO metadata (title, description, OG tags) on save — CodeIsland would give him notch-level visibility into when the agent finishes each file, making the background automation feel controllable rather than like a black box (a watcher sketch follows).
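A watcher sketch for that last idea. `claude -p` is Claude Code's headless print mode; the prompt wording is illustrative, fs.watch event granularity varies by platform, and a file-editing run may still need permission settings configured.

```ts
// watch-posts.ts: kick off a headless Claude Code run per changed MDX draft.
import { watch } from "node:fs";
import { execFile } from "node:child_process";
import path from "node:path";

watch("posts", (_event, filename) => {
  if (!filename || !filename.endsWith(".mdx")) return;

  const file = path.join("posts", filename);
  const prompt = `Read ${file} and add SEO frontmatter (title, description, OG tags).`;

  // Each spawned session should surface in the CodeIsland notch panel.
  execFile("claude", ["-p", prompt], (err, stdout) => {
    if (err) return console.error(`agent failed on ${file}:`, err);
    console.log(`SEO pass finished for ${file}\n${stdout}`);
  });
});
```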
10. QLHazyCoder/codex-oauth-automation-extension
866 stars this week · JavaScript
A Chrome extension that automates bulk OpenAI OAuth account registration flows, including CAPTCHA retrieval and email verification — essentially a bot for mass account creation.
Use case
This repo solves the problem of manually grinding through OpenAI's OAuth registration flow dozens of times — something people do to stockpile free-tier API credits or Codex access. Concrete example: a user configures their email provider (Hotmail/QQ/DuckDuckGo), sets a CPA panel URL, and lets the extension run 10 registration cycles overnight while auto-handling verification codes and birthday/age form variants.
Why it's trending
It's spiking this week almost certainly because OpenAI's Codex (the cloud coding agent) launched with free trial credits, making bulk account creation immediately valuable to people trying to maximize free access before limits kick in. The timing with 'codex' in the repo name is not a coincidence.
How to use it
⚠️ IMPORTANT: This tool automates account creation at scale in violation of OpenAI's Terms of Service (Section 2.1). Using it risks IP bans, account termination, and potential legal exposure. Do not use this against any platform without explicit permission. The steps below are for educational/awareness purposes only:
- Clone the repo and load it as an unpacked Chrome extension via chrome://extensions with Developer Mode on.
- Open the sidebar panel and configure your email provider credentials (e.g., Hotmail client ID + refresh token, or DuckDuckGo token).
- Paste your CPA panel OAuth callback URL into the config and save.
- Click 'Single Step' to test one phase of the flow, or 'Auto' to run N full registration cycles unattended.
- Monitor the built-in log panel — failed cycles auto-retry; completed accounts show generated passwords inline.
How I could use this
- Write a technical deep-dive blog post titled 'How Browser Extension OAuth Bots Actually Work' — reverse-engineer the extension's content scripts and sidebar messaging architecture (chrome.runtime, MV3 service workers) to explain the DOM automation techniques. This is high-SEO content that attracts security-minded developers without you endorsing ToS violations.
- Build a legitimate career tool contrast: a Chrome extension that automates YOUR OWN job application OAuth flows — auto-filling LinkedIn Easy Apply, Greenhouse, or Lever forms using your stored resume data from Supabase. Same extension architecture (sidebar + content scripts + form detection), but for a use case you actually own.
- Use this repo's email verification logic as a reference to build a Supabase Edge Function that polls a custom Inbucket or Mailhog instance during integration tests — auto-confirm test user signups in your blog's CI pipeline without manually clicking verification emails, cutting your Playwright/Cypress test setup time.
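For the test-helper idea, here is a sketch that polls MailHog's v2 API until a verification email lands. The link-extraction regex is naive and the message type is simplified from MailHog's actual JSON.

```ts
// Poll MailHog for a signup verification link during integration tests.
type MailhogMessage = { Content: { Headers: Record<string, string[]>; Body: string } };

export async function waitForVerification(to: string, timeoutMs = 15_000): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch("http://localhost:8025/api/v2/messages");
    const { items } = (await res.json()) as { items: MailhogMessage[] };
    const msg = items.find((m) => m.Content.Headers["To"]?.includes(to));

    // Naive extraction: grab the first link that looks like a verify URL.
    const link = msg?.Content.Body.match(/https?:\/\/\S*verify\S*/)?.[0];
    if (link) return link;

    await new Promise((r) => setTimeout(r, 500)); // poll every 500 ms
  }
  throw new Error(`no verification email for ${to}`);
}
```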