Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. MemPalace/mempalace
41,701 stars this week · Python · ai chromadb llm mcp
MemPalace is a local, open-source AI memory system that stores every conversation verbatim in ChromaDB with a hierarchical structure, achieving 96.6% on LongMemEval — the highest benchmark score publicly reported.
Use case
LLM context windows reset between sessions, so your AI assistant forgets that you spent three weeks debugging a race condition in your Supabase RLS policies or that you specifically chose Edge Functions over serverless for latency reasons. MemPalace solves this by storing every exchange locally in ChromaDB organized by project and memory type, then using semantic search to surface the exact past conversation when you ask a related question months later — no summarization loss, no API calls to a third-party memory service.
Why it's trending
It dropped this week claiming the highest LongMemEval score ever benchmarked (96.6%) with a fully local, free stack — directly challenging paid services like Mem0 and MemGPT at a moment when developers are actively looking for privacy-preserving alternatives to cloud-based AI memory. The honest changelog admitting AAAK compression currently regresses performance also built trust fast.
How to use it
- Install and run locally: `pip install mempalace && mempalace init` — this sets up ChromaDB storage and the palace directory structure on your machine.
- Configure a wing for your blog project: edit `config.yaml` to add a wing named `henry-blog` with halls like `architecture-decisions`, `debugging-sessions`, and `ai-features`.
- Connect via MCP (Model Context Protocol) so Claude or any MCP-compatible client auto-logs conversations: set `mcp_enabled: true` in config and point your MCP client at the local MemPalace server endpoint.
- Query past context programmatically:
from mempalace import Palace

palace = Palace()
results = palace.search(
    query="why did I choose Supabase over PlanetScale",
    wing="henry-blog",
    top_k=5,
)
for r in results:
    print(r.timestamp, r.content)
- Pipe those results as system-prompt context into your Next.js AI route handler before calling your LLM, giving it grounded history without blowing your token budget.
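That last step, fitting retrieved memories into a fixed context budget, can be sketched in plain Python. The `Memory` dataclass and `build_context` helper below are illustrative stand-ins, not MemPalace's actual API; they only assume search results expose a timestamp and content, as in the snippet above.

```python
from dataclasses import dataclass


@dataclass
class Memory:
    timestamp: str  # ISO date string, so lexicographic sort = chronological sort
    content: str


def build_context(memories, max_chars=2000):
    """Concatenate retrieved memories newest-first until the budget is hit."""
    parts = []
    used = 0
    for m in sorted(memories, key=lambda m: m.timestamp, reverse=True):
        entry = f"[{m.timestamp}] {m.content}"
        if used + len(entry) > max_chars:
            break  # stop before blowing the token/character budget
        parts.append(entry)
        used += len(entry)
    return "Relevant past conversations:\n" + "\n".join(parts)
```

The resulting string can be prepended to the system prompt of whatever LLM call your route handler makes; a character budget is a crude but model-agnostic proxy for a token budget.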
How I could use this
- Build a 'Blog Decision Log' feature: every time you use Claude to make an architectural decision for your blog (auth strategy, caching layer, component structure), auto-log the full exchange to MemPalace under a `blog-architecture` hall. Then expose a `/decisions` page that queries MemPalace via a Next.js API route and renders a searchable timeline of every technical choice with full context — a living ADR (Architecture Decision Record) generated from real conversations.
- Career tool — 'Interview Prep Memory': log every mock interview session, every system design discussion, and every LeetCode explanation you work through with an AI into a MemPalace wing called `career`. Before a real interview, run a query like `search('distributed systems tradeoffs', wing='career')` and surface your own past reasoning to review — personalized spaced repetition from your actual study sessions rather than generic flashcards.
- Give your blog's AI chat assistant (the one answering reader questions about your posts) persistent memory across sessions: store every reader Q&A in MemPalace under a `reader-interactions` wing, then when a returning user asks a follow-up question, semantically retrieve their prior conversation and inject it as context. This turns a stateless chatbot into one that remembers 'this user was debugging Next.js middleware last month' and responds accordingly — without storing anything in Supabase or paying for a managed memory API.
2. alchaincyf/nuwa-skill
7,375 stars this week · Python
A Claude Code skill that auto-researches any public figure and distills their mental models, decision heuristics, and communication style into a queryable AI persona you can actually argue with.
Use case
The real problem: LLMs give generic advice because they average over everything. Nuwa solves this by running a structured research-then-distill pipeline on a specific person (e.g., Charlie Munger), extracting their actual frameworks (inversion, latticework of models, opportunity cost thinking) rather than surface-level quotes, so when you ask 'should I take this job offer?' you get Munger's actual decision heuristics back — not a Wikipedia summary. Concrete scenario: a solo founder who can't afford advisors uses a distilled Paul Graham persona to pressure-test their startup idea against YC's real evaluation criteria.
Why it's trending
Blew up this week riding the Claude Code skill ecosystem wave — skills.sh just launched as a marketplace for Claude Code extensions, and this repo is one of the first high-profile demos showing the format can do something genuinely useful beyond code generation. The Chinese dev community amplified it fast given the author's reach.
How to use it
1. Install Claude Code and the skills.sh CLI: `npm install -g skills-cli`, then authenticate with your Anthropic API key.
2. Install the nuwa skill: `skills install alchaincyf/nuwa-skill` — this registers the skill in your Claude Code environment.
3. Run the distillation pipeline on a target: inside Claude Code, invoke `nuwa distill 'Paul Graham'` — it auto-searches essays, interviews, and talks, then synthesizes mental models into a structured persona file (e.g., `paul_graham.skill.md`).
4. Query the distilled persona directly: `nuwa ask paul_graham 'My B2B SaaS has 10 customers paying $500/mo — should I raise or stay bootstrapped?'` — the skill constrains responses to PG's actual documented frameworks.
5. Iterate and export: edit the generated `.skill.md` file to add domain-specific context (e.g., 'Henry is a Next.js developer building AI tools'), then version-control it and call it via the Claude API in your own apps using the system prompt pattern the skill generates.
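The system-prompt pattern in step 5 can be sketched as a small helper. This assumes the distilled `.skill.md` body is plain markdown text; the function name and constraint wording below are invented for illustration, not part of nuwa's actual output.

```python
def load_persona_prompt(skill_markdown: str, extra_context: str = "") -> str:
    """Turn a distilled .skill.md body into a system prompt, optionally
    appending domain-specific context (the 'Henry is a Next.js developer'
    edit described in step 5)."""
    prompt = (
        "You must answer strictly within the frameworks below. "
        "If a question falls outside them, say so.\n\n"
        + skill_markdown.strip()
    )
    if extra_context:
        prompt += "\n\nAdditional context about the user:\n" + extra_context.strip()
    return prompt
```

The returned string is what you would pass as the `system` parameter of an LLM API call in your own apps.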
How I could use this
- Blog post advisor panel: Build a pre-publish review step where each draft gets routed through 2-3 distilled personas (e.g., Paul Graham for clarity, David Perell for narrative, your target reader archetype) and their critiques are displayed side-by-side in a Supabase-backed UI — readers could even vote which critique they agreed with most, turning it into engagement data.
- AI career coach with a named POV: Distill a specific hiring manager archetype (e.g., a Stripe-style engineering manager based on their public writing) and wire the resulting skill into your resume/cover letter tool so feedback isn't generic ('add metrics') but persona-specific ('this framing would fail the Stripe bar because they explicitly value systems thinking over output lists — reframe around what broke and what you changed').
- Personalized content strategy engine: Distill your own writing from your blog posts into a `henry.skill.md` file, then run new draft ideas through it to get a consistency score — 'does this post sound like Henry or does it drift into generic dev content?' — and surface this as a pre-publish Supabase Edge Function that flags stylistic drift before you hit publish.
3. garrytan/gbrain
4,718 stars this week · TypeScript
GBrain gives your AI agent a persistent, searchable memory layer backed by embedded Postgres (PGLite) so it accumulates context about your life across every conversation.
Use case
AI agents are stateless by default — every conversation starts cold with zero knowledge of who you are, your past decisions, or your ongoing projects. GBrain solves this by continuously writing structured memories (meetings, emails, ideas, calendar events) into a local PGLite database with vector embeddings, then injecting relevant context into every agent prompt before it responds. Concretely: you ask your agent 'what should I write about this week?' and it already knows your last 10 blog posts, your draft ideas folder, and the tech you've been exploring.
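The write-then-retrieve loop described above can be approximated in a few lines. This toy store uses bag-of-words vectors and cosine similarity in place of GBrain's real embedding model and PGLite storage; it is a sketch of the mechanism, not the implementation.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words vector; GBrain would use a real embedding model."""
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class MemoryStore:
    def __init__(self):
        self.rows = []  # (text, vector) pairs; PGLite rows with pgvector in GBrain

    def write(self, text):
        """Continuously append structured memories as they occur."""
        self.rows.append((text, embed(text)))

    def recall(self, query, k=3):
        """Rank stored memories by similarity and return the top k texts."""
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Before each agent turn, `recall(user_message)` output would be injected into the prompt, which is the "injecting relevant context" step described above.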
Why it's trending
The release of Claude Opus 4.6 and GPT-5.4 Thinking this week made long-running agentic workflows viable for the first time at this quality level, and GBrain is one of the first opinionated setups explicitly designed around those models. Developers who've been waiting for frontier-grade agents are now actually deploying them, and GBrain removes the hardest part — persistent state management.
How to use it
- Deploy an agent runtime first — use the one-click Render deploy for OpenClaw (AlphaClaw) or the Railway template for Hermes Agent; both require 8GB+ RAM.
- Paste the provided GOAL block verbatim into your agent chat window — the agent self-installs GBrain, no manual npm install needed.
- GBrain runs `gbrain init`, which spins up PGLite (embedded Postgres 17.5 via WASM) locally — no Supabase, no Docker, no connection string required.
- The agent walks you through API key questions for your integrations (Gmail, Google Calendar, Twitter, etc.) and imports your existing files to seed the brain.
- Verify with `gbrain status` — the agent confirms schema, embedding counts, and sync jobs are live before handing control back to you.
# After agent self-install, you can also drive it manually:
npx gbrain init # boots PGLite brain locally
npx gbrain import ./notes # bulk-embed a folder of markdown/text
npx gbrain status # shows memory count, last sync, cron jobs
How I could use this
- Wire GBrain into Henry's blog writing workflow: every published post, draft, and reader comment gets written into the brain. Then prompt the agent 'suggest my next post based on gaps in my existing content and what I've been reading this week' — it now has real recall of his entire publishing history instead of you manually pasting context each time.
- Build a career context engine: import Henry's resume, every cover letter draft, job descriptions he's saved, and interview notes into GBrain. Before generating any new cover letter or prep doc, the agent reads his accumulated job-search memory — it knows which companies he's already applied to, which talking points landed, and which skills to emphasize based on pattern-matching his successful responses.
- Use GBrain as the memory backend for a 'second brain' AI sidebar on Henry's blog: readers ask questions, the agent answers using GBrain's embeddings of all his posts and notes, and every Q&A session writes back a new memory entry. Over time the agent gets genuinely better at answering questions about Henry's specific niche rather than giving generic answers — and Henry can audit/export the full memory as a structured knowledge graph of his expertise.
4. alchaincyf/zhangxuefeng-skill
4,221 stars this week · various
张雪峰.skill — Zhang Xuefeng's cognitive operating system: practical thinking frameworks for gaokao (college entrance exam) application choices, postgraduate admissions, and career planning. Generated by 女娲.skill (Nuwa).
Use case
Why it's trending
How to use it
How I could use this
5. farzaa/clicky
3,769 stars this week · Swift
Clicky is an open-source macOS AI teaching assistant that floats next to your cursor, sees your screen, and talks to you in real-time — like a live pair programmer with eyes.
Use case
Developers and learners often get stuck context-switching between tutorials, documentation, and their actual work. Clicky solves this by acting as an always-visible AI overlay that can see exactly what's on your screen and respond verbally — for example, a junior dev debugging a TypeScript error can ask 'what's wrong here?' without copy-pasting anything, and Clicky points at the issue and explains it live.
Why it's trending
The demo tweet went viral this week because it's one of the first open-source implementations of a persistent, screen-aware AI assistant on macOS using ScreenCaptureKit — and the timing with Claude's multimodal capabilities maturing makes it feel like a real product, not a toy. Developers are racing to fork and extend it.
How to use it
1. Clone and scaffold using Claude Code by pasting the provided prompt — it auto-reads CLAUDE.md and walks you through Xcode + Cloudflare Worker setup hands-free.
2. Set up your Cloudflare Worker API proxy to keep keys out of the binary: run `cd worker && npm install`, then `npx wrangler secret put ANTHROPIC_API_KEY` (repeat for the AssemblyAI and ElevenLabs keys).
3. Deploy the worker with `npx wrangler deploy` and copy the generated worker URL into the Xcode project's config so the Swift app knows where to send requests.
4. Open the Xcode project (requires macOS 14.2+ for ScreenCaptureKit), build and run — grant screen recording permissions when prompted.
5. Extend it: the Cloudflare Worker is the easiest hack point — swap Anthropic for any other LLM endpoint, or add a custom system prompt to make it domain-specific (e.g., a Next.js-only tutor).
How I could use this
- Fork Clicky and hardcode a custom system prompt that makes it a 'blog writing coach' — it watches you type in your Next.js blog's admin editor, and when you pause, it proactively suggests SEO improvements, clearer phrasing, or flags thin content, all spoken aloud without breaking your flow.
- Build a 'portfolio review mode': configure the Clicky overlay to activate when your browser is on your own portfolio URL, then have it roleplay as a hiring manager — it sees your live site and gives spoken feedback like 'this project description is too vague, add metrics' as you scroll through with a recruiter on a call.
- Use the open Cloudflare Worker as a blueprint to build a screen-aware Supabase query helper for your blog's admin dashboard — when you're staring at a confusing query result in the Supabase UI, the assistant sees the table output and suggests the corrected SQL or RLS policy fix verbally, saving the copy-paste-to-ChatGPT loop.
6. xixu-me/awesome-persona-distill-skills
3,163 stars this week · JavaScript · agent-skills awesome awesome-list persona-distill
A curated list of Agent Skills for 'persona distillation' — extracting communication style, decision frameworks, and interaction patterns from a person's digital footprint to build AI agents that converse like specific individuals.
Use case
The core problem: you want an AI agent that doesn't just answer questions generically, but responds in a specific person's voice, reasoning style, and with their contextual knowledge. Concrete example — instead of 'ChatGPT answers questions about my blog', you get an agent that answers exactly how Henry would, using his actual writing patterns, tech opinions, and career context extracted from his posts, GitHub activity, and chat history. This is especially useful for async mentorship bots, digital twins, or 'ask me anything' features on a personal site.
Why it's trending
This exploded this week because AgentSkills.io emerged as a distribution platform for shareable persona skill files, giving this pattern a concrete deployment target — it's no longer just a prompt engineering trick but a packageable, shareable artifact. The Chinese dev community is moving fast on this, and Western developers are just discovering the pattern.
How to use it
1. Browse AgentSkills.io or the repo list to find a self-distillation skill (start with '数字人生.skills' or 'Forge Skill') — these are YAML/JSON skill definition files that define persona extraction pipelines.
2. Feed your own data into the extraction prompt — for Henry, this means exporting blog post markdown, GitHub README files, and any public writing into a single corpus: `const corpus = fs.readdirSync('./posts').map(f => fs.readFileSync('./posts/' + f, 'utf8')).join('\n---\n')` (note that `readdirSync` returns bare filenames, so the directory must be re-joined).
3. Run the skill's distillation prompt against your corpus via the OpenAI or Anthropic API to generate a structured persona file (style vectors, decision heuristics, topic affinities, tone markers).
4. Store the resulting persona JSON in Supabase and inject it as a system prompt prefix whenever your blog's chat feature is invoked: `const { data } = await supabase.from('persona').select('system_prompt').single()`
5. Optionally package your distilled persona as a `.skill` file and publish it to AgentSkills.io so others can summon 'Henry-mode' in their own agents.
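The distillation in step 3 can be illustrated with a deliberately crude version: instead of an LLM pass, it derives two cheap style signals from the corpus. The function name and output keys here are hypothetical, chosen only to mirror the 'style vectors' and 'topic affinities' mentioned above.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "you", "i"}


def distill_style(corpus: str, top_n: int = 5) -> dict:
    """Toy persona distillation: average sentence length as a crude style
    marker, plus the most frequent non-stopword terms as topic affinities."""
    sentences = [s for s in re.split(r"[.!?]+", corpus) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", corpus.lower()) if w not in STOPWORDS]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return {
        "avg_sentence_words": round(avg_len, 1),
        "topic_markers": [w for w, _ in Counter(words).most_common(top_n)],
    }
```

A real skill would replace both signals with an LLM-generated persona file, but the shape of the output (a small structured dict you can store in Supabase and inject into prompts) is the same.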
How I could use this
- Build an 'Ask Henry' chat widget on the blog homepage that's backed by a persona distilled from all of Henry's published posts — when visitors ask 'what's your take on Next.js App Router?', the agent responds in Henry's actual voice with his actual opinions, not generic ChatGPT boilerplate. Store the persona embedding in Supabase and refresh it automatically via a GitHub Action every time a new post is published.
- Create a 'Career Henry' persona skill distilled specifically from resume, cover letters, LinkedIn, and project READMEs — then wire it to a /hire page where recruiters can have a structured Q&A with an agent that knows Henry's experience in depth and can answer 'has he worked with Supabase RLS?' accurately instead of hallucinating.
- Implement a blog post drafting co-pilot that uses Henry's distilled persona as a style-consistency checker — every time a new draft is written, the agent compares it against the persona's tone markers and flags sentences that sound 'off-brand', essentially acting as Henry's own editorial voice trained on his historical writing corpus stored in Supabase.
7. LaurieWired/tailslayer
2,023 stars this week · C++
Tailslayer is a C++ library that eliminates DRAM refresh stall spikes by replicating data across independent memory channels and issuing hedged reads, returning whichever replica responds first.
Use case
High-percentile (p99/p999) latency in memory-bound workloads is often dominated by DRAM refresh pauses (~7.8µs every 64ms per row), which are invisible to application code but deadly for latency-sensitive systems. For example, a real-time inference server that keeps embeddings or KV-cache in RAM can see random 7–8µs spikes purely from DRAM housekeeping — Tailslayer eliminates those by hedging reads across channels with uncorrelated refresh schedules, so the stall on one channel doesn't matter if the other channel responds first.
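The hedging mechanism itself is easy to model at a high level. This Python sketch races two identical replicas and returns whichever answers first: the same first-response-wins idea, minus all the DRAM channel mapping that makes Tailslayer actually work. The `delay` parameter is a stand-in for a refresh stall.

```python
import concurrent.futures as cf
import time


def read_replica(data, offset, delay):
    """Stand-in for a channel read; `delay` models a refresh stall."""
    time.sleep(delay)
    return data[offset]


def hedged_read(replicas, offset):
    """Issue the same read against every replica and return the first answer.

    `replicas` is a list of (data, delay) pairs holding identical data."""
    pool = cf.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(read_replica, d, offset, delay) for d, delay in replicas]
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    pool.shutdown(wait=False)  # don't block on the stalled replica
    return next(iter(done)).result()
```

Because both replicas hold the same data, the stalled one's answer is redundant; the caller's latency is bounded by the faster channel, which is exactly why uncorrelated refresh schedules matter.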
Why it's trending
The repo went viral this week because it exposes undocumented DRAM channel scrambling offsets that actually work across AMD, Intel, and AWS Graviton — the kind of low-level hardware trick that systems engineers debate for years but rarely see packaged as a drop-in library. It's also directly relevant to the current wave of on-prem LLM inference optimization where tail latency in token generation is a hot problem.
How to use it
1. Copy the header into your project: `cp -r include/tailslayer /your/project/include/`
2. Include the single header in your C++ file: `#include <tailslayer/hedged_reader.hpp>`
3. Allocate replicated buffers across channels using the HedgedReader API, then write your data once — the library handles replication: `HedgedReader reader; reader.allocate(data, size);`
4. Replace your hot-path `memcpy`/pointer-dereference reads with `reader.read(offset, dest, size)` — internally this fires reads on both DRAM channels simultaneously and returns the first result.
5. Benchmark before/after with `perf stat` or a p99 histogram tool (e.g., HdrHistogram), targeting your worst-case read latency under load to confirm tail reduction.
How I could use this
- Write a deep-dive blog post titled 'Why your RAM lies to you at p99' — benchmark a Node.js/WASM module that calls a Tailslayer-backed C++ addon for serving pre-computed AI embeddings, showing latency percentile charts before and after. This kind of low-level + web-dev crossover post performs extremely well on Hacker News and would establish technical credibility.
- Not directly applicable to a resume matcher (pure JS/TS stack), but Henry could write a companion tool: a latency profiler CLI that wraps any command and reports p50/p99/p999 memory-read latency using perf events, positioned as a developer tool article with a GitHub repo — great portfolio piece that signals systems-level awareness beyond frontend.
- For an AI blog feature: if Henry ever self-hosts a vector search index (e.g., a custom HNSW or flat index in C++ via N-API bound to Next.js), integrating Tailslayer for the embedding lookup buffer would reduce tail latency on semantic search queries. He could document the integration as a 'zero-cost p99 win for self-hosted vector search' post, directly targeting the growing audience building RAG pipelines on bare metal.
8. alchaincyf/hermes-agent-orange-book
1,838 stars this week · various
A free, comprehensive English/Chinese guide (PDF) to Nous Research's Hermes Agent framework — the first open-source agent with a built-in self-improving learning loop and three-layer memory system.
Use case
Most AI agent tutorials cover toy examples with no memory or skill persistence. Hermes Agent solves the real problem of agents that degrade or stall on novel tasks by automatically creating and evolving 'Skills' — reusable learned behaviors. Concrete example: a coding agent that initially struggles with your Supabase RLS policies learns the pattern once, persists it as a Skill, and applies it correctly on every future task without re-prompting.
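The learn-once, reuse-forever loop can be caricatured in a few lines. The class and the substring matching below are invented purely for illustration; Hermes persists Skills as YAML files with far richer structure, as the guide's later chapters cover.

```python
class SkillStore:
    """Toy version of Hermes-style skill persistence: once a task pattern is
    solved, record the approach and reuse it without re-prompting."""

    def __init__(self):
        self.skills = {}  # task pattern -> learned approach

    def learn(self, pattern, approach):
        self.skills[pattern.lower()] = approach

    def solve(self, task):
        for pattern, approach in self.skills.items():
            if pattern in task.lower():
                return f"apply learned skill: {approach}"
        return "no skill yet: explore, then call learn()"
```

The point of the sketch is the control flow: every solved task grows the store, so later tasks matching a known pattern skip the exploration phase entirely.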
Why it's trending
Hermes Agent was released by Nous Research in early 2026 and this guide dropped the English PDF translation this week, making the framework accessible to a broader audience exactly as developers are actively comparing it against Claude Code and OpenClaw for production agent workflows.
How to use it
1. Download the English PDF from the repo and read Parts 1–2 (Chapters 1–6) to understand the learning loop and three-layer memory before touching any code.
2. Install Hermes Agent from Nous Research: `git clone https://github.com/NousResearch/hermes-agent && cd hermes-agent && pip install -e .` — requires Python 3.11+ and an API key for your chosen LLM backend.
3. Run your first session: `hermes chat --profile default` — give it a multi-step task (e.g., 'scaffold a Next.js API route that queries Supabase and returns paginated results') and observe the Skill being created in `.hermes/skills/`.
4. Inspect the auto-generated Skill YAML in `.hermes/skills/` to understand what the agent learned — edit constraints or feedback fields to steer future behavior, as covered in Chapters 8–11.
5. For multi-agent setups (Chapter 15), define an orchestrator config in `hermes.config.json` with separate agent roles (researcher, coder, reviewer), each with isolated memory layers, then invoke with `hermes run --config hermes.config.json`.
How I could use this
- Wire a Hermes Agent as the backend for a 'blog co-pilot' feature: on each new post draft in Henry's Next.js blog, the agent reads the Supabase posts table, learns Henry's writing style and recurring topic clusters as Skills, and proactively suggests an internal linking map — getting smarter with every post published without manual re-prompting.
- Build a self-improving resume tailoring tool: feed Hermes a job description + Henry's base resume, let it generate a tailored version, then give thumbs-up/down feedback on each output. The agent encodes what worked as a Skill (e.g., 'lead with TypeScript for fintech JDs') so by the 10th application the tailoring requires zero manual editing.
- Use Hermes's three-layer memory system to power a persistent 'learning journal' feature on the blog: short-term memory captures the current reading session, episodic memory logs which posts a returning visitor has engaged with, and semantic memory builds a concept graph stored in Supabase pgvector — then surface a 'you might have missed' sidebar that genuinely improves with site usage rather than relying on static tag matching.
9. KKKKhazix/khazix-skills
1,572 stars this week · various
A curated collection of portable, composable AI Skills (structured instruction sets) that extend AI agent capabilities in tools like Claude Code and Codex, starting with a long-form writing skill.
Use case
The core problem: prompt engineering is fragile, non-portable, and lives in your head or scattered in notes. This repo packages battle-tested domain expertise into versioned, installable .skill files that any compatible agent can load on demand. Concrete example: instead of re-pasting a 500-word writing style prompt every time you want Claude to draft a blog post in your voice, you install kaizike-writer once and invoke it with /kaizike-writer — the agent automatically applies the style rules, self-check layers, and methodology without any manual prompting.
Why it's trending
The Agent Skills open standard (agentskills.io) is gaining traction as Claude Code, Codex, and similar agentic tools mature — developers are realizing that shareable, installable skill packages are to AI agents what npm packages are to Node.js. This repo is one of the first high-quality public examples of real-world skills being open-sourced with production methodology, making it a reference implementation people are studying closely.
How to use it
- Install into Claude Code by dropping the file: `mkdir -p ~/.claude/skills && curl -L https://github.com/KKKKhazix/khazix-skills/releases/latest/download/kaizike-writer.skill -o ~/.claude/skills/kaizike-writer.skill`
- Or, if your agent supports URL-based install, just say "install this skill: https://github.com/KKKKhazix/khazix-skills" in Claude Code chat.
- Invoke manually in any session: `/kaizike-writer` followed by your topic or draft.
- To build your own skill, clone the repo and study the folder structure inside `kaizike-writer/` — it contains an instruction file, optional scripts, and reference examples. Mirror that structure in a new folder named `henry-blog-writer/`.
- Test composability: call two skills in sequence, e.g., `/henry-blog-writer` to draft, then a hypothetical `/seo-optimizer` to refine — skills are designed to chain without conflict.
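The chaining behavior in that last step is easy to model: treat each skill as a function from text to text, and composition is just sequential application. The two stand-in skills below are hypothetical, not real entries from this repo.

```python
def chain(skills, text):
    """Run skills left to right, each receiving the previous skill's output."""
    for skill in skills:
        text = skill(text)
    return text


# Two stand-in skills for illustration: a drafter and an SEO refinement pass.
def draft_skill(topic):
    return f"Draft: {topic}."


def seo_skill(body):
    return body + " [keywords added]"
```

In a real agent, each "function" is a skill's instruction file applied by the model, but the conflict-free chaining property comes from the same shape: every skill consumes and produces plain text.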
How I could use this
- Create a `henry-blog-writer` skill that encodes your specific writing voice, post structure (e.g., problem → code snippet → takeaway), and a self-check checklist (Is there a concrete code example? Does the intro avoid fluff? Is the CTA specific?) — then install it in Claude Code so every draft you generate in your editor automatically matches your established style without re-prompting.
- Build a `job-application` skill that packages your resume context, a 4-layer self-check system (relevance to JD, tone, keyword density, length), and cover letter templates — so when you paste a job description and call `/job-application`, the agent produces a tailored cover letter and resume bullet suggestions that already know your background, instead of starting from scratch each time.
- Build a `blog-post-to-tweet-thread` skill that takes a finished Markdown post from your Next.js blog (via file path or stdin), applies a structured decomposition algorithm (hook tweet, 3–5 insight tweets with code blocks, CTA tweet), and outputs a ready-to-post thread — wire this into your Supabase-backed publish workflow so it auto-generates a thread draft every time a post's `published` flag flips to true.
10. hotcoffeeshake/tong-jincheng-skill
1,558 stars this week · various
A Claude Code skill that distills a Chinese relationship guru's mental models into an AI persona you can invoke via CLI to analyze interpersonal dynamics using his specific cognitive frameworks.
Use case
This solves the problem of wanting domain-expert reasoning from a specific thinker's worldview — not generic advice — baked into your dev workflow. For example, instead of asking ChatGPT 'how do I handle a flaky person', you invoke a persona trained on 200k words of primary source material that applies specific named frameworks like 'uncertainty = disinterest' or 'humans can't withstand tests'. It's a template for how to package any thinker's epistemology as a reusable AI skill.
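The 'frameworks, not quotes' idea can be sketched as a trigger table: the skill first decides which named mental models a question activates, then reasons from those rather than from surface text. The cue lists below are invented examples, not the repo's actual extraction.

```python
# Hypothetical mapping from named frameworks to trigger phrases.
FRAMEWORKS = {
    "uncertainty = disinterest": ["maybe", "not sure", "we'll see", "busy"],
    "humans can't withstand tests": ["test", "prove", "check if they"],
}


def match_frameworks(question: str):
    """Return the named frameworks whose cues appear in the question, so a
    downstream prompt can constrain the model to those frameworks only."""
    q = question.lower()
    return [name for name, cues in FRAMEWORKS.items()
            if any(cue in q for cue in cues)]
```

Swap in frameworks from any thinker's corpus (Paul Graham's essays, your own posts) and the same selection step makes the persona's reasoning auditable: you can see exactly which framework produced an answer.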
Why it's trending
This is trending because it's one of the first viral examples of the skills.sh / Claude Code skill ecosystem being used to package a cultural figure's reasoning style as a CLI-invokable persona — it's demonstrating a new primitive (distilled-persona-as-skill) that developers are immediately recognizing as reusable for their own domains.
How to use it
1. Install the skill: `npx skills add hotcoffeeshake/tong-jincheng-skill` — this registers the persona into your Claude Code skill registry.
2. Open Claude Code in your terminal and trigger the persona with a keyword: type 童锦程 (Tong Jincheng) or 深情祖师爷 (his nickname) to activate the framework mode.
3. Ask a relationship or interpersonal question directly — the skill applies the extracted mental models (not just quotes) to reason through your specific scenario.
4. Study the repo structure: the real value is seeing how 200k words of transcripts were chunked, summarized into named mental models (e.g., 'attraction > appeasement'), and formatted as a skills.sh-compatible persona file — replicate this structure for your own domain expert.
5. Fork and adapt: replace the source material with transcripts/essays from a thinker relevant to your domain (e.g., Paul Graham essays → startup advice persona, or your own blog posts → 'ask Henry' skill).
How I could use this
- Build an 'Ask Henry' Claude Code skill trained on all of Henry's own blog posts — visitors on the blog could chat with a persona that reasons in Henry's actual voice and references his specific past writing, not generic AI responses. Implement it as a Next.js API route hitting Claude with a system prompt generated from scraped blog content stored in Supabase.
- Create a 'Career Advisor' skill distilled from specific sources Henry trusts (e.g. Levels.fyi salary threads + specific engineering career essays) — invoke it in CLI during job search to get advice that applies those exact frameworks to his resume or offer negotiation, rather than generic GPT output.
- Use this repo's skill packaging pattern to build a 'Code Reviewer' persona trained on a specific senior engineer's public writing (e.g. Dan Abramov's blog, Kent C. Dodds' articles) — wire it into Henry's blog's GitHub Actions so every PR gets reviewed through that thinker's named principles (e.g. 'colocation', 'avoid hasty abstractions') with citations.