Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.
1. V4bel/dirtyfrag
2,646 stars this week · C
A deterministic, race-condition-free Linux kernel LPE chaining two page-cache write primitives — trending because the embargo broke before any patch landed for one of the two CVEs.
Use case
Security researchers and sysadmins need to assess exposure on unpatched kernels. Dirty Frag targets the xfrm (IPsec/ESP) and RxRPC subsystems to corrupt read-only page cache entries, then escalates via the same class of primitive as Dirty Pipe — but without needing a race window, making it far more reliable in practice.
Why it's trending
The embargo broke publicly on 2026-05-08 with CVE-2026-43500 still unpatched in all trees — every Linux shop running a non-mainline kernel is currently exposed with no official fix to pull.
How to use it
SKIPPED — CVE-2026-43500 is unpatched. Step-by-step exploit instructions for an active unpatched LPE are out of scope regardless of public disclosure status. Check the upstream mainline commit f4c50a4034e6 for the xfrm fix and monitor linux-distros for the RxRPC patch.
How I could use this
- Write a technical deep-dive post comparing the Dirty Pipe / Copy Fail / Dirty Frag bug-class progression — page-cache write primitives as a category. This is high-SEO content that lands well in the Australian IT security job market Henry is targeting.
- Add a 'kernel CVE tracker' section to the visa/career tools: international grads doing cloud/infra roles in AU often need to demonstrate security awareness — a curated feed of high-impact kernel CVEs with plain-English severity summaries would differentiate the platform.
- Use Claude Haiku to auto-classify incoming Linux security advisories by affected subsystem and generate a one-paragraph 'what this means for your stack' summary — feed it into the existing ai-news pipeline as a dedicated security-news content type.
2. antirez/ds4
2,198 stars this week · C
A tight, Metal-native C inference engine written by antirez (Redis creator) that runs DeepSeek V4 Flash — a 284B MoE model — locally on Apple Silicon Macs with 128GB RAM via 2-bit quantization.
Use case
The real problem is that frontier-class models cost serious money at API scale and send your users' data to third-party servers. ds4 lets you serve a near-frontier 284B model from a local Mac Mini or MacBook Pro with 128GB RAM — no API keys, no per-token billing, no data leaving the machine. Concrete scenario: you run resume analysis or interview coaching that involves sensitive visa and salary data, and you don't want that payload hitting Anthropic or OpenAI's servers.
Why it's trending
antirez (Salvatore Sanfilippo, creator of Redis) wrote it from scratch in a weekend, which means the code is unusually readable for a native inference engine and the HN thread is massive. DeepSeek V4 Flash also just dropped with a 1M token context and on-disk KV cache persistence — features no other local runtime fully exploits yet.
How to use it
1. Clone and build: `git clone https://github.com/antirez/ds4 && cd ds4 && make` — requires Xcode CLI tools for Metal compilation on macOS.
2. Download the 2-bit quantized weights (~70GB): follow the README's Hugging Face link for the ds4-specific quant — standard GGUF Q2 will not work, ds4 uses a custom quantization layout.
3. Start the inference server: `./ds4 --model ./weights/ds4-flash-2bit.bin --port 8080` — it exposes an OpenAI-compatible `/v1/chat/completions` endpoint.
4. Hit it like any OpenAI-compatible API from your Next.js route handler: `const res = await fetch('http://localhost:8080/v1/chat/completions', { method: 'POST', body: JSON.stringify({ model: 'ds4', messages, stream: true }) })` (a fuller route-handler sketch follows this list).
5. Toggle thinking mode per-request by passing a system prompt prefix — the model's thinking budget scales with problem complexity, so simple queries stay fast.
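To make step 4 concrete, here is a minimal sketch of a Next.js App Router handler that streams a chat completion from the local ds4 server. The route path and the `LOCAL_LLM_URL` variable are assumptions for illustration, not part of the ds4 README.

```typescript
// app/api/local-chat/route.ts (hypothetical route): assumes ds4 is serving
// an OpenAI-compatible endpoint on localhost:8080 as described above.
import { NextRequest } from 'next/server';

const DS4_URL = process.env.LOCAL_LLM_URL ?? 'http://localhost:8080';

export async function POST(req: NextRequest) {
  const { messages } = await req.json();

  // Forward the conversation to the local ds4 server unchanged.
  const upstream = await fetch(`${DS4_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'ds4', messages, stream: true }),
  });

  if (!upstream.ok || !upstream.body) {
    return new Response('Local model unavailable', { status: 502 });
  }

  // Pass the event stream straight through to the browser.
  return new Response(upstream.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```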
How I could use this
- Privacy-first resume analyser: visa applicants (485/482) are understandably nervous about uploading their resume and visa status to a US cloud API. Run ds4 locally on your Mac dev box, proxy requests through a self-hosted endpoint, and market the 'your data never leaves Australia' angle as a trust differentiator on the Gradland landing page.
- Long-context career document processing: ds4's 1M token window means you can feed an entire job posting corpus — say, 500 Jora listings scraped by your existing job scraper — plus the user's resume in a single prompt and ask 'which 10 roles am I most qualified for and why.' That kind of cross-document reasoning is impractical with Claude's 200k limit without chunking hacks.
- Zero-cost local dev environment for AI features: swap `ANTHROPIC_API_KEY` for a `LOCAL_LLM_URL=http://localhost:8080` env var in a `.env.local` override, and write your route handlers to check which base URL is configured. You get full 284B-quality responses during development without burning Claude credits — especially useful for iterating on prompt engineering for the interview coach or mock quiz generator.
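A minimal sketch of that backend check, assuming the `LOCAL_LLM_URL` convention above; the helper name and file location are illustrative.

```typescript
// lib/llm-backend.ts (hypothetical): prefer the local ds4 server when
// LOCAL_LLM_URL is set in .env.local, otherwise fall back to Anthropic.
export function resolveLlmBackend() {
  const local = process.env.LOCAL_LLM_URL;
  if (local) {
    return { kind: 'local' as const, baseUrl: local };
  }
  return { kind: 'anthropic' as const, apiKey: process.env.ANTHROPIC_API_KEY };
}
```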
3. aattaran/deepclaude
1,635 stars this week · JavaScript
A shell wrapper that hijacks Claude Code's API calls and reroutes them to DeepSeek V4 Pro or OpenRouter, keeping the full Claude Code agent loop at 17x lower cost.
Use case
Claude Code's agentic loop — multi-step file editing, bash execution, git, subagent spawning — is best-in-class, but at $15/M output tokens it gets expensive fast for long autonomous sessions. deepclaude sets ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY to point at DeepSeek's API-compatible endpoint instead, so the CLI binary never knows it's talking to a different model. Concrete scenario: a 2-hour autonomous coding session that would cost ~$8 on Anthropic costs ~$0.47 on DeepSeek V4 Pro.
Why it's trending
DeepSeek V4 Pro dropped this week with a 96.4% LiveCodeBench score at $0.87/M output — the cost/performance ratio crossed a threshold where swapping Claude's brain becomes rational for non-critical tasks. The 1,600+ stars in a week reflect developers stress-testing whether the tool loop holds up without Claude's weights behind it.
How to use it
- Get a DeepSeek API key at platform.deepseek.com, add $5 credit.
- Set env vars: `export DEEPSEEK_API_KEY="sk-..."` in `~/.bashrc`.
- Install: `chmod +x deepclaude.sh && sudo ln -s $(pwd)/deepclaude.sh /usr/local/bin/deepclaude`.
- Run `deepclaude` instead of `claude` — identical UX, different model.
- Use `deepclaude --backend anthropic` to switch back to Opus when you need it for complex reasoning.
How I could use this
- Wire deepclaude into the GitHub Actions developer workflow as a cost-efficient fallback tier: when CLAUDE_CODE_OAUTH_TOKEN quota is near-exhausted but the task doesn't require Opus-level reasoning (e.g. writing a new markdown post, updating TODO.md, fixing a lint error), route to DeepSeek before falling back to GitHub Copilot — adds a cheap middle tier between Pro quota and the Copilot handoff.
- Use deepclaude locally for the daily content generation scripts (fetch-ai-news, fetch-visa-news, githot digest) that run on a cron — these are structured extraction tasks where DeepSeek V4 Pro's coding/reasoning is more than sufficient, and running them through deepclaude rather than ANTHROPIC_API_KEY cuts the per-run API cost by ~15x, which matters when you're calling Claude on every article in a batch.
- Build a cost-aware model router into lib/subscription.ts: for users on the free tier hitting the resume analyser or cover letter tool, proxy their requests through a DeepSeek-backed endpoint (using deepclaude's ANTHROPIC_BASE_URL trick) to reduce your per-request margin cost, and reserve claude-sonnet-4-6 for paid subscribers — same UX surface, different backend based on subscription tier.
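A minimal sketch of that tier-based routing, assuming DeepSeek exposes the Anthropic-compatible endpoint that deepclaude relies on; the base URL env var, model id, and tier check are illustrative, so verify the compatibility surface before putting paid traffic through it.

```typescript
// lib/llm-router.ts (hypothetical): pick the model backend by subscription tier.
import Anthropic from '@anthropic-ai/sdk';

export function clientForTier(tier: 'free' | 'paid') {
  if (tier === 'free') {
    // Same trick deepclaude uses: point the Anthropic SDK at an
    // Anthropic-compatible DeepSeek endpoint. URL and model id are assumptions.
    return {
      client: new Anthropic({
        baseURL: process.env.DEEPSEEK_ANTHROPIC_BASE_URL,
        apiKey: process.env.DEEPSEEK_API_KEY,
      }),
      model: 'deepseek-v4-pro',
    };
  }
  return {
    client: new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }),
    model: 'claude-sonnet-4-6',
  };
}
```

A route handler would then call `clientForTier(user.tier)` and use the returned client and model in the same `messages.create()` call either way.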
4. strukto-ai/mirage
1,376 stars this week · TypeScript · agent-sandbox agent-tools ai-agents bash
Mirage gives AI agents a single Unix-style filesystem tree that mounts S3, Google Drive, Slack, Gmail, and Redis as directories — so agents use the same read/write/list tools regardless of backend.
Use case
The real problem: every AI agent you build needs custom glue code to talk to each data source — a Slack tool, an S3 tool, a Drive tool — and none of them compose. Mirage collapses that into one virtual FS so an agent can do ls /slack/channels/general and cp /gmail/inbox/msg1.txt /s3/reports/ without knowing the underlying API. Concrete example: a resume-screening agent reads job descriptions from /drive/jd/, candidate CVs from /s3/uploads/, writes summaries back to /slack/channels/hiring — all with the same three tool calls.
Why it's trending
Agent tooling is the hottest problem in LLM engineering right now — OpenAI Agents SDK, Claude tool use, and LangGraph all landed in the past 6 months and every team is drowning in bespoke connector code. Mirage's filesystem abstraction is a clean answer to a pain point that's now widely felt.
How to use it
- Install: `npm install @struktoai/mirage-node` (TS) or `pip install mirage-ai` (Python).
- Mount sources in code:

```typescript
import { createMirage } from '@struktoai/mirage-node';

const fs = await createMirage({
  mounts: {
    '/drive': { type: 'google-drive', credentials: process.env.GOOGLE_CREDS },
    '/s3': { type: 's3', bucket: 'my-bucket', region: 'ap-southeast-2' },
  }
});
```

- Expose the built-in tools (`mirage_read`, `mirage_write`, `mirage_list`) directly to your Claude/OpenAI agent as tool definitions — the SDK generates the JSON schema for you.
- Let the agent navigate: `await fs.list('/drive/resumes')` returns a uniform array regardless of backend.
- For Claude specifically, pass the tool definitions into the `tools` array in your `messages.create()` call and handle `tool_use` blocks by delegating to `fs.dispatch(toolCall)`.
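The last bullet is the part most people trip on, so here is a minimal sketch of that Claude tool loop. It assumes `fs.dispatch(toolCall)` returns a JSON-serialisable result as described above; the `toolDefinitions()` call is a placeholder for however Mirage actually exports its tool schemas, so check the README for the real method name.

```typescript
import Anthropic from '@anthropic-ai/sdk';
import { createMirage } from '@struktoai/mirage-node';

const anthropic = new Anthropic();
const fs = await createMirage({ mounts: { /* as in the snippet above */ } });

// Placeholder: however Mirage exposes its mirage_read / mirage_write / mirage_list schemas.
const tools = fs.toolDefinitions();

const messages: Anthropic.Messages.MessageParam[] = [
  { role: 'user', content: 'Summarise every resume under /drive/resumes' },
];

// Simple agent loop: keep going while Claude asks to use a Mirage tool.
while (true) {
  const response = await anthropic.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    tools,
    messages,
  });

  const toolUses = response.content.filter(
    (block): block is Anthropic.Messages.ToolUseBlock => block.type === 'tool_use',
  );
  if (toolUses.length === 0) break; // Claude answered in plain text; done.

  messages.push({ role: 'assistant', content: response.content });
  messages.push({
    role: 'user',
    content: await Promise.all(
      toolUses.map(async (call) => ({
        type: 'tool_result' as const,
        tool_use_id: call.id,
        content: JSON.stringify(await fs.dispatch(call)), // delegate to Mirage
      })),
    ),
  });
}
```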
How I could use this
- Build a 'content brain' agent for the Gradland blog: mount `/s3/drafts`, `/drive/research`, and `/slack/editorial` so a single Claude agent can pull a draft, cross-reference research notes, and post a summary to Slack — replacing the current ad-hoc `scripts/fetch-ai-news.ts` shell-out with a composable pipeline that works across all content types.
- Wire Mirage into the resume analyser: mount the user's Google Drive as `/drive` so they can point the tool at an existing CV file path instead of pasting text — the agent reads `/drive/my-cv.pdf`, runs the analysis, and writes the annotated result back to `/drive/gradland-feedback.md`, making the tool feel native to how candidates already store their documents.
- Use Mirage as the persistence layer for a stateful interview prep agent: mount Redis at `/cache/sessions/<userId>` so the agent reads prior question history, difficulty progression, and weak topics from a single `fs.read()` call instead of building a custom session-state API — and the same write path works whether you later swap Redis for Supabase or S3 without touching agent logic.
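A minimal sketch of that session-state pattern from the last bullet, assuming a Redis mount type exists and that `fs.read`/`fs.write` take a path and return or accept strings; all of those details should be checked against the Mirage docs.

```typescript
// Hypothetical session persistence behind a Mirage Redis mount.
import { createMirage } from '@struktoai/mirage-node';

const fs = await createMirage({
  mounts: {
    // Assumed mount type; the real option name may differ.
    '/cache': { type: 'redis', url: process.env.REDIS_URL },
  },
});

export async function loadSession(userId: string) {
  try {
    const raw = await fs.read(`/cache/sessions/${userId}`);
    return JSON.parse(raw); // { askedQuestions, difficulty, weakTopics }
  } catch {
    return { askedQuestions: [], difficulty: 'easy', weakTopics: [] };
  }
}

export async function saveSession(userId: string, session: unknown) {
  // The same write path keeps working if the mount later becomes S3 or Supabase.
  await fs.write(`/cache/sessions/${userId}`, JSON.stringify(session));
}
```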
5. yaojingang/yao-open-prompts
1,362 stars this week · Python · ai chinese-prompts geo prompt-engineering
A curated, categorised library of 116 battle-tested Chinese AI prompts (with English mirrors) spanning work, marketing, learning, and GEO — ready to drop into any Claude/GPT pipeline.
Use case
Writing effective prompts from scratch is slow and inconsistent — this repo gives you a structured, reusable starting point for common professional scenarios. For example, instead of hand-crafting a prompt to generate a product requirements doc or a WeChat HTML article, you pull the matching template from the repo, swap in your context, and get a production-quality output in one shot. The GEO section is particularly practical: it covers prompts for Schema.org structured data, AI-search visibility audits, and content trust-signal engineering — things most developers are still figuring out on their own.
Why it's trending
GEO (Generative Engine Optimisation) is the new SEO — as ChatGPT, Perplexity, and Claude increasingly answer queries directly, developers are scrambling for structured prompt templates that help content appear in AI-generated answers rather than just Google SERPs. This repo dropped 25 GEO marketing templates and a meta-prompt system (v0.6) at exactly the moment the concept went mainstream.
How to use it
1. Clone the repo and browse CATALOG.md to find prompts matching your use case — categories map cleanly to scenarios (01-ai-methods for meta-prompting, 08-ai-marketing for GEO).
2. For English usage, mirror paths under prompts-en/ — every Chinese prompt has a direct English equivalent at the same relative path.
3. Copy the raw markdown prompt body into your Claude/GPT system prompt or a reusable snippet in your codebase.
4. Use the RTF meta-prompt (prompts/01-ai-methods/rtf-meta-prompt-system-v06.md) as a generator — feed it a rough idea and it outputs a structured prompt you can save back into your own library.
5. For GEO use cases, pull templates from prompts/08-ai-marketing/ and adapt the Schema.org and content-trust-signal prompts to your domain, then wire them into a Next.js API route that runs them against your post content at build time (a build-time sketch follows this list).
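To make step 5 concrete, here is a minimal build-time sketch. The vendored prompt path, model choice, and report location are placeholders, and it assumes the GEO template is plain markdown you can use as a system prompt.

```typescript
// scripts/geo-audit.ts (hypothetical): run a GEO audit prompt from the repo
// against every markdown post at build time and write a JSON report.
import { readdir, readFile, writeFile } from 'node:fs/promises';
import path from 'node:path';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();
// Placeholder path: point this at whichever prompts-en/08-ai-marketing template you pick.
const PROMPT_PATH = 'vendor/yao-open-prompts/prompts-en/08-ai-marketing/geo-audit.md';

async function auditPosts(postsDir = 'content/posts') {
  const template = await readFile(PROMPT_PATH, 'utf8');
  const reports: Record<string, string> = {};

  for (const file of await readdir(postsDir)) {
    if (!file.endsWith('.md')) continue;
    const post = await readFile(path.join(postsDir, file), 'utf8');

    const res = await anthropic.messages.create({
      model: 'claude-haiku-4-5-20251001',
      max_tokens: 1024,
      system: template, // the GEO audit prompt drives the whole review
      messages: [{ role: 'user', content: post }],
    });
    reports[file] = res.content[0].type === 'text' ? res.content[0].text : '';
  }

  await writeFile('geo-report.json', JSON.stringify(reports, null, 2));
}

auditPosts();
```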
How I could use this
- Wire the GEO audit prompt (prompts-en/08-ai-marketing/) into a build-time script that runs against every new Gradland blog post — it checks whether the content has enough trust signals, structured data hooks, and entity clarity to surface in Perplexity or ChatGPT answers about Australian visa + tech career topics, then writes a JSON report to the post frontmatter for editors to act on.
- Use the RTF meta-prompt system (v0.6) as the backbone for a 'Prompt Workshop' tool in Gradland's career tools section — international students paste a rough job description or cover letter goal, the meta-prompt generates a tailored prompt, and Claude executes it to produce a polished cover letter or LinkedIn summary; saves the generated prompt back to their profile for reuse.
- Adapt the Feynman-question learning prompts (prompts-en/03-ai-learning/) into Gradland's interview prep module — when a user marks a technical concept as 'shaky', the system pulls the Feynman prompt, asks Claude to generate a Socratic question sequence on that topic, and turns it into a 5-question interactive drill stored in Supabase for spaced repetition.
6. XBuilderLAB/cheat-on-content
1,200 stars this week · Shell
A Shell-based content journaling workflow that forces you to score, blind-predict, and retrospect every post so your intuition compounds over time instead of resetting each time.
Use case
Most creators publish, check numbers, feel vaguely bad or good, and repeat — gaining almost nothing because there's no structured feedback loop. This repo gives you a local log file where you score a piece before publishing (hook strength, clarity, novelty), write a blind prediction (views/engagement range), then return 3 days later to compare prediction vs. reality and update your personal rubric. Concretely: you write a post about Australian visa pathways, predict '800–1200 views, high save rate because visa anxiety is high', publish, then retrospect to see if your model was right — and why.
Why it's trending
The README is itself an example of the system — engineered to be meta-viral by making the reader feel predicted, which is a content experiment in action. It's riding the 'systems over motivation' wave that's been dominating creator Twitter and Substack circles in Q1–Q2 2026.
How to use it
1. Clone and run `bash init.sh` — it creates a local `content-log/` directory with a template YAML file per post.
2. Before publishing any piece, fill in `score.yml`: rate hook (1–10), estimated novelty, target emotion, and write your blind prediction (expected reach range + why).
3. Publish as normal. The script drops a `pending_retro` flag with a T+3d timestamp.
4. Run `bash retro.sh` after 3 days — it opens the log, prompts you to fill in actuals, then diffs your prediction vs. reality and appends a `learnings` field.
5. Run `bash evolve.sh` monthly — it reads all retros and prints a frequency analysis of which score dimensions correlate with your actual top performers, so you can refine your rubric.
How I could use this
- Build a `/writing/retrospectives` page on Gradland that shows Henry's public prediction logs for his top posts — 'I predicted 600 reads, got 2,400, here's why' — this is genuinely differentiated content that demonstrates intellectual honesty and compounds trust with the international-student audience far better than generic takes.
- Wire the scoring rubric into the githot digest pipeline: before the AI writes a githot post, have Claude score the trending repo against Henry's historical rubric (hook strength, audience relevance to AU IT grads, novelty) and only auto-publish if it clears a threshold — reducing low-quality auto-posts that dilute SEO.
- Build a Claude-powered 'content calibration' micro-tool for Gradland: users paste a LinkedIn post or resume summary they're about to publish, Claude scores it against the same dimensions (hook, clarity, emotion, novelty for hiring managers) and returns a blind prediction of likely response — turns the content rubric concept into a career tool specifically for job seekers crafting their personal brand.
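Both the auto-publish gate and the calibration micro-tool reduce to the same call: ask Claude for structured scores, then compare them to a threshold. A minimal sketch, assuming a simple JSON rubric; the dimensions and the cut-off value are illustrative, not Henry's actual rubric.

```typescript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

interface ContentScores {
  hook: number;        // 1-10
  clarity: number;     // 1-10
  novelty: number;     // 1-10
  audienceFit: number; // relevance to AU IT grads, 1-10
}

export async function scoreDraft(draft: string): Promise<ContentScores> {
  const res = await anthropic.messages.create({
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 300,
    system:
      'Score the draft on hook, clarity, novelty and audienceFit (1-10 each). ' +
      'Respond with JSON only, e.g. {"hook":7,"clarity":8,"novelty":5,"audienceFit":6}.',
    messages: [{ role: 'user', content: draft }],
  });
  const text = res.content[0].type === 'text' ? res.content[0].text : '{}';
  return JSON.parse(text) as ContentScores;
}

// Gate auto-publishing on an average score; the cut-off is a guess to tune over time.
export async function shouldAutoPublish(draft: string): Promise<boolean> {
  const s = await scoreDraft(draft);
  return (s.hook + s.clarity + s.novelty + s.audienceFit) / 4 >= 7;
}
```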
7. crafter-station/petdex
1,105 stars this week · TypeScript
A community-run gallery for animated companion pets used with OpenAI's Codex CLI, solving the discovery and distribution problem for character packs.
Use case
OpenAI's Codex CLI ships with an animated pet companion feature, but there's no official store for community-made character packs. Petdex fills that gap: a developer wants a custom mascot beyond the default, visits Petdex, previews all animation states (idle, run, sleep, etc.), downloads the ZIP, and drops it into their Codex config directory. It also provides browser-side validation so contributors can check their pack format before submitting — no CLI tooling required.
Why it's trending
OpenAI relaunched Codex as a cloud-based coding agent in early May 2026, driving a surge of developers trying the CLI for the first time and discovering the pet system. Community content creation spiked immediately after launch, and Petdex became the de facto index for that content within days.
How to use it
1. Clone and run locally: `git clone https://github.com/crafter-station/petdex && cd petdex && bun install && bun dev`.
2. Browse the gallery at localhost — each pet card shows all animation states (idle, walk, sleep, interact) via sprite preview.
3. To add your own pet: drop a correctly structured ZIP into `public/pets/<your-pet-name>/` — the folder must include a `manifest.json` and sprite sheets for each animation state.
4. Use the browser validator at `/submit` to check your pack structure before opening a PR — it catches missing states and malformed manifests client-side.
5. For production distribution: `bun run build` regenerates the downloadable archives under `public/packs/`, including a full gallery ZIP.
How I could use this
- Create a custom Gradland mascot pet pack in the Eastern Ink × Comic Panel visual style — a small ink-brush fox or scholar character — and submit it to Petdex. Write a companion post on how you designed sprite sheets and animated states in Aseprite or Figma, targeting the 'developer creative side project' SEO cluster that converts well to blog subscribers.
- Add a lightweight animated mascot to Gradland's interview prep tool that reacts to user performance — celebrates a strong STAR answer, droops on a weak one, does an idle animation during thinking time. Use Petdex's sprite sheet format as a reference for how to structure frame-based animations in a Next.js canvas or CSS sprite component without a heavy game engine (a minimal sprite component sketch follows this list).
- Build a small AI feature: a 'visa journey companion' pet on the Visa Tracker page that changes animation state based on visa stage (lodged → processing → approved). The pet's current state is stored in Supabase alongside the visa record, and the Claude API generates a short in-character quip when the user logs a stage update — low token cost with Haiku, high perceived delight.
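Here is a minimal sketch of the frame-based sprite approach from the second idea, using a plain CSS steps() animation rather than Petdex's actual manifest format; the sheet layout, frame counts, and sizes are placeholders.

```tsx
// components/MascotSprite.tsx (hypothetical): play one row of a sprite sheet
// by stepping background-position; no game engine required.
'use client';

type MascotState = 'idle' | 'celebrate' | 'droop';

// Row index and frame count per animation state in the (placeholder) sheet.
const STATES: Record<MascotState, { row: number; frames: number }> = {
  idle: { row: 0, frames: 4 },
  celebrate: { row: 1, frames: 6 },
  droop: { row: 2, frames: 3 },
};

const FRAME = 64; // sprite frame size in px, placeholder

export function MascotSprite({ state }: { state: MascotState }) {
  const { row, frames } = STATES[state];
  return (
    <>
      <div
        aria-hidden
        style={{
          width: FRAME,
          height: FRAME,
          backgroundImage: 'url(/mascot-sheet.png)', // placeholder asset
          backgroundPositionY: -row * FRAME,
          animation: `play-${state} 0.8s steps(${frames}) infinite`,
        }}
      />
      <style>{`
        @keyframes play-${state} {
          from { background-position-x: 0px; }
          to { background-position-x: -${frames * FRAME}px; }
        }
      `}</style>
    </>
  );
}
```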
8. vibeforge1111/keep-codex-fast
909 stars this week · Python
A safety-first maintenance skill for OpenAI Codex that inspects bloated local state (SQLite thread metadata, stale worktrees, rotting logs) and archives rather than deletes, so you never lose context while recovering performance.
Use case
After weeks of heavy Codex use across multiple repos, the local SQLite database swells with oversized thread title/preview metadata, old worktrees pile up in the hot path, and logs grow unchecked — making chat navigation noticeably sluggish. This tool runs in inspect-only mode first so you see exactly what's grown, generates handoff docs before archiving any session, then applies changes only on explicit confirmation. Concretely: a dev who's been running Codex daily since January might have 200+ stale threads and 4GB of log files they didn't know existed.
Why it's trending
OpenAI relaunched Codex as a cloud coding agent in early May 2026 and early adopters are already hitting local state bloat after daily use — this repo hit the top of GitHub trending because it's the first tool to address the operational reality of running Codex long-term rather than just the initial setup. The 'archive, don't delete' philosophy is a direct reaction to devs getting burned by aggressive cleanup scripts that wiped thread history.
How to use it
1. Clone the repo and point it at your Codex state directory: `python keep_codex_fast.py --inspect` — this is read-only and produces a full report with no writes.
2. Review the report: it surfaces SQLite thread count, oversized title/preview rows, stale worktrees older than N days, large log files, and dead project config references.
3. Run maintain mode to apply safe changes: `python keep_codex_fast.py --apply` — backs up state, archives old sessions to a dated folder, moves stale worktrees out of the active path, rotates logs.
4. Only if you see pathological SQLite metadata bloat (title/preview rows >1KB each): add `--repair-thread-metadata-bloat` — this shortens display metadata while leaving full transcripts intact.
5. Schedule it: a crontab entry like `0 9 * * 1 python /path/to/keep_codex_fast.py --apply` runs a weekly Monday-morning clean.
How I could use this
- Henry's blog already stores AI-generated content (githot, ai-news, visa-news) as markdown files — build a parallel 'content audit' script modeled on keep-codex-fast's inspect mode that reports which `content/` directories have grown past a size threshold, flags posts older than 90 days with zero pageviews (via Supabase analytics), and generates a handoff doc before archiving them to a `content/archive/` folder. Same 'report first, apply second' UX (a minimal inspect-mode sketch follows this list).
- The career tools accumulate per-user session data (interview prep transcripts, resume analysis history, learning path progress) in Supabase — apply the same inspect-before-archive pattern: build an admin dashboard route at `/admin/data-health` that queries session row counts per user, flags accounts with >500 rows of stale history, and offers a one-click 'archive old sessions' action that moves rows to a cold `_archive` table rather than deleting, preserving the data for potential reactivation.
- The keep-codex-fast 'handoff doc' concept maps directly onto a Claude-powered feature for the interview prep tool: before a user archives or ends an interview session, auto-generate a structured handoff summary (weak areas identified, questions to revisit, next prep steps) using claude-haiku-4-5-20251001, store it as a markdown blob in Supabase, and surface it the next time they start a session for the same role — giving continuity without requiring them to re-read a full transcript.
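A minimal sketch of the inspect-only pass from the first idea; the directory layout, thresholds, and report location are placeholders, and the script only reports, it never moves files.

```typescript
// scripts/content-audit.ts (hypothetical): report-first pass over content/,
// mirroring keep-codex-fast's --inspect mode. The only write is the report itself.
import { readdir, stat, writeFile } from 'node:fs/promises';
import path from 'node:path';

const CONTENT_DIR = 'content';
const SIZE_THRESHOLD_BYTES = 5 * 1024 * 1024; // 5 MB per directory, a guess to tune
const STALE_DAYS = 90;

async function dirReport(dir: string) {
  let totalBytes = 0;
  const stalePosts: string[] = [];
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    if (!entry.isFile() || !entry.name.endsWith('.md')) continue;
    const info = await stat(path.join(dir, entry.name));
    totalBytes += info.size;
    const ageDays = (Date.now() - info.mtimeMs) / 86_400_000;
    if (ageDays > STALE_DAYS) stalePosts.push(entry.name); // pageview check would join here
  }
  return { dir, totalBytes, oversized: totalBytes > SIZE_THRESHOLD_BYTES, stalePosts };
}

async function inspect() {
  const dirs = (await readdir(CONTENT_DIR, { withFileTypes: true }))
    .filter((d) => d.isDirectory())
    .map((d) => path.join(CONTENT_DIR, d.name));
  const report = await Promise.all(dirs.map(dirReport));
  await writeFile('content-audit-report.json', JSON.stringify(report, null, 2));
  console.table(report.map(({ dir, totalBytes, oversized }) => ({ dir, totalBytes, oversized })));
}

inspect();
```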
9. lightseekorg/tokenspeed
760 stars this week · Python · blackwell deepseek gpt-oss kimi
TokenSpeed is an LLM inference engine that matches TensorRT-LLM throughput on Blackwell GPUs while keeping vLLM's developer ergonomics — specifically tuned for multi-step agentic pipelines where inference latency compounds.
Use case
The real problem: when you chain LLM calls (e.g., parse resume → score skills → generate gap analysis → recommend learning path), each hop adds latency and each KV cache gets discarded between calls. TokenSpeed's scheduler models request lifecycle as a finite-state machine with typed KV cache ownership, so it safely reuses KV state across agentic turns instead of reallocating. Concrete scenario: a 6-step career coaching agent that takes 14s on vLLM runs in ~7s on TokenSpeed with the same Qwen or DeepSeek model, without writing a single line of parallelism logic.
Why it's trending
The Kimi K2.5 B200 benchmarks dropped this week showing TokenSpeed beating TensorRT-LLM on agentic workloads — the chart in the README is the thing people are sharing. Nvidia Blackwell (B200) is brand-new infrastructure and there are almost no production-ready inference engines for it yet, so anyone standing up B200 clusters right now is watching this repo closely.
How to use it
1. Clone and install (preview, not yet on PyPI): `git clone https://github.com/lightseekorg/tokenspeed && cd tokenspeed && pip install -e .`
2. Follow the blog post at lightseek.org/blog/lightseek-tokenspeed.html to reproduce the Kimi K2.5 benchmark — this is the only fully-documented path in the preview release.
3. Use the AsyncLLM entrypoint for concurrent agentic requests: `from tokenspeed import AsyncLLM; engine = AsyncLLM(model='kimi-k2.5', ...); result = await engine.generate(prompt, session_id='agent-turn-3')`
4. Profile with their built-in Pareto curve tooling to find your throughput/latency tradeoff vs vLLM at your target batch size.
5. Watch the repo — DeepSeek V4 and Qwen 3.6 support are the next merges into main, so pin a specific commit if you need stability.
How I could use this
- Write a benchmarked deep-dive post: 'I ran vLLM vs TokenSpeed on a 6-step resume analysis pipeline — here are the latency waterfall charts.' Use a free-tier B200 cloud instance (Lambda Labs has them) with Qwen 2.5 7B and instrument each LLM hop with `time.perf_counter()`. This exact query ('fastest inference engine for agentic LLMs 2025') has essentially no good content yet and is highly rankable.
- When Gradland's AI career tools (resume analyser, interview prep) hit enough concurrent users to make Claude Haiku costs hurt, TokenSpeed + a self-hosted Qwen 2.5 14B becomes a viable swap. The AsyncLLM entrypoint accepts the same concurrent request pattern as the Anthropic SDK — you could build a thin adapter in `lib/llm.ts` that routes to either backend by env var, letting you A/B cost vs quality at runtime (a minimal adapter sketch follows this list).
- Build an 'inference stats' API endpoint that surfaces real TTFT (time-to-first-token) and tokens/sec for each AI feature in Gradland — then show a live latency badge on each tool page ('Resume analysis: avg 1.2s response'). When you eventually migrate a tool to TokenSpeed, the badge becomes proof of the speedup. It builds user trust and gives you a concrete metric to write about.
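A minimal sketch of that adapter from the second idea. TokenSpeed's preview only documents the Python AsyncLLM API, so the TokenSpeed branch here assumes you put your own thin HTTP wrapper in front of it; the URL, request shape, and env var names are all assumptions.

```typescript
// lib/llm.ts (sketch): route completions to Anthropic or a self-hosted
// TokenSpeed-backed service depending on LLM_BACKEND.
import Anthropic from '@anthropic-ai/sdk';

export async function complete(prompt: string): Promise<string> {
  if (process.env.LLM_BACKEND === 'tokenspeed') {
    // Your own wrapper service sitting in front of TokenSpeed's AsyncLLM engine.
    const res = await fetch(`${process.env.TOKENSPEED_URL}/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });
    const data = await res.json();
    return data.text;
  }

  const anthropic = new Anthropic();
  const msg = await anthropic.messages.create({
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  });
  return msg.content[0].type === 'text' ? msg.content[0].text : '';
}
```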
10. MayersScott/rkn-block-checker
756 stars this week · Python · censorship cli dns dpi
A Python CLI that diagnoses Russian ISP internet blocks layer by layer — not just 'site is down' but specifically whether it's DNS poisoning, TCP reset injection, TLS SNI deep-packet inspection, or an HTTP stub redirect.
Use case
Russian developers increasingly can't reach GitHub, npm, or Cloudflare, and the browser just says 'can't reach this site' with no actionable detail. This tool probes each network layer independently: it resolves DNS against both your ISP and a clean resolver, attempts a raw TCP handshake, checks if TLS completes past the SNI stage, and inspects the HTTP response. The result tells you exactly which layer the ISP is cutting — so you know whether a DNS-over-HTTPS fix is enough, or whether you actually need a full tunnel past a TSPU middlebox doing SNI filtering.
Why it's trending
RKN enforcement has intensified through early 2026 with new TSPU hardware rollouts across major Russian ISPs, making the 'just use a VPN' advice increasingly insufficient as SNI-based DPI catches unencrypted handshake metadata. Russian and post-Soviet developers are searching for diagnostic tools that explain the specific bypass needed for their ISP's enforcement method.
How to use it
- Install: `pip install rkn-block-checker`
- Run the default probe (uses a built-in site list): `rkn-check` — prints your IP, ISP/ASN, and a per-site verdict table
- Check specific domains you care about: `rkn-check --sites github.com registry.npmjs.org pypi.org`
- Read the verdict column: `DNS_POISONED` → use DoH; `TCP_RESET` → ISP injects RST packets, need full tunnel; `TLS_SNI_DPI` → SNI filtering, ECH/ESNI or tunnel needed; `HTTP_STUB` → block page redirect, transparent proxy in path
- Automate or integrate: run with the `--json` flag (if supported) or pipe stdout to a log for monitoring your ISP's block list evolution over time
How I could use this
- Write a post titled 'How Russian ISPs Actually Block Websites (And What Bypasses Each)' — walk through the tool's four verdict layers with Wireshark screenshots, explain TSPU architecture, and benchmark which bypass method (DoH vs ECH vs WireGuard) defeats each block type. This targets the large Russian/CIS developer diaspora in Australia who understand the problem firsthand — highly shareable on HN and r/Russia.
- Add a 'Network Readiness Check' to Gradland's onboarding flow for new international users: a lightweight client-side probe (using fetch with timeout + DNS-over-HTTPS fallback) that warns users if their current network is blocking Supabase, Stripe, or Anthropic API endpoints — relevant for users on university networks or corporate proxies that silently drop HTTPS to certain ASNs.
- Build a Claude-powered 'ISP Block Explainer' API route: accept the rkn-block-checker JSON output, pass it to claude-haiku-4-5-20251001 with a prompt that maps verdict codes to plain-English explanations and ranked remediation steps (e.g. 'Your ISP is filtering on TLS SNI. Step 1: try DNS-over-HTTPS in Firefox settings. Step 2: if that fails, your ISP has TSPU hardware — you need a full tunnel. Here are three options ranked by setup difficulty'). Package as a sharable tool page at /tools/network-check.
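A minimal sketch of that explainer route, assuming the client posts the checker's JSON verdicts directly; the route path, request shape, and system prompt are illustrative.

```typescript
// app/api/network-check/route.ts (hypothetical): turn rkn-block-checker
// verdicts into plain-English remediation steps with Claude Haiku.
import Anthropic from '@anthropic-ai/sdk';
import { NextRequest, NextResponse } from 'next/server';

const anthropic = new Anthropic();

export async function POST(req: NextRequest) {
  // Expected body (illustrative): { "github.com": "TLS_SNI_DPI", "pypi.org": "DNS_POISONED" }
  const verdicts = await req.json();

  const msg = await anthropic.messages.create({
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 800,
    system:
      'You explain ISP block diagnostics to non-experts. For each site and verdict code, ' +
      'say what the ISP is doing in one sentence, then give remediation steps ranked by ' +
      'setup difficulty (DNS-over-HTTPS, ECH, full tunnel).',
    messages: [{ role: 'user', content: JSON.stringify(verdicts) }],
  });

  const text = msg.content[0].type === 'text' ? msg.content[0].text : '';
  return NextResponse.json({ explanation: text });
}
```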