Gradland

GitHub Hot — 7 May 2026

7 May 2026 · 24 min read · GitHub · Open Source · Tools

Top 10 repos trending on GitHub this week — what they do, why they matter, and how to use them in your projects.


1. darrylmorley/whatcable

2,096 stars this week · Swift · apple-silicon hardware-info iokit mac-app

A macOS menu bar app that reads IOKit USB-C port data and surfaces it in plain English — finally answering why your MacBook is charging at 15W through a 96W charger.

Use case

USB-C's single connector spec hides a 20-year range of cable capability: a USB 2.0 charge-only cable and a Thunderbolt 4 cable are physically identical. When your MacBook charges slowly, the bottleneck could be the cable, the charger, or the Mac itself — and macOS gives you no feedback. WhatCable reads the IOKit power-delivery and USB speed negotiation data that macOS already has internally and displays it per-port: 'Cable is limiting charging speed — cable rated 60W, charger can do 96W.'

Why it's trending

Apple's full USB-C pivot (MacBook Pro, MacBook Air, iPad Pro, iPhone 15+) means every Apple user now has a drawer of visually identical cables with wildly different specs, and the charging-speed mystery is a universal frustration. It's also a clean, study-worthy SwiftUI + IOKit integration that Swift developers are forking to understand how to query hardware registers without private APIs.

How to use it

  1. Install via Homebrew: brew install --cask whatcable — or download the zip from GitHub releases and drag WhatCable.app to /Applications.
  2. Launch the app — a cable icon appears in your menu bar.
  3. Plug in your USB-C cables/chargers and click the menu bar icon to see a per-port popover: headline (Thunderbolt / USB 2.0 / Charging only), wattage being negotiated, and a plain-English bottleneck diagnosis.
  4. For developers studying the source: the IOKit queries live in Sources/WhatCable/Hardware/ — look at how IOServiceMatching(kIOUSBDeviceClassName) is used to pull kUSBCurrentAvailable and Thunderbolt capability flags without any private APIs.
  5. To integrate similar IOKit reads into your own Swift tool: IOServiceGetMatchingServices + IORegistryEntryCreateCFProperties gives you a dictionary of all USB negotiation properties per port.

How I could use this

  1. Write a high-intent SEO post: 'Why Is My MacBook Charging Slowly? (And How to Actually Fix It)' — use WhatCable screenshots to illustrate the cable/charger/Mac bottleneck triage. 'MacBook charging slow' gets 8K+ monthly searches in AU; this post slots directly into TechPath AU's content moat and ranks against generic Apple support pages.
  2. Add a 'Dev Machine Checklist' card to the TechPath AU onboarding flow for international IT grads — include a curated list of free diagnostic tools (WhatCable, Stats, Hand Mirror) framed as 'set up your Mac for Aussie dev jobs.' Low-effort high-value content that differentiates the platform from generic job boards.
  3. Build a lightweight 'My Mac is acting weird' AI triage chatbot using Claude Haiku — user describes a symptom (slow charging, laggy screen, USB device not recognised), Claude asks 2-3 structured follow-up questions, then returns the exact tool to open and the exact setting to check. WhatCable handles charging; this becomes a template for a broader 'dev machine health' AI assistant feature.

2. aattaran/deepclaude

1,571 stars this week · JavaScript

DeepClaude swaps Claude Code's expensive Anthropic backend for cheaper high-performing models like DeepSeek V4 Pro, giving you the same workflow at a fraction of the cost.

Use case

Anthropic's Claude Code gets expensive fast for autonomous coding tasks. If you're building a project that relies on iterative coding loops or file-editing automation, DeepClaude lets you achieve the same results at a fraction of the cost, which makes it ideal for budget-conscious developers and startups.

Why it's trending

It offers a practical, cost-effective alternative to Claude Code's expensive API, which is a pain point for many developers, and it rides both the rising interest in autonomous coding agents and the strong performance of DeepSeek V4 Pro.

How to use it

  1. Get a DeepSeek API key by signing up at platform.deepseek.com, adding $5 credit, and copying your API key.
  2. Set the environment variable for your API key. For example, on macOS/Linux: echo 'export DEEPSEEK_API_KEY="sk-your-key-here"' >> ~/.bashrc && source ~/.bashrc.
  3. Install the script. On macOS/Linux: chmod +x deepclaude.sh && sudo ln -s "$(pwd)/deepclaude.sh" /usr/local/bin/deepclaude.
  4. Launch the tool with deepclaude in your terminal to start using the cheaper backend.
  5. Use commands like deepclaude --status to check available backends, or deepclaude --backend or to switch to OpenRouter for even cheaper rates.

How I could use this

  1. Integrate DeepClaude into your blog's backend to generate or refactor code snippets dynamically. For example, create a feature where users can input a problem, and the blog generates a code snippet solution using the autonomous agent.
  2. Build a resume-enhancing tool that uses DeepClaude to automatically generate optimized, ATS-friendly resumes or cover letters based on user input, leveraging the cost-effective backend for iterative text generation.
  3. Create an AI-powered feature that allows users to request custom blog posts or tutorials on specific topics. DeepClaude can handle the autonomous multi-step writing process to draft and refine the content.

3. mattpocock/dictionary-of-ai-coding

1,209 stars this week · TypeScript

A structured, open-source dictionary of AI coding terminology — written by Matt Pocock (Total TypeScript) to cut through VC-manufactured jargon and give developers plain-English definitions they can actually use.

Use case

Developers using Cursor, Copilot, or the Anthropic SDK hit terms like 'context window degradation', 'temperature', 'RAG', or 'non-determinism' and either guess wrong or waste an hour reading marketing docs. This repo gives each term a single, precise markdown file — no fluff. For example: you're debugging why your Claude resume analyzer gives different results on identical inputs — the answer is 'non-determinism', and this dictionary tells you exactly what that means and why it happens.

Why it's trending

Matt Pocock has 62k+ subscribers from Total TypeScript, so his audience followed him into AI tooling — but the timing is also perfect: Claude Code, Cursor, and GitHub Copilot are now mainstream tools and junior devs are hitting these knowledge gaps daily. The VC critique in the README ('there's a whole economy that benefits from keeping it hard to understand') is resonating hard on X/Twitter this week.

How to use it

  1. Browse the live site at aihero.dev/ai-coding-dictionary — no install needed for reading.
  2. Clone the repo: git clone https://github.com/mattpocock/dictionary-of-ai-coding and inspect dictionary/*.md — each term is a standalone markdown file with frontmatter.
  3. Run npm run generate to see how the README and site are compiled from those source files — the generator script is the interesting part if you want to replicate the pattern.
  4. Fork it and add domain-specific terms (e.g., Australian visa-sector AI jargon) using the same dictionary/your-term.md file pattern.
  5. Pull the raw markdown files via GitHub's raw URL in a fetch call to use them as a content source in your own app — no API key required.
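Step 5 above can be sketched in a few lines. The frontmatter parsing below is generic; the `title` field is a hypothetical example, so check the repo's actual frontmatter keys before relying on them.

```typescript
// Minimal sketch: parse one of the dictionary's markdown files into
// { frontmatter, body }. The `title` field is a hypothetical example.
type Term = { frontmatter: Record<string, string>; body: string };

function parseTerm(md: string): Term {
  const match = md.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { frontmatter: {}, body: md.trim() };
  const frontmatter: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > -1) frontmatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { frontmatter, body: match[2].trim() };
}

// In an app you would fetch the raw file first, e.g.:
//   const md = await (await fetch(rawGitHubUrl)).text();
const sample = `---\ntitle: non-determinism\n---\nThe same prompt can yield different outputs.`;
const term = parseTerm(sample);
console.log(term.frontmatter.title); // → "non-determinism"
```

Because the files are plain markdown, the same parser works whether you fetch them at runtime or read a vendored copy at build time.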

How I could use this

  1. Add a floating 'AI Term of the Day' widget to TechPath AU's blog sidebar — each day Claude picks one term from the dictionary, writes a 2-sentence example specific to Australian tech job-seeking (e.g., 'context window' explained via a resume analyser metaphor), and caches it in Supabase. Zero extra API cost per visitor, high SEO value for 'AI glossary Australia' queries.
  2. Surface inline glossary tooltips inside the Resume Analyser and Interview Prep tools — when the AI response includes words like 'embedding similarity' or 'hallucination', wrap them in a <Tooltip> that pulls the plain-English definition from a local copy of this dictionary's markdown files. Reduces user anxiety about what the AI is actually doing and increases trust in paid features.
  3. Build a 'Can You Pass an AI Interview?' quiz page — use Claude Haiku to generate 5 multiple-choice questions from a random selection of dictionary terms, score the user, then link to deeper reading on aihero.dev. It's a natural SEO hook for international graduates who need to talk about AI tools in Australian tech interviews, and it costs under $0.001 per quiz run with Haiku.
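The inline-tooltip idea in item 2 is mostly a string-wrapping problem. A minimal sketch, where the `<Tooltip>` markup and the glossary shape are placeholders of this sketch rather than anything the repo ships:

```typescript
// Naive inline-glossary pass: wrap known terms in a <Tooltip> tag carrying
// the plain-English definition. Longest terms first so "context window
// degradation" wins over "context window". It does not guard against
// matches inside already-inserted markup, which is fine for short answers.
function wrapGlossaryTerms(text: string, glossary: Record<string, string>): string {
  const terms = Object.keys(glossary).sort((a, b) => b.length - a.length);
  let out = text;
  for (const term of terms) {
    const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    out = out.replace(
      new RegExp(`\\b${escaped}\\b`, "gi"),
      (m) => `<Tooltip def="${glossary[term]}">${m}</Tooltip>`
    );
  }
  return out;
}

const glossary = {
  hallucination: "Confident output not grounded in the input or training data.",
};
console.log(wrapGlossaryTerms("Watch for hallucination in long answers.", glossary));
```

In a React app the string pass would be replaced by splitting the AI response into nodes, but the term-matching logic stays the same.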

4. yaojingang/yao-open-prompts

1,068 stars this week · Python · ai chinese-prompts geo prompt-engineering

A structured, production-ready library of 116 Chinese AI prompts (with full English mirrors) covering GEO, meta-prompt engineering, content ops, and career workflows — not a random dump of one-liners.

Use case

Most prompt repos are disorganised collections of half-tested ideas. This one is curated by a practitioner: each prompt is a standalone markdown file with a clear task frame, stripped of tutorial fluff, and categorised by real workflow (contract generation, PPT scripting, Feynman-style learning, GEO content). Concrete example: the 25-template GEO (Generative Engine Optimisation) suite lets you systematically rewrite blog content so it gets cited by ChatGPT, Perplexity, and Gemini — treating AI search the same way SEO treated Google in 2012.

Why it's trending

GEO is the hottest topic in content marketing right now — brands are realising traffic from AI-powered search is replacing organic Google clicks, and there are almost no structured prompt toolkits for it yet. This repo dropped 25 battle-tested GEO templates at exactly the right moment.

How to use it

  1. Clone or browse the repo at the prompts/ directory — each file is a self-contained markdown prompt you can copy directly into Claude, GPT-4o, or any LLM.
  2. Start with prompts/01-ai-methods/rtf-meta-prompt-system-v06.md — it's a meta-prompt that takes any rough requirement and outputs a structured, reusable prompt using the RTF (Role / Task / Format) framework. Use this to refine your own prompts before hardcoding them into API calls.
  3. For GEO work, pull any template from prompts/08-ai-marketing/ — each covers a specific stage (opportunity audit, content rewriting for AI citation, schema.org structured data, compliance risk). Feed your existing blog post as context.
  4. For career tool copy (cover letters, resume bullets), use prompts/02-ai-work/ — the contract and sales prompt patterns translate directly to structured job-application writing with consistent tone.
  5. Use the English mirror at prompts-en/ for direct copy-paste into your TypeScript API routes — no translation step needed. Store prompts as .md files in your own content/ directory and read them at build time via fs.readFileSync, keeping them version-controlled and editable without touching code.

How I could use this

  1. Feed the GEO content-rewriting templates into a 'GEO Optimizer' tool on TechPath AU — users paste a blog post or LinkedIn summary and Claude rewrites it to be citation-friendly for AI search engines like Perplexity. Highly relevant for international grads who want their profiles to surface in AI-powered recruiter searches.
  2. Use the RTF meta-prompt system (rtf-meta-prompt-system-v06.md) as the backbone of a 'Prompt Builder' page under your AI tools section — users describe what they want Claude to help them with (e.g. 'write a cover letter for a 485 visa holder applying to a Melbourne fintech'), and the meta-prompt generates a structured, reusable prompt they can save and rerun. Store saved prompts per user in Supabase.
  3. Pull the Feynman-questioning and critical-thinking learning prompts from prompts/03-ai-learning/ and wire them into the existing Learn section — after a user watches a video or completes a module, Claude uses the Feynman prompt to quiz them conversationally, then grades their explanation. This upgrades passive video consumption into active recall, which is a defensible differentiator over generic learning path tools.

5. XBuilderLAB/cheat-on-content

987 stars this week · Python

A Claude Code skill suite that replaces content gut-feel with a self-evolving personal scoring rubric — score before you post, predict engagement, review actual data 3 days later, and let the loop tighten your judgment over time.

Use case

Every content creator ships posts without a feedback loop tight enough to learn from. This repo enforces one: before publishing, you score the draft against your own rubric and commit a prediction (views, saves, shares). Three days later you debrief against real numbers. After three consecutive wrong-direction predictions, the system prompts you to update the rubric — with a safeguard that requires the new rubric to outscore the old one on all historical posts before it takes effect. Concrete example: Henry posts a githot digest, predicts 'medium reach,' it flops, but the retroactive scoring reveals the hook was too technical — rubric gets updated to weight hook accessibility higher.

Why it's trending

It hit 987 stars in a week almost certainly because it's the first public Claude Code workflow that treats Claude as a persistent ops agent rather than a one-shot tool — the 'auto-evolving rubric' architecture is a novel pattern that resonates with anyone who's bounced off stateless chatbots. The 1M-follower claim (even if unverifiable) is credible enough as a hook given the system's rigor.

How to use it

  1. Clone and install the 13 skills: git clone https://github.com/XBuilderLAB/cheat-on-content.git && cd cheat-on-content && bash install.sh — symlinks the skills to ~/.claude/skills/.
  2. Open Claude Code in your content project directory and run 初始化 cheat-on-content ("initialise cheat-on-content") — answer 5 yes/no onboarding prompts (platform, cadence, niche, 5–10 benchmark account samples).
  3. Before each post: 打分这篇 scripts/my-post.md ("score this post") to get a rubric score, then 启动预测 scripts/my-post.md ("start prediction") to log your engagement prediction.
  4. After publishing: 已发布 https://your-post-url ("published") to decrement the buffer and log the publish event.
  5. Three days later: 复盘 videos/my-post/ ("debrief") to compare prediction vs. actuals — if you're wrong three times in a row, the system surfaces a rubric upgrade prompt.

How I could use this

  1. Run cheat-on-content against Henry's githot digests specifically: each weekly post is already a structured format (repo + explanation), so scoring them before publishing and tracking which framing angles (tutorial vs. trending-tool vs. career-relevance) actually drive traffic to TechPath AU would give a data-backed editorial formula within 6–8 posts.
  2. Port the rubric + prediction log as a lightweight Supabase table (post_id, rubric_scores jsonb, predicted_engagement int, actual_views int, actual_signups int) and surface it in a private /admin/content-lab dashboard — Henry can then correlate which blog post types actually convert to free-trial signups for the career tools, not just pageviews.
  3. Use the 'auto-evolving rubric' architecture as the design pattern for the resume analyser's scoring system: instead of a static prompt, store the rubric as a versioned row in Supabase, gate upgrades behind a backtesting pass against historical resumes, and expose a /api/resume/rubric-version endpoint so the frontend can surface 'scored with rubric v3' provenance on each analysis result.

6. crafter-station/petdex

946 stars this week · TypeScript

A community-curated gallery of animated sprite pets that live in your terminal alongside OpenAI's Codex CLI agent — think Tamagotchi for your AI coding assistant.

Use case

OpenAI's Codex CLI agent (released May 2025) supports a 'pet' companion that sits in your terminal while you code, reacting to events like task completion or errors. The problem: there was no central place to discover, preview, or share community-made pet packs. Petdex solves this — imagine browsing a gallery, previewing a pixel-art axolotl in all its idle/working/sleeping states, downloading the ZIP, and dropping it into ~/.codex/pets in 30 seconds.

Why it's trending

OpenAI's Codex agent launched to massive hype in late April/early May 2025 and the terminal pet feature became a viral side-story — developers immediately started shipping custom sprites. Petdex is the community's answer to the distribution problem, hitting 946 stars in a single week purely on momentum from the Codex launch.

How to use it

  1. Clone and run locally: git clone https://github.com/crafter-station/petdex && cd petdex && bun install && bun dev
  2. Browse the gallery at localhost:3000 — each pet card shows all animation states (idle, working, error, sleep) in-browser before you commit to downloading.
  3. Download a pet ZIP, unzip it into your Codex CLI pets directory (typically ~/.codex/pets/<pet-name>/), then restart Codex — it auto-detects the new pack.
  4. To submit your own: drop a valid pet package folder (sprites + manifest JSON) into public/pets/ and open a PR — the browser-based validator in the repo will catch malformed manifests before CI does.
  5. Generate the full gallery archive for offline use: bun run build outputs a downloadable pack under public/packs/.

How I could use this

  1. Add a 'Pet of the Week' widget to your blog's sidebar — fetch the top-starred pet from the Petdex gallery and render its idle sprite as an animated GIF with a link back. Zero backend needed, just a weekly cron in your existing fetch-ai-news GitHub Actions workflow writing a JSON file that the Next.js static page reads at build time.
  2. Build a 'My Coding Pet' card for your TechPath AU developer profile — let users pick a Codex pet from the Petdex gallery and display it on their public profile page alongside their GitHub stats. It's a low-friction personalisation hook that gives international students something fun to set up during onboarding, increasing profile completion rates.
  3. Create a custom TechPath AU branded pet pack — pixel-art character that reacts to resume analysis completion, interview prep sessions, or visa milestone events — and submit it to Petdex as open-source community content. It's a legitimate SEO/backlink play: every Codex user who downloads the 'TechPath' pet sees your brand in their terminal daily.
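The 'Pet of the Week' cron in item 1 reduces to a pure selection step plus a JSON write. A sketch, with the gallery-item shape assumed rather than taken from Petdex:

```typescript
import { writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical gallery-item shape; check Petdex's actual data model first.
type Pet = { name: string; stars: number; spriteUrl: string };

// Pure selection step: highest star count wins, first entry wins ties.
function petOfTheWeek(pets: Pet[]): Pet {
  return pets.reduce((best, p) => (p.stars > best.stars ? p : best));
}

// The weekly GitHub Action would fetch the gallery, then persist the pick
// as JSON for the Next.js static build to read:
const pets: Pet[] = [
  { name: "axolotl", stars: 120, spriteUrl: "/pets/axolotl/idle.gif" },
  { name: "corgi", stars: 340, spriteUrl: "/pets/corgi/idle.gif" },
];
const outPath = join(tmpdir(), "pet-of-the-week.json");
writeFileSync(outPath, JSON.stringify(petOfTheWeek(pets), null, 2));
console.log(petOfTheWeek(pets).name); // → "corgi"
```

Because the JSON is committed by the cron, the blog page itself stays fully static: no gallery API call at request time.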

7. jherrodthomas/automotive-skills-suite

892 stars this week · various · apqp aspice automotive autosar

152 installable Claude slash-command skills that automate structured automotive engineering deliverables (ISO 26262, AUTOSAR, UDS, APQP) — every builder skill is paired with a confirmation reviewer that outputs KPI dashboards.

Use case

Automotive engineers spend weeks manually producing FMEA worksheets, TARA threat analyses, AUTOSAR ARXML stubs, and PPAP packages — all highly templated but cognitively expensive. This suite turns those into Claude slash commands: run /dfmea-builder with your system description and get a populated DFMEA table; run /dfmea-reviewer and get a pass/fail dashboard against AIAG-VDA criteria. Concrete scenario: a Tier-1 supplier engineer needs a first-draft Hazard Analysis and Risk Assessment (HARA) for a new ADAS feature — instead of starting from a blank Excel template, they run /hara-builder, review the output, then run /hara-reviewer to catch gaps before the safety manager sees it.

Why it's trending

Claude's Projects + custom slash commands hit mainstream adoption in early 2025, and this is one of the first domain-complete skill suites for a regulated engineering vertical — 892 stars in a week signals that automotive engineers found it and are sharing it internally. It also rides the wave of companies using AI to accelerate ISO 26262 compliance audits, which are notoriously slow and consultant-heavy.

How to use it

  1. Clone the repo and browse skills/ — each .md file is a self-contained Claude skill with a system prompt and usage instructions.
  2. In Claude.ai, open a Project and paste the skill's system prompt into the Project Instructions, or install it as a slash command if you're using Claude Code.
  3. Invoke the builder: type /hara-builder and provide your item definition (e.g., 'Lane-keeping assist ECU, operates at highway speeds, controls EPS torque').
  4. Claude returns a structured HARA table with hazardous events, severity/exposure/controllability ratings, and ASIL assignments.
  5. Immediately run the paired /hara-reviewer skill on that output — it returns a visual dashboard with KPI tiles (% items rated, missing fields, ASIL consistency checks) so you catch errors before formal review.

How I could use this

  1. Write a deep-dive Githot post titled 'The builder+reviewer pattern: why every AI artifact tool should ship in pairs' — use this repo as the anchor example, then show Henry's readers how to apply the same pattern to their own Claude Projects (e.g., a cover-letter builder paired with a cover-letter reviewer that scores against the job description). This is a genuinely novel prompt-architecture insight, not a product roundup.
  2. Build a 'Skills Assessment Prep' tool for TechPath AU using the same installable-skill pattern: an ACS RPL (Recognition of Prior Learning) Statement Builder that takes a user's work history and outputs a draft RPL statement in ACS format, paired with an ACS RPL Reviewer skill that checks for the required competency elements. This directly serves Henry's 485/PR visa audience — ACS skills assessment is the #1 blocker for international IT grads seeking PR.
  3. Replicate the builder+reviewer architecture inside TechPath's interview prep feature: when Claude generates a mock interview answer, automatically pipe it through a second Claude call acting as the 'reviewer' with a structured rubric (STAR format completeness, relevance to job level, filler-word density). Return both outputs side-by-side so the user sees the answer AND a scored critique — higher signal than a raw AI answer alone.
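The two-call pipe in item 3 is straightforward to wire. In this sketch both model calls are stubbed, since the shape of the pattern is the point; the STAR-completeness check stands in for a real reviewer prompt:

```typescript
// Builder + reviewer in one pipe: every generated artifact is immediately
// scored by a second call against a fixed rubric. Both calls are stubbed;
// in production each would be a model call, with the reviewer given the
// rubric as its system prompt.
type Review = { score: number; notes: string[] };

async function buildAnswer(question: string): Promise<string> {
  // stub for the "builder" model call
  return `Situation: ... Task: ... Action: ... Result: ... (${question})`;
}

async function reviewAnswer(answer: string): Promise<Review> {
  // stub rubric: STAR-format completeness only
  const sections = ["Situation", "Task", "Action", "Result"];
  const missing = sections.filter((s) => !answer.includes(s));
  return {
    score: (sections.length - missing.length) / sections.length,
    notes: missing.map((s) => `Missing ${s} section`),
  };
}

async function mockInterview(question: string) {
  const answer = await buildAnswer(question);
  const review = await reviewAnswer(answer);
  return { answer, review }; // render side by side in the UI
}

mockInterview("Tell me about a production incident").then(({ review }) => {
  console.log(review.score); // → 1 (all four STAR sections present in the stub)
});
```

The key design choice is that the reviewer never sees the builder's prompt, only its output, so it can't be led by the same framing errors.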

8. strukto-ai/mirage

861 stars this week · TypeScript · agent-sandbox agent-tools ai-agents bash

Mirage is a virtual filesystem abstraction that lets AI agents read and write S3, Google Drive, Slack, Gmail, and Redis through a single unified Unix-like API instead of per-service integration code.

Use case

The core problem: every agent that touches more than one data source ends up with a rat's nest of custom adapters — one for S3, another for Drive, another for Redis. Mirage mounts all of them as a single tree (/s3/bucket, /gdrive/docs, /gmail/inbox) so the agent calls fs.read() and fs.write() regardless of what's underneath. Concrete scenario: a career assistant agent reads a user's uploaded resume from Supabase Storage, fetches the target job description from a URL, and writes a tailored cover letter back to Google Drive — one filesystem, zero per-service glue code.

Why it's trending

The OpenAI Agents SDK, Claude tool_use, and LangChain's agent tooling all reached production maturity in the last 6 months, and every serious agent builder hits the same wall: reliable, composable file I/O across backends. The claude-code topic in the repo signals it's already wired as a tool layer for Claude Code agents specifically, which is the exact use case driving adoption this week.

How to use it

  1. Install: npm install @struktoai/mirage-node
  2. Instantiate and mount sources: const { MirageFS } = require('@struktoai/mirage-node'); const fs = new MirageFS(); await fs.mount('s3', new S3Adapter({ bucket: 'my-bucket', region: 'ap-southeast-2' })); await fs.mount('gdrive', new GoogleDriveAdapter({ credentials }));
  3. Read/write uniformly in your agent tool: const resume = await fs.read('/s3/uploads/resume.pdf'); const jd = await fs.read('/gdrive/job-descriptions/senior-dev.md'); await fs.write('/s3/output/cover-letter.md', generatedText);
  4. Register fs.read and fs.write as tool_use tools in your Claude or OpenAI Agents call — the agent decides which paths to access.
  5. Chain across sources: fs.list('/gmail/inbox') → read threads → fs.write('/s3/summaries/daily.md')
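The value of the pattern is easiest to see with a toy backend. This sketch reimplements the mount/read/write idea from the steps above with an in-memory adapter; it is illustrative only, not Mirage's actual API or internals:

```typescript
// Toy version of the virtual-filesystem pattern Mirage implements:
// adapters expose read/write, the VFS routes by mount prefix.
interface Adapter {
  read(path: string): Promise<string>;
  write(path: string, data: string): Promise<void>;
}

class MemoryAdapter implements Adapter {
  private files = new Map<string, string>();
  async read(path: string): Promise<string> {
    const data = this.files.get(path);
    if (data === undefined) throw new Error(`ENOENT: ${path}`);
    return data;
  }
  async write(path: string, data: string): Promise<void> {
    this.files.set(path, data);
  }
}

class VirtualFS {
  private mounts = new Map<string, Adapter>();
  mount(prefix: string, adapter: Adapter): void {
    this.mounts.set(prefix, adapter);
  }
  // "/s3/uploads/resume.txt" resolves to the "s3" adapter + "uploads/resume.txt"
  private resolve(path: string): [Adapter, string] {
    const [, prefix, ...rest] = path.split("/");
    const adapter = this.mounts.get(prefix);
    if (!adapter) throw new Error(`no mount for /${prefix}`);
    return [adapter, rest.join("/")];
  }
  read(path: string): Promise<string> {
    const [adapter, inner] = this.resolve(path);
    return adapter.read(inner);
  }
  write(path: string, data: string): Promise<void> {
    const [adapter, inner] = this.resolve(path);
    return adapter.write(inner, data);
  }
}

// Agent code only ever sees paths; backends are swappable per mount.
const vfs = new VirtualFS();
vfs.mount("s3", new MemoryAdapter());
vfs.mount("gdrive", new MemoryAdapter());
vfs
  .write("/s3/uploads/resume.txt", "resume text")
  .then(() => vfs.read("/s3/uploads/resume.txt"))
  .then(console.log); // → "resume text"
```

Swapping a backend later is a one-line change at the mount call; nothing downstream of the path strings needs to know.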

How I could use this

  1. Wire the daily githot/digest pipeline through Mirage: mount GitHub Trending API + content/githot/ as a MirageFS instance so scripts/fetch-githot.ts does fs.list('/github/trending'), fs.read('/github/trending/{repo}/readme'), and fs.write('/local/content/githot/2026-05-07.md') — eliminates the bespoke fetch logic per source and makes the pipeline trivially extensible to new sources (HN, Reddit) without touching agent code.
  2. Build a resume-tailoring agent route (app/api/resume/tailor/route.ts) backed by Mirage: mount Supabase Storage as /supabase/uploads and a scraper adapter as /web/jd for job descriptions, so the Claude tool_use call reads the user's PDF and target JD through fs.read() — the agent never needs to know whether the file came from Supabase, S3, or a URL, and swapping storage backends later is a one-line mount change.
  3. Use Mirage as the I/O layer for a multi-step AI research agent powering the visa-news and ai-news content pipelines: mount /web/rss (RSS feeds), /local/content (markdown output), and /supabase/cache (dedup tracking) so the same agent code handles fetch → deduplicate → write without any source-specific branching — and the agent's tool calls become auditable filesystem operations you can log and replay.

9. vibeforge1111/keep-codex-fast

859 stars this week · Python

A safety-first maintenance skill for OpenAI Codex that inspects, archives, and cleans accumulated local state (SQLite metadata, worktrees, logs, dead project refs) without data loss.

Use case

After weeks of heavy Codex use — long threads, multiple repos, dev servers, resumed old sessions — its SQLite thread metadata and worktree directories balloon and chat navigation visibly slows. This skill gives you an inspect-first, archive-not-delete workflow: it reports what's grown, generates handoff docs for old sessions, then applies changes only on explicit confirmation. Concrete scenario: you open Codex after a sprint and thread switching lags; $keep-codex-fast inspect shows 2GB of stale worktrees and a 40MB SQLite bloat on title/preview columns — you archive the old sessions, rotate logs, and Codex is snappy again without losing any transcript.

Why it's trending

OpenAI shipped Codex as a general-availability cloud coding agent in late April 2026, causing a surge of heavy daily users who are now hitting the state-accumulation wall for the first time. There's no official housekeeping tooling yet, so this community skill is filling that gap at exactly the right moment.

How to use it

  1. Install: clone the repo and make the skill available to Codex via your skills directory (follow the repo's install path — typically ~/.codex/skills/keep-codex-fast/).
  2. Inspect first — no writes: ask Codex Use $keep-codex-fast to inspect my local state and recommend a maintenance plan. Review the report before touching anything.
  3. Generate handoffs for sessions you want to archive: Use $keep-codex-fast --handoff-session <thread-id> — this writes a markdown summary before archiving.
  4. Apply maintenance: Use $keep-codex-fast --apply — backs up state, archives old sessions, moves stale worktrees, rotates logs, prunes dead project refs.
  5. Optional: if SQLite title/preview columns are the bottleneck, run Use $keep-codex-fast --apply --repair-thread-metadata-bloat — trims display metadata only, transcripts stay intact.

How I could use this

  1. Build a 'Codex session digest' page on the blog: after each sprint, run the skill's handoff mode and pipe the markdown output into a content/digest/ post via a GitHub Action — instant public devlog with zero extra writing.
  2. Add a 'project health snapshot' to the TechPath AU dashboard: mirror the inspect-mode report concept (stale worktrees, large logs, dead config refs) but for users' own local dev environments — surface it as a weekly career-tools nudge ('your dev setup hygiene score').
  3. Wire the --repair-thread-metadata-bloat pattern into a Claude-powered 'context compressor' API route: when a user's interview-prep or resume-analysis session history grows long, auto-summarise old turns into a compact handoff doc and trim the active context, keeping Claude calls fast and cheap without losing continuity.
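The trimming half of that context compressor can be sketched independently of the model call, which is stubbed here:

```typescript
// Keep the last N turns verbatim; collapse everything older into one
// summary turn. The summariser is stubbed; in production it would be a
// cheap model call (e.g. Claude Haiku) over the dropped turns.
type Turn = { role: "user" | "assistant"; content: string };

function summarise(turns: Turn[]): string {
  return `[Summary of ${turns.length} earlier turns]`; // stub
}

function compressHistory(history: Turn[], keepLast: number): Turn[] {
  if (history.length <= keepLast) return history;
  const old = history.slice(0, history.length - keepLast);
  const recent = history.slice(history.length - keepLast);
  return [{ role: "assistant", content: summarise(old) }, ...recent];
}

const history: Turn[] = Array.from({ length: 20 }, (_, i): Turn => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `turn ${i}`,
}));
console.log(compressHistory(history, 4).length); // → 5
```

Running the compressor before each model call keeps token usage roughly constant no matter how long the session gets.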

10. tddworks/baguette

730 stars this week · Swift · agent cli devicefarm ios

A Swift CLI that creates, boots, and controls headless iOS simulators programmatically — screen streaming at 60fps and host-injected touch events, no Xcode GUI required.

Use case

The core problem is running iOS UI automation or AI computer-use agents against real simulator hardware without a display server or the 4GB Xcode.app GUI overhead. Concretely: you want a Claude/GPT agent to tap through an iOS app on a CI runner or a cloud Mac mini farm — baguette gives it a scriptable simulator it can stream frames from and inject taps/swipes into over a WebSocket, treating the iPhone screen as just another tool output.

Why it's trending

iOS 26 / Xcode 26 just dropped at WWDC 2025, and baguette is one of the first CLI tools targeting the new SimulatorKit APIs in that release. It's also landing exactly when 'computer-use' style AI agents (Claude, Gemini, GPT-4o) are moving from web browsers to native mobile interfaces — this is the missing plumbing layer.

How to use it

  1. Install: brew install tddworks/tap/baguette (or swift build -c release from source on macOS 15+, Xcode 26).
  2. Create a device: baguette create --name AgentPhone --runtime com.apple.CoreSimulator.SimRuntime.iOS-26-0
  3. Boot headless: baguette boot AgentPhone
  4. Start the browser-accessible web UI + MJPEG stream: baguette serve AgentPhone --port 9000 — open localhost:9000 to see the live screen.
  5. Inject a tap from your agent: baguette tap AgentPhone --x 195 --y 420 or POST {"x":195,"y":420} to the REST endpoint baguette exposes.

How I could use this

  1. Write a blog post titled 'I gave Claude a headless iPhone' — use baguette to boot a simulator, pipe the MJPEG stream as base64 frames to claude-sonnet-4-6 with tool_use, and let it navigate the iOS Settings app autonomously. Document the latency, error modes, and cost per interaction — this is a genuinely novel tutorial that will rank for 'iOS computer-use agent'.
  2. Build a 'mobile portfolio screenshotter' GitHub Action: on each push, baguette boots a simulator, installs your IPA, takes a series of automated screenshots across key screens, and commits them to the repo as up-to-date portfolio assets — every recruiter clicking your GitHub sees current, real device frames without you maintaining them manually.
  3. Add a 'Mobile UX Auditor' feature to TechPath AU's career tools: accept an iOS app bundle or TestFlight link, boot it in baguette, stream 10-15 screenshots through claude-sonnet-4-6 with a structured prompt, and return a WCAG + HIG compliance report — differentiated from web auditors and directly relevant to your iOS dev audience chasing 482/485 visas.
Go build something