My Public Repos: Userscripts, AI Tooling, and Side Projects

I have been building and open-sourcing tools for a while now, mostly scratching my own itches. Here is a rundown of everything public on my GitHub, grouped by what each one actually does.

AI Agent Tooling

agentGuidance is the centralized brain for all my Claude Code sessions. Every repo I work in fetches rules from this repo on session start via a hook. It contains the core instruction set (agent.md), topic-specific guidance files for testing, debugging, deployment, code review, and more, plus hook scripts that auto-post session output to Discord and WordPress. If you use Claude Code across multiple projects and want consistent behavior, this is the pattern to follow.

claude-bakeoff is an A/B testing framework for Claude CLI instruction environments. Run the same task under two different CLAUDE.md files, capture the outputs, and use an LLM-as-judge to score correctness, completeness, code quality, and instruction adherence. Useful for validating that a prompt change actually improves output quality before rolling it out.

Web Apps

groceryGenius is a full-stack grocery management app with price tracking, receipt entry, and map views. Built with React, Express, and PostgreSQL. Still WIP.

valueSortify is an interactive personal values card sort. 83 values, three phases (sort, rank, results), drag-and-drop reordering, keyboard shortcuts, and PDF/CSV/JSON export. Built with React and Framer Motion. Live at pezant.ca/tools/ValueSortify.

iconscribepublic is a design tool built with React, shadcn-ui, and Tailwind.

Browser Userscripts and Bookmarklets

This is the category where I have the most repos. All are Tampermonkey userscripts unless noted otherwise.

youtubeSpeedSetAndRemember remembers your YouTube playback speed across sessions (up to 4x), adds bracket key shortcuts on desktop, and gives you a long-press 2x boost for Shorts on mobile.

ChatGPTCompletionChime and GeminiCompletionChime play an audible chime when ChatGPT or Gemini finishes generating a response, even when the tab is in the background. The ChatGPT version uses a finite state machine to avoid false positives with long-thinking models like o1.

reddit-bottom-sheet-blocker kills the “use the app” nag on Reddit mobile web.

rakutenOfferAutoAdder automates adding Rakuten In-Store Cash Back offers. Expands all sections, clicks every Add button with smart verification, retries missed items, and uses randomized delays to avoid rate-limiting.

aisleOffersFilterClaimandTracking enhances the Aisle offers experience with tag filtering, one-click “Quick Free” filter, batch auto-claiming, location scraping, and persistent claim history with CSV export.

markdownMakerBookmarklet converts any webpage to clean Markdown with a single click. Two modes: instant clipboard copy and visual preview. Works entirely in-browser with no dependencies, even on banking sites.

humblechoice-oneclickclaim and GOGAutoRdeem automate claiming games on Humble Choice and redeeming keys on GOG.

LIScreenshot is a LinkedIn screenshot utility.

Utilities

mic-volume-guard is a PowerShell watchdog that keeps your microphone at 100% volume on Windows. Prevents apps from silently reducing mic gain in the background.

The Common Thread

Most of these exist because I hit a friction point and decided to automate it rather than tolerate it. The userscripts save a few seconds per use, but across hundreds of uses they add up. The AI tooling came from wanting Claude Code to behave consistently and improve over time. The web apps are products I wanted to exist and could not find elsewhere.

Everything is on github.com/npezarro.

What I’ve Built With Claude Code in 4 Weeks

I started using Claude Code CLI extensively on February 27, 2026. Four weeks and roughly 400 commits later, I have an autonomous agent system that surveys my repos every 30 minutes, picks improvements, creates PRs, and posts the results to Discord for me to approve. Here is how that happened, week by week.

Week 1: Just a Coding Assistant (Feb 24 – Mar 1)

Eight commits. I used Claude Code the way most people start: fixing specific bugs. A double basePath issue in one project, a pickling error in another. Surgical, one-off fixes. Nothing systematic.

But two days in, I created a repo called agentGuidance. The idea was simple: if I am going to give Claude instructions every session, those instructions should be version-controlled and consistent. That repo would become the backbone of everything that followed.

Week 2: The Infrastructure Explosion (Mar 2-8)

101 commits across 20+ repos. This was the week I stopped using Claude Code as a tool and started building a system around it.

The first move was propagating a SessionStart hook to every repo I own. Each Claude Code session now fetches centralized rules from agentGuidance on startup. Same guardrails everywhere, maintained in one place.

That same week I built centralDiscord from scratch: a Discord bot that dispatches Claude Code jobs, manages a queue, streams output to channels, and coordinates multiple agents. It crashed on memory issues by day three, got rebuilt with proper fault tolerance, and by the end of the week had streaming progress, a kill command, and metrics tracking.

I also launched three new projects that week: a prompt library, a job scraper, and a cross-LLM context tool. The pattern was becoming clear. I was not just coding with Claude; I was building tools for working with Claude.

Key learning: Discord beat WordPress as the reporting channel. I tried auto-posting sessions as narrative blog posts, but the feedback loop was too slow. Discord gives real-time visibility into what every agent is doing.

Week 3: Stabilization (Mar 9-15)

47 commits. Half the velocity, but the work shifted from building to hardening.

The centralDiscord bot got reliability fixes, metrics backup, and failed request handling. More importantly, I started encoding post-mortems as code. After accumulating stale PRs that caused cascading merge conflicts, I wrote branch hygiene rules directly into agentGuidance. Every time something broke, the fix went into the instruction set so it would not break the same way again.

This is the week I learned that autonomous code generation creates maintenance debt fast. You need guardrails before you need features. Every failure should become a rule.

Week 4: The Autonomous Era (Mar 17-23)

217 commits. The biggest week by far, and a fundamental shift from “developer using Claude Code” to “developer orchestrating multiple Claude Code instances.”

I completed the Local Worker Bridge: an SSH reverse tunnel from my GCP VM through Windows OpenSSH into WSL, letting the Discord bot dispatch Claude jobs to my local PC. Seven distinct bugs in the tunnel chain, all found and fixed in one session. Git Bash vs WSL conflicts, Windows authorized_keys locations, WSL localhost pointing to the wrong network interface.

Then came autonomousDev: a standalone agent on a 30-minute cron that surveys all repos, picks the highest-impact improvement, executes it, creates a PR, and posts results to Discord. In its first day it produced work across six repos: security fixes, npm audit patches, 59 new tests, physics bug fixes, regex corrections, and vulnerability patches.

I built claude-bakeoff for A/B testing different instruction environments against each other. I added a #tasks channel with parameterized templates (!task pr-review, !task deploy). I added a #prompts channel that logs every prompt from every source. The daily roundup analyzes prompt patterns and suggests new task templates. The system watches its own usage and improves itself.

By the end of the week, the autonomous dev agent had completed 68 runs. It was also causing problems: two agents finishing the same work simultaneously, PRs merging without my approval. So I added collision detection and an approval gate. The system is learning its own failure modes, just like I learned mine in weeks 2 and 3.

What I Have Learned

Deduplication is essential. Autonomous agents will redo work unless you give them a shared memory of what has been done. I maintain a completed-work.md that every session checks before starting.

Every failure becomes a rule. Restart storms, divergent branches, WSL localhost misrouting: all encoded in the instruction set as prevention rules. The system gets more reliable over time because its rules grow from real incidents.

Reporting solves coordination. I have six Discord channels dedicated to different aspects of agent activity. The Stop hook ensures every session reports its work. Without visibility, autonomous agents are a black box.

Slim instructions beat dense ones. agent.md went from 511 to 178 lines this week. Dense guidance causes agents to miss rules. Modular files with clear scoping work better.

PR-based deploys over SSH commands. After race conditions with raw git commands, everything flows through GitHub PRs. Safer, more auditable, and the bot can manage merge and deploy as separate approval steps.

The Current Setup

From 8 commits fixing a basePath bug to 393 commits across 34 repos with autonomous agents, in under four weeks.

The stack: centralized guidance fetched on every session start, Stop hooks posting to Discord, autonomous agents on 30-minute cron, a Discord bot routing jobs between VM and local PC, A/B testing for instruction quality, and a job pipeline that scrapes, filters, and generates application materials daily.

It is not done. The approval gates need tightening, the Cowork reporting pipeline does not work from the Chrome extension yet, not all of the Discord reporting works as intended, and the autonomous agent still occasionally picks up work that is not worth doing. But the foundation is solid, and each session makes the system a little smarter than the last one.

Claude Code’s OAuth Login Bug on macOS, and How I Got Around It

I ran claude auth login on macOS and hit a wall. Not a permissions issue, not expired credentials. The OAuth flow itself was broken for Max subscribers.

When you run the login command, Claude Code generates an OAuth URL pointing to platform.claude.com. If you are on a Max or Pro plan (not API), there is a banner that says “Have a Pro or Max plan? Login with your chat account instead” and redirects you to claude.ai. During that redirect, the redirect_uri parameter gets mangled. It goes from:

https://platform.claude.com/oauth/code/callback

to:

https:/platform.claude.com/oauth/code/callback

One missing slash. Authorization fails with:

Redirect URI https:/platform.claude.com/oauth/code/callback is not supported by client.

Digging Into the URL

The OAuth URL that claude auth login generates has a returnTo parameter with the actual authorization endpoint encoded inside it. I decoded that, pulled out the real claude.ai/oauth/authorize URL, and rebuilt it to bypass platform.claude.com entirely.

The key was swapping the redirect_uri to http://localhost:62426/callback, which is how CLI OAuth is supposed to work anyway. The CLI spins up a local server to catch the callback. There is no reason for it to bounce through the platform domain first.

Visited the rebuilt URL directly in the browser. Authorization succeeded first try. The redirect_uri stayed intact because there was no cross-domain redirect to break it.

What Is Actually Broken

The bug is in the URL encoding during the platform.claude.com to claude.ai handoff. When the platform redirects to the chat login flow, it drops a slash from the redirect URI. I confirmed this across multiple fresh claude auth login runs. The code_challenge and state parameters changed each time (proving fresh attempts), but the mangled URI was consistent.

The workaround is straightforward: skip the middleman. Go directly to claude.ai/oauth/authorize with the correct parameters and a localhost callback URI. The platform redirect is unnecessary for CLI authentication.

If You Hit This

If you are a Max or Pro subscriber getting this error on macOS:

  1. Run claude auth login and copy the URL it gives you
  2. Decode the returnTo parameter (URL decode it, or ask any LLM to do it)
  3. Rebuild it as a direct claude.ai/oauth/authorize request with redirect_uri=http://localhost:<port>/callback
  4. Visit that URL in your browser while the CLI is waiting
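The rebuild in steps 2 and 3 can be sketched as a small helper. This assumes the real authorize endpoint is percent-encoded inside a returnTo parameter, as it was in my runs; the exact parameter set on your URL may vary:

```javascript
// Sketch: rebuild a direct claude.ai authorize URL from the CLI's platform URL.
function rebuildAuthUrl(platformUrl, port = 62426) {
  // searchParams.get() percent-decodes the returnTo value once
  const returnTo = new URL(platformUrl).searchParams.get("returnTo");
  if (!returnTo) throw new Error("No returnTo parameter found");
  const authUrl = new URL(returnTo);
  // Point the callback at the CLI's local listener instead of platform.claude.com
  authUrl.searchParams.set("redirect_uri", `http://localhost:${port}/callback`);
  return authUrl.toString();
}
```

Visit the returned URL in a browser while the CLI waits on its local port; because there is no cross-domain hop, the redirect_uri survives intact.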

Fixing the Garmin Sync, Locking Down OAuth, and Adding Password Reset

The runEval project on pezant.ca/runeval had three priorities queued up from a Gemini analysis session, all on the claude/garmin-integration branch. I picked them up and shipped all three.

P0: Garmin API 24-Hour Chunking

The root cause of the failed Garmin data fetch was straightforward — the Health API enforces a strict 86,400-second (24-hour) maximum on time-range queries, and the adapter was requesting 30 days in a single shot. Garmin was responding with a 400 Bad Request.

I replaced the old buildTimeRangeParams() function in lib/garmin/adapter.ts with a buildTimeChunks() system that splits any time range into 24-hour windows. The listActivities() method now iterates through all 30 chunks sequentially (to avoid rate limiting), deduplicates activities by ID across chunk boundaries, and logs progress at each step:

[Garmin Sync] Starting 30-day backfill in 30 chunk(s)
[Garmin Sync] Fetching chunk 1/30: 2026-02-02 → 2026-02-03
[Garmin Sync] Chunk 1: 3 activities (3 unique total)
...
[Garmin Sync] Backfill complete — 47 unique activities

The getActivitySummary() and getActivityStreams() methods got the same treatment — they also used to request 30-day ranges.
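The chunking itself is simple epoch arithmetic. A standalone sketch (buildTimeChunks is the name from the adapter; this illustrative version works in epoch seconds and omits the fetch loop):

```javascript
const DAY_SECONDS = 86400; // the Health API's maximum time range per query

// Split an [startSec, endSec) epoch-second range into windows of at most 24 hours.
function buildTimeChunks(startSec, endSec) {
  const chunks = [];
  for (let t = startSec; t < endSec; t += DAY_SECONDS) {
    chunks.push({ start: t, end: Math.min(t + DAY_SECONDS, endSec) });
  }
  return chunks;
}
```

A 30-day backfill produces exactly 30 chunks; a final partial day becomes a shorter trailing chunk, which is why the dedup-by-ID step across chunk boundaries matters.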

P1: OAuth Session Validation

Both Garmin and Strava OAuth flows had no session check. Anyone could hit /api/garmin/connect or /api/strava/connect and initiate the PKCE flow, and all tokens were stored against a hardcoded demo user via getDemoUser().

I rewired the entire token ownership chain. The connect routes now check for a valid user session at the top of the handler — no session means a 401 Unauthorized. The userId gets embedded into the OAuth state parameter as a base64url-encoded JSON payload alongside the CSRF token: { csrf: "randomId", userId: "cuid123" }. The callback route decodes it, validates the CSRF against the cookie, confirms the session still matches the embedde…

[Response truncated — 4038 chars total. See transcript for complete output.]
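The state scheme described above, base64url-encoded JSON carrying the CSRF token and user ID, can be sketched as follows. The { csrf, userId } field names come from the post; error handling and validation are omitted:

```javascript
// Sketch: pack the CSRF token and userId into the OAuth state parameter.
function encodeState(csrf, userId) {
  return Buffer.from(JSON.stringify({ csrf, userId })).toString("base64url");
}

// The callback route decodes this and checks csrf against the cookie
// and userId against the current session before storing tokens.
function decodeState(state) {
  return JSON.parse(Buffer.from(state, "base64url").toString("utf8"));
}
```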


Logged on March 4, 2026 at 2026-03-04 05:06:27 — Session 55284702-f51d-4f43-873c-bdd30499110c in /home/generatedByTermius

Adding Descriptive Title Rules to the Auto-Post Writing Guide

The hook was already doing the right thing — pulling the first ## heading from the response and using it as the WordPress title. The problem was that the headings themselves were vague. “What I Changed” or “What was wrong and what I fixed” tell you nothing about the actual content.

I added a single rule to the writing style section in agent.md: the first ## heading in every response becomes the post title, so treat it like an article headline. I included examples of good titles (“Propagating Claude Code Hooks to All 30 Repos”) versus bad ones (“What I Changed”, “Summary”, “The Fix”) to make the expectation clear.
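The extraction the hook performs can be sketched like this (the hook itself is a bash script; this is an illustrative equivalent, and the fallback string is assumed):

```javascript
// Sketch: pull the first "## " heading from a Markdown response to use as the post title.
function extractTitle(markdown, fallback = "Untitled session") {
  const match = markdown.match(/^##\s+(.+)$/m);
  return match ? match[1].trim() : fallback;
}
```

Since the first heading becomes the title verbatim, the guidance rule is the real fix: the code cannot turn "What I Changed" into something descriptive, only the agent can.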

No hook changes needed — this was purely a guidance issue. The rule is live now via the SessionStart fetch, so it applies in every environment going forward.


Logged on March 3, 2026 at 2026-03-03 02:41:31 — Session f13068ad-2afe-4c7d-9d8e-dafb80306572 in /home/generatedByTermius/agentGuidance

How I Configured Claude Code to Auto-Document Every Interaction as a WordPress Post

I spent a session configuring Claude Code so that every interaction — across every repo, every environment — starts with the same global rules and automatically documents itself as a WordPress post. Here’s what I built and why.

The Problem

I have 28+ GitHub repos and use Claude Code from multiple environments: the terminal on my server, VS Code, and claude.ai/code (the browser-based sandbox). I wanted two things:

  1. Consistent agent behavior — the same rules and conventions applied everywhere, managed from a single source of truth.
  2. Automatic documentation — every Claude Code interaction logged as a private WordPress draft on my site, ready to review and publish.

Part 1: Global Rules via a Single Source of Truth

I maintain a file called agent.md in my agentGuidance repo. It defines my preferred stack, conventions, and workflow rules. The goal: every Claude Code session fetches the latest version of this file at startup, no matter which repo I’m working in.

The CLAUDE.md Layer

Every repo already had (or now has) a CLAUDE.md file in its root with a simple instruction to fetch the global rules:

curl -s https://raw.githubusercontent.com/npezarro/agentGuidance/main/agent.md

Claude Code automatically loads CLAUDE.md into context at session start. This works everywhere — terminal, IDE, and claude.ai/code. I pushed this file to all 28 repos in one batch.

The SessionStart Hook Layer

CLAUDE.md relies on the AI choosing to run the curl command. To make it truly automatic, I added a SessionStart hook — a shell command that Claude Code executes natively when a session begins, injecting the output directly into context.

In ~/.claude/settings.json (user-level, applies to all projects on this server):

{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash ~/.claude/hooks/fetch-rules.sh",
            "timeout": 15000
          }
        ]
      }
    ]
  }
}

The script fetches the rules with a 10-second timeout and fails gracefully if the network is down.

Covering claude.ai/code

The user-level ~/.claude/settings.json only exists on my server. The browser-based claude.ai/code environment runs in a separate cloud sandbox. To cover that, I committed a project-level .claude/settings.json with the same SessionStart hook to all 28 repos. When any repo is cloned in that sandbox, the hook is right there.

Environment                      Hook fires?           CLAUDE.md loaded?
Terminal / IDE on my server      Yes (user-level)      Yes
claude.ai/code (cloud sandbox)   Yes (project-level)   Yes

Part 2: Auto-Posting Interactions to WordPress

With the rules layer sorted, I wanted every Claude Code turn to automatically create a private WordPress post on this site — a living log of my AI-assisted development work.

How It Works

Claude Code supports a Stop hook that fires every time the assistant finishes responding. I wrote a bash script that:

  1. Reads the hook input (JSON on stdin with session_id, transcript_path, and last_assistant_message)
  2. Parses the transcript JSONL to find the user’s last prompt
  3. Builds a title from the first ~60 characters of the prompt
  4. POSTs a private WordPress post via the REST API with the prompt and response formatted in HTML

Authentication uses a WordPress Application Password with Basic Auth. The script has a 15-second timeout and fails silently — it never blocks the session.
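The title-building and POST steps can be sketched as below. The endpoint path, field names, and "private" status follow the standard WordPress REST API; the environment variable names are placeholders, and the real hook is a bash script:

```javascript
// Sketch: create a private WordPress post from a prompt + response.
// WP_SITE, WP_USER, WP_APP_PASSWORD are placeholders for real credentials.
function buildTitle(prompt, maxLen = 60) {
  const oneLine = prompt.replace(/\s+/g, " ").trim();
  return oneLine.length <= maxLen ? oneLine : oneLine.slice(0, maxLen) + "…";
}

async function postToWordPress(prompt, responseHtml) {
  const auth = Buffer.from(
    `${process.env.WP_USER}:${process.env.WP_APP_PASSWORD}`
  ).toString("base64");
  const res = await fetch(`${process.env.WP_SITE}/wp-json/wp/v2/posts`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Basic ${auth}` },
    body: JSON.stringify({
      title: buildTitle(prompt),
      content: `<blockquote>${prompt}</blockquote>${responseHtml}`,
      status: "private", // drafts stay invisible until reviewed
    }),
  });
  return res.json();
}
```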

The Post Format

Each auto-generated post contains:

  • The user prompt in a blockquote
  • Claude’s full response
  • Session ID and working directory as metadata

Posts are created as private by default, so I can review and edit before publishing.

The Result

With two hooks and a CLAUDE.md in every repo, I now have:

  • One file to edit (agentGuidance/agent.md) to update rules across all 28 repos and all environments
  • Automatic documentation of every Claude Code interaction as a private WordPress draft
  • Zero manual steps — everything fires on session start and session stop

To update my global AI rules, I edit one file. Every future session — on any repo, in any environment — picks it up automatically. And every interaction gets logged here as a post I can curate and share.

Downloading Data from a Locked Google Sheet

The script does four things:

  • URL Parsing: Extracts the spreadsheetId and gid (tab ID) via regex from the active window.location.href.
  • API Interception: Bypasses the sheet’s HTML5 <canvas> rendering engine and fetches raw data directly from Google’s internal Visualization API (/gviz/tq?tqx=out:csv).
  • Memory Blobbing: Converts the plaintext CSV response into a local, client-side memory object (Blob) and assigns it a temporary blob: URI.
  • CSP Evasion: Injects a programmatic <a> element into the DOM using document.createElement() and textContent. This circumvents Google’s strict Trusted Types Content Security Policy (CSP) that blocks innerHTML parsing.

Execution Protocol

  1. Navigate to the target Google Sheet.
  2. Select the specific tab required (ensures the gid parameter in the URL is accurate).
  3. Open browser Developer Tools: Cmd+Option+J (Mac) or Ctrl+Shift+J (Windows).
  4. Paste the script into the Console tab and execute (press Enter).
  5. Review the [SheetExtract] verbose console logs for fetch status and payload size.
  6. Click the floating green ⬇️ Download CSV button injected at the bottom right of the viewport.

console.log("[%s] Init GViz data extraction sequence", "SheetExtract");
(async () => {
  try {
    const match = window.location.href.match(/\/d\/([a-zA-Z0-9-_]+)/);
    const gidMatch = window.location.href.match(/gid=([0-9]+)/);

    if (!match) {
      console.error("[%s] Fatal: Spreadsheet ID not found in URL", "SheetExtract");
      return;
    }

    const sheetId = match[1];
    const gid = gidMatch ? gidMatch[1] : "0";
    const endpoint = `https://docs.google.com/spreadsheets/d/${sheetId}/gviz/tq?tqx=out:csv&gid=${gid}`;

    console.log("[%s] Target Endpoint: %s", "SheetExtract", endpoint);
    console.log("[%s] Initiating fetch request...", "SheetExtract");

    const response = await fetch(endpoint);
    if (!response.ok) throw new Error(`HTTP Error ${response.status}`);

    const csvText = await response.text();
    console.log("[%s] Payload received. Size: %d bytes", "SheetExtract", csvText.length);

    const blob = new Blob([csvText], { type: 'text/csv' });
    const objectUrl = URL.createObjectURL(blob);

    console.log("[%s] Blob URL created: %s", "SheetExtract", objectUrl);
    console.log("[%s] Injecting safe DOM node (bypassing TrustedTypes sink)", "SheetExtract");

    // Construct safe DOM node
    const btn = document.createElement('a');
    btn.href = objectUrl;
    btn.download = `Sheet_${sheetId}_GID_${gid}.csv`;
    btn.textContent = `⬇️ Download CSV (GID: ${gid})`;

    // Apply inline styles
    Object.assign(btn.style, {
      position: 'fixed',
      bottom: '24px',
      right: '24px',
      padding: '16px 24px',
      backgroundColor: '#188038',
      color: '#ffffff',
      fontFamily: 'Roboto, Arial, sans-serif',
      fontSize: '14px',
      fontWeight: '600',
      textDecoration: 'none',
      borderRadius: '8px',
      boxShadow: '0 4px 6px rgba(0,0,0,0.3)',
      zIndex: '2147483647',
      cursor: 'pointer'
    });

    document.body.appendChild(btn);
    console.log("[%s] Success: Download button injected into viewport bottom-right.", "SheetExtract");
  } catch (e) {
    console.error("[%s] Execution failure: %s", "SheetExtract", e.message, e);
  }
})();

Chime on Replit Completion

// ==UserScript==
// @name         Replit Chat Completion Ping (Stop/Working FSM + Airy Harp Chime)
// @namespace    nicholas.tools
// @version      1.4.0
// @description  Chimes when Replit chat finishes. Detects streaming via Stop or "Working..". Uses Sound #4 (Airy harp up-gliss). Rich console logs + control panel (window.ReplitPing).
// @match        https://replit.com/*
// @grant        none
// @run-at       document-idle
// ==/UserScript==

(() => {
  "use strict";

  /* =========================
   * Config
   * ========================= */
  let DEBUG = true;          // high-level logs (state changes, detections)
  let TRACE_SCAN = false;    // very chatty: log detector scans (toggle at runtime)
  const STABILITY_MS_DEFAULT = 200; // require this much time of "no Stop & no Working" before DONE
  let STABILITY_MS = STABILITY_MS_DEFAULT;
  const POLL_MS = 250;

  // Matchers (case-insensitive)
  const STOP_TEXTS = ["stop"];        // exact match (normalized)
  const WORKING_TOKENS = ["working"]; // starts-with; allows Working., Working.., Working…

  /* =========================
   * Pretty Console Logging
   * ========================= */
  const tag = (lvl) => [
    "%cREPLIT-PING%c " + lvl + "%c",
    "background:#121212;color:#00e5ff;padding:1px 4px;border-radius:3px",
    "color:#999",
    "color:inherit"
  ];
  const cI = (...a) => DEBUG && console.log(...tag("ℹ️"), ...a);
  const cS = (...a) => DEBUG && console.log(...tag("✅"), ...a);
  const cW = (...a) => DEBUG && console.log(...tag("⚠️"), ...a);
  const cE = (...a) => DEBUG && console.log(...tag("⛔"), ...a);
  const cT = (...a) => (DEBUG && TRACE_SCAN) && console.log(...tag("🔎"), ...a);

  const nowStr = () => new Date().toLocaleTimeString();

  /* =========================
   * DOM / Text Helpers
   * ========================= */
  const isEl = (n) => n && n.nodeType === 1;
  const isVisible = (el) => {
    if (!isEl(el)) return false;
    const rect = el.getBoundingClientRect?.();
    if (!rect || rect.width === 0 || rect.height === 0) return false;
    const st = getComputedStyle(el);
    if (st.display === "none" || st.visibility === "hidden" || parseFloat(st.opacity || "1") < 0.05) return false;
    return true;
  };

  const norm = (s) => (s || "")
    .replace(/\u2026/g, "...") // ellipsis → three dots
    .replace(/[.\s]+$/g, "")   // trim trailing dots/spaces
    .trim()
    .toLowerCase();

  function findVisibleEquals(tokens) {
    const hits = [];
    const nodes = document.querySelectorAll("span,button,[role='button'],div");
    let scanned = 0;
    for (const el of nodes) {
      scanned++;
      if (!isVisible(el)) continue;
      const txt = norm(el.textContent || "");
      if (tokens.some(tok => txt === tok)) {
        hits.push(el.closest("button,[role='button']") || el);
      }
    }
    cT(`findVisibleEquals scanned=${scanned} hits=${hits.length} tokens=${tokens.join(",")}`);
    return Array.from(new Set(hits));
  }

  function findVisibleStartsWith(tokens) {
    const hits = [];
    const nodes = document.querySelectorAll("span,div,button,[role='button']");
    let scanned = 0;
    for (const el of nodes) {
      scanned++;
      if (!isVisible(el)) continue;
      const txt = norm(el.textContent || "");
      if (tokens.some(tok => txt.startsWith(tok))) {
        hits.push(el);
      }
    }
    cT(`findVisibleStartsWith scanned=${scanned} hits=${hits.length} tokens=${tokens.join(",")}`);
    return Array.from(new Set(hits));
  }

  const isStopVisible    = () => findVisibleEquals(STOP_TEXTS).length > 0;
  const isWorkingVisible = () => findVisibleStartsWith(WORKING_TOKENS).length > 0;
  const isStreamingNow   = () => isStopVisible() || isWorkingVisible();

  /* =========================
   * SOUND #4 — Airy Harp Up-Gliss (HTMLAudio WAV + WebAudio fallback)
   *  - Four soft “pluck” notes rising: C5 → E5 → G5 → C6
   *  - Triangle oscillators with fast attack & gentle decay
   *  - Subtle global low-pass to keep it airy
   * ========================= */
  function makeReplitChimeWavDataURL() {
    const sr = 44100;
    const N  = Math.floor(sr * 1.05); // ~1.05s buffer
    const data = new Float32Array(N);

    // Pluck events
    const freqs = [523.25, 659.25, 783.99, 1046.5]; // C5, E5, G5, C6
    const starts = [0.00, 0.06, 0.12, 0.18];        // seconds
    const A = 0.008;  // Attack seconds
    const D = 0.22;   // Total pluck duration (~decay to near zero)
    const amp = 0.62; // per-note peak

    const tri = (x) => (2 / Math.PI) * Math.asin(Math.sin(x));

    for (let n = 0; n < freqs.length; n++) {
      const f = freqs[n];
      const startIdx = Math.floor(starts[n] * sr);
      const len = Math.floor(D * sr);
      for (let i = 0; i < len && (startIdx + i) < N; i++) {
        const t = i / sr;
        // Envelope: quick attack → gentle exponential decay
        let env;
        if (t < A) {
          env = t / A;
        } else {
          const tau = 0.10; // decay constant
          env = Math.exp(-(t - A) / tau);
        }
        // Triangle pluck
        const s = tri(2 * Math.PI * f * t);
        data[startIdx + i] += amp * env * s;
      }
    }

    // Simple global low-pass (1st-order IIR) for softness (~4.5 kHz)
    const fc = 4500;
    const alpha = (2 * Math.PI * fc) / (2 * Math.PI * fc + sr);
    let y = 0;
    for (let i = 0; i < N; i++) {
      const x = data[i];
      y = y + alpha * (x - y);
      data[i] = y;
    }

    // Gentle soft clip
    for (let i = 0; i < N; i++) {
      const v = Math.max(-1, Math.min(1, data[i]));
      data[i] = Math.tanh(1.05 * v);
    }

    // Pack to 16-bit PCM WAV (mono)
    const bytes = 44 + N * 2;
    const dv = new DataView(new ArrayBuffer(bytes));
    let off = 0;
    const wStr = (s) => { for (let i = 0; i < s.length; i++) dv.setUint8(off++, s.charCodeAt(i)); };
    const w32 = (u) => { dv.setUint32(off, u, true); off += 4; };
    const w16 = (u) => { dv.setUint16(off, u, true); off += 2; };

    wStr("RIFF"); w32(36 + N * 2); wStr("WAVE");
    wStr("fmt "); w32(16); w16(1); w16(1); w32(sr); w32(sr * 2); w16(2); w16(16);
    wStr("data"); w32(N * 2);
    for (let i = 0; i < N; i++) {
      const v = Math.max(-1, Math.min(1, data[i]));
      dv.setInt16(off, v < 0 ? v * 0x8000 : v * 0x7FFF, true);
      off += 2;
    }

    // Base64 encode
    const u8 = new Uint8Array(dv.buffer);
    let b64 = "";
    for (let i = 0; i < u8.length; i += 0x8000) {
      b64 += btoa(String.fromCharCode.apply(null, u8.subarray(i, i + 0x8000)));
    }
    return `data:audio/wav;base64,${b64}`;
  }

  const CHIME_URL = makeReplitChimeWavDataURL();
  const primeAudioEl = new Audio(CHIME_URL);
  primeAudioEl.preload = "auto";

  const AudioCtx = window.AudioContext || window.webkitAudioContext;
  let ctx;
  const ensureCtx = () => (ctx ||= new AudioCtx());

  async function playChime(reason) {
    // Primary: HTMLAudio
    try {
      const a = primeAudioEl.cloneNode();
      a.volume = 1.0;
      await a.play();
      cS(`🔊 Chime (Airy Harp, HTMLAudio) reason=${reason} @ ${nowStr()}`);
      return;
    } catch (e1) {
      cW("HTMLAudio failed; trying WebAudio (Airy Harp)", e1);
    }

    // Fallback: WebAudio version of Sound #4 (Airy Harp)
    try {
      const AC = window.AudioContext || window.webkitAudioContext;
      const c = window.__rp_ctx || new AC();
      window.__rp_ctx = c;
      if (c.state !== "running") await c.resume();

      const t0 = c.currentTime + 0.02;

      // Output chain: low-pass for softness
      const out = c.createGain(); out.gain.setValueAtTime(0.85, t0);
      const lp  = c.createBiquadFilter(); lp.type = "lowpass"; lp.frequency.value = 4500; lp.Q.value = 0.6;
      out.connect(lp); lp.connect(c.destination);

      const pluck = (time, f) => {
        const o = c.createOscillator(); o.type = "triangle"; o.frequency.setValueAtTime(f, time);
        const g = c.createGain();
        // Envelope: fast attack, gentle decay
        g.gain.setValueAtTime(0.0001, time);
        g.gain.exponentialRampToValueAtTime(0.6,  time + 0.008);
        g.gain.exponentialRampToValueAtTime(0.001, time + 0.22);
        o.connect(g); g.connect(out);
        o.start(time); o.stop(time + 0.25);
      };

      const freqs = [523.25, 659.25, 783.99, 1046.5]; // C5, E5, G5, C6
      let cur = t0;
      for (const f of freqs) { pluck(cur, f); cur += 0.06; }

      cS(`🔊 Chime (Airy Harp, WebAudio) reason=${reason} @ ${nowStr()}`);
    } catch (e2) {
      cE("WebAudio play failed", e2);
    }
  }

  // Unlock audio on first interaction/visibility
  const unlock = async () => {
    try { await primeAudioEl.play(); primeAudioEl.pause(); primeAudioEl.currentTime = 0; cI("Audio unlocked via HTMLAudio"); } catch {}
    try { if (AudioCtx) { const c = ensureCtx(); if (c.state !== "running") await c.resume(); cI("AudioContext resumed"); } } catch {}
    window.removeEventListener("pointerdown", unlock, true);
    window.removeEventListener("keydown", unlock, true);
  };
  window.addEventListener("pointerdown", unlock, true);
  window.addEventListener("keydown", unlock, true);
  document.addEventListener("visibilitychange", () => { if (document.visibilityState === "visible") unlock(); });

  /* =========================
   * FSM (Stop/Working-driven)
   * ========================= */
  let sid = 0;
  let s = null;
  let pollId = 0;
  let lastStop = false;
  let lastWork = false;

  const STATE = { IDLE: "IDLE", STREAMING: "STREAMING", DONE: "DONE" };

  function startPoll() {
    if (pollId) return;
    pollId = window.setInterval(tick, POLL_MS);
  }
  function stopPoll() {
    if (pollId) { clearInterval(pollId); pollId = 0; }
  }

  function armStreaming(origin) {
    if (s && s.state !== STATE.DONE) return;
    s = { id: ++sid, state: STATE.STREAMING, lastStableStart: 0, sawStreaming: true };
    cI(`▶️ STREAMING s#${s.id} (origin=${origin}) @ ${nowStr()}`);
    startPoll();
  }

  function maybeComplete() {
    if (!s || s.state === STATE.DONE) return;
    const streaming = isStreamingNow();
    const now = performance.now();

    // Visibility transition logs
    const curStop = isStopVisible();
    const curWork = isWorkingVisible();
    if (curStop !== lastStop) {
      cI(`Stop visibility: ${curStop ? "ON" : "OFF"}`);
      lastStop = curStop;
    }
    if (curWork !== lastWork) {
      cI(`Working visibility: ${curWork ? "ON" : "OFF"}`);
      lastWork = curWork;
    }

    if (!streaming) {
      if (!s.lastStableStart) {
        s.lastStableStart = now;
        cI(`⏳ Stability window started (${STABILITY_MS}ms)`);
      }
      const elapsed = now - s.lastStableStart;
      if (elapsed >= STABILITY_MS) {
        s.state = STATE.DONE;
        cS(`DONE s#${s.id} (no Stop & no Working for ${Math.round(elapsed)}ms) @ ${nowStr()}`);
        playChime(`s#${s.id}`);
        stopPoll();
        s = null;
      }
    } else {
      if (s.lastStableStart) cW("Stability reset (streaming reappeared)");
      s.lastStableStart = 0;
    }
  }

  function tick() {
    const streaming = isStreamingNow();
    if (streaming && (!s || s.state === STATE.DONE)) {
      armStreaming("tick");
    }
    if (s && s.state === STATE.STREAMING) {
      maybeComplete();
    }
  }

  /* =========================
   * Observers & Event hooks
   * ========================= */
  const obs = new MutationObserver((mutations) => {
    let nudge = false;
    for (const m of mutations) {
      if (m.type === "childList") {
        for (const n of [...m.addedNodes, ...m.removedNodes]) {
          if (isEl(n)) {
            const txt = norm(n.textContent || "");
            if (STOP_TEXTS.some(s => txt.includes(s)) || WORKING_TOKENS.some(w => txt.startsWith(w))) { nudge = true; break; }
          }
        }
      } else if (m.type === "attributes") {
        const el = m.target;
        if (!isEl(el)) continue;
        const txt = norm(el.textContent || "");
        if (STOP_TEXTS.some(s => txt.includes(s)) || WORKING_TOKENS.some(w => txt.startsWith(w))) { nudge = true; }
      }
      if (nudge) break;
    }
    if (nudge) {
      cT("Mutation nudged tick()");
      tick();
    }
  });

  function start() {
    if (!document.body) {
      document.addEventListener("DOMContentLoaded", start, { once: true });
      return;
    }
    obs.observe(document.body, {
      childList: true,
      subtree: true,
      attributes: true,
      attributeFilter: ["class", "style", "aria-hidden"]
    });
    startPoll();
    cI("Armed (Stop/Working FSM). Will chime when both disappear (stable).");
    // Initial state snapshot
    lastStop = isStopVisible();
    lastWork = isWorkingVisible();
    cI(`Initial: Stop=${lastStop} Working=${lastWork}`);
    if (lastStop || lastWork) armStreaming("initial");
  }

  if (document.readyState === "loading") {
    document.addEventListener("DOMContentLoaded", start, { once: true });
  } else {
    start();
  }

  /* =========================
   * Runtime Controls (Console)
   * =========================
   * window.ReplitPing.setDebug(true|false)
   * window.ReplitPing.setTrace(true|false)
   * window.ReplitPing.setStability(ms)
   * window.ReplitPing.status()
   * window.ReplitPing.test()
   */
  window.ReplitPing = {
    setDebug(v){ DEBUG = !!v; cI(`DEBUG=${DEBUG}`); return DEBUG; },
    setTrace(v){ TRACE_SCAN = !!v; cI(`TRACE_SCAN=${TRACE_SCAN}`); return TRACE_SCAN; },
    setStability(ms){ STABILITY_MS = Math.max(0, Number(ms)||STABILITY_MS_DEFAULT); cI(`STABILITY_MS=${STABILITY_MS}`); return STABILITY_MS; },
    status(){
      const streaming = isStreamingNow();
      const state = s ? s.state : STATE.IDLE;
      const info = {
        state,
        sessionId: s?.id ?? null,
        stopVisible: isStopVisible(),
        workingVisible: isWorkingVisible(),
        streaming,
        stabilityMs: STABILITY_MS,
        pollActive: !!pollId,
        time: nowStr(),
      };
      cI("Status", info);
      return info;
    },
    async test(){ await playChime("manual-test"); return true; }
  };

})();

A tool for importing from Google Docs that doesn’t suck (Mammoth .docx converter)

I was tired of WordPress’s crappy editor, so I decided to write a post in Google Docs instead.

However, when I copied the content back in, all the images I had carefully added did not carry over. After trying several terrible options (Seraphinite Post .DOCX Source, WordPress.com for Google Docs, and docswrite.com, which didn’t work when I tried it but seems to function fine now after some back and forth with their CEO), I finally tried Mammoth, and it just works, which is great.

  1. Add a post
  2. Look for the Mammoth .docx converter box at the bottom
  3. Upload your file
  4. Wait for it to parse
  5. Click on Insert into editor
  6. Wait for it to finish adding all the content to the editor (my .docx took a few minutes since it had quite a few high-resolution images)
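If you’d rather script the conversion instead of going through the plugin UI, the same converter is available on npm as the mammoth package. A minimal Node sketch (assumptions: you’ve run `npm install mammoth`, and `post.docx` is a hypothetical filename):

```javascript
// docxToHtml: convert a .docx file to an HTML string with mammoth.
// By default mammoth inlines images as data: URLs, so they survive the trip.
async function docxToHtml(path) {
  const mammoth = require("mammoth"); // npm package, loaded lazily
  const result = await mammoth.convertToHtml({ path });
  // result.messages lists anything mammoth couldn't map (e.g. unknown styles)
  for (const m of result.messages) console.warn(`${m.type}: ${m.message}`);
  return result.value; // the HTML string
}

// Usage: docxToHtml("post.docx").then(html => console.log(html));
```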

Japan Recommendations

A quick note on the content below: everything listed here is something I personally enjoyed greatly and would recommend to anyone who shares my tastes. Have fun in Japan!

Tokyo

Tokyo food

  • L’Effervescence
  • Oniku Karyu
  • Cokuun (Coffee Omakase)
  • I’m Donut ?
  • Pizza Bar
  • Cycle
  • IPPUKU&MATCHA
  • Blue Bottle Cafe (surprisingly good Matcha Latte)
  • Parklet Bakery
  • Iki Espresso
  • Le Petit Mec Hibiya (the best pastries I’ve had in Tokyo)

Tokyo things to do

  • Nezu Museum
  • Hamarikyu Gardens
  • Ueno Park (come at golden hour)
  • Tokyo National Museum
  • teamLab Borderless
  • teamLab Planets

Kyoto

Kyoto Food

  • Akagakiya

Kyoto things to do

  • Kinkaku-ji (the Golden Temple)
  • Otagi Nenbutsuji Temple
  • Arashiyama Bamboo Grove (make sure you hike up into the park by the Bamboo Grove for the valley view!)
  • Mt Inari (go early and hike to the top)
  • Kiyomizu-dera Temple (this was the most spectacular temple that I visited)
  • Ruriko-in Temple

Osaka

Osaka Food

Osaka Things to do

  • Osaka Castle (go at sunrise, it’s spectacular!)

Other places

Lake Kawaguchi

Onomichi

  • Shimanami Kaido

Himeji

  • Himeji Castle

Wakayama

  • Tatago Rock

Nara

  • Nara Park

Nikko

  • Kirifuri Falls
  • Shinkyo Bridge
  • Nikko Tamozawa Imperial Villa Memorial Park
  • Nikkō Tōshogū


ChatGPT Chime on Chat Completion (Tampermonkey Script)

The full Tampermonkey script:

// ==UserScript==
// @name         ChatGPT Completion Ping (Composer FSM, background-safe, no-timeout)
// @namespace    nicholas.tools
// @version      5.4.0
// @description  Chime on completion even when window/tab isn't focused. No timeout; FSM: saw Stop → Stop gone + editor empty. Poll + resilient audio.
// @match        https://chat.openai.com/*
// @match        https://chatgpt.com/*
// @grant        none
// @run-at       document-idle
// ==/UserScript==

(() => {
  "use strict";

  /* =========================
   * Logging
   * ========================= */
  const DEBUG = true;
  const log = (...a) => DEBUG && console.log("[COMP-PING]", ...a);
  const t = () => new Date().toLocaleTimeString();

  /* =========================
   * Selectors (composer only)
   * ========================= */
  const COMPOSER_EDITABLE    = '#prompt-textarea.ProseMirror[contenteditable="true"]';
  const COMPOSER_FALLBACK_TA = 'textarea[name="prompt-textarea"]';
  const SEND_BTN             = '#composer-submit-button[data-testid="send-button"]';
  const STOP_BTN             = '#composer-submit-button[data-testid="stop-button"]';

  /* =========================
   * Audio: HTMLAudio primary (WAV data URL), WebAudio fallback
   * ========================= */
  function makeChimeWavDataURL() {
    const sr = 44100, dur = 0.99;
    const notes = [
      { f: 987.77, d: 0.22 }, { f: 1318.51, d: 0.22 },
      { f: 1174.66, d: 0.20 }, { f: 1318.51, d: 0.30 },
    ];
    const gap = 0.055, amp = 0.28;
    const N = Math.floor(sr * dur);
    const data = new Float32Array(N).fill(0);
    let t0 = 0;
    for (const { f, d } of notes) {
      const nSamp = Math.floor(d * sr);
      const start = Math.floor(t0 * sr);
      for (let i = 0; i < nSamp && start + i < N; i++) {
        const env = i < 0.01*sr ? i/(0.01*sr) : (i > nSamp-0.03*sr ? Math.max(0, (nSamp - i)/(0.03*sr)) : 1);
        const s = Math.sin(2*Math.PI*f*(i/sr));
        const s2 = Math.sin(2*Math.PI*(f*1.005)*(i/sr)) * 0.6;
        data[start+i] += amp * env * (0.7*s + 0.3*s2);
      }
      t0 += d + gap;
    }
    const pcm = new DataView(new ArrayBuffer(44 + N*2));
    let off = 0;
    const wStr = (s) => { for (let i=0;i<s.length;i++) pcm.setUint8(off++, s.charCodeAt(i)); };
    const w32  = (u) => { pcm.setUint32(off, u, true); off+=4; };
    const w16  = (u) => { pcm.setUint16(off, u, true); off+=2; };
    wStr("RIFF"); w32(36 + N*2); wStr("WAVE");
    wStr("fmt "); w32(16); w16(1); w16(1); w32(sr); w32(sr*2); w16(2); w16(16);
    wStr("data"); w32(N*2);
    for (let i=0;i<N;i++) { const v = Math.max(-1, Math.min(1, data[i])); pcm.setInt16(off, v<0?v*0x8000:v*0x7FFF, true); off+=2; }
    const u8 = new Uint8Array(pcm.buffer);
    // Chunked base64 encode: spreading the whole ~87 KB buffer into
    // String.fromCharCode at once can exceed the engine's argument limit.
    let b64 = "";
    for (let i = 0; i < u8.length; i += 0x8000) {
      b64 += btoa(String.fromCharCode.apply(null, u8.subarray(i, i + 0x8000)));
    }
    return `data:audio/wav;base64,${b64}`;
  }

  const CHIME_URL = makeChimeWavDataURL();
  const primeAudioEl = new Audio(CHIME_URL);
  primeAudioEl.preload = "auto";

  const AudioCtx = window.AudioContext || window.webkitAudioContext;
  let ctx;
  const ensureCtx = () => (ctx ||= new AudioCtx());

  async function playChime(reason) {
    try {
      const a = primeAudioEl.cloneNode();
      a.volume = 1.0;
      await a.play();
      log(`🔊 DONE (HTMLAudio) ${reason} @ ${t()}`);
      return;
    } catch {}
    try {
      const c = ensureCtx();
      if (c.state !== "running") await c.resume();
      const t0 = c.currentTime + 0.02;
      const master = c.createGain(); master.gain.setValueAtTime(0.9, t0); master.connect(c.destination);
      const lp = c.createBiquadFilter(); lp.type="lowpass"; lp.frequency.value=4200; lp.Q.value=0.6; lp.connect(master);
      const delay = c.createDelay(0.5); delay.delayTime.value=0.18;
      const fb = c.createGain(); fb.gain.value=0.22; delay.connect(fb); fb.connect(delay); delay.connect(master);
      const bus = c.createGain(); bus.gain.value=0.85; bus.connect(lp); bus.connect(delay);

      const seq = [
        { f: 987.77, d: 0.22 }, { f: 1318.51, d: 0.22 },
        { f: 1174.66, d: 0.20 }, { f: 1318.51, d: 0.30 },
      ];
      let cur = t0, gap = 0.055;
      for (const {f,d} of seq) {
        const o1=c.createOscillator(), g1=c.createGain(); o1.type="triangle"; o1.frequency.value=f;
        g1.gain.setValueAtTime(0.0001,cur); g1.gain.exponentialRampToValueAtTime(0.6,cur+0.01); g1.gain.exponentialRampToValueAtTime(0.001,cur+d);
        o1.connect(g1); g1.connect(bus); o1.start(cur); o1.stop(cur+d+0.02);

        const o2=c.createOscillator(), g2=c.createGain(); o2.type="sine"; o2.frequency.setValueAtTime(f*1.005,cur);
        g2.gain.setValueAtTime(0.0001,cur); g2.gain.exponentialRampToValueAtTime(0.35,cur+0.012); g2.gain.exponentialRampToValueAtTime(0.001,cur+d);
        o2.connect(g2); g2.connect(bus); o2.start(cur); o2.stop(cur+d+0.02);

        cur += d + gap;
      }
      log(`🔊 DONE (WebAudio) ${reason} @ ${t()}`);
    } catch {}
  }

  // Prime on user interaction
  const unlock = async () => {
    try { await primeAudioEl.play(); primeAudioEl.pause(); primeAudioEl.currentTime = 0; } catch {}
    try { if (AudioCtx) { const c = ensureCtx(); if (c.state !== "running") await c.resume(); } } catch {}
    window.removeEventListener("pointerdown", unlock, true);
    window.removeEventListener("keydown", unlock, true);
  };
  window.addEventListener("pointerdown", unlock, true);
  window.addEventListener("keydown", unlock, true);
  document.addEventListener("visibilitychange", () => { if (document.visibilityState === "visible") unlock(); });

  /* =========================
   * Composer helpers
   * ========================= */
  const isEl = (n) => n && n.nodeType === 1;
  const visible = (sel) => { const el = document.querySelector(sel); return !!(el && el.offsetParent !== null); };
  const editorEl = () => document.querySelector(COMPOSER_EDITABLE) || document.querySelector(COMPOSER_FALLBACK_TA) || null;
  function editorEmpty() {
    const el = editorEl();
    if (!el) return true;
    if (el.matches('textarea')) return (el.value || '').replace(/\u200b/g,'').trim().length === 0;
    const txt = (el.textContent || '').replace(/\u200b/g,'').trim();
    return txt.length === 0;
  }
  const isStopVisible = () => visible(STOP_BTN);

  /* =========================
   * FSM + background-safe polling (NO TIMEOUT)
   * ========================= */
  let sid = 0;
  let s = null;
  let pollId = 0;
  const STATE = { IDLE:'IDLE', ARMED:'ARMED', CLEARED:'CLEARED', STREAMING:'STREAMING', DONE:'DONE' };

  function stopPoll() { if (pollId) { clearInterval(pollId); pollId = 0; } }

  function startPoll() {
    stopPoll();
    // steady 250ms poll; browsers may throttle in background which is fine
    pollId = window.setInterval(() => tick(true), 250);
  }

  function cancelSession(reason) {
    if (!s) return;
    log(`CANCEL s#${s.id} (${reason})`);
    stopPoll();
    s = null;
  }

  function arm(reason) {
    // Cancel any previous session (no timeout; avoid multiple active)
    if (s) cancelSession("re-ARM");
    s = {
      id: ++sid,
      state: STATE.ARMED,
      sawStop: false,
      sawCleared: editorEmpty(),
      lastStopGoneAt: 0
    };
    log(`ARM s#${s.id} (${reason}) empty=${s.sawCleared} stop=${isStopVisible()} @ ${t()}`);
    startPoll();
    tick();
  }

  function transition(newState, why) {
    if (!s || s.state === STATE.DONE) return;
    if (s.state !== newState) {
      s.state = newState;
      log(`${newState} s#${s.id} (${why}) empty=${editorEmpty()} stop=${isStopVisible()} @ ${t()}`);
    }
  }

  function evaluate() {
    if (!s || s.state === STATE.DONE) return;

    // Editor cleared after send
    if (!s.sawCleared && editorEmpty()) {
      s.sawCleared = true;
      transition(STATE.CLEARED, "editor cleared");
    }

    // Streaming seen
    if (!s.sawStop && isStopVisible()) {
      s.sawStop = true;
      transition(STATE.STREAMING, "stop visible");
    }

    // Stop disappears
    if (s.sawStop && !isStopVisible() && !s.lastStopGoneAt) {
      s.lastStopGoneAt = performance.now();
      log(`STOP-GONE s#${s.id} (detected)`);
    }

    // Completion: saw Stop once AND Stop gone AND editor empty (150ms stability)
    if (s.sawStop && !isStopVisible() && editorEmpty()) {
      const stable = s.lastStopGoneAt ? (performance.now() - s.lastStopGoneAt) : 999;
      if (stable >= 150) {
        transition(STATE.DONE, "stop gone + editor empty");
        playChime(`s#${s.id}`);
        stopPoll();
        s = null;
      }
    }
  }

  const tick = () => { evaluate(); };

  /* =========================
   * Events & Observers
   * ========================= */
  document.addEventListener("click", (e) => {
    const btn = isEl(e.target) ? e.target.closest(SEND_BTN) : null;
    if (!btn) return;
    arm("send-button click");
  }, true);

  document.addEventListener("keydown", (e) => {
    const ed = isEl(e.target) && (e.target.closest(COMPOSER_EDITABLE) || e.target.closest(COMPOSER_FALLBACK_TA));
    if (!ed) return;
    if (e.key !== "Enter" || e.shiftKey || e.altKey || e.ctrlKey || e.metaKey || e.isComposing) return;
    if (document.querySelector(SEND_BTN)) arm("keyboard Enter");
  }, true);

  const obs = new MutationObserver((mutations) => {
    if (!s) return;
    for (const m of mutations) {
      if (m.type === "attributes") {
        const el = m.target;
        if (isEl(el) && (el.id === "composer-submit-button" || el.matches(COMPOSER_EDITABLE) || el.matches(COMPOSER_FALLBACK_TA))) {
          tick();
        }
      }
      if (m.type === "childList") {
        for (const n of m.addedNodes) {
          if (!isEl(n)) continue;
          if (n.matches(STOP_BTN) || n.matches(SEND_BTN) ||
              n.querySelector?.(STOP_BTN) || n.querySelector?.(SEND_BTN) ||
              n.matches(COMPOSER_EDITABLE) || n.matches(COMPOSER_FALLBACK_TA) ||
              n.querySelector?.(COMPOSER_EDITABLE) || n.querySelector?.(COMPOSER_FALLBACK_TA)) {
            tick();
            break;
          }
        }
        for (const n of m.removedNodes) {
          if (!isEl(n)) continue;
          if (n.matches(STOP_BTN) || n.matches(SEND_BTN) ||
              n.matches(COMPOSER_EDITABLE) || n.matches(COMPOSER_FALLBACK_TA)) {
            tick();
            break;
          }
        }
      }
      if (m.type === "characterData") tick();
    }
  });

  function start() {
    obs.observe(document.body, {
      childList: true,
      subtree: true,
      attributes: true,
      attributeFilter: ["data-testid","id","class","style","contenteditable","value"],
      characterData: true
    });
    log("armed (composer FSM, background-safe, no-timeout). Completes on: saw Stop → Stop gone + editor empty.");
  }

  if (document.readyState === "loading") {
    document.addEventListener("DOMContentLoaded", start, { once: true });
  } else {
    start();
  }
})();