
Problems with Vibe Coding pt.2

So, last time I said vibe coding makes me stop thinking, guess what? In certain cases, things are even worse.

## There is no flow state

Normally, when I go "cave mode", a programming session looks like this:

![A flow diagram showing a rough mental model leading to code, refine, and ship, with a sharper mental model feedback loop returning from refine toward the earlier stage.]()

One context, one head, uninterrupted flow. Now, with vibe coding, it looks like this:

![A flow diagram showing write prompt leading to wait, then review, with a loop through argue or re-prompt before moving on to ship.]()

In cave mode, building the mental model doesn't really stop at the rough mental model. I start with one, sure, and when I get stuck I go back to it. But I never leave the context. Every line I write tests an assumption, every edge case sharpens the mental model a bit more. Building the mental model and coding are the same thread.

Vibe coding cuts that thread. I'm not the one typing, so the only window where my mental model can grow is the plan. After that the agent goes off and writes in my name, and my understanding just stops while the codebase doesn't.

## The plan can't see what matters

On small and contained work, AI really is faster, but on anything inside a big monolith, the speedup vanishes and the work just changes shape: the part where I build the mental model is gone, replaced by review, argue, and frankly, cursing the shit out of it. Devs know 🤷‍♂️.

Like I said in pt.1, plan mode isn't upfront thinking. On the surface it looks like it is: the agent reads some files, gives me a plan, I approve, then I grab a coffee or something and let it cook, chill as fuck, right? But that plan was built from whatever fit in the context window, and even today's LLM context windows are still severely limited. They can't hold a whole codebase, not to mention that some coding agents tend to load even less.
So the plan can't see the invariants in my head, the reasoning buried in commit messages, the constraints discussed in Lark threads months ago. Once I approve, a wrong premise could quietly become the foundation.

## And review won't save you

```
 src/api/handlers/orders.ts        | 892 +++++++++++++++++++++++-----------
 src/services/inventory.ts         | 421 ++++++++++++++++++++--------------
 src/db/migrations/0042_orders.sql | 287 ++++++++++++++++++++
 src/components/Dashboard.tsx      | 248 +++++++++++++++--------
 ...
 138 files changed, 4577 insertions(+), 3212 deletions(-)
```

That's what every argue cycle later is trying to correct, and the cost is higher than it looks, because I can't build the mental model at the same time. Prompt-writing and problem-modeling fight for the same brain space. Writing a prompt pulls what's in my head out into words, while building a mental model pulls the problem in. I can't do both at once, and every prompt-wait-output cycle wipes out a bit of the mental model I was holding.

And even if the mental model is somewhat clear, on a large codebase or a huge monolith, I probably won't review the code line by line. In most cases, I won't even look at a single line, yet I still hope everything works as planned. Then I deliver it to the test team, and there goes another nightmare.

## AI is a junior, and stays a junior

It clicked when I started thinking of AI as a junior. Except this junior writes way more lines than any human, never asks when in doubt, makes things up confidently, doesn't remember what I taught yesterday, and doesn't reflect on mistakes.

Teaching a junior is tiring, but at least it goes somewhere. They grow up, and after a year they're part of the system rather than a load on it. Teaching AI goes nowhere: every session starts cold, every fix gets re-broken next week, and the review load that would normally spread across ten juniors all lands on one senior.
## The team ceiling becomes the AI ceiling

If the senior is the only one still holding the mental model, and the senior gets ground down by argue-loops, the team's correction capacity drops to whatever the AI itself can do. Which is: surface-level pattern matching on whatever happens to be in the context window.

AI doesn't see invariants. It can't see why a try/catch was added after an incident two years ago, why a field is non-nullable because a downstream pipeline crashes if it isn't, or why two endpoints exist for what looks like the same data because each serves a different client. These invariants live in heads, in Lark, in commit messages, in the people who left last quarter. AI plans look reasonable and miss them anyway. Then the plan gets approved, shipped, and rewritten three months later by a different agent that also doesn't see them.

![A screenshot of a large system diagram with many service nodes and connections, representing the kind of real-world complexity a plan is supposed to account for.]()

I've watched modules go through three "definitive" rewrites in a year for this exact reason. Every rewrite was locally correct, every rewrite was globally wrong, and none of them talked to each other.

---

## We're trapped

The cost of vibe coding isn't the bad code it produces; bad code is recoverable. The cost is what happens to me while supervising it. Reflection time gets eaten, mental models stop forming, and six months in, the senior is busier and producing shallower work, while the team's actual ceiling is silently dropping toward the coding agent's.

I know there's a balance to strike with this stuff, and I just need time to find it.

The Big Event

For something like this, it seems to have taken everyone more than a week to come to terms with it.

For me, being there on the scene, it left a particularly deep impression: at first I didn't take it seriously, then came the sudden silence, a silence tinged with fear and bewilderment.

When things like this happen, people hear about them, but as long as it isn't happening close by, they don't worry too much. Many years ago, when the "double reduction" policy came down, my friends and I didn't feel it that strongly. This time, though, after all the buildup, the moment of the official announcement still landed with real impact.

Since there is no official information available yet, I can't say here what actually happened. For now, let's just call it the Big Event.

AI Anxiety

Since last year, everyone has been using AI tools intensively: data analysis, document writing, coding, and so on. Without exception we felt the benefits, but the good times didn't last. This year we started hearing about large internet companies doing AI-driven "talent optimization". AI is advancing so fast that many people were still basking in the joy of the productivity boost, and overlooked the fact that they themselves are replaceable.

It seems we live in an era of upheaval, and beyond accepting it there isn't much we can do. An individual is truly tiny compared to the times. The pandemic years ago taught everyone that lesson. Better to look at what happens around us with optimism and accept it calmly. Opportunity is, in the end, fair.

Is Work Still Fun?

AI writes code incredibly fast now, high quality and high speed. I think back to an afternoon more than a decade ago: opening Notepad, typing HTML line by line, still laying things out with clumsy tables, then opening the page in 360 Secure Browser. That primitive, retro way of working, and the shock and delight it brought, is something I rarely feel anymore.

Role changes always come with growing pains and periods of confusion. For a while we still kept the title of "CSS reconstruction engineer", the slicing trade, people who enjoyed the satisfaction of turning a PSD into an interactive web page. That trade quietly disappeared; I can't even remember when it got buried in the river of history. For PMs it was undoubtedly exciting: they could finally stop negotiating with engineers who liked to push back, and just get things done.

Engineers have lost that kind of joy. How do we find a new one? In a transition period, the pain runs deep.

Redesign your Life

With work in transition, life seems due for a change too. You no longer rely on traditional search engines; AI gives highly precise answers. Over the past decade-plus of breakneck internet growth, people got used to grinding, treating a 9 or 10 p.m. finish as the normal way to live. Many people ground themselves into depression and still kept answering DingTalk messages. Now, in this new era, we can no longer out-grind AI on efficiency, so perhaps we need to calm down and do some planning instead. Isn't a slower pace one direction that natural evolution points toward?

After digesting it all, many people have rekindled old plans. Some think: why not go live in a different city for a while, and really experience its daily rhythm of groceries and commutes? Some think it's time to get in shape: jogging, a marathon, mountain trekking? Start with the domestic trails, Wugong Mountain, Tiger Leaping Gorge, the summit of the Three Gorges. It turns out China is so vast, and we've left footprints in so few places. Some think going back to study and learning something new might be even better; don't we always say we miss our student days?

As a kid I loved watching Shaman King, and I especially loved the line Yoh Asakura always said:

When the boat reaches the bridge, it will straighten out on its own.


Problems with Vibe Coding

The more I use coding agents at work, the more I notice one thing: they make you stop thinking too early.

I am not saying that we should avoid using coding agents. I use these mfs a lot for bug fixes, rushed features, and dirty work. The speedup is real, and kinda insane, but something feels off.

## "Code first, think later"

In traditional development, the order is pretty clear. You think about the interface, the data structure, boundary cases, and tests first, and then you write code. Slower, definitely, but at least you have a whole picture in your head and a much better chance of knowing why the thing works.

Now it is the opposite. You ask a coding agent to build on top of an existing project, and let me guess, the prompt is gonna be something like:

> "Hey my boss just told me to do this, implement it."

Then you drop in some markdown or whatever docs look like a feature request, wait for a while, and boom, the page renders, the API responds. It works. Once it works, you probably will not step back and look at the whole thing the coding agent just generated. Then these kinda so-called feature requirements or bug fixes keep coming one by one, again and again, and congrats, now you have a "shit mountain".

| Code base size | Human-led AI coding | Pure vibe coding |
| --- | --- | --- |
| Very small | 12 | 7 |
| Small | 22 | 12 |
| Medium | 32 | 24 |
| Large | 42 | 56 |
| Very large | 52 | 94 |

## Plan mode != Upfront thinking

Upfront thinking is not just writing a todo list or turning on plan mode in a coding agent. Real upfront thinking is modeling. What is the core object? Which values are real state and which are derived? Where is the module boundary? If we get three similar requirements next month, can this shape absorb them? At which layer should the tests exist? And at which point will this system break?

This part is not sexy. It kills your brain cells like hell, and it will not show up in a weekly report. But it is what makes engineering hold up over time.
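One of those modeling questions, which values are real state and which are derived, fits in a few lines. A toy Python sketch (the `Cart` class here is invented purely for illustration): keeping `total` derived instead of stored means it can never drift out of sync with `items`.

```python
from dataclasses import dataclass


@dataclass
class Cart:
    # Real state: the only thing we store.
    items: list[int]

    @property
    def total(self) -> int:
        # Derived value: recomputed on demand, never persisted,
        # so no code path can forget to update it.
        return sum(self.items)


cart = Cart(items=[3, 4])
print(cart.total)  # 7
```

Storing `total` as a second field would "work" too, right up until the first refactor that updates one field and not the other.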
## Technical debt, still paid by humans

Current coding agents are pretty much all LLM-based. They all have context limits, which means they forget things. Today you patch a module and push one commit that looks like `+18914 / -7986`. Tomorrow you say, "we can clean it up later." But cleanup rarely happens, especially when you are working in a team. The same module might even get reused by other humans or coding agents in no time. And nobody will know why something exists or why a simple refactor breaks everything after just a few weeks. Then you start using a coding agent to debug it. It reads the git history, sees the commit author, and says, "huh, it was you!" 🤡 Nah fuck off.

## Old-school still teaches

I keep seeing people online claiming that zero basics and a few weeks of vibe coding are enough to build production-ready apps. Some even have the nerve to walk into interviews like that 🤷. I do not deny that people learn fast, but getting something to run and building a system are two different things. Interface design, data modeling, complexity control, and technical debt intuition haven't really become outdated. If anything, they matter even more now. Implementation is cheaper, so the decisions that keep a system clean are more valuable. Coding agents can write a lot of code, but they do not know what to keep simple, what not to do, and what will be painful to change later.

## Final thoughts

I will keep using these tools. They are too useful to ignore. But the easier it becomes to say "just make it work first", the more I need to remind myself not to skip the thinking part.

Probably the Last Time I Change Blog Engines

The timeline is worth writing down:

- 2017, PHP
- 2018, Jekyll
- 2019, Hexo
- 2024, Astro
- 2026, Self-Built

This didn't happen overnight. Over the past few months, if you could see the commit history of this blog's repository, you'd notice I've been deleting things the whole time: unnecessary styles, unnecessary dependencies, unnecessary middle layers. Last week I even stripped out Tailwind (pro-layoffs 🤡). By the end of all that deleting, I realized the biggest layer was still there: the framework itself. Having taken subtraction this far, continuing to patch around the framework seemed pointless, so I cut the framework too.

So the blog moved from Astro to a self-built engine, running on Bun.

## Performance

Running the same build for the Astro version and the new engine on the same machine:

| Metric | Astro | Self-Built | Delta |
| --- | --- | --- | --- |
| node_modules | 461 MB | 243 MB | -47% |
| Build Time | 12.1s | 702ms | -94% |
| Build Output | 1.9 MB | 1.8 MB | -4% |
| Homepage Size (brotli) | 17.9 KB | 9.7 KB | -46% |
| Homepage Files | 5 | 3 | -40% |

The local build used to take 1.6 seconds; then I removed HTML minification entirely. The built HTML grew by 3.5%, but total build time dropped from 1.6 seconds to around 700 milliseconds. If I also pulled Shiki out completely, the build would take about 110ms (a 99% reduction versus Astro), but then the blog would lose its pretty code highlighting, and 700ms is perfectly acceptable (smug face). On Cloudflare Workers, the Astro version's build stage took about 28 seconds; this self-built engine takes 2.

The dependency list in `package.json` shrank a lot too. `dependencies + devDependencies` went from `25` entries to `11`; counting only runtime `dependencies`, there are just 3, and two of them aren't even frontend-related.

```json
{
  "dependencies": {
    "@upstash/qstash": "^2.9.0",
    "@upstash/redis": "^1.36.1",
    "pangu": "^7.2.0"
  },
  "devDependencies": {
    "@biomejs/biome": "^2.3.13",
    "@types/bun": "^1.3.5",
    "chalk": "^5.6.2",
    "dotenv": "^17.2.3",
    "enquirer": "^2.4.1",
    "markdown-it": "^14.1.0",
    "shiki": "^4.0.2",
    "wrangler": "^4.70.0"
  }
}
```

These numbers say something very direct: Astro was doing a lot of work my blog never needed.

The new engine is genuinely simple: `markdown-it` parses Markdown, `shiki` highlights code, template functions assemble pages, `Bun.serve()` handles local development, and a build script emits static files. No Vite, no Rollup, no hydration, no extra content system.

Another very practical change: the build output finally became predictable. The homepage used to look like a single page, but behind it were island runtimes, renderer chunks, shared chunks; the real size was never obvious. Now it's direct: the homepage is plainly three files, HTML, CSS, and JS, with no other runtime hiding behind it.

This rewrite also fixed something that had always been awkward in the Astro era: `atom.xml`. MDX content couldn't flow naturally into the feed, so the feed was a separately maintained side branch: custom components had to be converted by hand, HTML needed extra cleaning, URLs needed extra fixing. Now the content itself is Markdown, the feed consumes Markdown directly, and only custom content blocks degrade to a Markdown-friendly version. How a page renders and how the feed degrades are decided by the same interpreter, instead of the body having one set of logic while RSS quietly grows another.

## Motivation

I can feel it more and more clearly: AI is already changing the cost structure of abstraction layers. In a low-complexity scenario like a blog, many of the engineering benefits frameworks provide are no longer as good a deal as they used to be.

The rewrite itself was mostly done by Codex. It took about 3 hours to rewrite the whole site from scratch, with source-level changes of roughly `6888` lines added and `6344` lines removed. That made me rethink the value of frameworks. The trade used to make sense: give up a little performance for a more maintainable engineering structure. Template systems, component models, routing conventions, content schemas, these all exist, at bottom, to help humans understand and modify code more reliably.

But AI coding breaks that balance. To a coding agent like Codex, a hand-written HTML template function is no harder to understand than an Astro component. It can read straight down through the Markdown, templates, styles, and scripts, then change the specific piece. Many abstractions that existed to lower human maintenance costs are less necessary in this setting.

That doesn't mean frameworks are useless. Quite the opposite: with AI around, I think the problems frameworks should actually solve have become clearer. Not inventing a fancier template syntax, but nailing down boundaries, constraints, validation, caching, and output organization. AI can learn syntax sugar quickly; unclear boundaries, unpredictable build output, and degradation held together by patches are the real problems. For complex applications, multi-person collaboration, and long-lived products, frameworks are still worth it.

## Finally

So this is probably really the last switch. The system is now too simple to be worth fiddling with any further; what deserves the effort next is the content itself.

To close, please enjoy this graceful build output ⚡:

![video]()
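As a footnote, the entire pipeline described above (parse Markdown, run it through a template function, emit a page) is small enough to sketch. This toy is in Python rather than the Bun and TypeScript the real engine uses, and its renderer handles only `#` headings and plain paragraphs, purely for illustration; the real engine delegates parsing to `markdown-it`.

```python
def render_markdown(src: str) -> str:
    # Toy renderer: split on blank lines, turn "# " blocks into <h1>,
    # everything else into <p>. Stands in for markdown-it.
    html = []
    for block in src.strip().split("\n\n"):
        if block.startswith("# "):
            html.append(f"<h1>{block[2:]}</h1>")
        else:
            html.append(f"<p>{block}</p>")
    return "\n".join(html)


def page_template(title: str, body: str) -> str:
    # Template function: assembles the final document. No framework,
    # no hydration, just string concatenation.
    return (
        "<!doctype html><html><head>"
        f"<title>{title}</title></head>"
        f"<body>{body}</body></html>"
    )


post = "# Hello\n\nThis is the whole build step."
print(page_template("Hello", render_markdown(post)))
```

A build script is then just this function applied to every Markdown file, with the result written to disk.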

Desktop notifications for Codex CLI and Claude Code

## Context

This setup was tested on my own machine with:

- `Codex CLI 0.113.0`
- `Claude Code 2.1.72`
- `macOS 26.3.1 (25D2128)`
- `arm64` Apple Silicon

![A macOS notification from Codex CLI with the subtitle Notification setup]()

---

## Start with Claude Code’s official setup

`Claude Code` already documents the two parts you need:

- terminal notifications and terminal integration
- hooks for `Notification` and `Stop`
- [Hooks reference](https://code.claude.com/docs/en/hooks)
- [Hooks guide](https://code.claude.com/docs/en/hooks-guide)
- [Terminal config](https://code.claude.com/docs/en/terminal-config)

That is the right place to start. On macOS, the most obvious first implementation is also the simplest one: a tiny `osascript` wrapper.

File: `$HOME/.claude/notify-osascript.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
osascript -e "display notification \"$MESSAGE\" with title \"Claude Code\"" >/dev/null 2>&1 || true
```

And wire it into Claude’s hooks:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Claude Code needs your attention'" }
        ]
      }
    ]
  }
}
```

This worked, but only technically.

- Clicking the notification did not cleanly bring me back to the terminal app.
- There was no grouping, so notifications piled up.
- Once terminal-native notifications entered the picture, especially in `Ghostty`, duplicate alerts got annoying.

That was the point where `terminal-notifier` became the better base layer.
## Why I switched to terminal-notifier

The official repo is here:

- [julienXX/terminal-notifier](https://github.com/julienXX/terminal-notifier)

Install it with Homebrew:

```bash
brew install terminal-notifier
```

Then verify it:

```bash
which terminal-notifier
terminal-notifier -help | head
```

The three features that made it worth switching:

- `-activate`, so clicking the notification can bring my terminal app to the front
- `-group`, so I can keep one live notification per project instead of stacking old ones
- better control over subtitle, sound, and macOS notification-center behavior

---

## A shared notification helper

Before touching either tool, create one shared helper:

```bash
mkdir -p "$HOME/.local/bin"
```

File: `$HOME/.local/bin/mac-notify.sh`

```bash
#!/bin/bash
set -euo pipefail

TITLE="${1:?title is required}"
MESSAGE="${2:-}"
SUBTITLE="${3:-}"
GROUP="${4:-}"
SOUND="${5:-Submarine}"

# Map the current terminal to its macOS bundle id for click-to-focus.
case "${TERM_PROGRAM:-}" in
  ghostty) BUNDLE_ID="com.mitchellh.ghostty" ;;
  iTerm.app) BUNDLE_ID="com.googlecode.iterm2" ;;
  Apple_Terminal) BUNDLE_ID="com.apple.Terminal" ;;
  vscode) BUNDLE_ID="com.microsoft.VSCode" ;;
  cursor) BUNDLE_ID="com.todesktop.230313mzl4w4u92" ;;
  zed) BUNDLE_ID="dev.zed.Zed" ;;
  *) BUNDLE_ID="" ;;
esac

TERMINAL_NOTIFIER=""
if [ -x /opt/homebrew/bin/terminal-notifier ]; then
  TERMINAL_NOTIFIER="/opt/homebrew/bin/terminal-notifier"
elif command -v terminal-notifier >/dev/null 2>&1; then
  TERMINAL_NOTIFIER="$(command -v terminal-notifier)"
fi

if [ -n "$TERMINAL_NOTIFIER" ]; then
  ARGS=( -title "$TITLE" -message "$MESSAGE" -sound "$SOUND" )
  if [ -n "$SUBTITLE" ]; then ARGS+=(-subtitle "$SUBTITLE"); fi
  if [ -n "$GROUP" ]; then ARGS+=(-group "$GROUP"); fi
  if [ -n "$BUNDLE_ID" ]; then ARGS+=(-activate "$BUNDLE_ID"); fi
  "$TERMINAL_NOTIFIER" "${ARGS[@]}"
  exit 0
fi

# Fallback: osascript, with quotes and backslashes escaped.
SAFE_MESSAGE="${MESSAGE//\\/\\\\}"
SAFE_MESSAGE="${SAFE_MESSAGE//\"/\\\"}"
SAFE_SUBTITLE="${SUBTITLE//\\/\\\\}"
SAFE_SUBTITLE="${SAFE_SUBTITLE//\"/\\\"}"
osascript -e "display notification \"$SAFE_MESSAGE\" with title \"$TITLE\" subtitle \"$SAFE_SUBTITLE\" sound name \"$SOUND\"" >/dev/null 2>&1 || true
```

Make it executable:

```bash
chmod +x "$HOME/.local/bin/mac-notify.sh"
```

I scope notification groups by tool and project, not by message. That gives me one live `Claude Code` notification and one live `Codex CLI` notification per repo instead of a growing stack.

### How click-to-focus works

The key line is:

```bash
-activate "$BUNDLE_ID"
```

`terminal-notifier` accepts a macOS bundle id and activates that app when the notification is clicked. I map the common values from `TERM_PROGRAM`:

- `com.mitchellh.ghostty`
- `com.googlecode.iterm2`
- `com.apple.Terminal`
- `com.microsoft.VSCode`
- `com.todesktop.230313mzl4w4u92` for Cursor
- `dev.zed.Zed`

This does not target one exact split or tab. It just brings the app to the front, which is good enough for this workflow.

---

## Claude Code: attention notifications and completion notifications

I split notifications into two categories:

- `Notification`: Claude needs me to do something, like approve a permission request or answer a prompt
- `Stop`: the main agent finished responding

### Claude notification script

File: `$HOME/.claude/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
PROJECT_DIR="${PWD:-$HOME}"
PROJECT_NAME="$(basename "$PROJECT_DIR")"
[ "$PROJECT_NAME" = "/" ] && PROJECT_NAME="Home"
# Stable per-project group id derived from the project path.
PROJECT_HASH="$(printf '%s' "$PROJECT_DIR" | shasum -a 1 | awk '{print $1}' | cut -c1-12)"
GROUP="claude-code:${PROJECT_HASH}"

"$HOME/.local/bin/mac-notify.sh" "Claude Code" "$MESSAGE" "$PROJECT_NAME" "$GROUP"
```

```bash
chmod +x "$HOME/.claude/notify.sh"
```

### Claude hooks configuration

File: `$HOME/.claude/settings.json`

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "permission_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Permission needed'" }
        ]
      },
      {
        "matcher": "idle_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Waiting for your input'" }
        ]
      }
    ]
  }
}
```

If you do not care about different notification types, an empty matcher `""` is enough.

One detail worth remembering: Claude snapshots hooks at startup. If changes do not seem to apply, restart the session. Also check macOS notification permissions if nothing shows up.

---

## Codex CLI: completion notifications

For `Codex CLI`, the mechanism is not `hooks`. It is `notify`.

Official docs:

- [Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [Configuration Reference](https://developers.openai.com/codex/config-reference)

As of `2026-03-10`, Codex documents external `notify` for supported events like `agent-turn-complete`. So in practice:

- completion notifications: yes
- Claude-style permission notifications through the same external script: no

Approval reminders in Codex are a separate `tui.notifications` problem.
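Distilled down, the decision the notify script has to make is small. This hypothetical Python fragment mirrors just the filtering step, using the payload field names from the `agent-turn-complete` event (`type`, `cwd`, `last-assistant-message`); the values themselves are made up for illustration.

```python
import json

# Hypothetical example of what Codex passes as argv[1]; real payloads
# carry more fields, but these are the ones the filter cares about.
raw = json.dumps({
    "type": "agent-turn-complete",
    "thread-id": "t-123",
    "cwd": "/tmp/demo-project",
    "last-assistant-message": "Refactor finished, all tests pass.",
})


def should_notify(payload: dict) -> bool:
    # Only turn-completion events produce a desktop notification;
    # every other event type is silently ignored.
    return payload.get("type") == "agent-turn-complete"


payload = json.loads(raw)
if should_notify(payload):
    # Subtitle and message mirror what the full script hands to mac-notify.sh.
    subtitle = payload["cwd"].rsplit("/", 1)[-1]
    message = payload["last-assistant-message"]
    print(f"[Codex CLI] {subtitle}: {message}")
    # prints: [Codex CLI] demo-project: Refactor finished, all tests pass.
```

The full script that follows adds the parts this sketch leaves out: deduplication against Codex App sessions and the per-project grouping.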
### Codex notify script

File: `$HOME/.codex/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

PAYLOAD="${1:-}"
[ -n "$PAYLOAD" ] || exit 0

python3 - "$PAYLOAD" <<'PY'
import json
import pathlib
import sqlite3
import subprocess
import sys
import zlib
from datetime import datetime, timezone

CODEX_HOME = pathlib.Path.home() / '.codex'


def log_skip(reason: str, payload: dict, **extra: object) -> None:
    log_path = CODEX_HOME / 'notify-filter.log'
    data = {
        'ts': datetime.now(timezone.utc).isoformat(),
        'reason': reason,
        'client': payload.get('client'),
        'thread-id': payload.get('thread-id'),
        'cwd': payload.get('cwd'),
    }
    data.update(extra)
    with log_path.open('a', encoding='utf-8') as fh:
        fh.write(json.dumps(data, ensure_ascii=True) + '\n')


def get_thread_originator(thread_id: str) -> tuple[str, str]:
    db_path = CODEX_HOME / 'state_5.sqlite'
    if not db_path.exists():
        return '', ''
    try:
        with sqlite3.connect(db_path) as conn:
            cur = conn.cursor()
            cur.execute('select rollout_path, source from threads where id = ?', (thread_id,))
            row = cur.fetchone()
    except Exception:
        return '', ''
    if not row:
        return '', ''
    rollout_path, source = row
    if not rollout_path:
        return '', source or ''
    try:
        first_line = pathlib.Path(rollout_path).read_text(encoding='utf-8', errors='ignore').splitlines()[0]
        payload = json.loads(first_line).get('payload', {})
    except Exception:
        return '', source or ''
    return (payload.get('originator') or '').strip(), source or ''


try:
    payload = json.loads(sys.argv[1])
except Exception:
    raise SystemExit(0)

if payload.get('type') != 'agent-turn-complete':
    raise SystemExit(0)

client = (payload.get('client') or '').strip().lower()
if client and ('app' in client or client == 'appserver'):
    log_skip('skip-app-client', payload)
    raise SystemExit(0)

thread_id = (payload.get('thread-id') or '').strip()
if thread_id:
    originator, source = get_thread_originator(thread_id)
    if originator == 'Codex Desktop':
        log_skip('skip-desktop-originator', payload, originator=originator, source=source)
        raise SystemExit(0)

cwd = payload.get('cwd') or ''
subtitle = pathlib.Path(cwd).name if cwd else 'Task completed'
message = (payload.get('last-assistant-message') or 'Task completed').replace('\n', ' ').strip()
if not message:
    message = 'Task completed'

if cwd:
    group = 'codex-cli:' + format(zlib.crc32(cwd.encode('utf-8')) & 0xFFFFFFFF, '08x')
else:
    group = 'codex-cli:' + (payload.get('thread-id') or 'default')

subprocess.run(
    [
        str(pathlib.Path.home() / '.local' / 'bin' / 'mac-notify.sh'),
        'Codex CLI',
        message[:180],
        subtitle,
        group,
    ],
    check=False,
)
PY
```

```bash
chmod +x "$HOME/.codex/notify.sh"
```

### Codex config

File: `$HOME/.codex/config.toml`

```toml
notify = ["/Users/you/.codex/notify.sh"]
```

Use any absolute path you want. I keep the script under `~/.codex/`.

---

## If you use Ghostty, disable terminal-native desktop notifications

I hit one more annoying edge case in `Ghostty`: duplicate notifications. What happened was:

- my script sent a notification through `terminal-notifier`
- `Ghostty` also surfaced a terminal-native desktop notification

That produced two macOS notifications for one event. On my machine, the clean fix was to keep `terminal-notifier` as the only notification channel and disable Ghostty’s terminal-native desktop notifications:

File: `~/Library/Application Support/com.mitchellh.ghostty/config`

```plaintext
desktop-notifications = false
```

Why I prefer this setup:

- `terminal-notifier` gives me `-activate`, so click-to-focus still works
- `terminal-notifier` gives me `-group`, so notifications stay scoped per project
- both `Claude Code` and `Codex CLI` behave the same way

Ghostty’s config docs describe `desktop-notifications` as the switch that lets terminal apps show desktop notifications via escape sequences such as `OSC 9` and `OSC 777`. Turning it off avoids the extra notification layer.

---

## If you also use Codex App

This is the part that bit me. At first I assumed filtering by the `client` field would be enough. It was not.
On my machine, some sessions started from `Codex App` looked like this in local session metadata:

```json
{
  "originator": "Codex Desktop",
  "source": "vscode"
}
```

That creates a duplicate-notification problem:

- Codex App shows its own notification
- the local CLI `notify` script can still fire
- I get duplicate notifications for the same task

So the script does two things:

1. fast path: skip obvious app-like `client` values
2. fallback: read `thread-id` from the `notify` payload, query `~/.codex/state_5.sqlite`, load the first `session_meta` line, and skip if `originator == "Codex Desktop"`

That is why the script above checks local thread metadata instead of trusting only `client`. I also log skipped events to:

```text
~/.codex/notify-filter.log
```

That makes debugging much easier if Codex changes its session metadata format later.

> This part is based on observed local behavior, not on a stable public contract from the docs. If OpenAI changes how Codex App identifies local sessions in future versions, the filter may need a small update.

---

## References

- [OpenAI Codex Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [OpenAI Codex Configuration Reference](https://developers.openai.com/codex/config-reference)
- [Anthropic Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks)
- [Anthropic Claude Code Hooks Guide](https://code.claude.com/docs/en/hooks-guide)
- [Anthropic Claude Code Terminal Configuration](https://code.claude.com/docs/en/terminal-config)
- [terminal-notifier](https://github.com/julienXX/terminal-notifier)

Dating App Sucks Pt.2

Ok here we go again. I think I've finally figured out the scariest thing about dating apps. They do actually turn finding love into a fucking job search.

> Every date feels like a business meeting or something, no sparks, pure cringe.

Think about it, we fill out our "resumes" with our best photos and wittiest bios. We list our "desired positions" in the filters. We swipe through "candidates" hoping to get a "decent offer". The whole thing is an HR pipeline with better lighting.

But love is the exact opposite of a job search, which follows logic. Love? Personally, I think there is no logic in love. Love is a bias, a fucking tyranny. The bias is that you only want one specific person to do the things literally anyone could do. The tyranny is that you pour all your emotions, irrationally, recklessly, entirely onto another human being.

And dating apps have always given me this weird feeling: love obtained through this process feels so bland it's almost offensive. If I were a planet, this whole approach would be like some engineer calculated the perfect speed, angle, and mass, then launched another planet at precisely the right time so we'd form a nice, stable binary star system. How romantic. How efficient, how abso-fucking-lutely dead inside.

What I want is a rogue planet hurtling toward me at full speed out of nowhere in the middle of the void. The moment we touch, atoms from two entirely separate worlds are forced into lattices they were never meant to share. Molecular bonds snap, shatter, and reform into something unrecognizable. The pressure breeds temperatures that fuse nuclei into heavy, unnamed elements that no periodic table has ever seen, existing for a few picoseconds before decaying into something else entirely. Oceans of molten rock erupt outward, entire crusts peeled off like skin, shockwaves rippling through mantles at speeds no device could ever measure. What used to be two worlds is now a single, blinding wound in space.
Some debris escapes into strange new orbits. The rest? Fuses together so tightly that nothing, not time, not entropy, can pull it apart, until our one last atom is annihilated with the heat death of the universe.

I'm not saying dating apps are pure evil; you could still meet someone real on there, the odds exist. But what's truly terrifying about these things is that they teach you how to NOT invest. Everyone on there wants low-risk love. A guaranteed return with minimal downside. But since when has that ever been how love works?

I've seen people around me become professional swipers. Always chatting, always got girls around them. And then what? This one's family background isn't great. That one's not pretty enough. Another one said something weird at dinner that gave them the "ick". Next. Next. Next. Bro, stop cos-ing a fucking conveyor belt.

Being overly rational in love is a slow way to lose everything. The second anything feels slightly off, they're gone. No friction allowed. But no friction means no sparks either. They end up like the guy in Socrates' wheat field parable, walking through the field, always convinced a bigger stalk is just ahead, waiting, but never actually picking one.

And the field does end. It very much does end.

Uninstalled.

The Cursor Moment in Music Production

I've been thinking about this for a while. Cursor didn't change programming because it could write code. It changed programming because it **made real work faster while keeping every line editable**. That's the key.

So when does music production get its Cursor moment?

---

## My imagination of this AI DAW

Not magic. **Delegation with control.** And the difficulty scales fast.

**Level 1:** "Generate a 4-bar piano MIDI with emotion." Already harder than it sounds. Pitch, velocity, note length, micro-timing, articulation, envelopes. Emotion isn't metadata, it's embedded in low-level decisions. Like writing a small utility function. Simple scope, high quality bar.

**Level 2:** "Generate a 16-bar violin MIDI that matches the drum." Everything from Level 1, plus context. Groove awareness, phrasing, rhythmic interaction. The model has to listen. Like adding a feature that integrates with an existing module.

**Level 3:** "Generate a sequence using Serum, make bars 8-16 flow, sidechain from track 2, match the vibe." This is the inflection point. Now the AI needs full DAW access, third-party plugin knowledge, routing logic, arrangement continuity, aesthetic coherence. Plugins become libraries. You need something like documentation context for tools, not just parameters. Multi-system engineering, not generation.

**Level 4:** "Generate a clean vocal track based on the whole song." Lyrics, melody, phrasing, emotion, refinement at the word and timing level. Like adding a major feature to a large codebase. One-shot attempts will fail. But with a human in the loop, reviewing, steering, refining, this becomes feasible. The AI drafts, the human produces.

**Level 5:** "Give me fire." This must fail. Just like "make me Facebook" in coding, the spec is undefined. Taste *is* the task. Neither humans nor AI can guarantee this.

---

## Who could even make this

Building a brand-new DAW? Dead end. Producers are deeply locked into their tools.
Switching DAWs isn't like switching editors; it means relearning muscle memory, mental models, creative habits. For many producers, it's practically impossible. So any Cursor moment has to either sit on top of existing DAWs or deeply integrate with them.

**Splice** has an interesting edge. Not DAW engineering, but data. Cross-DAW usage, massive libraries, user behavior at scale. Its natural position is as an intelligence layer, something like an LLM for music production that other tools call into.

**Apple and Logic Pro** already ship features that hint at an agentic future. Session players that suggest MIDI, react to reference audio, generate parts from scratch. Apple has vertical integration: hardware, OS, DAW. It can ship something real. But it's also closed. A Cursor-like ecosystem thrives on extensibility, not just polished features.

**Ableton Live** is interesting for a specific reason: its project files are XML-based. That means sessions are structured, serializable, writable. In principle, an AI can already read and modify a Live project the way a coding agent reads and edits source files. The blocker isn't the format. It's the model. The IDE substrate exists. What's missing is a music-production-focused base model that understands intent, taste, and workflow, not just structure.

---

## Why this hasn't happened yet

Three hard blockers.

**No clear "text of music."** Code has text. Serializable, diffable, composable. That's why LLMs worked so well, so fast. Music doesn't have a single equivalent. MIDI is editable but incomplete. Audio is complete but opaque. Without a clear fundamental representation, everything above it becomes fragile.

**No production-native base model.** Cursor didn't invent intelligence. It orchestrated a strong base model. There's no equivalent yet for music production, one that understands MIDI, audio, arrangement, plugins, mixing, and taste as a unified domain. Current models generate outputs, they don't reason inside workflows.
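The Ableton point above is easy to make concrete. Live's `.als` project files are, as commonly documented, gzip-compressed XML, so the read-modify-write loop a coding agent runs on source files works on a session too. A minimal Python sketch; the element names below are a made-up toy, not Live's real (much deeper) schema:

```python
import gzip
import os
import tempfile
import xml.etree.ElementTree as ET

# Write a toy stand-in for a Live project: gzip-wrapped XML,
# the same container format .als files use.
path = os.path.join(tempfile.mkdtemp(), "demo.als")
with gzip.open(path, "wb") as f:
    f.write(b'<Ableton><Track Name="Drums"/><Track Name="Bass"/></Ableton>')

# Read, modify, write back: the same loop a coding agent runs on a repo.
with gzip.open(path, "rb") as f:
    root = ET.fromstring(f.read())
for track in root.iter("Track"):
    if track.get("Name") == "Bass":
        track.set("Name", "Sub Bass")
with gzip.open(path, "wb") as f:
    f.write(ET.tostring(root))
```

The format, in other words, is agent-readable today; what's missing is a model that knows which of those edits make the track better.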
**Locked ecosystems.** There's no VS Code-level DAW that is open and dominant. DAWs are closed, fragmented, deeply personal. That pushes AI to the plugin layer, where integration is safer and optional. That's why we see "AI inside plugins" everywhere, and almost nowhere at the DAW core. --- ## Where we are now Here's something I made in 30 minutes, mostly Splice samples: [Piece made in 30 min]() ![Ableton Live project screenshot]() This is probably the fastest way to get something production-ready with a traditional workflow. But let's be honest, the vibe factor is low. It's assembled, not created. What's not AI here? Pretty much everything that matters. Sidechain, dialed in by hand. Reverb, tweaked until it sat right. Arrangement, mixing, mastering, all me. And what kind of AI do we have now in music production? Splice. It uses AI to find sounds faster, made matching tones easier. Real gains, but still operating inside the traditional production phase. Plugins. Pitch correction, noise reduction, vocal tuning, loudness matching. These are genuinely useful. They save time. But they don't change how music is made, they just speed up tasks inside the same old workflow. Then there are the full-track generators. Suno, Udio, you name it. They can spit out complete songs, sometimes with lyrics, and honestly the results are surprisingly not bad. For background music, promotional videos, that kind of stuff? They work. Fast, cheap, good enough. But at this point they skip something critical: **production**. ### The paradox Prompt → final audio. No MIDI, no arrangement control, no micro-timing, no note-level editing. You get output, not a workspace. It's closer to collage than composition. Yes, I know Suno has its own studio app, the thing is, if Suno really wants to enter serious production, it needs fine-grained control. How much sidechain pump fits my taste? How hard should the compressor hit before the vocal sounds perfect? What's the right master loudness? 
Should I apply true peak limiting? But the moment you add those controls, you're building a DAW. And those controls still require professional knowledge to use well. A beginner and a seasoned producer using the same AI tool will produce vastly different results. Just like a junior dev and a senior dev using Cursor. The tool accelerates. It doesn't replace judgment. ### The taste In code, taste is often invisible to users. A shitty function and a beautifully designed one can produce the same result: it works. Architecture and elegance mostly matter to developers. Music doesn't work like that. Humans are extremely sensitive to sound. Timing, tone, balance, texture, these are immediately perceptible. Choose a bad string library? Listener knows instantly. There's no abstraction layer that hides bad taste. And there's no "ship now, fix later" model. Software can be patched. Music can't. Once it's released, it's frozen. The first version is the version. ### The moving target There's something even deeper. Code is functional, music is cultural. Assembly from 1950 still runs correctly today. Music from 1950? Technically fine, culturally dated. Code has a stable target: correctness. Music's target drifts with time, with generations, with vibes no one can fully articulate. AI learning code is learning "what works." AI learning music is learning "what felt good to people in the past." But taste keeps moving. Training data is always yesterday. The ground truth itself is in motion. That's why skipping the production phase works for low-stakes content, but fails for anything serious. --- ## So when? Not when AI makes better songs. Not when generation gets faster. It happens when AI can **work inside music production**, not around it. When it accelerates real workflows, preserves taste, and allows refinement down to the smallest unit. I've been wanting to build this myself. 
A VST that generates context-aware MIDI, something that listens to what's already in the session and proposes what comes next. A few years ago, the blocker was obvious: I don't write C++, I don't know JUCE. Now? The blocker has shifted. I could probably vibe-code my way through the plugin architecture. But training a model that actually understands musical context? That's where it gets hard. Really hard.
🔲 ☆

A Police Interview

I never expected to go through a 1-on-1 with the police myself.

It wasn't about anything involving me, of course — it was the follow-up to an incident on a flight back to Beijing in early March this year.

It started before takeoff, when a flight attendant came over to ask the passenger in front of me whether he had picked up a tablet or a phone. We assumed it was just a routine inquiry, but a few minutes later the police boarded. An officer walked up to the same passenger and asked whether he had picked up a tablet, describing what the tablet and phone looked like. The passenger denied it again. We thought that was the end of it, but the plane still didn't take off, until the crew announced that departure would be delayed due to an incident.

Not long after, the police boarded again, walked straight to the passenger, and asked once more whether he had picked anything up, perhaps by mistake. When he denied it again, the officer asked whether he would consent to a luggage search, then went through his bags thoroughly, turning several of them inside out. At that point another officer came aboard and said to check the seats. Looking down, I noticed something that looked like a tablet under the seat in front of me. I felt around, pulled it out right away, and handed it to the officer.

The officer then asked us to keep looking; with a phone flashlight we also found a phone and a boarding pass. After handing them over, the officer asked the passengers whether they knew where these items had come from. Both shook their heads — no idea.

At the time we even suggested that maybe a previous passenger had left them behind, boarding pass and all.

We thought the matter was settled, but a few minutes later the police came back, this time armed, and explicitly required the two passengers to deplane and cooperate with the investigation. So the two passengers left the plane with the police, and I was registered as a witness, told a statement would be taken later, and left my contact information.

A few weeks later — which is to say, today — the police came to Beijing. Since the case was apparently fairly solid, they just briefly went over the sequence of events as I remembered it and had me identify the suspect. Then came signing the paperwork and fingerprinting: about 40 minutes all told.

Afterwards, the officer reminded me once more that witnesses' identities are kept confidential, and left.

Well, for the record: another first in life.


Probably the Last Blog Engine Switch

The timeline is worth recording:

- 2017: PHP
- 2018: Jekyll
- 2019: Hexo
- 2024: Astro
- 2026: Self-Built

This didn't happen suddenly. Over the past few months, if you've watched this blog repo's commit history, you can tell I've been deleting things: unnecessary styles, unnecessary dependencies, unnecessary middle layers — last week I even ripped out Tailwind (doing my part for the layoffs 🤡).

After all that deleting, I realized the biggest layer was still there: the framework itself. Having taken subtraction this far, patching around the framework no longer made sense, so I got rid of the framework too.

So the blog moved from Astro to a self-built engine, running on Bun.

## Performance

Running the same build on the same machine, the Astro version versus the new engine:

| Metric | Astro | Self-Built | Delta |
| --- | --- | --- | --- |
| node_modules | 461 MB | 243 MB | -47% |
| Build Time | 12.1s | 702ms | -94% |
| Build Output | 1.9 MB | 1.8 MB | -4% |
| Homepage Size (brotli) | 17.9 KB | 9.7 KB | -46% |
| Homepage Files | 5 | 3 | -40% |

The local build was previously 1.6 seconds; then I removed HTML minification entirely, which made the built HTML 3.5% larger but cut the total build time from 1.6 seconds to about 700 ms. If I pulled Shiki out completely as well, the build would take around 110 ms (a 99% reduction compared to Astro), but then the blog would lose its nice code highlighting, and 700 ms is perfectly acceptable (smug face). On Cloudflare Workers, the Astro build stage took about 28 seconds; this self-built engine takes about 2.

The dependency list in `package.json` also shrank a lot: `dependencies + devDependencies` went from `25` entries to `11`. Counting only runtime `dependencies`, there are just 3, and two of them aren't even frontend-related.

```json
{
  "dependencies": {
    "@upstash/qstash": "^2.9.0",
    "@upstash/redis": "^1.36.1",
    "pangu": "^7.2.0"
  },
  "devDependencies": {
    "@biomejs/biome": "^2.3.13",
    "@types/bun": "^1.3.5",
    "chalk": "^5.6.2",
    "dotenv": "^17.2.3",
    "enquirer": "^2.4.1",
    "markdown-it": "^14.1.0",
    "shiki": "^4.0.2",
    "wrangler": "^4.70.0"
  }
}
```

These numbers say one simple thing: Astro was doing a lot of work my blog never needed.

The new engine is genuinely simple: `markdown-it` parses Markdown, `shiki` handles code highlighting, template functions assemble pages, `Bun.serve()` powers local development, and a build script emits static files. No Vite, no Rollup, no hydration, no extra content system.

Another very practical change: the build output finally became predictable. Before, the homepage looked like a single page on the surface, but behind it there were island runtimes, renderer chunks, shared chunks — the real size was never obvious. Now it's straightforward: the homepage is exactly three files, HTML, CSS, and JS, with no hidden runtime behind them.

This also fixed something that had been awkward throughout the Astro era: `atom.xml`. MDX content never flowed naturally into the feed, so the feed was a separately maintained side branch: custom components had to be converted by hand, HTML had to be sanitized separately, URLs had to be patched in. Now the content itself is Markdown, the feed consumes Markdown directly, and only custom content blocks degrade to a Markdown-friendly version. How a page renders and how the feed degrades are decided by the same interpreter, instead of the content having one set of logic while the RSS quietly grows another.

## Motivation

I can feel it more and more clearly: AI is already changing the cost structure of abstraction layers. In a low-complexity scenario like a blog, a lot of the engineering benefits frameworks used to provide are no longer such a good deal.

The refactor itself was mostly done by Codex. It took about 3 hours to rewrite the whole site from scratch — roughly `6888` lines added and `6344` lines deleted at the source level. That made me rethink the value of frameworks. The trade used to make sense: give up a little performance for an engineering structure that's easier to maintain. Template systems, component models, routing conventions, content schemas — these all exist to help humans understand and modify code more reliably.

But AI coding breaks that balance. To a coding agent like Codex, a hand-written HTML template function is no harder to understand than an Astro component. It can read straight down through the Markdown, templates, styles, and scripts, then change the specific piece. Many abstractions that existed to "lower human maintenance cost" are less necessary in this setting.

That doesn't mean frameworks are useless. On the contrary, with AI around, the problems frameworks should actually solve have become clearer to me: not inventing yet another fancier template syntax, but nailing down boundaries, constraints, validation, caching, and output organization. AI picks up syntax sugar quickly; unclear boundaries, unpredictable outputs, and degradation held together with patches are the real problems. For complex applications, multi-person collaboration, and long-lived products, frameworks are still worth it.

## Finally

So this time I probably really won't switch again — the system is simple enough that further tinkering isn't worth it. What actually deserves attention now is the blog's content.

Finally, please enjoy this graceful build output ⚡:

![video]()
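As a footnote on how small the "template functions + build script" core really is, here is the same shape sketched in Python for illustration. The real engine runs on Bun with TypeScript, `markdown-it`, and `shiki`; the toy renderer and every name below are made up for this example, not taken from the actual codebase.

```python
import pathlib


def render_markdown(text: str) -> str:
    """Toy stand-in for markdown-it: wraps paragraphs only."""
    blocks = [b.strip() for b in text.split("\n\n") if b.strip()]
    return "\n".join(f"<p>{b}</p>" for b in blocks)


def page(title: str, body_html: str) -> str:
    """A template is just a function that returns a string."""
    return (
        "<!doctype html><html><head>"
        f"<title>{title}</title></head>"
        f"<body>{body_html}</body></html>"
    )


def build(src_dir: str, out_dir: str) -> int:
    """Build step: Markdown in, static HTML out. Returns pages built."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for md in sorted(pathlib.Path(src_dir).glob("*.md")):
        html = page(md.stem, render_markdown(md.read_text(encoding="utf-8")))
        (out / f"{md.stem}.html").write_text(html, encoding="utf-8")
        count += 1
    return count
```

Swap the toy renderer for a real Markdown parser and add a highlighter, and that is essentially the whole engine: no bundler, no hydration, just files in and files out.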

Desktop notifications for Codex CLI and Claude Code

## Context

This setup was tested on my own machine with:

- `Codex CLI 0.113.0`
- `Claude Code 2.1.72`
- `macOS 26.3.1 (25D2128)`
- `arm64` Apple Silicon

![A macOS notification from Codex CLI with the subtitle Notification setup]()

---

## Start with Claude Code’s official setup

`Claude Code` already documents the two parts you need:

- terminal notifications and terminal integration
- hooks for `Notification` and `Stop`
- [Hooks reference](https://code.claude.com/docs/en/hooks)
- [Hooks guide](https://code.claude.com/docs/en/hooks-guide)
- [Terminal config](https://code.claude.com/docs/en/terminal-config)

That is the right place to start. On macOS, the most obvious first implementation is also the simplest one: a tiny `osascript` wrapper.

File: `$HOME/.claude/notify-osascript.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
osascript -e "display notification \"$MESSAGE\" with title \"Claude Code\"" >/dev/null 2>&1 || true
```

And wire it into Claude’s hooks:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Claude Code needs your attention'" }
        ]
      }
    ]
  }
}
```

This worked, but only technically.

- Clicking the notification did not cleanly bring me back to the terminal app.
- There was no grouping, so notifications piled up.
- Once terminal-native notifications entered the picture, especially in `Ghostty`, duplicate alerts got annoying.

That was the point where `terminal-notifier` became the better base layer.
## Why I switched to terminal-notifier

The official repo is here:

- [julienXX/terminal-notifier](https://github.com/julienXX/terminal-notifier)

Install it with Homebrew:

```bash
brew install terminal-notifier
```

Then verify it:

```bash
which terminal-notifier
terminal-notifier -help | head
```

The three features that made it worth switching:

- `-activate`, so clicking the notification can bring my terminal app to the front
- `-group`, so I can keep one live notification per project instead of stacking old ones
- better control over subtitle, sound, and macOS notification-center behavior

---

## A shared notification helper

Before touching either tool, create one shared helper:

```bash
mkdir -p "$HOME/.local/bin"
```

File: `$HOME/.local/bin/mac-notify.sh`

```bash
#!/bin/bash
set -euo pipefail

TITLE="${1:?title is required}"
MESSAGE="${2:-}"
SUBTITLE="${3:-}"
GROUP="${4:-}"
SOUND="${5:-Submarine}"

case "${TERM_PROGRAM:-}" in
  ghostty) BUNDLE_ID="com.mitchellh.ghostty" ;;
  iTerm.app) BUNDLE_ID="com.googlecode.iterm2" ;;
  Apple_Terminal) BUNDLE_ID="com.apple.Terminal" ;;
  vscode) BUNDLE_ID="com.microsoft.VSCode" ;;
  cursor) BUNDLE_ID="com.todesktop.230313mzl4w4u92" ;;
  zed) BUNDLE_ID="dev.zed.Zed" ;;
  *) BUNDLE_ID="" ;;
esac

TERMINAL_NOTIFIER=""
if [ -x /opt/homebrew/bin/terminal-notifier ]; then
  TERMINAL_NOTIFIER="/opt/homebrew/bin/terminal-notifier"
elif command -v terminal-notifier >/dev/null 2>&1; then
  TERMINAL_NOTIFIER="$(command -v terminal-notifier)"
fi

if [ -n "$TERMINAL_NOTIFIER" ]; then
  ARGS=( -title "$TITLE" -message "$MESSAGE" -sound "$SOUND" )
  if [ -n "$SUBTITLE" ]; then
    ARGS+=(-subtitle "$SUBTITLE")
  fi
  if [ -n "$GROUP" ]; then
    ARGS+=(-group "$GROUP")
  fi
  if [ -n "$BUNDLE_ID" ]; then
    ARGS+=(-activate "$BUNDLE_ID")
  fi
  "$TERMINAL_NOTIFIER" "${ARGS[@]}"
  exit 0
fi

SAFE_MESSAGE="${MESSAGE//\\/\\\\}"
SAFE_MESSAGE="${SAFE_MESSAGE//\"/\\\"}"
SAFE_SUBTITLE="${SUBTITLE//\\/\\\\}"
SAFE_SUBTITLE="${SAFE_SUBTITLE//\"/\\\"}"
osascript -e "display notification \"$SAFE_MESSAGE\" with title \"$TITLE\" subtitle \"$SAFE_SUBTITLE\" sound name \"$SOUND\"" >/dev/null 2>&1 || true
```

Make it executable:

```bash
chmod +x "$HOME/.local/bin/mac-notify.sh"
```

I scope notification groups by tool and project, not by message. That gives me one live `Claude Code` notification and one live `Codex CLI` notification per repo instead of a growing stack.

### How click-to-focus works

The key line is:

```bash
-activate "$BUNDLE_ID"
```

`terminal-notifier` accepts a macOS bundle id and activates that app when the notification is clicked. I map the common values from `TERM_PROGRAM`:

- `com.mitchellh.ghostty`
- `com.googlecode.iterm2`
- `com.apple.Terminal`
- `com.microsoft.VSCode`
- `com.todesktop.230313mzl4w4u92` for Cursor
- `dev.zed.Zed`

This does not target one exact split or tab. It just brings the app to the front, which is good enough for this workflow.

---

## Claude Code: attention notifications and completion notifications

I split notifications into two categories:

- `Notification`: Claude needs me to do something, like approve a permission request or answer a prompt
- `Stop`: the main agent finished responding

### Claude notification script

File: `$HOME/.claude/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
PROJECT_DIR="${PWD:-$HOME}"
PROJECT_NAME="$(basename "$PROJECT_DIR")"
[ "$PROJECT_NAME" = "/" ] && PROJECT_NAME="Home"
PROJECT_HASH="$(printf '%s' "$PROJECT_DIR" | shasum -a 1 | awk '{print $1}' | cut -c1-12)"
GROUP="claude-code:${PROJECT_HASH}"

"$HOME/.local/bin/mac-notify.sh" "Claude Code" "$MESSAGE" "$PROJECT_NAME" "$GROUP"
```

```bash
chmod +x "$HOME/.claude/notify.sh"
```

### Claude hooks configuration

File: `$HOME/.claude/settings.json`

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "permission_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Permission needed'" }
        ]
      },
      {
        "matcher": "idle_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Waiting for your input'" }
        ]
      }
    ]
  }
}
```

If you do not care about different notification types, an empty matcher `""` is enough.

One detail worth remembering: Claude snapshots hooks at startup. If changes do not seem to apply, restart the session. Also check macOS notification permissions if nothing shows up.

---

## Codex CLI: completion notifications

For `Codex CLI`, the mechanism is not `hooks`. It is `notify`. Official docs:

- [Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [Configuration Reference](https://developers.openai.com/codex/config-reference)

As of `2026-03-10`, Codex documents external `notify` for supported events like `agent-turn-complete`. So in practice:

- completion notifications: yes
- Claude-style permission notifications through the same external script: no

Approval reminders in Codex are a separate `tui.notifications` problem.
### Codex notify script

File: `$HOME/.codex/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

PAYLOAD="${1:-}"
[ -n "$PAYLOAD" ] || exit 0

python3 - "$PAYLOAD" <<'PY'
import json
import pathlib
import sqlite3
import subprocess
import sys
import zlib
from datetime import datetime, timezone

CODEX_HOME = pathlib.Path.home() / '.codex'


def log_skip(reason: str, payload: dict, **extra: object) -> None:
    log_path = CODEX_HOME / 'notify-filter.log'
    data = {
        'ts': datetime.now(timezone.utc).isoformat(),
        'reason': reason,
        'client': payload.get('client'),
        'thread-id': payload.get('thread-id'),
        'cwd': payload.get('cwd'),
    }
    data.update(extra)
    with log_path.open('a', encoding='utf-8') as fh:
        fh.write(json.dumps(data, ensure_ascii=True) + '\n')


def get_thread_originator(thread_id: str) -> tuple[str, str]:
    db_path = CODEX_HOME / 'state_5.sqlite'
    if not db_path.exists():
        return '', ''
    try:
        with sqlite3.connect(db_path) as conn:
            cur = conn.cursor()
            cur.execute('select rollout_path, source from threads where id = ?', (thread_id,))
            row = cur.fetchone()
    except Exception:
        return '', ''
    if not row:
        return '', ''
    rollout_path, source = row
    if not rollout_path:
        return '', source or ''
    try:
        first_line = pathlib.Path(rollout_path).read_text(encoding='utf-8', errors='ignore').splitlines()[0]
        payload = json.loads(first_line).get('payload', {})
    except Exception:
        return '', source or ''
    return (payload.get('originator') or '').strip(), source or ''


try:
    payload = json.loads(sys.argv[1])
except Exception:
    raise SystemExit(0)

if payload.get('type') != 'agent-turn-complete':
    raise SystemExit(0)

client = (payload.get('client') or '').strip().lower()
if client and ('app' in client or client == 'appserver'):
    log_skip('skip-app-client', payload)
    raise SystemExit(0)

thread_id = (payload.get('thread-id') or '').strip()
if thread_id:
    originator, source = get_thread_originator(thread_id)
    if originator == 'Codex Desktop':
        log_skip('skip-desktop-originator', payload, originator=originator, source=source)
        raise SystemExit(0)

cwd = payload.get('cwd') or ''
subtitle = pathlib.Path(cwd).name if cwd else 'Task completed'
message = (payload.get('last-assistant-message') or 'Task completed').replace('\n', ' ').strip()
if not message:
    message = 'Task completed'

if cwd:
    group = 'codex-cli:' + format(zlib.crc32(cwd.encode('utf-8')) & 0xFFFFFFFF, '08x')
else:
    group = 'codex-cli:' + (payload.get('thread-id') or 'default')

subprocess.run(
    [
        str(pathlib.Path.home() / '.local' / 'bin' / 'mac-notify.sh'),
        'Codex CLI',
        message[:180],
        subtitle,
        group,
    ],
    check=False,
)
PY
```

```bash
chmod +x "$HOME/.codex/notify.sh"
```

### Codex config

File: `$HOME/.codex/config.toml`

```toml
notify = ["/Users/you/.codex/notify.sh"]
```

Use any absolute path you want. I keep the script under `~/.codex/`.

---

## If you use Ghostty, disable terminal-native desktop notifications

I hit one more annoying edge case in `Ghostty`: duplicate notifications. What happened was:

- my script sent a notification through `terminal-notifier`
- `Ghostty` also surfaced a terminal-native desktop notification

That produced two macOS notifications for one event. On my machine, the clean fix was to keep `terminal-notifier` as the only notification channel and disable Ghostty’s terminal-native desktop notifications:

File: `~/Library/Application Support/com.mitchellh.ghostty/config`

```plaintext
desktop-notifications = false
```

Why I prefer this setup:

- `terminal-notifier` gives me `-activate`, so click-to-focus still works
- `terminal-notifier` gives me `-group`, so notifications stay scoped per project
- both `Claude Code` and `Codex CLI` behave the same way

Ghostty’s config docs describe `desktop-notifications` as the switch that lets terminal apps show desktop notifications via escape sequences such as `OSC 9` and `OSC 777`. Turning it off avoids the extra notification layer.

---

## If you also use Codex App

This is the part that bit me. At first I assumed filtering by the `client` field would be enough. It was not.
On my machine, some sessions started from `Codex App` looked like this in local session metadata:

```json
{
  "originator": "Codex Desktop",
  "source": "vscode"
}
```

That creates a duplicate-notification problem:

- Codex App shows its own notification
- the local CLI `notify` script can still fire
- I get duplicate notifications for the same task

So the script does two things:

1. fast path: skip obvious app-like `client` values
2. fallback: read `thread-id` from the `notify` payload, query `~/.codex/state_5.sqlite`, load the first `session_meta` line, and skip if `originator == "Codex Desktop"`

That is why the script above checks local thread metadata instead of trusting only `client`. I also log skipped events to:

```text
~/.codex/notify-filter.log
```

That makes debugging much easier if Codex changes its session metadata format later.

> This part is based on observed local behavior, not on a stable public contract from the docs. If OpenAI changes how Codex App identifies local sessions in future versions, the filter may need a small update.

---

## References

- [OpenAI Codex Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [OpenAI Codex Configuration Reference](https://developers.openai.com/codex/config-reference)
- [Anthropic Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks)
- [Anthropic Claude Code Hooks Guide](https://code.claude.com/docs/en/hooks-guide)
- [Anthropic Claude Code Terminal Configuration](https://code.claude.com/docs/en/terminal-config)
- [terminal-notifier](https://github.com/julienXX/terminal-notifier)

Dating App Sucks Pt.2

Ok here we go again. I think I've finally figured out the scariest thing about dating apps: they actually turn finding love into a fucking job search.

> Every date feels like a business meeting or something, no sparks, pure cringe.

Think about it. We fill out our "resumes" with our best photos and wittiest bios. We list our "desired positions" in the filters. We swipe through "candidates" hoping to get a "decent offer". The whole thing is an HR pipeline with better lighting.

But love is the exact opposite of a job search, which follows logic. Love? Personally, I think there is no logic in love. Love is a bias, a fucking tyranny. The bias is that you only want one specific person to do the things literally anyone could do. The tyranny is that you pour all your emotions, irrationally, recklessly, entirely onto another human being.

And dating apps have always given me this weird feeling: love obtained through this process feels so bland it's almost offensive. If I were a planet, this whole approach would be like some engineer calculating the perfect speed, angle, and mass, then launching another planet at precisely the right time so we'd form a nice, stable binary star system. How romantic. How efficient, how abso-fucking-lutely dead inside.

What I want is a rogue planet hurtling toward me at full speed out of nowhere in the middle of the void. The moment we touch, atoms from two entirely separate worlds are forced into lattices they were never meant to share. Molecular bonds snap, shatter, and reform into something unrecognizable. The pressure breeds temperatures that fuse nuclei into heavy, unnamed elements no periodic table has ever seen, existing for a few picoseconds before decaying into something else entirely. Oceans of molten rock erupt outward, entire crusts peeled off like skin, shockwaves rippling through mantles at speeds no device could ever measure. What used to be two worlds is now a single, blinding wound in space.

Some debris escapes into strange new orbits. The rest? Fuses together so tightly that nothing, not time, not entropy, can pull it apart, until our last atom is annihilated in the heat death of the universe.

I'm not saying dating apps are pure evil. You could still meet someone real on there; the odds exist. But what's truly terrifying about these things is that they teach you how to NOT invest. Everyone on there wants low-risk love, a guaranteed return with minimal downside. But since when has that ever been how love works?

I've seen people around me become professional swipers. Always chatting, always got girls around them. And then what? This one's family background isn't great. That one's not pretty enough. Another one said something weird at dinner that gave them the "ick". Next. Next. Next. Bro, stop cosplaying a fucking conveyor belt.

Being overly rational in love is a slow way to lose everything. The second anything feels slightly off, they're gone. No friction allowed. But no friction means no sparks either. They end up like the man in Socrates' wheat field parable: walking through the field, always convinced a bigger stalk is just ahead, never actually picking one. And the field does end. It very much does end.

Uninstalled.

Chongqing

Taking advantage of working from home over this year's Spring Festival, I used the weekend for a 24-hour "special forces" trip to Chongqing.

Trains are very convenient now; even my home county has a station. But I first dropped Xiaoqingcheng (my kid) off in Langzhong, then caught the train to Chongqing. Since the trip was rushed, I booked in a hurry and could only get a sleeper, which was already great, because my return ticket was standing-room only.

The plan was to get up at 7:10, but I procrastinated until 7:40, then dressed and fed the kid, and didn't leave the house until 8:20.

The highway to Langzhong isn't congested these days, though: 25 minutes and I was at Xiaoqingcheng's grandma's place, handing him safely over. Then I drove straight to the station, and the timing worked out: left at 9:21, arrived a little past 9:30.

It had been ages since I'd taken a sleeper, and it instantly brought back college memories. In college I mostly rode hard seats; the only exceptions were the summer after graduation, when the trains were empty and I treated myself, and the winter break before graduation, when I took a 50-plus-hour train to Beijing for an internship and chose a sleeper. Of those two sleeper rides, one was a middle bunk and one a top bunk; lying down definitely beats sitting. Since this ride was only three hours, I didn't lie down for long before climbing back down. It was a K-series train, a regular express, and the familiar carriage sounds were all there: chatting, short videos, people eating, exactly as I remembered. K trains are slow, but everyone on them somehow seems relaxed and cheerful; G trains are fast, but carry a kind of pressed, hurried tension. It was still early spring, yet the rapeseed flowers and pear blossoms along the tracks were already out. Pleasant scenery, a gentle sway, and we rolled into Chongqing North.

Counting this trip, I've been to Chongqing about four times:

The first was as a kid, in fifth grade, on a Three Gorges trip. Novel, but the memories are blurry.
The second was while dating my wife, when we went back to my hometown to get married and had a meal with her best friend.
The third was after the wedding, rushing from Chongqing to catch a flight back to Beijing: the great escape during the pandemic (the red health-code incident; it's in the blog).

This time was the most relaxed of them all. The goal was pure: a free and easy citywalk.

From Chongqing North I took Line 10, transferred to Line 2, and went straight to my first stop: Shibati, the Eighteen Steps. I'd eaten breakfast early, so by the time I arrived it was past one. Out of the metro I found a small noodle shop by the road, pretty good, with a hint of hotpot flavor. A surprise this time: there are lots of Chayan Yuese shops now, and they even do coffee, so I grabbed an Americano (forget what they call theirs) and started the walk. From Shibati, down the slope, then a right turn toward the Shancheng Trail.

Chongqing

There's a reason Chongqing people don't gain weight easily: all that climbing up and down really burns calories. By the time I'd climbed to the top in one go, my back was a little sweaty. The little ciba rice cakes I'd missed in Langzhong, I finally got here: 5 yuan for 20. Then it was along the road toward Chaotianmen. Spring Festival wasn't over yet, so there were still plenty of tourists, students especially. Following the crowd, it took nearly an hour to reach Raffles City, Chongqing's new landmark, similar to the sail-shaped building in Singapore. I went because Douyin claimed the mall had a garden-style interior; I checked back and forth, and no, this isn't the place — Douyin's recommendation was wrong. But since I was there, I went up to the top floor anyway for a view over the city. With all the walking my phone had died, so I rested up there for a while.

Unexpectedly, the sun came out a little later. The sky had been overcast all day, but the moment the sun appeared, plenty of visitors gathered to take photos.

Chongqing

Then down into the mall. It was already 6:40 and I was getting hungry. In the end, between chaoshou wontons and chuanchuan skewers, I picked chuanchuan — having come to Chongqing, you really should taste the authentic hotpot flavor.

Before long I headed to the last stop: Diamond Plaza. It really is the best spot for Chongqing's river views — wide open and very photogenic. The map said about 40 minutes on foot: along Chaotianmen, across the bridge to the Grand Theatre station, then down along the road. Diamond Plaza, as the name suggests, has a diamond-shaped landmark. Walking down to the riverside, the sense of awe hits you head-on. I'd say Chongqing's night view is no less impressive than Shanghai's, and the jagged, uneven skyline gives it a cyberpunk feel; the red of the Qiansimen Jialing River Bridge against the glow of the buildings behind it left a deep impression.

Chongqing

A little past nine, the trip was finally over. One look at my watch — 1,000+ calories burned — and the goal was achieved.

Chongqing left a lot of good impressions this time (maybe I just dodged the peak crowds):

  • Lots of power-bank rental spots
  • Benches to rest on while climbing the stairs
  • Drink and snack stalls all over the attractions
  • More foreign visitors too (the IShowSpeed publicity effect?)

Nice~~~


One Sentence Is Worth Ten Thousand (《一句顶一万句》)

I recently, finally, finished Liu Zhenyun's 《一句顶一万句》. The novel is rated 9.0 on Douban and has sat on countless recommended-reading lists for years.

My curiosity about the book started many years ago with the film adaptation his daughter directed, because it starred two actors I followed: Li Qian and Mao Hai — one played Qingcheng in 龙门镖局 (Longmen Express), the other Xiao Mao in 炊事班的故事 (Stories of a Cookhouse Squad). The film depicts the lives of utterly ordinary people, and the toll a marriage without communication takes. I assumed the novel was the same, but a few chapters in I realized it is set against an entirely different era.

The novel has been called China's One Hundred Years of Solitude because of its enormous span: not the story of one generation, but of several. It is far richer than the film, which only adapted the second half and cut a great deal, making it hard to convey the core of the original.

Since having a child, I've found I love talking to him, even though he's just over two and can't get many words out yet. I simply like talking to him. Yet with my own father and mother I hardly communicate at all, and I'd struggle to say why. In the novel, Niu Aiguo is unloved by his mother as a child, but once he grows up, his mother loves chatting with him, and with her granddaughter too, about almost everything — mostly her own childhood, the first half of the story the film never covered.

My grandmother is getting on in years and loves telling me about the things she has encountered and lived through. Unlike my mother, I listen quietly. Just like Xiaoqingcheng and me now: whatever I say, he listens carefully, whether he understands or not.

Perhaps all anyone wants in this life is someone they can really talk to.


The Plan of 2026

These are my plans for 2026:

- [ ] High Personal Performance
- [ ] Relocate to the US (2/2)
- [ ] Use AI to build 1 or more mini games for WeChat
- [ ] 10 VLOGs
- [ ] 30 Books
- [ ] Body Fit (<= 62.5 KG)
- [ ] 40+ Blogs
- [ ] English Speaking
- [ ] JAGX (10000+) + FUBO (5000+) + BYND (10000) + QUBT (1000+)
- [ ] BILI (12000) + 4paradigm (10000+) + Haidilao (10000+)


The Cursor Moment in Music Production

I've been thinking about this for a while. Cursor didn't change programming because it could write code. It changed programming because it **made real work faster while keeping every line editable**. That's the key. So when does music production get its Cursor moment? --- ## My imagination of this AI DAW Not magic. **Delegation with control.** And the difficulty scales fast. **Level 1:** "Generate a 4-bar piano MIDI with emotion." Already harder than it sounds. Pitch, velocity, note length, micro-timing, articulation, envelopes. Emotion isn't metadata, it's embedded in low-level decisions. Like writing a small utility function. Simple scope, high quality bar. **Level 2:** "Generate a 16-bar violin MIDI that matches the drum." Everything from Level 1, plus context. Groove awareness, phrasing, rhythmic interaction. The model has to listen. Like adding a feature that integrates with an existing module. **Level 3:** "Generate a sequence using Serum, make bars 8-16 flow, sidechain from track 2, match the vibe." This is the inflection point. Now the AI needs full DAW access, third-party plugin knowledge, routing logic, arrangement continuity, aesthetic coherence. Plugins become libraries. You need something like documentation context for tools, not just parameters. Multi-system engineering, not generation. **Level 4:** "Generate a clean vocal track based on the whole song." Lyrics, melody, phrasing, emotion, refinement at the word and timing level. Like adding a major feature to a large codebase. One-shot attempts will fail. But with a human-in-the-loop, reviewing, steering, refining, this becomes feasible. The AI drafts, the human produces. **Level 5:** "Give me fire." This must fail. Just like "make me Facebook" in coding, the spec is undefined. Taste *is* the task. Neither humans nor AI can guarantee this. --- ## Who could even make this Building a brand-new DAW? Dead end. Producers are deeply locked into their tools. 
Switching DAWs isn't like switching editors, it means relearning muscle memory, mental models, creative habits. For many producers, it's practically impossible. So any Cursor moment has to either sit on top of existing DAWs or deeply integrate with them. **Splice** has an interesting edge. Not DAW engineering, but data. Cross-DAW usage, massive libraries, user behavior at scale. Its natural position is as an intelligence layer, something like an LLM for music production that other tools call into. **Apple and Logic Pro** already ship features that hint at an agentic future. Session players that suggest MIDI, react to reference audio, generate parts from scratch. Apple has vertical integration: hardware, OS, DAW. It can ship something real. But it's also closed. A Cursor-like ecosystem thrives on extensibility, not just polished features. **Ableton Live** is interesting for a specific reason: its project files are XML-based. That means sessions are structured, serializable, writable. In principle, an AI can already read and modify a Live project the way a coding agent reads and edits source files. The blocker isn't the format. It's the model. The IDE substrate exists. What's missing is a music-production-focused base model that understands intent, taste, and workflow, not just structure. --- ## Why this hasn't happened yet Three hard blockers. **No clear "text of music."** Code has text. Serializable, diffable, composable. That's why LLMs worked so well, so fast. Music doesn't have a single equivalent. MIDI is editable but incomplete. Audio is complete but opaque. Without a clear fundamental representation, everything above it becomes fragile. **No production-native base model.** Cursor didn't invent intelligence. It orchestrated a strong base model. There's no equivalent yet for music production, one that understands MIDI, audio, arrangement, plugins, mixing, and taste as a unified domain. Current models generate outputs, they don't reason inside workflows. 
**Locked ecosystems.** There's no VS Code-level DAW that is open and dominant. DAWs are closed, fragmented, deeply personal. That pushes AI to the plugin layer, where integration is safer and optional. That's why we see "AI inside plugins" everywhere, and almost nowhere at the DAW core. --- ## Where we are now Here's something I made in 30 minutes, mostly Splice samples: [Piece made in 30 min]() ![Ableton Live project screenshot]() This is probably the fastest way to get something production-ready with a traditional workflow. But let's be honest, the vibe factor is low. It's assembled, not created. What's not AI here? Pretty much everything that matters. Sidechain, dialed in by hand. Reverb, tweaked until it sat right. Arrangement, mixing, mastering, all me. And what kind of AI do we have now in music production? Splice. It uses AI to find sounds faster, made matching tones easier. Real gains, but still operating inside the traditional production phase. Plugins. Pitch correction, noise reduction, vocal tuning, loudness matching. These are genuinely useful. They save time. But they don't change how music is made, they just speed up tasks inside the same old workflow. Then there are the full-track generators. Suno, Udio, you name it. They can spit out complete songs, sometimes with lyrics, and honestly the results are surprisingly not bad. For background music, promotional videos, that kind of stuff? They work. Fast, cheap, good enough. But at this point they skip something critical: **production**. ### The paradox Prompt → final audio. No MIDI, no arrangement control, no micro-timing, no note-level editing. You get output, not a workspace. It's closer to collage than composition. Yes, I know Suno has its own studio app, the thing is, if Suno really wants to enter serious production, it needs fine-grained control. How much sidechain pump fits my taste? How hard should the compressor hit before the vocal sounds perfect? What's the right master loudness? 
Should I apply true peak limiting? But the moment you add those controls, you're building a DAW. And those controls still require professional knowledge to use well. A beginner and a seasoned producer using the same AI tool will produce vastly different results. Just like a junior dev and a senior dev using Cursor. The tool accelerates. It doesn't replace judgment. ### The taste In code, taste is often invisible to users. A shitty function and a beautifully designed one can produce the same result: it works. Architecture and elegance mostly matter to developers. Music doesn't work like that. Humans are extremely sensitive to sound. Timing, tone, balance, texture, these are immediately perceptible. Choose a bad string library? Listener knows instantly. There's no abstraction layer that hides bad taste. And there's no "ship now, fix later" model. Software can be patched. Music can't. Once it's released, it's frozen. The first version is the version. ### The moving target There's something even deeper. Code is functional, music is cultural. Assembly from 1950 still runs correctly today. Music from 1950? Technically fine, culturally dated. Code has a stable target: correctness. Music's target drifts with time, with generations, with vibes no one can fully articulate. AI learning code is learning "what works." AI learning music is learning "what felt good to people in the past." But taste keeps moving. Training data is always yesterday. The ground truth itself is in motion. That's why skipping the production phase works for low-stakes content, but fails for anything serious. --- ## So when? Not when AI makes better songs. Not when generation gets faster. It happens when AI can **work inside music production**, not around it. When it accelerates real workflows, preserves taste, and allows refinement down to the smallest unit. I've been wanting to build this myself. 
A VST that generates context-aware MIDI, something that listens to what's already in the session and proposes what comes next. A few years ago, the blocker was obvious: I don't write C++, I don't know JUCE. Now? The blocker has shifted. I could probably vibe-code my way through the plugin architecture. But training a model that actually understands musical context? That's where it gets hard. Really hard.
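To make "context-aware MIDI" concrete: at its most naive, "listening to what's already in the session" could just mean learning pitch transitions from the existing clips and proposing likely continuations. The sketch below is only an illustration of that idea, not my actual plugin design; it's a first-order Markov model over MIDI note numbers, and every name in it (`train`, `propose_next`, the toy melodies) is hypothetical:

```python
from collections import Counter, defaultdict

def train(sequences):
    """Count pitch-to-pitch transitions across example melodies."""
    transitions = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def propose_next(transitions, context, k=3):
    """Suggest up to k likely next pitches, given only the last note played."""
    last = context[-1]
    candidates = transitions.get(last)
    if not candidates:
        return []
    return [pitch for pitch, _ in candidates.most_common(k)]

# Toy "session context": two C-major-ish melodies as MIDI note numbers.
melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
]
model = train(melodies)
# Prints the most likely next pitches after the pattern 60 → 62 → 64.
print(propose_next(model, [60, 62, 64]))
```

Even this toy version shows where the difficulty lives: the plugin plumbing around it is mechanical, while "actually understands musical context" means modeling harmony, rhythm, and long-range structure, which is the hard part I mean above.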

2025


Every year-end, when I tell myself to write something, the opening is the hardest part.

In high school, my Chinese teacher found my essays poor and couldn't understand why. He reduced essay writing to a template: the classic "overview, details, summary" structure, where the opening and closing must hammer the theme and the middle lays out arguments and evidence one by one. My high-school writing really wasn't good, but in college, out of a love for blogging, I came to enjoy writing. Freed from that rigid, eight-legged-essay structure, writing turned into creating.

For me, 2025 comes down to two words:

New and Old

The new: the new things I experienced and tried. The old: the old places I remembered and revisited.

People usually recall the past by place or by time. This year I kept uploading photos to my album, recording the small shifts of everyday life, and, conveniently, giving myself reference material for this year-end post. So I've picked out a few and written down the stories behind them, which are also the stories of my year.

Let's start with the new: things I tried for the first time, things that challenged me, and things I stumbled into by chance.

A Spring Festival gathering


Every Spring Festival brings a gathering of our generation. The biggest change this year was two new little kids. Children change fast: this Spring Festival they could snuggle up beside the adults and play, and squabble with kids their own age over snacks and toys. Spring Festival is when the whole family reunites, and also when the passage of time shows most clearly. In the blink of an eye, one table is no longer enough for a meal together, and our generation has started to show the pressures of life, some getting married, some hunting for jobs. The easy atmosphere of those winter-break gatherings from our school days a few years ago is gone. Then I look at the little ones, not even two years old, eating whatever they want, running wherever they want. At moments like that you understand a child's happiness, and it is genuinely enviable.


The anxiety of parting

After Spring Festival, my wife and I went back to Beijing first for work. My thinking was that our child could stay in our hometown a while longer and skip the post-holiday rush back to the city. But the day after we left, the kid started feeling unwell, then ran a high fever and lost all appetite. At first we assumed an ordinary cold that would pass in a few days. It didn't. In later video calls the kid grew listless, barely spoke, with a kind of bewilderment in the eyes. One look at that and I immediately bought full-price tickets for the next day, the fifteenth of the first lunar month, and brought the kid and grandma back to Beijing. We then went to the hospital, where they only prescribed some cough medicine.


Human bonds are a curious thing. The day after coming back, the kid's spirits recovered completely, nothing like someone who had just been seriously ill, so we went to 798 and took a photo of the fading sunset. I truly felt the attachment between parents and children, and between children and parents; it may be the tightest bond there is.

The joy of fandom

This year I mainly followed League of Legends and the NBA. In League, the team everyone watches is T1, especially Gumayusi. It was truly a rocky year for him: benched at the start of the year, pulled back up to put out the fire before the last stretch of MSI qualification, sent down again in the summer split, and only restored to the starting lineup after the starter stepped away over mental-health issues. The team stumbled through the Worlds qualifiers while he was relentlessly criticized over his champion pool, Yunara and Kai'Sa. I was lucky enough to see T1 live in Beijing this year, since the Swiss stage was held there. It took some money and a scalper on Xianyu, but I got to see a whole batch of teams in one go, TES, BLG, AL, GEN, and above all Faker in person, a real feast for the eyes. Because it was the Swiss stage, the Beijing venue was relatively small, so you could see the players clearly. Honestly, when the tournament began, nobody thought T1 could defend the title this year, or that Gumayusi would take FMVP. But fate is a strange thing. That's competitive sports for you.


I didn't watch many Clippers games this year; the only ones were the playoff series against the Nuggets, where I witnessed that miraculous game-winning dunk. Honestly, today's NBA has lost the physicality I remember from childhood: too many fouls and too much foul-baiting, and the referees' standards have become impossible to read. As it happened, I was in San Francisco in October with no plan to see any NBA game, but I lucked into a Golden State Warriors vs. Clippers matchup, so that morning I immediately booked a ticket, around 50 US dollars, for a fairly decent seat. The arena lighting is seriously bright, and if you arrive early you can watch the warm-ups, like Harden's and Kawhi's shooting drills, along with Curry's special three-point routine.

A brief trip to the US

In late October, work gave me the chance to visit San Francisco. The US is without question the rival, and the object of study, that China watches most closely; turn on CCTV-4 and more than half the content is coverage of America. Getting a US visa is a hassle, interview and all, though for business travel the interview isn't too bad, and having a family and kids seems to make approval easier. Troublesome as the visa is, the ten-year validity is at least one upside.

The first problem was the ten-plus-hour flight, my first time on a plane that long, and it was genuinely grueling. Definitely download plenty of movies and shows beforehand; mindless comedies will do.

Overall, this corner of California really is an extremely livable place. It was raining when we landed, but over the half-hour drive to the hotel it turned sunny. Midday is the warmest, yet perfectly comfortable: short sleeves and long pants are enough, and it's never scorching. And would you believe it, the weather is like this all 365 days of the year.

Since the company was moving into a new office, and believe it or not, the mayor of San Francisco actually came to the move-in ceremony. In the speech, the mayor said this city's DNA comes from innovation and thanked the company for its contribution. Well, we ourselves didn't feel we had contributed anything special.

Food. A real problem. For one, it's expensive: even though I could expense everything, every currency conversion made me wince. A handful of taxi rides, each somewhere between ten and twenty-odd dollars, added up to 100+ in no time. Many colleagues who came over complained that the biggest issue is the food, especially the taste; the cheese flavor here is so heavy that on the flight home, the smell alone made me feel like throwing up. The portions are huge, though: a breakfast sandwich could easily be split into two meals.

The most shocking thing was safety. San Francisco has been improving for a few years now, but for someone used to an environment like China's, seeing the homeless on the streets, and people wandering around high, was chilling. At night the streets empty out early, and every so often you hear police sirens. Safety has long been California's problem, and a starkly neighborhood-by-neighborhood one. I never made it to any of the nicer residential areas, but downtown alone was enough of a shock.

Because of the Biden-era legislation, many colleagues relocated to the US this year, and the company has been asking about individual preferences too. On one side are the new career opportunities America offers; on the other, a whole new environment to adapt to. I keep wondering whether next year I'll be writing this summary in San Francisco, Los Angeles, or San Diego, or still in Beijing. I'll cross that bridge when I come to it; no need to agonize.


Now for the old: places I returned to, and things I went back to experience again, over the past year.

A family trip to Japan

This year we finally planned a trip abroad. Over the past three years, between getting married, the pandemic, and having a kid, we never managed to get away. Now that the kid can express themselves quite well and we had the time, we set the plan early. The only drawback was the timing: late June is the start of Japan's rainy season, so rain was unavoidable.

When I went in 2016 it happened to be autumn-leaf season, but my stay in Tokyo was short then, so this time we planned more days there. Since it was the off-season, flights and hotels were relatively cheap, though the famous photo spots were still packed. The freshness of that first trip to Japan left a deep impression, but this time Osaka and Tokyo seemed to have changed remarkably little, hardly different from the first visit. We went to the Kaiyukan aquarium, caught the Osaka World Expo, ate Japanese yakiniku, and drank the local beer. With the kid along, everything revolved around the kid, who fell asleep in the stroller more than once. Tokyo's weather cooperated, only light drizzle, and we went to Kamakura to see the Slam Dunk filming location, and took a walk under Tokyo Tower. Staying by Tokyo Bay, we could see the Rainbow Bridge at a glance every night. Compared with 2016, when I walked the harbor alone, it felt completely different: that time was curiosity and novelty; this time, a rare kind of ease and enjoyment.


Hainan revisited

At year-end, taking advantage of the Christmas break and some leftover parental leave, my wife and I planned a trip for just the two of us. On the last trip, with the kid along, everything clearly centered on the child. I had been to Hainan once before, after graduating high school in 2010. That was summer, brutally hot, and on a package tour the sights came one after another; in that heat all anyone wanted was to get back to the air-conditioned hotel or bus. Money was tight then, and even buying a coconut took lengthy deliberation. As before, the island is a stronghold of northeasterners: the drivers, the shopkeepers, so many are from the Northeast, some settled there long-term. What's different now are the foreigners: lots of Russians on vacation, Mongolians too, and quite a few Koreans enjoying themselves. The December weather was lovely; midday is hot but entirely bearable, and by evening, with the wind blowing, it even gets a little chilly.

Unlike after high-school graduation, this time we chatted with the local drivers. You find the post-pandemic mood is the same everywhere: a sense of helplessness about the economic downturn. Those with kids at home worry most about employment, and the falling birth rate is the same story wherever you go.

Against a macro-level downturn, young people really do seem insignificant before the times. This year I read 《与日为鉴》, which says one or two generations always end up being sacrificed; it seems we've chosen to sacrifice part of the 35-plus cohort and part of the post-2000 generation. In times like these, most of the advice boils down to:

Exercise, live frugally, keep body and mind content. That too is part of the era.

New and old look different once you stretch out the timeline. Either way, doing what you love is what life is really about.

2026, Just do it.



2025

Sup, it's December 30th, 2025, 4:37 AM, and I just started writing this wrap-up blog. This year felt different: things moved forward, and also got heavier.

## The tension

I joined Flowith when it was a 6-person team. Now it's around 40. That kind of growth changes how you work. I went from writing code to mostly keeping things from breaking.

Code itself got weird. Coding agents, automations, black boxes everywhere. More production errors, but they get fixed faster. Development speed is up, and so is responsibility. I care less about elegant code and more about whether the system actually survives. The trade-off works, I guess, for now.

My WeChat contacts doubled. More conversations, more group chats, more weak ties. I'm not feeling more social, just more connected in a noisier way. This leaked into everything else.

## The scatter

After the [thing](https://jw1.dev/breakup) in May, I definitely got more time and money to waste. Traveled a lot. Yunnan, Thailand, Macau, Thailand again, plus countless shorter trips by car and train. Took a lot of pictures, thought about putting some in the post, but you guys must have seen them already, so nah.

Picked up billiards 🎱 after Yunnan, went from complete (somehow) beginner to clearing tables pretty fast; nice to know I can still get good at new things. Built a gaming PC recently and I barely have time to use it. Bought my first watch after years of not wearing one. Not an Apple one, surprise: it's a Casio.

Back home, the old house got torn down. Reconstruction started. Biggest expense my family took on this year, but it feels right. Before the Chinese New Year, the place that raised me will be there again, just renewed.

Gotta say, time and money, nicely wasted!

## The broken

Mind's definitely broken. Haven't really found my footing since that [thing](https://jw1.dev/breakup) happened. Living alone means I can do whatever the fuck I want. Stay up all night? Zero, people, care.
I know, of course I want someone new. Been swiping on dating apps for months, but you guys know [how that went](https://jw1.dev/dating-app-sucks). Finding the right person is hard. Pick wrong? Lifetime regret. Don't pick? Parents nag. Sometimes I really envy my parents' or grandparents' generation; love was simple, almost pure.

Body's broken too. It's been keeping track, whether I like it or not. Weight's the same (good news?), everything else changed. Neck pain and lower back pain, more frequent now. Wrinkles at my eyes. Hairline maybe retreating. Chronic rhinitis and pharyngitis getting harder to ignore. I should go to the hospital. I don't want to. Some things are easier to postpone than to face.

## The view

The future doesn't excite me like it used to. It feels conditional now. We're in a time where everything can change overnight. We want things to happen, until they don't benefit us. Then we hope nothing changes at all. Maybe that's just growing up.

## ...

Shit man, 4 AM brain definitely got the mood 👀.

Memories of 2022


I noticed recently that the World Cup hype has started and the Avengers 5 trailer is out; some say next year will be the year we finally feel the economy turn upward.

It suddenly brought back memories: during the last World Cup we were still in the middle of the pandemic, and I caught COVID for the first time during the later matches. I vaguely remember waking in the night with my back cold, piling on a thick quilt, and still finding myself with a high fever the next morning. A test confirmed it: infected.

Many people would rather not remember 2022; those were days of near despair and anger. As other countries gradually opened up, what we got instead were wave after wave of outbreaks and extremely strict PCR testing.

In those years, I went through:

  • PCR tests every two days, sometimes every 7 days, every 14 days, sometimes daily
  • Travel codes checked everywhere: malls, subways, train stations, the office
  • The company splitting everyone into alternating A/B shifts
  • Being flagged for a high-risk area after a flight and unable to return to Beijing
  • "Lock down everything that can be locked down" all over the media
  • Contact tracing, hunting down everyone you'd been near
  • The sudden lifting of controls, mass infection, my wife and I catching it one after the other
  • Waves of coughing rippling through the office
  • The late-night flood of posts on WeChat Moments after the Urumqi incident
  • The midnight quarantine-transfer bus crash in Guizhou
  • A pregnant woman denied hospital admission over a red health code
  • My wife unable to enter the hospital to pick up medicine because of her travel code
  • Wearing masks in the office
  • Deliveries left on racks outside the compound, never brought to the door
  • Quarantine, door-to-door PCR checks
  • The sudden "double reduction" policy, education companies taking a heavy blow, the first wave of mass layoffs
  • Spending New Year in my hometown with every shop shut
  • Mandatory hand sanitizer at mall entrances

Lately I keep hearing about symptoms people developed after COVID, or perhaps after the vaccines: long-term chronic conditions they never had before. It suddenly clicked for me too. Here are my own changes:

  • A lot more white hair (maybe just age), but truly far more than before, concentrated on both sides
  • Seborrheic dermatitis and pimples on the scalp, which I definitely didn't have before the pandemic
  • Slower metabolism; easier to gain weight