
Native macOS Updates in Tauri Shouldn't Be This Hard

Four Tauri projects. Four times I wrote the same update dialog.

Download progress bar. Markdown release notes. "Remind Me Later" button. Same code, copied across repos, maintained separately. By the fourth project, I was done.

The Problem

Tauri's built-in updater downloads binaries and verifies signatures. It has a basic dialog: release notes, two buttons, that's it.

No download progress. No "Remind Me Later." No background checking.

You want any of that, you build it yourself.

Meanwhile, nearly every native macOS app distributed outside the App Store uses Sparkle. Raycast. CleanShot. Countless others. Same dialog. Same flow. Users recognize it immediately.

Sparkle handles everything I kept rebuilding: background checks, phased rollouts, localization. All the edge cases I'd never think to handle.

So why was I still writing custom update UI?

The Answer

Sparkle is Objective-C. Tauri is Rust. Bridging them isn't trivial.

But "not trivial" isn't "impossible."

The Plugin

So I built one. tauri-plugin-sparkle-updater wraps Sparkle's SPUStandardUpdaterController and exposes it to Tauri.

Rust:

tauri::Builder::default()
    .plugin(tauri_plugin_sparkle_updater::init())
    .run(tauri::generate_context!())

TypeScript:

import { checkForUpdates } from 'tauri-plugin-sparkle-updater-api';

await checkForUpdates();

Call checkForUpdates(). Sparkle takes over. Native dialog. Progress bar. Release notes. Done.

For programmatic control, the plugin exposes 41 commands and 18 events. Build a custom flow if you want. But you don't have to.
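If you do build a custom flow, most of it is plumbing like the helper below. This is a hypothetical sketch, not the plugin's real API: the payload shape is an assumption, and in practice you would feed it from one of the plugin's events via Tauri's `listen`. Check the plugin's documentation for the actual command and event names.

```typescript
// Hypothetical progress formatter for a custom update UI.
// DownloadProgress is an assumed payload shape, not the plugin's real one.
interface DownloadProgress {
  received: number; // bytes downloaded so far
  total: number;    // expected total, or 0 if the server didn't say
}

function formatProgress(p: DownloadProgress): string {
  if (p.total <= 0) return "downloading…"; // size unknown
  const pct = Math.min(100, Math.round((p.received / p.total) * 100));
  return `${pct}%`;
}
```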

Cross-Platform

This only works on macOS. That's the point.

Cross-platform doesn't mean identical UI everywhere. It means doing each platform well.

#[cfg(target_os = "macos")]
builder = builder.plugin(tauri_plugin_sparkle_updater::init());

#[cfg(not(target_os = "macos"))]
builder = builder.plugin(tauri_plugin_updater::Builder::new().build());

Your Windows users get updates. Your macOS users get native updates.

The best update experience is invisible. Users click a button, see a familiar dialog, move on. No one should have to build that from scratch.


Why MCP Matters

"MCP is a fad." I've seen this take floating around. Too complex. Security holes everywhere. Just another wrapper around function calls.

I disagree.

The Problem No One Solved

Every protocol we use today was designed with the same assumption: a human sits in the middle. REST. GraphQL. gRPC. A human reads the docs. A human writes the integration. A human debugs when things break.

Agents don't work that way.

They can read documentation, sure. But they can't interpret what's missing from it. They need structured, machine-readable contracts. Capabilities they can discover at runtime. Error semantics they can actually parse.

This is what MCP actually solves. Not "a better way to call functions." The first protocol that treats agents as first-class citizens.

A Different Mental Model

The obvious answer is "tools for AI." That's not it.

MCP is bidirectional. Servers can request LLM completions from the host. The tool asks the agent for help. This isn't RPC. It's a conversation. REST can't express this pattern at all.

MCP is resource-aware. Instead of an agent repeatedly calling APIs and parsing responses (wasting tokens every time), it exposes structured data directly. Schemas, documentation, configs. Load once, work with it.

MCP is dynamic. Client and server negotiate capabilities when they connect. The agent doesn't need prior knowledge of every tool. It discovers them.
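To make that handshake concrete, here is a sketch of the `initialize` message that starts the negotiation. Field names follow the MCP spec revision I'm familiar with ("2024-11-05"); treat the exact shapes as an approximation, not a normative example.

```typescript
// Sketch of the JSON-RPC message a client sends when it connects.
// The client declares what it offers (here `sampling`, meaning the server
// may request LLM completions back from the host: the bidirectional part),
// and the server answers with its own capabilities (tools, resources, ...).
interface InitializeRequest {
  jsonrpc: "2.0";
  id: number;
  method: "initialize";
  params: {
    protocolVersion: string;
    capabilities: Record<string, object>;
    clientInfo: { name: string; version: string };
  };
}

const initialize: InitializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: { sampling: {} }, // "you may ask me for completions"
    clientInfo: { name: "example-host", version: "0.1.0" },
  },
};
```

Only after this exchange does the agent list tools or read resources, which is why it needs no prior knowledge of the server.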

These aren't incremental improvements over REST. They're a different way of thinking about how software talks to AI.

STDIO Was Never the Point

MCP launched with STDIO transport. Client spawns server as a subprocess. Communication happens through stdin and stdout. Dead simple.
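The framing really is that simple. As a sketch (simplified: a real client also handles partial reads and process lifecycle), each JSON-RPC message is one line of JSON:

```typescript
// One message per line: serialize on the way out,
// split-and-parse on the way in.
function frame(message: object): string {
  return JSON.stringify(message) + "\n";
}

// A read from stdout may end mid-message, so keep the unfinished
// tail around for the next chunk.
function deframe(buffer: string): { messages: object[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // possibly incomplete last line
  const messages = lines
    .filter((l) => l.trim().length > 0)
    .map((l) => JSON.parse(l));
  return { messages, rest };
}
```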

Critics love pointing out the limitations. Single client. No authentication. Local only. Tightly coupled process lifecycles. All valid criticisms.

But they miss the point.

STDIO was the MVP. It proved one thing: Claude Desktop could call local tools through a standardized protocol. That's all it needed to prove.

HTTP 0.9 had no headers, no status codes, no content types. The first version of any protocol exists to validate the idea. Evolution comes after.

OAuth Is Where It Gets Real

The introduction of Streamable HTTP and OAuth 2.1 transformed MCP from a local hack into genuine infrastructure.

Streamable HTTP unlocks what STDIO couldn't. Multi-client servers. Session management. Disconnection recovery. An MCP server can now run as a proper service, deployed independently, serving multiple agents at once.

OAuth solves the trust problem that STDIO ignored entirely. When an agent reads your email or accesses your calendar, it's acting on your behalf. That requires proper consent flows. Token binding to specific resources. Step-up authorization when the agent needs elevated permissions.

This is what an agent ecosystem actually needs. Not local script execution. Secure, authorized access to user data across services.

Evolution as Feature

Some criticize MCP for changing. For having limitations early on. For not shipping perfect on day one.

I see it differently.

A protocol's value isn't measured by its initial state. It's measured by its capacity to evolve. MCP now has formal governance: Specification Enhancement Proposals, Working Groups, oversight from the Linux Foundation. The community is actively tackling real problems.

The protocol keeps improving. That's exactly the point.

Honest Tradeoffs

MCP isn't universally necessary. I should be upfront about that.

For coding agents with direct filesystem and shell access, the value proposition gets murky. These agents write their own glue code. They're often good at it.

But think about web-based assistants. ChatGPT. Claude.ai. They can't spawn subprocesses or touch local files. They need a standardized way to connect to external services: your calendar, your email, your company's internal tools.

Here, MCP provides genuine value. One integration that works across every AI platform.

For enterprises, this translates to real savings. Expose MCP as a connector layer on top of existing APIs. Write the integration once, support every AI client. Not technical elegance. Ecosystem efficiency.

The Bet

Will MCP become the standard? I don't know.

But the protocols we have were built for humans. Agents need their own. Someone has to build it.

That's why MCP matters.


The Vim Guide for VS Code Users

Arrow keys are slow. You just don't know it yet.

Most frontend developers use VS Code. Almost none use Vim. But Vim's editing model is worth learning regardless of your editor. Once you internalize it, reaching for arrow keys feels like walking through mud.

You don't have to switch editors. VS Code has excellent Vim support. What matters is the model — the way Vim thinks about text.

This guide starts from zero. By the end, you'll have everything you need to start.

The Basics

Modes

Vim has four modes: Normal, Insert, Visual, and Command-line.

Other editors have one mode. You type, it appears. Simple.

Vim separates navigating from editing. You move in Normal mode. You type in Insert mode. You select in Visual mode. You run commands in Command-line mode.

This sounds complicated. It isn't. You'll switch between modes constantly, and it'll become muscle memory.

The four modes

Normal Mode

When you open Vim, you're in Normal mode. Your keyboard doesn't type — it commands.

h/j/k/l move the cursor. Left, down, up, right.

Why not arrow keys? Because your fingers never leave the home row.

Commands take counts. Want to move down 10 lines? 10j. Delete 5 lines? 5dd. This is faster than holding down a key.

Moving Around

Here's what you need:

  • h/j/k/l — left, down, up, right
  • ^ / $ — start / end of line
  • w / b — next word / previous word
  • f{char} / F{char} — jump to next/previous occurrence of a character

h and l are left and right. j goes down (your index finger's home position). k goes up.

^ and $ work like regex anchors. Start of line, end of line.

w and b hop between words. 2w moves two words forward.

f{char} is powerful. Want to jump to the next =? Press f=. Done.

Moving with ^ $ w b

Moving with f{char}

Master these, and you'll never reach for your mouse again.

Insert Mode

Insert mode is what other editors call "normal." You type, characters appear.

To enter Insert mode from Normal mode:

  • i — insert before cursor
  • a — insert after cursor
  • o — insert new line below

Entering Insert mode

Variations

Uppercase versions do more:

  • I — insert at start of line (like ^i)
  • A — insert at end of line (like $a)
  • O — insert new line above (like ko)

To exit Insert mode: Esc or Ctrl+[.

The workflow: Normal mode to navigate, Insert mode to type, back to Normal mode. Like a painter picking up the brush, painting, putting it down to think.

This feels awkward at first. Then it becomes automatic. Then regular editors feel broken.

Visual Mode

How do you delete text in Vim?

You could enter Insert mode and use backspace. But that's slow.

In Normal mode, x deletes a character. d deletes a range.

Common d commands:

  • dd — delete entire line
  • diw — delete current word
  • dt{char} — delete until character

But what if you want to select exactly what to delete?

Press v to enter Visual mode. Move with h/j/k/l/w/b/f. The selection follows. Press d to delete.

Selecting in Visual mode

V selects entire lines. Ctrl+v selects columns.

Cut, Copy, Paste

d is cut, not delete. The text goes to a register.

y is copy (yank). yy copies a line.

p is paste.

Select with v, copy with y, move somewhere, paste with p. Simple.

Text Objects

This is where Vim gets interesting.

diw means "delete inside word." d is the command. iw is a text object.

Text objects let you operate on logical chunks without moving the cursor first:

  • iw — inside word
  • aw — around word (includes trailing space)
  • i( — inside parentheses
  • a( — around parentheses (includes the parens)

Replace ( with any paired character: i[, i{, i", i'.

yiw copies the word under your cursor. ci" changes everything inside quotes. da( deletes a function call's arguments, parentheses included.

Copying a text object

This is Vim's superpower. Learn text objects.

Command Mode

Press : in Normal mode to enter Command mode.

Essential commands:

  • :q! — quit without saving
  • :w — save
  • :wa — save all
  • :s/old/new/g — replace in the current line (:%s/old/new/g for the whole file)

There are hundreds of commands. You'll use maybe ten.

Key Mappings

Vim lets you remap anything.

In ~/.vimrc:

nnoremap <C-f> :s/

Now Ctrl+f starts a find-and-replace.

nnoremap means "normal mode, non-recursive mapping." Use inoremap for Insert mode, vnoremap for Visual mode.

Non-recursive means the mapping won't chain. If you map a to b and c to a, pressing c gives you a, not b. This prevents infinite loops.

The Leader key is a prefix for your own custom mappings. The default is \ (backslash); many configs remap it to ,. I use Space:

let g:mapleader = "\<Space>"

Then:

nnoremap <Leader>j 2j

Press Space, then j, to move down two lines.

This is why I love Vim. Total control over keybindings. No conflicts. No restrictions.

Macros

Macros record and replay actions.

Press qq to start recording into register q. Do something. Press q to stop.

Press @q to replay. 10@q replays ten times.

Example: add prefix and suffix to 10 lines.

Record once: I, type prefix, Esc, A, type suffix, Esc, j. Stop recording.

Then 9@q for the remaining lines.

Recording a macro

Replaying a macro

I use macros constantly for transforming JSON, converting enums, reformatting data.

Basic Configuration

Vanilla Vim is spartan. Add this to ~/.vimrc:

Line Numbers

set number
set relativenumber

number shows line numbers. relativenumber replaces them with each line's distance from the cursor; with both set, the cursor line keeps its absolute number while every other line is relative. That's exactly what counts like 10j need.

Relative line numbers

Better Indentation

nnoremap < <<
nnoremap > >>

Now > and < indent without pressing twice.

set shiftwidth=2
set tabstop=2
set autoindent
set smarttab
set expandtab

Two-space indentation with spaces, not tabs.

Better Movement

noremap H ^
noremap L $
noremap J G
noremap K gg

Now H/L go to line start/end, and J/K go to file start/end. Logical extensions of h/j/k/l. Note this shadows two defaults: J normally joins lines and K looks up the word under the cursor.

Vim in VS Code

You don't need to switch editors. VS Code has excellent Vim support.

Two options:

VSCodeVim

VSCodeVim emulates Vim in VS Code.

Configuration lives in settings.json:

{
  "vim.insertModeKeyBindings": [
    { "before": ["j", "j"], "after": ["<Esc>"] }
  ],
  "vim.normalModeKeyBindingsNonRecursive": [
    { "before": ["H"], "after": ["^"] },
    { "before": ["L"], "after": ["$"] }
  ]
}

It works. But you can't reuse your .vimrc, and configuration is verbose.

Good for beginners who want to stay in VS Code.

vscode-neovim

vscode-neovim embeds actual Neovim into VS Code.

This is what I use. It reads your existing Vim config. It's fast. You get real Vim, plus VS Code's extensions.

The catch: you need Neovim 0.5+ installed. More setup, but worth it.

Wrap VS Code-incompatible plugins in your config:

if !exists('g:vscode')
  " Vim-only plugins here
endif

Keep Going

This guide covers the fundamentals. There's much more: splits, buffers, registers, advanced motions.

Read Learn Vim the Smart Way or Practical Vim.

The learning curve is real. But once Vim clicks, you'll wonder how you ever edited text any other way.

Start small. Use Vim mode in VS Code. Let muscle memory build. One day you'll realize the arrow keys feel wrong.

That's when you know it worked.


Developing an App with My AI Intern

I got scolded for missing a message. That's how this started.

Since moving abroad, Messages became my main IM. Problem is, Messages doesn't have a menu bar icon. I hide my Dock. You see where this is going.

I found Doll, an app that puts notification badges in the menu bar. It worked. But the icon rendering was ugly — just color inversion that looked wrong on different backgrounds.

I could live with it. Or I could build something better.

I chose the latter. We shipped it as Badgeify.

The Intern

Electron felt like bringing a semi-truck to a bicycle race. Tauri was the obvious choice. One problem: Tauri means Rust, and I didn't know Rust.

Enter Claude 3.5 Sonnet. My AI intern.

I call it an intern deliberately. It's knowledgeable. It's fast. It's occasionally wrong. You don't trust an intern blindly. You guide them.

Some small functions written by my AI intern

The division of labor was simple: I handled the frontend. The intern handled the Rust parts I didn't understand — macOS Accessibility APIs, SVG-to-PNG conversion, Dock status monitoring.

Most of its code didn't work on first try. Wrong parameters. Deprecated APIs. I'd point it to GitHub repos, give it hints, let it iterate. Eventually, things ran.

Where AI Falls Short

One task: monitor menu bar background color changes. The intern produced elegant code. Clean abstractions. Proper API usage.

It missed every edge case that mattered.

We scrapped it and went with a completely different approach. My approach. This is the gap that matters — AI writes syntactically correct code. It doesn't understand what the code is for.

When the intern got stuck in loops, I stepped in. I learned Rust on the fly. Not from tutorials. From fixing the intern's mistakes.

By the end, I could write most of the Rust myself.

Thanks for teaching me Rust, intern.

Shipping

A few weeks later, the MVP was live on Reddit.

Adding some app icons to the menu bar

The image processing logic? Written by the intern, refined by me. It works beautifully now.

I kept the intern busy after launch. Custom icon uploads. Code review suggestions. Some suggestions were decent. Most weren't. I ignored them.

Will AI Replace Programmers?

No.

If it could, I would've shipped this app in a weekend. Instead, it took weeks. The intern couldn't handle deployment, infrastructure, product decisions. It couldn't handle the parts that actually matter.

Here's what I've learned: AI is a Swiss Army knife. Lots of tools. None of them are the right tool for every job. Sometimes the best solution is to close the AI chat and think.

The future isn't AI replacing programmers. It's programmers with AI interns. Those who can't outperform the intern will have a problem. Everyone else will be fine.


AI didn't replace me. It couldn't.

But it taught me Rust, and for that, I'm grateful.

The best tools don't do the work for you — they make you better at doing it yourself.


How to Make Your Tauri Dev Faster

Tauri is great. Small bundles. Low memory. Not Electron.

But tauri dev can be painfully slow. Change one line of Rust, wait a minute. Watch dependencies recompile for no apparent reason. Wonder if you made a terrible technology choice.

You didn't. The tooling is just fighting itself.

Here's how to fix it.

The Problem

Watch your terminal next time you run tauri dev. Notice how it recompiles dependencies that haven't changed? That's not normal. Something is triggering unnecessary rebuilds.

The culprit: MACOSX_DEPLOYMENT_TARGET.

Tauri reads bundle.macos.minimumSystemVersion from your config and sets this environment variable during builds. If you haven't set it, it defaults to 10.13.

Meanwhile, rust-analyzer runs cargo check in your editor — without that variable set. Different environment means different build cache. Every time you save, both tools invalidate each other's work.

One minute per change. Unacceptable.

The Fix

Make them agree. Add this to your editor settings:

{
  "rust-analyzer.cargo.extraEnv": {
    "MACOSX_DEPLOYMENT_TARGET": "10.13"
  }
}

And this to tauri.conf.json:

{
  "bundle": {
    "macos": {
      "minimumSystemVersion": "10.13"
    }
  }
}

Same value. Both places. Restart everything.

One minute became twenty-five seconds.

Still Slow?

You might still see this: Blocking waiting for file lock on build directory.

rust-analyzer and tauri dev are fighting over the same target folder. One waits for the other. Your build stalls.

Give them separate directories:

{
  "rust-analyzer.cargo.targetDir": "target/analyzer"
}

Now rust-analyzer writes to target/analyzer. tauri dev writes to target. No conflicts.

This also fixes the environment variable problem. Separate caches mean they can't invalidate each other.

Twenty-five seconds became fifteen.

Going Further

Want more? Tweak your Cargo profiles.

You probably don't need full debug info for dependencies. A little optimization can speed up linking.

# src-tauri/Cargo.toml

[profile.dev]
incremental = true
opt-level = 0
debug = true

[profile.dev.package."*"]
opt-level = 1
debug = false

First build after this change will be slow. Dependencies need recompiling. Judge by subsequent builds.

Fifteen seconds became ten.

Summary

Three changes:

  1. Match MACOSX_DEPLOYMENT_TARGET — Same value in editor settings and tauri.conf.json. Stops cache invalidation between tools.

  2. Set rust-analyzer.cargo.targetDir — Separate build caches. No file locks. No environment conflicts. This is the big one.

  3. Optimize dependency builds — Less debug info, slight optimization. Minor gains, some trade-offs.

One minute to ten seconds. Worth the five minutes it took to configure.

Now get back to building.


My Journey with Vim

Seven years with Vim. Still going.

Back then, I was using Sublime Text and Atom. Atom was popular. Hard to imagine now.

My initial Vim configuration

The Spark

Second year of college. Internship. My mentor Doni was an Emacs user. He watched me hammer the arrow keys in Atom and looked at me like I was personally offending him.

So I tried Emacs. It wasn't for me.

Then I found Vim. That was it.

I started with a basic .vimrc from some tutorial. Mapped H/L to ^/$. Made Space my leader key. Probably made a dozen decisions I'd question later.

Here's the thing: I still question some of those mappings. Every time I SSH into a server and realize vanilla vi doesn't work the way my fingers expect, I wonder if I trained myself wrong.

But they work on my machine. That's what matters.

The Obsession

GitHub became my source of inspiration. I'd browse other people's dotfiles for hours, stealing ideas, tweaking endlessly.

One .vimrc file became many. Mappings in one file. Options in another. Plugins managed through vim-plug. I wrote scripts to automate installation. Pushed everything to GitHub. When my laptop died once, I had a new dev environment running in minutes.

Plug 'mbbill/undotree'
Plug 'scrooloose/nerdtree'
Plug 'w0rp/ale'
Plug 'junegunn/fzf.vim'
Plug 'Valloric/YouCompleteMe'

I don't use most of these anymore. Tools come and go. The obsession stays.

Completion plugins alone: YouCompleteMe (slow, painful to install), then deoplete (lighter), then coc.nvim (finally, something that just worked).

The plugins I created

I even wrote my own plugins. resize.vim — because I hated using a mouse to resize windows. vim-fileheader — auto-generates file headers with my name and timestamps.

Looking back, the file header thing is kind of embarrassing. I haven't cared about putting my name in files for years. But building those plugins? Pure joy. Even if Vimscript made me want to quit.

Neovim

At some point — I can't remember exactly when — I switched to Neovim.

Float windows. Built-in LSP. Lua instead of Vimscript. It felt like the future.

My configuration is now fully Lua-based

I rewrote everything in Lua. Years of Vimscript, gone. No regrets.

Now I run lazy.nvim for plugins. telescope.nvim for fuzzy finding. neo-tree.nvim for files. nvim-treesitter for highlighting. nvim-cmp with LSP for everything else.

That's the stack. It'll probably change again. Everything does.

If you're curious: my dotfiles.

The Point

Does Vim make me faster?

Probably not.

Does it matter?

Not at all.

The joy is in the tinkering. The endless configuration. The rabbit holes. The moment something finally clicks and your editor does exactly what you imagined.

I'll probably rewrite my config again soon. I always do.

Seven years in, and I'm still not done. That's the point.


My First Car

Six years before I bought my first car, I was a college kid on a subway in Guangzhou, reading Elon Musk's biography during my summer internship commute. Somewhere between stations, I made a decision: I'd own a Tesla someday.

In September 2022, I did. A white Model 3. Not the Model S I'd dreamed of, but dreams evolve.

Thirteen months later, I sold it. Earlier than planned. Life doesn't care about your plans.

Routes recorded in Teslamate

The negotiation with dealers felt like nothing. But signing that final contract? That hit different. I'd set up Teslamate the week I got the car, tracking every route, every charge. Looking at that map one last time — Shenzhen loops, trips to Chaoshan, roads we'd never take again — I understood what I was letting go.

We never did that long road trip. The one we always talked about. "Next month," we'd say. Next month never came.

11,400 kilometers in thirteen months. Most of it after I stopped commuting. Most of it just... driving. For the joy of it.

Parked roadside at Shimen National Forest Park

The Model 3 was far from perfect. Brutal midday sun through that glass roof. A ride that reminded you of every pothole. But for two people figuring out life together, it was enough.

I kept things minimal. White paint. Tinted windows. A mushroom-shaped air freshener that somehow matched the wood trim. It became a hat rack eventually. Things find their purpose.


Financially, it was a disaster. Bought at the peak, sold in the trough. Classic.

I don't care.

A car isn't an investment. It's a key. Mine unlocked two spontaneous months in Guangzhou, three trips home to Chaoshan, and hours of holiday traffic that I'll somehow remember fondly.

You can calculate the depreciation. You can't calculate what it meant.

Some things aren't meant to be measured.


A Summary of My USDT Cash-Out Process

For certain reasons, my income will be paid in USDT starting this month, so I prepared a hardware wallet in advance to receive it. I had previously looked into the process of cashing USDT out to US dollars, and this time I went through it end to end; the total cost came to roughly 100 USD. The process requires several platforms and tools.

Getting Started with Neovim

Notes from switching to Neovim: migrating an existing Vim configuration, replacing syntastic with ale, remapping Leader to Space, using the Spacegray theme, and improving indentation adjustment. The author sums up what makes Vim convenient and shares his own Vim configuration.

Starting from the optimizeCb Optimization

An introduction to optimizeCb, an internal function in underscore, followed by a discussion of the performance of call versus apply. When the number of arguments is known, using call gives better performance.
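The trick is easy to show in code. This is a paraphrase of the idea rather than underscore's exact source: when the callback's arity is known, return a wrapper that uses call with explicit arguments, and fall back to the generic apply only for the unknown case.

```typescript
// Paraphrase of underscore's optimizeCb: dispatch on the known argument
// count so hot paths use func.call instead of the slower generic func.apply.
function optimizeCb(func: Function, context?: unknown, argCount?: number): Function {
  if (context === undefined) return func; // nothing to bind
  switch (argCount ?? 3) {
    case 1: // e.g. times/sortedIndex callbacks: (value)
      return (value: unknown) => func.call(context, value);
    case 3: // e.g. each/map callbacks: (value, index, collection)
      return (value: unknown, index: number, collection: unknown) =>
        func.call(context, value, index, collection);
    case 4: // e.g. reduce callbacks: (accumulator, value, index, collection)
      return (acc: unknown, value: unknown, index: number, collection: unknown) =>
        func.call(context, acc, value, index, collection);
  }
  return (...args: unknown[]) => func.apply(context, args); // general case
}
```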