Yuexun's Blog

Native macOS Updates in Tauri Shouldn't Be This Hard

January 18, 2026, 08:00

Four Tauri projects. Four times I wrote the same update dialog.

Download progress bar. Markdown release notes. "Remind Me Later" button. Same code, copied across repos, maintained separately. By the fourth project, I was done.

The Problem

Tauri's built-in updater downloads binaries and verifies signatures. It has a basic dialog: release notes, two buttons, that's it.

No download progress. No "Remind Me Later." No background checking.

You want any of that, you build it yourself.

Meanwhile, every native macOS app uses Sparkle. Raycast. CleanShot. Countless others. Same dialog. Same flow. Users recognize it immediately.

Sparkle handles everything I kept rebuilding: background checks, phased rollouts, localization. All the edge cases I'd never think to handle.

So why was I still writing custom update UI?

The Answer

Sparkle is Objective-C. Tauri is Rust. Bridging them isn't trivial.

But "not trivial" isn't "impossible."

The Plugin

So I built one. tauri-plugin-sparkle-updater wraps Sparkle's SPUStandardUpdaterController and exposes it to Tauri.

Rust:

tauri::Builder::default()
    .plugin(tauri_plugin_sparkle_updater::init())
    .run(tauri::generate_context!())
    .expect("error while running tauri application");

TypeScript:

import { checkForUpdates } from 'tauri-plugin-sparkle-updater-api';

await checkForUpdates();

Call checkForUpdates(). Sparkle takes over. Native dialog. Progress bar. Release notes. Done.

For programmatic control, the plugin exposes 41 commands and 18 events. Build a custom flow if you want. But you don't have to.
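As a sketch of what a custom flow might look like: the reducer below models update state driven by plugin events. The event names ("update-available", "download-progress", "download-finished") are hypothetical placeholders, not the plugin's documented API; check the plugin's docs for the real command and event names.

```typescript
// Hypothetical event-driven update flow. Keeping the state logic in a
// plain reducer means the UI code stays testable outside of Tauri.

type UpdateState = { status: "idle" | "downloading" | "ready"; percent: number };

function reduce(state: UpdateState, name: string, payload?: number): UpdateState {
  switch (name) {
    case "update-available":
      // An update exists; start tracking a download.
      return { status: "downloading", percent: 0 };
    case "download-progress":
      // Payload carries the current percentage.
      return { ...state, percent: payload ?? state.percent };
    case "download-finished":
      return { status: "ready", percent: 100 };
    default:
      return state;
  }
}

// In an app, each plugin event would feed the reducer (e.g. via Tauri's
// event `listen`); here we just replay a plausible event sequence.
let state: UpdateState = { status: "idle", percent: 0 };
const events = [
  ["update-available", undefined],
  ["download-progress", 42],
  ["download-finished", undefined],
] as const;
for (const [name, payload] of events) {
  state = reduce(state, name, payload);
}
```

The point of the sketch: the plugin emits events, your code folds them into state, your UI renders state. But again, with Sparkle's standard controller you rarely need any of this.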

Cross-Platform

This only works on macOS. That's the point.

Cross-platform doesn't mean identical UI everywhere. It means doing each platform well.

#[cfg(target_os = "macos")]
builder = builder.plugin(tauri_plugin_sparkle_updater::init());

#[cfg(not(target_os = "macos"))]
builder = builder.plugin(tauri_plugin_updater::Builder::new().build());

Your Windows users get updates. Your macOS users get native updates.

The best update experience is invisible. Users click a button, see a familiar dialog, move on. No one should have to build that from scratch.

Why MCP Matters

January 11, 2026, 08:00

"MCP is a fad." I've seen this take floating around. Too complex. Security holes everywhere. Just another wrapper around function calls.

I disagree.

The Problem No One Solved

Every protocol we use today was designed with the same assumption: a human sits in the middle. REST. GraphQL. gRPC. A human reads the docs. A human writes the integration. A human debugs when things break.

Agents don't work that way.

They can read documentation, sure. But they can't interpret what's missing from it. They need structured, machine-readable contracts. Capabilities they can discover at runtime. Error semantics they can actually parse.

This is what MCP actually solves. Not "a better way to call functions." The first protocol that treats agents as first-class citizens.

A Different Mental Model

The obvious answer is "tools for AI." That's not it.

MCP is bidirectional. Servers can request LLM completions from the host. The tool asks the agent for help. This isn't RPC. It's a conversation. REST can't express this pattern at all.

MCP is resource-aware. Instead of an agent repeatedly calling APIs and parsing responses (wasting tokens every time), it exposes structured data directly. Schemas, documentation, configs. Load once, work with it.

MCP is dynamic. Client and server negotiate capabilities when they connect. The agent doesn't need prior knowledge of every tool. It discovers them.

These aren't incremental improvements over REST. They're a different way of thinking about how software talks to AI.
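The three properties above can be sketched as plain JSON-RPC 2.0 messages. The method and field names follow the MCP specification; the concrete values (protocol version, URI, prompt text) are illustrative only:

```typescript
// 1. Dynamic: client and server negotiate capabilities at connect time.
const initialize = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18",
    capabilities: { sampling: {} }, // the client offers sampling
    clientInfo: { name: "example-host", version: "1.0.0" },
  },
};

// 2. Resource-aware: the agent loads structured data once instead of
// repeatedly calling an API and re-parsing responses.
const readResource = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read",
  params: { uri: "file:///project/schema.sql" },
};

// 3. Bidirectional: the *server* asks the host's LLM for a completion.
const samplingRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarize this diff." } },
    ],
    maxTokens: 256,
  },
};
```

Note the direction of the third message: it flows from server to client. That is the part REST has no vocabulary for.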

STDIO Was Never the Point

MCP launched with STDIO transport. Client spawns server as a subprocess. Communication happens through stdin and stdout. Dead simple.
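That subprocess flow fits in a few lines. A minimal sketch, assuming a hypothetical server binary named `example-mcp-server` and omitting error handling:

```typescript
import { spawn } from "node:child_process";

// In the STDIO transport, each JSON-RPC message is a single JSON
// object on its own line -- no headers, no framing beyond a newline.
const frame = (msg: object): string => JSON.stringify(msg) + "\n";

// The client owns the server's lifecycle: spawn it, pipe its
// stdin/stdout, and the "connection" exists until the process exits.
function connectStdio(command: string) {
  const child = spawn(command, { stdio: ["pipe", "pipe", "inherit"] });
  const send = (msg: object) => child.stdin!.write(frame(msg));
  return { child, send };
}

// Usage (hypothetical binary name):
// const { send } = connectStdio("example-mcp-server");
// send({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} });
```

One client, one process, one machine. Every limitation the critics list falls straight out of this design.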

Critics love pointing out the limitations. Single client. No authentication. Local only. Tightly coupled process lifecycles. All valid criticisms.

But they miss the point.

STDIO was the MVP. It proved one thing: Claude Desktop could call local tools through a standardized protocol. That's all it needed to prove.

HTTP 0.9 had no headers, no status codes, no content types. The first version of any protocol exists to validate the idea. Evolution comes after.

OAuth Is Where It Gets Real

The introduction of Streamable HTTP and OAuth 2.1 transformed MCP from a local hack into genuine infrastructure.

Streamable HTTP unlocks what STDIO couldn't. Multi-client servers. Session management. Disconnection recovery. An MCP server can now run as a proper service, deployed independently, serving multiple agents at once.

OAuth solves the trust problem that STDIO ignored entirely. When an agent reads your email or accesses your calendar, it's acting on your behalf. That requires proper consent flows. Token binding to specific resources. Step-up authorization when the agent needs elevated permissions.

This is what an agent ecosystem actually needs. Not local script execution. Secure, authorized access to user data across services.
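From the client side, a Streamable HTTP request looks roughly like this. The endpoint URL and token are hypothetical; the `Accept` and `Mcp-Session-Id` headers follow the Streamable HTTP transport, and the bearer token comes out of an OAuth 2.1 flow:

```typescript
// Headers for a request to a remote MCP server (sketch).
function buildHeaders(sessionId: string, accessToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // Responses may arrive as plain JSON or as an SSE stream.
    Accept: "application/json, text/event-stream",
    // OAuth 2.1 access token authorizing the agent to act for the user.
    Authorization: `Bearer ${accessToken}`,
    // Server-assigned session ID, echoed on every subsequent request.
    "Mcp-Session-Id": sessionId,
  };
}

// Hypothetical endpoint; each JSON-RPC message is POSTed to it.
async function callRemoteServer(body: object, sessionId: string, accessToken: string) {
  return fetch("https://mcp.example.com/mcp", {
    method: "POST",
    headers: buildHeaders(sessionId, accessToken),
    body: JSON.stringify(body),
  });
}
```

Compare this with the STDIO transport: the session ID replaces the process lifecycle, and the token replaces "whoever spawned the process is trusted."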

Evolution as Feature

Some criticize MCP for changing. For having limitations early on. For not shipping perfect on day one.

I see it differently.

A protocol's value isn't measured by its initial state. It's measured by its capacity to evolve. MCP now has formal governance: Specification Enhancement Proposals, Working Groups, oversight from the Linux Foundation. The community is actively tackling real problems.

The protocol keeps improving. That's exactly the point.

Honest Tradeoffs

MCP isn't universally necessary. I should be upfront about that.

For coding agents with direct filesystem and shell access, the value proposition gets murky. These agents write their own glue code. They're often good at it.

But think about web-based assistants. ChatGPT. Claude.ai. They can't spawn subprocesses or touch local files. They need a standardized way to connect to external services: your calendar, your email, your company's internal tools.

Here, MCP provides genuine value. One integration that works across every AI platform.

For enterprises, this translates to real savings. Expose MCP as a connector layer on top of existing APIs. Write the integration once, support every AI client. Not technical elegance. Ecosystem efficiency.

The Bet

Will MCP become the standard? I don't know.

But the protocols we have were built for humans. Agents need their own. Someone has to build it.

That's why MCP matters.

My Journey with Vim

May 9, 2024, 08:00

Seven years with Vim. Still going.

Back then, I was using Sublime Text and Atom. Atom was popular. Hard to imagine now.

My initial Vim configuration

The Spark

Second year of college. Internship. My mentor Doni was an Emacs user. He watched me hammer the arrow keys in Atom and looked at me like I was personally offending him.

So I tried Emacs. It wasn't for me.

Then I found Vim. That was it.

I started with a basic .vimrc from some tutorial. Mapped H/L to ^/$. Made Space my leader key. Probably made a dozen decisions I'd question later.

Here's the thing: I still question some of those mappings. Every time I SSH into a server and realize vanilla vi doesn't work the way my fingers expect, I wonder if I trained myself wrong.

But they work on my machine. That's what matters.

The Obsession

GitHub became my source of inspiration. I'd browse other people's dotfiles for hours, stealing ideas, tweaking endlessly.

One .vimrc file became many. Mappings in one file. Options in another. Plugins managed through vim-plug. I wrote scripts to automate installation. Pushed everything to GitHub. When my laptop died once, I had a new dev environment running in minutes.

Plug 'mbbill/undotree'
Plug 'scrooloose/nerdtree'
Plug 'w0rp/ale'
Plug 'junegunn/fzf.vim'
Plug 'Valloric/YouCompleteMe'

I don't use most of these anymore. Tools come and go. The obsession stays.

Completion plugins alone: YouCompleteMe (slow, painful to install), then deoplete (lighter), then coc.nvim (finally, something that just worked).

The plugins I created

I even wrote my own plugins. resize.vim — because I hated using a mouse to resize windows. vim-fileheader — auto-generates file headers with my name and timestamps.

Looking back, the file header thing is kind of embarrassing. I haven't cared about putting my name in files for years. But building those plugins? Pure joy. Even if Vimscript made me want to quit.

Neovim

At some point — I can't remember exactly when — I switched to Neovim.

Float windows. Built-in LSP. Lua instead of Vimscript. It felt like the future.

My configuration is now fully Lua-based

I rewrote everything in Lua. Years of Vimscript, gone. No regrets.

Now I run lazy.nvim for plugins. telescope.nvim for fuzzy finding. neo-tree.nvim for files. nvim-treesitter for highlighting. nvim-cmp with LSP for everything else.

That's the stack. It'll probably change again. Everything does.

If you're curious: my dotfiles.

The Point

Does Vim make me faster?

Probably not.

Does it matter?

Not at all.

The joy is in the tinkering. The endless configuration. The rabbit holes. The moment something finally clicks and your editor does exactly what you imagined.

I'll probably rewrite my config again soon. I always do.

Seven years in, and I'm still not done. That's the point.

My First Car

December 11, 2023, 17:42
In June 2022, I ordered the first car of my life. Rewind six years: I was still in school, had stumbled into a summer internship in Guangzhou, and read 《硅谷钢铁侠》 (the Chinese edition of the Elon Musk biography) on my subway commute. Ever since, I'd wanted to buy a Tesla someday. In late September, I took delivery of a white Tesla Model 3, just as I'd hoped. Not the Model S I'd been dreaming of, but I was happy all the same.

Vim Tinkering Notes

November 27, 2023, 19:39
Yesterday on WeChat Reading I came across Chi Jianqiang's 《MacTalk 人生元编程》 and finished it in about a day. An essay in it about Vim rekindled my interest in the editor (yes, I've been interested many times before; every time, laziness won out..). I'd always been too lazy to configure it bit by bit, and I didn't want to use someone else's config. If it's my own editor, I want to tinker with it myself, so today, while I still have some enthusiasm for it...