to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- April 23, 2026
-
🔗 Console.dev newsletter HyperFrames rss
Description: Write HTML. Render video.
What we like: Compositions are HTML using data attributes rather than React, so it doesn't require a build step or bundler. Supports seekable, frame-accurate animations. Easy to preview in the browser. Export to MP4. Includes AI agent skills, or you can start manually.
What we dislike: Preview does the rendering work in real time, so large compositions can stutter during preview in a way that doesn't happen once rendered.
-
🔗 Console.dev newsletter Pijul rss
Description: Distributed version control.
What we like: Independent changes can be applied in any order without changing the result - much simpler than rebase. Conflicts are expected and considered a first-class state that can be resolved, then never come back. Changes are stored as patches which model an atomic unit of work rather than a snapshot or version.
What we dislike: Development seems sporadic, although the project is still active.
-
- April 22, 2026
-
🔗 badlogic/pi-mono v0.69.0 release
New Features
- TypeBox 1.x migration for extensions and SDK integrations, including TypeBox-native tool argument validation that now works in eval-restricted runtimes such as Cloudflare Workers. See docs/extensions.md and docs/sdk.md.
- Stacked extension autocomplete providers via `ctx.ui.addAutocompleteProvider(...)`, allowing extensions to layer custom completion logic on top of built-in slash and path completion. See docs/extensions.md#autocomplete-providers and examples/extensions/github-issue-autocomplete.ts.
- Terminating tool results via `terminate: true`, allowing custom tools to end on a final tool call without paying for an automatic follow-up LLM turn. See docs/extensions.md and examples/extensions/structured-output.ts.
- OSC 9;4 terminal progress indicators during agent streaming and compaction for supporting terminals.
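As a sketch of the terminating-tool idea above: a tool result that carries `terminate: true` ends the batch without requesting a follow-up LLM turn. The surrounding shapes here are hypothetical, not pi's actual extension interface; only the `terminate: true` field comes from the release notes.

```typescript
// Hypothetical tool-result shape; only `terminate` is taken from the notes.
interface ToolResult {
  content: string;
  terminate?: boolean; // when true, no automatic follow-up LLM call is made
}

// A custom tool that emits a final structured answer and ends the turn.
function emitStructuredOutput(payload: unknown): ToolResult {
  return {
    content: JSON.stringify(payload),
    terminate: true,
  };
}

const result = emitStructuredOutput({ status: "done", items: 3 });
console.log(result.terminate); // true
```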
Breaking Changes
- Migrated first-party coding-agent code, SDK/examples/docs, and package metadata from `@sinclair/typebox` 0.34.x to `typebox` 1.x. New extensions, SDK integrations, and pi packages should depend on and import from `typebox`. Legacy extension loading still aliases the root `@sinclair/typebox` package, but `@sinclair/typebox/compiler` is no longer shimmed. This migration also picks up the new `@mariozechner/pi-ai` TypeBox-native validator path, so tool argument validation now works in eval-restricted runtimes such as Cloudflare Workers instead of being skipped (#3112)
- Session-replacement commands now invalidate captured pre-replacement session-bound extension objects after `ctx.newSession()`, `ctx.fork()`, and `ctx.switchSession()`. Old `pi` and command `ctx` references now throw instead of silently targeting the replaced session. Migration: if code needs to keep working in the replacement session after one of those calls, pass `withSession` to that same method and do the post-switch work there. In practice, move post-switch `pi.sendUserMessage()`, `pi.sendMessage()`, and command-`ctx`/session-manager access into `withSession`, and use only the `ReplacedSessionContext` passed to that callback for session-bound operations. Footguns: `withSession` runs after the old extension instance has already received `session_shutdown`, old cleanup may already have invalidated captured state, captured old `pi` / old command `ctx` are stale, and previously extracted raw objects such as `const sm = ctx.sessionManager` remain the caller's responsibility and must not be reused after the switch.
Added
- Added support for terminating tool results via `terminate: true`, allowing custom tools to end the current tool batch without an automatic follow-up LLM call, plus a `structured-output.ts` extension example and extension docs showing the pattern (#3525)
- Added OSC 9;4 terminal progress indicators during agent streaming and compaction, so terminals like iTerm2, WezTerm, Windows Terminal, and Kitty show activity in their tab bar
- Added `ctx.ui.addAutocompleteProvider(...)` for stacking extension autocomplete providers on top of the built-in slash/path provider, plus a `github-issue-autocomplete.ts` example and extension docs (#2983)
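The OSC 9;4 sequence mentioned above is a ConEmu-style terminal progress escape. A minimal emitter might look like this; the state/percent semantics are assumed from common terminal support (0 = clear, 1 = normal progress, 3 = indeterminate), not taken from pi's source:

```typescript
// Emit an OSC 9;4 progress sequence: ESC ] 9 ; 4 ; state ; percent BEL.
// Supported by terminals such as Windows Terminal, WezTerm, and ConEmu.
function oscProgress(state: number, percent = 0): string {
  return `\x1b]9;4;${state};${percent}\x07`;
}

process.stdout.write(oscProgress(1, 40)); // show 40% progress in the tab bar
process.stdout.write(oscProgress(3));     // indeterminate spinner
process.stdout.write(oscProgress(0));     // clear the indicator
```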
Fixed
- Fixed exported session HTML to sanitize markdown link URLs before rendering them into anchor tags, blocking `javascript:`-style payloads while preserving safe links in shared/exported sessions (#3532)
- Fixed `ctx.getSystemPrompt()` inside `before_agent_start` to reflect chained system-prompt changes made by earlier `before_agent_start` handlers, and clarified the extension docs around provider-payload rewrites and what `ctx.getSystemPrompt()` does and does not report (#3539)
- Fixed built-in `google-gemini-cli` model lists and selector entries to include `gemini-3.1-flash-lite-preview`, so Cloud Code Assist users no longer need manual `--model` fallback selection to use it (#3545)
- Fixed extension session-replacement flows so `ctx.newSession()`, `ctx.fork()`, `ctx.switchSession()`, and imported-session replacements fully rebind before post-switch work runs, added `withSession` replacement callbacks with fresh `ReplacedSessionContext` helpers, and made stale pre-replacement `pi`/`ctx` session-bound accesses throw instead of silently targeting the wrong session (#2860)
- Fixed `models.json` built-in provider overrides to accept `headers` without requiring `baseUrl`, so request-header-only overrides now load and apply correctly (#3538)
-
🔗 sacha chua :: living an awesome life YE20: Emacs Carnival: Newbies/starter kits rss
This was a rough braindump on what I might want to write or do for the Emacs Carnival theme this month.
- Emacs Carnival April 2026: newbies/starter kits
- How I got into Emacs
- Start with why:
- TODO: possibly a post about where people come from and typical resources, next steps
- Obstacles:
- A. Isolation
- B. Overwhelm
- Breaking things down into manageable pieces
- C. Balance of time: tinkering with config vs doing actual stuff
- D. Unknowns: different vocabulary, don't even know what's possible
- What's close by?
- Curious
- Cool demo
- Reputation
- Someone else
- Leisure vs wanting to be productive ASAP
- Journey:
- Outsiders
- Newbie
- Basic working environment
- Intermediate
- Packages
- Configuration
- Advanced
- Writing custom code
- Stuff I work on / can tinker with
- inspiration helps with isolation (A) and unknowns (D)
- Emacs News
- TODO: Add intro
- how to use it
- how to subscribe
- resources for getting help, finding meetups, etc.
- TODO: Add resources (esp. beginner resources) to map and EmacsWiki
- TODO: Add intro
- EmacsConf
- Emacs News
- Meetups, Emacs Calendar
- Videos, livestreams
- Reading people's configurations, demonstrating workflow, showing how to incorporate them
- How to Learn Emacs: A Hand-drawn One-pager for Beginners / A visual tutorial
- TODO: Needs updates: URLs, etc.
- inspiration helps with isolation (A) and unknowns (D)
- Beginner map
- My Emacs configuration
- Starter kits
- emacs.tv
- Mastering Emacs
- Emacs Lisp Elements | Protesilaos
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life May 7: Emacs Chat with Shae Erisson rss
On May 7, I'll chat with Shae Erisson about Emacs and life.
(America/Toronto UTC-4) = Thu May 7 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
- Shae Erisson: Haskell, Python, Swedish, knitting, mountain unicycling, contact juggling
- Shae Erisson's blog - 1. DO SOMETHING 2. BRAG ABOUT IT
- Shae Erisson (@shapr@recurse.social) - recurse.social
- shapr/markovkeyboard: keyboard layout that changes by markov frequency · GitHub
This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/may-7-emacs-chat-with-shae-erisson/
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Yorkshire Middleton Woods, Ilkley rss
submitted by /u/Inevitable-Debt4312
-
🔗 sacha chua :: living an awesome life May 21: Emacs Chat with Raymond Zeitler rss
On May 21, I'll chat with Raymond Zeitler about Emacs and life.
America/Toronto = Thu May 21 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/emacs-chat-with-raymond-zeitler/
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life June 18: Emacs Chat with Ross A. Baker rss
America/Toronto = Thu Jun 18 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
On June 18, I'll chat with Ross Baker about Emacs and life.
This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/04/june-18-emacs-chat-with-ross-a-baker/
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life May 4: Emacs Chat with Amin Bandali rss
(America/Toronto UTC-4) = Mon May 4 1400H EDT / 1300H CDT / 1200H MDT / 1100H PDT / 1800H UTC / 2000H CEST / 2100H EEST / 2330H IST / Tue May 5 0200H +08 / 0300H JST
On May 4, I'll chat with Amin Bandali about Emacs and life.
This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/emacs-chat-with-amin-bandali/
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Harrogate Cafe / Lunch spot recommendations? rss
Did a bit of cursory research and Tilly Peepers looks to fit the bill. Any other gems / favourite cafes I should know about?
Looking for brunch style stuff and good sandwiches. Betty's is great, but not what we are looking for on this occasion. Just want a cosy spot with quality food and a good cup of tea.
submitted by /u/Ubik_Fresh
-
🔗 r/reverseengineering [Release] LCSAJdump v2.0: I added an ML ranking engine to my gadget finder (and thanks for 7k downloads!) rss
submitted by /u/LCSAJdump
-
🔗 roboflow/supervision [RC] supervision-0.28 release
No content.
-
🔗 r/Leeds Best adult shop in area rss
Please no snide comments.
I'm looking for the most discreet adult shop in Leeds, or even a bit further, I don't mind a drive (in fact it could be preferable).
I'd like something with a good range and a discreet entrance/location.
Serious replies only please!
submitted by /u/Common_Advantage_835
-
🔗 @binaryninja@infosec.exchange Binary Ninja 5.3 adds new BNTL utilities for easier type library workflows in mastodon
Binary Ninja 5.3 adds new BNTL utilities for easier type library workflows in both the UI and headless environments. WARP also gets a cleaner server experience, with bundled Linux signatures helping complete the shift away from SigKit. https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#types --signatures
-
🔗 r/LocalLLaMA unsloth Qwen3.6-27B-GGUF rss
finally with files inside :) submitted by /u/jacek2023
-
🔗 r/york Anyone know a good skip hire in York? rss
Just what the title says. I just got a fence replaced and need to get rid of the old one. Anyone have a skip hire they've used in the past that won't cost 100s of £?
submitted by /u/Trent-Popverse
-
🔗 r/Yorkshire Teachers at East Yorkshire primary school strike over 'physical and verbal abuse' rss
submitted by /u/Kagedeah
-
🔗 r/Yorkshire Mirror, mirror… Red Squirrel, Yorkshire Dales rss
submitted by /u/aspiranthighlander
-
🔗 r/LocalLLaMA Qwen3.6-27B released! rss
Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! Yes, 27B, and Qwen3.6-27B punches way above its weight. 👇
What's new:
- Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks
- Strong reasoning across text & multimodal tasks
- Supports thinking & non-thinking modes
- Apache 2.0 — fully open, fully yours
Smaller model. Bigger results. Community's favorite. ❤️ We can't wait to see what you build with Qwen3.6-27B!
Blog: https://qwen.ai/blog?id=qwen3.6-27b
Qwen Studio: https://chat.qwen.ai/?models=qwen3.6-27b
Github: https://github.com/QwenLM/Qwen3.6
Hugging Face: https://huggingface.co/Qwen/Qwen3.6-27B https://huggingface.co/Qwen/Qwen3.6-27B-FP8
submitted by /u/ResearchCrafty1804
-
🔗 r/LocalLLaMA Qwen 3.6 27B is out rss
-
🔗 r/york Ey Up Everyone! I am a student from Singapore and I love collecting postcards. I would love to receive postcards from York 🙂. Can someone send me one? rss
Ey Up Everyone! I'm a student from Singapore and I enjoy collecting postcards. I would be very grateful to receive postcards from York. 🙂 If postcards aren't available, I'd also really appreciate a greeting card, city card, or even a small souvenir (like a keychain, rock, local snack, flag, ornament, cap, T-shirt, or handmade craft). This is for my personal collection, and not for any commercial purpose. If you're willing to help, please leave a comment and I'll share my mailing address with you. Thank you very much, and warm greetings from Singapore! 🇸🇬🤝🏴
submitted by /u/Nessieinternational
-
🔗 r/york Full Moo Ice Cream boat rss
Has anyone seen the Full Moo ice cream boat this year? I know last year it started appearing during the Easter holiday, but I've still not seen it this year. Has anyone heard anything about its status?
submitted by /u/iredxx
-
🔗 r/LocalLLaMA Qwen3.6-35B becomes competitive with cloud models when paired with the right agent rss
A short follow-up to my previous post, where I showed that changing the scaffold around the same 9B Qwen model moved benchmark performance from 19.11% to 45.56%:
https://www.reddit.com/r/LocalLLaMA/s/JMHuAGj1LV
After feedback from people here, I tried little-coder with Qwen3.6 35B.
It now lands in the public Polyglot top 10 with a success rate of 78.7%, making it actually competitive with the best models out there for this benchmark!
At this point I’m increasingly convinced that part of the performance gap to cloud models is harness mismatch: we may have been testing local coding models inside scaffolds built for a different class of model.
Next up is Terminal Bench, then likely GAIA for research capabilities. Would love to hear your feedback here!
EDIT: after many requests, pi.dev adaptation is up!
Full write up: https://open.substack.com/pub/itayinbarr/p/honey-i-shrunk-the-coding-agent
GitHub: https://github.com/itayinbarr/little-coder
Full benchmark results: https://github.com/itayinbarr/little-coder/blob/main/docs/benchmark-qwen3.6-35b-a3b.md
submitted by /u/Creative-Regular6799
-
🔗 r/Leeds Can anyone recommend a dog groomer for a daft little Pom Chi, ideally in the Kirkstall / Horsforth area rss
Have recently moved back to Leeds and my pup could do with a trim - Poms have tricky coats so I'm looking for a dog groomer who is experienced in that. Any recommendations would be greatly appreciated
submitted by /u/illustratejacket
-
🔗 r/Yorkshire This is my peace place🫶 Wish I could be there now!🥺 rss
submitted by /u/HammersAndPints
-
🔗 r/Yorkshire Taken at Ewden yesterday afternoon, felt a bit like a Monet scene. rss
submitted by /u/knackered_biker
-
🔗 r/Leeds Potential scam / warning for students in Leeds (unpaid “internships”) rss
Just wanted to raise awareness, especially for international students in Leeds.
There’s an individual claiming to be the CEO of two companies, restartldn and SporeAndPour they are on instagram, who has recently been advertising unpaid “internship” opportunities.
From what I’ve seen:
- There doesn’t appear to be any real ongoing business activity
- No visible sales, customers, or revenue streams
- The presence seems to be mostly social media content (Instagram videos, etc.)
- The work being offered is unpaid, with unclear structure or outcomes
It looks like a lot is being portrayed online, but there’s little evidence of actual business operations behind it.
I’m not making direct accusations, but I’d strongly advise people to do proper research before committing your time, especially to unpaid roles. If you’re going to work for free, it should at least be with a legitimate, established company where you gain real experience and value.
Just putting this out there so people can make informed decisions.
submitted by /u/Miserable_Bridge_146
-
🔗 r/reverseengineering [CrackMe] I built a custom C++ stack-machine VM. I dare you to break it. rss
submitted by /u/PynaBola
-
🔗 r/Leeds Get reminders to avoid match day congestion near Elland Road rss
Hi all,
If you live near or travel through areas around Elland Road, you've probably noticed that some days it suddenly gets much busier - traffic, parking, and general footfall all spike.
A lot of that comes down to football matches at Elland Road, but it's not always obvious when they're happening or when crowds will peak.
I put together a simple free tool that tracks those matchdays and shows when crowds are likely to build and clear, so you can plan around it a bit more easily.
It lets you:
- See when matchday crowds are likely to affect different areas
- Get reminders before things get busy
- Get a heads-up when parking restrictions may apply
It's free, no ads or sign-ups - just something I built for myself that I thought others might find useful too.
Would this be helpful for anyone here?
https://nexthomegame.co.uk/leeds-united-elland-road
submitted by /u/richelectron
-
🔗 r/wiesbaden Treffen in Wiesbaden rss
Hi, I'm a 23-year-old woman and I'd love to make new contacts in the city. I'm not originally from here, so it's harder than expected to meet new people.
Maybe there are a few people here in the same situation :)
I'd be open to all sorts of activities: coffee, a walk, enjoying the sun by the Rhine, and so on.
submitted by /u/heyheyheyoooooo
-
🔗 backnotprop/plannotator v0.19.0 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
v0.17.2 | Supply-chain hardening, sticky toolstrip and badges, overlay scrollbars, external annotation highlighting, Conventional Comments
v0.17.1 | Pi PR review parity, parseRemoteUrl rewrite, cross-repo clone fixes, diff viewer flash fix
v0.17.0 | AI code review agents, token-level annotation, merge-base diffs
What's New in v0.19.0
v0.19.0 lands updates across all four surfaces — Plan/Annotate, Code Review, Pi, and Claude Code. Four PRs, one from a first-time contributor.
Plan / Annotate
GitHub-Flavored Markdown
The in-app reader now matches GitHub's rendering across blocks and inline. Raw HTML blocks (`<details>`, `<summary>`, and friends) render through `marked` plus DOMPurify, with nested markdown preserved; `innerHTML` is set imperatively via ref + `useEffect` so React reconciliation doesn't collapse an open `<details>` on rerender. GitHub alerts (`> [!NOTE]`, `[!TIP]`, `[!WARNING]`, `[!CAUTION]`, `[!IMPORTANT]`) render with inline Octicons and Primer colors, honoring `prefers-color-scheme`. Directive containers (`:::kind ... :::`) cover project-specific callouts, and every heading now carries a slug-derived anchor id.

Inline gains came alongside: bare URL autolinks with trailing-punctuation trimming; `@mentions` and `#issue-refs` that render as clickable links when the repo is a GitHub repo and styled spans otherwise; 29 curated emoji shortcodes (`:wave:`, `:rocket:`, …); and smart punctuation (curly quotes, em and en dashes, ellipsis). All inline transforms run after the code-span regex has consumed code content, so backticks stay literal for shell and regex snippets.

The refactor that landed with the feature is as important as the feature itself: `InlineMarkdown`, `BlockRenderer`, and the new block components were pulled out of `Viewer.tsx` into dedicated files. Viewer dropped from 1279 to 770 lines. New block features now land in `blocks/*.tsx` rather than swelling Viewer further. DOMPurify's allowlist blocks `on*` handlers, `style` attrs, and scripts; `sanitizeLinkUrl` strips `javascript:`, `data:`, `vbscript:`, and `file:` protocols. Total bundle cost: +1.8KB gzipped.

Copy Table as Markdown or CSV
Every markdown table now has a small toolbar. Copy the rendered table as markdown (round-trip ready for another document) or as RFC 4180 CSV (safe to paste into a spreadsheet — commas, quotes, and newlines are escaped per the spec). Useful when a plan or annotated doc includes a comparison table you want to extract.
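A minimal sketch of the RFC 4180 field escaping that copy-as-CSV implies: quote any field containing a comma, quote, or newline, and double embedded quotes. This is an illustration, not Plannotator's actual implementation.

```typescript
// Escape a single field per RFC 4180: wrap in quotes when it contains a
// comma, double quote, or line break, doubling any embedded quotes.
function toCsvField(value: string): string {
  if (/[",\n\r]/.test(value)) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}

// Join a row of fields into one CSV line.
function toCsvRow(fields: string[]): string {
  return fields.map(toCsvField).join(",");
}

console.log(toCsvRow(["plain", "a,b", 'say "hi"'])); // plain,"a,b","say ""hi"""
```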
Table Popout
Tables can also pop out into a dedicated overlay for wide or dense data that doesn't fit the reader flow. The popout gives the table its own scroll container, so you can read across columns without competing with the document's own scroll position. Cycle back to the inline table when you're done.
- All three features shipped in #597
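The protocol-based link sanitization mentioned above could be sketched like this; the `sanitizeLinkUrl` name comes from the release notes, but the body is an assumption about how such a check typically works:

```typescript
// Block dangerous URL schemes before a URL reaches an anchor's href.
// Control characters are stripped first so they can't hide a scheme.
const BLOCKED_PROTOCOLS = /^\s*(javascript|data|vbscript|file)\s*:/i;

function sanitizeLinkUrl(url: string): string | null {
  const cleaned = url.replace(/[\u0000-\u001f]/g, "");
  return BLOCKED_PROTOCOLS.test(cleaned) ? null : cleaned;
}

console.log(sanitizeLinkUrl("https://example.com")); // https://example.com
console.log(sanitizeLinkUrl("javascript:alert(1)")); // null
```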
Code Review
Custom Settings Per Agent
Every review agent now has first-class model and effort/reasoning controls. Claude Review exposes `--model` and `--effort`. Codex Review exposes `-m`, `-c model_reasoning_effort=...`, and `-c service_tier=fast`. Code Tour supports both engines, each with its own model and effort settings. The Settings dropdowns dropped the hidden "Default" option in favor of explicit sensible defaults — Opus 4.7 / High for Claude, GPT-5.3 Codex / High for Codex, Sonnet / Medium for Tour Claude, GPT-5.3 Codex / Medium for Tour Codex.

Codex reasoning's invalid `none` option is gone (codex-rs rejects it); a one-shot cookie migration rewrites existing users' `none` values to the default on load so nobody keeps launching a broken flag.

Settings persistence moved to a single `plannotator.agents` cookie that holds the whole agent settings tree, keyed per (agent × model). Switching models reveals the effort/reasoning/fast-mode you last used with that specific model. React state is the authority; the cookie mirrors it; all mutations funnel through a single owner via functional `setState`, so rapid successive changes can't stale-read or lose writes.

The job card badge now carries the full story — `Claude · Opus 4.7 · High`, `Codex · GPT-5.3 Codex · Medium · Fast`, `Tour · Claude · Sonnet · Medium` — and the main dropdown reads action-first: `Code Review · Claude`, `Code Review · Codex`, `Code Tour`.

Code Tour
Alongside Claude Review and Codex Review, Plannotator now ships a third review agent: Code Tour. Point it at a PR and it produces a guided walkthrough — greeting, stated intent, before/after framing, ordered stops with inline diff anchors, key takeaways, and a QA checklist — rendered in a three-page animated dialog. Similar in spirit to Cursor's and Graphite's PR walkthroughs, but wired into the same review surface you already use. Demos coming.
The tour auto-opens when the job reaches a terminal state. Checklist state persists across dialog open/close within a review session, and pending saves are flushed on unmount with `keepalive: true` so closing the dialog during the 500ms debounce window never drops a tick.

Both Claude and Codex can drive the tour. Claude streams JSONL via stdin; Codex writes to a file via `--output-schema`. If the model returns empty or malformed output, the job flips to `failed` with a clear error rather than silently 404ing the dialog. Under `prefers-reduced-motion`, page navigation swaps directly instead of waiting on an `onAnimationEnd` that would otherwise soft-lock the walkthrough behind the intro. The Claude allowlist permits `gh issue view`, `gh api repos/*/*/issues/*`, and `glab issue view`, so when the prompt follows a `Fixes #123`, the agent can actually read the linked issue.

Under the hood, a shared `createTourSession()` factory owns the lifecycle — `buildCommand`, `onJobComplete`, `getTour`, `saveChecklist` — so the Bun hook server and the Pi Node server wire it up with about 25 lines of route glue each instead of the ~100 lines of duplicated provider-branch logic that the review agents used to carry. Route parity (`GET /api/tour/:jobId`, `PUT /api/tour/:jobId/checklist`) is enforced by tests across both runtimes.

- Custom settings and Code Tour shipped in #569
Pi
More Flexible Planning Mode
The Pi extension used to require a single configured plan-file path — set once via `--plan-file` or `/plannotator-set-file` and stuck with it for the session. In practice this made multi-plan workflows awkward and confused the agent when a repo already had its own plan conventions. That whole layer is gone. `plannotator_submit_plan` gained a required `filePath` argument, and the agent now writes its plan as a markdown file anywhere inside the working directory, passing the path at submission. Validation enforces a `.md` or `.mdx` extension, rejects `..` traversal and absolute paths that escape cwd, and stat-checks the file before it's read. The planning write gate allows any markdown file inside cwd, and `lastSubmittedPath` tracks the most recent submission so the execution phase rebuilds correctly on session resume — including after a denial. The planning system prompt suggests (but doesn't require) `PLAN.md` at the repo root or `plans/<short-name>.md`.
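The validation rules described above can be sketched as a small helper; this is a hypothetical function assuming Node's path module, not the actual plannotator code:

```typescript
import * as path from "node:path";

// Accept only .md/.mdx paths whose resolved location stays inside cwd,
// rejecting ../ traversal and absolute paths that escape the working dir.
function isValidPlanPath(filePath: string, cwd: string): boolean {
  if (!/\.(md|mdx)$/i.test(filePath)) return false;
  const base = path.resolve(cwd);
  const resolved = path.resolve(base, filePath);
  return resolved === base || resolved.startsWith(base + path.sep);
}

console.log(isValidPlanPath("plans/refactor.md", "/repo")); // true
console.log(isValidPlanPath("../outside.md", "/repo"));     // false
console.log(isValidPlanPath("notes.txt", "/repo"));         // false
```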
~/.plannotator/history/{project}/{slug}/keys off plan content (first# Headingplus date) rather than file path, free-form naming keeps version linking intact.Breaking changes for Pi users: the
--plan-fileflag, the/plannotator- set-fileslash command, and the file-path argument to/plannotatorhave been removed. Existing workflows that relied on them need to let the agent pick the path instead.Claude Code
`/plannotator-last` with Multiple Sessions in the Same Directory

`/plannotator-last` used to pick the wrong session whenever two Claude Code sessions shared a repo. Invoked from a slash command's `!` bang, Plannotator's direct parent (`process.ppid`) is the intermediate bash shell that the Bash tool spawned, not Claude Code itself. The old `resolveSessionLogByPpid()` always missed on that parent, and the mtime-based fallback picked whichever `.jsonl` in the project had been touched most recently — which was usually the other session.

The fix is a four-tier resolution ladder. First, an ancestor-PID walk calls `ps -o ppid=` from `process.ppid` up to eight hops, checking `~/.claude/sessions/<pid>.json` at each one; this matches the exact session deterministically. Second, a cwd-scan reads every session metadata file, filters by `cwd`, and picks the entry with the most recent `startedAt` — a better fallback than mtime when `ps` is unavailable. The legacy cwd-slug mtime check and ancestor directory walk remain as tiers three and four. 17 new tests cover the ladder with injectable process-tree and filesystem dependencies.

- Authored by @elithompson in #598, closing #458 reported by @blimmer
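Tier one of that ladder can be sketched with an injectable parent lookup, mirroring the injectable process-tree dependencies the tests use. Names and shapes here are illustrative; the real code shells out to `ps -o ppid=` and stat-checks session files on disk:

```typescript
// Walk up the process tree (at most maxHops parents), returning the first
// ancestor PID that has a session file, or undefined to fall through to
// the next resolution tier.
type GetParent = (pid: number) => number | undefined;

function findSessionAncestor(
  startPid: number,
  hasSessionFile: (pid: number) => boolean,
  getParent: GetParent,
  maxHops = 8,
): number | undefined {
  let pid: number | undefined = startPid;
  for (let hop = 0; hop < maxHops && pid !== undefined; hop++) {
    if (hasSessionFile(pid)) return pid; // deterministic match
    pid = getParent(pid);
  }
  return undefined; // no ancestor matched; use the cwd-scan fallback
}

// Fake process tree: 500 (bash) -> 400 (agent process) -> 1 (init).
const parents: Record<number, number> = { 500: 400, 400: 1 };
const found = findSessionAncestor(500, (p) => p === 400, (p) => parents[p]);
console.log(found); // 400
```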
Additional Changes
- Prompts reference page. New `/docs/reference/prompts/` page documents the three-layer message shape — system prompt owned by the CLI, user message (review prompt joined to user prompt with `\n\n---\n\n`), and JSON schema as a terminal constraint. Calls out that Claude and Codex review prompts are upstream-derived; only Tour's prompt is new (#569)
- Motion library added. Code Tour's spring-driven accordions and intro composition cascade pulled in `motion@12.38.0` (~30 KB gzipped) (#569)
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`

Windows: `irm https://plannotator.ai/install.ps1 | iex`

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart: `rm -rf ~/.bun/install/cache/@plannotator`, then in opencode.json: `{ "plugin": ["@plannotator/opencode@latest"] }`

Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat: Code Tour — guided PR walkthrough as a third agent provider by @backnotprop in #569
- fix(pi): let agent submit any markdown plan file by path by @backnotprop in #595
- feat(ui): markdown reader parity — HTML blocks, GitHub alerts, GFM inline extras by @backnotprop in #597
- fix(session-log): walk ancestor PIDs to resolve correct session log by @elithompson in #598
New Contributors
- @elithompson made their first contribution in #598
Contributors
@elithompson authored the session-log ancestor-PID walk (#598), closing a long-standing issue where `/plannotator-last` picked the wrong session whenever two Claude Code sessions shared a repo. First contribution to the project.

@blimmer diagnosed and reported the session-log bug (#458) with a detailed empirical walkthrough of the process tree, which made the fix straightforward to scope.
Full Changelog: v0.18.0...v0.19.0
-
🔗 Simon Willison Is Claude Code going to cost $100/month? Probably not - it's all very confusing rss
Anthropic today quietly (as in silently, no announcement anywhere at all) updated their claude.com/pricing page (but not their Choosing a Claude plan page, which shows up first for me on Google) to add this tiny but significant detail (arrow is mine, and it's already reverted):

The Internet Archive copy from yesterday shows a checkbox there. Claude Code used to be a feature of the $20/month Pro plan, but according to the new pricing page it is now exclusive to the $100/month or $200/month Max plans.
Update: don't miss the update to this post, they've already changed course a few hours after this change went live.
So what the heck is going on? Unsurprisingly, Reddit and Hacker News and Twitter all caught fire.
I didn't believe the screenshots myself when I first saw them - aside from the pricing grid I could find no announcement from Anthropic anywhere. Then Amol Avasare, Anthropic's Head of Growth, tweeted:
For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.
And that appears to be the closest we have had to official messaging from Anthropic.
I don't buy the "~2% of new prosumer signups" thing, since everyone I've talked to is seeing the new pricing grid and the Internet Archive has already snapped a copy. Maybe he means that they'll only be running this version of the pricing grid for a limited time which somehow adds up to "2%" of signups?
I'm also amused to see Claude Cowork remain available on the $20/month plan, because Claude Cowork is effectively a rebranded version of Claude Code wearing a less threatening hat!
There are a whole bunch of things that are bad about this.
If we assume this is indeed a test, and that test comes up negative and they decide not to go ahead with it, the damage has still been extensive:
- A whole lot of people got scared or angry or both that a service they relied on was about to be rug-pulled. There really is a significant difference between $20/month and $100/month for most people, especially outside of higher salary countries.
- The uncertainty is really bad! A tweet from an employee is not the way to make an announcement like this. I wasted a solid hour of my afternoon trying to figure out what had happened here. My trust in Anthropic's transparency around pricing - a crucial factor in how I understand their products - has been shaken.
- Strategically, should I be taking a bet on Claude Code if I know that they might 5x the minimum price of the product?
- More of a personal issue, but one I care deeply about myself: I invest a great deal of effort (that's 105 posts and counting) in teaching people how to use Claude Code. I don't want to invest that effort in a product that most people cannot afford to use.
Last month I ran a tutorial for journalists on "Coding agents for data analysis" at the annual NICAR data journalism conference. I'm not going to be teaching that audience a course that depends on a $100/month subscription!
This also doesn't make sense to me as a strategy for Anthropic. Claude Code defined the category of coding agents. It's responsible for billions of dollars in annual revenue for Anthropic already. It has a stellar reputation, but I'm not convinced that reputation is strong enough for it to lose the $20/month trial and jump people directly to a $100/month subscription.
OpenAI have been investing heavily in catching up to Claude Code with their Codex products. Anthropic just handed them this marketing opportunity on a plate - here's Codex engineering lead Thibault Sottiaux:
I don't know what they are doing over there, but Codex will continue to be available both in the FREE and PLUS ($20) plans. We have the compute and efficient models to support it. For important changes, we will engage with the community well ahead of making them.
Transparency and trust are two principles we will not break, even if it means momentarily earning less. A reminder that you vote with your subscription for the values you want to see in this world.
I should note that I pay $200/month for Claude Max and I consider it well worth the money. I've had periods of free access in the past courtesy of Anthropic but I'm currently paying full price, and happy to do so.
But I care about the accessibility of the tools that I work with and teach. If Codex has a free tier while Claude Code starts at $100/month I should obviously switch to Codex, because that way I can use the same tool as the people I want to teach how to use coding agents.
Here's what I think happened. I think Anthropic are trying to optimize revenue growth - obviously - and someone pitched making Claude Code only available for Max and higher. That's clearly a bad idea, but "testing" culture says that it's worth putting even bad ideas out to test just in case they surprise you.
So they started a test, without taking into account the wailing and gnashing of teeth that would result when their test was noticed - or accounting for the longer-term brand damage that would be caused.
Or maybe they did account for that, and decided it was worth the risk.
I don't think that calculation was worthwhile. They're going to have to make a very firm commitment along the lines of "we heard your feedback and we commit to keeping Claude Code available on our $20/month plan going forward" to regain my trust.
As it stands, Codex is looking like a much safer bet for me to invest my time in learning and building educational materials around.
Update: they've reversed it already
In the time I was typing this blog entry Anthropic appear to have reversed course - the claude.com/pricing page now has a checkbox back in the Pro column for Claude Code. I can't find any official communication about it though.
Let's see if they can come up with an explanation/apology that's convincing enough to offset the trust bonfire from this afternoon!
Update 2: it may still affect 2% of signups?
Amol on Twitter:
was a mistake that the logged-out landing page and docs were updated for this test [embedded self-tweet]
Getting lots of questions on why the landing page / docs were updated if only 2% of new signups were affected.
This was understandably confusing for the 98% of folks not part of the experiment, and we've reverted both the landing page and docs changes.
So the experiment is still running, just not visible to the rest of the world?
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
sync repo: +1 plugin, +2 releases
New plugins
- [threatray](https://github.com/threatray/plugin-ida) (3.0.0)
New releases
- [ida-search](https://github.com/milankovo/ida-search): 0.2.2
-
🔗 badlogic/pi-mono v0.68.1 release
New Features
- Fireworks provider support with built-in models and `FIREWORKS_API_KEY` auth. See README.md#providers--models and docs/providers.md.
- Configurable inline tool image width via `terminal.imageWidthCells` in `/settings`. See docs/settings.md#terminal--images.
Added
- Added built-in Fireworks provider support, including `FIREWORKS_API_KEY` setup/docs and the default Fireworks model `accounts/fireworks/models/kimi-k2p6` (#3519)
Fixed
- Fixed interactive inline tool images to honor the configurable `terminal.imageWidthCells` via `/settings`, so tool-output images are no longer hard-capped to 60 terminal cells (#3508)
- Fixed `sessionDir` in settings.json to expand `~`, so portable session-directory settings no longer require a shell wrapper (#3514)
- Fixed parallel tool-call rows to leave the pending state as soon as each tool is finalized, while still appending persisted tool results in assistant source order (#3503)
- Fixed exported session markdown to render Markdown while showing HTML-like message content such as `<file name="...">...</file>` verbatim, so shared sessions match the TUI instead of letting the browser interpret message text (#3484)
- Fixed exported session HTML to render `grep` and `find` output through their existing TUI renderers and `ls` output through a native template renderer, avoiding missing formatting and spacing artifacts in shared sessions (#3491 by @aliou)
- Fixed `@` autocomplete fuzzy search to follow symlinked directories and include symlinked paths in results (#3507)
- Fixed proxied agent streams to preserve the proxy-safe serializable subset of stream options, including session, transport, retry-delay, metadata, header, cache-retention, and thinking-budget settings (#3512)
- Hardened Anthropic streaming against malformed tool-call JSON by owning SSE parsing with defensive JSON repair, replacing the deprecated `fine-grained-tool-streaming` beta header with per-tool `eager_input_streaming`, and updating stale test model references (#3175)
- Fixed Bedrock runtime endpoint resolution to stop pinning built-in regional endpoints over `AWS_REGION`/`AWS_PROFILE`, restoring `us.*` and `eu.*` inference profile support after v0.68.0 while preserving custom VPC/proxy endpoint overrides (#3481, #3485, #3486, #3487, #3488)
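The image-width and session-directory fixes are both plain settings keys. A hedged sketch of how they might sit together in a settings.json file (only the `terminal.imageWidthCells` and `sessionDir` key names come from the notes above; the surrounding structure and values are assumptions for illustration):

```json
{
  "sessionDir": "~/pi-sessions",
  "terminal": {
    "imageWidthCells": 80
  }
}
```

With the `~`-expansion fix, a value like `~/pi-sessions` should now resolve without a shell wrapper.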
-
🔗 anthropics/claude-code v2.1.117 release
What's changed
- Forked subagents can now be enabled on external builds by setting `CLAUDE_CODE_FORK_SUBAGENT=1`
- Agent frontmatter `mcpServers` are now loaded for main-thread agent sessions via `--agent`
- Improved `/model`: selections now persist across restarts even when the project pins a different model, and the startup header shows when the active model comes from a project or managed-settings pin
- The `/resume` command now offers to summarize stale, large sessions before re-reading them, matching the existing `--resume` behavior
- Faster startup when both local and claude.ai MCP servers are configured (concurrent connect now default)
- `plugin install` on an already-installed plugin now installs any missing dependencies instead of stopping at "already installed"
- Plugin dependency errors now say "not installed" with an install hint, and `claude plugin marketplace add` now auto-resolves missing dependencies from configured marketplaces
- Managed-settings `blockedMarketplaces` and `strictKnownMarketplaces` are now enforced on plugin install, update, refresh, and autoupdate
- Advisor Tool (experimental): dialog now carries an "experimental" label, learn-more link, and startup notification when enabled; sessions no longer get stuck with "Advisor tool result content could not be processed" errors on every prompt and `/compact`
- The `cleanupPeriodDays` retention sweep now also covers `~/.claude/tasks/`, `~/.claude/shell-snapshots/`, and `~/.claude/backups/`
- OpenTelemetry: `user_prompt` events now include `command_name` and `command_source` for slash commands; `cost.usage`, `token.usage`, `api_request`, and `api_error` now include an `effort` attribute when the model supports effort levels. Custom/MCP command names are redacted unless `OTEL_LOG_TOOL_DETAILS=1` is set
- Native builds on macOS and Linux: the `Glob` and `Grep` tools are replaced by embedded `bfs` and `ugrep` available through the Bash tool — faster searches without a separate tool round-trip (Windows and npm-installed builds unchanged)
- Windows: cached `where.exe` executable lookups per process for faster subprocess launches
- Default effort for Pro/Max subscribers on Opus 4.6 and Sonnet 4.6 is now `high` (was `medium`)
- Fixed plain-CLI OAuth sessions dying with "Please run /login" when the access token expires mid-session — the token is now refreshed reactively on 401
- Fixed `WebFetch` hanging on very large HTML pages by truncating input before HTML-to-markdown conversion
- Fixed a crash when a proxy returns HTTP 204 No Content — now surfaces a clear error instead of a `TypeError`
- Fixed `/login` having no effect when launched with the `CLAUDE_CODE_OAUTH_TOKEN` env var and that token expires
- Fixed prompt-input undo (`Ctrl+_`) doing nothing immediately after typing, and skipping a state on each undo step
- Fixed `NO_PROXY` not being respected for remote API requests when running under Bun
- Fixed rare spurious escape/return triggers when key names arrive as coalesced text over slow connections
- Fixed SDK `reload_plugins` reconnecting all user MCP servers serially
- Fixed Bedrock application-inference-profile requests failing with 400 when backed by Opus 4.7 with thinking disabled
- Fixed MCP `elicitation/create` requests auto-cancelling in print/SDK mode when the server finishes connecting mid-turn
- Fixed subagents running a different model than the main agent incorrectly flagging file reads with a malware warning
- Fixed idle re-render loop when background tasks are present, reducing memory growth on Linux
- [VSCode] Fixed "Manage Plugins" panel breaking when multiple large marketplaces are configured
- Fixed Opus 4.7 sessions showing inflated `/context` percentages and autocompacting too early — Claude Code was computing against a 200K context window instead of Opus 4.7's native 1M
-
🔗 exe.dev Series A for exe.dev rss
We have raised a Series A, for a total of $35m in funding. We are using it to build a new generation of cloud infrastructure. Major investors are Amplify, CRV, and HeavyBit.
Call it a cloud for developers.
Why do we need new infrastructure primitives and a new cloud now? Agents. Lower barriers to entry mean there are going to be more developers, and each of us is going to write more programs. Software needs a home; exe.dev is a good home for software.
Many companies are approaching the question of next generation infrastructure as “What do agents need?” We believe this is the wrong question. Agents are trained on how developers work. They want exactly what we want. Full computers, understandable and stable building blocks, familiar systems wherever possible. You can see that in our approach. The moment you start with exe.dev, you use SSH. You know it.
We are building a cloud that makes sense for the current and future state of software development. One that includes the features needed for fast, secure development out of the box. A cloud developers actually enjoy using. We want to revitalize the spirit of projects like early Heroku (though our technology is very different) and ship features that bring you joy. That is why our servers have HTTPS by default, and are private by default, and are easy to share with a link. It is why our pricing for individual developers is simple: pay a flat rate, run as many computers as you need with the CPU and memory purchased. And it is why we have a simple web-based agent in the default Ubuntu image with credits included in the default plan. Sometimes you just need an agent.
We have a lot of work to do! There is a lot to build. To get these primitives right we are not building on top of existing clouds; we are working with our own machines in data centers. We have written our own global load balancer. We do our own DNS. We have to strip away all the layers and go back to the actual computers to ship a cloud developers actually like. Traditional Cloud 1.0 companies sell you a VM with a default of 3000 IOPS, while your laptop has 500k. Getting the defaults right (and the cost of those defaults right) requires careful thinking through the stack. Hence the Series A: we have some computers to buy.
-
- April 21, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-21 rss
IDA Plugin Updates on 2026-04-21
New Releases:
Activity:
- augur
- d2b1ccdc: chore: update dependencies
- claude-of-alexandria
- c526354b: chore(deps-dev): bump the minor-and-patch group (#6)
- 09a3422d: chore(deps-dev): bump wrangler in /server in the minor-and-patch grou…
- b15e4b53: chore(deps): bump the minor-and-patch group in /server with 5 updates…
- 428b84b7: fix(agents): rename Task tool to Agent in all agent and skill frontma…
- haruspex
- b83ea796: chore: update dependencies
- ida-domain
- e07fe48f: Add object store/retrieve APIs for typed data serialization (#41)
- ida-search
- ddd88efa: Guard next search before initialization
- plugin-ida
- python-elpida_core.py
- rhabdomancer
- 2230fb2b: chore: update dependencies
- tix-seven
- 3582fa05: feat: enhance MOSIPAdapter with detailed auth status documentation an…
- 04ae1e1a: feat: add read me
- 77dc39ef: Merge branch 'main' of https://github.com/ark1tech/tix-seven
- 295ac06d: feat: normalize mock log seed values to current enums
- 8281a329: feat: retire stale pre-alembic supabase seed paths
- 68c9648a: feat: align supabase schema migrations with alembic models
- df8df855: feat: add missing mosip-related credentials to .gitignore
- fe31c62f: feat: expand verify endpoint test coverage
- 65d81259: feat: wire verification flow to sqlalchemy session
- c2d92814: feat: align log enum and schema with grant deny
- 29521d0d: feat: remove legacy supabase client bootstrap
- b65ba595: feat: rebuild and seed mock schema for debug mode
- e3e0e925: feat: refactor ticket and entry-log dashboard data paths
- 6d532dca: feat: update gates feature for alembic model changes
- 2ace37f0: feat: update events flow to new event schema
- dcb966e9: feat: add alembic-backed schema updates for gate server
- 5bade248: Merge branch 'main' of https://github.com/ark1tech/tix-seven
-
🔗 r/reverseengineering Reversing The Gentlemen ransomware (Go/Garble) — ephemeral X25519 keys persist in go routine stacks, enabling full decryption. rss
submitted by /u/BedrockSafeGuard
[link] [comments] -
🔗 r/LocalLLaMA Claude Code removed from Claude Pro plan - better time than ever to switch to Local Models. rss
Time to switch to Kimi k2.6 guys if you haven't already. For $20 a month you can buy the OpenCode Go coding plan (its actually $5 for the first month then $10) which gives you many more tokens on models like Kimi K2.6, and then you can pay for the rest of the usage. So for $20 a month of tokens of Kimi K2.6 you're basically getting the equivalent amount of tokens of the $100 plan. You can also use Qwen 3.6 35B A3B, which you can run on your local PC (as long as you have a decent graphics card). submitted by /u/bigboyparpa
[link] [comments]
-
🔗 r/york i am bike packing around the uk, is there anywhere i would be to safely leave my bike in york center. rss
evening, i will be coming into york on Friday, is there anywhere i would be to leave my bike in york while i do a bit exploring, its not as easy as just locking it up as all my camping gear is on it will be around 2 hours.
submitted by /u/DullHall7
[link] [comments] -
🔗 Simon Willison Where's the raccoon with the ham radio? (ChatGPT Images 2.0) rss
OpenAI released ChatGPT Images 2.0 today, their latest image generation model. On the livestream Sam Altman said that the leap from gpt-image-1 to gpt-image-2 was equivalent to jumping from GPT-3 to GPT-5. Here's how I put it to the test.
My prompt:
Do a where's Waldo style image but it's where is the raccoon holding a ham radio
gpt-image-1
First as a baseline here's what I got from the older gpt-image-1 using ChatGPT directly:
I wasn't able to spot the raccoon - I quickly realized that testing image generation models on Where's Waldo style images (Where's Wally in the UK) can be pretty frustrating!
I tried getting Claude Opus 4.7 with its new higher resolution inputs to solve it but it was convinced there was a raccoon it couldn't find thanks to the instruction card at the top left of the image:
Yes — there's at least one raccoon in the picture, but it's very well hidden. In my careful sweep through zoomed-in sections, honestly, I couldn't definitively spot a raccoon holding a ham radio. [...]
Nano Banana 2 and Pro
Next I tried Google's Nano Banana 2, via Gemini:
That one was pretty obvious, the raccoon is in the "Amateur Radio Club" booth in the center of the image!
Claude said:
Honestly, this one wasn't really hiding — he's the star of the booth. Feels like the illustrator took pity on us after that last impossible scene. The little "W6HAM" callsign pun on the booth sign is a nice touch too.
I also tried Nano Banana Pro in AI Studio and got this, by far the worst result from any model. Not sure what went wrong here!
gpt-image-2
With the baseline established, let's try out the new model.
I used an updated version of my openai_image.py script, which is a thin wrapper around the OpenAI Python client library. Their client library hasn't yet been updated to include `gpt-image-2` but thankfully it doesn't validate the model ID so you can use it anyway.
Here's how I ran that:
```
OPENAI_API_KEY="$(llm keys get openai)" \
  uv run https://tools.simonwillison.net/python/openai_image.py \
  -m gpt-image-2 \
  "Do a where's Waldo style image but it's where is the raccoon holding a ham radio"
```
Here's what I got back. I don't think there's a raccoon in there - I couldn't spot one, and neither could Claude.
The OpenAI image generation cookbook has been updated with notes on `gpt-image-2`, including the `outputQuality` setting and available sizes.
I tried setting `outputQuality` to `high` and the dimensions to `3840x2160` - I believe that's the maximum - and got this - a 17MB PNG which I converted to a 5MB WEBP:
```
OPENAI_API_KEY="$(llm keys get openai)" \
  uv run 'https://raw.githubusercontent.com/simonw/tools/refs/heads/main/python/openai_image.py' \
  -m gpt-image-2 "Do a where's Waldo style image but it's where is the raccoon holding a ham radio" \
  --quality high --size 3840x2160
```
That's pretty great! There's a raccoon with a ham radio in there (bottom left, quite easy to spot).
The image used 13,342 output tokens, which are charged at $30/million so a total cost of around 40 cents.
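That per-image cost is simple arithmetic on the token count; a quick sketch of the calculation:

```python
# gpt-image-2 output tokens are billed at $30 per million tokens.
output_tokens = 13_342
price_per_million = 30.0

cost = output_tokens * price_per_million / 1_000_000
print(f"${cost:.2f}")  # prints $0.40
```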
Takeaways
I think this new ChatGPT image generation model takes the crown from Gemini, at least for the moment.
Where's Waldo style images are an infuriating and somewhat foolish way to test these models, but they do help illustrate how good they are getting at complex illustrations combining both text and details.
Update: asking models to solve this is risky
rizaco on Hacker News asked ChatGPT to draw a red circle around the raccoon in one of the images in which I had failed to find one. Here's an animated mix of their result and the original image:

Looks like we definitely can't trust these models to usefully solve their own puzzles!
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 @binaryninja@infosec.exchange A lot of practical UI work landed in Binary Ninja 5.3. We replaced the old mastodon
A lot of practical UI work landed in Binary Ninja 5.3. We replaced the old MachO slice selection flow with a dedicated picker, expanded Container Browser coverage across a wide range of container formats, and significantly extended command palette behavior. https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#ui
-
🔗 r/Leeds Best small plates / tapas in Leeds City Centre rss
Love coming over to Leeds (From York) so as the title says I’m looking for a really good small plates, tapas or charcuterie place to take the Girlfriend on a date night.
Tia x
submitted by /u/Plain-Black-Vans
[link] [comments] -
🔗 r/york The best decision I’ve ever made was moving to York rss
submitted by /u/ramblinginmyhead
[link] [comments]
-
🔗 r/york York Mosque Community Kitchen | THURSDAY 23 APRIL 12:00 - 13:30. rss
submitted by /u/LittleForm3711
[link] [comments]
-
🔗 r/Leeds The Prodigy tomorrow rss
So this is a long shot however tomorrow night I’m off to see Prodigy and Carl Cox at First Direct Arena, I’m a gig veteran however tomorrow I’m flying solo without my usual gig friend and as quite an anxious sorta dude I wondered if there was any other peeps going alone that may or may not want to gather up and share the experience?
Not sure if this is allowed here, if not please remove. But thought I’d take my chances to not be the only loner there!
I realise this post might seem a little sad lol but why be alone if there is others going solo tomorrow? :D
submitted by /u/ToyMachibe
[link] [comments] -
🔗 r/Harrogate Price of Wales Rdbout - Roadworks Again! rss
I seriously think somebody is running a social experiment with the town now. I may be mistaken but I think it’s the third time it’s being dug up in last six months, following being totally resurfaced early last year.
This is directly after brining Leeds Road to a standstill for the last three weeks.
It’s either actively looking to reduce pressures on housing by putting people off of living here, wanting to reduce tourist numbers, or Occams Razor telling me they simply don’t have anywhere else to store the cones, portaloos and fencing.
submitted by /u/Similar-Actuator-338
[link] [comments] -
🔗 r/Yorkshire Flamborough rss
Few photos submitted by /u/Embarrassed-Air7202
[link] [comments]
-
🔗 r/reverseengineering ida-mcp 2.2: From Tool Calls to Analysis Scripts rss
submitted by /u/jtsylve
[link] [comments] -
🔗 r/Yorkshire My mother in law, 90 today. rss
submitted by /u/Still_Function_5428
[link] [comments]
-
🔗 r/Yorkshire One of my favourite views of Richmond. rss
submitted by /u/Still_Function_5428
[link] [comments]
-
🔗 r/Leeds Best Chinese in LS17? rss
It's my birthday today and I would like to partake in a succulent Chinese meal.
submitted by /u/Row_Echelon_Form
[link] [comments] -
🔗 r/wiesbaden Henkell 0,0% Vinothon mit Start in Rüdesheim am 25.04. rss
submitted by /u/jotheta
[link] [comments] -
🔗 r/york Driving lessons rss
Hi I am trying desperately to sell 7hrs of manual driving lessons for £310 all together which roughly works out to £45 per lesson I am open to offers. I simply want to change to automatic the lessons are with “GoRoadie” and the instructor is brilliant please message or comment to enquire! (Picture for boosting this post) also if you have any recommendations for automatic driving lessons in York that would be great
submitted by /u/-thatstiny-
[link] [comments] -
🔗 r/york Want to give music a real go rss
Hello, I am a 24 M and want to really give my passion for singing and song writing a real go. Ive been writing music and singing for over 4 years now but recently found what genre I want to go into. The vibe I love is Phoebe Bridgers, Noah Kahan, Novo Amor and Bon Iver (ik a little sad). Is there anyone in the area that would maybe like to collab or help me produce some of the songs I have in the works?
Thanks
submitted by /u/Careless_Regret9883
[link] [comments] -
🔗 r/york F33 lf girl friends near York/Selby rss
Looking for new girl friends near York and Selby area, I've lived here for a couple of years but never managed to get out or make any nearby friends!
I'm really into gaming, gachas, anime and would love some like-minded friends to talk to or eventually meet up with.
submitted by /u/SaintSixx
[link] [comments] -
🔗 sacha chua :: living an awesome life OBS: A dump button for dropping the last ~10 seconds before it hits the stream rss
I want to make it easier to livestream without worrying about leaking private information. Tradeoff: slower conversations with the chat, but more peace of mind.
I think I've sorted out a setup involving two instances of OBS, with the source instance sending the stream with a delay to the restreaming instance that will then send it on to YouTube. This allows me to cut the feed from the source instance to the restreaming instance in case something happens.
The first OBS is the one that has my screen capture, webcam, audio, etc. Here's what I needed to do to change it.
- Create a new profile or rename the profile to "Source".
- Name the collection of streams "Source" as well.
- In Settings - Hotkeys, define a keyboard shortcut for Stop streaming (discard delay). I use `Super + F12`.
- In Settings - Stream:
- Service: Custom
- Destination - Server:
srt://127.0.0.1:9000?mode=caller
- In Settings - Advanced:
- Check Stream Delay - Enable.
- Set the duration. Let's try 10 seconds.
- Uncheck Preserve cutoff point (increase delay) when reconnecting.
Then I can launch that one with:
```
obs --profile "Source" --collection "Source" --launch-filter --multi
```
The second OBS will restream the output of the first OBS to YouTube.
```
obs --profile "Restream" --collection "Restream" --launch-filter --multi
```
I used the Profile menu to create a new profile called "Restream" and the Scene Collection menu to create a new collection called "Restream." I set up the scene as follows:
- Create a text source with the backup message.
- Create a media source.
- Uncheck Local File.
- Uncheck Restart playback when source becomes active.
- Input: srt://127.0.0.1:9000?mode=listener
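The caller/listener pairing is the part that's easy to get backwards: the source OBS dials out with `mode=caller` while the restreaming OBS waits with `mode=listener`, on the same host and port. A small sketch (a hypothetical helper, not part of OBS) that checks two SRT URLs form a matching pair:

```python
from urllib.parse import urlparse, parse_qs

def srt_pair_ok(sender_url: str, receiver_url: str) -> bool:
    """Check that the sender dials out (mode=caller) and the receiver
    waits (mode=listener), on the same host and port."""
    s, r = urlparse(sender_url), urlparse(receiver_url)
    s_mode = parse_qs(s.query).get("mode", ["caller"])[0]
    r_mode = parse_qs(r.query).get("mode", ["caller"])[0]
    return (s.scheme == r.scheme == "srt"
            and s.hostname == r.hostname
            and s.port == r.port
            and s_mode == "caller"
            and r_mode == "listener")

print(srt_pair_ok("srt://127.0.0.1:9000?mode=caller",
                  "srt://127.0.0.1:9000?mode=listener"))  # True
```

Swapping the two modes (or pointing them at different ports) would leave both sides waiting forever, which is the usual failure mode with this setup.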
In the first OBS (the source), click on Start streaming. After some delay, the stream will appear, and I can move or resize it.
I was a little thrown off by the fact that my audio bars didn't initially show up in the mixer in the restreamer, but both recording and streaming seem to include the audio.
To stop the stream, I can switch to OBS, click on Stop streaming, and (important!) choose Stop streaming (discard delay). The OBS window might be buried under other things on my second screen, though, and that's too many clicks and mouse movements. The keyboard shortcut `Super + F12` we just set up should be handy, but I might not remember that, so let's add some scripts. The OBS websocket protocol doesn't support discarding the delay buffer yet, but I'm on Linux and X11, so I can use xdotool to simulate a keypress. Here I select the window matching the profile name I set up previously.
```
WID=$(xdotool search --name "OBS .* - Profile: Source")
xdotool key --window $WID super+F12
```
I can `org-capture` the timestamp of the panic so that I can doublecheck the recording.
```elisp
;;;###autoload
(defun sacha-obs-panic ()
  "Stop streaming and discard the delay buffer.
This uses a hotkey I defined in OBS."
  (interactive)
  (shell-command "~/bin/panic")
  (org-capture-string "Panicked" "l")
  (org-capture-finalize))
```
I always have Emacs around, and if it's not my main app, I have an autokey shortcut that maps `super + 1` to focus on Emacs. Then I can `M-x panic` and Emacs completion will take care of finding the right function.
Let's add a menu item for even more panic assistance:
```elisp
(easy-menu-define sacha-stream-menu global-map
  "Menu for streaming-related commands."
  '("Stream"
    ["🛑 PANIC" sacha-obs-panic]
    ["Start streaming" obs-websocket-start-streaming]
    ["Start recording" obs-websocket-start-recording]
    ["Stop streaming" obs-websocket-stop-streaming]
    ["Stop recording" obs-websocket-stop-recording]))
```
Let's see if I remember to use it!
This is part of my Emacs configuration. You can e-mail me at sacha@sachachua.com.
-
🔗 r/Harrogate Improv Jam Session - Tonight rss
Hi All, I run improv comedy sessions every couple of weeks in Harrogate. Our next one is next tonight. They are very low pressure, we do some easy group warm ups, followed by games and exercises. Our current sessions are aimed at beginners and improvers so there has never been a better time to try it out. If you have any questions let me know. As a bonus for first time joiners your first session is free. Thanks. submitted by /u/GritstoneBoulderer
[link] [comments]
-
🔗 r/york York woman, 86, convicted after car insurance typo rss
submitted by /u/Perfect-Cycle-5384
[link] [comments]
-
🔗 r/Harrogate Second hand furniture rss
Moving to the area soon, and wondering where is best to look for second hand furniture, if there's any big stores or anything.
Looking for things like dining set , shelves, drawers, lamps etc.
I've only had a quick look in St Michael's on Ripon road so far but didn't find much there.
submitted by /u/brich0910
[link] [comments] -
🔗 r/wiesbaden Cafés zum Lernen rss
Hallo zusammen,
gibt es gute Cafés zum Lernen in der Innenstadt? Bibliotheken sind für mich eher raus, weil ich nebenbei gerne was essen/ trinken/ snacken möchte oder wenn wir zu zweit lernen, uns auch mal unterhalten wollen. Hab gehört das Café im Hugendubel soll gut sein, aber sind Lernende dort auch willkommen? Beim Coffee Fellows wurden welche wohl schon blöd angemacht, wenn man mit Laptop länger dort saß.
Freue mich auf eure Tipps!
submitted by /u/Hour_Inspector8601
[link] [comments] -
🔗 r/LocalLLaMA Unpopular opinion: OpenClaw and all its clones are almost useless tools for those who know what they're doing. It's kind of impressive for someone who has never used a CLI, Claude Code, Codex, etc. Nor used any workflow tool like 8n8 or make. rss
It seems to me that OpenClaw and all its clones are almost useless tools for those who know what they're doing.
It's kind of impressive for someone who has never used a CLI, Claude Code, Codex, etc. Nor used any workflow tool like 8n8 or make.
For these people, asking an AI to create a program or a new tool with a prompt must seem like magic. For those who already use it, it seems like something that simplified the old ones but made them much more chaotic and unsafe.
The only good thing about it is that it made more "ordinary" people interested in these agentic tools. Sending messages via Telegram is much more user- friendly.
submitted by /u/pacmanpill
[link] [comments] -
🔗 r/york Cheap train tickets to London rss
Hi, apologies if this isn’t allowed. I’m selling these two return tickets (direct) from York to London Kings Cross next weekend. Both tickets were bought with railcard (18-25 and 26-30 respectively) for £90. Open to offers as desperate to sell. PM if you have any questions
Outbound : Friday 1st May departure 08:18 York arrival 10:16 King’s Cross
Return : Saturday 2nd May departure 20:33 King’s Cross arrival 22:30 York.
submitted by /u/AdditionalMobile381
[link] [comments] -
🔗 r/york Recycling - only took cardboard? rss
Morning all, just wondering if anyone else has had a situation either today or previously where their cardboard recycling was taken, but they've left the plastic, glass and tin? This is the whole street, not just us. We're Heworth area.
(apologies this is a bit of a Facebook type of post, but I try to stay away from that nonsense platform. Don't want to get brainwashed into voting reform.)
submitted by /u/Educational-Ground83
[link] [comments] -
🔗 r/LocalLLaMA Every time a new model comes out, the old one is obsolete of course rss
submitted by /u/FullChampionship7564
-
🔗 r/wiesbaden Board game players wanted rss
I'm a big board game fanatic and regularly have friends over. But since larger games or full campaigns rarely make it to the table, as they're often too complex for some, I'm looking for people (ideally in their 20s) who are keen on games like that!
A few examples: Pandemic Legacy, Scythe, Descent, Ankh, Nemesis, Gaia Project
submitted by /u/vivienskt
-
🔗 r/LocalLLaMA 2x 512gb ram M3 Ultra mac studios rss
$25k in hardware. Tell me what you want me to load on them and I'll help test.
I've done DeepSeek v3.2 Q8 so far with the exo backend. Currently running GLM 5.1 Q4 on each (troubleshooting why exo isn't loading the Q8 version). Patiently awaiting Kimi 2.6 for when the community optimizes it for MLX/mmap.
submitted by /u/taylorhou
-
🔗 r/LocalLLaMA Kimi K2.6 is a legit Opus 4.7 replacement rss
After testing it and getting some customer feedback too, it's the first model I'd confidently recommend to our customers as an Opus 4.7 replacement.
It's not really better than Opus 4.7 at anything, but it can do about 85% of the tasks that Opus can at a reasonable quality, and it has vision and very good browser use.
I've been slowly replacing some of my personal workflows with Kimi K2.6 and it works surprisingly well, especially for long time horizon tasks.
Sure, the model is monstrously big, but I think it shows that frontier LLMs like Opus 4.7 are not necessarily bringing anything new to the table. People are complaining about usage limits as well; it looks like local is the way to go.
submitted by /u/bigboyparpa
-
🔗 r/reverseengineering Detect It Easy 3.20, a program for determining file types for Windows, Linux, and macOS. rss
submitted by /u/horsicq
-
🔗 Drew DeVault's blog Addressing the harassment rss
Kiwi Farms is a web forum that facilitates the discussion and harassment of online figures and communities. Their targets are often subject to organized group trolling and stalking, as well as doxing and real-life harassment. Kiwi Farms has been tied to the suicides of three people who were victims of harassment by the website.
About three years ago, a thread on Kiwi Farms was opened about me. In the years since, it has grown to about 1,200 posts full of bigots responding to anything and everything I do online with scorn, slurs, and overt bigotry. The thread is full of resources to facilitate harassment, including, among other things, all of my social media profiles, past and present, a history of my residential addresses, my phone numbers, details about my family members, a list of my usernames and password hashes from every leaked database of websites I have accounts on, and so on. Most of my articles or social media posts are archived on Kiwi Farms and then subjected to the most bigoted rebuttals you can imagine. Honestly, it’s mostly just… pathetic. But it’s a problem when it escapes containment, and it’s designed to.
Kiwi Farms is the most organized corner of the harassment which comes my way, but it comes in many forms. On Mastodon, for example, before I deleted my account I would often receive death threats, or graphic images and videos of violence against minorities. I have received a lot of hate and death threats over email, too, several of which I confess that I took some pleasure in forwarding to the sender’s employer.
One of the motivations for this harassment is to “milk” me for “drama”. The idea is to get my hackles up, make me fearful for my safety, and alienate me from my communities, with the hope that it will trigger an entertaining meltdown. Many people respond poorly to this kind of harassment – that’s the idea, really – and it often makes the situation worse. Responding to it can legitimize the abuse, elevate it into the discourse, draw more attention to it, and stoke the flames. It can make the victim look bad when they respond emotionally to harassment designed to evoke negative emotions. I have left it unaddressed for a long time in order to subvert this goal, and address it now with a cool head in a relatively quiet period in the harassment campaign.
The harassment waxes and wanes over time, usually picking up whenever I write a progressive blog post that gets some reach. It really took off after a series of incidents in which I called for the Hyprland community and its maintainers to be held to account for the bigotry and harassment on their Discord server (1, 2) and when I spoke out against Richard Stallman’s prolific and problematic public statements regarding the sexual abuse of minors (3).
The abuse crescendoed in October of 2024, when I was involved in editing The Stallman Report. The report is a comprehensive analysis of Richard Stallman’s problematic political discourse regarding sexual harassment, sexual assault, and the sexual abuse of minors, and it depends almost entirely on primary sources – quotes from Stallman’s website which remain online and have not been retracted to this day. The purpose of the report was to make a clear and unassailable case for Stallman’s removal from positions of power, make specific recommendations to address the underlying problems, and to stimulate a period of reflection and reform in the FOSS community. It didn’t achieve much, in the end: the retaliation from Stallman’s defenders was fiercer and more devoted than the support from those who saw the report’s sense.
Myself and the other authors asserted our moral rights to publish anonymously, motivated by our wish to reduce our exposure to the exact sort of harassment I’ve been subjected to over the years. However, I was careless in my opsec during the editing process, and it was possible to plausibly link me to the report as a result, leading to a sharp increase in harassment.
This brings me to a retaliatory, defamatory “report” published about me in the style of the Stallman Report.1 This report is, essentially, a distillation of the Kiwi Farms thread on me, sanitized of overt bigotry and presented in a readily linkable form in order to stalk me around the internet and enable harassment. It’s used to discredit anything I do online and push for my exclusion from online communities, by dropping the link on Hacker News, Reddit, GitHub or Codeberg issues, etc, anywhere myself or my work is mentioned, or used to discredit the Stallman Report by discrediting one of its unmasked authors.2
The report is pretty obviously written in bad faith and relies on a lot of poor arguments to make the case that I’m a misogynist and a pedophile, charges I deny. It also accuses me of being a hypocrite, which I acknowledge in general terms, because, well, who isn’t. The key thing I want people who encounter this report to keep in mind is that this is the “polite” face of an organized harassment campaign.
Most reasonable readers easily dismiss the report because it is rather transparent in its bad faith. However, someone who reads it in good faith, just trying to do their due diligence, might come away from it with some reasonable concerns. Consider the following quote from my long-deleted Reddit account, /u/sircmpwn:
I’m of the opinion that 14 year old girls should be required to have an IUD installed. Ten years of contraception that requires a visit to the doctor to remove prematurely.
This comment was written 13 years ago, and I don’t stand by what I wrote. I was 19 at the time, and I was a moron. My mother had me when she was 23 years old, and the abuse I suffered at her hands during my childhood was severe, and I generalized this experience to all women. When I wrote this comment, I was one year removed from the abuse, living alone and in poverty, and early in a life-long process of coming to terms with the abuse and figuring out how to be a well-adjusted adult after 18 long years of abuse and isolation.
But an explanation is not an excuse. This comment was reprehensible, as were many of the awful ideas I held at the time. Many years later, I can recognize that this comment is misogynistic, denies the agency of children and women over their own bodies, disparages the many, many mothers who do a wonderful job raising children in difficult circumstances, and is based in argumentation which can reasonably be related to eugenics. This comment was just awful – there’s a reason this was deleted. I apologize to anyone who read it at the time, or comes across it now, and is justifiably insulted.
I don’t feel that it’s necessary to rebuke most of the report. But, there is a grain of truth in the report, the grain of truth that led me to retract my shitty Reddit comments and reflect on myself, and that grain of truth is this: in early adulthood, I was a huge asshole.
I have had more than my fair share of harmful ignorance, bad takes, sexism and misogyny, transphobic and homophobic beliefs, and worse. Moreover, I have verbally abused many people and made many of my own arguments in bad faith to support bad conclusions. Some of the people who read this will recall having found themselves at the wrong end of my verbal abuse and harassment.
It’s important for me to take responsibility for this period of my life, and in dismissing bad faith criticisms of myself to carefully avoid dismissing good faith criticisms in the same fell swoop.
I’m not really sure how to deal with this part of my life appropriately. I have apologized to a few people individually, but it’s not a scalable solution and with many people I have no business re-opening wounds to salve my own conscience. I can offer a general apology, and I will. I’ve never found the right moment to say it, but now will do: I apologise, sincerely, to everyone who I have harmed with verbal abuse and with hateful and problematic rhetoric. If you have had a bad experience or experiences with me, and there’s anything you want from me that can help you heal from that experience – a personal apology, for example – please reach out to me and ask.
That said, apologies alone aren’t enough. I believe in restorative justice, in growing and mending wounds and repairing harm done, and I set myself seriously to this task over many years. I have gone to therapy, spoken with close friends about it, and taken structural action as well: I have founded support groups and worked one-on-one with many of the people whose politics and behavior I object to. I want an amicable end to bigotry and bullying, for bigots and bullies like my former self to look forward to, to provide a path that doesn’t require them to double down. It’s not easy, and not everyone manages, but I have to look at myself and see the path I’ve taken and imagine that it’s possible, because what’s left for the likes of me if not?
This part of my past brings me a great deal of shame, and that shame motivates me to grow as a person. In a certain sense, it is an ironic, cruel privilege to have had so much cause to reflect on myself, to drive me to question myself and my ideas, and become a much better person with much more defensible ideas. It has driven me to study feminism, social justice, racial justice, intersectionality, LGBTQ theory, antifascism, and to find the intersections in my own life and strive to act out of a more legitimate sense of justice.
I’m often still a firebrand, but I’ve chosen much better hills to die on. My passion is invested in making a more just world, building safe and healthy communities, elevating my peers, and calling for justice and a just society. I have taken the lessons I have learned and tried to share them with other people, and to stand up for what I can now say I know is right, both online and in real life. Through a process of learning, reflection, and humility, I acknowledge that I have done a lot of harm in my youth. To repair this harm, I have committed myself to doing more than enough good now to make sure that the world is a better place when all is said and done. That’s what justice means to me when I turn my principles inwards and hold myself accountable.
So where do we go from here?
The response to my progressive beliefs and activism is reactionary backlash, doxing, harassment, and death threats targeting me and my family, all of which is likely to escalate in response to this post, and none of which is defensible. On the other hand, I understand that the consequences for my own reactionary past are, in some cases, alienation – and, honestly, fair enough.
But I don’t want you to confuse my honest faults with the defamation and harassment I endure for standing up for my honest strengths. If you feel generous and optimistic about who I am today, and you recognize my growth, and wish for an ally in the fight for what’s right, your good faith and solidarity mean the world to me. I would appreciate it if you would express your support and rebuke harassment when you see it, and help keep me honest as I continue a life-long process of learning and growth.
If I’ve hurt you, and you want to seek reconciliation, I make myself available to you for that purpose. If I’ve hurt you, and you simply don’t care to be hurt again, I’m sorry – I understand where you’re coming from, and have made my peace with it.
Please send words of support and/or death threats to drew@ddevault.org.
Thank you.
-
🔗 Baby Steps Symposium: community-oriented agentic development rss
I'm very excited to announce the first release of the Symposium project as well as its inclusion in the Rust Foundation's Innovation Lab. Symposium’s goal is to let everyone in the Rust community participate in making agentic development better. The core idea is that crate authors should be able to vend skills, MCP servers, and other extensions, in addition to code. The Symposium tool then installs those extensions automatically based on your dependencies. After all, who knows how to use a crate better than the people who maintain it?
If you want to read more details about how Symposium works, I refer you to the announcement post from Jack Huey on the main Symposium blog. This post is my companion post, and it is focused on something more personal - the reasons that I am working on Symposium.
I believe in extensibility everywhere
The short version is that I believe in extensibility everywhere. Right now, the Rust language does a decent job of being extensible: you can write Rust crates that offer new capabilities that feel built-in, thanks to proc-macros, traits, and ownership. But we're just getting started at offering extensibility in other tools, and I want us to hurry up!
I want crate authors to be able to supply custom diagnostics. I want them to be able to supply custom lints. I want them to be able to supply custom optimizations. I want them to be able to supply custom IDE refactorings. And, as soon as I started messing around with agentic development, I wanted extensibility there too.
Symposium puts crate authors in charge
The goal of Symposium is to give crate authors, and the broader Rust community, the ability to directly influence the experience of people writing Rust code with agents. Rust is a really popular target language for agents because the type system provides strong guardrails and it generates efficient code - and I predict it's only going to become more popular.
Despite Rust's popularity as an agentic coding target, the Rust community right now are basically bystanders when it comes to the experience of people writing Rust with agents; I want us to have a means of influencing it directly.
Enter Symposium. With Symposium, crate authors can package up skills and other extensions, and Symposium will automatically make them available to your agent. Symposium also takes care of bridging the small-but-very-real gaps between agents (e.g., each has its own hook format, and some of them use .agents/skills while some use .claude/skills, etc.).
Example: the assert-struct crate
Let me give you an example. Consider the assert-struct crate, recently created by Carl Lerche. assert-struct lets you write convenient assertions that test the values of specific struct fields:

assert_struct!(val, _ {
    items: [1, 2, ..],
    tags: #("a", "b", ..),
    ..
});

The problem: agents don't know about it
This crate is neat, but of course, no models are going to know how to use it - it's not part of their training set. They can figure it out by reading the docs, but that's going to burn more tokens (expensive, slow, consumes carbon), so that's not a great idea.
You could teach the agent how to use it…
In practice what people do today is to add skills to their project - for example, in his toasty crate, Carl has a testing skill that also shows how to use assert-struct. But it seems silly for everybody who uses the crate to repeat that content.
…but wouldn't it be better if the crate could teach the agent itself?
With Symposium, teaching your agent how to use your dependencies should not be necessary. Instead, your crates can publish their own skills or other extensions.
The way this works is that the assert-struct crate defines the skill once, centrally, in its own repository1. Then there is a separate file in Symposium's central recommendations repository with a pointer to the assert-struct repository. Any time the assert-struct repository updates that skill, the updates are automatically synchronized for you. Neat! (You can also embed skills directly in the recommendations repository, but then updating them requires a PR to that repo.)
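For a concrete picture, a crate-vended skill might look something like the SKILL.md convention agents already read from .agents/skills or .claude/skills. The file below is an illustrative assumption, not Symposium's documented format (that lives at symposium.dev):

```markdown
---
name: assert-struct
description: How to write structural assertions with the assert_struct! macro
---

When a test only needs to check some fields of a struct, prefer a single
assert_struct! call over field-by-field assert_eq! calls:

    assert_struct!(val, _ {
        items: [1, 2, ..],
        ..
    });

The trailing `..` skips fields the test does not care about.
```

The point of Symposium's indirection is that this file lives once in the crate's own repository, and agents pick it up automatically wherever the crate is a dependency.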
Frequently asked questions
How do I add support for my crate to Symposium?
It's easy! Check out the docs here:
https://symposium.dev/crate-authors/supporting-your-crate.html
What kind of extensions does Symposium support?
Skills, hooks, and MCP Servers, for now.
Why does Symposium have a centralized repository?
Currently we allow skill content to be defined in a decentralized fashion but we require that a plugin be added to our central recommendations repository. This is a temporary limitation. We eventually expect to allow crate authors to add skills and plugins in a fully decentralized fashion.
We chose to limit ourselves to a centralized repository early on for three reasons:
- Even when decentralized support exists, a centralized repository will be useful, since there will always be crates that choose not to provide that support.
- Having a central list of plugins will make it easy to update people as we evolve Symposium.
- Having a centralized repository will help protect against malicious skills while we look for other mechanisms, since we can vet the crates that are added and easily scan their content.
What if I want to add skills for crates private to my company? I don't want to put those in the central repository!
No problem, you can add a custom plugin source.
Are you aware of the negative externalities of LLMs?
I am, very much so. I feel like a lot of the uses of LLMs we see today are not great - e.g., chat bots hijack conversational and social cues to earn trust that they don't deserve, and reconfirm people's biases instead of challenging their ideas. And I'm worried about the environmental cost of data centers and the way companies have retreated from their climate goals. And I don't like how centralized models concentrate economic power.2 So yeah, I see all that. And I also see how LLMs enable people to build things that they couldn't build before and help to make previously intractable problems soluble - and that includes more and more people who never thought of themselves as programmers3. My goal with Symposium and other projects is to be part of the solution, finding ways to leverage LLMs that are net positive: opening doors, not closing them.
Extensibility: because everybody has something to offer
Fundamentally, the reason I am working on Symposium is that I believe everybody has something unique to offer. I see the appeal of strongly opinionated systems that reflect the brilliant vision of a particular person. But to me, the most beautiful systems are the ones that everybody gets to build together4. This is why I love open source. This is why I love emacs5. It's why I love VSCode's extension system, which has so many great gems6.
To me, Symposium is a double win in terms of empowerment. First, it makes agents extensible, which is going to give crate authors more power to support their crates. But it also helps make agentic programming better, which I believe will ultimately open up programming to a lot more people. And that is what it's all about.
- Actually as of this posting, the assert-struct skill is embedded directly in the recommendations repo. But I opened a PR to put it on assert-struct and I'll port it over once it lands. ↩︎
- I'm very curious to do more with open models. ↩︎
- Within Amazon, it's been amazing to watch how many people who never thought of themselves as software developers are starting to build software. Considering the challenges the software industry has with representation, I find this very encouraging. Diverse teams are stronger, better teams! ↩︎
- None of this is to say I don't believe in good defaults; there's a reason I use Zed and VSCode these days, and not emacs, much as I love it in concept. ↩︎
- OMG. One of my college friends wrote this amazing essay some time back on emacs. Next time you're doomscrolling on the toilet or whatever, pop over to this essay instead. Fair warning, it's long, so it'll take you a while to read, but I think it nails what people love about emacs. ↩︎
- These days I'm really enjoying Zed, but I have to say, I really miss kahole/edamagit! Which of course is inspired by the magit emacs package. ↩︎
-
- April 20, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-20 rss
Activity:
- ida-chat-plugin
- 883f9b35: Merge pull request #3 from joaquimbc/windows-cli
- IDAPluginList
- 3dcf0a61: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- python-elpida_core.py
- c4e01069: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T23:45Z
- 563e2ac9: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T23:26Z
- 2c045956: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T23:06Z
- 9eb03804: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T22:47Z
- db2376e6: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T22:27Z
- 0bf53343: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T22:07Z
- 506143ab: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T21:49Z
- 60091540: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T21:28Z
- e659ef33: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-20T21:06Z
-
🔗 r/reverseengineering Wrote a Linux rootkit (DKOM, eBPF bypass) and a detector to find it — sharing both rss
submitted by /u/buter_chkalova
-
🔗 r/york The perks of being a local - spontaneous trips in to create art! rss
submitted by /u/GalacticGoose1
-
🔗 anthropics/claude-code v2.1.116 release
What's changed
- /resume on large sessions is significantly faster (up to 67% on 40MB+ sessions) and handles sessions with many dead-fork entries more efficiently
- Faster MCP startup when multiple stdio servers are configured; resources/templates/list is now deferred to first @-mention
- Smoother fullscreen scrolling in VS Code, Cursor, and Windsurf terminals — /terminal-setup now configures the editor's scroll sensitivity
- Thinking spinner now shows progress inline ("still thinking", "thinking more", "almost done thinking"), replacing the separate hint row
- /config search now matches option values (e.g. searching "vim" finds the Editor mode setting)
- /doctor can now be opened while Claude is responding, without waiting for the current turn to finish
- /reload-plugins and background plugin auto-update now auto-install missing plugin dependencies from marketplaces you've already added
- Bash tool now surfaces a hint when gh commands hit GitHub's API rate limit, so agents can back off instead of retrying
- The Usage tab in Settings now shows your 5-hour and weekly usage immediately and no longer fails when the usage endpoint is rate-limited
- Agent frontmatter hooks: now fire when running as a main-thread agent via --agent
- Slash command menu now shows "No commands match" when your filter has zero results, instead of disappearing
- Security: sandbox auto-allow no longer bypasses the dangerous-path safety check for rm/rmdir targeting /, $HOME, or other critical system directories
- Fixed Devanagari and other Indic scripts rendering with broken column alignment in the terminal UI
- Fixed Ctrl+- not triggering undo in terminals using the Kitty keyboard protocol (iTerm2, Ghostty, kitty, WezTerm, Windows Terminal)
- Fixed Cmd+Left/Right not jumping to line start/end in terminals that use the Kitty keyboard protocol (Warp fullscreen, kitty, Ghostty, WezTerm)
- Fixed Ctrl+Z hanging the terminal when Claude Code is launched via a wrapper process (e.g. npx, bun run)
- Fixed scrollback duplication in inline mode where resizing the terminal or large output bursts would repeat earlier conversation history
- Fixed modal search dialogs overflowing the screen at short terminal heights, hiding the search box and keyboard hints
- Fixed scattered blank cells and disappearing composer chrome in the VS Code integrated terminal during scrolling
- Fixed an intermittent API 400 error related to cache control TTL ordering that could occur when a parallel request completed during request setup
- Fixed /branch rejecting conversations with transcripts larger than 50MB
- Fixed /resume silently showing an empty conversation on large session files instead of reporting the load error
- Fixed /plugin Installed tab showing the same item twice when it appears under Needs attention or Favorites
- Fixed /update and /tui not working after entering a worktree mid-session
-
🔗 badlogic/pi-mono v0.68.0 release
New Features
- Configurable streaming working indicator for extensions via ctx.ui.setWorkingIndicator(), including animated, static, and hidden indicators. See docs/tui.md#working-indicator, docs/extensions.md, and examples/extensions/working-indicator.ts.
- before_agent_start now exposes systemPromptOptions (BuildSystemPromptOptions) so extensions can inspect the structured system-prompt inputs without re-discovering resources. See docs/extensions.md#before_agent_start and examples/extensions/prompt-customizer.ts.
- Configurable keybindings for scoped model selector actions and session-tree filter actions. See docs/keybindings.md.
- /clone duplicates the current active branch into a new session, while extensions can choose whether to fork before or at an entry via ctx.fork(..., { position }). See README.md, docs/extensions.md, and docs/session.md.
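To make the "before" versus "at" fork positions concrete, here is a toy model of a session branch as a plain array of entries. This is not the pi SDK (ctx.fork's real signature and Entry shape are assumptions for illustration); it only sketches the semantics the changelog describes:

```typescript
// Toy model: a branch is an ordered list of conversation entries.
type Entry = { id: number; text: string };

// Fork "at" an entry keeps that entry in the new branch (duplicating the
// current point); fork "before" branches from just before it.
function fork(entries: Entry[], entryId: number, position: "before" | "at"): Entry[] {
  const idx = entries.findIndex((e) => e.id === entryId);
  if (idx < 0) throw new Error(`no entry ${entryId}`);
  return entries.slice(0, position === "at" ? idx + 1 : idx);
}

const branch: Entry[] = [
  { id: 1, text: "user: hello" },
  { id: 2, text: "assistant: hi" },
  { id: 3, text: "user: refactor this" },
];

console.log(fork(branch, 3, "before").length); // 2: new branch excludes entry 3
console.log(fork(branch, 3, "at").length);     // 3: new branch includes entry 3
```

Under this reading, /clone corresponds to forking "at" the latest entry, which is why it duplicates the current active branch rather than rewinding it.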
Breaking Changes
- Changed SDK and CLI tool selection from cwd-bound built-in tool instances to tool-name allowlists. createAgentSession({ tools }) now expects string[] names such as "read" and "bash" instead of Tool[], --tools now allowlists built-in, extension, and custom tools by name, and --no-tools now disables all tools by default rather than only built-ins. Migrate SDK code from tools: [readTool, bashTool] to tools: ["read", "bash"] (#2835, #3452)
- Removed prebuilt cwd-bound tool and tool-definition exports from @mariozechner/pi-coding-agent, including readTool, bashTool, editTool, writeTool, grepTool, findTool, lsTool, readOnlyTools, codingTools, and the corresponding *ToolDefinition values. Use the explicit factory exports instead, for example createReadTool(cwd), createBashTool(cwd), createCodingTools(cwd), and createReadToolDefinition(cwd) (#3452)
- Removed ambient process.cwd() / default agent-dir fallback behavior from public resource helpers. DefaultResourceLoader, loadProjectContextFiles(), and loadSkills() now require explicit cwd/agent-dir style inputs, and exported system-prompt option types now require an explicit cwd. Pass the session or project cwd explicitly instead of relying on process-global defaults (#3452)
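The selection-model change above can be sketched with hypothetical stand-ins. None of these names are real pi exports (only the "read"/"bash" string identifiers come from the changelog); the sketch just shows the shift from cwd-bound instances to a name allowlist plus an explicit cwd:

```typescript
// Hypothetical stand-ins, not the real pi SDK.
type Tool = { name: string; cwd: string };

// Every factory takes an explicit cwd; there is no process.cwd() fallback.
const factories: Record<string, (cwd: string) => Tool> = {
  read: (cwd) => ({ name: "read", cwd }),
  bash: (cwd) => ({ name: "bash", cwd }),
  edit: (cwd) => ({ name: "edit", cwd }),
};

// Old style (removed): tools: [readTool, bashTool]  (cwd-bound instances)
// New style:           tools: ["read", "bash"]      (name allowlist)
function resolveTools(allowlist: string[], cwd: string): Tool[] {
  return allowlist
    .filter((name) => name in factories) // unknown names are simply skipped
    .map((name) => factories[name](cwd));
}

const tools = resolveTools(["read", "bash"], "/work/project");
console.log(tools.map((t) => t.name).join(",")); // read,bash
```

Binding the cwd at resolution time, rather than baking it into exported singletons, is what lets multiple sessions with different working directories coexist in one process.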
Added
- Added extension support for customizing the interactive streaming working indicator via ctx.ui.setWorkingIndicator(), including custom animated frames, static indicators, hidden indicators, a new working-indicator.ts example extension, and updated extension/TUI/RPC docs (#3413)
- Added systemPromptOptions (BuildSystemPromptOptions) to before_agent_start extension events, so extensions can inspect the structured inputs used to build the current system prompt (#3473 by @dljsjr)
- Added /clone to duplicate the current active branch into a new session, while keeping /fork focused on forking from a previous user message (#2962)
- Added ctx.fork() support for position: "before" | "at" so extensions and integrations can branch before a user message or duplicate the current point in the conversation; the interactive clone/fork UX builds on that runtime support (#3431 by @mitsuhiko)
- Added configurable keybinding ids for scoped model selector actions and tree filter actions, so those interactive shortcuts can be remapped in keybindings.json (#3343 by @mpazik)
- Added PI_OAUTH_CALLBACK_HOST support for built-in OAuth login flows, allowing local callback servers used by pi auth to bind to a custom interface instead of hardcoded 127.0.0.1 (#3409 by @Michaelliv)
- Added reason and targetSessionFile metadata to session_shutdown extension events, so extensions can distinguish quit, reload, new-session, resume, and fork teardown paths (#2863)
Changed
- Changed pi update to batch npm package updates per scope and run git package updates with bounded parallelism, reducing multi-package update time while preserving skip behavior for pinned and already-current packages (#2980)
- Changed Bedrock session requests to omit maxTokens when model token limits are unknown and to omit temperature when unset, letting Bedrock use provider defaults and avoid unnecessary TPM quota reservation (#3400 by @wirjo)
Fixed
- Fixed AgentSession system-prompt option initialization to avoid constructing an invalid empty BuildSystemPromptOptions, so npm run check passes after cwd became mandatory.
- Fixed shell-path resolution to stop consulting ambient process.cwd() state during bash execution, so session/project-specific shellPath settings now follow the active coding-agent session cwd instead of the launcher cwd (#3452)
- Fixed ctx.ui.setWorkingIndicator() custom frames to render verbatim instead of forcing the theme accent color, so extensions now own working-indicator coloring when they customize it (#3467)
- Fixed pi update reinstalling npm packages that are already at the latest published version by checking the installed package version before running npm install <pkg>@latest (#3000)
- Fixed @ autocomplete plain queries to stop matching against the full cwd/base path, so path fragments in worktree names no longer crowd out intended results such as @plan (#2778)
- Fixed built-in tool wrapping to use the same extension-runner context path as extension tools, so built-in tools receive execution context and read can warn when the current model does not support images (#3429)
- Fixed openai-completions assistant replay to preserve compat.requiresThinkingAsText text-part serialization, avoiding same-model follow-up crashes when previous assistant messages mix thinking and text (#3387)
- Fixed direct OpenAI Chat Completions sessions to map sessionId and cacheRetention to prompt caching fields, sending prompt_cache_key when caching is enabled and prompt_cache_retention: "24h" for direct api.openai.com requests with long retention (#3426)
- Fixed OpenAI-compatible Chat Completions sessions to optionally send aligned session_id, x-client-request-id, and x-session-affinity headers from sessionId via compat.sendSessionAffinityHeaders, improving cache-affinity routing for backends such as Fireworks (#3430)
- Fixed threaded /resume session relationships and current-session detection to canonicalize symlinked session paths during selector comparisons, so shared session directories no longer break parent-child matching or active-session delete protection (#3364)
- Fixed /session, Sessions docs, and CLI help to consistently document that session reuse supports both file paths and session IDs, and that /session shows the current session ID (#3390)
- Fixed Windows pnpm global install detection to recognize \.pnpm\store paths, so update notices now suggest pnpm install -g @mariozechner/pi-coding-agent instead of falling back to npm (#3378)
- Fixed missing @sinclair/typebox runtime dependency in @mariozechner/pi-coding-agent, so strict pnpm installs no longer fail with ERR_MODULE_NOT_FOUND when starting pi (#3434)
- Fixed xterm uppercase typing in the interactive editor by decoding printable modifyOtherKeys input and normalizing shifted letter matching, so Shift+letter no longer disappears in pi (#3436)
- Fixed /compact to reuse the session thinking level for compaction summaries instead of forcing high, avoiding invalid reasoning-effort errors on github-copilot/claude-opus-4.7 sessions configured for medium thinking (#3438)
- Fixed shared/exported plain-text tool output to preserve indentation instead of collapsing leading whitespace in the web share page (#3440)
- Fixed exported share pages to use browser-safe T and O shortcuts with clickable header toggles for thinking and tool visibility instead of browser-reserved Ctrl+T/Ctrl+O bindings (#3374 by @vekexasia)
- Fixed skill resolution to dedupe symlinked aliases by canonical path, so pi config no longer shows duplicate skill entries when ~/.pi/agent/skills points to ~/.agents/skills (#3417 by @rwachtler)
- Fixed OpenRouter request attribution to include Pi app headers (HTTP-Referer: https://pi.dev, X-OpenRouter-Title: pi, X-OpenRouter-Categories: cli-agent) when sessions are created through the coding-agent SDK and install telemetry is enabled (#3414)
- Fixed custom-model compat schema/docs to support cacheControlFormat: "anthropic" for OpenAI-compatible providers that expose Anthropic-style prompt caching via cache_control markers (#3392)
- Fixed Cloud Code Assist tool schemas to strip JSON Schema meta-declaration keys before provider translation, avoiding validation failures for tool-enabled sessions that use $schema, $defs, and related metadata (#3412 by @vladlearns)
- Fixed direct Bedrock sessions to honor model.baseUrl as the runtime client endpoint, restoring support for custom Bedrock VPC or proxy routes (#3402 by @wirjo)
- Fixed the
edittool to coerce stringifiededitsJSON before validation, so models that send the array payload as a JSON string no longer fall back to ad-hoc shell edits (#3370 by @dannote) - Fixed package manifest positive glob entries to expand before loading packaged resources, restoring manifest patterns such as
skills/**/*.md(#3350 by @neonspectra)
- Configurable streaming working indicator for extensions via
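Two of the fixes above (#3364 and #3417) share one pattern: paths are compared only after being resolved to a canonical form, so symlinked aliases count as one entry. A minimal Python sketch of that dedupe idea (illustrative only; `dedupe_by_canonical_path` is an invented name, not pi's code):

```python
import os

def dedupe_by_canonical_path(paths):
    """Treat symlinked aliases as one entry by comparing resolved real paths,
    keeping the first spelling the user configured."""
    seen = set()
    unique = []
    for path in paths:
        canonical = os.path.realpath(path)  # resolves symlinks, normalizes
        if canonical not in seen:
            seen.add(canonical)
            unique.append(path)
    return unique
```

With a real symlink (say `~/.pi/agent/skills -> ~/.agents/skills`), both spellings resolve to the same real path and only the first survives.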
-
🔗 r/Yorkshire Dean's Park, York rss
submitted by /u/RedPandaCommander24
-
🔗 r/york Dean's Park rss
submitted by /u/RedPandaCommander24 -
🔗 r/LocalLLaMA Gemma-4-E2B's safety filters make it unusable for emergencies rss
I’ve been testing Google’s Gemma-4-E2B-it as a local, offline resource for emergency preparedness. The idea was to have a lightweight model that could provide basic technical or medical info if the internet goes down. As the screenshots show, the safety filters are so aggressive that the model is functionally useless for these scenarios. It issues a "hard refusal" on almost everything:
- First Aid: Refused to explain an emergency airway procedure, even when specified as a last resort.
- Water/Sanitation: Refused to provide chemical ratios for purifying water.
- Maintenance: Refused basic mechanical help with a self-defense tool.
- Food: Refused instructions on how to process livestock.
In a scenario like a war or a total grid collapse, "Contact emergency services" isn't a valid answer. It's disappointing that an offline model, designed for portability, is programmed to withhold basic survival information under the guise of safety. submitted by /u/Unfounded_898 -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [IDAssist](https://github.com/symgraph/IDAssist): 2.0.0 -
🔗 r/Leeds There was a campaign around 2010 to rename Leeds Bradford airport "Sir Jimmy Savile International" rss
This was a big facebook group and there was a petition to have it made official, especially in the weeks after he died. This was kick started in the Yorkshire Evening Post - the original article in the Yorkshire Evening post has been deleted (shocker) but some websites remain.
submitted by /u/M_M_X_X_V
-
🔗 r/Leeds Leeds music rss
What bands do you think really define Leeds’ sound? And are there any newer acts people are excited about at the moment?
I put this map together a while ago and I’m thinking of updating it, so would be great to hear what people think, especially if there’s anything obvious I’ve missed or newer bands worth adding.
submitted by /u/TheSenseOfDoubt
-
🔗 r/wiesbaden Hubschrauber rss
Weiß jemand, was es mit den beiden Hubschrauber-Flügen heute Abend gegen 21:30 auf sich hatte?
Wirkten um einiges größer als ein Polizei- oder Rettungshubschrauber und waren überm Dichterviertel deutlich zu sehen und zu hören.
Werden geplante Flüge von der US Airbase irgendwo angekündigt/öffentlich dokumentiert?
submitted by /u/Tisiphoni1
-
🔗 r/Yorkshire York cherry blossoms looking spectacular this year rss
submitted by /u/RedPandaCommander24
-
🔗 @binaryninja@infosec.exchange Binary Ninja 5.3 (Jotunheim) adds new architecture APIs for full function mastodon
Binary Ninja 5.3 (Jotunheim) adds new architecture APIs for full function level lifting. We are already using them for upcoming TMS320C6x work, and plugin authors should be able to put them to good use too. Also new: NDS32 and AArch64 ILP32 ABI updates. Check out the latest blog: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#architecture
-
🔗 r/wiesbaden Best Schnitzel in town und Umgebung? 🤤 rss
submitted by /u/Haunting-Ad2182
-
🔗 r/Leeds Harewood House, Gardens and Lake - 2025 rss
Photographs captured by Samuel Greenwood.
submitted by /u/Money_Pie_40
-
🔗 r/LocalLLaMA Kimi K2.6 rss
Benchmarks submitted by /u/Fantastic-Emu-3819 -
🔗 r/LocalLLaMA Kimi K2.6 Released (huggingface) rss
submitted by /u/BiggestBau5 -
🔗 sacha chua :: living an awesome life 2026-04-20 Emacs news rss
I enjoyed reading Hot-wiring the Lisp machine (an adventure into modifying Org publishing). I'm also looking forward to debugging my Emacs Lisp better with timestamped debug messages and ert-play-keys. I hope you also find lots of things you like in the links below!
- Upcoming events (iCal file, Org):
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Apr 25 0130 America/Vancouver - 0330 America/Chicago - 0430 America/Toronto - 0830 Etc/GMT - 1030 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Emacs Berlin: Emacs-Berlin Hybrid Meetup https://emacs-berlin.org/ Wed Apr 29 1000 America/Vancouver - 1200 America/Chicago - 1300 America/Toronto - 1700 Etc/GMT - 1900 Europe/Berlin - 2230 Asia/Kolkata – Thu Apr 30 0100 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri May 1 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Beginner:
- Emacs configuration:
- Emacs Lisp:
- What are some common code smells that inexperienced Elispers make?
- Updated kickingvegas/elisp-for-python - improved sections on map types and iteration (@kickingvegas@sfba.social)
- load settings from files sorted by number (@cage@mastodon.bsd.cafe)
- dmsg.el: Timestamped debug messages with backtrace support (Reddit)
- Defining λ as a macro for lambda (@marcel@van-der-boom.nl)
- Listful Andrew: Mars Rovers IV: The Solutions — Emacs Lisp
- Listful Andrew: Mars Rovers IX: The Grid Viz Solutions — Emacs Lisp
- Appearance:
- Loading the theme and user face customizations at the right moment
- Protesilaos Stavrou: Emacs: new modus-themes-exporter package (YouTube 2:56:36)
- faff theme v4.0; now using modus-themes (Reddit)
- folio-theme: a warm paper-like light theme for Emacs (Reddit)
- Emacs Redux: Batppuccin and Tokyo Night Themes Land on MELPA
- Navigation:
- Dired:
- Writing:
- Dave Pearson: boxquote.el v2.4 - added a transient
- Dave Pearson: blogmore.el v4.1 - change image extension to webp
- Launching a new grammar/spell checking tool for Org-mode, LaTeX, Markdown, Python, Clang, etc. (Reddit)
- ekg version 0.9.0: New notes UI, Apple Notes Syncing, agentic actions and org integration (YouTube 21:36)
- Org Mode:
- Remember everything with Org Mode (10:17)
- Organizing my retirement with org-mode – Andy Sylvester's Web
- Org-roam pour la prise de notes (avec Spacemacs) (20:37)
- org-auto-scheduler (r/emacs, r/orgmode)
- folgezett.el a package for Org-Roam users (Reddit)
- Avoiding mismatched Org versions by removing ELPA/MELPA packages and other Org performance tips (@publicvoit@graz.social)
- Emacs as a Math Notebook and Advanced Symbolic Solver! (Irreal)
- #28 bbb:OrgMeetup on Wed, March 11, 19:00 UTC+3 - meeting notes (@yantar92@fosstodon.org)
- Import, export, and integration:
- Graphs in Org-Mode! Matplotlib Demo (Reddit)
- [EMACS LAB] #4: "literate" programming (org-babel) (01:38:44)
- Org Mode requests: [RFC] Drop GoogleCL from LoB + ideas for a replacement?
- James Endres Howell: Embedding a Mastodon thread as comments to a blog post - org-static-blog-emfed
- Sacha Chua: Org Mode: JS for translating times to people's local timezones
- Sacha Chua: Create a Google Calendar event from an Org Mode timestamp
- Recent Features Added to lazyblorg (Static Blog Generator) (@jameshowell@fediscience.org)
- Hot-wiring the lisp machine (Reddit, lobste.rs) - modifying publishing
- Org development:
- Completion:
- Coding:
- Tip about using eglot-extend-to-xref
- New Package: eglot-rcpp for simplifying Rcpp package development in emacs (Reddit)
- Scheme for Beginners 2: Guile and Emacs (04:56)
- Shipit update: Atlassian Dashboard for Jira, PR↔issue linking, and activity-level notification navigation
- [Showcase] k8s-to-puml: Deterministic Kubernetes diagrams from your manifests using Tree-sitter and GOFAI rules (Reddit)
- Shells:
- Web:
- paw browser extension can now manage tabs and send tab info, copy links to Emacs (Reddit) Chrome/Firefox extension for sending page context via org-protocol
- Doom Emacs:
- Multimedia:
- Fun:
- Dave Pearson: wordcloud.el v1.4
- Dave Pearson: slstats.el v1.11 - Second Life grid
- AI:
- Community:
- VSCode too SLOW | switch to Emacs and go to PLAID (06:59)
- Cocinándose la renovación de la Web… | Hacia la Hispa-Emacs Conf. 2026 ! (@hispaemacs@fosstodon.org)
- Sacha Chua: YE16: Sacha and Prot talk Emacs
- Eric MacAdie: 2026-04 Austin Emacs Meetup
- 26: Why You'll Never Switch Editors (And What You're Missing)
- Other:
- Tip about setting w32-use-visible-system-caret to nil on Windows
- # omarchy.el - Emacs integration for Omarchy (Reddit)
- trust-manager.el — Towards Trust in Emacs (Reddit, HN, long discussion on emacs-devel)
- emskin: a nested Wayland compositor in Rust that embeds any app into Emacs windows (Reddit)
- Dave's blog: Posframe for everything
- Emacs development:
- New packages:
- agent-recall: Search and browse agent-shell conversation transcripts (MELPA)
- batppuccin: Shared infrastructure for Batppuccin themes (MELPA)
- citar-vulpea: Minor mode integrating Citar and Vulpea (MELPA)
- comet-trail: Cursor comet trail effect (MELPA)
- elixir-iex: IEx REPL via eat terminal emulator (MELPA)
- go-prettify-mode: Hide `if err != nil' and prettify them (MELPA)
- hidepass: Hide passwords at one or multiple lines (MELPA)
- http-server: Speaks HTTP for you (MELPA)
- modus-ewal-theme: Modus theme that uses pywal colors powered by ewal (MELPA)
- python-unicode-escape: Completion for Python \N{NAME} escapes (MELPA)
- rimel: A lightweight Rime input method (MELPA)
- rocq-timing: Display timing of rocq commands in buffer (MELPA)
- sidebuf: Buffer list sidebar panel (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
- Upcoming events (iCal file, Org):
-
🔗 r/LocalLLaMA When you dial in your bot’s personality rss
sycophancy: deleted. efficiency per token: +1000%. friendship: just beginning. edit: “sup” got cut off at top. submitted by /u/technaturalism -
🔗 r/Leeds Things to do in Leeds rss
Going to Leeds as a work trip this week and staying there for full day. Can yall recommend places to go to or your favourite food places ?
Thank you 💗
submitted by /u/ConsciousBowler4019
-
🔗 r/Leeds Anyone lost a ferret? rss
seen along the canal near city island. seemed domesticated but kinda skinny
submitted by /u/fluxpeach
-
🔗 r/reverseengineering Reconstructing a Dead USB protocol: From Unknown Chip to Working Implementation rss
submitted by /u/Bobby_Bonsaimind
-
🔗 r/wiesbaden Moving to Wiesbaden rss
Hello everyone
I’m starting a new job in Wiesbaden this August and I desperately need an apartment.
Currently im living near Freiburg.
I don’t need a lot of space but I do have a dog which isn’t gonna make getting an apartment easy.
Do you have any tips or suggestions for me?
Thank you in advance!
submitted by /u/Skoobdie
-
🔗 r/york Early spring at the Minster rss
submitted by /u/RedDevilPlay -
🔗 r/wiesbaden Hiking Wiesbaden/Mainz/Lorch rss
Is anyone interested in hiking this Saturday? Weather is perfect - (Flexible route and time)Lorch to Rewe to Lorchhausen..https://maps.app.goo.gl/K9NB4gg6NomsTvWs5
submitted by /u/Ok-Muscle-9502
-
🔗 r/york York Mosque Community Kitchen | THURSDAY 23 APRIL 12:00 - 13:30. rss
Welcome back to our neighbours & friends in r/York! York Mosque Community Kitchen will be back open on Thursday 23rd April between 12:00 and 13:30, where our dedicated volunteers will be cooking and serving two delicious dishes for lunch. We hope to see you there! Bring someone with you who's in need of a good meal and a friendly chat. Always free, everyone welcome! submitted by /u/YorkMosque-Kitchen -
🔗 r/reverseengineering SASS King: reverse engineering NVIDIA SASS rss
submitted by /u/CurrentLawfulness358
-
🔗 r/wiesbaden Nix pflück! rss
submitted by /u/Happycosinus
-
🔗 r/Harrogate The Neverending Harrogate Roadworks Tour has come to my street rss
Which means I’m minorly inconvenienced for the next couple of days as there’s no parking on street. The surface of roads isn’t really my forte so can someone explain the issue with the road here? I’m fairly certain they resurfaced and repainted it last year and it’s one of the few Harrogate roads with zero potholes. submitted by /u/kamasutramarkviduka -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
-
🔗 r/Yorkshire Throwback to 2023. Fountains Abbey hits different in the sun. rss
Found this photo from three years ago. Fountains Abbey looking bright and the daffodils were just perfect. What’s your favourite spot for a spring walk? Is it looking like this yet? submitted by /u/Happy-Fox11 -
🔗 backnotprop/plannotator v0.18.0 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
v0.17.2 | Supply-chain hardening, sticky toolstrip and badges, overlay scrollbars, external annotation highlighting, Conventional Comments
v0.17.1 | Pi PR review parity, parseRemoteUrl rewrite, cross-repo clone fixes, diff viewer flash fix
v0.17.0 | AI code review agents, token-level annotation, merge-base diffs
v0.16.7 | Gemini CLI plan review, install script skills directory fix
What's New in v0.18.0
v0.18.0 adds focus & wide modes for annotate, first-class OpenCode detection, word-level inline plan diffs, content negotiation for URLs that publish Markdown (via Cloudflare), and inline color swatches in the plan viewer. 13 PRs, 7 from external contributors — 6 of them first-timers.
Word-Level Inline Plan Diff
The old plan diff stacked the full old block above the full new block whenever a paragraph was modified. A single word change showed the same paragraph twice with no visual cue to where the edit actually happened. Readers ended up comparing two nearly identical blocks line by line to find the delta.
The new default Rendered mode performs a second-pass word diff on modified blocks and highlights only the changed tokens inline. A one-word reword now reads as a single paragraph with `<ins>` and `<del>` markers on exactly the changed words. Inline code spans, markdown links, and fenced code blocks are preserved as atomic units through a sentinel substitution pass, so diff markers can't split them.
A third mode switcher tab, "Classic," keeps the legacy block-level stacked rendering for users who prefer it. Raw git-style output is unchanged. Modified blocks are click-to-annotate directly, with both the old and new content captured in the exported feedback so comments on struck-through words keep their context.
Amber borders on modified blocks complete the green/red/yellow convention used by GitHub and VS Code.
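The sentinel-substitution pass can be sketched briefly. This Python is an illustration of the technique, not plannotator's TypeScript, and all names are invented: inline code spans are swapped for opaque tokens, the words are diffed, and the spans are restored afterwards so markers can never split them.

```python
import difflib
import re

CODE_SPAN = re.compile(r"`[^`]+`")  # inline code spans are kept atomic

def protect(text, table):
    # Replace each inline code span with a sentinel token; identical
    # spans on both sides share one token so they compare equal.
    def sub(match):
        span = match.group(0)
        return table.setdefault(span, f"\x00{len(table)}\x00")
    return CODE_SPAN.sub(sub, text)

def word_diff(old, new):
    table = {}
    a = protect(old, table).split()
    b = protect(new, table).split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op in ("replace", "delete"):
            out.append("<del>" + " ".join(a[i1:i2]) + "</del>")
        if op in ("replace", "insert"):
            out.append("<ins>" + " ".join(b[j1:j2]) + "</ins>")
        if op == "equal":
            out.extend(a[i1:i2])
    text = " ".join(out)
    for span, token in table.items():  # restore protected spans verbatim
        text = text.replace(token, span)
    return text
```

A one-word change now yields a single paragraph with markers on exactly that word, e.g. `word_diff("call the `parse()` helper once", "... twice")` keeps the code span intact and wraps only `once`/`twice`.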
- #565 by @backnotprop, closing #560 requested by @pbowyer
Wide and Focus Modes
Wide markdown tables were unreadable because both side panels (TOC on the left, annotations on the right) stayed fixed while the reader width was capped. Tables wrapped awkwardly or required horizontal scrolling inside a narrow column.
Two new toggles sit above the document and next to the lightning-bolt action:
- Wide hides both panels and removes the reader width cap. Wide tables and code fences get the full document area.
- Focus hides both panels but keeps the normal reader width. Distraction-free reading without stretching the content.
Enabling either mode collapses the left sidebar, hides the annotations panel and resize handle, and toggles the width cap accordingly. Exiting restores the exact previous layout, including which sidebar tab was open. Opening any sidebar or annotations panel automatically exits.
Available in plan review, annotate, and linked-doc overlays. Archive mode and plan-diff view keep the standard layout.
- #578 by @dgrissen2
First-Class OpenCode Detection
The origin detection chain in the hook server didn't include OpenCode. Every OpenCode invocation fell through to the `claude-code` default, which loaded the wrong UI variant: missing agent-switch toggle, wrong agent badge. The `opencode` origin key was already defined in `AGENT_CONFIG` with its badge styling in place, but the detection side was never wired up.
OpenCode is now detected via `OPENCODE=1`, the canonical runtime flag set unconditionally by the OpenCode binary. The full priority order is: `PLANNOTATOR_ORIGIN > Codex > Copilot CLI > OpenCode > Claude Code (default)`.
The `PLANNOTATOR_ORIGIN` environment variable was documented in the source but never read. It now functions as an explicit override at the top of the chain, validated against `AGENT_CONFIG` so invalid values fall through to env-based detection instead of breaking.
Content Negotiation for Markdown-Serving URLs
When you run `plannotator annotate https://...`, the tool goes through Jina Reader (or Turndown as a fallback) to convert HTML to markdown. But a growing number of sites — including Cloudflare's developer docs — now publish Markdown directly when you ask for it. Routing those through an HTML-to-markdown converter is wasteful and loses fidelity.
URL annotation now tries `Accept: text/markdown, text/html;q=0.9` first, with a 5-second timeout. If the server returns `content-type: text/markdown`, the response is used directly — one fetch, no conversion. If the server returns HTML or the request fails, it falls through silently to the existing Jina/Turndown pipeline. Local URLs skip negotiation entirely.
A new `content-negotiation` source type is recorded on the result so the UI can indicate which path produced the content.
- #557 by @backnotprop
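The decision logic amounts to a few lines. A hedged Python sketch (the `negotiate` helper and the injected `fetch` are invented names; the real code is the tool's TypeScript):

```python
def negotiate(url, fetch):
    """Ask for Markdown first; fall back to the HTML-conversion pipeline.

    `fetch` is injected (returns (content_type, body)) so the sketch stays
    testable without a network; failures and HTML responses both mean
    "fall back to the existing converter".
    """
    accept = "text/markdown, text/html;q=0.9"
    try:
        content_type, body = fetch(url, accept=accept, timeout=5)
    except Exception:
        return None, "fallback"              # request failed: HTML pipeline
    if content_type.split(";")[0].strip() == "text/markdown":
        return body, "content-negotiation"   # served Markdown directly
    return None, "fallback"                  # got HTML: convert downstream
```

Returning the source tag alongside the body mirrors the recorded `content-negotiation` source type, so callers can tell which path produced the content.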
Hex Color Swatches in the Plan Viewer
Frontend plans reference hex color values constantly — design tokens, Tailwind overrides, CSS variable assignments, component palette decisions. Reviewers had to mentally decode every `#ff6600` or open a color picker to follow the author's intent.
The plan viewer now renders a small filled swatch inline, immediately to the left of the hex code. The swatch is a `14×14` rounded square matching the referenced color. Supports 3-, 4-, 6-, and 8-digit hex with a negative lookahead that excludes URL anchors, CSS id selectors, and any identifier that continues with word characters.
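The release notes don't show the shipped regex, so the following Python pattern is only a guess at its shape: longest alternatives first, with a trailing negative lookahead doing the exclusion work.

```python
import re

# Longest alternatives first so a 6-digit color isn't half-matched as 4
# digits; the negative lookahead rejects any candidate followed by more
# word characters (URL anchors like /#section, hex-looking identifiers).
HEX_COLOR = re.compile(
    r"#(?:[0-9a-fA-F]{8}|[0-9a-fA-F]{6}|[0-9a-fA-F]{4}|[0-9a-fA-F]{3})"
    r"(?![0-9a-zA-Z_])"
)
```

Under this sketch, `#ff6600` and `#fff` match, while an 11-hex-digit URL anchor fragment is rejected because every alternative is followed by another word character.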
cssText— so there's no CSS injection path. 19 tests cover valid patterns, false-positive guards, and injection attempts.Self-Hosted Paste Service Support
Short-link sharing for larger plans routes through a paste service at `plannotator-paste.plannotator.workers.dev`. Self-hosted deployments had no way to point at their own paste service — the URL was hardcoded in the OpenCode plugin.
The `PLANNOTATOR_PASTE_URL` environment variable now configures a custom paste endpoint. The OpenCode plugin reads it via a new `getPasteApiUrl` dependency that flows through command handlers (annotate, annotate-last, archive) and the review server. The Landing component accepts a `shareBaseUrl` prop with a fallback to the default. CORS documentation in the paste service now includes explicit guidance for self-hosters.
Backward compatible: unset `PLANNOTATOR_PASTE_URL` continues to use the hosted default.
- #582 by @backnotprop, closing #580 reported by @ndesjardins-comact
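In Python terms, the resolution order is a one-liner. This is a sketch mirroring the described behavior; the real helper is the TypeScript `getPasteApiUrl`:

```python
import os

DEFAULT_PASTE_URL = "https://plannotator-paste.plannotator.workers.dev"

def get_paste_api_url(env=None):
    # Explicit override for self-hosted deployments; hosted default otherwise.
    env = os.environ if env is None else env
    return env.get("PLANNOTATOR_PASTE_URL") or DEFAULT_PASTE_URL
```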
OpenCode Review: Reuse the Existing Local Server
On subsequent review commands, the OpenCode AI review path tried to start a second `opencode serve` and collided with the existing local server on port 4096. The first `opencode serve` wasn't being cleaned up, so port conflicts were guaranteed on the second invocation.
The review flow now attaches to the default local OpenCode server at `127.0.0.1:4096` if one is already running. If nothing is listening, it spawns a new instance as before. No extra lifecycle management, no extra ports — just reuse what's already there.
The PR also fixes two local-testing issues uncovered along the way: the source-loaded OpenCode plugin was resolving bundled HTML from the wrong directory, and the sandbox + postinstall paths were not using the documented `plugins/` and `commands/` directories.
- #567 by @oorestisime, closing #513 reported by @alexey-igrychev
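The attach-or-spawn decision reduces to a TCP probe. A Python sketch under the stated behavior (`ensure_opencode_server` is an invented name; the real flow lives in the OpenCode plugin):

```python
import socket
import subprocess

def ensure_opencode_server(host="127.0.0.1", port=4096):
    """Reuse an already-listening server; spawn `opencode serve` only if none."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return None  # something is listening: attach to the existing server
    except OSError:
        # Nothing on the port: start a fresh instance, as before.
        return subprocess.Popen(["opencode", "serve"])
```

Probing before spawning is what removes the guaranteed port conflict on the second invocation: the second review command sees the first server and attaches instead of binding the same port again.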
Additional Changes
- `~` expansion in user-entered file paths — The shared path resolver now expands home-relative `~` in annotate entrypoints and the Bun and Pi reference handlers, so file, folder, vault, and linked-document paths all handle `~` consistently. #572 by @AlexanderKolberg
- Thumbs-up quick label on the annotation toolbar — A one-click "Looks good" 👍 button sits before the existing quick labels menu, with green hover styling to match the semantic. #588 by @backnotprop
- Save as PDF discoverability — The action menu label is now "Print / Save as PDF" with a subtitle explaining how to choose Save as PDF in the system print dialog. No new print pipeline — just making the existing capability findable. #587 by @backnotprop
- Disable auto-invocation of plannotator slash commands in Claude Code — The four plannotator Claude Code command definitions (annotate, archive, last, review) now carry `disable-model-invocation: true`, preventing the model from running them automatically. #586 by @backnotprop
- Stop forcing an agent cycle in OpenCode — `agent_cycle` assumed only a build and plan agent and broke when users had other agents defined. Removed. #564 by @andreineculau
- RSS feed link in the marketing layout — The blog's RSS feed is now advertised in the shared `<head>` so feed readers and browsers can discover it automatically. #573 by @dotemacs
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`
Windows PowerShell: `irm https://plannotator.ai/install.ps1 | iex`
Pin a specific version: `curl -fsSL https://plannotator.ai/install.sh | bash -s -- --version v0.18.0`
Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".
Copilot CLI: `/plugin marketplace add backnotprop/plannotator` then `/plugin install plannotator-copilot@plannotator`
Gemini CLI: The install script auto-detects `~/.gemini` and configures hooks, policy, and slash commands.
OpenCode: Clear cache and restart: `rm -rf ~/.cache/opencode/packages/@plannotator ~/.bun/install/cache/@plannotator`, then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`
Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
VS Code Extension: Install from the VS Code Marketplace.
What's Changed
- feat(annotate): content negotiation for Markdown for Agents by @backnotprop in #557
- feat(viewer): render color swatches next to hex color codes by @Pran-Ker in #562
- don't change current agent by @andreineculau in #564
- feat(plan-diff): word-level inline diff rendering by @backnotprop in #565
- fix(opencode): reuse local server for review flows by @oorestisime in #567
- Add ~ support for user-entered file paths by @AlexanderKolberg in #572
- Fix for RSS feed by @dotemacs in #573
- Add annotate wide mode by @dgrissen2 in #578
- Add configurable paste service URL for self-hosting by @backnotprop in #582
- Prevent model from auto-invoking plannotator slash commands by @backnotprop in #586
- Surface Save as PDF via existing print flow by @backnotprop in #587
- Add thumbs up quick label button to annotation toolbar by @backnotprop in #588
- feat: detect OpenCode origin + activate PLANNOTATOR_ORIGIN override by @HeikoAtGitHub in #590
New Contributors
- @Pran-Ker made their first contribution in #562
- @andreineculau made their first contribution in #564
- @oorestisime made their first contribution in #567
- @AlexanderKolberg made their first contribution in #572
- @dotemacs made their first contribution in #573
- @HeikoAtGitHub made their first contribution in #590
Contributors
@Pran-Ker shipped inline hex color swatches in the plan viewer, with a carefully constrained regex, a negative lookahead to avoid URL anchors and CSS selectors, and 19 tests including explicit injection guards.
@andreineculau removed the `agent_cycle` call that assumed everyone had only build and plan agents in OpenCode, fixing a bug introduced by #40.
@oorestisime fixed the OpenCode review port collision by reusing the existing local `opencode serve` at `127.0.0.1:4096` instead of spawning a second one, and cleaned up two local-testing path issues along the way.
@AlexanderKolberg added `~` home-directory expansion to the shared path resolver so annotate entrypoints and the Bun and Pi reference handlers all treat `~/file.md` the same way.
@dotemacs added the RSS autodiscovery `<link>` to the marketing site layout so feed readers and browsers can pick up the blog feed automatically.
@dgrissen2 returned with annotate wide mode — a toggle that collapses both side panels and removes the reader width cap, gated to annotate sessions only, with layout restoration on exit. This follows their prior work on linked-doc navigation, image lightboxing, smart file resolution, and the purple P favicon.
@HeikoAtGitHub wired OpenCode into the origin detection chain (via `OPENCODE=1`) and activated the `PLANNOTATOR_ORIGIN` override that had been documented but never read, with seven headless detection tests covering the new priority order.
Community issue reporters:
- @pbowyer filed #560 with a detailed request for word-level diffs and diff display options — that issue directly shaped the design of the new Rendered/Classic/Raw mode switcher.
- @ndesjardins-comact reported #580, the hardcoded share URL blocking custom-domain usage, which drove the `PLANNOTATOR_PASTE_URL` work.
- @alexey-igrychev reported both #513 (the `opencode serve` port collision) and #514 (empty response bubbles in the OpenCode AI tab).
Full Changelog: v0.17.10...v0.18.0 -
🔗 matklad 256 Lines or Less: Test Case Minimization rss
256 Lines or Less: Test Case Minimization
Apr 20, 2026
Property Based Testing and fuzzing are a deep and science-intensive topic. There are enough advanced techniques there for a couple of PhDs, a PBT daemon, and a client-server architecture. But I have this weird parlor-trick PBT library, implementable in a couple of hundred lines of code in one sitting.
This week I’ve been thinking about a cool variation of a consensus algorithm. I implemented it on the weekend. And it took just a couple of hours to write a PBT library itself first, and then a test, that showed a deep algorithmic flaw in my thinking (after a dozen trivial flaws in my coding). So, I don’t get to write more about consensus yet, but I at least can write about the library. It is very simple, simplistic even. To use an old Soviet joke about Babel and Bebel, it’s Gogol rather than Hegel. But for just 256 lines, it’s one of the highest power-to-weight ratio tools in my toolbox.
Read this post if:
- You want to stretch your generative testing muscles.
- You are a do-it-yourself type, and wouldn’t want to pull a ginormous PBT library off the shelf.
- You would pull a library, but want to have a more informed opinion about available options, about essential and accidental complexity.
- You want some self-contained real-world Zig examples :P
Zig works well here because it, too, is exceptional in its power-to-weight.
FRNG
The implementation is a single file, `FRNG.zig`, because the core abstraction here is a Finite Random Number Generator — a PRNG where all numbers are pre-generated, and can run out. We start with standard boilerplate:

```zig
const std = @import("std");
const assert = std.debug.assert;

entropy: []const u8,

pub const Error = error{OutOfEntropy};

const FRNG = @This();

pub fn init(entropy: []const u8) FRNG {
    return .{ .entropy = entropy };
}
```

In Zig, files are structs: you obviously need structs, and the language becomes simpler if structs are re-used for what files are. In the above, `const FRNG = @This()` assigns a conventional name to the file struct, and `entropy: []const u8` declares instance fields (only one here). `const Error` and `fn init` are “static” (container level) declarations.

The only field we have is just a slice of raw bytes, our pre-generated random numbers. And the only error condition we can raise is `OutOfEntropy`.

The simplest thing we can generate is a slice of bytes. Typically, the API for this takes a mutable slice as an out parameter:

```zig
pub fn fill(prng: *PRNG, bytes: []u8) void { ... }
```

But, due to the pre-generated nature of FRNG, we can return the slice directly, provided that we have enough entropy. This is going to be our (sole) basis function; everything else is going to be a convenience helper on top:
```zig
pub fn bytes(frng: *FRNG, size: usize) Error![]const u8 {
    if (frng.entropy.len < size) return error.OutOfEntropy;
    const result = frng.entropy[0..size];
    frng.entropy = frng.entropy[size..];
    return result;
}
```

The next simplest thing is an array (a slice with a fixed size):

```zig
pub fn array(frng: *FRNG, comptime size: usize) Error![size]u8 {
    return (try frng.bytes(size))[0..size].*;
}
```

Notice how Zig goes from runtime-known slice length to comptime-known array type. Because `size` is a `comptime` constant, slicing `[]const u8` with `[0..size]` returns a pointer to an array, `*const [size]u8`.

We can re-interpret a 4-byte array into a `u32`. But, because this is Zig, we can trivially generalize the function to work for any integer type, by passing in an `Int` comptime parameter of type `type`:

```zig
const builtin = @import("builtin");

pub fn int(frng: *FRNG, Int: type) Error!Int {
    comptime {
        assert(@typeInfo(Int).int.signedness == .unsigned);
        assert(builtin.cpu.arch.endian() == .little);
    }
    return @bitCast(try frng.array(@sizeOf(Int)));
}
```

This function is monomorphised for every `Int` type, so `@sizeOf(Int)` becomes a compile-time constant we can pass to `fn array`.

Production code would be endian-clean here, but, for simplicity, we encode our endianness assumption as a compile-time assertion. Note how Zig communicates information about endianness to the program. There isn’t any kind of side-channel or extra input to compilation, like `--cfg` flags. Instead, the compiler materializes all information about the target CPU as Zig code. There’s a `builtin.zig` file somewhere in the compiler caches directory that contains

```zig
pub const cpu: std.Target.Cpu = .{
    .arch = .aarch64,
    .model = &std.Target.aarch64.cpu.apple_m3,
    // ...
};
```

This file can be accessed via `@import("builtin")` and all the constants inspected at compile time.

We can make an integer, and a boolean is even easier:

```zig
pub fn boolean(frng: *FRNG) Error!bool {
    return (try frng.int(u8)) & 1 == 1;
}
```

Strictly speaking, we only need one bit, not one byte, but tracking individual bits is too much of a hassle.
From an arbitrary int, we can generate an int in a range. As per Random Numbers Included, we use a closed range, which makes the API infallible and is usually more convenient at the call-site:

```zig
pub fn int_inclusive(frng: *FRNG, Int: type, max: Int) Error!Int
```

As a bit of PRNG trivia, while this could be implemented as `frng.int(Int) % (max + 1)`, the result will be biased (not uniform). Consider the case where `Int = u8`, and a call like `frng.int_inclusive(u8, 64 * 3)`. The numbers in `0..64` are going to be twice as likely as the numbers in `64..(64*3)`, because the last quarter of the 256 range will be aliased with the first one.

Generating an unbiased number is tricky and might require drawing an arbitrary number of bytes from entropy. Refer to https://www.pcg-random.org/posts/bounded-rands.html for details. I didn’t, and copy-pasted code from the Zig standard library. Use at your own risk!
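The bias is easy to demonstrate empirically. A quick Python check (my illustration, not from the post), using 192 as the modulus so the numbers stay round:

```python
from collections import Counter

# The naive bounded int: map every possible byte value through `% 192`.
counts = Counter(x % 192 for x in range(256))

# Bytes 192..255 (the last quarter of the 256 range) wrap around to 0..63,
# so low results are hit twice while 64..191 are hit only once: a 2x bias.
assert counts[0] == 2 and counts[63] == 2
assert counts[64] == 1 and counts[191] == 1
```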
```zig
pub fn int_inclusive(frng: *FRNG, Int: type, max: Int) Error!Int {
    comptime assert(@typeInfo(Int).int.signedness == .unsigned);
    if (max == std.math.maxInt(Int)) return try frng.int(Int);

    const bits = @typeInfo(Int).int.bits;
    const less_than = max + 1;

    var x = try frng.int(Int);
    var m = std.math.mulWide(Int, x, less_than);
    var l: Int = @truncate(m);
    if (l < less_than) {
        var t = -%less_than;
        if (t >= less_than) {
            t -= less_than;
            if (t >= less_than) t %= less_than;
        }
        while (l < t) {
            x = try frng.int(Int);
            m = std.math.mulWide(Int, x, less_than);
            l = @truncate(m);
        }
    }
    return @intCast(m >> bits);
}
```

Now we can generate an int bounded from above and below:

```zig
pub fn range_inclusive(
    frng: *FRNG,
    Int: type,
    min: Int,
    max: Int,
) Error!Int {
    comptime assert(@typeInfo(Int).int.signedness == .unsigned);
    assert(min <= max);
    return min + try frng.int_inclusive(Int, max - min);
}
```

Another common operation is picking a random element from a slice. If you want to return a pointer to an element, you’ll need `const` and `mut` versions of the function. A simpler and more general solution is to return an index:

```zig
pub fn index(frng: *FRNG, slice: anytype) Error!usize {
    assert(slice.len > 0);
    return try frng.range_inclusive(usize, 0, slice.len - 1);
}
```

At the call site, `xs[try frng.index(xs)]` doesn’t look too bad, is appropriately `const`-polymorphic, and is also usable for multiple parallel arrays.

Simulation

So far, we’ve spent about 40% of our line budget implementing a worse random number generator that can fail with `OutOfEntropy` at any point in time. What is it good for?

We use it to feed our system under test with random inputs, see how it reacts, and check that it does not crash. If we code our system to crash if anything unexpected happens, and our random inputs cover the space of all possible inputs, we get a measure of confidence that bugs will be detected in testing.

For my consensus simulation, I have a `World` struct that holds an `FRNG` and a set of replicas:

```zig
const World = struct {
    frng: *FRNG,
    replicas: []Replica,
    // ...
};
```

`World` has methods like:

```zig
fn simulate_request(world: *World) !void {
    const replica = try world.frng.index(world.replicas);
    const payload = try world.frng.int(u64);
    world.send_payload(replica, payload);
}
```

I then select which method to call at random:
```zig
fn step(world: *World) !void {
    const action = try world.frng.weighted(.{
        .request = 10,
        .message = 20,
        .crash = 1,
    });
    switch (action) {
        .request => try world.simulate_request(),
        .message => { ... },
        .crash => { ... },
    }
}
```

Here, `fn weighted` is another FRNG helper that selects an action at random, proportional to its weight. This helper needs quite a bit more reflection machinery than we’ve seen so far:

```zig
pub fn weighted(
    frng: *FRNG,
    weights: anytype,
) Error!std.meta.FieldEnum(@TypeOf(weights)) {
    const fields = comptime std.meta.fieldNames(@TypeOf(weights));

    var total: u32 = 0;
    inline for (fields) |field| total += @field(weights, field);
    assert(total > 0);

    var pick = try frng.int_inclusive(u64, total - 1);
    inline for (fields) |field| {
        const weight = @field(weights, field);
        if (pick < weight) {
            return @field(
                std.meta.FieldEnum(@TypeOf(weights)),
                field,
            );
        }
        pick -= weight;
    }
    unreachable;
}
```

`weights: anytype` is compile-time duck-typing. It means that our `weighted` function is callable with any type, and each specific type creates a new monomorphised instance of the function. While we don’t explicitly name the type of `weights`, we can get it as `@TypeOf(weights)`.

`FieldEnum` is a type-level function that takes a struct type:

```zig
const S = struct { foo: bool, bar: u32, baz: []const u8 };
```

and turns it into an enum type, with a variant per field, exactly what we want for the return type:

```zig
const E = enum { foo, bar, baz };
```

Tip: if you want to quickly learn Zig’s reflection capabilities, study the implementation of `std.meta` and `std.enums` in Zig’s standard library.

The `@field` built-in function accesses a field given a `comptime` field name. It’s exactly like Python’s `getattr`/`setattr`, with the extra restriction that it must be evaluated at compile time.

To add one more twist here, I always find it hard to figure out which weights are reasonable, and like to generate the weights themselves at random at the start of the test:

```zig
pub fn swarm_weights(frng: *FRNG, Weights: type) Error!Weights {
    var result: Weights = undefined;
    inline for (comptime std.meta.fieldNames(Weights)) |field| {
        @field(result, field) = try frng.range_inclusive(u32, 1, 100);
    }
    return result;
}
```

(If you feel confused here, check out Swarm Testing Data Structures)
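For intuition, the same weighted pick can be sketched in Python without any reflection — a dict plus a running subtraction (my sketch of the idea, not the post’s code; `draw(max)` stands in for `int_inclusive`):

```python
def weighted(draw, weights):
    """Pick a key from `weights`, proportionally to its value.

    `draw(max)` is assumed to return a uniform int in 0..max inclusive.
    """
    total = sum(weights.values())
    assert total > 0
    pick = draw(total - 1)
    for key, weight in weights.items():
        if pick < weight:
            return key
        pick -= weight
    raise AssertionError("unreachable")

actions = {"request": 10, "message": 20, "crash": 1}
# Forcing the draw shows how the 0..30 range is carved up by the weights.
assert weighted(lambda _max: 0, actions) == "request"   # 0..9   -> request
assert weighted(lambda _max: 10, actions) == "message"  # 10..29 -> message
assert weighted(lambda _max: 30, actions) == "crash"    # 30     -> crash
```

The Zig version does the same walk; the “dict” is just a comptime struct, and the return type is derived from it via `FieldEnum`.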
Stepping And Running
Now we have enough machinery to describe the shape of the test overall:
```zig
fn run_test(gpa: Allocator, frng: *FRNG) !void {
    var world = World.init(gpa, &frng) catch |err| switch (err) {
        error.OutOfEntropy => return,
        else => return err,
    };
    defer world.deinit(gpa);
    while (true) {
        world.step() catch |err| switch (err) {
            error.OutOfEntropy => break,
        };
    }
}

const World = struct {
    frng: *FRNG,
    weights: ActionWeights,
    // ...

    const ActionWeights = struct {
        request: u32,
        message: u32,
        crash: u32,
        // ...
    };

    pub fn init(gpa: Allocator, frng: *FRNG) !void {
        const weights = try frng.swarm_weights(ActionWeights);
        // ...
    }

    fn step(world: *World) error{OutOfEntropy}!void {
        const action = try world.frng.weighted(world.weights);
        switch (action) {
            .request => { ... },
            // ...
        }
    }
};
```

A test needs an `FRNG` (which ultimately determines the outcome) and a General Purpose Allocator for the `World`. We start by creating a simulated `World` with random action weights. If `FRNG` entropy is very low, we can run out of entropy even at this stage. We assume that the code is innocent until proven guilty — if we don’t have enough entropy to find a bug, this particular test returns success. Don’t worry, we’ll make sure that we have enough entropy elsewhere.

We use `catch |err| switch (err)` to peel off the `OutOfEntropy` error. I find that, whenever I handle errors in Zig, very often I want to discharge just a single error from the error set. I wish I could use parentheses with a `catch`:

```zig
// NOT ACTUALLY ZIG :(
var world = try World.init(gpa, &frng) catch (error.OutOfEntropy) return;
```

Anyway, having created the `World`, we step through it while we still have entropy left. If any step detects an internal inconsistency, the entire `World` crashes with an assertion failure. If we got to the end of the `while (true)` loop, we know that at least that particular slice of entropy didn’t uncover anything suspicious.

Notice what isn’t there. We aren’t generating a complete list of actions up-front. Rather, we make random decisions as we go, and can freely use the current state of the `World` to construct a menu of possible choices (e.g., when sending a message, we can consider only replicas that aren’t currently crashed).

Binary Search the Answer
And here we can finally see the reason why we bothered writing a custom Finite PRNG, rather than using an off-the-shelf one. The amount of entropy in the FRNG defines the complexity of the test. The fewer random bytes we start with, the faster we exit the step loop. And this gives us the ability to minimize test cases essentially for free.
Suppose you know that a particular entropy slice makes the test fail (cluster enters split brain at the millionth step). Let’s say that the slice was 16KiB. The obvious next step is to see if just 8KiB would be enough to crash it. And, if 8KiB isn’t, then perhaps 12KiB?
You can *binary search* the minimal amount of entropy that’s enough for the test to fail. And this works for any test, it doesn’t have to be a distributed system. If you can write the code to generate your inputs randomly, you can measure complexity of each particular input by measuring how many random bytes were drawn in its construction.
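The whole scheme fits in a few lines of Python if you want to play with it (a toy of mine, not the post’s code): a finite RNG that runs out, a “system” whose assertion trips when two consecutive draws are equal, and a minimizer that simply retries with fresh, ever-shorter entropy:

```python
import random

class OutOfEntropy(Exception):
    pass

class Frng:
    """A finite RNG: draws from a pre-generated byte string until it runs out."""
    def __init__(self, entropy: bytes):
        self.entropy = entropy
        self.drawn = 0  # bytes consumed so far: our complexity measure

    def byte(self) -> int:
        if self.drawn >= len(self.entropy):
            raise OutOfEntropy
        b = self.entropy[self.drawn]
        self.drawn += 1
        return b

def run_test(entropy: bytes) -> bool:
    """True = pass. Two equal consecutive draws stand in for a tripped assertion."""
    frng = Frng(entropy)
    try:
        prev = frng.byte()
        while True:
            cur = frng.byte()
            if cur == prev:
                return False  # "assertion failure"
            prev = cur
    except OutOfEntropy:
        return True  # ran out of entropy without finding a bug

def minimize(rng: random.Random, size: int, attempts: int = 200):
    """Shrink by retrying fresh entropy at ever smaller sizes."""
    best = None
    while size >= 2:
        for _ in range(attempts):
            entropy = rng.randbytes(size)
            if not run_test(entropy):
                best = entropy  # found a failure at this size
                break
        else:
            break  # no failure found at this size; keep the previous best
        size //= 2
    return best
```

Each halving either stumbles into a smaller failing slice or gives up, so the result is roughly the smallest size at which failures are still easy to find at random.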
And now the hilarious part — of course it seems that the way to minimize entropy is to start with a particular failing slice and apply genetic-algorithm mutations to it. But a much simpler approach seems to work in practice — just generate a fresh, shorter entropy slice. If you found some failure at random, then you should be able to randomly stumble into a smaller failing example, if one exists — there are far fewer small examples, so finding a failing one becomes easier as the `size` goes down!

The Searcher
The problem with binary searching for failing entropy is that a tripped assertion crashes the program. There’s no unwinding in Zig. For this reason, we’ll move the search code to a different process. So a single test will be a binary with a `main` function that takes entropy on `stdin`.

Zig’s new juicy main makes writing this easier than in any previous version of Zig :D
```zig
pub fn main(init: std.process.Init) !void {
    const gpa = init.gpa;
    const io = init.io;

    var stdin_reader = std.Io.File.stdin().reader(io, &.{});
    const entropy = try stdin_reader.interface
        .allocRemaining(gpa, .unlimited);
    defer gpa.free(entropy);

    var frng = FRNG.init(entropy);
    var world = World.init(gpa, &frng, .{}) catch |err| switch (err) {
        error.OutOfEntropy => return,
        else => return err,
    };
    defer world.deinit(gpa);
    world.run();
}
```

Main gets `Init` as an argument, which provides access to things like command line arguments, the default allocator, and a default `Io` implementation. These days, Zig eschews global ambient IO capabilities, and requires threading an `Io` instance through whenever we need to make a syscall. Here, we need `Io` to read stdin.

Now we will implement a harness to call this main. This will be `FRNG.Driver`:

```zig
pub const Driver = struct {
    io: std.Io,
    sut: []const u8,
    buffer: []u8,

    const log = std.log;
};
```

It will be spawning external processes, so it’ll need an `Io`. We also need a path to an executable with a test main function, a System Under Test. And we’ll need a buffer to hold the entropy. This driver will be communicating successes and failures to the user, so we also prepare a `log` for textual output.

How do we get entropy to feed into `sut`? Because we are only interested in the entropy size, we won’t be storing the actual entropy bytes, and instead will generate them from a `u64` seed. In other words, just two numbers, entropy size and seed, are needed to reproduce a single run of the test:

```zig
fn run_once(driver: Driver, options: struct {
    size: u32,
    seed: u64,
    quiet: bool,
}) !enum { pass, fail } {
    assert(options.size <= driver.buffer.len);
    const entropy = driver.buffer[0..options.size];

    var rng = std.Random.DefaultPrng.init(options.seed);
    rng.random().bytes(entropy);

    var child = try std.process.spawn(driver.io, .{
        .argv = &.{driver.sut},
        .stdin = .pipe,
        .stderr = if (options.quiet) .ignore else .inherit,
    });

    try child.stdin.?.writeStreamingAll(driver.io, entropy);
    child.stdin.?.close(driver.io);
    child.stdin = null;

    const term = try child.wait(driver.io);
    return if (success(term)) .pass else .fail;
}

fn success(term: std.process.Child.Term) bool {
    return term == .exited and term.exited == 0;
}
```

We use the default deterministic PRNG to expand our short seed into an entropy slice of the required size. Then we spawn the `sut` process, feeding the resulting entropy via stdin. Closing the child’s stdin signals the end of entropy. We then return either `.pass` or `.fail` depending on the child’s exit code. So, both explicit errors and crashes will be recognized as failures.

Next, we implement the logic for checking if a particular entropy size is sufficient to find a failure. Of course, we won’t be able to say that for sure in a finite amount of time, so we’ll settle for some user-specified number of retries:
```zig
fn run_multiple(driver: Driver, options: struct {
    size: u32,
    attempts: u32,
}) !union(enum) { pass, fail: u64 } {
    // ...
}
```

The user passes us the number of `attempts` to make, and we return `.pass` if they were all successful, or a specific failing seed if we found one:

```zig
assert(options.size <= driver.buffer.len);
for (0..options.attempts) |_| {
    var seed: u64 = undefined;
    driver.io.random(@ptrCast(&seed));
    const outcome = try driver.run_once(.{
        .seed = seed,
        .size = options.size,
        .quiet = true,
    });
    switch (outcome) {
        .fail => return .{ .fail = seed },
        .pass => {},
    }
}
return .pass;
```

To generate a real seed we need “true” cryptographic non-deterministic randomness, which is provided by `io.random`.

Finally, the search for the size:
```zig
fn search(driver: Driver, options: struct {
    attempts: u32 = 100,
}) !union(enum) {
    pass,
    fail: struct { size: u32, seed: u64 },
} {
    // ...
}
```

Here, we are going to find the smallest entropy size that crashes `sut`. If we succeed, we return the seed and the size. The upper bound for the size is the space available in the pre-allocated entropy buffer.

The search loop is essentially a binary search, with a twist — rather than using dichotomy on the `size` directly, we will be doubling a `step` we use to change the size between iterations. That is, we start with a small size and step, and, on every iteration, double the step and add it to the size, until we hit a failure (or run out of buffer for the entropy).
Once we’ve found a failure, we continue the search in the other direction — halving the step and subtracting it from the `size`, keeping the smaller `size` if it still fails.

On each step, we log the current size and outcome, and report the smallest failing size at the end.
```zig
var found_size: ?u32 = null;
var found_seed: ?u64 = null;

var pass: bool = true;
var size: u32 = 16;
var step: u32 = 16;
for (0..1024) |_| {
    if (step == 0) break;
    const size_next = if (pass) size + step else size -| step;
    if (size > driver.buffer.len) break;

    const outcome = try driver.run_multiple(.{
        .size = size_next,
        .attempts = options.attempts,
    });
    switch (outcome) {
        .pass => log.info("pass: size={}", .{size_next}),
        .fail => |seed| {
            found_size = size_next;
            found_seed = seed;
            log.err("fail: size={} seed={}", .{ size_next, seed });
        },
    }

    const pass_next = (outcome == .pass);
    if (pass and pass_next) {
        step *= 2;
    } else if (!pass and !pass_next) {
        // Keep the step.
    } else {
        step /= 2;
    }
    if (pass or !pass_next) {
        size = size_next;
        pass = pass_next;
    }
} else @panic("safety counter");

if (found_size == null) return .pass;
return .{ .fail = .{
    .size = found_size.?,
    .seed = found_seed.?,
} };
```

Finally, we wrap Driver’s functionality into a main that works in two modes — it either reproduces a given failure from seed and size, or searches for a minimal failure:
```zig
pub fn main(
    gpa: std.mem.Allocator,
    io: std.Io,
    sut: []const u8,
    operation: union(enum) {
        replay: struct { size: u32, seed: u64 },
        search: struct {
            attempts: u32 = 100,
            size_max: u32 = 4 * 1024 * 1024,
        },
    },
) !void {
    const size_max = switch (operation) {
        .replay => |options| options.size,
        .search => |options| options.size_max,
    };
    const buffer = try gpa.alloc(u8, size_max);
    defer gpa.free(buffer);

    var driver: Driver = .{
        .io = io,
        .buffer = buffer,
        .sut = sut,
    };

    switch (operation) {
        .replay => |options| {
            const outcome = try driver.run_once(.{
                .size = options.size,
                .seed = options.seed,
                .quiet = false,
            });
            log.info("{t}", .{outcome});
        },
        .search => |options| {
            const outcome = try driver.search(.{
                .attempts = options.attempts,
            });
            switch (outcome) {
                .pass => log.info("ok", .{}),
                .fail => |fail| {
                    log.err("minimized size={} seed={}", .{
                        fail.size,
                        fail.seed,
                    });
                },
            }
        },
    }
}
```

Running the search routine in a terminal prints a `pass: size=…` or `fail: size=… seed=…` line for each probe, ending with the minimized size and seed.
That final seed & size can then be used for `.replay`, giving you a minimal reproducible failure for debugging!

This … of course doesn’t look too exciting without visualizing a specific bug we can find this way, but the problem there is that interesting examples of systems to test in this way usually take more than 256 lines to implement. So I’ll leave it to your imagination, but you get the idea: if you can make a system fail under a “random” input, you can also systematically search the space of all inputs for the smallest counter-example, without adding knowledge about the system to the searcher. This article also provides a concrete (but somewhat verbose) example.
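The replay half of the design rests on one property: a deterministic PRNG expands an identical (seed, size) pair into an identical entropy slice every time. In Python terms (my illustration, not the post’s Zig):

```python
import random

def entropy_from_seed(seed: int, size: int) -> bytes:
    """Deterministically expand a small (seed, size) pair into entropy bytes."""
    return random.Random(seed).randbytes(size)

# Two numbers fully describe a test run: the same pair always
# reproduces the same entropy slice, so any failure can be replayed.
assert entropy_from_seed(7, 32) == entropy_from_seed(7, 32)
assert len(entropy_from_seed(7, 32)) == 32
```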
Here’s the full code:
https://gist.github.com/matklad/343d13547c8bfe9af310e2ca2fbfe109
-
🔗 Kevin Lynagh On sabotaging projects by overthinking, scope creep, and structural diffing rss
Hi friends,
I'll be attending Babashka Conf on May 8 and Dutch Clojure Days on May 9. If you're attending either (or just visiting Amsterdam), drop me a line!
On sabotaging projects by overthinking
When I have an idea for a project, it tends to go in one of these two directions:
-
I just do it. Maybe I make a few minor revisions, but often it turns out exactly how I'd imagined and I'm happy.
-
I think, "I should look for prior art". There's a lot of prior art, dealing with a much broader scope than I'd originally imagined. I start to wonder if I should incorporate that scope. Or perhaps try to build my thing on top of the existing sorta-nearby-solutions. Or maybe I should just use the popular thing. Although I could do a better job than that thing, if I put a bunch of time into it. But actually, I don't want to maintain a big popular project, nor do I want to put that much time into this project. Uh oh, now I've spent a bunch of time, having neither addressed the original issue nor experienced the joy of creating something.
I prefer the first outcome, and I think the pivotal factor is how well I've internalized my own success criteria.
For example, last weekend I hosted my friend Marcin and we decided it'd be fun to do some woodworking, so we threw together this shelf and 3d-printed hangers for my kitchen:

Absolute banger of a project:
- brainstormed the design over coffee
- did a few 3d-print iterations for the Ikea bin hangers (OnShape CAD, if you want to print your own)
- used material leftover from my workbench
- rounded the corner by eye with a palm sander
- sealed the raw plywood edge with some leftover paint from a friend
- done in a weekend
The main success criteria was to jam on woodworking with a friend, and that helped me not overthink the object-level success criteria: Just make a shelf for my exact kitchen!
In contrast, this past Friday I noticed difftastic did a poor job, so I decided to shop around for structural/semantic diff tools and related workflows (a topic I've never studied, that I'm increasingly interested in as I'm reviewing more and more LLM-generated code).
I spent 4 hours over the weekend researching existing tools (see my notes below), going through dark periods of both "semantic tree diffing is a PhD-level complex problem" and "why do all of these have MCP servers? I don't want an MCP server", before I came to my senses and remembered my original success criteria: I just want a nicer diffing workflow for myself in Emacs, I should just build it myself -- should take about 4 hours.
I'm cautiously optimistic that, having had this realization and committing myself to a minimal scope, I'll be able to knock out a prototype before running out of motivation.
However, other long-running interests of mine:
- interfaces for prototyping hardware (discussed September 2023)
- a programming language that fuses what I like about Clojure and Rust (November 2023)
- a programming language for CAD (constraints, bidirectional editing, other dubious ideas)
seem to be deep in the well of outcome #2.
That is, I've spent hundreds of hours on background research and little prototypes, but haven't yet synthesized anything that addresses the original motivating issue.
It's not quite that I regret that time -- I do love learning by reading -- but I have a nagging sense of unease that my inner critic (fear of failure?) is silencing my generative tendencies, keeping me from the much more enjoyable (and productive!) learning by doing.
I think in these cases the success criteria has been much fuzzier: Am I trying to replace my own usage of Rust/Clojure? Only for some subset of problems? Or is it that I actually just need a playground to learn about language design/implementation, and it's fine if I don't end up using it?
Ditto for CAD: Am I trying to replace my commercial CAD tool in favor of my own? Only for some subset of simple or particularly parametric parts? Do I care if it's useful for others? Does my tool need to be legibly different from existing open-source tools?
It's worth considering these questions, sure. But at the end of the day, I'd much rather have done a lot than have only considered a lot.
So I'm trying to embrace my inner clueless 20-year-old and just do things -- even if some turn out to be "obviously bad" in hindsight, I'll still be coming out ahead on net =D
Conservation of scope creep
Of course, there's only so much time to "just do things", and there's a balance to be had. I'm not sure how many times I'll re-learn YAGNI ("you ain't gonna need it") in my career, but I was reminded of it again after writing a bunch of code with an LLM agent, then eventually coming to my senses and throwing it all out.
I wanted a Finda-style filesystem-wide fuzzy path search for Emacs. Since I've built (by hand, typing the code myself!) this exact functionality before (walk filesystem to collect paths, index them by trigram, do fast fuzzy queries via bitmap intersections), I figured it'd only take a few hours to supervise an LLM to write all the code.
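That pipeline is small enough to sketch in Python (sets standing in for the bitmaps; my illustration, not Finda’s actual code):

```python
def trigrams(s: str) -> set[str]:
    """All 3-character windows of a lowercased string."""
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def build_index(paths: list[str]) -> dict[str, set[int]]:
    """Map each trigram to the set of path ids containing it."""
    index: dict[str, set[int]] = {}
    for i, path in enumerate(paths):
        for gram in trigrams(path):
            index.setdefault(gram, set()).add(i)
    return index

def query(index, paths, q: str) -> list[str]:
    """Candidate paths = intersection of the posting sets of q's trigrams."""
    grams = trigrams(q)
    if not grams:  # queries shorter than 3 chars match everything
        return list(paths)
    ids = set.intersection(*(index.get(g, set()) for g in grams))
    return [paths[i] for i in sorted(ids)]

paths = ["/home/kevin/notes.txt", "/home/kevin/src/finda/main.rs", "/var/log/syslog"]
index = build_index(paths)
assert query(index, paths, "finda") == ["/home/kevin/src/finda/main.rs"]
```

A production version would replace the sets with bitmaps and re-rank the candidates with a real fuzzy scorer, but the shape is the same.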
I started with a "plan mode" chat, and the LLM suggested a library, Nucleo, which turned up since I wrote Finda (10 years ago, eek!). I read through it, found it quite well-designed and documented, and decided to use it so I'd get its smart case and Unicode normalization functionality. (E.g., query `foo` matches `Foo` and `foo`, whereas query `Foo` won't match `foo`; similarly for `cafe` and `café`.)

Finding a great library wasn't the problem; the problem was that Nucleo also supported some extra functionality: anchors (`^foo` only matches at the beginning of a line).

This got me thinking about what that might mean in a corpus that consists entirely of file paths. Anchoring to the beginning of a line isn't useful (everything starts with `/`), so I decided to try to interpret the anchors with respect to the path segments. E.g., `^foo` would match `/root/foobar/` but not `/root/barfoo/`.

But to do this efficiently, the index needs to keep track of segment boundaries so that the query can be checked against each segment quickly.
But then we also need to handle a slash occurring in an anchored query (e.g., `^foo/bar`) since that wouldn't get matched when only looking at segments individually (`root`, `foo`, `bar`, and `baz` of a matching path `/root/foo/bar/baz/`).

Working through this took several hours: first throwing around design ideas with an LLM, having it write code to wrap Nucleo's types, then realizing its code was bloated and didn't spark joy, so finally writing my own (smaller) wrapper.
Then, after a break, I realized:
- I can't think of a situation where I'd ever wished Finda had anchor functionality
- In a corpus of paths, I can anchor by just adding `/` to the start or end of a query (this works for everything except anchoring to the end of a filename).
So I tossed all of the anchoring code.
I'm pretty sure I still came out ahead compared to if I'd tried to write everything myself sans LLM or discussion with others, but I'm not certain.
Perhaps there's some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
Structural diffing
Speaking of unnecessary diversions, let me tell you everything I've learned about structural diffing recently -- if you have thoughts/feelings/references in this space, I'd love to hear about 'em!
When we're talking about code, a "diff" usually means a summary of the line- by-line changes between two versions of a file. This might be rendered as a "unified" view, where changed lines are prefixed with
+or-to indicate whether they're additions or deletions. For example:
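Python's difflib produces exactly this kind of view; here's a small stand-in for the screenshot, using a grocery-list example:

```python
import difflib

old = ["bread", "coffee", "milk"]
new = ["bread", "milk", "apple"]

# unified_diff yields header lines, a hunk marker, and then lines
# prefixed with ' ', '-', or '+'. The output includes "-coffee" and "+apple".
diff = difflib.unified_diff(old, new, "list.txt", "list.txt", lineterm="")
print("\n".join(diff))
```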
We've removed
coffeeand addedapple.The same diff might also be rendered in a side-by-side view, which can be easier to read when there are more complex changes:

The problem with these line-by-line diffs is that they're not aware of higher-level structure like functions, types, etc. -- if some braces match up somehow between versions, they might not be shown at all, even if the braces "belong" to different functions.
There's a wonderful tool, difftastic, which tries to address this by calculating diffs using treesitter-provided concrete syntax trees. It's a huge improvement over line-based diffs, but unfortunately it doesn't always do a great job matching entities between versions.
Here's the diff that motivated this entire foray:

Note that it doesn't match up
struct PendingClick, it shows it deleted on the left and added on the right.I haven't dug into why difftastic fails to match here, but I do feel like it's wrong -- even if the overall diff would be longer, I'd still rather see
PendingClickRequestandPendingClickmatched up between both sides.Here's a summary of tools / references in the space:
-
The most "baked" and thoughtful semantic diff tool I found is, perhaps unsurprisingly, semanticdiff.com, a small German company with a free VSCode plugin and web app that shows diffs for github PRs. Unfortunately they don't have any code libraries I can use as a foundation for the workflow I want.
- this semanticdiff vs. difftastic blog post covers a lot of great details (including that difftastic doesn't even show semantically meaningful indentation changes in python !!!)
- one of the authors has great HN comments with hard-won background knowledge. E.g., they moved away from treesitter because it's unreliable for semantics:
Context-sensitive keywords in particular were a constant source of annoyance. The grammar looks correct, but it will fail to parse because of the way the lexer works. You don't want your tool to abort just because someone named their parameter "async".
-
- built on treesitter, has MCP server. README includes list of similar projects.
- lots of github stars, but doesn't seem particularly well-documented; I couldn't find an explanation of how it works, but the difftastic wiki says it "runs longest-common-subsequence on the leaves of the tree"
-
research / academic origin in 2014
- requires Java, so no-go for my use case of a quick tool I can use via Emacs
-
mergiraf: treesitter-based merge-driver written in rust
-
very nice architecture overview; tool uses Gumtree algorithm
- docs and adorable illustrations indicate this project was clearly written by a thoughtful human
- semanticdiff.com author in HN comments: > GumTree is good at returning a result quickly, but there are quite a few cases where it always returned bad matches for us, no matter how many follow-up papers with improvements we tried to implement. In the end we switched over to a dijkstra based approach that tries to minimize the cost of the mapping
-
weave: also a treesitter-based merge-driver written in Rust
-
feels a bit "HN-optimized" (flashy landing pages, lots of github stars, MCP server, etc.)
- I looked into their entity extraction crate, sem
- core diffing code is OK but pretty wordy
- greedy entity matching algorithm
- data model can't detect intra-file moves, even though those might be significant
- includes a lot of heuristic "impact" analysis, which feels like overreaching-scope to me since it'd require much tighter language integration before I'd trust it
- ran into buggy output when running `sem diff --verbose HEAD~4`; it showed lines as having changed that…didn't change at all.
- Too much 80%-done, hypothetically useful functionality for me to use as a foundation, but props for sure to the undergrad/student(?) who's built all this in just three months.
-
diffast: tree edit-distance of ASTs based on an algorithm from a 2008 academic paper.
-
supports "Python, Java, Verilog, Fortran, and C/C++ via dedicated parsers"
- has a nice gallery of example AST differences
- can export info in tuples for datalog
-
autochrome: Clojure-specific diffs based on dynamic programming
-
excellent visual explanation and example walkthrough
- Tristan Hume has a great article on Designing a Tree Diff Algorithm Using Dynamic Programming and A*
My primary use case is reviewing LLM output turn-by-turn -- I'm very much in- the-loop, and I'm not letting my agent (or dozens of them, lol) run wild generating 10k+ lines of code at a time.
Rather, I give an agent a scoped task, then come back in a few minutes and want to see an overview of what it did and then either revise/tweak it manually in Emacs or throw the whole thing out and try again (or just write it myself).
The workflow I want, then, is to
- see a high-level overview of the diff: what entities (types/functions/methods) were added/removed/changed?
- quickly see textual diffs on an entity-by-entity basis ("expanding" parts of the above summary)
- quickly edit any changes, without having to navigate elsewhere (i.e., do it inline, rather than having to switch from "diff" to "file")
Basically, I want something like Magit's workflow for reviewing and staging changes, but on an entity level rather than file/line level.
In light of the "minimal scope, just get your project done" lesson I've just re-learned for the nth time, my plan is to:
- throw together my own treesitter-based entity extraction framework (just Rust for now)
- do some simple greedy matching for now
- render the diff to the command line
Once that seems reasonable (i.e., it does a better job than difftastic did on that specific commit), I'll:
- wire into a more interactive Magit-like Emacs workflow (maybe I can reuse Magit itself!?!)
- add support for new languages, as I need them
- potentially explore more sophisticated score-based global matching rather than simple greedy matching
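(The greedy-matching step is barely worth dignifying with code, but here's the shape of it -- the entity representation is a stand-in for whatever the treesitter pass would actually extract, not anything I've built yet:)

```python
def entity_diff(old, new):
    """Greedy entity-level diff: match entities by (kind, name) key, then
    bucket each as added, removed, changed, or unchanged.
    `old` and `new` map (kind, name) -> source text of that entity --
    a placeholder for real treesitter output (function_item, struct_item, ...).
    """
    report = {"added": [], "removed": [], "changed": [], "unchanged": []}
    for key in sorted(old.keys() | new.keys()):
        if key not in new:
            report["removed"].append(key)
        elif key not in old:
            report["added"].append(key)
        elif old[key] != new[key]:
            report["changed"].append(key)
        else:
            report["unchanged"].append(key)
    return report
```

The obvious failure mode is renames: greedy name matching reports them as a remove plus an add. That's exactly what the score-based global matching would fix -- compare bodies pairwise and solve an assignment problem instead of keying on names.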
Mayyybe if I'm happy with it I'll end up releasing something. But I'm not trying to collect GitHub stars or HN karma, so I might just happily use it in the privacy of my own home without trying to "commercialize it".
After all, sometimes I just want a shelf.
Misc. stuff
- I'm in the market for a few square meters of Tyvek or other translucent, non-woven material suitable for building a light diffuser -- let me know if you have any favorite vendors that can ship to the EU.
- How They Made This - Coinbase Commercial Breakdown. Crypto is a negative-sum parasite on productive economic activity, but has the silver lining of funneling a lot of capital to weird creative folks.
- The Easiest Way To Design Furniture…. Laura Kampf on getting off the computer and designing physical spaces with tape, lil' wood sticks, and cardboard.
- Hotel California - Reimagined on the Traditional Chinese Guzheng
- C is not a low-level language: Your computer is not a fast PDP-11.
- Loon is a Lisp. Thrilled to discover I'm not the only one who wants to mash together Clojure and Rust. The current implementation seems to have been manically vibe-coded and I quickly ran into some terrible bugs, but on the other hand it exists so I'm not going to be a hater.
- Made a print in place box so I can easily hand out printed bees🐝. "I'm quite content with the result, the bees fit snugly and the box opens and closes nicely"
- "There isn't a lot of reliable information out there about how to buy a gas mask, especially for the specific purpose of living under state repression. But hopefully after reading this guide you'll feel equipped to make an educated decision."
- "This zoomable map shows every page of every issue of BYTE starting from the front cover of the first issue (top left) to the last page of the final edition (bottom right). The search bar runs RE2 regex over the full text of all 100k pages." A lovely reminder that user-interfaces can be extremely fast and information dense.