to read (pdf)
- Neobrutalism components - Start making neobrutalism layouts today
- Debunking zswap and zram myths
- Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
- Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
- Letting AI Actively Manage Its Own Context | 明天的乌云
- April 06, 2026
-
🔗 backnotprop/plannotator v0.17.0 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.16.7 | Gemini CLI plan review, install script skills directory fix
v0.16.6 | Perforce support, Pi shared event API, suggested code prefill, file tree expand fix
v0.16.5 | Resize handle scrollbar fix, VS Code Marketplace publish
v0.16.4 | Compound planning improvement hook, GitHub Enterprise + self-hosted GitLab, dockview workspace, new themes
v0.16.3 | Pi phase configuration, CLI help, untracked file discovery fix, review scroll reset
v0.16.2 | Draggable comment popovers, cross-file annotation visibility, custom diff fonts, OpenCode verbose log fix
v0.16.1 | SSE stream idle timeout fix for external annotations API
v0.16.0 | GitHub Copilot CLI, external annotations API, bot callback URLs, interactive checkboxes, print support, diff display options
v0.15.5 | Custom display names, GitHub viewed file sync, expand/collapse all in file tree, search performance, WSL fix
v0.15.2 | Compound Planning skill, folder annotation, /plannotator-archive slash command, skill installation via platform installers
v0.15.0 | Live AI chat in code review, plan archive browser, folder file viewer, resizable split pane, Pi full feature parity
v0.14.5 | GitLab merge request review, login page image fix, Windows install path fix
What's New in v0.17.0
v0.17.0 introduces AI-powered code review agents, token-level annotation in diffs, and merge-base diffs for PR-accurate comparisons. Three of the six PRs in this release came from external contributors, one of them a first-timer.
AI Code Review Agents
Codex and Claude Code can now run as background review agents directly from the Plannotator code review UI. Select an agent, launch it, and watch live log output stream into a detail panel while the agent works. When it finishes, its findings appear as external annotations in the diff viewer, tagged by severity.
Codex agents use their built-in `codex-review` command and produce priority-level findings (P0 through P3). Claude agents use a custom multi-agent prompt covering bug detection, security, code quality, and guideline compliance, with each finding classified as important, nit, or pre-existing. Both agents' findings include reasoning traces that explain the logic behind each annotation.

For PR reviews, the server automatically creates a local worktree so agents have full file access without affecting your working directory. Same-repo PRs use `git worktree`; cross-repo forks use a shallow clone with tracking refs for both branches. Pass `--no-local` to skip the worktree if you don't need file access.

The Pi extension has full agent review parity: stdin/stdout/stderr handling, live log streaming, result ingestion, and vendored review modules with import rewriting.
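The two agents report on different severity scales (Codex: P0 through P3; Claude: important/nit/pre-existing). A hypothetical sketch of how mixed findings from both agents could be ranked into one sorted list; the class, rank tables, and example findings below are illustrative assumptions, not Plannotator's actual data model:

```python
from dataclasses import dataclass

# Hypothetical rank tables: lower number = more urgent. These mappings
# are assumptions for illustration only; the real Plannotator internals
# may merge the two scales differently (or not at all).
CODEX_RANKS = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}
CLAUDE_RANKS = {"important": 0, "nit": 2, "pre-existing": 3}

@dataclass
class Finding:
    agent: str      # "codex" or "claude"
    severity: str   # e.g. "P1" or "nit"
    message: str

    def rank(self) -> int:
        table = CODEX_RANKS if self.agent == "codex" else CLAUDE_RANKS
        return table.get(self.severity, 99)  # unknown severities sort last

findings = [
    Finding("claude", "nit", "prefer f-string"),
    Finding("codex", "P0", "null deref on error path"),
    Finding("claude", "important", "missing auth check"),
]
ordered = sorted(findings, key=Finding.rank)  # most urgent first
```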
Token-Level Code Selection
The diff viewer now supports clicking individual syntax tokens to annotate them. Hover a token to see it underlined; click to open the annotation toolbar with the token's text and position as context (e.g., "Line 47: `processOrder`"). Token metadata is stored on the annotation and surfaced in sidebar badges and exported feedback.

Gutter-based line selection continues to work independently. The two selection modes don't interfere with each other.
Merge-Base Diffs
A new "Current PR Diff" option in the diff type selector uses `git merge-base` to find the common ancestor between your branch and the default branch, then diffs from that point. This produces the same diff you'd see on a GitHub pull request page. The existing "vs main" option (`git diff main..HEAD`) is still available but includes upstream changes that arrived after you branched, which can be noisy.

Additional Changes
- @ file reference support in annotate. OpenCode-style `@file.md` references now resolve correctly in `/plannotator-annotate`. The resolver strips the leading `@` as a fallback when the literal filename doesn't exist, while still preferring real files named `@something.md` if present (#488 by @Exloz)
- Markdown hard line breaks and list continuations. Two-trailing-space and backslash hard breaks now render as `<br>` elements. Indented continuation lines after list items merge into the preceding bullet instead of becoming orphan paragraphs (#483, closing #482)
- Explicit local mode override. Setting `PLANNOTATOR_REMOTE=0` or `false` now forces local mode, bypassing SSH auto-detection. Previously only `1`/`true` had explicit meaning (#481 by @foxytanuki, closing #480)
- PR file content merge-base fix. File contents for expandable diff context are now fetched at the merge-base commit instead of the base branch tip. When the base branch has moved since the PR was created, the old file contents didn't match the diff hunks, causing crashes in the diff renderer. The fix fetches the merge-base SHA via GitHub's compare API and falls back gracefully if unavailable
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`

Windows: `irm https://plannotator.ai/install.ps1 | iex`

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

Copilot CLI: `/plugin marketplace add backnotprop/plannotator` then `/plugin install plannotator-copilot@plannotator`

Gemini CLI: The install script auto-detects `~/.gemini` and configures hooks, policy, and slash commands. See `apps/gemini/README.md` for manual setup.

OpenCode: Clear cache and restart (`rm -rf ~/.bun/install/cache/@plannotator`), then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`

Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`

VS Code Extension: Install from the VS Code Marketplace. Tested with Claude Code running in VS Code's integrated terminal. Not currently compatible with Anthropic's official VS Code extension due to upstream hook bugs.
What's Changed
- feat(review): token-level code selection for annotations by @backnotprop in #500
- feat(review): AI review agents, local worktree, and UI polish by @backnotprop in #491
- fix(annotate): support @ markdown file references by @Exloz in #488
- feat(review): add merge-base diff option for PR-style diffs by @yonihorn in #485
- fix: handle markdown hard line breaks and list continuations by @backnotprop in #483
- fix(remote): support explicit local override by @foxytanuki in #481
- fix(review): use merge-base SHA for PR file contents by @backnotprop
Contributors
@Exloz contributed the @ file reference fix for OpenCode's annotate mode (#488), including comprehensive test coverage for edge cases like real `@`-prefixed filenames and quoted input. First contribution.

@yonihorn returned with the merge-base diff option (#485), giving PR reviews the same diff semantics GitHub uses.
@foxytanuki continued contributing with the explicit local mode override (#481), their third PR after the CLI help message and SSE timeout fix.
Community members who reported issues addressed in this release:
- @rcdailey: #482 (markdown hard line breaks not rendering)
- @foxytanuki: #480 (PLANNOTATOR_REMOTE=false semantics)
Full Changelog: v0.16.7...v0.17.0
-
🔗 r/Yorkshire Richmond gleaming in the spring sunshine today. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/Yorkshire No better place.. rss
Average photo. submitted by /u/Melodic_Position_590
[link] [comments]
-
🔗 r/york What’s the name of the trio who play in York? rss
They are a three piece, violin, guitar and double bass, and they play covers in York. They're bloody fantastic but I cannot remember their name
submitted by /u/rjle_x
[link] [comments] -
🔗 jesseduffield/lazygit v0.61.0 release
The big one in this release is support for GitHub pull requests. They are shown as little GitHub icons next to each branch that has one, and you can open a PR in the browser by pressing shift-G. To enable this, all you need to do is install the `gh` tool if you haven't already, and log in using `gh auth login`.

What's Changed
Features ✨
- Show pull requests against branches by @jesseduffield in #2781
Enhancements 🔥
- Add support for clicking on arrows in the file list to expand/collapse directories by @blakemckeany in #5365
- Remove empty directories after discarding untracked files by @stefanhaller in #5408
- Make file sort order and case sensitivity configurable, and default to mix files and folders by @stefanhaller in #5427
- Allow customizing the window width/height thresholds for when to use portrait mode by @stefanhaller in #5452
- Log hashes of local branches when deleting them by @stefanhaller in #5441
- Add condition field to custom command prompts by @mrt181 in #5364
Fixes 🔧
- Fix staging only some lines of a block of consecutive changes by @stefanhaller in #5396
- Fix the expanded layout of the branches panel (half and full screen modes) by @stefanhaller in #5413
- Fix searching commits or main view after switching repos by @stefanhaller in #5424
- Scroll to top when showing subcommits by @stefanhaller in #5425
- Fix patch commands when git config has color=always by @matthijskooijman in #5405
- Don't stage out-of-date submodules when asking user to auto-stage after resolving conflicts by @stefanhaller in #5440
Maintenance ⚙️
- Remove go-git dependency by @stefanhaller in #5420
- Make Debian/Ubuntu install command architecture-independent by @discapes in #5386
- Bump github.com/buger/jsonparser from 1.1.1 to 1.1.2 by @dependabot[bot] in #5423
- fix: pin 7 unpinned action(s), extract 1 inline secret to env var by @dagecko in #5439
- Fix dependabot config file by @stefanhaller in #5443
- Bump actions/cache from 4 to 5 by @dependabot[bot] in #5444
- Bump actions/download-artifact from 7 to 8 by @dependabot[bot] in #5445
- Bump actions/upload-artifact from 6 to 7 by @dependabot[bot] in #5446
- Bump github.com/lucasb-eyer/go-colorful from 1.3.0 to 1.4.0 by @dependabot[bot] in #5447
- Bump github.com/spf13/afero from 1.9.5 to 1.15.0 by @dependabot[bot] in #5448
- Bump github.com/creack/pty from 1.1.11 to 1.1.24 by @dependabot[bot] in #5449
- Bump github.com/stretchr/testify from 1.10.0 to 1.11.1 by @dependabot[bot] in #5450
- Bump github.com/sanity-io/litter from 1.5.2 to 1.5.8 by @dependabot[bot] in #5451
- Bump github.com/adrg/xdg from 0.4.0 to 0.5.3 by @dependabot[bot] in #5456
- Bump github.com/spkg/bom from 0.0.0-20160624110644-59b7046e48ad to 1.0.1 by @dependabot[bot] in #5457
- Bump github.com/integrii/flaggy from 1.4.0 to 1.8.0 by @dependabot[bot] in #5458
- Bump github.com/sahilm/fuzzy from 0.1.0 to 0.1.1 by @dependabot[bot] in #5459
- Bump github.com/sasha-s/go-deadlock from 0.3.6 to 0.3.9 by @dependabot[bot] in #5460
Docs 📖
- Add a note about AI to CONTRIBUTING.md by @stefanhaller in #5404
- Update redo keybinding in README.md by @unikitty37 in #5387
- Fix grammar in the contributor guide by @Rohan5commit in #5392
I18n 🌎
- Update translations from Crowdin by @stefanhaller in #5476
Performance Improvements 📊
- Improve performance of discarding many files by @stefanhaller in #5407
New Contributors
- @blakemckeany made their first contribution in #5365
- @discapes made their first contribution in #5386
- @unikitty37 made their first contribution in #5387
- @Rohan5commit made their first contribution in #5392
- @matthijskooijman made their first contribution in #5405
- @dagecko made their first contribution in #5439
- @mrt181 made their first contribution in #5364
Full Changelog:
v0.60.0...v0.61.0 -
🔗 @binaryninja@infosec.exchange Tired of unzipping your password-protected malware samples just to analyze mastodon
Tired of unzipping your password-protected malware samples just to analyze them? We've got you covered.
Our latest blog post covers Container Transforms and how Binja now handles nested binary formats with structure and provenance intact.
Read it here: https://binary.ninja/2026/03/31/container-transforms.html
-
🔗 r/wiesbaden Language school (Sprachschule) in Frankfurt/Wiesbaden rss
submitted by /u/Alert-Count8542
[link] [comments] -
🔗 r/Yorkshire Hand painted Yorkshire artworks by Paul Halmshaw. rss
submitted by /u/Far-Elephant-2612
[link] [comments]
-
🔗 r/york My Visit To The City Today - lots of photos. rss
submitted by /u/danum1962
[link] [comments]
-
🔗 r/york Original Ghost Walk (1973) vs. Mad Alice, which one should I book ? rss
Hi All, I'll be visiting York soon and I badly want to do a ghost tour. I've been looking at choices and I'm torn between two.
I really love the fact that the Original Ghost Walk is the oldest in the world, that authenticity is pulling me.
But I see everyone raving about Mad Alice (The Bloody Tour) for the performance. For those who have done both, which one feels more like a genuine dive into York's history ? (or) should I even care about history and just look to have fun ?
I'm staying overnight specifically to do one of these, so I want to make sure I pick the one that actually feels worth it after dark.
submitted by /u/Lanky_Cartoonist_743
[link] [comments] -
🔗 sacha chua :: living an awesome life YE12: Categorizing Emacs News, epwgraph, languages rss
View in the Internet Archive, watch or comment on YouTube, or email me.
Chapters:
- 00:41:21 epwgraph
- 00:54:56 learning languages
Thanks for your patience with the audio issues! At some point, I need to work out the contention between all the different processes that want to listen to the audio from my mic. =)
In this livestream, I categorize Emacs News for 2026-04-06, show epwgraph for managing Pipewire connections from Emacs, and share some of my language learning workflows.
You can e-mail me at sacha@sachachua.com.
-
🔗 r/york Jumble sale! rss
🛍️ Jumble Sale – Saturday 11th April! 🛍️
A fantastic jumble sale will be taking place on Saturday 11th April, 2pm – 4pm at the Sheriff Hutton Village Hall, in support of Shopmobility York.
The wonderful Sheriff Hutton Jumblies will be running the sale on our behalf – and if you’ve been before, you’ll know it’s always a brilliant event with plenty of bargains to be found!
✨ Details:
• ⏰ Time: 2pm – 4pm
• 📍 Location: Village Hall, Sheriff Hutton Road, York YO60 6RA
• 💷 Entry: Just 50p
• 🚶 It’s always popular – arriving early to join the queue is highly recommended!
🎟️ Don’t miss the tombola, and be sure to visit the cake stall for some delicious homemade treats!
🙏 Donations still welcome! If anyone is still wanting to donate items, please contact to arrange collection or drop off.
Come along, grab a bargain, and support a great cause – we’d love to see you there!
JumbleSale #ShopmobilityYork #CommunityEvent
submitted by /u/Single-Ad-5317
[link] [comments] -
🔗 r/reverseengineering Cracking a Malvertising DGA From the Device Side rss
submitted by /u/AdTemporary2475
[link] [comments] -
🔗 r/york Walking into York by the Ouse rss
submitted by /u/York_shireman
[link] [comments]
-
🔗 sacha chua :: living an awesome life 2026-04-06 Emacs news rss
There's a lot of buzz around the remote code execution thing that involves Git, but it seems to be more of a Git issue than an Emacs one. This might be a workaround if you want, and in the meantime, don't check out git repositories you don't trust. There's no page for the Emacs Carnival for April yet, but you can start thinking about the theme of "newbies/starter kits" already, and I'm sure Cena or someone will round things up afterwards. Enjoy!
- Workaround for the Git-related security issue that lots of people are talking about (@stackeffect@social.tchncs.de)
- Upcoming events (iCal file, Org):
- Emacs.si (in person): Emacs.si meetup #4 2026 (v #živo) https://dogodki.kompot.si/events/c4ee8c26-c668-491e-91b3-b466578b83e2 Mon Apr 6 1900 CET
- Emacs Paris: S: Emacs workshop in Paris (online) https://emacs-doctor.com/ Tue Apr 7 0830 America/Vancouver - 1030 America/Chicago - 1130 America/Toronto - 1530 Etc/GMT - 1730 Europe/Berlin - 2100 Asia/Kolkata - 2330 Asia/Singapore
- OrgMeetup (virtual) https://orgmode.org/worg/orgmeetup.html Wed Apr 8 0900 America/Vancouver - 1100 America/Chicago - 1200 America/Toronto - 1600 Etc/GMT - 1800 Europe/Berlin - 2130 Asia/Kolkata – Thu Apr 9 0000 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri Apr 10 1800 Europe/Paris
- London Emacs (in person): Emacs London meetup https://www.meetup.com/london-emacs-hacking/events/313909207/ Tue Apr 14 1800 Europe/London
- Emacs Berlin: In-Person-Only Emacs-Berlin Stammtisch https://emacs-berlin.org/ Tue Apr 14 1900 Europe/Berlin
- M-x Research: TBA https://m-x-research.github.io/ Wed Apr 15 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Protesilaos Stavrou: Emacs live stream with Sacha Chua on 2026-04-16 17:30 Europe/Athens
- Emacs configuration:
- Announcing Anju (Reddit) - mouse interactions for modeline, context menu, and main menu
- Emacs Redux: Repeat Mode: Stop Repeating Yourself (Irreal)
- backpack 0.4.0 - adds self-documenting inventory browser (Reddit)
- Emacs Lisp:
- Almighty Lisp: Lisp & Emacs Essentials - almightylisp.com (HN)
- Dave Pearson: nukneval.el v1.3 unload and re-evaluate
- Creating an Emacs Package from Concept to MELPA (Part 7) (57:13)
- How to run a function when my buffer selection changes? - updated with window-state-change-hook
- Yay Emacs live: Reorganizing my Emacs configuration so that my defuns are tangled to separate files (01:48:56)
- Sacha Chua: YE11: Fix find-function for Emacs Lisp from org-babel or scratch (YouTube, 08:19)
- Appearance:
- Navigation:
- Writing:
- Denote:
- Org Mode:
- [EMACS LAB] #3: Introduction to Org Mode (01:57:55)
- aravindps/org-gtd: Things 3 style GTD for Emacs — org-mode agenda views, ⌘ keybindings, context tags. Works with Doom and vanilla Emacs. · GitHub (r/emacs, r/orgmode)
- [BLOG] #27 bbb:OrgMeetup on Wed, February 11, 19:00 UTC+3 - Ihor Radchenko (@yantar92@fosstodon.org) notes
- Import, export, and integration:
- lopeztel/ox-dnd-html: Emacs export org files to D&D themed html · GitHub (Reddit)
- Adding org-protocol support (Reddit)
- Sacha Chua: Extract PDF highlights into an Org file with Python (YouTube 04:27)
- James Endres Howell: My first advice! (in Emacs Lisp) - specifying HTML boilerplate for org-static-blog
- RSS feeds for your org-mode website (@bgtdsword@toot.io)
- Org-mode - Various font sizes LaTeX (04:14)
- Org development:
- Completion:
- Emacs Redux: Declutter M-x with read-extended-command-predicate
- [RELEASE] let-completion v0.2.0: full overhaul of Elisp completion - 46 binding forms, function argument candidates, expandable registry, fully customizable two-column annotations (Reddit)
- rougier/nano-vertico: Emacs / nano + vertico · GitHub (Reddit)
- Coding:
- Shells:
- Mail, news, and chat:
- Multimedia:
- Fun:
- Anybody interested in writing SDL games in Emacs Lisp?
- Dave Pearson: eg.el v1.2 Norton Guide?, thinks.el v1.13 thought bubbles, binclock.el v1.12 binary clock, obfusurl.el v2.2 obfuscating URLs
- AI:
- Community:
- Emacs ATX Meetup. April 2026. - YouTube (2:04:58)
- Sacha Chua: #YayEmacs 10: Emacs coaching with Prot: Emacs workflows and streaming (YouTube 01:06:30)
- Emacs Carnival March 2026: Mistakes and Misconceptions
- Prot Asks: Hjalmar about Emacs for music, the joy of art, and Internet sociability (02:04:24)
- Alvaro Ramirez: …and then there were three (expect delays) (Irreal)
- A Cult AI Computer’s Boom and Bust - YouTube (Irreal)
- Other:
- Emacs development:
- On keybindings and the slow erosion of help's utility - long discussion
- New option vc-dir-auto-hide-up-to-date
- * lisp/vc/diff-mode.el (diff-mode-read-only-map): Bind 'v'.
- * etc/NEWS: Announce Org update.
- ; Fix documentation of last change
- Recursively check dependencies for package compatibility
- Inform macOS Accessibility Zoom of cursor position (bug#80624)
- New macro setopt-local and function set-local (bug#80709)
- Add xref-edit-mode (bug#80616)
- New packages:
- compilation-history: Track compilation history in SQLite (MELPA)
- corg: Header completion for org-mode (MELPA)
- evim: Evil Visual Multi - Multiple cursors for evil-mode (MELPA)
- ghostel: Terminal emulator powered by libghostty (MELPA)
- meshmonitor-chat: Chat client for MeshMonitor (Meshtastic) (MELPA)
- occult: Collapse and reveal buffer regions (MELPA)
- org-dt: Dynamic templating loader (MELPA)
- org-grimoire: Emacs-native static site generator (MELPA)
- org-invox: Invoice management for contractors using Org mode (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Yorkshire I don’t think any place can match this vibe that Yorkshire has✨ rss
@travelandchill1 submitted by /u/ScrollAndThink
[link] [comments]
-
🔗 r/Leeds Few more photos rss
Couple more photos this morning, although I got told off. Apparently Wellington Place don't permit commercial photography without prior agreement.
I'm in my work clothes with a Red S9 posting pics on Reddit lol.
submitted by /u/Phil-pot
[link] [comments] -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/wiesbaden Body shop / car painter (Karosseriebauer / Lackierer) rss
Hi,
someone crashed into my parked car.
The front bumper cover and the fender need to be repaired.
The person who caused the accident is known, and their insurance is paying.
Do you have tips for a really good body shop / painter?
And maybe also a decent lawyer for traffic law?
submitted by /u/BabaJoe
[link] [comments] -
🔗 r/LocalLLaMA I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM rss
Hardware:
- Stock iMac G3 Rev B (October 1998). 233 MHz PowerPC 750, 32 MB RAM, Mac OS 8.5. No upgrades.
- Model: Andrej Karpathy's 260K TinyStories (Llama 2 architecture). ~1 MB checkpoint.

Toolchain:
- Cross-compiled from a Mac mini using Retro68 (GCC for classic Mac OS → PEF binaries)
- Endian-swapped model + tokenizer from little-endian to big-endian for PowerPC
- Files transferred via FTP to the iMac over Ethernet

Challenges:
- Mac OS 8.5 gives apps a tiny memory partition by default. Had to use MaxApplZone() + NewPtr() from the Mac Memory Manager to get enough heap
- RetroConsole crashes on this hardware, so all output writes to a text file you open in SimpleText
- The original llama2.c weight layout assumes n_kv_heads == n_heads. The 260K model uses grouped-query attention (kv_heads=4, heads=8), which shifted every pointer after wk and produced NaN. Fixed by using n_kv_heads * head_size for wk/wv sizing
- Static buffers for the KV cache and run state to avoid malloc failures on 32 MB

It reads a prompt from prompt.txt, tokenizes with BPE, runs inference, and writes the continuation to output.txt. Obviously the output is very short, but this is definitely meant to just be a fun experiment/demo! Here's the repo link: https://github.com/maddiedreese/imac-llm

submitted by /u/maddiedreese
[link] [comments]
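The endian swap mentioned in the toolchain notes (converting a little-endian llama2.c checkpoint for big-endian PowerPC) boils down to reversing the bytes of every 32-bit word. A minimal sketch of that conversion; the linked repo may implement it differently:

```python
import struct

def swap_f32(buf: bytes) -> bytes:
    """Byte-swap a buffer of packed 32-bit floats between little- and
    big-endian. llama2.c-style checkpoints are flat float32 arrays, so
    a whole file can be converted this way (illustrative sketch; the
    actual imac-llm tooling may differ)."""
    assert len(buf) % 4 == 0, "checkpoint must be whole 32-bit words"
    n = len(buf) // 4
    # Unpack as little-endian floats, repack big-endian: reverses each word.
    return struct.pack(f">{n}f", *struct.unpack(f"<{n}f", buf))

values = (1.0, -2.5, 42.0)              # all exactly representable in float32
le_blob = struct.pack("<3f", *values)   # little-endian checkpoint bytes
be_blob = swap_f32(le_blob)             # what the big-endian PowerPC build reads
decoded = struct.unpack(">3f", be_blob) # big-endian view recovers the values
```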
-
🔗 r/Yorkshire Anybody here ever been to Market Weighton? Easily one of the nicest small towns in East Yorkshire in my opinion. rss
I haven't been to Market Weighton since around 2012 but plan on visiting again when I'm next in Hull again, always loved visiting Market Weighton when I lived in East Yorkshire.
submitted by /u/AcadiaNo1039
[link] [comments]
-
- April 05, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-05 rss
IDA Plugin Updates on 2026-04-05
Activity:
- IDAPluginList
- b4f15eca: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- IDAssist
- python-elpida_core.py
- tc_deer
- b080caf8: docs: update README.md and ida-plugin.json
- IDAPluginList
-
🔗 r/york Evil Eye rss
I went to Evil Eye several years ago when it had a fantastic gin shop.
Its website suggests that’s no longer the case.
I'm coming over next week - help get my expectations right! And if it doesn't have one any more, where's the next best gin shop?
submitted by /u/Sitheref0874
[link] [comments] -
🔗 sacha chua :: living an awesome life YE11: Fix find-function for Emacs Lisp from org-babel or scratch rss
Watch on Internet Archive, watch/comment on YouTube, download captions, or email me
Where can you define an Emacs Lisp function so that you can use `find-function` to jump to it again later?

- A: In an indirect buffer from an Org Mode source block, with your favorite eval function: `C-c '` (`org-edit-special`) inside the block, then execute the defun with `C-M-x` (`eval-defun`), `C-x C-e` (`eval-last-sexp`), or `eval-buffer`. `(defun my-test-1 () (message "Hello"))`
- B: In an Org Mode file, by executing the block with `C-c C-c`. `(defun my-test-2 () (message "Hello"))`
- C: In a .el file, file:///tmp/test-search-function.el: execute the defun with `C-M-x` (`eval-defun`), `C-x C-e` (`eval-last-sexp`), or `eval-buffer`
- D: In a scratch buffer, other temporary buffer, or really any buffer, thanks to `eval-last-sexp`. `(defun my-test-4 () (message "Hello"))`
Only option C works - it's gotta be in an .el file for `find-function` to find it. But I love jumping to function definitions using `find-function` or `lispy-goto-symbol` (which is bound to `M-.` if you use lispy and set up `lispy-mode`) so that I can look at or change how something works. It can be a little frustrating when I try to jump to a definition and it says, "Don't know where blahblahblah is defined." I just defined it five minutes ago! It's there in one of my other buffers, don't make me look for it myself. Probably this will get fixed in Emacs core someday, but no worries, we can work around it today with a little bit of advice.

I did some digging around in the source code. Turns out that `symbol-file` can't find the function definition in the `load-history` variable if you're not in a .el file, so `find-function-search-for-symbol` gets called with `nil` for the library, which causes the error. (emacs:subr.el)

I wrote some advice that searches in any open `emacs-lisp-mode` buffers or in a list of other files, like my Emacs configuration. This is how I activate it:

```elisp
(setq sacha-elisp-find-function-search-extra '("~/sync/emacs/Sacha.org"))
(advice-add 'find-function-search-for-symbol :around #'sacha-elisp-find-function-search-for-symbol)
```

Now I should be able to jump to all those functions wherever they're defined. (my-test-1) (my-test-2) (my-test-3) (my-test-4)

Note that by default, `M-.` in `emacs-lisp-mode` uses `xref-find-definitions`, which seems to really want files. I haven't figured out a good workaround for that yet, but lispy-mode makes `M-.` work and gives me a bunch of other great shortcuts, so I'd recommend checking that out.

Here's the source code for the find function thing:
```elisp
(defvar sacha-elisp-find-function-search-extra nil
  "List of filenames to search for functions.")

;;;###autoload
(defun sacha-elisp-find-function-search-for-symbol (fn symbol type library &rest _)
  "Find SYMBOL with TYPE in Emacs Lisp buffers or `sacha-find-function-search-extra'.
Prioritize buffers that do not have associated files, such as Org
Src buffers or *scratch*.  Note that the fallback search uses
\"^([^ )]+\" so that it isn't confused by preceding forms.  If
LIBRARY is specified, fall back to FN.  Activate this with:

  (advice-add 'find-function-search-for-symbol :around
              #'sacha-org-babel-find-function-search-for-symbol-in-dotemacs)"
  (if (null library)
      ;; Could not find library; search my-dotemacs-file just in case
      (progn
        (while (and (symbolp symbol) (get symbol 'definition-name))
          (setq symbol (get symbol 'definition-name)))
        (catch 'found
          (mapc
           (lambda (buffer-or-file)
             (with-current-buffer (if (bufferp buffer-or-file)
                                      buffer-or-file
                                    (find-file-noselect buffer-or-file))
               (let* ((regexp-symbol (or (and (symbolp symbol)
                                              (alist-get type (get symbol 'find-function-type-alist)))
                                         (alist-get type find-function-regexp-alist)))
                      (form-matcher-factory (and (functionp (cdr-safe regexp-symbol))
                                                 (cdr regexp-symbol)))
                      (regexp-symbol (if form-matcher-factory
                                         (car regexp-symbol)
                                       regexp-symbol))
                      (case-fold-search)
                      (regexp (if (functionp regexp-symbol)
                                  regexp-symbol
                                (format (symbol-value regexp-symbol)
                                        ;; Entry for ` (backquote) macro in loaddefs.el,
                                        ;; (defalias (quote \`)..., has a \ but
                                        ;; (symbol-name symbol) doesn't.  Add an
                                        ;; optional \ to catch this.
                                        (concat "\\\\?" (regexp-quote (symbol-name symbol)))))))
                 (save-restriction
                   (widen)
                   (with-syntax-table emacs-lisp-mode-syntax-table
                     (goto-char (point-min))
                     (if (if (functionp regexp)
                             (funcall regexp symbol)
                           (or (re-search-forward regexp nil t)
                               ;; `regexp' matches definitions using known forms like
                               ;; `defun', or `defvar'.  But some functions/variables
                               ;; are defined using special macros (or functions), so
                               ;; if `regexp' can't find the definition, we look for
                               ;; something of the form "(SOMETHING <symbol> ...)".
                               ;; This fails to distinguish function definitions from
                               ;; variable declarations (or even uses thereof), but is
                               ;; a good pragmatic fallback.
                               (re-search-forward
                                (concat "^([^ )]+" find-function-space-re "['(]?"
                                        (regexp-quote (symbol-name symbol)) "\\_>")
                                nil t)))
                         (progn
                           (beginning-of-line)
                           (throw 'found (cons (current-buffer) (point))))
                       (when-let* ((find-expanded
                                    (when (trusted-content-p)
                                      (find-function--search-by-expanding-macros
                                       (current-buffer) symbol type form-matcher-factory))))
                         (throw 'found (cons (current-buffer) find-expanded)))))))))
           (delq nil
                 (append (sort (match-buffers '(derived-mode . emacs-lisp-mode))
                               :key (lambda (o) (or (buffer-file-name o) "")))
                         sacha-elisp-find-function-search-extra)))))
    (funcall fn symbol type library)))
```
I even figured out how to write tests for it:

```elisp
(ert-deftest sacha-elisp--find-function-search-for-symbol--in-buffer ()
  (let ((sym (make-temp-name "--test-fn"))
        buffer)
    (unwind-protect
        (with-temp-buffer
          (emacs-lisp-mode)
          (insert (format ";; Comment\n(defun %s () (message \"Hello\"))" sym))
          (eval-last-sexp nil)
          (setq buffer (current-buffer))
          (with-temp-buffer
            (let ((pos (sacha-elisp-find-function-search-for-symbol
                        nil (intern sym) nil nil)))
              (should (equal (car pos) buffer))
              (should (equal (cdr pos) 12)))))
      (fmakunbound (intern sym)))))

(ert-deftest sacha-elisp--find-function-search-for-symbol--in-file ()
  (let* ((sym (make-temp-name "--test-fn"))
         (temp-file (make-temp-file
                     "test-" nil ".org"
                     (format "#+begin_src emacs-lisp\n;; Comment\n(defun %s () (message \"Hello\"))\n#+end_src" sym)))
         (sacha-elisp-find-function-search-extra (list temp-file))
         buffer)
    (unwind-protect
        (with-temp-buffer
          (let ((pos (sacha-elisp-find-function-search-for-symbol
                      nil (intern sym) nil nil)))
            (should (equal (buffer-file-name (car pos)) temp-file))
            (should (equal (cdr pos) 35))))
      (delete-file temp-file))))
```

This is part of my Emacs configuration. You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 jellyfin/jellyfin 10.11.8 release
🚀 Jellyfin Server 10.11.8
We are pleased to announce the latest stable release of Jellyfin, version 10.11.8! This minor release brings several bugfixes to improve your Jellyfin experience. As always, please ensure you take a full backup before upgrading!
Note: This release fixes several regressions from 10.11.7, with the goal of getting people onto an updated release ahead of the forthcoming (t-minus 9 days) publication of the GHSAs/CVEs that were fixed in 10.11.7. Please upgrade to this release as soon as you can.
You can find more details about and discuss this release on our forums.
Changelog (3)
📈 General Changes
-
🔗 r/LocalLLaMA Gemma 4 just casually destroyed every model on our leaderboard except Opus 4.6 and GPT-5.2. 31B params, $0.20/run rss
Tested Gemma 4 (31B) on our benchmark. Genuinely did not expect this. 100% survival, 5 out of 5 runs profitable, +1,144% median ROI. At $0.20 per run.

It outperforms GPT-5.2 ($4.43/run), Gemini 3 Pro ($2.95/run), Sonnet 4.6 ($7.90/run), and absolutely destroys every Chinese open-source model we've tested — Qwen 3.5 397B, Qwen 3.5 9B, DeepSeek V3.2, GLM-5. None of them even survive consistently. The only model that beats Gemma 4 is Opus 4.6 at $36 per run. That's 180× more expensive.

31 billion parameters. Twenty cents. We double-checked the config, the prompt, the model ID — everything is identical to every other model on the leaderboard. Same seed, same tools, same simulation. It's just this good. Strongly recommend trying it for your agentic workflows. We've tested 22 models so far and this is by far the best cost-to-performance ratio we've ever seen.

Full breakdown with charts and day-by-day analysis: foodtruckbench.com/blog/gemma-4-31b

FoodTruck Bench is an AI business simulation benchmark — the agent runs a food truck for 30 days, making decisions about location, menu, pricing, staff, and inventory. Leaderboard at foodtruckbench.com

EDIT — Gemma 4 26B A4B results are in. Lots of you asked about the 26B A4B variant. Ran 5 simulations, here's the honest picture: 60% survival (3/5 completed, 2 bankrupt). Median ROI: +119%, Net Worth: $4,386. Cost: $0.31/run. Placed #7 on the leaderboard — above every Chinese model and Sonnet 4.5, below everything else. Both bankruptcies were loan defaults — same pattern we see across models. The 3 surviving runs were solid, especially the best one at +296% ROI.

But here's the catch. The 26B A4B is the only model out of 23 tested that required custom output sanitization to function. It produces valid tool-call intent, but the JSON formatting is consistently broken — malformed quotes, trailing garbage tokens, invalid escapes. I had to build a 3-stage sanitizer specifically for this model. No other model needed anything like this. The business decisions themselves are unmodified — the sanitizer only fixes JSON formatting, not strategy. But if you're planning to use this model in agentic workflows, be prepared to handle its output format. It does not produce clean function calls out of the box.

TL;DR: 31B dense → 100% survival, $0.20/run, #3 overall. 26B A4B → 60% survival, $0.31/run, #7 overall, but requires custom output parsing. The 31B is the clear winner.

Updated leaderboard: foodtruckbench.com

submitted by /u/Disastrous_Theme5906
-
🔗 r/reverseengineering IW8 is safe and works fine? I'm interested in using it to learn how the game was developed, as well as the files, compilation process, etc. rss
submitted by /u/Strikewr
-
🔗 r/Harrogate What is the working class accent of Harrogate? rss
I've got a weird obsession with accents across the UK, especially Yorkshire. Harrogate is by far the nicest big place in Yorkshire I've been to but I never met anyone who's actually from there when I went. With it being a quite posh place, I'd assume it has a posh northern accent?
submitted by /u/montgomery_quinckle
-
🔗 r/Yorkshire We went to see the jousting at Leeds Royal Armouries! Here are some highlights rss
It was a fantastic day out and the museum is free, but they told us the next jousting tournament over the summer had been cancelled due to funding :( submitted by /u/mbloomer04
-
🔗 r/LocalLLaMA Real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B rss
Sure you can't do agentic coding with the Gemma 4 E2B, but this model is a game-changer for people learning a new language. Imagine a few years from now that people can run this locally on their phones. They can point their camera at objects and talk about them. And this model is multi-lingual, so people can always fall back to their native language if they want. This is essentially what OpenAI demoed a few years ago. Repo: https://github.com/fikrikarim/parlor submitted by /u/ffinzy
-
🔗 r/york Little walk around Walmgate Stray rss
Was cold, windy and rainy but I love this place. Interesting to see how the development of the Retreat is progressing. submitted by /u/DentistKitchen
-
🔗 r/york The Retreat, Heslington, York rss
submitted by /u/DentistKitchen
-
🔗 r/reverseengineering Revived "Sniper Shooter Free" — patched to work on modern Android rss
submitted by /u/mnaoumov
-
🔗 r/Yorkshire Easby Abbey! rss
I visited Easby Abbey today, and it was great - so here's some pictures I took. I will admit that I added a bit more turquoise to the colour mix for the sky - it was a lovely day, but not quite that continental. Anyway, thank you for having me Yorkshire, I had a lovely time this weekend. edit - I have no idea why that third picture is all janky resolution wise! submitted by /u/ErsatzNihilist
-
🔗 r/Yorkshire Perfect Easter Sunday in N. Yorks rss
Down from Glasgow visiting this weekend and you’ve done us proud as per usual submitted by /u/damo74uk
-
🔗 r/LocalLLaMA Per-Layer Embeddings: A simple explanation of the magic behind the small Gemma 4 models rss
Many of you seem to have liked my recent post "A simple explanation of the key idea behind TurboQuant". Now I'm really not much of a blogger and I usually like to invest all my available time into developing Heretic, but there is another really cool new development happening with lots of confusion around it, so I decided to make another quick explainer post.
You may have noticed that the brand-new Gemma 4 model family includes two small models: gemma-4-E2B and gemma-4-E4B.
Yup, that's an "E", not an "A".
Those are neither Mixture-of-Experts (MoE) models, nor dense models in the traditional sense. They are something else entirely, something that enables interesting new performance tradeoffs for inference.
What's going on?
To understand how these models work, and why they are so cool, let's quickly recap what Mixture-of-Experts (MoE) models are:
gemma-4-26B-A4B is an example of an MoE model. It has 25.2 billion parameters (rounded to 26B in the model name). As you may know, transformer language models consist of layers, and each layer contains a so-called MLP (Multi-Layer Perceptron) component, which is responsible for processing the residual vector as it passes through the layer stack. In an MoE model, that MLP is split into "experts", which are sub-networks that learn to specialize during training. A routing network decides for each token which experts are the most appropriate for the token, and only those expert networks are actually used while processing that token.
In other words, while an MoE model has many parameters, only a fraction of them are required to predict the next token at any specific position. This is what the model name means: gemma-4-26B-A4B has 26 billion (actually 25.2 billion) total parameters, but only 4 billion of those (actually 3.8 billion) are active during any single inference step.
The good news is that this means that we can do inference much faster than for a dense 26B model, as only 3.8 billion parameters are involved in the computations. The bad news is that we still need to be able to load all 25.2 billion parameters into VRAM (or fast RAM), otherwise performance will tank because we don't know in advance which parameters we'll need for a token, and the active experts can differ from token to token.
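The routing tradeoff described above is easy to see in code. Here is a minimal, illustrative NumPy sketch of top-k MoE routing (not Gemma's actual implementation; the expert count, dimensions, and router are made up for the example): only the chosen experts' weights take part in a given token's computation, yet every expert must remain loaded because the next token may pick different ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a tiny stand-in MLP weight matrix; the router is linear.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    scores = x @ router                    # routing logits, one per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only the chosen experts' parameters participate in the computation.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # → (8,)
```

Per token, only `top_k / n_experts` of the MLP parameters do any work, which is exactly why inference is fast but VRAM requirements stay at the full parameter count.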
Now gemma-4-E2B is a very different beast: It has 5.1 billion parameters, but 2.8 billion of those are embedding parameters. Google claims that those parameters "don't count", so they say that there are only 2.3 billion effective parameters. That's what the "E2B" part stands for.
Wut? Why don't the embedding parameters count?
If you have read or watched even a basic introduction to language models, you probably know what embeddings are: They are high-dimensional vectors associated with each token in the vocabulary. Intuitively speaking, they capture the "essence" of what a token stands for, encoded as a direction-magnitude combination in the embedding space.
Embeddings are static and position-independent. The embedding vector associated with a specific token is always the same, regardless of where the token occurs in the input and which other tokens surround it. In the mathematical formulation, embeddings are often expressed as a matrix, which can be multiplied with a matrix of one-hot encoded tokens, giving a matrix of embedding vectors for those tokens.
The small Gemma 4 models make use of Per-Layer Embeddings (PLE): Instead of a single large embedding matrix that is applied right after the tokenizer at the beginning of processing, there are additional (smaller) embedding matrices for each layer. Through training, they acquire specialized knowledge that can re-contextualize the token for the semantic specialization of each layer, which greatly improves processing quality. The layer-based embedding vectors are combined with the residuals through a series of operations, adding locally relevant information.
For gemma-4-E2B, the matrices holding these Per-Layer Embeddings make up more than half of all model parameters.
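The data flow can be sketched like this (purely illustrative: the table sizes, the sigmoid gate, and the combination op are invented for the example; the real model's mixing operations differ). The point is that each layer reads only one row per token from its table:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, n_layers = 100, 8, 4

# One per-layer embedding table per transformer layer. Only one row per
# token per layer is ever read, so these can live in slow storage.
ple_tables = [rng.standard_normal((vocab_size, d_model))
              for _ in range(n_layers)]

def apply_ple(residual, token_id, layer_idx):
    ple_vec = ple_tables[layer_idx][token_id]  # cheap row lookup, no matmul
    gate = 1.0 / (1.0 + np.exp(-residual))     # invented gating op (sigmoid)
    return residual + gate * ple_vec           # fold into the residual stream

residual = rng.standard_normal(d_model)
for layer in range(n_layers):
    residual = apply_ple(residual, token_id=42, layer_idx=layer)
print(residual.shape)  # → (8,)
```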
Okay, but why don't the embedding parameters count?!?
Because the "Introduction to Transformers" tutorials you've been watching have lied to you. While applying embeddings via matrix multiplication is incredibly elegant mathematically, it's complete dogshit in practice. No inference engine actually does that.
Remember that embedding vectors are:
- Static (they only depend on the token itself)
- Position-independent (there is only one embedding vector for each token)
- Fixed (they are precomputed for the entire vocabulary)
So the "embedding matrix" is a list of embedding vectors, with as many elements as there are tokens in the vocabulary. There are no cross-column interactions at all. That's not a matrix, that's a lookup table. So we don't actually have to do matrix multiplication to get the embeddings. We just pull the entries for the token IDs from a fixed-size array. And we aren't even going to need the vast majority of entries. Modern tokenizer vocabularies typically contain around 250,000 different tokens. But if our input is 1000 tokens, we are only going to look at a tiny fraction of those.
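A few lines of NumPy make the point with illustrative sizes: the one-hot matmul from the textbooks and a plain row lookup produce identical results, and the lookup touches only the rows for the tokens in the input.

```python
import numpy as np

vocab_size, d_model = 1000, 16
rng = np.random.default_rng(0)
emb = rng.standard_normal((vocab_size, d_model))  # the "embedding matrix"

token_ids = np.array([3, 42, 7])

# Textbook formulation: one-hot vectors times the embedding matrix.
one_hot = np.eye(vocab_size)[token_ids]
via_matmul = one_hot @ emb

# What engines actually do: index the rows directly.
via_lookup = emb[token_ids]

print(np.allclose(via_matmul, via_lookup))  # → True
```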
We don't need CUDA cores or optimized kernels for that. We don't need those embedding matrices to be in VRAM. We don't even necessarily need to store them in CPU RAM. In fact, we can store them on disk. The plan seems to be to store them in flash memory on mobile devices, and possibly combine that with in-flash processing for further speedups in the future.
And that's the secret of Per-Layer Embeddings: They are huge, but we need such a tiny part of them for each inference step that we can store them wherever we like. And that's why they are fast.
submitted by /u/-p-e-w-
-
🔗 r/Yorkshire A great view through Mercury Bridge on the Swale in Richmond. rss
For more about Richmond, Yorks, visit the new subreddit. submitted by /u/Still_Function_5428
-
🔗 r/reverseengineering Inside WannaCry: Exploit, Worming, and TOR Communication Explained rss
submitted by /u/AcrobaticMonitor9992
-
🔗 r/Leeds Anyone up for table tennis / badminton at Quarry House? rss
Hey everyone,
Looking to see if anyone’s up for playing table tennis (or even badminton) at the Quarry House Leisure Centre in Leeds?
Trying to get a bit more active, don’t really know many folks here who play sports, so thought I’d put this out there. Let me know if you’re interested!submitted by /u/SubstantialHorror422
-
🔗 r/reverseengineering Reverse engineering PerimeterX’s new VM rss
submitted by /u/B9ph0met
-
🔗 r/wiesbaden Beach volleyball group rss
Hello,
I'm still fairly new to Wiesbaden and am looking for a mixed beach volleyball group that meets up to play and hang out. I'm no pro player - more a mix of semi-serious and having fun, happy to throw myself into the sand now and then.
The Schlachthof seems like a good meeting spot. I'd welcome suggestions, or maybe we can even form a group ourselves. Thanks in advance and have a nice Sunday :)
submitted by /u/nate23x
-
🔗 r/LocalLLaMA Minimax 2.7: Today marks 14 days since the post on X and 12 since huggingface on openweight rss
I think it would make a nice Easter egg to release today! submitted by /u/LegacyRemaster
-
🔗 r/LocalLLaMA Gemma 4 26b is the perfect all around local model and I'm surprised how well it does. rss
I got a 64gb memory mac about a month ago and I've been trying to find a model that is reasonably quick, decently good at coding, and doesn't overload my system. My test I've been running is having it create a doom style raycaster in html and js
I've been told qwen 3 coder next was the king, and while it's good, the 4-bit variant always put my system near the edge. Also I don't know if it was because it was the 4-bit variant, but it always would miss tool uses and get stuck in a loop guessing the right params. In the doom test it would usually get it and make something decent, but only after getting stuck in a loop of bad tool calls for a while.
Qwen 3.5 (the near 30b moe variant) could never do it in my experience. It always got stuck on a thinking loop and then would become so unsure of itself it would just end up rewriting the same file over and over and never finish.
But gemma 4 just crushed it, making something working after only 3 prompts. It was very fast too. It also limited its thinking and didn't get too lost in details, it just did it. It's the first time I've ran a local model and been actually surprised that it worked great, without any weirdness.
It makes me excited about the future of local models, and I wouldn't be surprised if in 2-3 years we'll be able to use very capable local models that can compete with the sonnets of the world.
submitted by /u/pizzaisprettyneato
-
🔗 navidrome/navidrome v0.61.1 release
This patch release addresses a WebP performance regression on low-power hardware introduced in v0.61.0, adds a new `EnableWebPEncoding` config option and a configurable UI cover art size, and includes several Subsonic API and translation fixes.
Configuration Changes
Status | Option | Description | Default
---|---|---|---
New | `EnableWebPEncoding` | Opt-in to WebP encoding for resized artwork. When `false` (default), Navidrome uses JPEG/PNG (preserving the original source format), avoiding the WebP WASM encoder overhead that caused slow image processing on low-power hardware in v0.61.0. Set to `true` to re-enable WebP output. Replaces the internal `DevJpegCoverArt` flag. (#5286) | `false`
New | `UICoverArtSize` | Size (in pixels, 200–1200) of cover art requested by the web UI. It was increased from 300px to 600px in 0.61.0; now configurable and defaulting to 300px to reduce image encoding load on low-power hardware. Users on capable hardware can raise it for sharper thumbnails. (#5286) | `300`
Changed | `DevArtworkMaxRequests` | Default lowered from `max(4, NumCPU)` to `max(2, NumCPU/2)` to reduce load on low-power hardware. (#5286) (Note: this is an internal configuration option and can be removed in future releases) | `max(2, NumCPU/2)`
Removed | `DevJpegCoverArt` | Replaced by the user-facing `EnableWebPEncoding` option. (#5286) | —

For a complete list of all configuration options, see the Configuration Options documentation.
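For users hit by the regression, a minimal `navidrome.toml` fragment using the new options might look like this (a sketch only: option names are taken from the table above, and both values shown are the new defaults):

```toml
# Keep JPEG/PNG artwork encoding (avoids the WebP WASM encoder on weak CPUs)
EnableWebPEncoding = false

# Request smaller cover art from the web UI to reduce encoding load
UICoverArtSize = 300
```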
Server
- Add missing viper defaults for `MPVPath`, `ArtistImageFolder`, and `Plugins.LogLevel` so they can be overridden via environment variables and config files. (220019a9f by @deluan)
- Update `go-sqlite3` to v1.14.38 and `go-toml` to v2.3.0. (6109bf519 by @deluan)
Artwork
- Address WebP performance regression on low-power hardware by preserving original image format when WebP encoding is disabled, and adding encoder/decoder selection logging. (#5286 by @deluan)
- Preserve animation for square thumbnails with animated images. (4030bfe06 by @deluan)
Smart Playlists
Subsonic API
- Strip OpenSubsonic extensions from playlists for legacy clients to improve compatibility. (23f355637 by @deluan)
- Return proper artwork ID format in `getInternetRadioStations`. (c60637de2 by @deluan)
Translations
- Update Esperanto and Dutch translations from POEditor. (#5301 by @deluan)
- Update Basque localisation. (#5278 by @xabirequejo)
Full Changelog: v0.61.0...v0.61.1
Helping out
This release is only possible thanks to the support of some awesome people!
Want to be one of them? You can sponsor, pay me a Ko-fi, or contribute with code.
Where to go next?
-
🔗 r/LocalLLaMA One year ago DeepSeek R1 was 25 times bigger than Gemma 4 rss
I'm mind blown by the fact that about a year ago DeepSeek R1 came out with a MoE architecture at 671B parameters and today Gemma 4 MoE is only 26B and is genuinely impressive. It's 25 times smaller, but is it 25 times worse?
I'm excited about the future of local LLMs.
submitted by /u/rinaldo23
-
- April 04, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-04 rss
IDA Plugin Updates on 2026-04-04
New Releases:
Activity:
-
🔗 r/reverseengineering How I stole AES keys from a microcontroller using power analysis (ChipWhisperer walkthrough) rss
submitted by /u/PassengerKnown4806
-
🔗 jj-vcs/jj v0.35.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- Workspaces can now have their own separate configuration. For instance, you can use `jj config set --workspace` to update a configuration option only in the current workspace.
- After creating a local bookmark, it is now possible to use `jj bookmark track` to associate the bookmark with a specific remote before pushing it. When pushing a tracked bookmark, it is not necessary to use `--allow-new`.
- The new `jj git colocation enable` and `jj git colocation disable` commands allow converting between colocated and non-colocated workspaces.
Breaking changes
- The `remote_bookmarks(remote=pattern)` revset now includes Git-tracking bookmarks if the specified `pattern` matches `git`. The default is `remote=~exact:"git"` as before.
- The deprecated flag `--summary` of `jj abandon` has been removed.
- The deprecated command `jj backout` has been removed, use `jj revert` instead.
- The following deprecated config options have been removed: `signing.sign-all`, `core.watchman.register_snapshot_trigger`, `diff.format`
Deprecations
- `jj bisect run --command <cmd>` is deprecated in favor of `jj bisect run -- <cmd>`.
- `jj metaedit --update-committer-timestamp` was renamed to `jj metaedit --force-rewrite` since the old name (and help text) incorrectly suggested that the committer name and email would not be updated.
New features
- Workspaces may have an additional layered configuration, located at `.jj/workspace-config.toml`. `jj config` subcommands which took layer options like `--repo` now also support `--workspace`.
- `jj bookmark track` can now associate new local bookmarks with a remote. Tracked bookmarks can be pushed without `--allow-new`. #7072
- The new `jj git colocation` command provides sub-commands to show the colocation state (`status`), to convert a non-colocated workspace into a colocated workspace (`enable`), and vice versa (`disable`).
- New `jj tag set`/`delete` commands to create/update/delete tags locally. Created/updated tags are currently always exported to Git as lightweight tags. If you would prefer them to be exported as annotated tags, please give us feedback on #7908.
- Templates now support a `.split(separator, [limit])` method on strings to split a string into a list of substrings.
- `-G` is now available as a short form of `--no-graph` in `jj log`, `jj evolog`, `jj op log`, `jj op show` and `jj op diff`.
- `jj metaedit` now accepts a `-m`/`--message` option to non-interactively update the change description.
- The `CryptographicSignature.key()` template method now also works for SSH signatures and returns the corresponding public key fingerprint.
- Added `template-aliases.empty_commit_marker`. Users can override this value in their config to change the "(empty)" label on empty commits.
- Add support for `--when.workspaces` config scopes.
- Add support for `--when.hostnames` config scopes. This allows configuration to be conditionally applied based on the hostname set in `operation.hostname`.
- `jj bisect run` accepts the command and arguments to pass to the command directly as positional arguments, such as `jj bisect run --range=..main -- cargo check --all-targets`.
- Divergent changes are no longer marked red in immutable revisions. Since the revision is immutable, the user shouldn't take any action, so the red color was unnecessarily alarming.
- New commit template keywords `local_tags`/`remote_tags` to show only local/remote tags. These keywords may be useful in non-colocated Git repositories where local and exported `@git` tags can point to different revisions.
- `jj git clone` now supports the `--branch` option to specify the branch(es) to fetch during clone. If present, the first matching branch is used as the working-copy parent.
- Revsets now support logical operators in string patterns.
Fixed bugs
- Running `jj metaedit --author-timestamp` twice with the same value no longer edits the change twice in some cases.
- `jj squash`: fixed improper revision rebase when both `--insert-after` and `--insert-before` were used.
- `jj undo` can now revert a "fetch"/"import" operation that involves tag updates. #6325
- Fixed parsing of `files(expr)` revset expressions including parentheses. #7747
- Fixed `jj describe --stdin` to append a final newline character.
Contributors
Thanks to the people who made this release happen!
- Alpha Chen (@kejadlen)
- Angel Ezquerra (@AngelEzquerra)
- ase (@adamse)
- Austin Seipp (@thoughtpolice)
- Benjamin Brittain (@benbrittain)
- bipul (@bipulmgr)
- Brian Schroeder (@bts)
- Bryce Berger (@bryceberger)
- Cole Helbling (@cole-h)
- Daniel Luz (@mernen)
- David Higgs (@higgsd)
- Defelo (@Defelo)
- Fedor (@sheremetyev)
- Gabriel Goller (@kaffarell)
- Gaëtan Lehmann (@glehmann)
- George Christou (@gechr)
- Ilya Grigoriev (@ilyagr)
- Isaac Corbrey (@icorbrey)
- James Coman (@jamescoman)
- Joseph Lou (@josephlou5)
- Lander Brandt (@landaire)
- Martin von Zweigbergk (@martinvonz)
- Michael Chirico (@MichaelChirico)
- Owen Brooks (@owenbrooks)
- Peter Schilling (@schpet)
- Philip Metzger (@PhilipMetzger)
- Remo Senekowitsch (@senekor)
- Ross Smyth (@RossSmyth)
- Scott Taylor (@scott2000)
- Steve Fink (@hotsphink)
- Steve Klabnik (@steveklabnik)
- Theo Buehler (@botovq)
- Theodore Dubois (@tbodt)
- Theodore Keloglou (@sirodoht)
- Yuya Nishihara (@yuja)
-
-
🔗 jj-vcs/jj v0.36.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- The documentation has moved from https://jj-vcs.github.io/jj/ to https://docs.jj-vcs.dev/. 301 redirects are being issued towards the new domain, so any existing links should not be broken.
- Fixed a race condition that could cause divergent operations when running concurrent `jj` commands in colocated repositories. It is now safe to continuously run e.g. `jj log` without `--ignore-working-copy` in one terminal while you're running other commands in another terminal. #6830
- `jj` now ignores `$PAGER` set in the environment and uses `less -FRX` on most platforms (`:builtin` on Windows). See the docs for more information, and #3502 for motivation.
Breaking changes
- In filesets or path patterns, glob matching is enabled by default. You can use `cwd:"path"` to match literal paths.
- In the following commands, string pattern arguments are now parsed the same way they are in revsets and can be combined with logical operators: `jj bookmark delete/forget/list/move`, `jj tag delete/list`, `jj git clone/fetch/push`
- In the following commands, unmatched bookmark/tag names are no longer an error. A warning will be printed instead: `jj bookmark delete/forget/move/track/untrack`, `jj tag delete`, `jj git clone/push`
- The default string pattern syntax in revsets will be changed to `glob:` in a future release. You can opt in to the new default by setting `ui.revsets-use-glob-by-default = true`.
- Upgraded `scm-record` from v0.8.0 to v0.9.0. See release notes at https://github.com/arxanas/scm-record/releases/tag/v0.9.0.
- The minimum supported Rust version (MSRV) is now 1.89.
- On macOS, the deprecated config directory `~/Library/Application Support/jj` is not read anymore. Use `$XDG_CONFIG_HOME/jj` instead (defaults to `~/.config/jj`).
- Sub-repos are no longer tracked. Any directory containing `.jj` or `.git` is ignored. Note that Git submodules are unaffected by this.
Deprecations
- The `--destination`/`-d` arguments for `jj rebase`, `jj split`, `jj revert`, etc. were renamed to `--onto`/`-o`. The reasoning is that `--onto`, `--insert-before`, and `--insert-after` are all destination arguments, so calling one of them `--destination` was confusing and unclear. The old names will be removed at some point in the future, but we realize that they are deep in muscle memory, so you can expect an unusually long deprecation period.
- `jj describe --edit` is deprecated in favor of `--editor`.
- The config options `git.auto-local-bookmark` and `git.push-new-bookmarks` are deprecated in favor of `remotes.<name>.auto-track-bookmarks`. For example: `[remotes.origin] auto-track-bookmarks = "glob:*"`. For more details, refer to the docs.
- The flag `--allow-new` on `jj git push` is deprecated. In order to push new bookmarks, please track them with `jj bookmark track`. Alternatively, consider setting up an auto-tracking configuration to avoid the chore of tracking bookmarks manually. For example: `[remotes.origin] auto-track-bookmarks = "glob:*"`. For more details, refer to the docs.
New features
-
jj commit,jj describe,jj squash, andjj splitnow accept
--editor, which ensures an editor will be opened with the commit
description even if one was provided via--message/-m. -
All
jjcommands show a warning when the providedfilesetexpression
doesn't match any files. -
Added
files()template function toDiffStats. This supports per-file stats
likelines_added()andlines_removed() -
Added
join()template function. This is different fromseparate()in that
it adds a separator between all arguments, even if empty. -
RepoPathtemplate type now has aabsolute() -> Stringmethod that returns
the absolute path as a string. -
Added
format_path(path)template alias that controls how file paths are printed
withjj file list. -
New built-in revset aliases
visible()andhidden(). -
Unquoted
*is now allowed in revsets.bookmarks(glob:foo*)no longer
needs quoting. -
jj prev/next --no-editnow generates an error if the working-copy has some
children. -
A new config option
remotes.<name>.auto-track-bookmarkscan be set to a
string pattern. New bookmarks matching it will be automatically tracked for
the specified remote. See
the docs. -
jj lognow supports a--countflag to print the number of commits instead
of displaying them.
Fixed bugs
- `jj fix` now prints a warning if a tool failed to run on a file. #7971
- Shell completion now works with non-normalized paths, fixing the previous panic and allowing prefixes containing `.` or `..` to be completed correctly. #6861
- Shell completion now always uses forward slashes to complete paths, even on Windows. This renders completion results viable when using jj in Git Bash. #7024
- Unexpected keyword arguments now return a parse failure for the `coalesce()` and `concat()` templating functions.
- The Nushell completion script documentation now includes the `-f` option, keeping it up to date. #8007
- Ensured that with Git submodules, remnants of your submodules do not show up in the working copy after running `jj new`. #4349
Contributors
Thanks to the people who made this release happen!
- abgox (@abgox)
- ase (@adamse)
- Björn Kautler (@Vampire)
- Bryce Berger (@bryceberger)
- Chase Naples (@cnaples79)
- David Higgs (@higgsd)
- edef (@edef1c)
- Evan Mesterhazy (@emesterhazy)
- Fedor (@sheremetyev)
- Gaëtan Lehmann (@glehmann)
- George Christou (@gechr)
- Hubert Lefevre (@Paluche)
- Ilya Grigoriev (@ilyagr)
- Jonas Greitemann (@jgreitemann)
- Joseph Lou (@josephlou5)
- Julia DeMille (@judemille)
- Kaiyi Li (@06393993)
- Kyle Lippincott (@spectral54)
- Lander Brandt (@landaire)
- Lucio Franco (@LucioFranco)
- Luke Randall (@lukerandall)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Mitchell Skaggs (@magneticflux-)
- Peter Schilling (@schpet)
- Philip Metzger (@PhilipMetzger)
- QingyaoLin (@QingyaoLin)
- Remo Senekowitsch (@senekor)
- Scott Taylor (@scott2000)
- Stephen Jennings (@jennings)
- Steve Klabnik (@steveklabnik)
- Tejas Sanap (@whereistejas)
- Tommi Virtanen (@tv42)
- Velociraptor115 (@Velociraptor115)
- Vincent Ging Ho Yim (@cenviity)
- Yuya Nishihara (@yuja)
-
🔗 jj-vcs/jj v0.37.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- A new syntax for referring to hidden and divergent change IDs is available: `xyz/n` where `n` is a number. For instance, `xyz/0` refers to the latest version of `xyz`, while `xyz/1` refers to the previous version of `xyz`. This allows you to perform actions like `jj restore --from xyz/1 --to xyz` to restore `xyz` to its previous contents, if you made a mistake. For divergent changes, the numeric suffix will always be shown in the log, allowing you to disambiguate them in a similar manner.
Breaking changes
-
String patterns in revsets, command
arguments, and configuration are now parsed as globs by default. Use
substring:orexact:prefix as needed. -
remotes.<name>.auto-track-bookmarksis now parsed the same way they
are in revsets and can be combined with logical operators. -
jj bookmark track/untracknow accepts--remoteargument. If omitted, all
remote bookmarks matching the bookmark names will be tracked/untracked. The
old<bookmark>@<remote>syntax is deprecated in favor of<bookmark> --remote=<remote>. -
On Windows, symlinks that point to a path with
/won't be supported. This
path is invalid on Windows. -
The template alias
format_short_change_id_with_hidden_and_divergent_info(commit)
has been replaced byformat_short_change_id_with_change_offset(commit). -
The following deprecated config options have been removed:
git.push-bookmark-prefixui.default-descriptionui.diff.formatui.diff.tool- The deprecated
commit_id.normal_hex()template method has been removed.
-
Template expansion that did not produce a terminating newline will not be
fixed up to provide one byjj log,jj evolog, orjj op log. -
The
diffconflict marker style can now use\\\\\\\markers to indicate
the continuation of a conflict label from the previous line.
Deprecations
- The `git_head()` and `git_refs()` functions will be removed from revsets and templates. `git_head()` should point to the `first_parent(@)` revision in colocated repositories. `git_refs()` can be approximated as `remote_bookmarks(remote=glob:*) | tags()`.
New features
- Updated the executable bit representation in the local working copy to allow ignoring executable-bit changes on Unix. By default we try to detect the filesystem's behavior, but this can be overridden manually by setting `working-copy.exec-bit-change = "respect" | "ignore"`.
- `jj workspace add` now also works for empty destination directories.
- The `jj git remote` family of commands now supports different fetch and push URLs.
- The `[colors]` table now supports a `dim = true` attribute.
- In color-words diffs, context line numbers are now rendered with decreased intensity.
- Hidden and divergent commits can now be unambiguously selected using their change ID combined with a numeric suffix. For instance, if there are two commits with change ID `xyz`, then one can be referred to as `xyz/0` and the other as `xyz/1`. These suffixes are shown in the log when necessary to make a change ID unambiguous.
- `jj util gc` now prunes unreachable files in `.jj/repo/store/extra` to save disk space.
- Early version of a `jj file search` command for searching for a pattern in files (like `git grep`).
- Conflict labels now contain information about where the sides of a conflict came from (e.g. `nlqwxzwn 7dd24e73 "first line of description"`).
- `--insert-before` now accepts a revset that resolves to an empty set when used with `--insert-after`. The behavior is similar to `--onto`.
- `jj tag list` now supports a `--sort` option.
- The `TreeDiffEntry` type now has a `display_diff_path()` method that formats renames/copies appropriately.
- `TreeDiffEntry` now has a `status_char()` method that returns single-character status codes (M/A/D/C/R).
- The `CommitEvolutionEntry` type now has a `predecessors()` method which returns the predecessor commits (previous versions) of the entry's commit.
- `CommitEvolutionEntry` now has an `inter_diff()` method which returns a `TreeDiff` between the entry's commit and its predecessor version. Optionally accepts a fileset literal to limit the diff.
- `jj file annotate` now reports an error for non-files instead of succeeding and displaying no content.
- `jj workspace forget` now warns about unknown workspaces instead of failing.
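The executable-bit behavior above is controlled by ordinary TOML config. A minimal sketch of how the override might look in jj's config file (the option name and the two values come from the note above; choosing `"ignore"` here is just an illustration):

```toml
# Override filesystem auto-detection and ignore executable-bit
# changes in the local working copy.
[working-copy]
exec-bit-change = "ignore"  # or "respect"
```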
Fixed bugs
- Broken symlinks on Windows. #6934
- Fixed failure when exporting moved/deleted annotated tags to Git. Moved tags are exported as lightweight tags.
- `jj gerrit upload` now correctly handles mixed explicit and implicit Change-Ids in chains of commits. #8219
- `jj git push` now updates partially-pushed remote bookmarks accordingly. #6787
- Fixed a problem loading large Git packfiles. GitoxideLabs/gitoxide#2265
- The builtin pager won't get stuck when stdin is redirected.
- `jj workspace add` now prevents creating an empty workspace name.
- Fixed checkout of symlinks pointing to themselves or to `.git`/`.jj` on Unix. The problem would still remain on Windows if symlinks are enabled. #8348
- Fixed a bug where jj would fail to read Git delta objects from pack files. GitoxideLabs/gitoxide#2344
Contributors
Thanks to the people who made this release happen!
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Bryce Berger (@bryceberger)
- Carlos Knippschild (@chuim)
- Cole Helbling (@cole-h)
- David Higgs (@higgsd)
- Eekle (@Eekle)
- Gaëtan Lehmann (@glehmann)
- Ian Wrzesinski (@isuffix)
- Ilya Grigoriev (@ilyagr)
- Julian Howes (@jlnhws)
- Kaiyi Li (@06393993)
- Lukas Krejci (@metlos)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Ori Avtalion (@salty-horse)
- Scott Taylor (@scott2000)
- Shaoxuan (Max) Yuan (@ffyuanda)
- Stephen Jennings (@jennings)
- Steve Fink (@hotsphink)
- Steve Klabnik (@steveklabnik)
- Theo Buehler (@botovq)
- Thomas Castiglione (@gulbanana)
- Vincent Ging Ho Yim (@cenviity)
- xtqqczze (@xtqqczze)
- Yuantao Wang (@0WD0)
- Yuya Nishihara (@yuja)
🔗 jj-vcs/jj v0.38.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.
Release highlights
- Per-repo and per-workspace config is now stored outside the repo, for security reasons. This is not a breaking change because we automatically migrate legacy repos to this new format. `.jj/repo/config.toml` and `.jj/workspace-config.toml` should no longer be used.
Breaking changes
- The minimum supported `git` command version is now 2.41.0. macOS users will need to either upgrade "Developer Tools" to 26 or install Git from e.g. Homebrew.
- The deprecated `ui.always-allow-large-revsets` setting and `all:` revset modifier have been removed.
- `<name>@<remote>` revset symbols can also be resolved to remote tags. Tags are prioritized ahead of bookmarks.
- Legacy placeholder support used for unset `user.name` or `user.email` has been removed. Commits containing these values will now be pushed with `jj git push` without producing an error.
- If any side of a conflicted file is missing a terminating newline, then the materialized file in the working copy will no longer be terminated by a newline.
Deprecations
- The revset function `diff_contains()` has been renamed to `diff_lines()`.
New features
- `jj git fetch` now shows details of abandoned commits (change IDs and descriptions) by default, matching the `jj abandon` output format. #3081
- `jj workspace root` now accepts an optional `--name` argument to show the root path of the specified workspace (defaults to the current one). When given a workspace that was created before this release, it errors out.
- `jj git push --bookmark <name>` will now automatically track the bookmark if it isn't tracked with any remote already.
- Added a `git_web_url([remote])` template function that converts a Git remote URL to a web URL, suitable for opening in a browser. Defaults to the "origin" remote.
- New `divergent()` revset function for divergent changes.
- String pattern values in revsets and templates can now be substituted by aliases. For example, `grep(x) = description(regex:x)` now works.
- A new config option `remotes.<name>.auto-track-created-bookmarks` behaves similarly to `auto-track-bookmarks`, but it only applies to bookmarks created locally. Setting it to `"*"` is now the closest replacement for the deprecated `git.push-new-bookmarks` option.
- `jj tag list` can now be filtered by revset.
- Conflict markers will use LF or CRLF as the line ending according to the contents of the file. #7376
- New experimental `jj git fetch --tag` flag to fetch tags in the same way as bookmarks. If specified, tags won't be fetched implicitly, and only tags matching the pattern will be fetched as `<name>@<remote>` tags. The fetched remote tags will be tracked by the local tags of the same name.
- New `remote_tags()` revset function to query remote tags.
- New builtin `hyperlink()` template function that gracefully falls back to plain text when outputting to a non-terminal, instead of emitting raw OSC 8 escape codes. #7592
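Two of the features above are plain config entries. A minimal config.toml sketch, assuming a remote named "origin" (the option names come from the notes above; the remote name and the `grep` alias are illustrative):

```toml
# Auto-track only bookmarks created locally; "*" is the closest
# replacement for the deprecated git.push-new-bookmarks option.
[remotes.origin]
auto-track-created-bookmarks = "*"

# String-pattern substitution via an alias: grep(x) expands
# to description(regex:x).
[revset-aliases]
'grep(x)' = 'description(regex:x)'
```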
Fixed bugs
- `jj git init --colocate` now refuses to run inside a Git worktree, providing a helpful error message with alternatives. #8052
- `jj git push` now ensures that tracked remote bookmarks are updated even if there are no mappings in the Git fetch refspecs. #5115
- `jj git fetch/push` now forwards most of `git`'s stderr output, such as authentication requests. #5760
- Conflicted bookmarks and tags in `trunk()` will no longer generate verbose warnings. The configured `trunk()` alias will temporarily be disabled. #8501
- Dynamic shell completion for `jj config unset` now only completes configuration options which are set. #7774
- Dynamic shell completion no longer attempts to resolve aliases at the completion position. This previously prevented a fully-typed alias from being accepted on some shells and, on bash, replaced it entirely with its expansion. Now, the completion will only resolve the alias, and suggest candidates accordingly, after the cursor has been advanced to the next position. #7773
- Setting the editor via `ui.editor`, `$EDITOR`, or `JJ_EDITOR` now respects shell quoting.
- `jj gerrit upload` will no longer swallow errors, and will surface failures if changes fail to get pushed to Gerrit. #8568
- `jj file track --include-ignored` now works when `fsmonitor.backend = "watchman"`. #8427
- Conflict labels are now preserved correctly when restoring files from commits with different conflict labels.
- The empty tree is now always written when the working copy is empty. #8480
- When using the Watchman filesystem monitor, changes to `.gitignore` now trigger a scan of the affected subtree so newly unignored files are discovered. #8427
- `--quiet` now hides progress bars.
Contributors
Thanks to the people who made this release happen!
- Benjamin Davies (@Benjamin-Davies)
- Bryce Berger (@bryceberger)
- Chris Rose (@offbyone)
- Daniel Morsing (@DanielMorsing)
- David Fröhlingsdorf (@2079884FDavid)
- David Higgs (@higgsd)
- David Rieber (@drieber)
- Federico G. Schwindt (@fgsch)
- Gaëtan Lehmann (@glehmann)
- George Christou (@gechr)
- itstrivial
- Jeff Turner (@jefft)
- Jonas Greitemann (@jgreitemann)
- Jonas Helgemo (@jonashelgemo)
- Joseph Lou (@josephlou5)
- Kaiyi Li (@06393993)
- Lukas Wirth (@Veykril)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Paul Smith (@paulsmith)
- Pavan Kumar Sunkara (@pksunkara)
- Philip Metzger (@PhilipMetzger)
- Remo Senekowitsch (@senekor)
- Sami Jawhar (@sjawhar)
- Scott Taylor (@scott2000)
- Simone Cattaneo (@simonecattaneo91)
- Steve Klabnik (@steveklabnik)
- tom (@lecafard)
- Vincent Ging Ho Yim (@cenviity)
- WD (@0WD0)
- xtqqczze (@xtqqczze)
- Yuya Nishihara (@yuja)
- yz (@yzheng453)
🔗 jj-vcs/jj v0.39.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.
Release highlights
- The new `jj arrange` command brings up a TUI where you can reorder and abandon revisions. #1531
- `jj bookmark advance` automatically moves bookmarks forward to a target revision (defaults to `@`) using the customization points `revsets.bookmark-advance-from` and `revsets.bookmark-advance-to`. It is heavily inspired by the longstanding community alias `jj tug`.
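The two customization points for `jj bookmark advance` are revset settings. A sketch of what overriding them might look like (the keys come from the note above; the values shown are illustrative guesses, not the actual defaults):

```toml
[revsets]
# Which bookmarks are candidates to advance, and where to advance to.
bookmark-advance-from = "mine()"
bookmark-advance-to = "@"
```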
Breaking changes
- Dropped support for legacy index files written by jj < 0.33. New index files will be created as needed.
- The following deprecated config options have been removed: `core.fsmonitor`, `core.watchman.register-snapshot-trigger`.
- The deprecated command `jj op undo` has been removed. Use `jj op revert` or `jj undo`/`redo` instead.
Deprecations
- `jj debug snapshot` is deprecated in favor of `jj util snapshot`. Although this was an undocumented command in the first place, it will be removed after 6 months (v0.45.0) to give people time to migrate away.
New features
- Added support for push options in `jj git push` with the `--option` flag. This allows users to pass options to the remote server when pushing commits. The short alias `-o` is also supported.
- `jj new` now evaluates the `new_description` template to populate the initial commit description when no `-m` message is provided.
- Templates now support `first()`, `last()`, `get(index)`, `reverse()`, `skip(count)`, and `take(count)` methods on list types for more flexible list manipulation.
- New `builtin_draft_commit_description_with_diff` template that includes the diff in the commit description editor, making it easier to review changes while writing commit messages.
- Revsets and templates now support `name:x` pattern aliases such as `'grep:x' = 'description(regex:x)'`.
- Filesets now support user aliases.
- `jj workspace add` now links with relative paths. This enables workspaces to work inside containers or when moved together. Existing workspaces with absolute paths will continue to work as before.
- `jj undo` now also outputs what operation was undone, in addition to the operation restored to.
- Bookmarks with two or more consecutive `-` characters no longer need to be quoted in revsets. For example, `jj diff -r '"foo--bar"'` can now be written as `jj diff -r foo--bar`.
- New flag `--simplify-parents` on `jj rebase` to apply the same transformation as `jj simplify-parents` on the rebased commits. #7711
- `jj rebase --branch` and `jj rebase --source` will no longer return an error if the given argument resolves to an empty revision set (`jj rebase --revisions` already behaved this way). Instead, a message will be printed to inform the user why nothing has changed.
- Changed the Git representation of conflicted commits to include files from the first side of the conflict. This should prevent unchanged files from being highlighted as "added" in editors when checking out a conflicted commit in a colocated workspace.
- New template function `Timestamp::since(ts)` that returns the `TimestampRange` between two timestamps. It can be used in conjunction with `.duration()` in order to obtain a human-friendly duration between two `Timestamp`s.
- Added a new `jj util snapshot` command to manually or programmatically trigger a snapshot. This introduces an official alternative to the previously-undocumented `jj debug snapshot` command. The Watchman integration has also been updated to use this command instead.
- Changed background snapshotting to suppress stdout and stderr to avoid long hangs.
- `jj gerrit upload` now supports a variety of new flags documented in Gerrit's documentation. This includes, for example, `--reviewer=foo@example.com` and `--label=Auto-Submit`.
- `jj gerrit upload` now recognizes a Change-Id explicitly set via the alternative trailer `Link`, and will generate a `Link: <review-url>/id/<change-id>` trailer if the `gerrit.review-url` option is set.
- `jj gerrit upload` no longer requires the `-r` flag, and will default to uploading what you're currently working on.
- Templates now support `Serialize` operations on the result of `map()` and `if()`, when supported by the underlying type.
- `jj bookmark rename` now supports `--overwrite-existing` to allow renaming a bookmark even if the new name already exists, effectively replacing the existing bookmark.
- Conditional configuration based on environment variables with `--when.environments`. #8779
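The last item enables environment-based conditional config. A sketch of what that might look like, assuming it follows the same `[[--scope]]`/`--when` shape as jj's existing repository-conditional config; the exact matching syntax for `--when.environments` is an assumption, and the variable name and email are hypothetical:

```toml
[[--scope]]
# Hypothetical condition: apply this scope only when the
# WORK_PROFILE environment condition matches.
--when.environments = ["WORK_PROFILE"]
[--scope.user]
email = "me@work.example.com"
```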
Fixed bugs
- Windows: use native file locks (`LockFileEx`) instead of polling with file creation, fixing issues with "pending delete" semantics leaving lock files stuck.
- `jj` now safely detaches the `HEAD` of alternate Git worktrees if their checked-out branch is moved or deleted during Git export.
- `jj file track --include-ignored` now works when `fsmonitor.backend = "watchman"`. #8427
Contributors
Thanks to the people who made this release happen!
- Aaron Christiansen (@AaronC81)
- Andy Brenneke (@abrenneke)
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Benjamin Tan (@bnjmnt4n)
- Bram Geron (@bgeron)
- Bryce Berger (@bryceberger)
- Caleb White (@calebdw)
- countskm (@countdigi)
- David Higgs (@higgsd)
- Evan Simmons (@estk)
- Fedor Sheremetyev (@sheremetyev)
- Gaëtan Lehmann (@glehmann)
- George Christou (@gechr)
- Hubert Lefevre (@Paluche)
- Ian (@chronologos)
- Ilya Grigoriev (@ilyagr)
- Jaen (@jaens)
- Joseph Lou (@josephlou5)
- Josh Steadmon (@steadmon)
- Martin von Zweigbergk (@martinvonz)
- Matt Kulukundis (@fowles)
- Matt Stark (@matts1)
- max (@pr2502)
- Nika Layzell (@mystor)
- Philip Metzger (@PhilipMetzger)
- Richard Smith (@zygoloid)
- Scott Taylor (@scott2000)
- Steve Klabnik (@steveklabnik)
- Theodore Dubois (@tbodt)
- William Phetsinorath (@shikanime)
- xtqqczze (@xtqqczze)
- Yuya Nishihara (@yuja)
🔗 jj-vcs/jj v0.40.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.
Release highlights
None
Breaking changes
None
Deprecations
None
New features
- New `diff_lines_added()` and `diff_lines_removed()` revset functions for matching content on only one side of a diff.
- The `end` parameter in the `String.substr(start, end)` templating method is now optional. If not given, `substr()` returns from `start` to the end of the string.
- `WorkspaceRef` templates now provide a `.root()` method to show the absolute path to each workspace root.
- The `jj arrange` TUI now includes immediate parents and children. They are not selectable and are dimmed by default.
- `jj arrange` uses the default log template (`builtin_log_compact`) instead of the shorter commit summary style.
- In the `jj arrange` TUI, the "swap up/down" actions now move along graph edges even if the commit rows are not adjacent.
- Diff colors can now be configured differently for each format.
- `jj op log` now includes the name of the workspace the operation was created from.
- The `config()` template function now accepts a `Stringify` expression instead of a `LiteralString`. This allows looking up configuration values dynamically.
- `jj op show`, `jj op diff`, and `jj op log -p` now only show "interesting" revisions by default (defined by `revsets.op-diff-changes-in`). A new flag, `--show-changes-in`, can be used to override this. #6083
Fixed bugs
- `.gitignore` files with a UTF-8 BOM can now be parsed correctly.
- Fixed incompatibility with gpgsm 2.5.x.
Contributors
Thanks to the people who made this release happen!
- Aaron Sutton (@aaronjsutton)
- Adam Sandberg Eriksson (@adamse)
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Benjamin Tan (@bnjmnt4n)
- Ben Warren (@warrenbhw)
- Bryant Chandler (@brychanrobot)
- David Higgs (@higgsd)
- Filip Weiss (@fiws)
- Gabriel Goller (@kaffarell)
- Gaëtan Lehmann (@glehmann)
- Ilya Grigoriev (@ilyagr)
- Jeff Turner (@jefft)
- Joseph Lou (@josephlou5)
- Josh Steadmon (@steadmon)
- KITAGAWA Yasutaka (@kit494way)
- Liam (@terror)
- Li-Wen Hsu (@lwhsu)
- Martin von Zweigbergk (@martinvonz)
- Philip Metzger (@PhilipMetzger)
- Poliorcetics (@poliorcetics)
- Remo Senekowitsch (@senekor)
- Rob Pilling (@bobrippling)
- Scott Taylor (@scott2000)
- Shnatu
- Stephen Prater (@stephenprater)
- Yuya Nishihara (@yuja)
- Zeyi Fan (@fanzeyi)
🔗 r/Leeds D&D for absolute beginners? rss
Eyup! I’m sure it’s been asked tons of times before, but I’m wondering if there are any D&D groups about that are happy to take on a newbie? I’m in my 30s and fast realising I spend far too much time at home, I need to socialise!
I’m based in Bramley and had a look at leodis/grand strategium (I think) online but couldn’t see anything that was D&D newbie specific!
TIA!
submitted by /u/amzlrr
[link] [comments] -
🔗 r/Leeds Rate my Day rss
Context : https://www.reddit.com/r/Leeds/s/ybY3WkHnkU
I asked for the perfect day in Leeds.
This is what we did:
Leeds Art Gallery Cafe - Croissant & Coffee
Leeds Art Gallery
Henry Moore Institute
Crash Records
Alfonsos Bodega Deli - Fit Sandwich
Kirkgate Market
Corn Exchange - lots of overwhelmingly cute indie shops
Vinyl Ground - great coffee/wine bar
Salt Calls Landing - cute little balcony
NQ64 - rinsed my student ID as a 38-year-old and played Time Crisis 'til my heart's content
Belgrave Music Hall & Canteen - don’t order the sausage fest
submitted by /u/Iamtheonlylauren
[link] [comments] -
🔗 r/reverseengineering I built a CTF-style AI security game. Looking for feedback from students and professionals rss
submitted by /u/delphisecurity
[link] [comments] -
🔗 r/wiesbaden Where. Can. You. Get. Wild Garlic. Here. rss
I need exact info. Thanks.
submitted by /u/DonerTheBonerDonor
[link] [comments] -
🔗 r/LocalLLaMA Gemma 4 31B beats several frontier models on the FoodTruck Bench rss
Gemma 4 31B takes an incredible 3rd place on FoodTruck Bench, beating GLM 5, Qwen 3.5 397B and all Claude Sonnets! I'm looking forward to how they'll explain the result. Based on the previous models that failed to finish the run, it would seem that Gemma 4 handles long-horizon tasks better and actually listens to its own advice when planning for the next day of the run. EDIT: I'm not the author of the benchmark, I just like it; it looks fun, unlike most of them. submitted by /u/Nindaleth
[link] [comments] -
🔗 r/Leeds [UPDATE ❤️] Delivery guy left the gate open, now there’s an old, sort of deaf and very silly dog on the loose. Anyone seen owt? rss
After thirty hours away from home, lovely little Eddi was found by a group of fantastic young lads, who spotted her and guarded her while a couple of them went to call.
She's been through the wars, ending up in a field miles away, I guess using the stream as her water supply. She's very frail and shellshocked, but she's home.
We don’t have enough words to express how grateful we are. Thank you, thank you, thank you ❤️
We’ve had neighbours searching. Local dog walkers keeping an eye out. People sharing online. The local vet ambulance team spending their shift looking, and again today. Volunteers with drones who spent their evenings searching the fields nearby. And kind words from so many of you on here to keep our spirits up.
Thank you so much. Anyone with a pet knows they are family, and the amazing people of Leeds have helped reunite Eddi to hers.
Thank you ❤️
Link to the original post here: https://www.reddit.com/r/Leeds/comments/1sbegoe/delivery_guy_left_the_gate_open_now_theres_an_old/
submitted by /u/RoyaleForFree
[link] [comments] -
🔗 r/wiesbaden Where can I find good bibimbap & mantu in Wiesi? rss
Where can I find good ones? It can also be in Mainz or Frankfurt. I'd appreciate good recommendations, ideally also with vegetarian options. 💘
submitted by /u/aurelocaramelo
[link] [comments] -
🔗 r/reverseengineering Analysis of WannaCry rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 r/Yorkshire That moment when you realise that your nuts aren’t where you left them…. Snaizeholme rss
submitted by /u/aspiranthighlander
[link] [comments] -
🔗 r/reverseengineering UE5 DX12 Hook (ImGui Overlay, CommandQueue Tracking, No Flicker) rss
submitted by /u/_Renz1337
[link] [comments] -
🔗 r/reverseengineering Segway-Ninebot Mobility App BLE protocol reversing rss
submitted by /u/Thin-Engineer-9191
[link] [comments] -
🔗 r/Yorkshire York City Council vote to support proportional representation rss
On Thursday March 26, during a 'full council' meeting at York's Guildhall, an 80% supermajority of City of York Council (CYC) members voted in favour of a motion titled 'Fair votes for all' which endorsed Proportional Representation (PR) and called for a National Commission on Electoral Reform. CYC is the first local authority to approve such a motion since the 2024 general election. Despite their longstanding rivalry on CYC, both the governing Labour Party and opposition Liberal Democrats supported the motion, with every councillor of those parties present for the debate voting in its favour. In her introductory remarks, Cllr Hook said the motion was about 'whether our democracy is fair and whether people can see that fairness reflected in the results that produces', which 'they can't' under First Past the Post (FPTP). Seconder Cllr Knight spoke about how 'the current system feels from the perspective of the people we represent', saying that because 'too many people cast their vote and see it make little or no difference to the outcome [of an election]', they end up 'questioning the value of taking part at all'. submitted by /u/coffeewalnut08
[link] [comments] -
🔗 r/reverseengineering Shopee.tw App Reverse Engineer Need rss
submitted by /u/khalidalsaba
[link] [comments] -
🔗 r/reverseengineering x64dbg Reversing a Jump Tutorial | Breakpoints, Zero Flag, Binary Patching & Cracking Basics - YouTube rss
submitted by /u/paulpjoby
[link] [comments] -
🔗 r/wiesbaden Suggestions for Accommodation please rss
Hi everyone,
My fiancé and I are currently looking for a 1.5–2 room apartment in Mainz or Wiesbaden. We’re both students at JGU and would really love to find something reasonably close to campus.
Our budget is around €1,200 warm. We’ve been actively searching for about three months now but unfortunately haven’t had any luck so far.
If anyone has any leads, knows of something becoming available, or can point us in the right direction, we would be incredibly grateful 🙏
Thank you so much in advance for any help!
submitted by /u/Orph3us_151
[link] [comments] -
🔗 r/LocalLLaMA Apple: Embarrassingly Simple Self-Distillation Improves Code Generation rss
submitted by /u/Mike_mi
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
sync repo: +2 releases
New releases:
- [IDAssist](https://github.com/symgraph/IDAssist): 1.6.0
- [hexinlay](https://github.com/milankovo/hexinlay): 1.2.1 -
🔗 sacha chua :: living an awesome life #YayEmacs 10: Emacs coaching with Prot: Emacs workflows and streaming rss
I realized that one of the mistakes I often make with Emacs is not asking other people for help, so I experimented with a coaching session with Prot. With his permission, here's a recording of our conversation.
View in the Internet Archive, watch/comment on YouTube, download the captions, or e-mail me your thoughts.
Resources
Chapters
- 00:00 Intro
- 00:50 Organizing my config into multiple modules and org-babel-post-tangle-hook
- 04:45 Changing namespace, renaming functions
- 07:11 Defining aliases for old functions
- 08:30 Improving my streaming setup
- 12:09 Keeping things from accidentally airing
- 14:50 Livestreaming and recording
- 15:09 Keeping track of interesting moments
- 18:19 Editing
- 20:26 Writing
- 22:34 Packaging
- 25:40 Responding to email
- 29:21 Development workflow
- 29:59 Testing
- 33:46 Learning and reminders
- 35:31 Encapsulating workflows into functions
- 37:05 Popping up notes
- 38:21 Rediscovering things in my config
- 40:31 Catching up on Emacs developments
- 41:29 diffs
- 43:08 Thinking about the community
- 44:00 org-link-preview
- 45:31 Prioritizing things to work on
- 46:39 Modelines
- 48:50 Themes would be nice to have per-frame
- 49:27 Livestreaming conversations with Prot
- 50:11 Getting together
- 54:44 Namespaces
- 55:46 Verbose function names
- 56:45 Naming conventions for ERT tests
- 57:14 shorthands
- 58:27 Bisecting config in multiple files
- 58:46 "I don't write bugs."
Rough notes to clean up over the next little while
- Meta: learning things
- Don't try to remember too many things
- Build a command that does those for you
- Ex: preparing for videos, prot-streaming-mode
- line numbers
- disable spacious padding
- long names: more chances to match it
- new frame and then making it disappear: org-capture, timer; I can use that for notes
- Tip: prefix keys are also helpful; for example, replace C-z, no one needs to suspend Emacs anyway
- `defvar-keymap`'s `:prefix` defines how it should be called as a command, which is good for handling updates to keymaps as well
- Emacs Lisp development workflow
- `diff-buffer-with-file buffer-file-name` - diff current buffer
- Renaming a symbol
- single file
- substitute
- also noting function aliases, obsolete
- substitute
- multiple files?
- `wgrep`
- keyboard macros from dired and substitute
- single file
- Bisecting config in modules?
- "I don't write bugs… Of course I'm kidding."
- Ah, I can probably use bug-hunter with a setup file
- Testing
- I think I just need to get the hang of:
- ERT, `modus-themes-test--modus-themes-load-theme`
- nameless -> shorthands
- Tip: Docstring as a declaration of intent; the docstring is the source of truth, not the code. If you write more than the minimum, then you are helping future you.
- setting things up at the beginning (Makefiles, continuously running batch mode tests, etc.)
- navigating to where I want to write the tests
- mocking functions
- I think I just need to get the hang of:
- Making more of my config reusable
- "I implement the package that I want."
- Workflows for writing, making videos, livestreaming
- wide monitor is awesome
- different font configuration with fontaine
- private stuff:
- private vertico
- turning off preview for consult
- keeping files organized
- marking chapters and highlights: using his memory for this
- just capture a timestamp and possibly a note
- could also just do the offsets manually by saving the time
- display line numbers to help people orient themselves and so they can mention it in the chat
- writing: splitting it into modules helps
- Ooh, idea, theme for streaming
- Other stuff I forgot to mention
- TODO: link preview - update my code for svgs
- Emacs modeline? Smaller lighters, buffer name, view narrowed, read-only, keyboard macro;
- streaming, microphone
Transcript
0:01: Intro: Sacha: Fantastic, this is great, I finally get to talk to you. I appreciate that you blogged so quickly about some of the things that I mentioned, and we can certainly dive right into that, or you have a lot more experience with how these conversations go, so I can let you take the lead.
Prot: Since you put in the effort to write, we already have a very good structure. The idea is, let's have your screen, so you can share your screen with Jitsi.
Sacha: Yeah. I will share my screen.
Prot: And we can go right into it. Let's see. So if you hover over… Okay, yeah, you have it.
Sacha: yeah oh you know if if at some point I should be really like fancy… Future session, we should get crdt working because that's fun.
Prot: Ah, yes. Oh, that would be nice. Yes.
Sacha: Yeah, that would be nice. All right.
0:50: Organizing my config into multiple modules and org-babel-post-tangle-hook: Sacha: So I've been making good progress in splitting up my config into multiple modules. I just have to iron out a couple of things like do I actually have to load the autoloads from the user list directory or does it automatically take care of that? Because sometimes it doesn't seem like it's doing the thing. Anyway. It's making good progress. And in fact, I came across something that I'm not sure you know about yet, or maybe you know about it and you decided not to do it. I found out that, so, okay, so here's the context. You know, when you do your literate config, you have your modules and they're actually just one big file, like one big source block with a commentary and everything in it. Yeah, yeah. So I found out that you can use a hook if you want to, to add stuff to the tangled files afterwards. So the way I set it up with my config is I still want all the different functions scattered all over the place because I'm not yet as organized as you in terms of the modules. So the org-babel-post-tangle-hook here, post. Yeah, yeah, yeah, post
Prot: So what did you do with that? Let's see.
Sacha: and boilerplate… has that boilerplate here we go so what it's what this does is when it tangles it it then goes back into the file and it inserts all that extra text and the footer into the tangled files so I still have my links to
Prot: Nice.
Sacha: the different source files where it comes from. So this is the section where it comes from but I also have all the extra lovely commentary and stuff so I'm like…
Prot: Ah, that's smart. That's good. That's good. Yes.
Sacha: That way, you don't have to keep repeating things. Although I guess if you really wanted to repeat things you could you could theoretically have the license just as a no web reference and then have it go in there automatically. anyway so I thought that was really cool so I'm making progress on the things that I had mentioned in the in the blog post about organizing my config into multiple modules and other yeah…
Prot: And how far are you in that project? How far are you?
Sacha: Let me see. I can look at the sacha.el here and I can do an occur on the files that have the lines that have the defun. I only have 482 defuns to get rid of. This is already a lot less than what I started with because like you, I have a very large… Almost 40,000 lines in this sacha.org.
Prot: Yeah, yeah, that's massive. Yeah.
Sacha: It's fun and it's interesting. It is a little reassuring to know that people still rely on your published modules instead of actually, like, do people take your config? I know you've got stuff in the config that makes it possible for people to just load it and add their customizations on top, but do you hear from a lot of people who do that?
Prot: From a few of them, yes. And this is why I actually created those customizations. But I must say, I have been trying to
Sacha: Yeah, yeah.
Prot: make it more difficult for them. So I used to have a use package, but now I disabled it on purpose and I have my own macros, so that somebody doesn't just copy-paste. And I don't do this to be mean, but I do it because this way somebody will have to think about, like, okay, what is this? What am I doing here?
Sacha: yeah I figure making making them still do that okay what am I doing here while still being able to automatically load all the function definitions will probably get them over that you know like make it a little bit easier for them so at least that way like right now it is difficult to copy things from my config like like you're so like okay maybe this is a feature but you know, maybe changing it will be nice.
4:45: Changing namespace, renaming functions: Sacha: The other big thing that I need to do with my config is I'm thinking about shifting everything to the sacha- namespace instead of the my- namespace, which is going to be a lot of renaming, which is actually, it was actually the question that I had about renaming things, not necessarily coming up with clever names that have good acronyms like you do. And I love that the humor that you have in there, but like, like just mechanically, are we talking wgrep is like, is there a more modern, emacs 31 way to rename things? Am I just using erefactor or like replace-regexp? What do you do when you need to rename a symbol in possibly multiple files?
Prot: If it's in multiple files, I do the grep approach. So it's not that sophisticated, but it works. Because the thing with the multiple files is, and it goes also to what you were telling me in that article, is first you organize, and then you refactor. It's that idea. The multiple files will not have a lot of extraneous information. You will not be matching, at least in theory, you will not be matching too many false positives.
Sacha: Yeah, and if you're doing a single file,
Prot: So you won't have to sort it.
Sacha: what do you like to do?
Prot: I have a package called substitute. One of the ways I do it is just substitute the symbol at point. But of course, this is just a minor convenience. You can do that with a query-replace. I'm not saying that you really need the package. But the idea is that you do it and you know that it works. Like, for me… I know that it works in the file. So for me, that's very reliable. But the other thing I should mention is keyboard macros from dired combined with substitute. So you start from a dired buffer, and you go file by file. That's the general idea. And in each file, you will perform, for example, a search to the symbol. Once you are on the symbol, you do the substitute-replace command, and then you move to the next file. So that is the workflow. And I do that a lot, for example, with my themes, because they have a lot of repetitive code, like each theme.
7:11: Defining aliases for old functions: Sacha: Okay, the other thing that I was thinking of as a workflow improvement here, because I'm sure that I'm going to keep calling them by their old names, especially interactively, since I have a lot of commands that go off of M-x my-do-this-and-this, is I might also need to think about adding function aliases automatically. One way I was thinking of doing that was just, you know, iterating over the obarray and bulk-defining aliases so that all the sacha- stuff is also reachable under the my- names. Or, not manually of course, but creating forms for defining the aliases somewhere. But I was wondering if this was something that you already do as part of your workflow, like when you rename things?
Prot: No, I haven't. When I rename things for my packages, I do use aliases. But for my own code, if I rename it, basically, it's
Sacha: Yeah, yeah.
Prot: just the latest name. So I don't try to keep aliases around. Because I eventually use a similar name, it won't be very different.
Sacha: Huh, all right. Yeah, yeah. I mean, it's there, you
Prot: But what you said about the obarray makes perfect sense.
Sacha: might as well do it automatically, right? Okay, all right. Oh, okay, okay.
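Sacha's obarray idea could be sketched roughly like this (the function name and prefixes here are illustrative, not from either config): walk the obarray and define an alias under the old name for every function in the new namespace.

```elisp
;; Illustrative sketch: alias every `sacha-' function back to its old
;; `my-' name so the old interactive names keep working after a rename.
(defun sacha-define-compat-aliases ()
  "Define `my-' aliases for all `sacha-' functions."
  (mapatoms
   (lambda (sym)
     (let ((name (symbol-name sym)))
       (when (and (fboundp sym)
                  (string-prefix-p "sacha-" name))
         ;; 6 is the length of the "sacha-" prefix being stripped.
         (defalias (intern (concat "my-" (substring name 6)))
           sym))))))
```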
8:30: Improving my streaming setup: Sacha: I can do my Jitsi thing so I can see everyone and the screen at the same time; the screen is very small. Okay, so that's… I do have a dual monitor setup, which
Prot: Yeah, you need that dual monitor setup,
Sacha: is why I was like, OK, maybe I can start looking at your B-frame things. And in fact, in the minutes before I called, I figured out how to use MasterChat CLI to get the YouTube live chat into a command line program, which means that it can be run from call-process or make-process or other such wonderful things. So now it's an Emacs buffer. And then I was thinking, okay, maybe I'll make a pause frame or like a B-framed like dedicated frame for it so that I can have the chat of a live stream displayed within Emacs because you know, it's Emacs. We should do that. Yeah, yeah, yeah.
Prot: Nice. Yes, of course. And you can have it in a side window dedicated buffer.
Sacha: Although I might actually have to write my own like Node.js program so that I can also send text from it, from Emacs. I think the masterchat CLI, it only displays, but the library that it uses, if you pass it your browser cookie, you can use it to send messages back to chat as well. The reason I had liked Twitch before was because Twitch had some kind of IRC type thing that you could connect to. And then that meant, of course, that you can use ERC from within Emacs in order to send stuff to it. Anyway, live streaming and chatting, I've also been getting into that lately. And I was going to pick your brain about this whole like workflow for making videos or live streaming them. And more importantly, going back afterwards and remembering to post them or edit it, in case you forgot something or whatever. So if you happen to have any tips or the things that you like about your setup, I would love to hear about that.
Prot: Though in my case, the setup is really simple, and I admit that I could improve it. But it's really simple right now, where I have a wide monitor. So it's one display, I don't have two, but it's like 2500 pixels wide instead of 1920. So I have a little sidebar on the side, and there on the sidebar I put OBS, for example, and I put everything I need there on the sidebar. And then I have enough space to have whatever it is I am displaying and maybe another widget on the side. So that is in terms of the physical layout of the monitor here. And then in terms of the Emacs side, I don't have a lot going on. I have one package to load a different font configuration. So when I do streaming or videos, I will load basically the presentation setup.
Sacha: Is that Fontaine?
Prot: That's Fontaine, exactly. But again, it's not a matter of the package. You could have a function that just changes the default face, the height attribute.
Sacha: I have this monitor and then laptop, so this is my workaround for not having enough space in this desk for a super wide monitor. My husband has a super wide monitor which I like to borrow during EmacsConf. Hello child who is wonderful and likes to make cameos during my EmacsConf. Okay, I'm going to hug you.
Prot: Hello!
Sacha: Yes. So okay, so live streaming and then you just basically hop on the stream and talk about stuff.
12:09: Keeping things from accidentally airing: Sacha: I know you've mentioned things like just starting Emacs with your Scratch buffer, but yeah, how about the interesting workflows for not accidentally spilling secret stuff online?
Prot: Part of that is… so I use Vertico normally for my completions. I have some configuration for a private Vertico, where by default it doesn't display what Vertico normally displays. It's just a blank minibuffer, the way it is with the default Emacs minibuffer UI, right? But as soon as you hit TAB, or as soon as you move up and down, then it displays Vertico. So that is one way for me to make sure that I'm not showing anything I didn't want to show. The other thing is when I do videos, I don't use consult, actually, even though I like it, because of the preview functionality. I don't want to be switching between files and then have consult show something which is private. So the private Vertico is a small extension that I have, with a few functions for Vertico.
Sacha: I've been thinking about modifying the consult preview states so that I can elide more, I can skip over things that might be private. And I already have a filter function for marginalia so that it doesn't show me the values of variables that might be private. But yeah, just turning off all these things makes it a little bit easier to say, okay, I'm just going to jump on the live stream and do this thing. Some of the other fun stuff that I've been doing along the
Prot: So there is that. And for private, of course, the other thing with privacy is that you want to have a generally good sense of where you put your files. So for example, in my pictures folder, I know that I don't have anything private there. But there are some sub folders which are like personal. So I know not to go there. So it might happen, I need to show a picture, okay, I just go to the pictures folder, and I show it, no problem.
Sacha: lines of keeping things organized is if I have a stream tag on a task, I know that's safe to show on screen. And then I modified my Org jump stuff. There's a hook that you can use to narrow things to just that subtree. So at least I can jump to it and not have to worry about the rest of the context in my inbox. Trying to slowly slowly get the hang of this
14:50: Livestreaming and recording: Sacha: Okay. So it's live stream. Do you like to live stream and record at the same time locally or just live stream and then go into the YouTube thing afterwards to download?
Prot: I just do the latter.
Sacha: It takes a little bit of a while,
Prot: I just download it from Youtube afterwards
Sacha: so I'm like… I could get started on the transcription.
15:09: Keeping track of interesting moments: Sacha: Do you have anything to keep track of interesting moments that you want to revisit, or do you just, I don't know, skip around in the video, look at the transcript, whatever?
Prot: I remember, I know this sounds bad, but I remember.
Sacha: People with good memories, boo!
Prot: And generally I try to also sharpen my memory. So whenever I can practice something, I will do it like that. But otherwise, if you really need to take a note of something, you can always have a small function that just records the timestamp. Like, what is the current time? And then you know when you started, so you will know where you are in the video. Like, it would be a very simple function that simply prints the current time, you know, format-time-string,
Sacha: Yeah. I just have to write something that gets the time
Prot: in a buffer at the bottom of a buffer. And that buffer is like your interesting moments kind of thing. And if you really want, you can make that prompt you for some text, like here is the timestamp and here is like, you know Prot said a joke or whatever, you know, like…
Sacha: started from YouTube and then calculates the offset automatically, so that I can say okay, here are my chapters roughly.
Prot: Yeah, that's even more fancy. Or you could do the other thing, which is all local, which is the moment the stream starts, you hit this command, like you invoke it, so it resets the time and then it performs the calculation locally. So you can do calculations with time in Emacs. So you can perform that as well.
Sacha: Yeah, that's really straightforward. Okay, so that's definitely something that I'm going to want to think about, because video is great for enthusiasm and showing cool stuff that you might otherwise forget to mention, but it's just so slow to review afterwards.
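The timestamp-noting command Prot describes might look something like this rough sketch (all names here are invented for illustration): record when the stream starts, then append offset-stamped notes to a buffer for later use as rough chapter markers.

```elisp
;; Rough sketch (names invented): mark the stream start, then append
;; offset-stamped notes to an "interesting moments" buffer.
(defvar my-stream-start-time nil
  "Time recorded by `my-stream-start'.")

(defun my-stream-start ()
  "Record the current time as the start of the stream."
  (interactive)
  (setq my-stream-start-time (current-time)))

(defun my-stream-note (text)
  "Append TEXT to *stream-notes* with the offset from the stream start."
  (interactive "sNote: ")
  (let ((offset (float-time (time-subtract nil my-stream-start-time))))
    (with-current-buffer (get-buffer-create "*stream-notes*")
      (goto-char (point-max))
      (insert (format-seconds "%h:%.2m:%.2s" offset) " " text "\n"))))
```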
Prot: Yeah, of course, of course, of course. Just to say another thing with video, what I have found that is really helpful is to display line numbers.
Sacha: Oh yeah? Huh.
Prot: Me personally, I don't use line numbers, but I have found that when I am displaying something that others need to follow, line numbers help them. Because for example, earlier you were jumping around trying to find that hook, you were…
Sacha: Oh yeah, yeah, yeah. Okay. Display now.
Prot: And of course, me, I have experience, so I kind of know what you are doing, but somebody who is not really into it will be like, what is happening? Why are things moving up and down so quickly, right?
Sacha: Okay. And they can mention it, too, in the comments,
Prot: And they cannot track where you are.
Sacha: which is nice.
Prot: Yes, yes, of course. And also, when you are displaying something, you can say, look, on line 100, what I am doing, for example.
Sacha: I have to steal your config for the highlight line
Prot: And it's easy for everybody. Yeah.
Sacha: priority because I've been using the highlight line for that. But sometimes it overwrites things. I'm like, OK. Today it is well-behaved, so I'm glad for that.
18:19: Editing: Sacha: Making videos, all right. Just go ahead and make the videos, you just do it pretty straight, you don't do a lot of editing afterwards, I'm hearing, following the same kind of philosophy that you use for your blog posts?
Prot: That's the idea.
Sacha: All right, I should just go do things and not worry about whether the live stream demo that I just made of how I can highlight the PDF of your literate config and extract the stuff into whatever has a bug in it. And I'm like, oh, I just demonstrated that. It's okay, I can update it in the show notes. Oh, that's true, especially since
Prot: Or even better, you do a second video afterwards, a follow up.
Sacha: now I figured out that you can use org-pdfview to link to pages in the PDF. So now my index.org has the highlights from your config, and it takes me back to the page that it was on. Very, very cool stuff.
Prot: That's nice.
Sacha: Okay, so I just gotta do it.
Prot: I think Org-noter also is another package you could use for that.
Sacha: Yeah, probably. And then I just need to get… I think I've got PDF tools or pdf-view set up. And then reader, of course, looks very interesting also. So I've got to tweak my config a little bit more to get it running, because it has an external dependency. Anyway, so I've just got to do the live streaming. I was delighted: people have actually been dropping by and commenting or chatting during the live streams, which is great, because I get to remember, oh yeah, I should explain that part instead of taking it for granted.
Prot: The thing with a live stream,
Sacha: So all of that is good stuff.
Prot: because it's something you also wrote, like getting used to talking to yourself, right? So, of course, that takes some practice, but I think, yeah, you have the hang of it already.
Sacha: Something is ringing. Hang on, sorry. I forgot. That was just my reminder that the kiddo is back to school. Virtual school is fine. Anyways, OK, so just got to do it.
20:26: Writing: Sacha: Thank you for the tips. This is very helpful for
Prot: You're welcome.
Sacha: writing. I'm getting better at actually remembering to include more bits and pieces from my config, and I'm sure that now that I have them in different files, it'll be easier for me to then write the post that links to, oh yeah, here's the five other functions you need in order to make this little snippet work. But do you happen to, knowing the kinds of stuff that we like to write about, do you have any other tips from your workflow?
Prot: When it comes to sharing code like that, I already noticed while you were moving around that you have many things like my-consult, my-org, etc. What helps there is to just make those their own module right away. And from there, you know that, okay, this is either self-contained or it has an explicit require, so I can already know where I need to search for dependencies. So it's really that. It's because, for example, if you take just a my-consult function, right, of course, you know by the name that it depends on consult, but you don't know if it depends on my- common functions, for example. Right. Whereas if you have it in its own file, there will be a require at the top. So, you know, OK, require my-common-functions. And that way you can tell, okay, there is a dependency here. So then when you are to share this function, you can search for, okay, my-common-functions, is it mentioned here? Yes or no. And then you know what the dependency is.
Sacha: And I think this process of moving things into those separate files will make it easier for then, for people to say, okay, yes, I do want to try that thing. Let me check out the repository required, just load-file that particular file and then be off to the races. So we'll see how it works. I don't know if people actually… Sometimes people mention borrowing stuff from my blog. So maybe people are actually reading the non-Emacs News posts. We'll get to see that.
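As an illustration of the module layout Prot describes (a hypothetical my-consult.el, not actual code from either config), the require lines at the top are what make the dependencies searchable:

```elisp
;;; my-consult.el --- My consult tweaks -*- lexical-binding: t -*-

;; Explicit requires: anyone borrowing a function from this file can
;; see at a glance what else it depends on.
(require 'consult)
(require 'my-common-functions)

(defun my-consult-ripgrep-here ()
  "Search the current directory with `consult-ripgrep'."
  (interactive)
  (consult-ripgrep default-directory))

(provide 'my-consult)
;;; my-consult.el ends here
```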
22:34: Packaging: Sacha: Sometimes I feel like a lot of my tweaks are very idiosyncratic, right?
Prot: Yes, what I found that has helped me is I implement the
Sacha: They're very suited to the particular need that I have. And then it's difficult to say, OK, if I were going to generalize this for other people, what kind of defcustoms will I need? What kind of options? And there's always that trade-off between, yeah, but I just want to implement the next little thing that I want to make for myself versus, well, if I put in the polishing effort, then possibly other people could use it, and learn from it, and then contribute their own ideas, and then everything gets better without me having to do the work myself. So it's a bit of a balance.
Prot: package that I want. So for example with denote, but this applies to everything, denote version 0.1 is the package that I wanted. So basically, it works for me. Ever since, I have been adding other things that people want, which are, of course, good things to have. They improve the package, but I have already been using the package that I want since the beginning. So ever since, it's just adding stuff and learning about how people use it and refining the code, which everybody benefits from. So whenever you have an idea that you are like, okay, this may be too idiosyncratic, don't worry about it. Make it into a package, and then what other people need will become apparent, and then over time it will change, but the core package is still what you want.
Sacha: Although it is interesting to see, for example, with the university calendar, institution-calendar thing, it's like, okay, you can get it to work for a small number of institutions, ELPA wants… they want it to work for everyone, everywhere, all the time. Okay, that might be too general. You might need to actually have lots of other people saying what they need in order to make that happen in the first place, right?
Prot: Which at that point, of course, what you want is to write the documentation. So for example, with the institution calendar, I wrote a couple of examples: okay, how do you extend this? And yeah, I think that helps. But then of course, you cannot cover every use case; people have to also make contributions if they really care about it.
Sacha: Yeah, so I think at the moment, I've been writing for n equals one, the audience is really just me. And occasionally I hear from people who are like, oh, that's an interesting idea, let me adapt it. Sometimes if I'm really lucky, they will go and write their own package on top of the stuff that I shared, which is the ideal situation, because then I can just like, oh, yeah, I'm going to borrow that and use it. It'll have more features and they're in charge of dealing with that. But I suppose at some point it behooves me to practice. OK, I'm just going to write it as a package, pretending that this is something, as you said, this is something that I want to be able to install and use myself. Then if other people find it useful, it's a lot easier for them to experiment with and then add on to.
25:40: Responding to email: Sacha: Which goes to my second thing. Doing this and making things open to other people probably means being more responsive to email. And this is, for me, this is a bit of a challenge. I'm starting to feel less time-starved, which is good. I'm starting to actually be able to schedule things. One of these days, we should probably see if we can schedule a Prot Asks thing. I don't know if I can do two hours, but maybe I can do one hour or whatever. Anyway, the rest of it involves actually doing
Prot: For sure.
Sacha: the responsible thing and responding to issues and emails and whatever. It's always a bit of a trade-off, like, oh, do I implement this other crazy idea I have, or do I answer my email?
Prot: For that, of course, it's a challenge. I must say that a lot of the maintenance work I do is via email. Or email or Signal or Telegram. People will ask me, hey, Prot, what is this? And many of the issues are not with my package. I had an issue earlier with the modus-themes, no, the ef-themes, and eventually it was something to do with the user's configuration of some function of centaur-tabs. But I had to go into it and check. So, of course, there will be that. But I must say, it's not too bad. It's not a big issue. You can always say in your email, like, hey, please don't use this for issues; it's not a replacement for the issue tracker. Just use the issue tracker.
Sacha: I know I just have to… I think I just have to like reframe my perspective. This is a gift. Other people are taking their time and effort to do this. It's wonderful that they're trying things out and putting their… actually doing things themselves and then reaching out in case… 'cause it would be nice to get things working on more people's computers. I think that the stuff that I've been building around learning languages and doing voice input into Emacs probably… There are a lot of these things already, but they tend to also be very individual workflows and individual setups. So it'll be interesting to get to the point where we can start to even have a conversation with shared code.
Prot: About the individual workflow, again, it's not a problem because what is individual now will eventually become kind of a standard workflow. Think about org, the beginning of org. You have Carsten Dominik, who is like, you know what, this outline mode isn't enough. I need more stuff on top. And eventually we have Org. In the beginning, I imagine org was basically Carsten's org, and it became this uh this package that everybody can use however they feel like.
Sacha: I used to maintain Planner Mode before Org Mode got super popular, and I remember feeling very embarrassed when someone very, very kindly said, "I appreciate the work that you do; incidentally, the latest update kind of deleted a lot of my notes." So when you make something that other people use, sometimes your mistakes will affect more people than just you. But I'm hoping that now that disks are in the terabytes instead of whatever, people are just backing up everything and version controlling everything, and everything will be fine.
Prot: Yeah, of course, of course. Writing packages, of course, is a responsibility. The upside, though, is that because you know that it is a responsibility, you try to write cleaner code at the outset. Whereas if it's just for your own configuration, you're like, okay, this will work and I will fix it later.
29:21: Development workflow: Sacha: Yeah, and that actually brings me back to this Emacs Lisp development workflow thing. So I think one of the things that I just need to do is I just need to set up the Makefiles and the snippets and the shortcuts to say that if I'm starting a new proto-package, the thing to run the tests is there, and whatever it is that maybe even continuously runs the test when I make a change, and lets me mock up functions so that I can test some of the things that might be more interactive or might require deleting files or whatever. It's just changing my buffer configuration and whatever.
29:59: Testing: Sacha: So I occasionally write ERT tests when I feel diligent. Sometimes I'm starting to write the test first and then write the code that makes a thing, but if you happen to have any parts of your workflow that you particularly like when it comes to testing things, I would love to hear about them because I haven't gotten to that part of your config yet
Prot: Yeah, so I don't have a lot going on for that. So it's simply ERT. But what I do with the tests is really basic. So ERT, M-x ert, and then I pick the test that I want. And I must say that when it comes to tests, I can be better myself. So there are some packages I write where they have good tests, but there are others that have zero tests. So I want to reach a point where everything has tests, but it takes a lot of work.
Sacha: Yeah. I mean, like every so often I feel like very, very diligent and I'm like, okay, let's do code coverage. So I can see things with undercover. Let's write a function and make sure there's a test associated with it. And let's write a keyboard shortcut that lets me jump from the thing to the test that's associated with it or to run it. And in fact, I still need to get embark to do all these things for me so I can be looking at a function and say just rerun the test for this, please.
Prot: Just to say one low-tech feature that has helped me a lot: I use the docstring as a declaration of intent. So in the docstring, I say what the function or the variable is meant to do, like what it is meant to provide. And then if I look at the code and I'm like, ah, this doesn't work, I know that the docstring is what I wanted. It's never the code. So there is this idea that the code is the source of truth. For me, it's the opposite. The docstring is the specification. And then the code is… I was wrong. I was sloppy. I wasn't paying attention. I missed something or whatever. And the reason for that is the following: with the code, you may have used a symbol wrongly, or you may be calling something that you don't mean to call, or there is another function. Or, for example, you use mapc instead of mapcar, so you don't get the return value you expect, that sort of thing. So basically you don't have to deal with those sloppy problems. You don't have confusion there. You know that, okay, the source of truth is the docstring. This is my intention.
Sacha: I should do that more often. Now that I've changed my yasnippet for inserting functions to automatically have the docstring, I feel a little guiltier when I delete the docstring, so I am compelled to instead fill it out. But if I specify it in more detail, as you do with it becoming the statement of intent, then I can be like, OK, let's try that. It's a good practice. And then I can write the test.
Prot: And the thing with docstrings is that, of course, you are
Sacha: Yeah? This is me.
Prot: motivated to just write the minimum necessary so that you don't get the warnings, right, from checkdoc. But if you write more, then you are rewarding yourself. It's something that helps you, future you, and of course other users, because you always have to consider yourself as basically a user. I don't remember why I wrote something six months ago, so having the docstring there to actually spell it out helps me.
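The mapc versus mapcar slip Prot mentioned is worth seeing concretely: both traverse the list, but only mapcar collects the return values.

```elisp
;; `mapcar' returns the list of results; `mapc' runs the function only
;; for side effects and returns its input sequence unchanged.
(mapcar #'1+ '(1 2 3))  ; => (2 3 4)
(mapc #'1+ '(1 2 3))    ; => (1 2 3), the incremented values are discarded
```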
33:46: Learning and reminders: Sacha: I definitely have problems with working memory and long-term attention. Which actually touches on this other thing that I mentioned in my post: in your experience coaching other people, and also in your personal practice, what are you finding as good ways to keep reminding yourself, okay, these are the keyboard shortcuts I want to internalize, or this is the workflow tweak that I wanted to try? Naturally, I was thinking, maybe I make an Org file or maybe I make a quick help thing or whatever. But it's always interesting to hear about other people's workflows.
Prot: What I find most useful is to not try to memorize too many things, but whenever you are in the flow of, oh, this is a process that I want to be doing, to actually implement it as a command or a package or whatever. Basically, don't try to memorize the steps and the key bindings; build a command that does those for you. So for example, to be concrete, I mentioned earlier that for video purposes, I will enable line numbers. And I will also enable the line highlight. And I have another thing where I disable spacious-padding, the package I have. And for all this, of course, I know the key bindings, so it's F7 and F8 and F6 or whatever, right? But I'm like, I cannot remember all that. I will just write a function, and it will be prot-streaming-mode. And I enable prot-streaming-mode, and it does what I want it to do, and then I disable prot-streaming-mode, and I'm back to where I need to be.
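A hedged sketch of what such a command might look like (the body is guessed from the description here, not Prot's actual prot-streaming-mode): one global minor mode that toggles the presentation tweaks together.

```elisp
;; Illustrative only: bundle the streaming tweaks into one toggle so
;; there is a single name to remember instead of several key bindings.
(define-minor-mode my-streaming-mode
  "Toggle settings used for streaming and video demonstrations."
  :global t
  (if my-streaming-mode
      (progn
        (global-display-line-numbers-mode 1)
        (global-hl-line-mode 1))
    (global-display-line-numbers-mode -1)
    (global-hl-line-mode -1)))
```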
35:31: Encapsulating workflows into functions: Sacha: Yeah, I have a prepare-for-screencast that does something similar, changes font size, etc., etc. It's so wonderful that in Emacs, you can
Prot: Exactly.
Sacha: just keep collapsing things into functions that do the thing that you wanted, and it has access to pretty much everything. I just need to remember to actually call the thing and remember what the thing was actually called. Aliases are very helpful, so it's orderless, but it's like…
Prot: Another thing that might help is long names. Because with long names, you have more chances to match it. For example, in this case, it could be called prot-streaming-mode, but you could also call it prot-streaming-and-video-demonstrations-mode. And of course it sounds ridiculous, but if you think about it, I might search for, I do M-x and I search stream. I find it. I search video. I find it, right. I search demo. I find it. So, if you care about something, you can name it that way, and then you find it more easily. Or, of course, with aliases, you do the same, right? prot-streaming-mode, alias, prot-video-mode, alias, you know how it is. But, yeah, either of those would work. Basically, benefit from the fact that you have completion, and I imagine you also have orderless.
Sacha: So definitely that. And then
Prot: So, yeah.
Sacha: for the free form notes thing, it just occurred to me.
37:05: Popping up notes: Sacha: So in addition to your posframe stuff in your config for quickly popping up an Emacs posframe for some commands, like, do you have some things? I suppose I could just use that directly for my notes and for the chat. Do you have any other of those "quickly pop up something so that you can do something with it and then make it disappear?"
Prot: No, I haven't built a lot on that. So I have some functions I do
Sacha: That's your main thing.
Prot: with that. Specifically, I have it for the timers. For me, that's very useful. And for org-capture, but I haven't elaborated on it. Of course, I could do that more. By the way, it's not a posframe. Technically, what I have is just a new frame. But the idea is the same, right? It pops up and it disappears. And I can share the code for that. It's in the prot-window package, actually.
Sacha: I have it highlighted here in my…
Prot: So it's a small macro there.
Sacha: So this is the thing that I was telling you about earlier where it just extracts all the things that I've highlighted. It's very, very cool. It's in one of these, I'll grab it eventually. Which is good because I have to go over my config at some point.
38:21: Rediscovering things in my config: Sacha: There's so much in there that I've completely forgotten writing about. And so I'm like reading this now as I'm splitting it into different modules and saying, oh yeah, I automated that. I'm doing it manually again.
Prot: The other thing that might help is a prefix key. So I have done that with C-z. It's a prefix key, and then either with which-key or with Embark, when Embark replaces C-h, I forget what it's called now, you can always see, OK, what do I have? What are the groups? And then you can cluster things there. And it's very easy. Ever since defvar-keymap became a thing, it's very easy to write prefix keymaps, because it has a keyword called :prefix, and with that you define the name under which the keymap can be invoked as a command.
Sacha: That's interesting. I should definitely look into that. It defines how it should be called, and that's a command, so you can just add it to other keymaps as needed. That sounds cool.
Prot: So consider this difference, like right now, you can take a defvar, which is a keymap, right? And you can bind it to a key, the keymap itself, without the quote, you can bind it to a key. So you will do define key in the global map, like whatever you want, and then bind it. What happens though with that is that you're binding the value of the keymap to the key, which means if you make changes to the keymap, your key doesn't know about them.
Sacha: I've been running into that. I get annoyed and I have to keep re-evaluating my definitions. So yeah, okay, that's what I do.
Prot: Whereas if you have the prefix, which is now a command, you have created an indirection. So now you define key to the symbol that you have specified. And that, of course, is that indirection, which now gets the up-to-date value of the keymap.
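What Prot describes can be sketched like this (the names here are illustrative, not his actual config). `defvar-keymap` with `:prefix` defines a named prefix command, so the key binding goes through the symbol and always sees the current value of the keymap:

```elisp
;; Illustrative sketch: a prefix keymap defined with defvar-keymap.
;; The :prefix keyword defines `my-prefix-command' as a prefix command,
;; an indirection that always looks up the current value of the keymap.
(defvar-keymap my-prefix-map
  :doc "My personal prefix keymap."
  :prefix 'my-prefix-command
  "t" #'org-timer-set-timer
  "c" #'org-capture)

;; Bind the *symbol*, not the keymap value, so later changes to
;; my-prefix-map are picked up without rebinding C-z.
(keymap-global-set "C-z" 'my-prefix-command)
```

Binding the bare keymap value instead (the first case Prot mentions) snapshots the keymap at bind time, which is why edits to it don't show up until you re-evaluate the binding.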
40:31: Catching up on Emacs developments: Sacha: So this is Emacs stuff that I have been missing out on, because for the past 10 years I've just been squeezing things into whatever moments I can have before somebody comes and says hello and says mom mom mom mom, and now that I have a little bit more focus time, I'm looking forward to finding out about all the cool stuff that has gone into Emacs and that I'm not currently taking advantage of. So things like, for example, I've only scratched the surface of using Lispy, and I want to do other things with s-expressions because it's all magical. And if you have similar, like, oh yeah, this is a new thing in Emacs 30 or 31 that is super helpful and not everyone knows about it, I'd love to know about it. I mean, I know it's on Emacs News, but sometimes I'm like, whoosh, it goes past my radar and I don't have the time to dig in.
Prot: Yeah, right now I cannot think of something. But yeah, I will.
41:29: diffs: Prot: Oh, a very small thing that helps me a lot when I make any kind of edit. You know, there is this function, diff-buffer-with-file. So that's good. For me, what I always want is
Sacha: that sounds like a little tweak
Prot: to diff the buffer with its current file. I don't want to diff a buffer with some random file. So what I have is a very small extension, a very small function, which is diff-buffer-buffer-file-name. buffer-file-name is the variable for the current buffer's file, and I diff the buffer against that. And for me, that's very useful. Whenever I make an edit or I'm not sure what happened, I do that and I already see the diff. I use that a lot.
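A minimal re-creation of the helper Prot describes (his version lives in his config; the name and exact shape here are an assumption based on his description):

```elisp
;; Sketch: diff the current buffer against the file it is visiting,
;; without prompting for a file, using the built-in
;; `diff-buffer-with-file'.
(defun my-diff-buffer-with-current-file ()
  "Diff the current buffer with the file it is visiting."
  (interactive)
  (if buffer-file-name
      (diff-buffer-with-file (current-buffer))
    (user-error "Current buffer is not visiting a file")))
```

Bound to a convenient key, this gives a one-keystroke "what did I just change?" view before saving.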
Sacha: that I would love to to pick up as well. There's all sorts of interesting workflow things that I am looking forward to discovering as I figure out the better way to watch videos and then also make videos, because one of the things I find is whenever you demonstrate something, sometimes, if you're really lucky, someone will say, oh yeah do you know about this thing that does the whole thing, which is great. One of my favorite reasons for sharing things is learning from other people. All right. You write this really long blog
Prot: Same. Like you put it out there and somebody will be like, hey, you could do it this way instead.
Sacha: post about this clever thing that you just figured out and then five minutes later, oh yeah, that's been built into Org since, you know, version 9.7.
Prot: Exactly, exactly.
43:08: Thinking about the community: Sacha: Which actually leads me to: what can we do? We've got about 20, 15 minutes left in this hour. Taking advantage of your very large context window for all things Emacs community, you know, those kinds of stuff that we are interested in, what are some of the things that we could do to make things even better? This is a very open question, of course, but yeah.
Prot: Even better, you mean Emacs in general or Org in particular? Because Org got a very nice feature lately, Org 9.8, which is the ability to preview images for any link type. So that's very useful. Before it was like the file type. Now it's any link type. And of course, if you ever want to do something with a custom link type, there you have it.
44:00: org-link-preview: Sacha: Which is good because I, in fact, have an override for a custom link type where I had done it before. So I just basically copied and pasted the image preview link so that I could have my SVGs either included in it as a whole or just preview. Anyway, so yes, I'm going to switch over to the new one. Link preview, update my code for SVGs.
Prot: Yeah, for example, now imagine this. Imagine you have a custom link type, which is called image or something, and you just give the image a name, nothing else. And internally, this link type knows to go in a specific directory and get the image from there, maybe even have copies of the image, so it can give you a copy that matches some parameter or whatever, like some user option maybe. You could have fancy things like this. I have been thinking about it, but I haven't written anything yet.
Sacha: I would probably like… Things like my audio waveforms could go in there very easily, and things like that. I'm very curious about this idea of mixing more things into other places in Emacs. And one of the things that I've been meaning to dig into is how edraw (el-easydraw) does SVG interaction, because it uses mouse events to be able to drag things around and whatever. Because I think if we can get richer interactivity and more graphical elements, that could be really fun.
45:31: Prioritizing things to work on: Sacha: Anyway, but yes, so I've got basically three months of focus time before the kid goes on summer vacation and wants my attention at probably the majority of the day at an irregular interval. So it'll be a lot harder for me to schedule things then. I can set aside maybe 10 hours a week to work on Emacs-y things, including possibly working on infrastructure for the upcoming EmacsConf, or tweaking Emacs News or hosting meetups or whatever. Taking advantage of you as an external perspective, are there things that would be a good idea for me to particularly focus on? Things that you've been wishing you could say, Sacha, hey, just do this thing and it'll be awesome.
Prot: I think you already have a very good setup, actually. So I don't think there is much to be done in terms of adding things. Maybe the work here is to be removing things, and that's the more difficult part.
Sacha: No! Delegating things. Passing things to other people, maybe. Making it possible for other people to help.
46:39: Modelines: Prot: There is a very small thing which maybe is useful, maybe it isn't. I don't know how much you use the mode line, how much you rely on that, but the newer version of Emacs makes it possible to shrink the lighters for the minor modes.
46:52: Modelines: Sacha: Yeah, I don't use the mode-line as much. I ended up moving keycast to the header line because it's a little bit more visible in videos. Sometimes when closed captioning is on, it obscures the mode line. So I don't tend to look at the mode line for much, and I'm wondering what I'm missing out on. And I'll probably also want to add: am I streaming?
Prot: Yeah, not much. Not much is the answer, but maybe you could declutter it in that regard so that then it is useful. For me, where it really is useful is to know some things such as, of course, what is the buffer name? Is the view narrowed? That's, for me, really important. Maybe is it a read-only file? And am I running a keyboard macro?
Sacha: Is my microphone on?
Prot: Yes. Good, good. You see, there are all sorts of good ideas. And you can think of those as just one character, right? And you can have that one character with a face, which has, for example, a background. So is my microphone on? That's a green background. Am I streaming? That's a red background or whatever. And you just see the colors there and you know everything is all right.
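The one-character, colored-background indicator idea can be sketched like this (all names here are made up for illustration; you would set the flag from your own streaming or microphone tooling):

```elisp
;; Illustrative sketch: show a green dot in the mode line while the
;; hypothetical flag `my-mic-on' is non-nil.
(defvar my-mic-on nil
  "Non-nil when the microphone is on; set this from your own code.")

;; A mode-line construct: one propertized character whose face
;; carries the state as a background color.
(add-to-list 'mode-line-misc-info
             '(:eval (when my-mic-on
                       (propertize " ● "
                                   'face '(:background "green"
                                           :foreground "black")))))
```

The same pattern works for "am I streaming?", "is this narrowed?", and so on: one glyph per state, distinguished by face rather than by text.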
Sacha: Although, actually, now that we're talking about it, I'm thinking maybe I should just revive websockets. So I made an obs-websocket.el thing before, and someone has… The benefits of making a package: someone has actually updated it to work with the new WebSocket protocols. I just have to get the whole thing set up again so I can communicate with OBS. I can use a different theme, most likely another Modus theme, when I'm streaming, so that it's a little bit more in my face: okay I'm looking at the correct colors, I am public.
Prot: That's the other thing. Yeah, that's good. That's good.
48:50: Themes would be nice to have per-frame: Prot: With themes, unfortunately… that's actually something I would like to have. We cannot have them per frame, which is strange, because if you do set-face-attribute, you can specify a frame argument. But if you do something like custom-set-faces, you cannot.
Sacha: I'm sure that once you start messing around with Emacs internals, you might be able to figure out the way to do that.
Prot: Yeah, now that I say it, it shouldn't be too difficult. Yeah. Famous last words.
Sacha: Yeah, yeah, yeah. That's really fun. Okay, so that gives me stuff to work on.
49:27: Livestreaming conversations with Prot: Sacha: I brought up briefly the idea of possibly setting up some kind of streaming things because I think, for example, this conversation that we have… I have so far managed to not share anything that is too private, except for, of course, the time when the kid is like, hello, mom, I need your attention and I want to be on stream. She likes to make cameos. So we could share this, and we could potentially think about having these kinds of conversations as something that other people could join in on, because it causes more questions, it's more interesting, and it also gets stuff out there without me having to type the lessons learned. So is that maybe something we can consider doing, I don't know, once a month for the next three months?
50:11: Getting together: Prot: For me, yes. Even more frequently than once a month. Whatever works for you. For me, it works. That's the point. And also not in the context of coaching or whatever, but generally as a collaboration, I'm totally okay with that. Basically, more events for the community. I'm all for it.
Sacha: Yeah, because it is different. I very much had missed doing Emacs chats, and I'm so delighted that you've got Prot Asks. I'm looking forward to watching the one that you just released, because it's a community event, right? You get to know about interesting things about people. And there are a lot of things that come up through conversations that don't come up when you're just writing by yourself.
Prot: Yes, yes, yes. It's really that. It's really that. And for me, it's also another thing, which is it's more inviting. Like, it's like you are telling people, hey, show up like you can participate. Actually, we are friendly. Like, here we are. You see us. I think that kind of encouragement helps.
Sacha: So if you want to do, like, Emacs office hours on a regular basis, either something that you schedule in yours… Is it a YouTube thing where we can both schedule a live and then both have it, or not? I think they've got a collab thing now. I don't know.
Prot: I haven't explored it. So on the technical side, I really don't know. But in terms of intention, I'm all for it. So we can of course figure out the technicality.
Sacha: You have the bigger channel.
Prot: But I really don't know. We can do it twice a month, or even if you want, if you are really
Sacha: If you want to set it up, then Thursdays are probably good. Or if you want me to set it up, then I can do that. And then we can figure out the platform details and the non-YouTube way for people to join… probably IRC. We've got all this lovely infrastructure for EmacsConf, which I dust off every month for meetups. So that's certainly something we can slide right in there too. Okay, so if we do it once a month, that just gives me three sessions of practice, but if we do it like twice a month or more, I am also okay with that. I think we can squeeze that in and make that happen.
Prot: into it, once a week, a live stream once a week. And yeah, people can join, and we can always have a topic and talk about it and take it from there. We could also do it. Now, I don't know whatever makes more sense, but we could do it on my channel. And then, of course, with a prominent link to your channel, or we can do it one on your channel, one on my channel or always on your channel. Me, I don't mind at all. Like me, I'm in for the fun.
Sacha: We'll figure out the technical details and whatever off-stream. It could be interesting because then that gives people a friendly place to drop by and chat. And also because I know you're there and I'm there, it gets away from the talking to myself. When it's just me talking and then it's just like chat is silent, it just feels like I have this unfairly privileged position. So yeah, that's definitely something we're going to look into. We can structure that as one of these coaching thingies if I'm looking for excuses to use the Google Open Source Peer Bonus. I still haven't really made a justifiably good plan for it. So yes. Okay. Oh, this has been very helpful. I've got like all these tips. If you're okay with it, I am totally fine with posting this recording online. If you want, you can also post it. I think there's some kind of collab thing.
Prot: Me, I don't have a recording. So you can do whatever you want. So it's really up to you. Me, I don't mind. The reason I don't have recordings of my meetings is because I really have this policy of, you know, it's private. Your name is never known. Nobody has seen this. That's the idea. Of course, in your case, you're making it public. So, of course, that's fine.
Sacha: Yeah, my stance is always, well, I'm going to learn stuff, but A, I'm very forgetful, so I need to be able to search it and find it again. And B, other people can pick up stuff too. I might as well expand the learning and do the learning out loud. So all that is good. And then for next time, which will probably be in two weeks, or maybe earlier if I manage to get my act together,
54:44: Namespaces: Sacha: I'd like to see if I can get my stuff properly split up into different modules that have the different namespace. I really think I'm going to end up shifting to the sacha- namespace instead of all the my- stuff. I used to use the my- namespace prefix so that people could copy and paste things more easily into their code. But now I'm like, well, if I put it in sacha-, then I'm not polluting their namespace if they're loading the whole library.
Prot: Yes, yes, exactly. Exactly, exactly. That's a good thing.
Sacha: So that's on my to-do list.
Prot: And with naming things, of course, I also hinted at that in the article I wrote in response to your blog post. It really helps to think about the names. Also, with what we said earlier about finding things: don't try to be too terse, too economical with the names; make the most of them.
Sacha: I'm using nameless anyway to hide the prefixes. Got to get the hang of using the keyboard shortcuts to insert things.
55:46: Verbose function names: Sacha: Yeah, so I do like having very verbose function names and just practically full sentences in the thing. All that is very good. So that's my main thing. And then, of course, getting into more ERT… I have this function now that lets me try to jump to the test or the file that's related to this thing. So we'll see how it goes, especially as I move things into these different functions.
Prot: Okay, okay. I'm not sure how you are doing that, but if I were to implement something like that myself… What I do with the ERT tests is: it's always the prefix of the ERT file, double dash, and then the name of the original function. So, for example, let's say modus-themes-tests, right? So then it's modus-themes-tests--modus-themes-load-theme, for example.
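Written out, that convention gives test names like the following (the test body here is a made-up placeholder, just to show the shape):

```elisp
;; Convention: <prefix-of-the-test-file>--<name-of-the-function-under-test>.
;; File: modus-themes-tests.el; function under test: modus-themes-load-theme.
(require 'ert)

(ert-deftest modus-themes-tests--modus-themes-load-theme ()
  "Placeholder test illustrating the naming convention."
  (should (fboundp 'modus-themes-load-theme)))
```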
56:45: Naming conventions for ERT tests: Sacha: Okay, so that's your naming convention.
Prot: That's a convention. That's a convention, yes.
Sacha: I should try that. I've just been basically naming things as function-name. And then I was, like, maybe I should be calling them function-name-test. Or in this case, you know, package.
Prot: Just to add something to this, because you also mentioned that you are a nameless user: there is, built into Emacs, this thing called shorthands.
57:14: shorthands: Sacha: Yeah, I read about that, but you did mention that some people have been going back and forth about whether it's worth using it or whether it confuses things more. I think just leaving the names as is and then just displaying it differently seems to be like an in-between step.
Prot: So that's what shorthands do. The name is, for example, modus-themes-test. And a shorthand, effectively, is a buffer-local variable which takes the small prefix and maps it to the larger prefix. So modus-themes-test can be mtt, for example.
Sacha: Okay. All right. So basically it's a more powerful nameless, more configurable, and it's built in. So I should check that out also.
Prot: Yeah, you can check it. It's not configurable, like it doesn't give you too many options. But the point is that for this simple case, at least for the tests, I find it useful because I don't want to have like a railroad of a function name, right? So I just want to be looking at something that I can understand. And basically, the prefix of the test is just there for it to have a prefix. And then I know what the function I am testing is.
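Shorthands are declared per file, in the file-local variables block at the bottom of the file. Using the prefixes from the conversation's example, a test file might end like this:

```elisp
;; In modus-themes-tests.el, a file-local setting like the one below
;; lets you *write* the short prefix while Emacs interns the full name:
;;
;;   (ert-deftest mtt--modus-themes-load-theme () ...)
;;
;; actually defines modus-themes-tests--modus-themes-load-theme.

;; Local Variables:
;; read-symbol-shorthands: (("mtt-" . "modus-themes-tests-"))
;; End:
```

Unlike nameless, this affects how symbols are read, not just how they are displayed, so the short prefix never escapes the file.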
58:27: Bisecting config in multiple files: Sacha: I had a quick question about the config. So, in addition to your modules, your Emacs configuration is also split up into multiple files. How do you bisect these things when you're tracking down a bug?
58:46: "I don't write bugs.": Prot: I don't write bugs. No, no, no, of course, I'm kidding.
Sacha: That's going to go in the quotes. Okay, I don't write bugs. I write a lot of bugs. That's going to go to the blog post. It's going to be very large. So you never have to use bug-hunter because you just don't write bugs in the first place. Bravo. Good for you.
Prot: Why didn't people think about that? Now, of course, I'm kidding. So the way it works is that they are actually standalone packages. There is a distinction, actually, in my configuration: there are the modules, which are the configuration blocks, and then there are the libraries, which are actually packages, like I could just publish them right now. For example, for the mode line, there is prot-mode-line. That could be a package tomorrow, no problem. So if there is a bug there, I will go and deal with it the way I would deal with any package: edebug, toggle-debug-on-error, whatever it is that I am doing. So there never is a scenario where the code is in all sorts of places, scattered across the file, and then, of course, it's very difficult to track it.
Sacha: But for your config, if it's in multiple files and you need to bisect it… Bisecting can get you to this load-file over here, this require over here is where things break down, but then you have to… okay, I want to load everything above that point and then bisect into the thing, which is slightly more annoying.
Prot: In practice, it's not difficult, because the way I
Sacha: I don't know. How does that work?
Prot: load my packages, in the modules themselves. So I have this macro, which has a condition-case in it. Of course, use-package has the same, but with use-package, you have to have everything as a package, whereas what I have works even if it's not a package. So condition-case, and basically if there is an error, it tells me where the error is, and then I can find it very easily. I have never had a scenario (of course I was joking, but actually I'm serious)… I've never had a scenario where I was confused as to what was happening. It was always very easy to find the error. If it's a bug… Yeah.
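A hypothetical minimal version of the kind of macro Prot describes (his real one is in his config and surely differs): wrap each setup block in `condition-case` so a failure reports which block broke instead of silently aborting the rest of startup:

```elisp
;; Sketch: evaluate a configuration block; if it signals an error,
;; report the block's name and keep loading the rest of the config.
(defmacro my-setup (name &rest body)
  "Evaluate BODY, reporting any error against NAME."
  (declare (indent 1))
  `(condition-case err
       (progn ,@body)
     (error (message "Error in setup block `%s': %s"
                     ,name (error-message-string err)))))

;; Usage:
(my-setup "theme"
  (load-theme 'modus-operandi :no-confirm))
```

The error message then points straight at the failing block, which is what makes bisecting across files mostly unnecessary.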
Sacha: Errors are fairly straightforward because it complains about it, but when it runs but it just produces the wrong behavior eventually, then that's the annoying part that I've been using bug hunter for.
Prot: The only scenario I can think of now where I had an issue like that was with the mode line, actually. Because with the mode line, if you give it a wrong face or something, I don't remember, it will print several messages for everything that changes on the mode line. So you will get, well, invalid face, and there will be, in square brackets, 100 times of this message. So that's the sort of thing that indeed is more tricky, but that was not because of my code. It was because of one small tweak that affects the mode line, and then it was about figuring out what the error is there, what's the bug there. But if you have the configuration split up in ways that are logical or thematic, if you want, whatever bug there is, is always in one spot. It won't cut across files. So, for example, I have a module which is the theme in the wider sense, but the theme also includes fonts, because fonts are, in Emacs terms, part of faces, themes deal with faces, that sort of thing. So whenever it's something related to appearance, I know that it's in the theme. It cannot be somewhere else, because of how I have written it. Of course, depending on how you split things up, you can end up in a scenario where you have bugs that go across files. For example, a common one is where people will have, for example, evil-mode, right? And then they will load everything, and then they will have a separate configuration module which is for key bindings. And basically, that's a disaster, because whenever there is some problem, you don't know which key binding relates to which package, and you are always in a state that is hard to predict. And basically, you have to do every key binding with eval-after-load, this package, this key binding kind of thing.
Sacha: Oh, that's going to be fun. I do have a bunch of key bindings in my file, so I'll just have to see how that all gets organized.
Prot: If you have them, organize them by package. Define them close to the context. Okay.
Sacha: That's actually mostly what I've been doing, mostly because I think of it, I think of the key binding when I'm adding the package to my config, so it's right there. I just realized I could probably just copy the top of my config file with requires or whatever to a setup file, which bug-hunter can then load. So I can still probably use
Prot: Okay, good.
Sacha: bug-hunter with that. Anyway, thank you so much.
Prot: Yeah, sure. I just wanted to ask the last thing. What is the kind of bug that you have encountered? What kind of bugs are we talking about here?
Sacha: Recently, in my shifting of everything to the new system, I also happened to realize that I had updated my Emacs and then stuff wasn't highlighting in the mini buffer. I eventually found out that it was because I needed to upgrade certain packages. But in the meantime, I was like, what do you mean? Okay, emacs -Q, sometimes it's working, sometimes it's not working. Okay, let's start narrowing it down. And that was fun. The other thing that I recently had to bisect was: I was exporting my really large config after having split things up into different modules. One of the lines was causing it to go into like a debugging thing, but it would not tell me what it actually debugged. You know, the backtrace would just not happen. So then I actually had to narrow to region and then export the specific sections of my file until I narrowed it down to, okay, my defvar custom link needs fixing. So I do this kind of bisection a lot. Ideally, whenever I can, I like to be able to just write an assertion so that Emacs can do the work of narrowing down when this happens but sometimes it's just, you know, you gotta pick your range and then execute the thing and see what happens. So I'm always looking for tools because I write a lot of bugs. I'm sure by the time I see you again, it may be either next week or next next week, I will have more bugs to share and more things to learn from. But this is very helpful and I am looking forward to updating you once I get all of the stuff checked off my to-do list.
Prot: Very good. Let me know how it goes.
Sacha: Yeah, yeah, awesome. Thank you so much.
Prot: And for the live streams, we see how it goes. Yeah. You will tell me. Yeah.
Sacha: And it's okay to post this recording if you want to?
Prot: Whatever you want. Whatever you want.
Sacha: Awesome, all right, see you around.
Prot: Take care, Sacha. Bye bye.
Ideas for next steps
Oh, do I ever have a lot of ideas to follow up on. =) But I'm making myself get used to writing them down so that I can post these notes instead of trying to squeeze in just one more tweak… Anyway, plenty to explore!
- ☑ Add chapters to video
- ☑ Edit transcript - rough
- ☑ Combine multiple captions
- ☑ Post the video
- ☑ Post notes (this one!)
- ☑ Schedule next session and open it up
- ☑ Try Internet Archive
- ☑ Combine transcripts and use speaker tags; style the output
- [-] Redact part of the video
- ☐ Write about compile-media updates
- ☑ Get my GPU working for ffmpeg
- ☐ Get my GPU working for whisperx
- ☐ Select the coordinates from Emacs
- Streaming and video
- ☐ Write about two-speaker workflow
- ☐ Make sure vtime link type works with this player
- ☐ Figure out a workflow for adding intros or wrap-ups
- ☐ Display YouTube chat in Emacs
- ☐ Find a command-line way to send text to the YouTube chat
- ☐ Extract part of a video as a clip
- [-] Make a global minor mode for doing things publicly (Mode for streaming)
- ☑ Change theme
- ☑ Turn on line numbers
- ☑ Turn on keycast
- ☑ Change agenda files and inbox
- ☐ Save narration
- ☐ Consider consult previews, marginalia
- ☐ Make a todo link type that creates the TODO item and publishes a link to it when finished
- ☑ Make public-ish Org files
- ☐ Send a URL to the stream as QR and text chat
- ☑ Send text to the stream
- ☐ Calculate timestamp offsets into a recording
- ☐ Quickly log times and notes to current task and stream log
- ☐ Make a nicer combined transcript PDF for review
- ☐ Reorganize my configuration
- ☐ Finish extracting the rest of my functions
- ☐ Rename my- to sacha-
- ☐ Write about my org-babel-post-tangle-hook
- ☐ Try out substitute, especially with the replace-regexp-as-diff idea
- ☐ Define function aliases
- ☐ Try shorthands
- ☐ Try defvar-keymap :prefix
- ☐ Practise using docstrings to declare intent
- ☐ Convert my custom link preview code
- ☐ Replace C-z
- ☐ Testing
- ☐ Set up a Makefile snippet for tests
- ☐ Settle into a naming convention for tests
- ☐ Practise mocking up functions in order to test things that are more interactive
- ☐ Make code coverage more habitual
- ☐ Finish reading Prot's config and process my notes
- ☐ Set up crdt just in case
- ☐ Play with the idea of XP (experience points) as a reward for postponing a task and then picking it up again
- ☐ Write about deleting windows vertically; consider beframe and shortcuts to arrange frames
- ☐ Pop up and dismiss my notes
- ☐ Make my notes contextual
Want to join us on Thu April 16 10:30 AM America/Toronto, 5:30 PM Europe/Athens? Check out the livestream we've penciled in for April 16 - come join us!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/LocalLLaMA FINALLY GEMMA 4 KV CACHE IS FIXED rss
YESSS LLAMA.CPP IS UPDATED AND IT DOESN'T TAKE UP PETABYTES OF VRAM
submitted by /u/FusionCow
-
🔗 Rust Blog docs.rs: building fewer targets by default rss
Building fewer targets by default
On 2026-05-01, docs.rs will make a breaking change to its build behavior.
Today, if a crate does not define a `targets` list in its docs.rs metadata, docs.rs builds documentation for a default list of five targets.

Starting on 2026-05-01, docs.rs will instead build documentation for only the default target unless additional targets are requested explicitly.
This is the next step in a change we first introduced in 2020, when docs.rs added support for opting into fewer build targets. Most crates do not compile different code for different targets, so building fewer targets by default is a better fit for most releases. It also reduces build times and saves resources on docs.rs.
This change only affects:
- new releases
- rebuilds of old releases
How is the default target chosen?
If you do not set `default-target`, docs.rs uses the target of its build servers: `x86_64-unknown-linux-gnu`.

You can override that by setting `default-target` in your docs.rs metadata:

```toml
[package.metadata.docs.rs]
default-target = "x86_64-apple-darwin"
```

How do I build documentation for additional targets?
If your crate needs documentation to be built for more than the default target, define the full list explicitly in your `Cargo.toml`:

```toml
[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc"
]
```

When `targets` is set, docs.rs will build documentation for exactly those targets.

docs.rs still supports any target available in the Rust toolchain. Only the default behavior is changing.
-
🔗 Rust Blog Changes to WebAssembly targets and handling undefined symbols rss
Rust's WebAssembly targets are soon going to experience a change which has a risk of breaking existing projects, and this post is intended to notify users of this upcoming change, explain what it is, and how to handle it. Specifically, all WebAssembly targets in Rust have been linked using the
`--allow-undefined` flag to `wasm-ld`, and this flag is being removed.

What is `--allow-undefined`?
wasm-ld. This serves a similar purpose told,lld, andmold, for example; it takes separately compiled crates/object files and creates one final binary. Since the first introduction of WebAssembly targets in Rust, the--allow- undefinedflag has been passed towasm-ld. This flag is documented as:--allow-undefined Allow undefined symbols in linked binary. This options is equivalent to --import-undefined and --unresolved-symbols=ignore-allThe term "undefined" here specifically means with respect to symbol resolution in
wasm-lditself. Symbols used bywasm-ldcorrespond relatively closely to what native platforms use, for example all Rust functions have a symbol associated with them. Symbols can be referred to in Rust throughextern "C"blocks, for example:unsafe extern "C" { fn mylibrary_init(); } fn init() { unsafe { mylibrary_init(); } }The symbol
mylibrary_initis an undefined symbol. This is typically defined by a separate component of a program, such as an externally compiled C library, which will provide a definition for this symbol. By passing--allow- undefinedtowasm-ld, however, it means that the above would generate a WebAssembly module like so:(module (import "env" "mylibrary_init" (func $mylibrary_init)) ;; ... )This means that the undefined symbol was ignored and ended up as an imported symbol in the final WebAssembly module that is produced.
The precise history here is somewhat lost to time, but the current understanding is that
--allow-undefinedwas effectively required in the very early days of introducingwasm-ldto the Rust toolchain. This historical workaround stuck around till today and hasn't changed.What's wrong with
--allow-undefined?By passing
--allow-undefinedon all WebAssembly targets, rustc is introducing diverging behavior between other platforms and WebAssembly. The main risk of--allow-undefinedis that misconfiguration or mistakes in building can result in broken WebAssembly modules being produced, as opposed to compilation errors. This means that the proverbial can is kicked down the road and lengthens the distance from where the problem is discovered to where it was introduced. Some example problematic situations are:-
If
mylibrary_initwas typo'd asmylibraryinitthen the final binary would import themylibraryinitsymbol instead of calling the linkedmylibrary_initC symbol. -
If
mylibrarywas mistakenly not compiled and linked into a final application then themylibrary_initsymbol would end up imported rather than producing a linker error saying it's undefined. -
If external tooling is used to process a WebAssembly module, such as
wasm-bindgenorwasm-tools component new, these tools don't know what to do with"env"imports by default and they are likely to provide an error message of some form that isn't clearly connected back to the original source code and where the symbols was imported from. -
For web users if you've ever seen an error along the lines of
Uncaught TypeError: Failed to resolve module specifier "env". Relative references must start with either "/", "./", or "../".this can mean that"env"leaked into the final module unexpectedly and the true error is the undefined symbol error, not the lack of"env"items provided.
All native platforms consider undefined symbols to be an error by default, so by passing `--allow-undefined` rustc introduces surprising behavior on WebAssembly targets. The goal of this change is to remove that surprise and behave more like native platforms.

What is going to break, and how to fix it?
In theory, not a whole lot is expected to break from this change. If the final WebAssembly binary imports unexpected symbols, then it's likely that the binary won't be runnable in the desired embedding, as that embedding probably doesn't provide a definition for the symbol. For example, if you compile an application for `wasm32-wasip1` and the final binary imports `mylibrary_init`, then it'll fail to run in most runtimes because it's considered an unresolved import. This means that most of the time this change won't break users; it'll instead provide better diagnostics.

The reason for this post, however, is that it's possible users could be intentionally relying on this behavior. For example, your application might have:
```rust
unsafe extern "C" {
    fn js_log(n: u32);
}
// ...
```

And then perhaps some JS code that looks like:

```js
let instance = await WebAssembly.instantiate(module, {
    env: {
        js_log: n => console.log(n),
    },
});
```

Effectively, it's possible for users to explicitly rely on the behavior of `--allow-undefined` generating an import in the final WebAssembly binary.

If users encounter this, the code can be fixed through a `#[link]` attribute which explicitly specifies the `wasm_import_module` name:

```rust
#[link(wasm_import_module = "env")]
unsafe extern "C" {
    fn js_log(n: u32);
}
// ...
```

This will have the same behavior as before and will no longer be considered an undefined symbol by `wasm-ld`, and it'll work both before and after this change.

Affected users can also compile with `-Clink-arg=--allow-undefined` to quickly restore the old behavior.

When is this change being made?
Removing `--allow-undefined` on wasm targets is being done in rust-lang/rust#149868. That change is slated to land in nightly soon, and will then get released with Rust 1.96 on 2026-05-28. If you see any issues as a result of this fallout, please don't hesitate to file an issue on rust-lang/rust. -
-
🔗 Armin Ronacher Absurd In Production rss
About five months ago I wrote about Absurd, a durable execution system we built for our own use at Earendil, sitting entirely on top of Postgres and Postgres alone. The pitch was simple: you don't need a separate service, a compiler plugin, or an entire runtime to get durable workflows. You need a SQL file and a thin SDK.
Since then we've been running it in production, and I figured it's worth sharing what the experience has been like. The short version: the design held up, the system has been a pleasure to work with, and other people seem to agree.
A Quick Refresher
Absurd is a durable execution system that lives entirely inside Postgres. The core is a single SQL file (absurd.sql) that defines stored procedures for task management, checkpoint storage, event handling, and claim-based scheduling. On top of that sit thin SDKs (currently TypeScript, Python and an experimental Go one) that make the system ergonomic in your language of choice.
The model is straightforward: you register tasks, decompose them into steps, and each step acts as a checkpoint. If anything fails, the task retries from the last completed step. Tasks can sleep, wait for external events, and suspend for days or weeks. All state lives in Postgres.
If you want the full introduction, the original blog post covers the fundamentals. What follows here is what we've learned since.
What Changed
The project got multiple releases over the last five months. Most of the changes are things you'd expect from a system that people actually started depending on: hardened claim handling, watchdogs that terminate broken workers, deadlock prevention, proper lease management, event race conditions, and all the edge cases that only show up when you're running real workloads.
A few things worth calling out specifically.
Decomposed steps. The original design only had `ctx.step()`, where you pass in a function and get back its checkpointed result. That works well for many cases but not all. Sometimes you need to know whether a step already ran before deciding what to do next. So we added `beginStep()`/`completeStep()`, which give you a handle you can inspect before committing the result. This turned out to be very useful for modeling intentional failures and conditional logic. It's necessary in particular when working with "before call" and "after call" type hook APIs.

Task results. You can now spawn a task, go do other things, and later come back to fetch or await its result. This sounds obvious in hindsight, but the original system was purely fire-and-forget. Having proper result inspection made it possible to use Absurd for things like spawning child tasks from within a parent workflow and waiting for them to finish. This is particularly useful for debugging with agents too.
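As a sketch of the decomposed-step pattern: the `beginStep()`/`completeStep()` names come from the post, but the in-memory context below is a hypothetical stand-in for the real SDK, just to show why an inspectable handle helps with conditional logic:

```typescript
// Hypothetical in-memory stand-in for Absurd's checkpoint store,
// illustrating the beginStep()/completeStep() pattern.
type StepHandle = { done: boolean; result?: unknown };

class MockContext {
  private checkpoints = new Map<string, unknown>();

  // Returns a handle you can inspect before committing a result.
  beginStep(name: string): StepHandle {
    return this.checkpoints.has(name)
      ? { done: true, result: this.checkpoints.get(name) }
      : { done: false };
  }

  completeStep(name: string, result: unknown): void {
    this.checkpoints.set(name, result);
  }
}

const ctx = new MockContext();

// First run: the step hasn't executed yet, so do the work and commit it.
let handle = ctx.beginStep("send-email");
if (!handle.done) {
  ctx.completeStep("send-email", "sent");
}

// On replay, the handle reports the step as done, so code can branch
// on that instead of re-running the side effect.
handle = ctx.beginStep("send-email");
console.log(handle.done, handle.result); // -> true sent
```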
absurdctl. We built this out as a proper CLI tool. You can initialize schemas, run migrations, create queues, spawn tasks, emit events, and retry failures from the command line. It's installable via `uvx` or as a standalone binary. This has been invaluable for debugging production issues. When something is stuck, being able to just run `absurdctl dump-task --task-id=<id>` and see exactly where it stopped is a very different experience from digging through logs.

Habitat. A small Go application that serves up a web dashboard for monitoring tasks, runs, checkpoints, and events. It connects directly to Postgres and gives you a live view of what's happening. It's simple, but it's the kind of thing that makes the system more enjoyable for humans.
Agent integration. Since Absurd was originally built for agent workloads, we added a bundled skill that coding agents can discover and use to debug workflow state via `absurdctl`. There's also a documented pattern for making pi agent turns durable by logging each message as a checkpoint.

What Held Up
The thing I'm most pleased about is that the core design didn't need to change all that much. The fundamental model of tasks, steps, checkpoints, events, and suspending is still exactly what it was initially. We added features around it, but nothing forced us to rethink the basic abstractions.
Putting the complexity in SQL and keeping the SDKs thin turned out to be a genuinely good call. The TypeScript SDK is about 1,400 lines. The Python SDK is about 1,900, but most of that comes from the complexity of supporting colored functions. Compare that to Temporal's Python SDK at around 170,000 lines. It means the SDKs are easy to understand, easy to debug, and easy to port. When something goes wrong, you can read the entire SDK in an afternoon and understand what it does.
The checkpoint-based replay model also aged well. Unlike systems that require deterministic replay of your entire workflow function, Absurd just loads the cached step results and skips over completed work. That means your code doesn't need to be deterministic outside of steps. You can call `Math.random()` or `datetime.now()` in between steps and things still work, because only the step boundaries matter. In practice, this makes it much easier to reason about what's safe and what isn't.

Pull-based scheduling was the right choice too. Workers pull tasks from Postgres as they have capacity. There's no coordinator, no push mechanism, no HTTP callbacks. That makes it trivially self-hostable and means you don't have to think about load management at the infrastructure level.
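The replay model can be reduced to a toy: step results are cached by name, so a retried run skips completed work, and non-deterministic code between steps is harmless. The `step()` signature below loosely mimics the post's `ctx.step()`; the map stands in for the Postgres-backed store:

```typescript
// Toy checkpoint-based replay: completed steps are served from the
// store, so a retry only re-executes work that never finished.
const checkpoints = new Map<string, unknown>();
let executions = 0;

function step<T>(name: string, fn: () => T): T {
  if (checkpoints.has(name)) return checkpoints.get(name) as T; // cached
  const result = fn();
  executions++;
  checkpoints.set(name, result);
  return result;
}

function workflow(): number {
  const a = step("fetch", () => 40);
  // Non-deterministic code between steps is fine, because only the
  // step boundaries are checkpointed; nothing here is replayed.
  Math.random();
  return step("compute", () => a + 2);
}

workflow();                // first run executes both steps
const result = workflow(); // "retry" serves both from checkpoints
console.log(result, executions); // -> 42 2
```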
What Might Not Be Optimal
I had some discussions with folks about whether the right abstraction should have been a durable promise. It's a very appealing idea, and in theory a more powerful one, but it turns out to be much more complex to implement in practice. I made some attempts to see what Absurd would look like if it were based on durable promises, but so far I haven't gotten anywhere with it. It's an experiment I still think would be fun to try, though!
What We Use It For
The primary use case is still agent workflows. An agent is essentially a loop that calls an LLM, processes tool results, and repeats until it decides it's done. Each iteration becomes a step, and each step's result is checkpointed. If the process dies on iteration 7, it restarts and replays iterations 1 through 6 from the store, then continues from 7.
But we've found it useful for a lot of other things too. All our crons just dispatch distributed workflows with a deduplication key pre-generated from the invocation: we can have two cron processes running, and they will only trigger one Absurd task invocation. We also use it for background processing that needs to survive deploys. Basically anything where you'd otherwise build your own retry-and-resume logic on top of a queue.
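The cron trick works because the dedup key is derived from the invocation itself, so concurrent spawns collapse into one task. A minimal sketch, with a hypothetical `spawnTask` API that is not the real SDK's signature:

```typescript
// Two cron processes firing for the same scheduled slot should only
// create one task: the dedup key makes the spawn idempotent.
const spawned = new Map<string, string>();

function spawnTask(name: string, dedupKey: string): string {
  const key = `${name}:${dedupKey}`;
  if (!spawned.has(key)) {
    spawned.set(key, `task-${spawned.size + 1}`);
  }
  return spawned.get(key)!; // every caller gets the same task id
}

// Both processes compute the same key for the 09:00 invocation...
const dedupKey = "2026-04-06T09:00";
const a = spawnTask("cleanup", dedupKey);
const b = spawnTask("cleanup", dedupKey);
console.log(a === b, spawned.size); // -> true 1
```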
What's Still Missing
Absurd is deliberately minimal, but there are things I'd like to see.
There's no built-in scheduler. If you want cron-like behavior, you run your own scheduler loop and use idempotency keys to deduplicate. That works, and we have a documented pattern for it, but it would be nice to have something more integrated.
There's no push model. Everything is pull. If you need an HTTP endpoint to receive webhooks and wake up tasks, you build that yourself. I think that's the right default, as push systems are harder to operate and easier to overwhelm, but there are cases where it would be convenient. In particular, there are quite a few agentic systems where it would be super nice to have webhooks natively integrated (wake on incoming POST request). I definitely don't want this in the core, but it sounds like the kind of problem that could be solved by a nice adjacent library built on top of Absurd.
The biggest omission is that it does not support partitioning yet. That's unfortunate because it makes cleaning up data more expensive than it has to be. In theory supporting partitions would be pretty simple. You could have weekly partitions and then detach and delete them when they expire. The only thing that really stands in the way of that is that Postgres does not have a convenient way of actually doing that.
The hard part is not partitioning itself, it's partition lifecycle management under real workloads. If a worker inserts a row whose `expires_at` lands in a month without a partition, the insert fails and the workflow crashes. So you need a separate maintenance loop that always creates future partitions far enough ahead for sleeps/retries, and does that for every queue.

On the delete side, the safe approach is `DETACH PARTITION CONCURRENTLY`, but getting that to run from `pg_cron` doesn't work because it cannot be run within a transaction, and `pg_cron` runs everything in one.

I don't think it's an unsolvable problem, but it's one I have not found a good solution for, and I would love to get input on it.
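The insert-ahead maintenance loop could generate partition DDL along these lines. This is only a sketch: the table name, the weekly granularity, and partitioning on `expires_at` ranges are invented for illustration:

```typescript
// Generate CREATE TABLE statements for the next N weekly partitions,
// so a row whose expires_at lands weeks ahead always has a home.
function weeklyPartitionDDL(table: string, from: Date, weeksAhead: number): string[] {
  const week = 7 * 24 * 60 * 60 * 1000;
  const ddl: string[] = [];
  for (let i = 0; i < weeksAhead; i++) {
    const start = new Date(from.getTime() + i * week);
    const end = new Date(start.getTime() + week);
    const suffix = start.toISOString().slice(0, 10).replace(/-/g, "");
    ddl.push(
      `CREATE TABLE IF NOT EXISTS ${table}_${suffix} PARTITION OF ${table} ` +
        `FOR VALUES FROM ('${start.toISOString()}') TO ('${end.toISOString()}');`,
    );
  }
  return ddl;
}

const stmts = weeklyPartitionDDL("absurd_tasks", new Date("2026-04-06T00:00:00Z"), 6);
console.log(stmts[0]);
```

A real loop would also have to run per queue and stay far enough ahead to cover the longest sleeps and retries, as described above.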
Does Open Source Still Matter?
This brings me to a meta point about the whole thing: what is the point of Open Source libraries in the age of agentic engineering? Durable execution is now something that plenty of startups will sell you. On the other hand, it's also something an agent could build for you, and people might not even look for existing solutions any more. It's kind of … weird?

I don't think a durable execution library can support a company, I really don't. On the other hand, I think it's just complex enough of a problem that it could be a good Open Source project free of commercial interests. You do need a bit of an ecosystem around it, particularly for UI and good DX for debugging, and that's hard to get from a throwaway implementation.
I don't think we have squared this yet, but it's already much better to use than a few months ago.
If you're using Absurd, thinking about it, or building adjacent ideas, I'd love your feedback. Bug reports, rough edges, design critiques, and contributions are all very welcome—this project has gotten better every time someone poked at it from a different angle.
-
- April 03, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-03 rss
IDA Plugin Updates on 2026-04-03
New Releases:
Activity:
- Artifact-for-replication_gpt4o
- 02380cf4: update
- capa
- efiXplorer
- Greffe
- aca38123: Merge pull request #66 from Lixhr/65-core-set-and-call-handler
- 90c26345: set literal pools
- 1c7972c6: Add the compiled blob
- 51080081: Fix mixed relocations/branch-back
- 5b690860: Call handler
- c5b8b023: Merge pull request #63 from Lixhr/clean_refactor
- 8f02633d: using offet_to_addr
- f1755d57: Del parentheses on return
- da24467f: Align class headers
- 9da6bf36: Delete c snprintf
- 17f75c59: Clean
- d411e0dc: Done relocations
- 3b193124: Merge pull request #62 from Lixhr/58-reloc-implement-the-relocations
- 021b5d42: Delete old relocator
- hexinlay
- 52f00d16: bump version to 1.2.1
- IDA-NO-MCP
- 9e118742: Merge pull request #12 from heheda123123/main
- IDAPluginList
- bbac6fd8: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- IDAssist
- ce7134db: Resolve semantic graph targets and repair GraphRAG schema
- playlist
- 7d1987e0: Stir Bar Ran
- python-elpida_core.py
- 221cef10: feat: replace BODY Perplexity with free DDG+Groq — saves $50/mo
- revdb
- bf82fc7e: chore: ignore local artifacts and untrack local planning files
- 4113ba0d: Update .gitignore
- 0b66ba56: docs: localize cli help text
- 971cfc16: feat: align importer with IDA-NO-MCP exports
- 8f52e6d6: Add 'IDA-NO-MCP/' from commit '9e118742f571707f3fe0f488c3b0cebd5e930263'
- ced9060f: init
- 9e118742: Merge pull request #12 from heheda123123/main
- sighthouse
- tc_deer
- 4b2ac3dc: add demo gif
-
🔗 r/york Least dead clubs on Easter weekend ? rss
My brother is visiting me in York for the weekend and wants to hit the club but I am afraid everything will be empty if not closed. What’s my safest bet for Saturday night ? Or do I give up ? Thank you.
submitted by /u/jskdjjdjdjd
[link] [comments] -
🔗 r/Yorkshire Clearing heavy rain leaves some nice late afternoon sunshine. rss
Taken near Osmotherley on the western edge of the North Yorkshire Moors. Looking towards Richmond with the Yorkshire Dales in the far distance. submitted by /u/MsJone5
[link] [comments]
-
🔗 r/Harrogate Bilton caller rss
submitted by /u/Critical-End-8129
[link] [comments] -
🔗 r/Yorkshire Bradford Council grants aim to bring 'cafe culture' to city centre rss
City centre businesses have been awarded grants to allow them to invest in outdoor seating areas and bring "café culture" to Bradford. Six businesses including pubs and cafes received funding from Bradford Council to install outdoor furniture and equipment "to help the visitor economy" in the pedestrianised city centre, the authority said. The grants, totalling about £18,000, have helped the businesses expand their offer by using outside spaces on the newly traffic-free areas such as Market Street and Bridge Street. The pilot scheme has now been extended for a further six months to give other businesses in the area the opportunity to apply for funding. Businesses awarded funding so far this year are: The Exchange Craft Beer House, SAPA Supermarket and The Old Bank pub, all on Market Street. The Ginger Goose pub on the corner of Market Street and Bridge Street, and both Tiffin Coffee and Lela's Café on Bank Street were also part of the scheme. Businesses purchased items such as tables and chairs, planters, signage, lighting and cover installation costs... Eligible businesses on Market Street, Bank Street, Broadway, Bridge Street, Hall Ings and Tyrrel Street can apply for funding. Grants between £500 and £3,000 are available, providing up to 90% of the total cost of the equipment. The lengthy project to pedestrianise a swathe of city centre streets was completed last spring, and several traders have already introduced outdoor seating. In 2025, an outdoor seating area was introduced at the cafe at St George's Hall on Hall Ings. submitted by /u/coffeewalnut08
[link] [comments]
-
🔗 The Pragmatic Engineer The Pulse: is GitHub still best for AI-native development? rss
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below eight days ago. If you've been forwarded this email, you can subscribe here.
We're used to highly reliable systems that target four nines of availability (99.99%, meaning about 52 minutes of downtime per year), where it's embarrassing to barely hit three nines (around 9 hours of downtime per year). And yet, in the past month, GitHub's reliability is down to one nine!
Here's data from the third-party, "missing GitHub status page", which was built after GitHub stopped updating its own status page due to terrible availability. Recently, things have looked poor:
GitHub down at one nine. Source: The Missing GitHub Status Page

This means that for every 30 days, GitHub had issues on 3 days, or issues/degradations for 2.5 hours daily (around 10% of the time).
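The "nines" arithmetic used in these comparisons is straightforward; as a quick illustrative helper:

```typescript
// Downtime budget per year for a given number of "nines" of availability.
function downtimeMinutesPerYear(nines: number): number {
  const unavailability = Math.pow(10, -nines); // e.g. 4 nines -> 0.0001
  return unavailability * 365 * 24 * 60;       // 525,600 minutes per year
}

console.log(downtimeMinutesPerYear(4).toFixed(1));             // -> 52.6 (minutes)
console.log((downtimeMinutesPerYear(3) / 60).toFixed(1));      // -> 8.8 (hours)
console.log((downtimeMinutesPerYear(1) / 60 / 24).toFixed(1)); // -> 36.5 (days, ~10% of the year)
```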
GitHub seems unable to keep up with the massive increase in infra load from agents. One software engineer built a clever website called "Claude's Code" that tracks Claude Code bot contributions across GitHub. Growth in the past three months has been enormous:
Load from Claude Code has 6x'd in 3 months. Source: Claude's Code

Stream of GitHub outages from infra overload
GitHub's CTO, Vladimir Fedorov, addressed availability issues in a blog post and covered three major incidents:
- 2 February: security policies unintentionally blocked access to virtual machine metadata
- 9 February: a database cluster got overloaded
- 5 March: writes failed on a Redis cluster
Software engineer Lorin Hochstein did a helpful analysis of these outages and the CTO's response, and has interesting observations:
- Saturation : the database cluster incident (9 Feb) was a case of the database getting saturated, due to higher-than-expected usage. Databases are harder to scale up than stateless services. GitHub also underestimated how much additional traffic there would be.
- Failover + telemetry gap : the 2 Feb incident was a combination of an infra issue in one region failing over to a healthy region, and making things worse with a telemetry gap (incorrect security policies were applied in the new regions which blocked access to VM metadata)
- Failover + configuration issue : the 5 March incident was uncannily similar: after a failover, a configuration issue blocked writes on a Redis cluster
It is certainly nice to get details from GitHub on these outages. It feels to me that infra strains are causing more infra issues -> they trigger constraints faster -> failovers are not as smooth as they should be. Could it be because GitHub keeps changing their existing systems?
Startup shows GitHub how it's done
While GitHub struggles to keep up with the increase in load from AI agents generating more code and pull requests, a new startup called Pierre Computer claims to have built an "AI-native" solution for AI agents pushing code, which scales far beyond what GitHub can do. Pierre was founded by Jacob Thornton: formerly an engineer at Coinbase, Medium, and Twitter, and also the creator of the once-very popular Bootstrap CSS library.
Here's what Pierre supports, which GitHub does not:
"In October [2025], Github shared they were averaging ~230 new repos per minute.
Last week we [at Pierre Computer] hit a sustained peak of > 15,000 repos per minute for 3 hours.
And in the last 30 days customers have created > 9M repos"
These are incredible numbers - if also self-reported - and something that GitHub clearly cannot get close to, at least not today! There are few details about customers, while the product - called Code.storage - seems to be in closed beta.
Still, this is the type of "git for AI agents" that GitHub has failed to build, and the type of infrastructure it needs badly.
Has GitHub lost focus and purpose?
GitHub's reliability issues are acute enough that, if they persist, teams will start giving alternatives a try, whether small startups such as Pierre or perhaps even self-hosting Git. But how did the largest Git host in the world neglect its customers and fail to prepare its infra for an increase in code commits and pull requests?
Mitchell Hashimoto, founder of Ghostty and a heavy user of GitHub himself, had advice on what he would do if he were in charge of GitHub, after growing frustrated with the state of its core offering. He writes (emphasis mine):
"Here's what I'd do if I was in charge of GitHub, in order:
1. Establish a North Star plan around being critical infrastructure for agentic code lifecycles and determine a set of ways to measure that.
2. Fire everyone who works on or advocates for Copilot and shut it down. It's not about the people, I'm sure there's many talented people; you're just working at the wrong company.
3. Buy Pierre and launch agentic repo hosting as the first agentic product. Repos would be separate from the legacy web product to start, since they're likely burdened with legacy cross product interactions.
4. Re-evaluate all product lines and initiatives against the new North Star. I suspect 50% get cut (to make room for different ones).
The big idea is all agentic interactions should critically rely on GitHub APIs. Code review should be agentic but the labs should be building that into GH (not bolted in through GHA like today, real first class platform primitives). GH should absolutely launch an agent chat primitive, agent mailboxes are obviously good. GH should be a platform and not an agent itself.
This is going to be very obviously lacking since I only have external ideas to work off of and have no idea how GitHub internals are working, what their KPIs are or what North Star they define, etc.
But, with imperfect information, this is what I'd do."
My sense is that GitHub has three concurrent problems:
- GitHub and Copilot are entangled with Microsoft's internal politics. GitHub's Copilot in 2021 was the first massively successful "AI product." Microsoft took the "Copilot" brand and used it across all of their product lines, creating low-quality AI integrations. Simultaneously, internal Microsoft orgs like Azure and Microsoft AI were trying to get their hands on GitHub, which is one of the most positive developer brands at Microsoft.
- GitHub has no leader, seemingly by design. GitHub's last CEO was Thomas Dohmke, who stepped down voluntarily, and Microsoft never backfilled the CEO role; instead, it carried out a reorg that made GitHub part of Microsoft's AI group and stripped its independence. It seems the "Microsoft AI" side won that battle.
- GitHub has no focus, and is stuck chasing Copilot as a revenue source. GitHub has no CEO and is caught up in internal politics, so, what can GitHub teams do? The safest bet is to increase revenue and the best way to do that is by investing more into GitHub Copilot, and ignoring long-term issues like reliability.
I agree with Mitchell: GitHub has no "North Star" and we see a large org being dysfunctional. That lack of vision - and CEO - is hitting hard:
- GitHub Copilot went from being the most-used AI agent in 2021 to being overtaken by Claude Code, and it is soon to be overtaken by Cursor.
- As a platform, GitHub has no vision for how to evolve to support AI agents. Sure, GitHub has an MCP server, but it has no "AI-native git platform" that can handle the massive load AI agents generate.
- GitHub keeps shipping small features and improvements without direction. For example, in October 2025, they started to work on stacked diffs. However, when it ships, the stacked diffs workflow might be mostly obsolete - at least with AI agents!
It's easy to win a market when you do one thing better than anyone else in the world. Right now, GitHub is doing too many things and doing a subpar job with Copilot, its platform, and AI infra.
Read the full issue of last week's The Pulse, or check out this week's The Pulse.
Catch up with recent The Pragmatic Engineer issues:
- Scaling Uber with Thuan Pham (Uber's first CTO -- podcast). We went into topics like scaling Uber from constant outages to global infrastructure, the shift to microservices and platform teams, and how AI is reshaping engineering.
- Building WhatsApp with Jean Lee (podcast): Jean Lee, engineer #19 at WhatsApp, on scaling the app with a tiny team, the Facebook acquisition, and what it reveals about the future of engineering.
- What will the Staff Engineer role look like in 2027 and beyond? What happens to the Staff engineer role when agents write more code? Actually, they could be more in demand than ever!
-
🔗 r/york Greyhound walk still on this Easter weekend? rss
I wanted to take my boy on the Greyhound Walk for the first time. I know it takes place on the first Sunday of the month, but as it's Easter Sunday I don't know if it's still going ahead?
submitted by /u/peachranunculus
[link] [comments] -
🔗 @binaryninja@infosec.exchange Who needs containers? You do! If you reverse firmware, macho files, malware, mastodon
Who needs containers? You do! If you reverse firmware, macho files, malware, or many other formats! Come see what's unlocked in Binary Ninja by this feature in our latest blog post from Brian:
-
🔗 r/Harrogate Empty Chairs - Wednesday 8th April rss
https://emptychairs.org.uk Empty Chairs is a simple idea: each evening we book a small table in a pub and leave a few chairs empty - inviting anyone who wants company to join us. There's no pressure, no agenda, and no expectation to stay longer than you want. I'll be hosting an event at Major Tom Social in Harrogate on Wednesday 8th April at 6pm till whenever! In the spirit of the campaign I'll be wearing a bright orange t-shirt; hope to see some of you there! submitted by /u/LectricVersion
[link] [comments]
-
🔗 Evan Schwartz Scour - March Update rss
Hi friends,
In March, Scour scoured 813,588 posts from 24,029 feeds (7,131 were newly added) and 488 new users signed up. Welcome!
Here's what's new in the product:
🔃 Feed Diversity Overhaul
Scour now does a better job of ensuring that your feed draws from a mix of sources and that no single interest or group of interests dominates. I had made a number of changes along these lines in the past, but they were fiddly and the diversification mechanism wasn't working that well. Under the hood, Scour now does a first pass to score how similar articles are to your interests and then has a separate step for selecting posts for your feed while keeping it diverse on a number of different dimensions.
🥰 More of What You Like
Content from websites and groups of interests you tend to like and/or click on more is now given slightly more room in your feed. Conversely, websites and groups of interests you tend to dislike or not click on will be given a bit less space.
For Scour, I'm always trying to think of how to show you more content you'll find interesting -- without trapping you in a small filter bubble (you can read about my ranking philosophy in the docs). After a number of iterations, I landed on a design that I'm happy with. I hope this strikes a good balance between making sure you see articles from your favorite sources, while still leaving room for the serendipity of finding a great new source that you didn't know existed.
❤️ Inline Reactions
After you click an article, Scour now explicitly asks you for your reaction. These reactions help tune your feed slightly, and they help me improve the ranking algorithm over time. Before, the reaction buttons were below every post, but that made them a bit hard to hit intentionally and easy to touch accidentally. If you want to react to an article without reading it first, you can also find them in the More Options (`...`) menu.

Thanks to Shane Sveller for pointing out that the reaction buttons were too small on mobile!
🎯 Exact Keyword Matching
Scour now supports exact keyword matching, in addition to using vector embeddings for semantic similarity. Articles that are similar to one of your interests but don't use the exact words or phrases from your interest definition will be ranked lower. Right now this applies to interests marked as "Specific" or "Normal" (this is also automatically determined when interests are created). This should cut down on the number of articles you see that are mis-categorized or clearly off-topic.
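In principle, such a hybrid check can be as simple as demoting semantically similar articles that never contain the exact phrase. The sketch below is purely illustrative and not Scour's actual implementation or weighting:

```typescript
// Illustrative hybrid ranking: embedding similarity is demoted when
// the article never contains the interest's exact phrase.
function hybridScore(semanticScore: number, article: string, keyword: string): number {
  const exact = article.toLowerCase().includes(keyword.toLowerCase());
  return exact ? semanticScore : semanticScore * 0.5; // demotion factor is made up
}

const text = "A deep dive into Rust's borrow checker";
console.log(hybridScore(0.8, text, "borrow checker"));     // -> 0.8
console.log(hybridScore(0.8, text, "garbage collection")); // -> 0.4
```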
Thanks to Alex Miller and an anonymous user for prompting this, and thanks to Alex, JackJackson, mhsid, snuggles, and anders_no for all the Off-Topic reports!
⁉️ Why Didn't This Appear?
Sometimes I see an article on Hacker News or elsewhere and wonder why it didn't show up in my Scour feed. You can now paste links into the Why didn't I see this? page, and it will give you a bit of an explanation. You can also report it so I can look into it more and continue to improve the ranking algorithm over time.
🔖 Some of My Favorite Posts
Here were some of my favorite posts that I found on Scour in March:
- For anyone building products, this is a good reminder to make sure you're trying out and experiencing the bad parts of your product: Bored of eating your own dogfood? Try smelling your own farts!.
- This was a brief, interesting history and technical overview of document formats, from `.doc` to `.docx` and `.odf`, and why Markdown "won": Markdown Ate The World.
- A reminder that any user-generated input, including repo branch names, can be malicious: OpenAI Codex: How a Branch Name Stole GitHub Tokens.
- This is a very detailed and informative visual essay explaining how quantization (compression) for large language models works: Quantization from the ground up.
- I'm not currently using Turso (the Rust rewrite of SQLite), but I think what they're doing is interesting. Including this experimental version that speaks the Postgres SQL dialect: pgmicro.
- And because I like making -- and eating -- sour sourdough: How To Make Sourdough Bread More (Or Less) Sour.
Happy Scouring!
- Evan
P.S. If you use a coding agent like Claude Code, I also wrote up A Rave Review of Superpowers, a plugin that makes me much more productive.
-
🔗 r/Leeds Delivery guy left the gate open, now there’s an old, sort of deaf and very silly dog on the loose. Anyone seen owt? rss
if anyone has seen little edwina please give us a shout. most likely around Middleton area 🙏
submitted by /u/RoyaleForFree
[link] [comments] -
🔗 Simon Willison The Axios supply chain attack used individually targeted social engineering rss
The Axios team have published a full postmortem on the supply chain attack which resulted in a malware dependency going out in a release the other day, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked:
so the attack vector mimics what google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering
they tailored this process specifically to me by doing the following:
- they reached out masquerading as the founder of a company; they had cloned the company founder's likeness as well as the company itself.
- they then invited me to a real slack workspace. this workspace was branded to the company's CI and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linkedin posts, the linkedin posts i presume just went to the real company's account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also a number of other oss maintainers.
- they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.
- the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.
- everything was extremely well co-ordinated, looked legit and was done in a professional manner.
A RAT is a Remote Access Trojan - this was the software which stole the developer's credentials which could then be used to publish the malicious package.
That's a very effective scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.
Every maintainer of open source software used by enough people to be worth taking in this way needs to be familiar with this attack strategy.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/Yorkshire Yorkshire Coast Kite Festival, with some of the world’s largest kites is to return this spring. rss
submitted by /u/ScrollAndThink
[link] [comments]
-
🔗 Anton Zhiyanov Porting Go's strings package to C rss
Creating a subset of Go that translates to C was never my end goal. I liked writing C code with Go, but without the standard library it felt pretty limited. So, the next logical step was to port Go's stdlib to C.
Of course, this isn't something I could do all at once. I started with the `io` package, which provides core abstractions like `Reader` and `Writer`, as well as general-purpose functions like `Copy`. But `io` isn't very interesting on its own, since it doesn't include specific reader or writer implementations. So my next choices were naturally `bytes` and `strings` — the workhorses of almost every Go program. This post is about how the porting process went.

Bits and UTF-8 • Bytes • Allocators • Buffers and builders • Benchmarks • Optimizing search • Optimizing builder • Wrapping up
Bits and UTF-8
Before I could start porting `bytes`, I had to deal with its dependencies first:

- `math/bits` implements bit counting and manipulation functions.
- `unicode/utf8` implements functions for UTF-8 encoded text.
Both of these packages are made up of pure functions, so they were pretty easy to port. The only minor challenge was the difference in operator precedence between Go and C — specifically, bit shifts (`<<`, `>>`). In Go, bit shifts have higher precedence than addition and subtraction. In C, they have lower precedence:

```go
// Go: shift has HIGHER precedence than +
var x uint32 = 1<<2 + 3 // (1 << 2) + 3 == 7
```

```c
// C: shift has LOWER precedence than +
uint32_t x = 1 << 2 + 3; // 1 << (2 + 3) == 32
```

The simplest solution was to just use parentheses everywhere shifts are involved:
```go
// Go: Mul64 returns the 128-bit product of x and y: (hi, lo) = x * y
func Mul64(x, y uint64) (hi, lo uint64) {
	const mask32 = 1<<32 - 1
	x0 := x & mask32
	x1 := x >> 32
	y0 := y & mask32
	y1 := y >> 32
	w0 := x0 * y0
	t := x1*y0 + w0>>32
	// ...
}
```

```c
// C: Mul64 returns the 128-bit product of x and y: (hi, lo) = x * y
so_Result bits_Mul64(uint64_t x, uint64_t y) {
    const so_int mask32 = ((so_int)1 << 32) - 1;
    uint64_t x0 = (x & mask32);
    uint64_t x1 = (x >> 32);
    uint64_t y0 = (y & mask32);
    uint64_t y1 = (y >> 32);
    uint64_t w0 = x0 * y0;
    uint64_t t = x1 * y0 + (w0 >> 32);
    // ...
}
```

With `bits` and `utf8` done, I moved on to `bytes`.

Bytes
The `bytes` package provides functions for working with byte slices:

```go
// Count counts the number of non-overlapping instances of sep in s.
func Count(s, sep []byte) int

// Equal reports whether a and b are the
// same length and contain the same bytes.
func Equal(a, b []byte) bool

// Index returns the index of the first instance
// of sep in s, or -1 if sep is not present in s.
func Index(s, sep []byte) int

// Repeat returns a new byte slice consisting of count copies of b.
func Repeat(b []byte, count int) []byte

// and others
```

Some of them were easy to port, like `Equal`. Here's how it looks in Go:

```go
// Equal reports whether a and b are the
// same length and contain the same bytes.
func Equal(a, b []byte) bool {
	// Neither cmd/compile nor gccgo allocates for these string conversions.
	return string(a) == string(b)
}
```

And here's the C version:

```c
// bytes_string reinterprets a byte slice as a string (zero-copy).
#define so_bytes_string(bs) ({ \
    so_Slice _bs = (bs); \
    (so_String){(const char*)_bs.ptr, _bs.len}; \
})

// string_eq returns true if two strings are equal.
static inline bool so_string_eq(so_String s1, so_String s2) {
    return s1.len == s2.len &&
        (s1.len == 0 || memcmp(s1.ptr, s2.ptr, s1.len) == 0);
}

// Equal reports whether a and b are the
// same length and contain the same bytes.
bool bytes_Equal(so_Slice a, so_Slice b) {
    return so_string_eq(so_bytes_string(a), so_bytes_string(b));
}
```

Just like in Go, the `so_bytes_string` (`[]byte` → `string`) macro doesn't allocate memory; it just reinterprets the byte slice's underlying storage as a string. The `so_string_eq` function (which works like `==` in Go) is easy to implement using `memcmp` from the libc API.

Another example is the `IndexByte` function, which looks for a specific byte in a slice. Here's the pure-Go implementation:

```go
// IndexByte returns the index of the first instance
// of c in b, or -1 if c is not present in b.
func IndexByte(b []byte, c byte) int {
	for i, x := range b {
		if x == c {
			return i
		}
	}
	return -1
}
```

And here's the C version:

```c
// IndexByte returns the index of the first instance
// of c in b, or -1 if c is not present in b.
so_int bytes_IndexByte(so_Slice b, so_byte c) {
    for (so_int i = 0; i < so_len(b); i++) {
        so_byte x = so_at(so_byte, b, i);
        if (x == c) {
            return i;
        }
    }
    return -1;
}
```

I used a regular C `for` loop to mimic Go's `for-range`:

- Loop over the slice indexes with `for` (`so_len` is a macro that returns `b.len`, similar to Go's `len` built-in).
- Access the i-th byte with `so_at` (a bounds-checking macro that returns `*((so_byte*)b.ptr + i)`).

But `Equal` and `IndexByte` don't allocate memory. What should I do with `Repeat`, since it clearly does? I had a decision to make.

Allocators
The Go runtime handles memory allocation and deallocation automatically. In C, I had a few options:
- Use a reliable garbage collector like Boehm GC to closely match Go's behavior.
- Allocate memory with libc's `malloc` and have the caller free it later with `free`.
- Introduce allocators.
An allocator is a tool that reserves memory (typically on the heap) so a program can store its data structures there. See Allocators from C to Zig if you want to learn more about them.
For me, the winner was clear. Modern systems programming languages like Zig and Odin clearly showed the value of allocators:
- It's obvious whether a function allocates memory or not: if it has an allocator as a parameter, it allocates.
- It's easy to use different allocation methods: you can use `malloc` for one function, an arena for another, and a stack allocator for a third.
- It helps with testing and debugging: you can use a tracking allocator to find memory leaks, or a failing allocator to test error handling.
An `Allocator` is an interface with three methods: `Alloc`, `Realloc`, and `Free`. In C, it translates to a struct with function pointers:

```c
// Allocator defines the interface for memory allocators.
typedef struct {
    void* self;
    so_Result (*Alloc)(void* self, so_int size, so_int align);
    so_Result (*Realloc)(void* self, void* ptr,
        so_int oldSize, so_int newSize, so_int align);
    void (*Free)(void* self, void* ptr, so_int size, so_int align);
} mem_Allocator;
```

As I mentioned in the post about porting the io package, this interface representation isn't as efficient as using a static method table, but it's simpler. If you're interested in other options, check out the post on interfaces.
By convention, if a function allocates memory, it takes an allocator as its first parameter. So Go's `Repeat`:

```go
// Repeat returns a new byte slice consisting of count copies of b.
func Repeat(b []byte, count int) []byte
```

Translates to this C code:

```c
// Repeat returns a new byte slice consisting of count copies of b.
//
// If the allocator is nil, uses the system allocator.
// The returned slice is allocated; the caller owns it.
so_Slice bytes_Repeat(mem_Allocator a, so_Slice b, so_int count)
```

If the caller doesn't care about using a specific allocator, they can just pass an empty allocator, and the implementation will use the system allocator — `calloc`, `realloc`, and `free` from libc.

Here's a simplified version of the system allocator (I removed safety checks to make it easier to read):
```c
// SystemAllocator uses the system's malloc, realloc, and free functions.
// It zeros out new memory on allocation and reallocation.
typedef struct {} mem_SystemAllocator;

so_Result mem_SystemAllocator_Alloc(void* self, so_int size, so_int align) {
    void* ptr = calloc(1, (size_t)(size));
    if (ptr == NULL) {
        return (so_Result){.val.as_ptr = NULL, .err = mem_ErrOutOfMemory};
    }
    return (so_Result){.val.as_ptr = ptr, .err = NULL};
}

so_Result mem_SystemAllocator_Realloc(void* self, void* ptr,
    so_int oldSize, so_int newSize, so_int align)
{
    void* newPtr = realloc(ptr, (size_t)(newSize));
    if (newPtr == NULL) {
        return (so_Result){.val.as_ptr = NULL, .err = mem_ErrOutOfMemory};
    }
    if (newSize > oldSize) {
        // Zero new memory beyond the old size.
        memset((char*)newPtr + oldSize, 0, (size_t)(newSize - oldSize));
    }
    return (so_Result){.val.as_ptr = newPtr, .err = NULL};
}

void mem_SystemAllocator_Free(void* self, void* ptr, so_int size, so_int align) {
    free(ptr);
}
```

The system allocator is stateless, so it's safe to have a global instance:

```c
// System is an instance of a memory allocator that uses
// the system's malloc, realloc, and free functions.
mem_Allocator mem_System = {
    .self = &(mem_SystemAllocator){},
    .Alloc = mem_SystemAllocator_Alloc,
    .Free = mem_SystemAllocator_Free,
    .Realloc = mem_SystemAllocator_Realloc};
```

Here's an example of how to call `Repeat` with an allocator:

```c
so_Slice src = so_string_bytes(so_str("abc"));
so_Slice got = bytes_Repeat(mem_System, src, 3);

so_String gotStr = so_bytes_string(got);
if (so_string_ne(gotStr, so_str("abcabcabc"))) {
    so_panic("want Repeat(abc) == abcabcabc");
}

mem_FreeSlice(so_byte, mem_System, got);
```

Way better than hidden allocations!
Buffers and builders
Besides pure functions, `bytes` and `strings` also provide types like `bytes.Buffer`, `bytes.Reader`, and `strings.Builder`. I ported them using the same approach as with functions.

For types that allocate memory, like `Buffer`, the allocator becomes a struct field:

```c
// A Buffer is a variable-sized buffer of bytes
// with Read and Write methods.
typedef struct {
    mem_Allocator a;
    so_Slice buf;
    so_int off;
} bytes_Buffer;

// Usage example.
bytes_Buffer buf = bytes_NewBuffer(mem_System, (so_Slice){0});

bytes_Buffer_WriteString(&buf, so_str("hello"));
bytes_Buffer_WriteString(&buf, so_str(" world"));

so_String str = bytes_Buffer_String(&buf);
if (so_string_ne(str, so_str("hello world"))) {
    so_panic("Buffer.WriteString failed");
}

bytes_Buffer_Free(&buf);
```

The code is pretty wordy — most C developers would dislike using `bytes_Buffer_WriteString` instead of something shorter like `buf_writestr`. My solution to this problem is to automatically translate Go code to C (which is actually what I do when porting Go's stdlib). If you're interested, check out the post about this approach — Solod: Go can be a better C.

Types that don't allocate, like `bytes.Reader`, need no special treatment — they translate directly to C structs without an allocator field.

The `strings` package is the twin of `bytes`, so porting it was uneventful. Here's a `strings.Builder` usage example in Go and C side by side:

```go
var sb strings.Builder
sb.WriteString("Hello")
sb.WriteByte(',')
sb.WriteRune(' ')
sb.WriteString("world")

s := sb.String()
if s != "Hello, world" {
    panic("want sb.String() == 'Hello, world'")
}
```

```c
strings_Builder sb = {.a = mem_System};
strings_Builder_WriteString(&sb, so_str("Hello"));
strings_Builder_WriteByte(&sb, ',');
strings_Builder_WriteRune(&sb, U' ');
strings_Builder_WriteString(&sb, so_str("world"));

so_String s = strings_Builder_String(&sb);
if (so_string_ne(s, so_str("Hello, world"))) {
    so_panic("want sb.String() == 'Hello, world'");
}

strings_Builder_Free(&sb);
```

Again, the C code is just a more verbose version of Go's implementation, plus explicit memory allocation.
Benchmarks
What's the point of writing C code if it's slow, right? I decided it was time to benchmark the ported C types and functions against their Go versions.
To do that, I ported the benchmarking part of Go's `testing` package. Surprisingly, the simplified version was only 300 lines long and included everything I needed:

- Figuring out how many iterations to run.
- Running the benchmark function in a loop.
- Recording metrics (ns/op, MB/s, B/op, allocs/op).
- Reporting the results.
Here's a sample benchmark for the `strings.Builder` type:

```c
static so_String someStr = so_str("some string sdljlk jsklj3lkjlk djlkjw");
static const so_int numWrite = 16;
volatile so_String sink = {0};

void main_WriteString_AutoGrow(testing_B* b) {
    mem_Allocator a = testing_B_Allocator(b);
    for (; testing_B_Loop(b);) {
        strings_Builder sb = strings_NewBuilder(a);
        for (so_int i = 0; i < numWrite; i++) {
            strings_Builder_WriteString(&sb, someStr);
        }
        sink = strings_Builder_String(&sb);
        strings_Builder_Free(&sb);
    }
}

// more benchmarks...
```

Reads almost like Go's benchmarks.
To monitor memory usage, I created `Tracker` — a memory allocator that wraps another allocator and keeps track of allocations:

```c
// A Stats records statistics about the memory allocator.
typedef struct {
    uint64_t Alloc;
    uint64_t TotalAlloc;
    uint64_t Mallocs;
    uint64_t Frees;
} mem_Stats;

// A Tracker wraps an Allocator and tracks all
// allocations and deallocations made through it.
typedef struct {
    mem_Allocator Allocator;
    mem_Stats Stats;
} mem_Tracker;

so_Result mem_Tracker_Alloc(void* self, so_int size, so_int align) {
    mem_Tracker* t = self;
    so_Result res = t->Allocator.Alloc(t->Allocator.self, size, align);
    // ...
    t->Stats.Alloc += (uint64_t)(size);
    t->Stats.TotalAlloc += (uint64_t)(size);
    t->Stats.Mallocs++;
    return (so_Result){.val.as_ptr = res.val.as_ptr, .err = NULL};
}

void mem_Tracker_Free(void* self, void* ptr, so_int size, so_int align) {
    mem_Tracker* t = self;
    t->Allocator.Free(t->Allocator.self, ptr, size, align);
    t->Stats.Alloc -= (uint64_t)(size);
    t->Stats.Frees++;
}
```

The benchmark gets an allocator through the `testing_RunBenchmarks` function and wraps it in a `Tracker` to keep track of allocations:

```c
int main(void) {
    so_Slice benchs = {(testing_Benchmark[4]){
        {.Name = so_str("WriteS_AutoGrow"), .F = main_WriteString_AutoGrow},
        {.Name = so_str("WriteS_PreGrow"), .F = main_WriteString_PreGrow},
        {.Name = so_str("WriteB_AutoGrow"), .F = main_Write_AutoGrow},
        {.Name = so_str("WriteB_PreGrow"), .F = main_Write_PreGrow}}, 4, 4};
    testing_RunBenchmarks(mem_System, benchs);
}
```

There's no auto-discovery, but the manual setup is quite straightforward.
Optimizing search
With the benchmarking setup ready, I ran benchmarks on the `strings` package. Some functions did well — about 1.5-2x faster than their Go equivalents:

```
go
Benchmark_Clone-8     12143073    98.50 ns/op    1024 B/op    1 allocs/op
Benchmark_Fields-8      791077     1524 ns/op     288 B/op    1 allocs/op
Benchmark_Repeat-8     9197040    127.3 ns/op    1024 B/op    1 allocs/op

c
Benchmark_Clone       27935466    41.84 ns/op    1024 B/op    1 allocs/op
Benchmark_Fields       1319384    907.7 ns/op     272 B/op    1 allocs/op
Benchmark_Repeat      18445929    64.11 ns/op    1024 B/op    1 allocs/op
```

But `Index` (searching for a substring in a string) was a total disaster — it was nearly 20 times slower than in Go:

```
go
Benchmark_Index-8     47874408    25.14 ns/op    0 B/op    0 allocs/op

c
Benchmark_Index         483787    483.1 ns/op    0 B/op    0 allocs/op
```

The problem was caused by the `IndexByte` function we looked at earlier:

```go
// IndexByte returns the index of the first instance
// of c in b, or -1 if c is not present in b.
func IndexByte(b []byte, c byte) int {
	for i, x := range b {
		if x == c {
			return i
		}
	}
	return -1
}
```

This "pure" Go implementation is just a fallback. On most platforms, Go uses a specialized version of `IndexByte` written in assembly.

For the C version, the easiest solution was to use `memchr`, which is also optimized for most platforms:

```c
static inline so_int bytealg_IndexByte(so_Slice b, so_byte c) {
    void* at = memchr(b.ptr, (int)c, b.len);
    if (at == NULL) return -1;
    return (so_int)((char*)at - (char*)b.ptr);
}
```

With this fix, the benchmark results changed drastically:

```
go
Benchmark_Index-8         47874408    25.14 ns/op    0 B/op    0 allocs/op
Benchmark_IndexByte-8     54982188    21.98 ns/op    0 B/op    0 allocs/op

c
Benchmark_Index           33552540    35.21 ns/op    0 B/op    0 allocs/op
Benchmark_IndexByte       36868624    32.81 ns/op    0 B/op    0 allocs/op
```

Still not quite as fast as Go, but it's close. Honestly, I don't know why the `memchr`-based implementation is still slower than Go's assembly here, but I decided not to pursue it any further.
stringsfunction benchmarks, the ported versions won all of them except for two:Benchmark | Go | C (mimalloc) | C (arena) | Winner
---|---|---|---|---
Clone | 99ns | 42ns | 34ns | C - 2.4x
Compare | 47ns | 36ns | 36ns | C - 1.3x
Fields | 1524ns | 908ns | 912ns | C - 1.7x
Index | 25ns | 35ns | 34ns | Go - 0.7x
IndexByte | 22ns | 33ns | 33ns | Go - 0.7x
Repeat | 127ns | 64ns | 67ns | C - 1.9x
ReplaceAll | 243ns | 200ns | 203ns | C - 1.2x
Split | 1899ns | 1399ns | 1423ns | C - 1.3x
ToUpper | 2066ns | 1602ns | 1622ns | C - 1.3x
Trim | 501ns | 373ns | 375ns | C - 1.3x

Optimizing builder
`strings.Builder` is a common way to compose strings from parts in Go, so I tested its performance too. The results were worse than I expected:

```
go
Benchmark_WriteS_AutoGrow-8      5385492    224.0 ns/op    1424 B/op    5 allocs/op
Benchmark_WriteS_PreGrow-8      10692721    112.9 ns/op     640 B/op    1 allocs/op

c
Benchmark_WriteS_AutoGrow        5659255    212.9 ns/op    1147 B/op    5 allocs/op
Benchmark_WriteS_PreGrow         9811054    122.1 ns/op     592 B/op    1 allocs/op
```

Here, the C version performed about the same as Go, but I expected it to be faster. Unlike `Index`, `Builder` is written entirely in Go, so there's no reason the ported version should lose in this benchmark.

The `WriteString` method looked almost identical in Go and C:

```go
// WriteString appends the contents of s to b's buffer.
// It returns the length of s and a nil error.
func (b *Builder) WriteString(s string) (int, error) {
	b.buf = append(b.buf, s...)
	return len(s), nil
}
```

```c
static so_Result strings_Builder_WriteString(void* self, so_String s) {
    strings_Builder* b = self;
    strings_Builder_grow(b, so_len(s));
    b->buf = so_extend(so_byte, b->buf, so_string_bytes(s));
    return (so_Result){.val.as_int = so_len(s), .err = NULL};
}
```

Go's `append` automatically grows the backing slice, while `strings_Builder_grow` does it manually (`so_extend`, on the contrary, doesn't grow the slice — it's merely a `memcpy` wrapper). So, there shouldn't be any difference. I had to investigate.
(int, error)uses three registers: one for 8-byteint, two for theerrorinterface (implemented as two 8-byte pointers). But in C,so_Resultwas a single struct made up of twoso_Valueunions and aso_Errorpointer:typedef union { bool as_bool; // 1 byte so_int as_int; // 8 bytes int64_t as_i64; // 8 bytes so_String as_string; // 16 bytes (ptr + len) so_Slice as_slice; // 24 bytes (ptr + len + cap) void* as_ptr; // 8 bytes // ... other types } so_Value; typedef struct { so_Value val; // 24 bytes so_Value val2; // 24 bytes so_Error err; // 8 bytes } so_Result;Of course, this 56-byte monster can't be returned in registers — the C calling convention passes it through memory instead. Since
WriteStringis on the hot path in the benchmark, I figured this had to be the issue. So I switched from a single monolithicso_Resulttype to signature-specific types for multi- return pairs:so_R_bool_errfor(bool, error);so_R_int_errfor(so_int, error);so_R_str_errfor(so_String, error);- etc.
Now, the `Builder.WriteString` implementation in C looked like this:

```c
typedef struct {
    so_int val;
    so_Error err;
} so_R_int_err;

static so_R_int_err strings_Builder_WriteString(void* self, so_String s) {
    // ...
}
```

`so_R_int_err` is only 16 bytes — small enough to be returned in two registers. Problem solved! But it wasn't — the benchmark only showed a slight improvement.

After looking into it more, I finally found the real issue: unlike Go, the C compiler wasn't inlining `WriteString` calls. Adding `inline` and moving `strings_Builder_WriteString` to the header file made all the difference:

```
go
Benchmark_WriteS_AutoGrow-8      5385492    224.0 ns/op    1424 B/op    5 allocs/op
Benchmark_WriteS_PreGrow-8      10692721    112.9 ns/op     640 B/op    1 allocs/op

c
Benchmark_WriteS_AutoGrow       10344024    115.9 ns/op    1147 B/op    5 allocs/op
Benchmark_WriteS_PreGrow        41045286    28.74 ns/op     592 B/op    1 allocs/op
```

2-4x faster. That's what I was hoping for!
Wrapping up
Porting `bytes` and `strings` was a mix of easy parts and interesting challenges. The pure functions were straightforward — just translate the syntax and pay attention to operator precedence. The real design challenge was memory management. Using allocators turned out to be a good solution, making memory allocation clear and explicit without being too difficult to use.

The benchmarks showed that the C versions outperformed Go in most cases, sometimes by 2-4x. The only exceptions were `Index` and `IndexByte`, where Go relies on hand-written assembly. The `strings.Builder` optimization was an interesting challenge: what seemed like a return-type issue was actually an inlining problem, and fixing it gave a nice speed boost.

There's a lot more of Go's stdlib to port. In the next post, we'll cover `time` — a truly unique Go package. In the meantime, if you'd like to write Go that translates to C — with no runtime and manual memory management — I invite you to try Solod. The `bytes` and `strings` packages are included, of course. -
🔗 r/LocalLLaMA Netflix just dropped their first public model on Hugging Face: VOID: Video Object and Interaction Deletion rss
Hugging Face netflix/void-model: https://huggingface.co/netflix/void-model
Project page / GitHub: https://github.com/Netflix/void-model
Demo: https://huggingface.co/spaces/sam-motamed/VOID
submitted by /u/Nunki08
[link] [comments]
-
🔗 r/reverseengineering Open source runtime that deep-inspects AI agent protocol traffic (MCP/ACP) — Rust rss
submitted by /u/After_Somewhere_2254
[link] [comments] -
🔗 r/LocalLLaMA Gemma 4 is fine great even … rss
Been playing with the new Gemma 4 models. It's amazing, great even, but boy did it make me appreciate the level of quality the Qwen team produced, and I'm able to have much larger context windows on my standard consumer hardware. submitted by /u/ThinkExtension2328
[link] [comments]
-
🔗 r/Leeds A-W of Leeds: Armley rss
For my free Substack newsletter Bury the Leeds I’m walking through each of the 33 council wards in Leeds, from Adel to Wetherby, one by one, using unusual articles I’ve found from the city’s past as a rough guide.
My fourth walk took me to Armley. I had to start at the most famous postcode in the patch - the prison, which since 1847 has loomed above the city like a threat. I wrote about Emily Swann, who was the only woman to be executed there, a few days after Christmas in 1903. Quite a sad tale, that one.
I also discuss a forgetful pieman in the Albion pub, the end of the line for Mollie, the beloved delivery horse of Tong Road and troublesome hobbledehoys in Armley Park.
Next stop is Beeston and Holbeck!
https://burytheleeds.substack.com/p/a-w-of-leeds-armley
Have a solid Easter weekend r/Leeds !
submitted by /u/bluetrainlinesss
[link] [comments] -
🔗 r/york Putting together a cycle parking map rss
Hello!
I've just started cycling again, and realised the bike racks have changed a lot since I last parked my bike in town. I was wondering if anyone can help me pinpoint where the cycle racks are now in the city centre?
I'm going to create a map for the city centre. I will be out and about checking where they are but if anyone can help me locate them first - that would be great!
submitted by /u/donttrustthellamas
[link] [comments] -
🔗 r/wiesbaden Where does wild garlic (Bärlauch) grow in Wiesbaden? rss
Hey, does anyone know where to find wild garlic (to pick yourself) in Wiesbaden or the surrounding area?
I'd like to cook with it and would be grateful for any tips 🙏
submitted by /u/Sea_Rip_3269
[link] [comments] -
🔗 r/Leeds Hello everyone rss
Hi everyone, I’m really struggling at the moment and could use some advice or support.
I have learning disabilities, and I was in a relationship for about a year that ended two weeks ago. Unfortunately, things weren’t healthy, and I ended up in a situation where most of my money was spent on my ex rather than on myself. Now that the relationship has ended, I’ve been left with very little.
Right now, I don’t have any food or essentials at home, and I don’t get paid for another 10 days. With the Easter weekend coming up, I’m worried because I’m not sure what help is available during this time.
If anyone knows of local services, food banks, or any support that might be open over Easter, I would really appreciate the information. Even just some guidance on where I can turn would mean a lot.
Thank you for taking the time to read this.
submitted by /u/StunningAd8386
[link] [comments] -
🔗 r/Leeds Moving to Yorkshire rss
Hi everyone,
I’m starting work as a doctor in East Yorkshire this August and wanted some advice on where to live. I’ve never lived in Yorkshire before, so I’m not too familiar with the area. Ideally I’d like to be in a city like Leeds, York, or Hull, but I’m unsure how manageable the commutes are.
Options I’m considering:
- Living in Leeds and commuting to Hull/Scunthorpe
- Living in York and commuting to Scarborough
- Living in Hull and commuting to Grimsby/Scunthorpe
I can commute by train or car—would really appreciate any insight on travel times, reliability, and what these routes are like for hospital shifts.
Also open to general recommendations on the best place to live as a junior doctor in the area.
Thanks 😊
submitted by /u/ExpressSort9553
[link] [comments] -
🔗 r/LocalLLaMA qwen 3.6 voting rss
I am afraid you have to use X, guys: https://x.com/ChujieZheng/status/2039909486153089250 submitted by /u/jacek2023
[link] [comments]
-
🔗 r/wiesbaden Good physiotherapy rss
Does anyone have a recommendation for good physiotherapy in Wiesbaden?
submitted by /u/wuyntmm
[link] [comments] -
🔗 r/Leeds First bus tap in, tap out cost £5? rss
Took the 72 route from the Civic-Q stop to Bramley town end. I tapped in at the driver's door, then tapped the reader on the pillar near the driver's door, as that is the way it is meant to work. The reader on the pillar had 4 green lights and one red light, and it beeped. I didn't see what was on the screen as it was so low down.
I assumed it had not worked, so I tapped on the driver reader, and it stated it accepted it.
Did I commit a user error? Did it double charge me, or does the route cost £5? I have emailed First Bus to try and get a refund, as they advertise it should only cost £2.50 for the entire route.
Did the tap-out reader on the pillar work, even though I didn't think it did?
I should note that I did use Apple Pay, which does randomise the card details each time
Edit: for anyone reading this in future, follow /u/dotpaul's advice. Go to this link to check it has been charged correctly: https://first-group.littlepay.com/en/first-group/signin
Don't do what I did and tap on again unnecessarily; the tap off on the machine has worked, even if there's nothing on the screen, as long as the journey shows as complete on the website.
submitted by /u/L0rdLogan
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release

New releases:
- [vtable-context-tools](https://github.com/oxiKKK/ida-vtable-tools): 1.0.2
-