to read (pdf)
- Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
- Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
- Letting AI Actively Manage Its Own Context | 明天的乌云
- Garden Offices for Sale UK - Portable Space
- Cord: Coordinating Trees of AI Agents | June Kim
- March 23, 2026
-
🔗 r/Yorkshire Best time to excavate in Whitby/ surrounding area rss
I need fossils for my dinosaur park. Nobody will take my vision seriously without them. So I am coming to Whitby coast one day, with pickaxe and chisel and magnifying glass, to prove them wrong. What time of the year is it best to excavate?
I’ve been here a few good times, when the cliffs crumbled a bit. I found some pretty good fossils, but I feel as if there is something better waiting for me.
What time of year are the best fossils found?
submitted by /u/DinosaurGuy65
[link] [comments] -
🔗 r/wiesbaden You, in the Liliencarré rss
Hey,
today a pretty woman with red streaks in her hair and a charming smile walked past me in the Liliencarré. We glanced at each other briefly and smiled.
I don't know whether she will remember the moment, but her smile saved my day.
So I'd like to use these lines to thank you for it.
Maybe we'll see each other again sometime.
Best wishes from the
man in the lilac jacket
submitted by /u/wie_throwaway
[link] [comments] -
🔗 r/reverseengineering We got Skype to log in - One major step in figuring out the popular 2000s IM client rss
submitted by /u/Gullible_Injury7023
[link] [comments] -
🔗 apple/embedding-atlas v0.19.0 release
New Features
- Audio data support: in the tooltip / instances view, audio data can be shown with an audio player.
What's Changed
- test: update python deps and initial pytest setup for backend folder by @donghaoren in #173
- feat: special color for 0 in counts by @domoritz in #178
- ci: update versions of ci packages by @domoritz in #176
- feat: support audio data with a audio player by @donghaoren in #179
- chore: bump version to 0.19.0 by @donghaoren in #180
- fix: valueKind function doesn't handle null values by @donghaoren in #181
Full Changelog :
v0.18.1...v0.19.0 -
🔗 r/reverseengineering I tried multiple tools (HTTP Toolkit, Frida, Burp, PCAPdroid and others) on a rooted phone, but one app won't open or work with them. I just need the API / WS endpoint it fetches data from. Can anyone help? rss
submitted by /u/killer78698
[link] [comments] -
🔗 r/wiesbaden Love the city rss
Wiesbaden is a really cool city. I'm surprised by and love how it's designed, and the people. It's very serene and charming. At first I thought it was pretty but boring, but now I'm amazed.
submitted by /u/Single_Lunch_5671
[link] [comments] -
🔗 r/york Looking for jewellers for valuations rss
Hi everyone,
I'm looking for jewellers in and around York who do valuations (and possibly insurance reports, depending on the valuations) for some pieces I've inherited recently after the deaths of some relatives.
Any suggestions/experiences/thoughts are greatly appreciated
Thanks!!
submitted by /u/Space-cowboy1995
[link] [comments] -
🔗 r/Yorkshire Bartle - Yorkshire Folklore rss
I just stumbled across this lovely film on youtube about a village tradition in Yorkshire dating back to the 1400s. Thought it deserved a share, I hope they get some subs.
https://www.youtube.com/watch?v=BYrx2IypaHY
submitted by /u/Future_You_2800
[link] [comments] -
🔗 r/reverseengineering I built an FPGA reimplementation of the 3dfx Voodoo 1 rss
submitted by /u/r_retrohacking_mod2
[link] [comments] -
🔗 r/reverseengineering TIL you can detect a UEFI bootkit from usermode by just asking it nicely rss
submitted by /u/SapDragons
[link] [comments] -
🔗 @HexRaysSA@infosec.exchange We'll be at the RSA this week! mastodon
If you want to talk about VR, AI, malware, and what’s on the IDA roadmap, book some time with us.
👉 https://meetings-eu1.hubspot.com/chris-hernandez -
🔗 r/Yorkshire Activities around Yorkshire rss
Might be a long shot but I'm looking for activities to do around (mainly west) Yorkshire; as beautiful as the place is, I'm not looking for anything too outdoorsy.
Something creative such as pottery painting (already done it); I've seen a few jewellery or candle making workshops, but these are only available on a select few days. An escape room would also be fun. Open to any ideas, just wondered what else is out there, something that would kill at least 1-2 hours!
submitted by /u/ghostofhogwarts
[link] [comments] -
🔗 r/LocalLLaMA China's open-source dominance threatens US AI lead, US advisory body warns rss
submitted by /u/Prolapse_to_Brolapse
[link] [comments]
-
🔗 r/reverseengineering Using local LLM and Ghidra to analyze malware (Part 2) rss
submitted by /u/moonlightelite
[link] [comments] -
🔗 r/Yorkshire Stargazing and astrophotography experiences in Yorkshire rss
Can anyone recommend some great places to spend the night (not camping, actual rooms) in the Yorkshire Dales / Peak District or North Yorkshire Moors where the skies are really dark and you can stargaze with a telescope and try some astro photography? Looking to gift an experience to my partner for his birthday. Particularly interested in guided experiences if possible at all :)
submitted by /u/Sajola_91
[link] [comments] -
🔗 sacha chua :: living an awesome life 2026-03-23 Emacs news rss
Removed elecxzy comment-dwim, whoops.
Might be a good opportunity to set up better auto-saves, with buffer-guardian.el inspiring an update to super-save 0.5. Also, there were a couple of interesting experiments embedding Chromium (Reddit) or native macOS views in Emacs (Reddit), and one about embedding Emacs in a webpage (Reddit).
- Upcoming events (iCal file, Org):
- Emacs Berlin: Emacs-Berlin Hybrid Meetup https://emacs-berlin.org/ Wed Mar 25 1100 America/Vancouver - 1300 America/Chicago - 1400 America/Toronto - 1800 Etc/GMT - 1900 Europe/Berlin - 2330 Asia/Kolkata – Thu Mar 26 0200 Asia/Singapore
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Mar 28 0130 America/Vancouver - 0330 America/Chicago - 0430 America/Toronto - 0830 Etc/GMT - 0930 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- EmacsATX: Emacs Social https://www.meetup.com/emacsatx/events/313720093/ Thu Apr 2 1600 America/Vancouver - 1800 America/Chicago - 1900 America/Toronto - 2300 Etc/GMT – Fri Apr 3 0100 Europe/Berlin - 0430 Asia/Kolkata - 0700 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri Apr 3 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Emacs configuration:
- Emacs Lisp:
- elisp-2025/el-xeger.el (Reddit) - generate text from a regex
- [20] Working on Canvas Patch (Contd..) - 3/22/2026, 2:31:11 PM - Dyne.org TV
- Writing:
- Appearance:
- Navigation:
- Dired:
- Org Mode:
- Which org-related packages do you use?
- Taking Notes With Emacs Org Mode (It's Easy!) (15:35)
- Reading your Emacs notes on-the-go, minimally. (12:07, Reddit)
- Srijan Choudhary: 2026-03-18-001: move Org heading title into body
- Resilient Technologies. Why Decades-Old Tools Define the ROOT of Modern Research Data Management — Workshop Documents (@lukascbossert@mastodon.social)
- Coding:
- aspiers/madolt: magit-like emacs mode for dolt · GitHub (Reddit)
- Emacs VC-mode in action (01:34)
- Projeto Omega - Cifra de Cesar (Emacs+Magit) (14:39)
- Lycomedes1814/temme-mode: A rewrite of emmet-mode for Emacs, aiming for a clean and modern codebase · GitHub (Reddit)
- Emacs Redux: surround.el: Vim-Style Pair Editing Comes to Emacs (Irreal)
- Emacs Redux: Tree-sitter Font-Lock and Indentation in Comint Buffers
- Tip about using python-indent-def-block-scale - multiplier applied to indentation
- Einar Mostad: Use virtual environment in Emacs' Python Mode if in a project with a venv
- Monday Live Coding with Emacs. 3/16/2026 #coding #livecoding #emacs #learnc (01:16:08)
- Mail, news, and chat:
- Multimedia:
- Fun:
- AI:
- Agile & Coding: The tools of an Agentic Engineer (Reddit)
- jlouisbiz/rcd-mcp-emacs-documentation: get_documentation, list_functions_by_prefix, search_functions, rcd_elisp_function_definition
- MCP for Emacs - Improve your notes with ORG MODE 🦄 (10:27)
- agent-shell-notifications released! (Reddit)
- Jeremias-A-Queiroz/emacs-gptel-slim-tools: leverage etags for precise code fragment extraction (Reddit)
- Fritz Grabo: acp2ollama in Emacs for fun and profit
- James Dyer: Ollama Buddy - Seven Lines to Any LLM Provider
- Community:
- Shells:
- Other:
- buffer-terminator.el - safely auto terminate buffers for performance and reduced clutter [Release 1.2.1] (Reddit)
- James Cherti: buffer-guardian.el – Automatically Save Emacs Buffers Without Manual Intervention (When Buffers Lose Focus, Regularly, or After Emacs is Idle) (Reddit, Irreal)
- super-save 0.5: Modernized and Better Than Ever (Reddit)
- Much Ado About Emacs 012: kirigami, visible-mark, javelin, opml, appine, buffer-guardian, isearch-lazy-count, markdown-table-wrap, surround
- chaoswork/appine: embed native macOS views (WebKit, PDFKit etc.) directly inside Emacs windows. · GitHub (Reddit)
- emacs-os/embr.el: Emacs is the display server. Headless Chromium via CloakBrowser is the renderer. · GitHub (Reddit)
- Why fork+exec Takes 100ms on My Mac: Debugging Slow Emacs with Instruments (Reddit)
- exlee/emacs-reporter: Emacs data collector for macOS · GitHub (Reddit)
- Emacs development:
- emacs-devel:
- Re: MacOS/NS Events Processing Queue - Przemysław Kamiński - VM compacting patch in case anyone wants to try it out
- Re: Tree-sitter: Correctly parsing template-like embeddings - Yuan Fu - trade-offs for performance?
- Re: feature/igc3 44f854bad09 2/5: Avoid remote references in face cache (analogous to bug#80601) - Eli Zaretskii - challenges with igc branch
- Re: some Eglot-related options to consider for newcomers-presets - João Távora - language servers and newcomers-presets?
- (Fmakunbound): Break aliasing, if present (bug#80538)
- hideshow: Fix 'hs-hide-block-behavior' set to 'after-cursor'.
- hideshow: New minor mode 'hs-indentation-mode'. (Bug#80179)
- emacs-devel:
- New packages:
- async-http-queue: Async HTTP queue with parallel fetching (MELPA)
- consult-symbol: Consult-based symbol search with narrowing (MELPA)
- flymake-zizmor: Flymake backend for zizmor, a Github Actions static analyzer (MELPA)
- org-snitch: Project-specific org-capture and link faces (MELPA)
- org-tag-cloud: Easily maintain a tag-cloud of org-mode tags (MELPA)
- tiles: Tagged Instant Lightweight Emacs Snippets (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Leeds Goes Purple! rss
Starting Friday, Litter Free Leeds is having its annual event where we try to get everyone involved in filling purple bags with rubbish. If you want to join, follow the link and find groups and events in your area.
https://litterfreeleeds.co.uk/leeds-goes-purple
submitted by /u/nfurnoh
[link] [comments] -
🔗 r/LocalLLaMA The current state of the Chinese LLMs scene rss
This is a summary of what's going on in the Chinese LLM scene based on my own research. If you find any errors, please let me know.
The Big Boys:
- ByteDance: dola-seed (aka doubao) is the current market leader in proprietary LLMs, playing a role like OpenAI's. They have a Seed OSS 36B model that is a solid dense model, but it seems no one is talking about it. They also have a proprietary Seedance T2V model that is now the most popular video-gen app for lay people.
- Alibaba - Not many people use its proprietary model Qwen Max. It is the strongest in its open weight offerings, especially the small models. It is also the strongest in the T2I and T2V scene, but that is off topic.
- Tencent - Hunyuan is their proprietary model, but not many people use it. Their T2I and T2V efforts are second to Alibaba's. They are the leader in 3D mesh generation with Hunyuan 3D, but that model is only open weight up to 2.1.
- Baidu - Ernie is proprietary, but not many people use it. Baidu is stronger in the autonomous-driving scene, but that's off topic here.
- Xiaomi - Mimo V2 Pro is their proprietary model, while Mimo V2 Flash 309B-A15B is their open weight model.
- Ant Group - Ling 2.5 1T is their flagship open weight model. It seems to be outperformed by Kimi K2.5, so not many people are talking about it. It introduces something called Lightning Linear Attention; does anyone know the paper describing it?
- Meituan - LongCat-Flash-Chat is an open weight 562B model with dynamic MoE that activates 18.6B~31.3B parameters. It also has a lite version at 65B-A3B. The attention mechanism is MLA. They seem to be the most aggressive open weight player now, but they are more of a Middle Boy than a Big one.
The Side Project:
- Deepseek - a side project of an algorithmic trading firm. Current usage in China is a close second to ByteDance's doubao, with half the users. Interestingly, it is the most innovative of all the Chinese LLM companies, having invented MLA, DSA, GRPO, etc. Please let me know if there is other non-obvious tech developed by other Chinese companies that is used in an actual product. Their business model might be similar to the Six Small Tigers', but it seems to me this project is more about attracting investment to the investment arm and gaining access to President Xi.
The Six AI Small Tigers: (business models are highly similar: release big open weight models to gain recognition and provide cheap inference service. Not sure if any of them is viable for the long term.)
- Zhipu - IPOed in HK. The current GLM-5 is a derivative of DeepSeek.
- Minimax - IPOed in HK. They have a MiniMax 2.7 proprietary model. MiniMax 2.5 is their open weight model, a vanilla MoE at 229B-A10B, so its inference cost is significantly lower than the others'.
- Moonshot - Kimi is their open weight model, a derivative of DeepSeek.
- Stepfun - Step 3.5 Flash is their open weight model, a mixture of full attention and sliding window attention (SWA) layers at 1:3. It is 196B-A11B. Similar business model to Minimax, but their model is not as good.
- Baichuan - Their Baichuan-M3 235B is a medically enhanced open weight model based on Qwen3Moe.
- 01 AI - Yi-34B is their last open weight model published in Nov 2024. They seem to focus on Enterprise AI agent system now, so they are becoming irrelevant to people here.
submitted by /u/Ok_Warning2146
[link] [comments] -
🔗 r/LocalLLaMA Let's take a moment to appreciate the present, when this sub is still full of human content. rss
It's going down guys, day by day.
submitted by /u/Ok-Internal9317
[link] [comments] -
🔗 r/Leeds Leeds “New Town” as part of South Bank development rss
20,000 new homes, 40% to be affordable. Whilst I applaud the intent, that's a lot of homes for what is still a relatively small footprint, so big blocks of high-rises aren't really going to be the family homes that are needed.
But hey, at least they won’t be student lets, this time!
submitted by /u/zharrt
[link] [comments] -
🔗 r/Leeds First Direct Arena - Greg Davies rss
what a fantastic night we had on Sunday 😁
Greg is so damn funny, his show was on point, and his warm-up act was really good too.
We ended up sitting in the "Gods", 3 rows from the roof of the building itself. This section should have its own postcode.
I'm really tempted by the upcoming Prodigy and Carl Cox show, but will definitely buy tickets for a much lower tier!
Don't get me started on the prices of drinks 😮💨
Overall a fantastic night, and if you are a fan of Greg, the show is worth the cost.
Good job there is a Spoons nearby
submitted by /u/migoodridge
[link] [comments] -
🔗 backnotprop/plannotator v0.14.5 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.14.4 | GitHub review submission, repo identifier in tab title, nested code fence parser fix, Pi paste URL wiring, file header gap fix
v0.14.3 | PR context panel, diff search in code review, OpenCode permission normalization, landing page redesign
v0.14.2 | OpenCode plan mode prompt replacement, Windows non-ASCII path fix, Pi link fix
v0.14.1 | Single submit_plan with auto-detect, viewed-file draft persistence, Bear nested tag fix
v0.14.0 | PR review via GitHub URL,/plannotator-lastfor annotating agent messages, OpenCode plan mode permissions fix, VS Code SSH proxy fix
v0.13.1 | OpenCode plan mode rewrite, Obsidian save fix
v0.13.0 | Built-in themes, annotatable plan diffs, file-scoped code review comments, Octarine integration, unified review core, Pi remote sessions
v0.12.0 | Quick annotation labels, mobile compatibility, Graphviz rendering, markdown images with lightbox, linked doc navigation in annotate mode
v0.11.4 | Git add from code review, bidirectional scroll navigation, clipboard paste for annotation images, VS Code IPC port stability
v0.11.3 | Expandable diff context, hierarchical folder tree, redesigned worktree controls, supply chain hardening
v0.11.2 | Git worktree support in code review, VS Code editor annotations in review, Obsidian auto-save & separator settings, session discovery, smart file resolution
What's New in v0.14.5
v0.14.5 adds GitLab merge request review support, bringing Plannotator's code review UI to a second hosting platform. Two community bug fixes round out the release. 3 PRs, 2 from external contributors, 1 first-time.
GitLab Merge Request Review
Plannotator can now review GitLab merge requests alongside GitHub PRs. Pass any GitLab MR URL to the review command and it works the same way: diff viewer, annotations, feedback submission, and the new PR context panel (summary, comments, pipeline status).
The platform is auto-detected from the URL.
`github.com` routes through `gh`, and any URL containing `/-/merge_requests/` routes through `glab`. Self-hosted GitLab instances are supported via the `--hostname` flag.
Under the hood, the existing GitHub implementation was extracted to `packages/shared/pr-github.ts`, and a parallel `pr-gitlab.ts` handles `glab` CLI interactions. The dispatch layer in `pr-provider.ts` routes by platform. The `PRRef` and `PRMetadata` types are now discriminated unions that carry the platform context throughout the stack.
GitLab's API surface differs from GitHub's in several ways that required specific handling. `glab mr diff` outputs bare diffs without the `diff --git` prefix, so the output is normalized before parsing. `glab` has no `--jq` flag, so JSON responses are parsed in full. Review submission requires three separate API calls (note, discussions, approve) rather than GitHub's single atomic endpoint, with inline comments submitted in parallel using `Promise.allSettled` for partial failure resilience.
The UI adapts to the platform: labels switch between PR/MR, icons between GitHub/GitLab, and issue number prefixes between `#` and `!`.
Annotate-Last Session Resolution After cd
`/plannotator-last` silently annotated the wrong message when a user changed directories during a Claude Code session. The command resolves the current session by matching `process.cwd()` against Claude Code's project slug, but after a `cd` the CWD no longer matches the session's original directory. The result: it finds a stale session from a previous day and opens that session's last message with no warning.
The fix introduces three-tier session resolution. First, it checks for PPID-based session metadata that Claude Code writes to `~/.claude/sessions/`. If that's not available, it falls back to CWD-based slug matching, then to a recency heuristic. The PPID path is the most reliable because it ties directly to the running Claude Code process regardless of the shell's current directory.
This is a Claude Code-only bug. Codex uses `CODEX_THREAD_ID`, OpenCode and Pi use their SDK APIs, and none of them resolve sessions via CWD.
Additional Changes
- Fix duplicate Code Review header in Pi extension — the Pi extension's review command handler wrapped feedback in a `# Code Review Feedback` heading, but `exportReviewFeedback()` already includes that heading. The duplicate is removed, and two tests verify single-heading output. By @dmmulroy in #370.
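The URL-based routing described above can be sketched in a few lines. This is a hypothetical helper mirroring the stated rules, not Plannotator's actual TypeScript dispatch code; the function name and return values are assumptions:

```python
def detect_platform(url, hostname=None):
    """Route a review URL to a CLI backend: GitHub URLs go through gh,
    GitLab merge request URLs (or self-hosted instances, signalled by an
    explicit hostname) go through glab. Hypothetical sketch."""
    if "github.com" in url:
        return "gh"
    if "/-/merge_requests/" in url or hostname is not None:
        return "glab"
    raise ValueError(f"cannot detect platform for {url}")
```

For example, `detect_platform("https://gitlab.com/acme/app/-/merge_requests/7")` resolves to the `glab` backend without any explicit flag.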
Install / Update
macOS / Linux:
`curl -fsSL https://plannotator.ai/install.sh | bash`
Windows: `irm https://plannotator.ai/install.ps1 | iex`
Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".
OpenCode: Clear cache and restart: `rm -rf ~/.bun/install/cache/@plannotator` Then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`
Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat: GitLab merge request review support by @backnotprop in #364
- fix: annotate-last resolves wrong session after cd by @janah01 in #366
- fix: remove duplicate 'Code Review' header in pi extension review feedback by @dmmulroy in #370
New Contributors
Contributors
@janah01 identified and fixed a subtle session resolution bug in `/plannotator-last` that caused it to silently annotate the wrong message after `cd`-ing during a Claude Code session. The three-tier resolution strategy in #366 ensures the command finds the correct session regardless of the shell's current directory.
@dmmulroy fixed the duplicate heading in Pi extension review feedback in #370, his second contribution after wiring the paste URL in v0.14.4.
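The three-tier fallback reads naturally as a resolver chain. Here is a generic sketch of that pattern, not Plannotator's code; all names are hypothetical:

```python
def resolve_session(resolvers):
    """Try session resolvers in order of reliability (e.g. PPID metadata,
    then CWD slug match, then a recency heuristic) and return the first
    non-None result, or None if every tier comes up empty."""
    for resolver in resolvers:
        session = resolver()
        if session is not None:
            return session
    return None
```

Ordering the tiers from most to least reliable means the recency heuristic only ever runs when the stronger signals are unavailable.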
Full Changelog :
v0.14.4...v0.14.5 -
-
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/LocalLLaMA So cursor admits that Kimi K2.5 is the best open source model rss
Nothing speaks louder than recognition from your peers. submitted by /u/Giveawayforusa
[link] [comments]
-
🔗 r/LocalLLaMA I came from Data Engineering before jumping into LLM stuff, and I'm surprised that many people in this space have never heard of Elastic/OpenSearch rss
Jokes aside, on a technical level, Google/Brave search and vector stores basically work in a very similar way. The main difference is scale. From an LLM point of view, both fall under RAG. You can even ignore embedding models entirely and just use TF-IDF or BM25. Elastic and OpenSearch (and technically Lucene) are powerhouses when it comes to this kind of retrieval. You can also enable a small BERT model as a vector embedding, around 100 MB (FP32), running on CPU, within either Elastic or OpenSearch. If your document set is relatively small (under ~10K) and has good variance, a small BERT model can handle the task well, or you can even skip embeddings entirely. For deeper semantic similarity or closely related documents, more powerful embedding models are usually the go-to. submitted by /u/Altruistic_Heat_9531
[link] [comments]
-
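The post's point that lexical scoring needs no embedding model at all is easy to demonstrate. Below is a minimal, self-contained sketch of BM25 scoring, the same family of ranking function that Lucene-based engines like Elastic and OpenSearch use, though their implementations are far more elaborate (tokenization, analyzers, inverted indexes):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query terms with classic BM25.
    Whitespace tokenization keeps the sketch dependency-free."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / n
    # document frequency: how many docs contain each term
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

Documents that never mention a query term score zero; shorter documents get a boost through the length normalization term controlled by `b`.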
🔗 badlogic/pi-mono v0.62.0 release
New Features
- Built-in tools as extensible ToolDefinitions. Extension authors can now override rendering of built-in read/write/edit/bash/grep/find/ls tools with custom `renderCall`/`renderResult` components. See docs/extensions.md.
- Unified source provenance via `sourceInfo`. All resources, commands, tools, skills, and prompt templates now carry structured `sourceInfo` with path, scope, and source metadata. Visible in autocomplete, RPC discovery, and SDK introspection. See docs/extensions.md.
- AWS Bedrock cost allocation tagging. New `requestMetadata` option on `BedrockOptions` forwards key-value pairs to the Bedrock Converse API for AWS Cost Explorer split cost allocation.
Breaking Changes
- Changed `ToolDefinition.renderCall` and `renderResult` semantics. Fallback rendering now happens only when a renderer is not defined for that slot. If `renderCall` or `renderResult` is defined, it must return a `Component`.
- Changed slash command provenance to use `sourceInfo` consistently. RPC `get_commands`, `RpcSlashCommand`, and SDK `SlashCommandInfo` no longer expose `location` or `path`. Use `sourceInfo` instead (#1734)
- Removed legacy `source` fields from `Skill` and `PromptTemplate`. Use `sourceInfo.source` for provenance instead (#1734)
- Removed `ResourceLoader.getPathMetadata()`. Resource provenance is now attached directly to loaded resources via `sourceInfo` (#1734)
- Removed `extensionPath` from `RegisteredCommand` and `RegisteredTool`. Use `sourceInfo.path` for provenance instead (#1734)
Migration Notes
Resource, command, and tool provenance now use `sourceInfo` consistently.
Common updates:
- RPC `get_commands`: replace `path` and `location` with `sourceInfo.path`, `sourceInfo.scope`, and `sourceInfo.source`
- `SlashCommandInfo`: replace `command.path` and `command.location` with `command.sourceInfo`
- `Skill` and `PromptTemplate`: replace `.source` with `.sourceInfo.source`
- `RegisteredCommand` and `RegisteredTool`: replace `.extensionPath` with `.sourceInfo.path`
- Custom `ResourceLoader` implementations: remove `getPathMetadata()` and read provenance from loaded resources directly
Examples:
- `command.path` -> `command.sourceInfo.path`
- `command.location === "user"` -> `command.sourceInfo.scope === "user"`
- `skill.source` -> `skill.sourceInfo.source`
- `tool.extensionPath` -> `tool.sourceInfo.path`
Changed
- Built-in tools now work like custom tools in extensions. To get built-in tool definitions, import `readToolDefinition`/`createReadToolDefinition()` and the equivalent `bash`, `edit`, `write`, `grep`, `find`, and `ls` exports from `@mariozechner/pi-coding-agent`.
- Cleaned up `buildSystemPrompt()` so built-in tool snippets and tool-local guidelines come from built-in `ToolDefinition` metadata, while cross-tool and global prompt rules stay in system prompt construction.
- Added structured `sourceInfo` to `pi.getAllTools()` results for built-in, SDK, and extension tools (#1734)
Fixed
- Fixed extension command name conflicts so extensions with duplicate command names can load together. Conflicting extension commands now get numeric invocation suffixes in load order, for example `/review:1` and `/review:2` (#1061)
- Fixed slash command source attribution for extension commands, prompt templates, and skills in autocomplete and command discovery (#1734)
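The suffixing scheme can be sketched like this. It is a guess at the load-order numbering implied by the /review:1 and /review:2 example, not pi's actual implementation:

```python
from collections import Counter

def resolve_conflicts(names):
    """Suffix duplicate command names in load order, so two extensions
    that both register /review become /review:1 and /review:2 while
    unique names pass through untouched. Hypothetical sketch."""
    totals = Counter(names)
    seen = Counter()
    out = []
    for name in names:
        if totals[name] > 1:
            seen[name] += 1
            out.append(f"{name}:{seen[name]}")
        else:
            out.append(name)
    return out
```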
- Fixed auto-resized image handling to enforce the inline image size limit on the final base64 payload, return text-only fallbacks when resizing cannot produce a safe image, and avoid falling back to the original image in `read` and `@file` auto-resize paths (#2055)
- Fixed `pi update` for git packages to skip destructive reset, clean, and reinstall steps when the fetched target already matches the local checkout (#2503)
- Fixed print and JSON mode to take over stdout during non-interactive startup, keeping package-manager and other incidental chatter off protocol/output stdout (#2482)
- Fixed cli-highlight auto-detection for languageless code blocks that misidentified prose as programming languages and colored random English words as keywords
- Fixed Anthropic thinking disable handling to send `thinking: { type: "disabled" }` for reasoning-capable models when thinking is explicitly off (#2022)
- Fixed explicit thinking disable handling across Google, Google Vertex, Gemini CLI, OpenAI Responses, Azure OpenAI Responses, and OpenRouter-backed OpenAI-compatible completions (#2490)
- Fixed OpenAI Responses replay for foreign tool-call item IDs by hashing foreign IDs into bounded `fc_<hash>` IDs
- Fixed OpenAI-compatible completions streams to ignore null chunks instead of crashing (#2466 by @Cheng-Zi-Qing)
- Fixed `truncateToWidth()` performance for very large strings by streaming truncation (#2447)
- Fixed markdown heading styling being lost after inline code spans within headings
-
- March 22, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-22 rss
Activity:
- dylib_dobby_hook
- ca090048: Refactor memory utility calls to use macros for instance variable acc…
- tomsons_RE_scripts
- a0d9cd4a: update specific_dyn_init_namer.py
- dylib_dobby_hook
-
🔗 Simon Willison Experimenting with Starlette 1.0 with Claude skills rss
Starlette 1.0 is out! This is a really big deal. I think Starlette may be the Python framework with the most usage compared to its relatively low brand recognition because Starlette is the foundation of FastAPI, which has attracted a huge amount of buzz that seems to have overshadowed Starlette itself.
Tom Christie started working on Starlette in 2018 and it quickly became my favorite of the new breed of Python ASGI frameworks. The only reason I didn't use it as the basis for my own Datasette project was that it didn't yet promise stability, and I was determined to provide a stable API for Datasette's own plugins... albeit I still haven't been brave enough to ship my own 1.0 release (after 26 alphas and counting)!
Then in September 2025 Marcelo Trylesinski announced that Starlette and Uvicorn were transferring to their GitHub account, in recognition of their many years of contributions and to make it easier for them to receive sponsorship against those projects.
The 1.0 version has a few breaking changes compared to the 0.x series, described in the release notes for 1.0.0rc1 that came out in February.
The most notable of these is a change to how code runs on startup and shutdown. Previously that was handled by `on_startup` and `on_shutdown` parameters, but the new system uses a neat lifespan mechanism instead, based around an async context manager:

```python
import contextlib

@contextlib.asynccontextmanager
async def lifespan(app):
    async with some_async_resource():
        print("Run at startup!")
        yield
        print("Run on shutdown!")

app = Starlette(
    routes=routes,
    lifespan=lifespan,
)
```
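The same lifespan pattern works with nothing but the standard library. A framework-free sketch, where the dict-as-app is a stand-in and not Starlette's API:

```python
import asyncio
import contextlib

@contextlib.asynccontextmanager
async def lifespan(app):
    app["ready"] = True   # startup work runs before the yield
    yield
    app["ready"] = False  # shutdown work runs after the yield

async def main():
    app = {}
    async with lifespan(app):
        # while the context is open, the "server" is running
        assert app["ready"] is True
    # leaving the context triggers the shutdown branch
    assert app["ready"] is False

asyncio.run(main())
```

The framework simply enters this context manager when the server starts and exits it when the server stops, which is why a single `yield` cleanly separates startup from shutdown code.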
If you haven't tried Starlette before, it feels to me like an asyncio-native cross between Flask and Django, unsurprising since creator Tom Christie is also responsible for Django REST Framework. Crucially, this means you can write most apps as a single Python file, Flask style.
This makes it really easy for LLMs to spit out a working Starlette app from a single prompt.
There's just one problem there: if 1.0 breaks compatibility with the Starlette code that the models have been trained on, how can we have them generate code that works with 1.0?
I decided to see if I could get this working with a Skill.
Building a Skill with Claude
Regular Claude Chat on claude.ai has skills, and one of those default skills is the skill-creator skill. This means Claude knows how to build its own skills.
So I started a chat session and told it:
Clone Starlette from GitHub - it just had its 1.0 release. Build a skill markdown document for this release which includes code examples of every feature.
I didn't even tell it where to find the repo, Starlette is widely enough known that I expected it could find it on its own.
It ran `git clone https://github.com/encode/starlette.git`, which is actually the old repository name, but GitHub handles redirects automatically so this worked just fine.
The resulting skill document looked very thorough to me... and then I noticed a new button at the top I hadn't seen before labelled "Copy to your skills". So I clicked it:

And now my regular Claude chat has access to that skill!
A task management demo app
I started a new conversation and prompted:
Build a task management app with Starlette, it should have projects and tasks and comments and labels
And Claude did exactly that, producing a simple GitHub Issues clone using Starlette 1.0, a SQLite database (via aiosqlite) and a Jinja2 template.
Claude even tested the app manually like this:
```shell
cd /home/claude/taskflow && timeout 5 python -c "
import asyncio
from database import init_db
asyncio.run(init_db())
print('DB initialized successfully')
" 2>&1

pip install httpx --break-system-packages -q \
  && cd /home/claude/taskflow && \
  python -c "
from starlette.testclient import TestClient
from main import app

client = TestClient(app)
r = client.get('/api/stats')
print('Stats:', r.json())
r = client.get('/api/projects')
print('Projects:', len(r.json()), 'found')
r = client.get('/api/tasks')
print('Tasks:', len(r.json()), 'found')
r = client.get('/api/labels')
print('Labels:', len(r.json()), 'found')
r = client.get('/api/tasks/1')
t = r.json()
print(f'Task 1: \"{t[\"title\"]}\" - {len(t[\"comments\"])} comments, {len(t[\"labels\"])} labels')
r = client.post('/api/tasks', json={'title':'Test task','project_id':1,'priority':'high','label_ids':[1,2]})
print('Created task:', r.status_code, r.json()['title'])
r = client.post('/api/comments', json={'task_id':1,'content':'Test comment'})
print('Created comment:', r.status_code)
r = client.get('/')
print('Homepage:', r.status_code, '- length:', len(r.text))
print('\nAll tests passed!')
"
```
For all of the buzz about Claude Code, it's easy to overlook that Claude itself counts as a coding agent now, fully able to both write and then test the code that it is writing.
Here's what the resulting app looked like. The code is here in my research repository.

-
🔗 r/Leeds Looking for physical signage of the place name "Steander" in the LS9 area rss
In a random stroke of inspiration, I've gotten myself fascinated with the old forgotten Steander/Fearns Island area across the river from the Armouries. I've been out in the area trying to find any sort of signage or mention of the name "Steander". So far the only mention of it I've seen is on the map at the dock showing the area as it was in the 1930s, but that's it. I know it used to be on the street signs in the area, but those were unfortunately far before my time.
I'm throwing this out there to ask if anybody knows if the name is present anywhere physically in the area that's still presently there? Any help would be much appreciated :)
submitted by /u/SH_Eastawott
[link] [comments] -
🔗 r/Yorkshire Between Skipton, Gargrave and Airton 🚶 rss
submitted by /u/unitedkingdombaby
[link] [comments] -
🔗 r/Harrogate Slingsby Gin rss
Visiting Harrogate from out of town and would like to take a few bottles of Slingsby gin back home for a friend. Any idea where it can still be purchased?
submitted by /u/Apprehensive_Pay_740
[link] [comments] -
🔗 r/reverseengineering Reversing World Conqueror 4's asset encryption — AES-256-CBC with 5 header variants, key extracted from .so rss
submitted by /u/Ascendo_Aquila
[link] [comments] -
🔗 r/york Best camping sites in Yorkshire area? rss
submitted by /u/Ill-Toe-8182
[link] [comments] -
🔗 r/Yorkshire Campaigners celebrate reprieve for Whitby cliff lift rss
Campaigners in Whitby are celebrating after North Yorkshire Council backed a motion to investigate the costs of repairing the town's historic cliff lift rather than permanently closing it. The 95-year-old attraction has been out of action for three years, since corrosion was discovered in 2022, prompting a petition with over 5,600 signatures to save it. A report presented to North Yorkshire Council's executive yesterday recommended permanently closing the lift. Council officers had initially recommended adopting a £199,000 plan to seal the lift shaft while retaining the top building, arguing that full repairs, estimated to cost up to £5.5 million, were disproportionate in a challenging financial environment. However, councillors have instead backed a motion to look into the costs of repairing the lift, following fierce resistance from residents.
submitted by /u/coffeewalnut08
[link] [comments] -
🔗 r/Yorkshire Mayor unveils £1.5bn ‘People’s Network’ transport plan for South Yorkshire rss
Trams, buses and hire bikes will be integrated under molten orange and asphalt black livery highlighting industrial heritage, says Oliver Coppard. South Yorkshire's transport system will be known as the "People's Network", with trams, buses and hire bikes all coming under public control. The plan was unveiled on Monday by the region's mayor, Oliver Coppard, who said it would create an affordable, joined-up network in molten orange and asphalt black colours. A large fleet of electric buses and 25 new trams will be introduced over the next five years. Buses will be franchised and taken under public control next year, joining the Supertram, which was brought back into the combined authority's hands in 2024. Coppard said it was a "once-in-a-generation change to how transport works in South Yorkshire".
submitted by /u/coffeewalnut08
[link] [comments] -
🔗 r/LocalLLaMA Alibaba confirms they are committed to continuously open-sourcing new Qwen and Wan models rss
Source: https://x.com/ModelScope2022/status/2035652120729563290
submitted by /u/TKGaming_11
[link] [comments] -
🔗 r/LocalLLaMA MiniMax M2.7 Will Be Open Weights rss
Composer 2-Flash has been saved! (For legal reasons that's a joke)
submitted by /u/Few_Painter_5588
[link] [comments] -
🔗 r/LocalLLaMA Impressive thread from /r/ChatGPT, where after ChatGPT finds out no 7Zip, tar, py7zr, apt-get, Internet, it just manually parsed and unzipped from hex data of the .7z file. What model + prompts would be able to do this? rss
submitted by /u/jinnyjuice
[link] [comments] -
🔗 r/Yorkshire Looking upstream on the Swale towards Culloden tower yesterday. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 3Blue1Brown (YouTube) This picture broke my brain rss
Escher's Print Gallery, and the tour of complex analysis it invites. Check out our virtual career fair: 3b1b.co/talent Join channel supporters to see videos early: 3b1b.co/support An equally valuable form of support is to share the videos. Home page: https://www.3blue1brown.com
Original paper by de Smit and Lenstra: https://pub.math.leidenuniv.nl/~smitbde/papers/2003-de_smit-lenstra-escher.pdf
Co-written by Paul Dancstep, who handled many of the animations in the art section, including the delightful mesh warp scene.
Aaron Gostein helped with the manim animations in the section introducing complex functions.
Artwork provided by Talia Gerhson, Mitchell Zemil, and Anna Fedczuk.
Music by Vincent Rubinetti
Timestamps:
0:00 - The print gallery 13:04 - Conformal maps from complex analysis 21:41 - The complex exponential 25:56 - The complex logarithm 32:32 - 3b1b Talent 33:14 - Constructing the key function 40:16 - The deeper math behind Escher
These animations are largely made using a custom Python library, manim. See the FAQ comments here: https://3b1b.co/faq#manim
3blue1brown is a channel about animating math, in all senses of the word animate. If you're reading the bottom of a video description, I'm guessing you're more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.
Mailing list: https://3blue1brown.substack.com Twitter: https://twitter.com/3blue1brown Bluesky: https://bsky.app/profile/3blue1brown.com Instagram: https://www.instagram.com/3blue1brown Reddit: https://www.reddit.com/r/3blue1brown Facebook: https://www.facebook.com/3blue1brown Patreon: https://patreon.com/3blue1brown Website: https://www.3blue1brown.com
-
🔗 r/reverseengineering I built a tool to search and extract specific classes from very large JAR files rss
submitted by /u/ProphetC8
[link] [comments] -
🔗 r/Harrogate Vegetarian Fish & Chips rss
Does anyone know of any Fish & Chip places in Harrogate that don't fry their chips in beef dripping?
submitted by /u/CyclePrevious9043
[link] [comments] -
🔗 r/york 2 Day York Itinerary rss
My husband and I are visiting York in a couple of weeks and I’ve put together the following itinerary: (yes I like planning as terrible at decision making 🫠)
Is there anything you would add / change? We are in our late 30s and like art, history, food and beers/cocktails.
Wednesday
Breakfast - Brew & Brownie
Morning - York Art gallery , Look at York Minster , City Walls,
Lunch - Shambles Area/Market
Afternoon - Cat walking trail
Cocktails: Dusk / Polymath
Dinner - Dough Eyes
Evening Pub crawl - Valhalla, Shambles Tavern, house of trembling, the Golden Fleece
Thursday
Breakfast : Flori Bakery
Morning: York Castle Museum, Clifford's Tower, Fairfax House
Lunch: Brew York
Cake: Little Blondie Bakehouse
Afternoon : York Distillery Gin Tasting
Pre dinner drink - Cat in the Wall
Dinner: Rustique
submitted by /u/Iamtheonlylauren
[link] [comments] -
🔗 r/Leeds New station clock looks tacky rss
Not a fan at all
submitted by /u/AlanWrightScreamer
[link] [comments] -
🔗 r/york Can anybody suggest a decent window cleaner in York? rss
submitted by /u/OneItchy396
[link] [comments] -
🔗 r/Leeds Lane Side development in Churwell losing planned school for 250 more homes – anyone else worried about local infrastructure? rss
I recently visited the Lane Side development in Churwell and found out that the land originally reserved for a school is apparently no longer going to be used for one. From what I have been told, Leeds City Council believes the existing schools in the area are enough to absorb the extra demand from new residents, and the developer is instead being allowed to build around 250 more homes under Charles Church. Honestly, I find this really worrying. There is already a huge amount of development happening around here, and it feels like social infrastructure is not keeping up at all. Schools, GP surgeries, and local roads are already under pressure, and Elland Road is busy enough as it is. Is anyone else concerned about how all this extra housing is being approved without proper supporting infrastructure in place?
submitted by /u/Historical-Turn8243
[link] [comments] -
🔗 r/LocalLLaMA Qwen3.5-9B-Claude-4.6-Opus-Uncensored-v2-Q4_K_M-GGUF rss
This is a merge requested by some people on Reddit and HuggingFace. They don't have powerful GPUs and want a big context window in an uncensored, smart local AI.

NEW: During a tensor debugging session while merging I found a problem. In the GGUF files, some attention layers and expert layers (29 total) are mathematically broken during conversion from the original .safetensors to .gguf.

Fixed Q3_K_M, Q4_K_M and Q8_0 quants for the original HauhauCS Qwen 3.5 35B-A3B model are uploaded:
I am using Q4_K_M quant. I have 16 tokens per second on RTX 3060 12 GB.
https://huggingface.co/LuffyTheFox/Qwen3.5-35B-A3B-Uncensored-Kullback-Leibler

The 9B model in Q4_K_M format is available here. Currently the most stable KL quant for Qwen 3.5 9B, but it still has thinking loops:

https://huggingface.co/LuffyTheFox/Qwen3.5-9B-Claude-4.6-Opus-Uncensored-Kullback-Leibler

For both models, for best performance please use the following settings in LM Studio 0.4.7 (build 4):
- Use this System Prompt: https://pastebin.com/pU25DVnB
- If you want to disable thinking use this chat template in LM Studio: https://pastebin.com/uk9ZkxCR
- Temperature: 0.7
- Top K Sampling: 20
- Repeat Penalty: (disabled) or 1.0
- Presence Penalty: 1.5
- Top P Sampling: 0.8
- Min P Sampling: 0.0
- Seed: 3407
BONUS: Dataset for System Prompt written by Claude Opus 4.6: https://pastebin.com/9jcjqCTu
Finally found a way to merge this amazing model made by Jackrong: https://huggingface.co/Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF
With this uncensored model made by HauhauCS: https://huggingface.co/HauhauCS/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive
And preserve all training data and accuracy on the Qwen 3.5 9B architecture for weights in tensors via Float32 precision during the merging process. I simply pick the Q8 quant, dequantize it to Float32, merge in Float32, and re-quantize back to Q4_K_M via the llama-quantize binary from llama.cpp.

Now we have the smallest, fastest and smartest uncensored model trained on this dataset: https://huggingface.co/datasets/Roman1111111/claude-opus-4.6-10000x

On my RTX 3060 I got 42 tokens per second in LM Studio. On llama-server it can run even faster.
Enjoy, and share your results ^_^. Don't forget to upvote / repost so more people will test it.
PS: There were a lot of questions about math troubles during the merging process in GGUF format. Yes, the most mathematically correct way is to use the .safetensors format in float16 for merging neural networks together: Q8 -> Float32 (merge per tensor) -> Q8. Conversion in GGUF is a workaround, but it's the best that I can currently do due to very limited system resources.
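The per-tensor Q8 -> Float32 -> merge -> re-quantize flow described in the post can be sketched in miniature. This is my own toy illustration using a simple symmetric 8-bit quantizer, not llama.cpp's actual K-quant scheme, and the function names are mine:

```python
def quantize_sym(values, levels=127):
    """Symmetric quantization: integer codes plus one float scale per tensor."""
    scale = max(abs(v) for v in values) / levels
    return [round(v / scale) for v in values], scale

def dequantize(codes, scale):
    """Recover approximate float values from the integer codes."""
    return [c * scale for c in codes]

def merge_float32(a, b, weight=0.5):
    """Elementwise weighted average of two dequantized tensors."""
    return [weight * x + (1 - weight) * y for x, y in zip(a, b)]

# Per-tensor flow: quantized -> float -> merge -> re-quantize
codes_a, scale_a = quantize_sym([0.5, -1.0, 0.25])
codes_b, scale_b = quantize_sym([0.3, -0.8, 0.45])
merged = merge_float32(dequantize(codes_a, scale_a),
                       dequantize(codes_b, scale_b))
codes_m, scale_m = quantize_sym(merged)  # back to a quantized tensor
```

The rounding in `quantize_sym` is exactly where the "mathematically broken" error the author mentions can creep in: merging in float and re-quantizing once loses less precision than chaining multiple quantize/dequantize round trips.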
submitted by /u/EvilEnginer
[link] [comments] -
🔗 r/Yorkshire West Yorkshire makes the case: Mayor Tracy Brabin calls for a rethink on Brexit to unlock regional growth rss
submitted by /u/johnsmithoncemore
[link] [comments] -
🔗 Register Spill Joy & Curiosity #79 rss
Is software turning into a liquid?
It never was a solid, true. Pure thought stuff, as Fred Brooks wrote. But even that pure thought stuff felt more tangible than what software is turning into, did it not? Software had corners and edges: releases and version numbers. This is a piece of software, it's done, one could say. A long time ago, software even came in boxes. Sometimes it had a printed manual.
Now ChatGPT writes tens or hundreds of lines of Python to resize images, create a PDF, or extract data from a CSV -- and then throws it away, without anyone even having seen the code. An agent like OpenClaw will create a little script to check whether I turned off all the lights in the house. Nothing to throw away, because it was never stored in a file.
There is now so much code out there, appearing and disappearing as needed, that putting version numbers on it seems as futile as naming waves in the ocean.
Is this what most software is going to be? Nameless, shapeless? Created just in time?
A good friend of mine works at a company that shoots into and operates things in space. This week he told me that they're required to record how much torque they use to tighten bolts and screws. There are torque-recording wrenches you can buy, but they cost $25k a pop. Maybe it was $15k, not sure, but it was an outrageous number. So outrageous that someone on his team thought "nuh-uh" and went out and bought Bluetooth-enabled torque wrenches for $1k -- far cheaper in this comparison. Then that teammate, who's not a programmer, used an agent to vibe-code a piece of software to talk to the torque wrenches via Bluetooth and record the data in the spreadsheet he uses. He tested it a few times to make sure it worked as it should and then, well, went to work. Tens of thousands of dollars saved.
Now that was a piece of software, right? One could even put a name on it: TorqueThis v0.0.1, or something. But I said to my friend, one could also imagine that in the future, say in a year, even that won't be a piece anymore. Doesn't it seem possible that in a year you can say to your agent: hey, I'm holding this Bluetooth-enabled torque wrench in my hand, I have this spreadsheet open, write some code that records the torque whenever I say "now" and adds it as a new row in column D of that spreadsheet.
And code will appear, do its things while you do your thing, and then it'll disappear. Drip drip drip, it goes into every nook and cranny and then, job done, it evaporates.
-
Are you going to be in Boston in July? Let's meet at Laracon. I'll be speaking there.
-
Adapting to AI: Reflections on Productivity. One of the calmest, most balanced, and most pragmatic pieces of writing I've seen on this topic. It has more questions than answers, but that feels apt for what we're going through. I'm skeptical of any opinion about programming these days if it's made up of more exclamation marks than question marks.
-
I'd never read anything by C.S. Lewis, but whenever I came across his name I felt like I should have. This week, I finally righted what had long felt like a wrong and read The Inner Ring. And now I want more: "The quest of the Inner Ring will break your hearts unless you break it. But if you break it, a surprising result will follow. If in your working hours you make the work your end, you will presently find yourself all unawares inside the only circle in your profession that really matters. You will be one of the sound craftsmen, and other sound craftsmen will know it. This group of craftsmen will by no means coincide with the Inner Ring or the Important People or the People in the Know. It will not shape that professional policy or work up that professional influence which fights for the profession as a whole against the public: nor will it lead to those periodic scandals and crises which the Inner Ring produces. But it will do those things which that profession exists to do and will in the long run be responsible for all the respect which that profession in fact enjoys and which the speeches and advertisements cannot maintain."
-
To the sound of "Is this the real life? Is this just fantasy?" from Queen's Bohemian Rhapsody: "European Commission […] announced the creation of a '28th regime' […] The Proposal for an EU Inc. corporate legal framework provides faster (within 48 hours), cheaper (maximum EUR 100) and fully digital company registration, simplified procedures throughout the company life cycle, easier digital share transfers and capital operations, support for modern financing instruments, and the possibility for Member States to allow access to public equity markets. It also introduces fully digital insolvency procedures and automatic transmission of company data to relevant authorities in line with the "once-only principle," while including safeguards against fraud and abuse." If this truly, actually, for real happens then something that has died in me through the process of running a company here in Germany will maybe be reborn again.
-
The always wonderful Craig Mod: "The point of bloviating like this: We watch the LLMs perform these acts -- acts that, even five years ago, would have seemed like pure science fiction -- and we wrongly (I believe) extrapolate out a kind of intelligence that would be able to make sound decisions on a larger, world-based scale. Which is to say: LLMs' operating resolution is severely hamstrung. Whereas we, humans -- messy, disgusting, goopy, flawed, miraculous humans -- are operating at a freakishly high resolution, to which we have a preternatural ability to access subconsciously, and through which we use language to represent -- in broad strokes -- notions that operate in this higher register."
-
This was a delicious mind-bender: CEOs Don't Steer. It only made my fascination with businesses greater.
-
The Guardian profiled Stewart Brand and I thought it was lovely. I've never before looked through the notion of Maintenance as a lense like this.
-
Agent-Native Engineering by the The General Intelligence Company Of New York. There's a bunch of interesting stuff in there (although I bet it's not as applicable as it sounds) but this one here stood out: "Speaking of idea generation, that's the new problem. Before 2026 engineers had to spend time using their high level of intelligence solving relatively narrow well defined problems. Now, most of those problems are simple or manageable by background agents. Your engineers' new job is to find more problems to solve. That's why many are saying its the golden age of the idea guy - it is. If you can narrowly scope a problem then hand it off to an engineer, you might as well just hand it off to a background agent."
-
Ghostling: "A minimum viable terminal emulator built on top of the libghostty C API." Mitchell added: "From empty repo to a functional minimal standalone terminal based on libghostty in 2 hours, presenting Ghostling! ~600 lines of C and you get extremely accurate, performant, and proven terminal emulation." And someone asked: "Did you use AI? I'm wondering because you pushed this out pretty quickly and there is a large volume of comments... but the code is neat and readable" And he said: "I didn't write a single line of code. I reviewed it all though and consistently nudged the AI in the right direction. Heavy commenting is my personal style, and its especially good for a demo like this."
-
This is pretty neat: Obsidian Web Clipper now comes with a "reader mode" (I don't know if that's the official name) that produces pretty good results and is incredibly fast. Lot of fun to press Opt-Shift-R and see what it does.
-
Now, this, this was interesting: We Have Learned Nothing. I mean, they had me at Thomas Kuhn and Paul Feyerabend (although it felt like they really wanted to throw those names in there even if they didn't have to), but the Red Queen was the interesting bit: "In 1973, the evolutionary biologist Leigh Van Valen proposed what he called the Red Queen hypothesis: in any ecosystem, when one species evolves an advantage at the expense of another, the disadvantaged species will evolve to offset that improvement. […] Similarly, when new startup methods are quickly adopted by everyone, no one gains a relative advantage, and success rates stay flat. To win, startups must develop novel, differentiating strategies and build sustainable barriers to imitation before competitors can catch up."
-
I consider myself a pretty advanced User of Computers. An experienced Surfer of the Web, so to say. Someone who never, even back in 2000, fell prey to the flashy, blinking, red Download button that would appear on websites to trick you when you were trying to download something real. Pour two drinks into me and I'll even insist that I never, not once, not a single time in my life, clicked on something I didn't mean to click on. If I clicked, I meant to click. And I never clicked on a fake link, yes sir. I'm that good with the cursor. But, fucking hell, I think I would've fallen for this phishing attack.
-
The 49MB Web Page. I don't get people who truly enjoy horror movies. Like, you get a kick out of being scared, of … feeling bad? And yet here I am, reading about 49MB web pages, shivering, shaking my head.
-
Matteo Collina wrote about why Node.js needs a virtual file system and the two paragraphs that made everyone share this: "What began as a holiday experiment became PR #61478: a node:vfs module for Node.js, with almost 14,000 lines of code across 66 files. Let me be honest: a PR that size would normally take months of full-time work. This one happened because I built it with Claude Code. I pointed the AI at the tedious parts, the stuff that makes a 14k-line PR possible but no human wants to hand-write: implementing every fs method variant (sync, callback, promises), wiring up test coverage, and generating docs. I focused on the architecture, the API design, and reviewing every line. Without AI, this would not have been a holiday side project. It just wouldn't have happened."
-
The Robotic Tortoise & the Robotic Hare. It's a race between Opus 4.6 and Qwen 35B, the latter running locally and with less, say, smarts. But Qwen won. Because: "With 3x faster responses, I could add an extra cycle : 'critique the plan and address the critiques.' In the time the hare was still thinking, the tortoise ran another lap." Very interesting! I'm torn on this. At some point last year I was also a believer in "if you have a dumb but fast model, it can outrun the smart but slow model" but then, in practice, it turns out that on average the smart but slow model is actually fast, because -- on average -- it gets to the right results faster. Maybe that's changing? Maybe the floor has been raised too and "dumb" models are smart enough now?
-
"A skill file based on the articles written on my personal site. Designed for designers and engineers to help them build better user interfaces." What a time to be alive! A file as a distillation of one's own preferences and taste and judgement and experiences, fed to a neural network trained to help you get your work done.
-
macOS has `/usr/bin/time`, which takes an `-l` argument and can show memory & resource usage of whatever command you're passing. -
apenwarr: Every layer of review makes you 10x slower. In some sense, I get it. Yes. Reviews can be the bottleneck. But then: are reviews the same thing they were three years ago? Does a review take the same amount of time, no: should it take the same amount of time as in 2023, even if you can now spin up five parallel models to help you review? (And this one I'm consciously putting in parentheses so you can imagine I whisper this into your ear: I also don't think that in the near future the code generated by models will need close-up reviews.)
-
Can't say I've ever been really interested in Banksy, but this was great: In Search of Banksy.
-
A sufficiently detailed spec is code. I'm not sure what to think here. On one hand: yes, true, if you want to specify everything a piece of software is supposed to do, you might as well write the code. On the other: it also feels like you can specify what software is supposed to do without being 100% precise and, as long as the person (thing) implementing it and you have some shared understanding about what's left out of the spec, things will be fine. Question is how much shared understanding there is and I think that's where a lot of people have the wrong estimates.
-
tigerfs looks very, very interesting: "A filesystem backed by PostgreSQL, and a filesystem interface to PostgreSQL. TigerFS mounts a database as a directory. Every file is a real row. Writes are transactions. Multiple agents and humans can read and write concurrently with full ACID guarantees, locally or across machines. Any tool that works with files works out of the box." I've been hacking on an agent that isn't really stateless but also doesn't need a full VM. A filesystem backed by PostgreSQL seems like it sits right in the middle and could be very handy.
-
Armin: "There's a feeling that all the things that create friction in your life should be automated away. That human involvement should be replaced by AI-based decision-making. Because it is the friction of the process that is the problem. When in fact many times the friction, or that things just take time, is precisely the point."
-
Pre-ordered this within ten seconds of clicking the link: "Silicon is the element that built modernity. Silicon is a beautiful book about the world of transistors, chips, and the greatest technology revolution of all time." Of course, right after I caught myself: wait, did you just pre-order an expensive book about… silicon? Yes, I did. Let's see how it goes.
-
Talking about Silicon: there's a new Dwarkesh episode with Dylan Patel out. I love it. Listening to this made me think: is this how people who are into sports feel every weekend?
-
Then again, I do know what it's like to be into sports, don't I? Yesterday evening I put on this thriller (top comment: "The heavy breathing of two very experienced commentators tells you how special this achievement is!") and my wife couldn't make sense of the dichotomy between the quiet click-clacks coming from the TV and me saying "holy shit, holy shit, now he's going to put the white-- wow, incredible."
Ever spent a Sunday evening in front of the TV, alone, dreading going to school the next day, thinking "maybe I should become a snooker player? never played once in my life, but everyone has to start somewhere, don't they?" Then you should subscribe:
-
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release

## New releases

- [IDAssistMCP](https://github.com/symgraph/IDAssistMCP): 1.3.0 -
🔗 r/reverseengineering Hyoketsu - Solving the Vendor Dependency Problem in Reverse Engineering rss
submitted by /u/Mempodipper
[link] [comments] -
🔗 r/LocalLLaMA Interesting loop rss
submitted by /u/Willing_Reflection57
[link] [comments] -
🔗 r/Leeds Meant to be going on a date in Leeds but he cancelled. rss
It's super hard for me to get a day off work and I normally drive everywhere, however like an idiot I've booked the day off and booked the train and hotel only to be cancelled on. Rather than not going, what is there to do in Leeds on Thursdays?
submitted by /u/throwradrpri
[link] [comments] -
🔗 Baby Steps Maximally minimal view types, a follow-up rss
A short post to catalog two interesting suggestions that came in from my previous post, and some other related musings.
Syntax with `.`

It was suggested to me via email that we could use `.` to eliminate the syntax ambiguity:

```rust
let place = &mut self.{statistics};
```

Conceivably we could do this for the type, like:

```rust
fn method(
    mp: &mut MessageProcessor.{statistics},
    ...
)
```

and in `self` position:

```rust
fn foo(&mut self.{statistics}) {}
```

I have to sit with it but… I kinda like it?

I'll use it in the next example to try it on for size.

Coercion for calling public methods that name private types

In my post I said that if you have a public method whose `self` type references private fields, you would not be able to call it from another scope:

```rust
mod module {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        statistics: Statistics,
    }

    pub struct Statistics { .. }

    impl MessageProcessor {
        pub fn push_message(
            &mut self.{messages}, // -------- private field
            message: String,
        ) {}
    }
}

pub fn main() {
    let mp = MessageProcessor::default();
    mp.push_message(format!("Hi")); // ------------ Error!
}
```

The error arises from desugaring `push_message` to a call that references private fields:

```rust
MessageProcessor::push_message(
    &mut mp.{messages}, // -------- not nameable here
    format!("Hi"),
)
```

I proposed we could lint to avoid this situation.

But an alternative was proposed where we would say that, when we introduce an auto-ref, if the callee references local variables not visible from this point in the program, we just borrow the entire struct rather than borrowing specific fields.

So then we would desugar to:

```rust
MessageProcessor::push_message(
    &mut mp, // -- borrow the whole struct
    format!("Hi"),
)
```

If we then say that `&mut MessageProcessor` is coercible to a `&mut MessageProcessor.{messages}`, then the call would be legal.

Interestingly, the autoderef loop already considers visibility: if you do `a.foo`, we will deref until we see a `foo` field visible to you at the current point.

Oh and a side note, assigning etc

This raises an interesting question I did not discuss. What happens when you write a value of a type like `MessageProcessor.{messages}`?

For example, what if I do this:

```rust
fn swap_fields(
    mp1: &mut MessageProcessor.{messages},
    mp2: &mut MessageProcessor.{messages},
) {
    std::mem::swap(mp1, mp2);
}
```

What I expect is that this would just swap the selected fields (`messages`, in this case) and leave the other fields untouched.

The basic idea is that a type `MessageProcessor.{messages}` indicates that the `messages` field is initialized and accessible and the other fields must be completely ignored.

Another possible future extension: moved values

This represents another possible future extension. Today if you move out of a field in a struct, then you can no longer work with the value as a whole:

```rust
impl MessageProcessor {
    fn example(mut self) {
        // move from self.statistics
        std::mem::drop(self.statistics);

        // now I cannot call this method,
        // because I can't borrow `self`:
        self.push_message(format!("Hi again"));
    }
}
```

But with selective borrowing, we could allow this, and you could even return "partially initialized" values:

```rust
impl MessageProcessor {
    fn take_statistics(
        mut self,
    ) -> MessageProcessor.{messages} {
        std::mem::drop(self.statistics);
        self
    }
}
```

That'd be neat.
-
- March 21, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-21 rss
IDA Plugin Updates on 2026-03-21
New Releases:
Activity:
- AWDP_PWN_Helper
- 70668687: fix(seccomp): fix the _start universal-defense rules and complete syscall disabling
- binsync
- 000e68c1: feat/compare-commit-history (#508)
- IDAssistMCP
- AWDP_PWN_Helper
-
🔗 Simon Willison Profiling Hacker News users based on their comments rss
Here's a mildly dystopian prompt I've been experimenting with recently: "Profile this user", accompanied by a copy of their last 1,000 comments on Hacker News.
Obtaining those comments is easy. The Algolia Hacker News API supports listing comments sorted by date that have a specific tag, and the author of a comment is tagged there as `author_username`. Here's a JSON feed of my (simonw) most recent comments, for example:

https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000
The Algolia API is served with open CORS headers, which means you can access the API from JavaScript running on any web page.
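Since this is a plain GET endpoint returning JSON, pulling a user's comments for pasting into an LLM takes only a few lines. A minimal sketch in Python (the `comment_text` field name follows the Algolia response format; the helper names are my own):

```python
import json
from urllib.request import urlopen

def hn_comments_url(username: str, hits: int = 1000) -> str:
    # Same URL shape as the example above: one author's comments,
    # newest first, up to 1000 per page.
    return (
        "https://hn.algolia.com/api/v1/search_by_date"
        f"?tags=comment,author_{username}&hitsPerPage={hits}"
    )

def fetch_comment_texts(username: str) -> list[str]:
    # Each hit carries the comment body in its "comment_text" field.
    with urlopen(hn_comments_url(username)) as resp:
        data = json.load(resp)
    return [hit.get("comment_text") or "" for hit in data["hits"]]
```

Joining the returned list with newlines gives a blob ready to paste after a "profile this user" prompt.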
Last August I had ChatGPT build me a simple tool for hitting that API for any user which fetches their comments and gives me a mobile-friendly "copy to clipboard" button. I've since tweaked it a few times with Claude.
I can then paste the whole lot into any LLM - these days I mostly use Claude Opus 4.6 for this - and prompt "profile this user" to see what it can figure out.
It's startlingly effective! It feels invasive to quote the profile of another user here, so I'll show you what it produces for me:
This is Simon Willison — a prolific, independent software developer, blogger, and one of the most visible voices in the AI-assisted coding space. Here's a profile drawn from his comments:
Professional identity: Independent developer and writer. Co-creator of Django, creator of Datasette and many other open source tools. On the board of the Python Software Foundation. Runs a heavily-trafficked personal blog at simonwillison.net with multiple content types (entries, link blog, notes, quotations). Monetizes through GitHub sponsors, ethical ads, and occasional consulting — not through any AI company payroll, a point he defends frequently.
Core thesis on AI coding: He's the leading evangelist for what he calls "agentic engineering" — using coding agents (Claude Code, Codex CLI) as productivity multipliers for skilled developers. His central argument: LLMs don't replace programmers, they amplify existing expertise. He's writing a guide/book on this at simonwillison.net/guides. He insists these tools are deceptively difficult to use well, and that most people who report bad results simply haven't learned the craft yet.
Working style: He does most of his programming from his iPhone via Claude Code for web, often while riding BART, walking the dog, or sitting in the garden. He embraces "YOLO mode" (auto-approving agent actions) and frequently runs 2-3 agent sessions in parallel. He starts every session with "run uv run pytest" to anchor the agent in TDD. He's on the $200/month Claude Max plan.
Key technical interests: Sandboxing and security (WebAssembly, Pyodide, sandbox-exec, Firecracker), SQLite (deeply), Python packaging and tooling (uv, PyPI distribution tricks), browser-in-a-browser experiments (v86, WASM Linux), and local LLM inference. He's been learning Go "by osmosis" through coding agents.
Security consciousness: He coined the term "prompt injection" and the "lethal trifecta" (access to private data + exposure to untrusted input + ability to take actions). He's genuinely alarmed about OpenClaw/Clawdbot security risks and has been predicting a headline-grabbing prompt injection attack "in the next six months" for over two years — acknowledging it keeps not happening while insisting the risk is real. He references the "normalization of deviance" pattern frequently.
Personality and debate style: Energetic, combative when challenged, but generally good-natured. He engages heavily in HN comment threads — sometimes dozens of replies in a single discussion. He's transparent about his biases and maintains a public disclosures page. He pushes back hard against what he sees as motivated reasoning from AI skeptics, but also against AGI hype ("I refer to that as the science fiction version of AI"). He gets visibly frustrated by sock-puppet accounts and bad-faith criticism. He has a self-deprecating streak ("my stupid pelican benchmark").
Recurring themes across hundreds of comments:
- "Two things can be true at the same time" — he holds nuanced positions
- Tests are for productivity, not just quality
- The November 2025 model releases (Opus 4.5, GPT-5.2) were a genuine inflection point
- Code review is the biggest bottleneck in agent-assisted workflows
- "Cognitive debt" is a real and unsolved problem
- The best engineering practices (tests, docs, CI/CD, clean code) also make agents work better
- He's deliberately trying to "teach people good software engineering while tricking them into thinking the book is about AI"
Personal interests mentioned: Niche museums (runs niche-museums.com), New Zealand kākāpō parrots, cooking food from countries he's never visited, chickens in his garden. Lives in Half Moon Bay, CA near a major pelican roost. The pelican-on-a-bicycle SVG test is his signature LLM benchmark.
In a sentence: A deeply experienced, independently-minded developer who's genuinely excited about AI coding tools, fights hard against both uncritical hype and reflexive dismissal, and is trying to professionalize how the industry uses these tools — while also worrying loudly about the security implications almost nobody else takes seriously enough.
This all checks out! I ran this in Claude incognito mode to hopefully prevent Claude from guessing that I was evaluating myself and sycophantically glazing me - the tone of the response it gave here is similar to the tone I've seen for other accounts.
I expect it guessed my real name due to my habit of linking to my own writing from some of my comments, which provides plenty of simonwillison.net URLs for it to associate with my public persona. I haven't seen it take a guess at a real name for any of the other profiles I've generated.
It's a little creepy to be able to derive this much information about someone so easily, even when they've shared that freely in a public (and API-available) place.
I mainly use this to check that I'm not getting embroiled in an extensive argument with someone who has a history of arguing in bad faith. Thankfully that's rarely the case - Hacker News continues to be a responsibly moderated online space.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/Leeds The sad demise of Yorkshire flagstone pavements rss
If you know any people who like to steal our pavements please ask them to stop!
submitted by /u/AsparagusDramatic475
[link] [comments] -
🔗 r/reverseengineering Turbo Pascal 3.02A, deconstructed rss
submitted by /u/alberto-m-dev
[link] [comments] -
🔗 r/Yorkshire Response to the horror cup of tea made by an American. rss
Howdy everyone. I am the American at large in the original post, which was posted sometime last night. I wanted to provide some insight into the original post made by a friend of mine, who is the reason I had the opportunity to visit West Yorkshire last year for the first time. A couple of points:
- The tea looked lumpy because I hadn't had a chance to stir it yet, and nothing was mixed.
- My wife is currently on a no-dairy diet for the next few weeks so we can figure out if my son, who is breastfed, has a dairy intolerance, so I unfortunately had to use oat milk (🤮) instead of real milk or cream in the tea, which is why it looked different.
- As the wife had never had Yorkshire tea before, I only used a tablespoon of milk, as we're both inexperienced in making British-style tea.
- The color. Yes, it's dark, and due to using oat milk instead of regular milk, it's more than likely always going to be darker than it should be.
- Any snarky replies in the original post you see from me to the OP are strictly due to us being friends and not in any way me insulting him or trying to insult British culture.

TLDR: I failed at my first attempt at a cup of tea, but have since tried to make it better and have posted updated photos. I understand that oat milk is not correct, but unfortunately it has to be used, and thus the tea will never be the right color. submitted by /u/mitchellfuller21
[link] [comments]
-
🔗 r/york Sign on bus counting down to departure rss
submitted by /u/tyw7
[link] [comments] -
🔗 r/reverseengineering Widevine Leak rss
submitted by /u/Aaryakrishna_
[link] [comments] -
🔗 Anton Zhiyanov Solod: Go can be a better C rss
I'm working on a new programming language named Solod (So). It's a strict subset of Go that translates to C, without hidden memory allocations and with source-level interop.
Highlights:
- Go in, C out. You write regular Go code and get readable C11 as output.
- Zero runtime. No garbage collection, no reference counting, no hidden allocations.
- Everything is stack-allocated by default. Heap is opt-in through the standard library.
- Native C interop. Call C from So and So from C — no CGO, no overhead.
- Go tooling works out of the box — syntax highlighting, LSP, linting and "go test".
So supports structs, methods, interfaces, slices, multiple returns, and defer. To keep things simple, there are no channels, goroutines, closures, or generics.
So is for systems programming in C, but with Go's syntax, type safety, and tooling.
Hello world • Language tour • Compatibility • Design decisions • FAQ • Final thoughts
'Hello world' example
This Go code in a file `main.go`:

```go
package main

type Person struct {
    Name string
    Age  int
    Nums [3]int
}

func (p *Person) Sleep() int {
    p.Age += 1
    return p.Age
}

func main() {
    p := Person{Name: "Alice", Age: 30}
    p.Sleep()
    println(p.Name, "is now", p.Age, "years old.")
    p.Nums[0] = 42
    println("1st lucky number is", p.Nums[0])
}
```

Translates to a header file `main.h`:

```c
#pragma once
#include "so/builtin/builtin.h"

typedef struct main_Person {
    so_String Name;
    so_int Age;
    so_int Nums[3];
} main_Person;

so_int main_Person_Sleep(void* self);
```

Plus an implementation file `main.c`:

```c
#include "main.h"

so_int main_Person_Sleep(void* self) {
    main_Person* p = (main_Person*)self;
    p->Age += 1;
    return p->Age;
}

int main(void) {
    main_Person p = (main_Person){.Name = so_str("Alice"), .Age = 30};
    main_Person_Sleep(&p);
    so_println("%.*s %s %" PRId64 " %s", p.Name.len, p.Name.ptr, "is now", p.Age, "years old.");
    p.Nums[0] = 42;
    so_println("%s %" PRId64, "1st lucky number is", p.Nums[0]);
}
```

Language tour
In terms of features, So is an intersection between Go and C, making it one of the simplest C-like languages out there — on par with Hare.
And since So is a strict subset of Go, you already know it if you know Go. It's pretty handy if you don't want to learn another syntax.
Let's briefly go over the language features and see how they translate to C.
Variables • Strings • Arrays • Slices • Maps • If/else and for • Functions • Multiple returns • Structs • Methods • Interfaces • Enums • Errors • Defer • C interop • Packages
Values and variables
So supports basic Go types and variable declarations:
```go
// so
const n = 100_000
f := 3.14
var r = '本'
var v any = 42
```

```c
// c
const so_int n = 100000;
double f = 3.14;
so_rune r = U'本';
void* v = &(so_int){42};
```

`byte` is translated to `so_byte` (`uint8_t`), `rune` to `so_rune` (`int32_t`), and `int` to `so_int` (`int64_t`).

`any` is not treated as an interface. Instead, it's translated to `void*`. This makes handling pointers much easier and removes the need for `unsafe.Pointer`.

`nil` is translated to `NULL` (for pointer types).

Strings
Strings are represented as the `so_String` type in C:

```c
typedef struct {
    const char* ptr;
    size_t len;
} so_String;
```

All standard string operations are supported, including indexing, slicing, and iterating with a for-range loop.

```go
// so
str := "Hi 世界!"
println("str[1] =", str[1])
for i, r := range str {
    println("i =", i, "r =", r)
}
```

```c
// c
so_String str = so_str("Hi 世界!");
so_println("%s %u", "str[1] =", so_at(so_byte, str, 1));
for (so_int i = 0, _iw = 0; i < so_len(str); i += _iw) {
    _iw = 0;
    so_rune r = so_utf8_decode(str, i, &_iw);
    so_println("%s %" PRId64 " %s %d", "i =", i, "r =", r);
}
```

Converting a string to a byte slice and back is a zero-copy operation:

```go
// so
s := "1世3"
bs := []byte(s)
s1 := string(bs)
```

```c
// c
so_String s = so_str("1世3");
so_Slice bs = so_string_bytes(s);   // wraps s.ptr
so_String s1 = so_bytes_string(bs); // wraps bs.ptr
```

Converting a string to a rune slice and back allocates on the stack with `alloca`:

```go
// so
s := "1世3"
rs := []rune(s)
s1 := string(rs)
```

```c
// c
so_String s = so_str("1世3");
so_Slice rs = so_string_runes(s);   // allocates
so_String s1 = so_runes_string(rs); // allocates
```

There's a `so/strings` stdlib package for heap-allocated strings and various string operations.

Arrays
Arrays are represented as plain C arrays (`T name[N]`):

```go
// so
var a [5]int                    // zero-initialized
b := [5]int{1, 2, 3, 4, 5}      // explicit values
c := [...]int{1, 2, 3, 4, 5}    // inferred size
d := [...]int{100, 3: 400, 500} // designated initializers
```

```c
// c
so_int a[5] = {0};
so_int b[5] = {1, 2, 3, 4, 5};
so_int c[5] = {1, 2, 3, 4, 5};
so_int d[5] = {100, [3] = 400, 500};
```

`len()` on arrays is emitted as a compile-time constant.

Slicing an array produces a `so_Slice`.

Slices
Slices are represented as the `so_Slice` type in C:

```c
typedef struct {
    void* ptr;
    size_t len;
    size_t cap;
} so_Slice;
```

All standard slice operations are supported, including indexing, slicing, and iterating with a for-range loop.

```go
// so
s1 := []string{"a", "b", "c", "d", "e"}
s2 := s1[1 : len(s1)-1]
for i, v := range s2 {
    println(i, v)
}
```

```c
// c
so_Slice s1 = (so_Slice){(so_String[5]){
    so_str("a"), so_str("b"), so_str("c"),
    so_str("d"), so_str("e")}, 5, 5};
so_Slice s2 = so_slice(so_String, s1, 1, so_len(s1) - 1);
for (so_int i = 0; i < so_len(s2); i++) {
    so_String v = so_at(so_String, s2, i);
    so_println("%" PRId64 " %.*s", i, v.len, v.ptr);
}
```

As in Go, a slice is a value type. Unlike in Go, a nil slice and an empty slice are the same thing:

```go
// so
var nils []int = nil
var empty []int = []int{}
```

```c
// c
so_Slice nils = (so_Slice){0};
so_Slice empty = (so_Slice){0};
```

`make()` allocates a fixed amount of memory on the stack (`sizeof(T)*cap`). `append()` only works up to the initial capacity and panics if it's exceeded. There's no automatic reallocation; use the `so/slices` stdlib package for heap allocation and dynamic arrays.
Maps are fixed-size and stack-allocated, backed by parallel key/value arrays with linear search. They are pointer-based reference types, represented as `so_Map*` in C. No delete, no resize.

```c
// c
typedef struct {
    void* keys;
    void* vals;
    size_t len;
    size_t cap;
} so_Map;
```

Only use maps when you have a small, fixed number of key-value pairs. For anything else, use heap-allocated maps from the `so/maps` package (planned).

Most of the standard map operations are supported, including getting/setting values and iterating with a for-range loop:

```go
// so
m := map[string]int{"a": 11, "b": 22}
for k, v := range m {
    println(k, v)
}
```

```c
// c
so_Map* m = &(so_Map){(so_String[2]){
    so_str("a"), so_str("b")},
    (so_int[2]){11, 22}, 2, 2};
for (so_int _i = 0; _i < (so_int)m->len; _i++) {
    so_String k = ((so_String*)m->keys)[_i];
    so_int v = ((so_int*)m->vals)[_i];
    so_println("%.*s %" PRId64, k.len, k.ptr, v);
}
```

As in Go, a map is a pointer type. A `nil` map emits as `NULL` in C.

If/else and for
If-else and for come in all shapes and sizes, just like in Go.
Standard if-else with chaining:
```go
// so
if x > 0 {
    println("positive")
} else if x < 0 {
    println("negative")
} else {
    println("zero")
}
```

```c
// c
if (x > 0) {
    so_println("%s", "positive");
} else if (x < 0) {
    so_println("%s", "negative");
} else {
    so_println("%s", "zero");
}
```

Init statement (scoped to the if block):

```go
// so
if num := 9; num < 10 {
    println(num, "has 1 digit")
}
```

```c
// c
{
    so_int num = 9;
    if (num < 10) {
        so_println("%" PRId64 " %s", num, "has 1 digit");
    }
}
```

Traditional for loop:

```go
// so
for j := 0; j < 3; j++ {
    println(j)
}
```

```c
// c
for (so_int j = 0; j < 3; j++) {
    so_println("%" PRId64, j);
}
```

While-style loop:

```go
// so
i := 1
for i <= 3 {
    println(i)
    i = i + 1
}
```

```c
// c
so_int i = 1;
for (; i <= 3;) {
    so_println("%" PRId64, i);
    i = i + 1;
}
```

Range over an integer:

```go
// so
for k := range 3 {
    println(k)
}
```

```c
// c
for (so_int k = 0; k < 3; k++) {
    so_println("%" PRId64, k);
}
```

Functions
Regular functions translate to C naturally:
```go
// so
func sumABC(a, b, c int) int {
    return a + b + c
}
```

```c
// c
static so_int sumABC(so_int a, so_int b, so_int c) {
    return a + b + c;
}
```

Named function types become typedefs:

```go
// so
type SumFn func(int, int, int) int

fn1 := sumABC          // infer type
var fn2 SumFn = sumABC // explicit type
s := fn2(7, 8, 9)
```

```c
// main.h
typedef so_int (*main_SumFn)(so_int, so_int, so_int);

// main.c
main_SumFn fn1 = sumABC;
main_SumFn fn2 = sumABC;
so_int s = fn2(7, 8, 9);
```

Exported functions (capitalized) become public C symbols prefixed with the package name (`package_Func`). Unexported functions are `static`.

Variadic functions use the standard `...` syntax and translate to passing a slice:

```go
// so
func sum(nums ...int) int {
    total := 0
    for _, num := range nums {
        total += num
    }
    return total
}

func main() {
    sum(1, 2, 3, 4, 5)
}
```

```c
// c
static so_int sum(so_Slice nums) {
    so_int total = 0;
    for (so_int _ = 0; _ < so_len(nums); _++) {
        so_int num = so_at(so_int, nums, _);
        total += num;
    }
    return total;
}

int main(void) {
    sum((so_Slice){(so_int[5]){1, 2, 3, 4, 5}, 5, 5});
}
```

Function literals (anonymous functions and closures) are not supported.
Multiple returns
So supports two-value multiple returns in two patterns: `(T, error)` and `(T1, T2)`. Both cases translate to the `so_Result` C type:

```go
// so
func divide(a, b int) (int, error) {
    return a / b, nil
}

func divmod(a, b int) (int, int) {
    return a / b, a % b
}
```

```c
// c
typedef struct {
    so_Value val;
    so_Value val2;
    so_Error err;
} so_Result;
```

```c
// c
static so_Result divide(so_int a, so_int b) {
    return (so_Result){.val.as_int = a / b, .err = NULL};
}

static so_Result divmod(so_int a, so_int b) {
    return (so_Result){.val.as_int = a / b, .val2.as_int = a % b};
}
```

Named return values are not supported.
Structs
Structs translate to C naturally:
```go
// so
type person struct {
    name string
    age  int
}

bob := person{"Bob", 20}
alice := person{name: "Alice", age: 30}
fred := person{name: "Fred"}
```

```c
// c
typedef struct person {
    so_String name;
    so_int age;
} person;

person bob = (person){so_str("Bob"), 20};
person alice = (person){.name = so_str("Alice"), .age = 30};
person fred = (person){.name = so_str("Fred")};
```

`new()` works with types and values:

```go
// so
n := new(int)                    // *int, zero-initialized
p := new(person)                 // *person, zero-initialized
n2 := new(42)                    // *int with value 42
p2 := new(person{name: "Alice"}) // *person with values
```

```c
// c
so_int* n = &(so_int){0};
person* p = &(person){0};
so_int* n2 = &(so_int){42};
person* p2 = &(person){.name = so_str("Alice")};
```

Methods
Methods are defined on struct types with pointer or value receivers:
```go
// so
type Rect struct {
    width, height int
}

func (r *Rect) Area() int {
    return r.width * r.height
}

func (r Rect) resize(x int) Rect {
    r.height *= x
    r.width *= x
    return r
}
```

Pointer receivers pass `void* self` in C and cast to the struct pointer. Value receivers pass the struct by value, so modifications operate on a copy:

```c
// c
typedef struct main_Rect {
    so_int width;
    so_int height;
} main_Rect;

so_int main_Rect_Area(void* self) {
    main_Rect* r = (main_Rect*)self;
    return r->width * r->height;
}

static main_Rect main_Rect_resize(main_Rect r, so_int x) {
    r.height *= x;
    r.width *= x;
    return r;
}
```

Calling methods on values and pointers emits pointers or values as necessary:

```go
// so
r := Rect{width: 10, height: 5}
r.Area()    // called on value (address taken automatically)
r.resize(2) // called on value (passed by value)

rp := &r
rp.Area()    // called on pointer
rp.resize(2) // called on pointer (dereferenced automatically)
```

```c
// c
main_Rect r = (main_Rect){.width = 10, .height = 5};
main_Rect_Area(&r);
main_Rect_resize(r, 2);

main_Rect* rp = &r;
main_Rect_Area(rp);
main_Rect_resize(*rp, 2);
```

Methods on named primitive types are also supported.
Interfaces
Interfaces in So are like Go interfaces, but they don't include runtime type information.
Interface declarations list the required methods:
```go
// so
type Shape interface {
    Area() int
    Perim(n int) int
}
```

In C, an interface is a struct with a `void* self` pointer and function pointers for each method (less efficient than using a static method table, but simpler; this might change in the future):

```c
// c
typedef struct main_Shape {
    void* self;
    so_int (*Area)(void* self);
    so_int (*Perim)(void* self, so_int n);
} main_Shape;
```

Just as in Go, a concrete type implements an interface by providing the necessary methods:

```go
// so
func (r *Rect) Area() int {
    // ...
}

func (r *Rect) Perim(n int) int {
    // ...
}
```

```c
// c
so_int main_Rect_Area(void* self) {
    // ...
}

so_int main_Rect_Perim(void* self, so_int n) {
    // ...
}
```

Passing a concrete type to functions that accept interfaces:

```go
// so
func calcShape(s Shape) int {
    return s.Perim(2) + s.Area()
}

r := Rect{width: 10, height: 5}
calcShape(&r)        // implicit conversion
calcShape(Shape(&r)) // explicit conversion
```

```c
// c
static so_int calcShape(main_Shape s) {
    return s.Perim(s.self, 2) + s.Area(s.self);
}

main_Rect r = (main_Rect){.width = 10, .height = 5};
calcShape((main_Shape){.self = &r,
    .Area = main_Rect_Area, .Perim = main_Rect_Perim});
calcShape((main_Shape){.self = &r,
    .Area = main_Rect_Area, .Perim = main_Rect_Perim});
```

Type assertion works for concrete types (`v := iface.(*Type)`), but not for interfaces (`iface.(Interface)`). Type switch is not supported.

Empty interfaces (`interface{}` and `any`) are translated to `void*`.

Enums
So supports typed constant groups as enums:
```go
// so
type ServerState string

const (
    StateIdle      ServerState = "idle"
    StateConnected ServerState = "connected"
    StateError     ServerState = "error"
)
```

Each constant is emitted as a C `const`:

```c
// main.h
typedef so_String main_ServerState;
extern const main_ServerState main_StateIdle;
extern const main_ServerState main_StateConnected;
extern const main_ServerState main_StateError;

// main.c
const main_ServerState main_StateIdle = so_str("idle");
const main_ServerState main_StateConnected = so_str("connected");
const main_ServerState main_StateError = so_str("error");
```

`iota` is supported for integer-typed constants:

```go
// so
type Day int

const (
    Sunday Day = iota
    Monday
    Tuesday
)
```

Iota values are evaluated at compile time and translated to integer literals:

```c
// c
typedef so_int main_Day;
const main_Day main_Sunday = 0;
const main_Day main_Monday = 1;
const main_Day main_Tuesday = 2;
```

Errors
Errors use the `so_Error` type (a pointer):

```c
// c
struct so_Error_ {
    const char* msg;
};
typedef struct so_Error_* so_Error;
```

So only supports sentinel errors, which are defined at the package level using `errors.New` (implemented as a compiler built-in):

```go
// so
import "solod.dev/so/errors"

var ErrOutOfTea = errors.New("no more tea available")
```

```c
// c
#include "so/errors/errors.h"

so_Error main_ErrOutOfTea = errors_New("no more tea available");
```

Errors are compared using `==`. This is an O(1) operation (compares pointers, not strings):

```go
// so
func makeTea(arg int) error {
    if arg == 42 {
        return ErrOutOfTea
    }
    return nil
}

err := makeTea(42)
if err == ErrOutOfTea {
    println("out of tea")
}
```

```c
// c
static so_Error makeTea(so_int arg) {
    if (arg == 42) {
        return main_ErrOutOfTea;
    }
    return NULL;
}

so_Error err = makeTea(42);
if (err == main_ErrOutOfTea) {
    so_println("%s", "out of tea");
}
```

Dynamic errors (`fmt.Errorf`), local error variables (`errors.New` inside functions), and error wrapping are not supported.

Defer
`defer` schedules a function or method call to run at the end of the enclosing scope.

The scope can be either a function (as in Go):

```go
// so
func funcScope() {
    xopen(&state)
    defer xclose(&state)
    if state != 1 {
        panic("unexpected state")
    }
}
```

Or a bare block (unlike Go):

```go
// so
func blockScope() {
    {
        xopen(&state)
        defer xclose(&state)
        if state != 1 {
            panic("unexpected state")
        }
        // xclose(&state) runs here, at block end
    }
    // state is already closed here
}
```

Deferred calls are emitted inline (before returns, panics, and scope end) in LIFO order:

```c
// c
static void funcScope(void) {
    xopen(&state);
    if (state != 1) {
        xclose(&state);
        so_panic("unexpected state");
    }
    xclose(&state);
}
```

Defer is not supported inside other scopes like `for` or `if`.

C interop
Include a C header file with `so:include`:

```go
//so:include <stdio.h>
```

Declare an external C type (excluded from emission) with `so:extern`:

```go
//so:extern FILE
type os_file struct{}
```

Declare an external C function (no body or `so:extern`):

```go
func fopen(path string, mode string) *os_file

//so:extern
func fclose(stream *os_file) int {
    _ = stream
    return 0
}
```

When calling extern functions, `string` and `[]T` arguments are automatically decayed to their C equivalents: string literals become raw C strings (`"hello"`), string values become `char*`, and slices become raw pointers. This makes interop cleaner:

```go
// so
f := fopen("/tmp/test.txt", "w")
```

```c
// c
os_file* f = fopen("/tmp/test.txt", "w");
// not like this:
// fopen(so_str("/tmp/test.txt"), so_str("w"))
```

The decay behavior can be turned off with the `nodecay` flag:

```go
//so:extern nodecay
func set_name(acc *Account, name string)
```

The `so/c` package includes helpers for converting C pointers back to So string and slice types. The `unsafe` package is also available and is implemented as compiler built-ins.

Packages
Each Go package is translated into a single `.h` + `.c` pair, regardless of how many `.go` files it contains. Multiple `.go` files in the same package are merged into one `.c` file, separated by `// -- filename.go --` comments.

Exported symbols (capitalized names) are prefixed with the package name:

```go
// geom/geom.go
package geom

const Pi = 3.14159

func RectArea(width, height float64) float64 {
    return width * height
}
```

Becomes:

```c
// geom.h
extern const double geom_Pi;
double geom_RectArea(double width, double height);

// geom.c
const double geom_Pi = 3.14159;
double geom_RectArea(double width, double height) { ... }
```

Unexported symbols (lowercase names) keep their original names and are marked `static`:

```c
// c
static double rectArea(double width, double height);
```

Exported symbols are declared in the `.h` file (with `extern` for variables). Unexported symbols only appear in the `.c` file.

Importing a So package translates to a C `#include`:

```go
// so
import "example/geom"
```

```c
// c
#include "geom/geom.h"
```

Calling imported symbols uses the package prefix:

```go
// so
a := geom.RectArea(5, 10)
_ = geom.Pi
```

```c
// c
double a = geom_RectArea(5, 10);
(void)geom_Pi;
```

That's it for the language tour!
Compatibility
So generates C11 code that relies on several GCC/Clang extensions:
- Binary literals (`0b1010`) in generated code.
- Statement expressions (`({...})`) in macros.
- `__attribute__((constructor))` for package-level initialization.
- `__auto_type` for local type inference in generated code.
- `__typeof__` for type inference in generic macros.
- `alloca` for `make()` and other dynamic stack allocations.
You can use GCC, Clang, or `zig cc` to compile the transpiled C code. MSVC is not supported.
Design decisions
So is highly opinionated.
Simplicity is key. Fewer features are always better. Every new feature is strongly discouraged by default and should be added only if there are very convincing real-world use cases to support it. This applies to the standard library too — So tries to export as little of Go's stdlib API as possible while still remaining highly useful for real-world use cases.
No heap allocations are allowed in language built-ins (like maps, slices, new, or append). Heap allocations are allowed in the standard library, but they must clearly state when an allocation happens and who owns the allocated data.
Fast and easy C interop. Even though So uses Go syntax, it's basically C with its own standard library. Calling C from So, and So from C, should always be simple to write and run efficiently. The So standard library (translated to C) should be easy to add to any C project.
Readability. There are several languages that claim they can transpile to readable C code. Unfortunately, the C code they generate is usually unreadable or barely readable at best. So isn't perfect in this area either (though it's arguably better than others), but it aims to produce C code that's as readable as possible.
Go compatibility. So code is valid Go code. No exceptions.
Non-goals:
Raw performance. You can definitely write C code by hand that runs faster than code produced by So. Also, some features in So, like interfaces, are currently implemented in a way that's not very efficient, mainly to keep things simple.
Hiding C entirely. So is a cleaner way to write C, not a replacement for it. You should know C to use So effectively.
Go feature parity. Less is more. Iterators aren't coming, and neither are generic methods.
Frequently asked questions
I have heard these several times, so it's worth answering.
Why not Rust/Zig/Odin/other language?
Because I like C and Go.
Why not TinyGo?
TinyGo is lightweight, but it still has a garbage collector, a runtime, and aims to support all Go features. What I'm after is something even simpler, with no runtime at all, source-level C interop, and eventually, Go's standard library ported to plain C so it can be used in regular C projects.
How does So handle memory?
Everything is stack-allocated by default. There's no garbage collector or reference counting. The standard library provides explicit heap allocation in the `so/mem` package when you need it.

Is it safe?
So itself has few safeguards other than the default Go type checking. It will panic on out-of-bounds array access, but it won't stop you from returning a dangling pointer or forgetting to free allocated memory.
Most memory-related problems can be caught with AddressSanitizer in modern compilers, so I recommend enabling it during development by adding `-fsanitize=address` to your `CFLAGS`.

Can I use So code from C (and vice versa)?
Yes. So compiles to plain C, therefore calling So from C is just calling C from C. Calling C from So is equally straightforward.
Can I compile existing Go packages with So?
Not really. Go uses automatic memory management, while So uses manual memory management. So also supports far fewer features than Go. Neither Go's standard library nor third-party packages will work with So without changes.
How stable is this?
Not for production at the moment.
Where's the standard library?
There is a growing set of high-level packages (`so/bytes`, `so/mem`, `so/slices`, ...). There are also low-level packages that wrap the libc API (`so/c/stdlib`, `so/c/stdio`, `so/c/cstring`, ...). Check the links below for more details.
Even though So isn't ready for production yet, I encourage you to try it out on a hobby project or just keep an eye on it if you like the concept.
Further reading:
-
🔗 mhx/dwarfs dwarfs-0.15.1 release
Serious Bug in All Previous Releases
In #350, cipriancraciun started a discussion that got me thinking about the file scanner class. At some point, I realized that there might be a bug, and, after looking at the code, it turned out that there was one: when collecting hard-linked files, the class did not take the device these files were on into account.
When `--file-hash` is set to anything except `none`, the issue is triggered if and only if all of the following are true:

1. The input to `mkdwarfs` spans more than one device (i.e. mount point).
2. There are regular files with the same inode number on more than one of these devices, and these files are part of the input to `mkdwarfs` (i.e. not filtered out).
3. At least two files in such a set of files with identical inode numbers also have a hard link count greater than 1 on their respective devices.
When
--file-hash is set to none, the issue is triggered regardless of condition (3) above. In this case, however, mkdwarfs is guaranteed to crash with an assertion if it runs into the issue:

```
$ mkdir data && echo "hello" >data/x
$ mkdwarfs -i data -o data.dwarfs
$ mkdir -p mnt/a mnt/b
$ dwarfs data.dwarfs mnt/a && dwarfs data.dwarfs mnt/b
$ mkdwarfs --file-hash=none -i mnt -o /dev/null --force
[...]
Assertion `!files_.empty()` failed in /home/mhx/dwarfs/src/writer/internal/inode_manager.cpp(212): inode has no file (any)
```

So the default, hash-based deduplication mode is much more dangerous because it fails silently, but I hope that condition (3) is rarely true in practice.
The fix is
3c15ab2, along with a dedicated test. I strongly recommend upgrading to this new release if your input to mkdwarfs spans multiple devices.
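The fix boils down to keying hard-link groups by (device, inode) rather than by inode alone. A minimal sketch in Rust of that idea (this is not dwarfs' actual code; the types and names are made up for illustration):

```rust
use std::collections::HashMap;

// Simplified model of hard-link collection during a scan. Keying
// only by inode number (the bug) conflates files from different
// devices; keying by (device, inode) keeps them apart.
struct ScannedFile {
    dev: u64,
    ino: u64,
    path: &'static str,
}

fn group_hardlinks(files: &[ScannedFile]) -> HashMap<(u64, u64), Vec<&'static str>> {
    let mut groups: HashMap<(u64, u64), Vec<&'static str>> = HashMap::new();
    for f in files {
        groups.entry((f.dev, f.ino)).or_default().push(f.path);
    }
    groups
}

fn main() {
    // Same inode number 7 on two different devices: these are
    // distinct files and must end up in two distinct groups.
    let files = [
        ScannedFile { dev: 1, ino: 7, path: "mnt/a/x" },
        ScannedFile { dev: 2, ino: 7, path: "mnt/b/x" },
    ];
    let groups = group_hardlinks(&files);
    assert_eq!(groups.len(), 2); // inode-only keying would yield 1
}
```

With inode-only keying, both paths would land in one group and only one inode's data would survive into the image, which is exactly the silent data-loss scenario described above.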
Bug fixes
-
mkdwarfs did not correctly handle inputs where hard links had the same inode number on different devices. To run into this issue, you would have to make mkdwarfs scan files from multiple devices (e.g. the root of a directory tree with multiple mounted filesystems), have files with the same inode number on different devices, and have at least two of those files also have a link count greater than 1. While this is hopefully rare in practice, it is a serious bug that can lead to crashes (in the best case) or even data loss (in the worst case), as only the data of one of these files would be stored in the image. This has been fixed and a test has been added to cover this case. -
A missing dependency was causing linker errors with shared library builds on macOS. This has been fixed.
Build
-
The static release binaries are now all built using Clang and link-time optimization. This was previously not the case for some architectures due to bugs in the toolchain. As a result, the binaries are now significantly smaller.
-
There is now a new set of binaries (
dwarfs-universal-small) that are built without brotli support and without support for the performance monitor. The performance monitor is rarely used and brotli compression comes with a huge dictionary that bloats the binary size without offering much benefit over lzma or zstd in most cases. If you care about binary size, these new binaries are a good default choice.
Full Changelog :
v0.15.0...v0.15.1

SHA-256 Checksums
```
ff5ef1716dec13082356a23ca8b9a349d00e8af71712cd659d95195202838e5d  dwarfs-0.15.1-Linux-aarch64.tar.xz
1d017f5da0a92f61d8620c45670cd799e0bd452a8c1f31080cea554bce880dda  dwarfs-0.15.1-Linux-arm.tar.xz
c91bfe1eb348a8a34581d6377a49c937f18a96a8d1460241fde7f79fe7d3cd47  dwarfs-0.15.1-Linux-i386.tar.xz
eaead2ac3c61c6765b80fe57e6c870c7e9f14b83c4947a533aeda0d3720aac7a  dwarfs-0.15.1-Linux-loongarch64.tar.xz
c70c58a47a81b51bb61fff6f878d9b9e984ac944b704c713b126aaabae9429f6  dwarfs-0.15.1-Linux-ppc64le.tar.xz
0a11aff785ae7ebed0aa2f9a0f12e85cb14336901937387e45c4ecc94399b7b6  dwarfs-0.15.1-Linux-ppc64.tar.xz
7d7af8685ac2527760fa4ca848ded3780a6fc10760242ac789094cd9ee612153  dwarfs-0.15.1-Linux-riscv64.tar.xz
f5b9c87e4471fe658690951861949b8fe9bf8ddae0fde36a04dc672cc4926568  dwarfs-0.15.1-Linux-s390x.tar.xz
24453ca3f18e08cde0e323fae4447d2e9f47c65508d8da223d22553e73cafc36  dwarfs-0.15.1-Linux-x86_64.tar.xz
a180086f9a898b4b52a5217e336c0134d63a10b395b493f2e19f231d575a87ec  dwarfs-0.15.1.tar.xz
b3d3ff5608766f05a37b9a9d1f3cc3cac7ca01959ac388d43140136c858d54b7  dwarfs-0.15.1-Windows-AMD64.7z
1258d788d9950de4db55e22b5ec510665eb3049f8c199386fce16a132ab6f846  dwarfs-fuse-extract-0.15.1-Linux-aarch64
8f8f958217180f1d49fd775a84fb3f2df4a6c7d42b9537f7258c155c805f5ece  dwarfs-fuse-extract-0.15.1-Linux-aarch64.upx
814204960bdd739da73c965288f4d86dab91620241a05cc99ec84be8c0377ffd  dwarfs-fuse-extract-0.15.1-Linux-arm
5ea3eb07bc38d5a9cad9a296b83a6049df80c9508888341fb74b9cbb1695a750  dwarfs-fuse-extract-0.15.1-Linux-arm.upx
b7974ad380b1dac65516cd06fa1f0918708e212cb89fb6e79a79ca133e227097  dwarfs-fuse-extract-0.15.1-Linux-i386
60e448f27084f30727ae64346d5ae2c2e13cff28445e0c5b5f85a7666cfcc325  dwarfs-fuse-extract-0.15.1-Linux-i386.upx
cfd53bd6d61474cf4adf32cfbdf937717c9c329f5fd833ee997a8cd50afff74d  dwarfs-fuse-extract-0.15.1-Linux-loongarch64
925433bac0999babff000767e1a3bc5f33cfc93bc4d4f5344e9c160e0fe04e0a  dwarfs-fuse-extract-0.15.1-Linux-ppc64
7574dd2f519cc76c39a67d7a62fdb700a264be56f379df893872f7c864c7b5da  dwarfs-fuse-extract-0.15.1-Linux-ppc64le
ccec93d4277e82ac44b78116e27c3df68b376e1ed3588c60d57a94ba5234d8ff  dwarfs-fuse-extract-0.15.1-Linux-riscv64
7cae9c8f7b0ea283f7ac1f70fa8fdcd6fd50f34597536211ca854c78c6088d5a  dwarfs-fuse-extract-0.15.1-Linux-riscv64.upx
afe1cfcc82314af7593054b81cee5191aa35a40a0b06bf6aab036e9d97e3da79  dwarfs-fuse-extract-0.15.1-Linux-s390x
0d0a771c7849f6f6e56cfba879ad03cf4194b57cb4fa94b4d34b797ebdd34a99  dwarfs-fuse-extract-0.15.1-Linux-x86_64
dce7fb26462f7ef0ba4c5aef5563e5e74ba23b880195ce37b4a0bd7ca46ff431  dwarfs-fuse-extract-0.15.1-Linux-x86_64.upx
36fd1012426446d85ab37503b1f8780b9c2e94e6368dac5cd3de1fd58c224cca  dwarfs-universal-0.15.1-Linux-aarch64
7927c4ba101efe64fc9015de0f4b6b0c4898b3e854e037670b9998556dc6406f  dwarfs-universal-0.15.1-Linux-aarch64.upx
29fcfc2bd09abe023ae8f17072b1a5967fc4536867b19fcbe79818f9f3dd9920  dwarfs-universal-0.15.1-Linux-arm
00f84a9a3e93102a880e641fcf0bda77216c8ef65b84e6239bbcdc4aa70271fa  dwarfs-universal-0.15.1-Linux-arm.upx
f59359c67e5e43e67a086ad020c66f08ceb87ded0ddd641e21d35fa3ffb323b5  dwarfs-universal-0.15.1-Linux-i386
dd8116704ea22628699a0500c1d4b1fb7ff81acd3f4fe2c836604c6f654ff1ef  dwarfs-universal-0.15.1-Linux-i386.upx
48624d95428286e8cac2801f23b62c000fcabedbc01bc91db734341fa621facb  dwarfs-universal-0.15.1-Linux-loongarch64
428565e0033be5fe47b97c4cb1ec60da4550ec76e388326250148ecefb04ae21  dwarfs-universal-0.15.1-Linux-ppc64
a9671a135f7362d42dd8479e02593a63ac7e34430c94b9257e1fb6f28bf506a2  dwarfs-universal-0.15.1-Linux-ppc64le
c8c2ef012b80e554b0052bf85fd2271b374abd4a82337777d75200ebf40ffc0d  dwarfs-universal-0.15.1-Linux-riscv64
459aa0cac669f9794f114dc868a628f8bebb8ca538d4213b7d25448e424db3ae  dwarfs-universal-0.15.1-Linux-riscv64.upx
ff7b663e44f1f6d5e5b1b022fce85a6c993991f9c99e3d0e1d89c45a42d25284  dwarfs-universal-0.15.1-Linux-s390x
69750c543ea2272d96397a14627f5bb98a42385026df118cb3a4363312892fa4  dwarfs-universal-0.15.1-Linux-x86_64
83dd7dad048a86347a53cfa62d1b2dc3765a0803681a3c701a32029b464f4873  dwarfs-universal-0.15.1-Linux-x86_64.upx
5a4da367840829874f89a80112e1f49b25393969121130dc489ed6b6e8e6b782  dwarfs-universal-0.15.1-Windows-AMD64.exe
fb9baa894d32d7182d07ce54c4623d27b2b0ce6a0c6b4f1f0101462877d4ab0e  dwarfs-universal-small-0.15.1-Linux-aarch64
accef8fb32026084953e9e2e0174950f9fa789e9d1ece3263bccc1db58dde6a6  dwarfs-universal-small-0.15.1-Linux-aarch64.upx
0499ffc9b7093bee499dd56845659590fdcbd32aaf3e0a5f4bb9a5db92dac60c  dwarfs-universal-small-0.15.1-Linux-arm
fcd1422cd2315a6dbab0526ebe0275db9713623bd64f8f5b7bf11437f2e74f19  dwarfs-universal-small-0.15.1-Linux-arm.upx
1f3faf769fefe6517e0086288e17a0e60780cfab63b527c8e9c0cc6f695ea1df  dwarfs-universal-small-0.15.1-Linux-i386
a0bad3f10125a08ac51b765371f1d82c14ff0738ddda76aa40597e64e8faf9c9  dwarfs-universal-small-0.15.1-Linux-i386.upx
eac51d831d84a89df62718ba7c3c55f4994d0c5d9387199e060d9c2242572eed  dwarfs-universal-small-0.15.1-Linux-loongarch64
588de2941e5353daa164afbb55f5b4245ef27d7b0a19a2302a76093ba0c68005  dwarfs-universal-small-0.15.1-Linux-ppc64
4db6b7fec01bbaef8d51c8850eaccccbb72723dea02d5966fc8cd5198cc9f316  dwarfs-universal-small-0.15.1-Linux-ppc64le
58cda2c96c82bae3fed2d461be37786093e8f68bba7573c57ac7a1fa45cedd33  dwarfs-universal-small-0.15.1-Linux-riscv64
18b2d1c203b07a5bee39453f200d79d2069ca607e8913e060befdd5456b9aa7a  dwarfs-universal-small-0.15.1-Linux-riscv64.upx
391e3e75743899a23fefe3b816cd2dda86f2048deedbc1490870a55e49cb6309  dwarfs-universal-small-0.15.1-Linux-s390x
0d84291b3e7e26f3791d50169097a7b0901542440ce934fb1ec263ae0c4a256e  dwarfs-universal-small-0.15.1-Linux-x86_64
fc607ba7af3485feb8b90fd4f4678bd586c587ce8dd982e57f4fd9afb1faf391  dwarfs-universal-small-0.15.1-Linux-x86_64.upx
```
-
🔗 r/LocalLLaMA Moonshot says Cursor Composer was authorized rss
Sounds like Fireworks had a partnership with Moonshot, and Cursor went through them. Kinda makes sense that Moonshot wouldn’t be aware of it if they are working with Fireworks as a “reseller” of sorts. And the custom license they have with Fireworks may mean the non-disclosure of base model wasn’t against license. Or it could be a good story told after the fact. Impossible to know without knowing the private details of the contract. I guess either way, they worked it out.
submitted by /u/davernow
[link] [comments] -
🔗 r/Yorkshire Chip shop sausages rss
Very random, but does anyone know where I can buy the same jumbo sausages that the chip shops use? I’m located in Birstall so have easy access to Leeds/Bradford!!
submitted by /u/Top-Welcome5620
[link] [comments] -
🔗 3Blue1Brown (YouTube) Bacteria Grid Puzzle Solution rss
Part of a monthly series of puzzlers, in collaboration with MoMath and Peter Winkler
-
🔗 r/york [Participants Required - York St John University] Gay male couples in the UK – Views on parenthood (21+, in a relationship of 12+ months) rss
Hi! 👋
My name is Ryan and I am a doctoral researcher in counselling psychology at York St John University. I am conducting a study exploring how gay men make sense of the psychological and emotional experience of deciding whether or not to become a parent, with a focus on those currently in relationships.

I’m particularly interested in understanding the different factors that make thinking about parenthood feel easier or more challenging, how these conversations happen within relationships, and what kinds of support or information might be helpful, whether you want children, don’t want children, or are unsure.
I’m looking for gay men (21+) currently in a relationship (12+ months), who are not parents, to take part in a one-to-one interview.
What’s involved:
- A 60–90 minute interview
- Conducted online via MS Teams or in person at York St John University, UK
- Scheduled at a time that suits you
- Participants will be recruited in couples, but interviews will be conducted separately to ensure individual perspectives
Eligibility:
- Identify as a gay man
- Aged 21+
- In a relationship of 12+ months
- Not currently a parent
- UK-based and fluent in English
- Open to discussing views on parenthood (whether you want children or not)
This is an under-researched area, and your contribution could help inform future counselling practice and community support for gay men and couples.
If you’re interested or would like more information, feel free to send me a DM or comment below 😊
The study has been approved by the York St John University Research Ethics Committee (Ref: ETH2526-0084).
submitted by /u/DCounsPsych_Research
[link] [comments] -
🔗 r/Harrogate Looking for dog walks rss
Any recommendations for dog walks in Harrogate or surrounding areas? We are happy to drive out a bit if anyone has any nice dales walks too.
We have done Fewston, Birk Crag and Pinewoods but looking to change it up a bit. Bonus points if there’s somewhere we can grab a coffee with the dogs afterward.
submitted by /u/emsversion12222
[link] [comments] -
🔗 r/Yorkshire Reflecting… rss
submitted by /u/aspiranthighlander
[link] [comments] -
🔗 r/Yorkshire Harrogate faces Scarborough and Barnsley in race to become UK's first-ever Town of Culture rss
submitted by /u/willfiresoon
[link] [comments] -
🔗 r/Yorkshire People in North Yorkshire town found to have ‘alarming’ levels of toxic Pfas chemicals in blood rss
submitted by /u/willfiresoon
[link] [comments] -
🔗 r/Leeds Leeds Armouries: old scary display? rss
I remember going to Leeds Armouries as a kid (15-20y ago probably) and being traumatised by a specific display case they had. It was a glass box with full size people in (they still have several of these) but this one in particular seems to have been removed at some point in the last 20 years, maybe because it was too scary!
It consisted of a soldier/terrorist bursting into a kid’s bedroom, fully armed, with the kid cowering in the corner. Does anyone remember this? My context and memory may be fuzzy as I was so young.
I’d be interested if anyone else has similar memories of this, or has any info about it or why it was removed. There doesn’t seem to be any photo of it online.
submitted by /u/mailywhale
[link] [comments] -
🔗 r/LocalLLaMA This is incredibly tempting rss
Has anyone bought one of these recently that can give me some direction on how usable it is? What kind of speeds are you getting trying to load one large model vs using multiple smaller models?
submitted by /u/No_Mango7658
[link] [comments] -
🔗 r/LocalLLaMA Feedback on my 256gb VRAM local setup and cluster plans. Lawyer keeping it local. rss
I’m a lawyer who got Claude Code pilled about 90 days ago, then thought about what I wanted to do with AI tools, and concluded that the totally safest way for me to experiment was to build my own local cluster. I did an earlier post about what I was working on, and the feedback was helpful. Wondering if anyone has feedback or suggestions for me in terms of what I should do next.

Anyway, node 1 is basically done at this point. Gigabyte Threadripper board, 256gb of DDR4, and 8 32gb Nvidia V100s. I have two PSUs on two different regular circuits in my office, 2800 watts total (haven’t asked the landlord for permission to install a 240 volt yet). I am running … Windows … because I still use the computer for my regular old office work. But I guess my next steps for just this node are probably to get a 240 plug installed, and maybe add 2 or 4 more V100s, and then call it a day for node 1.

Took one photo of one of the 4-card pass-through boards. Each of these NVLinks 128gb of SXM V100s, and they get fed back into the board at x16 using two PEX switches and 4 SlimSAS cables. The only part that’s remotely presentable is the 4-card board I have finished. There’s a 2-card board on footers and 2 PCIe V100s. I have 2 more 2-card SXM boards and a 4-card SXM board in waiting. And 3 SXM V100s and heatsinks (slowly buying more).

Goal is to do local RAG databases on the last 10 years of my saved work, to automate everything I can so that all the routine stuff is automatic and the semi-routine stuff is 85% there. Trying to get the best biggest reasoning models to run, then to test them with RAG, then to QLoRA train.

Wondering if anyone has suggestions on how to manage all the insane power cables this requires. I put this 4-card board in an ATX tower case, and have one more for the second board, but I have the rest of the stuff (motherboard, 2 PCIe cards, 2-card SXM board) open bench/open air like a mining rig.

Would love some kind of good-looking glass and metal 3-level airflow box or something. Also wondering if anyone has really used big models like GLM or full DeepSeek or MiniMax 2.5 locally for anything like this. And if anyone has done QLoRA training for legal stuff.

In terms of what’s next, I will start on node 2 after I get some of the stray heatsinks and riser cables out of my office and thermal paste off of my suit. I have a ROMED2 board and processor, and a variety of loose sticks of DDR4 server RAM that will probably only add up to like 192gb. I have 3 RTX 3090s. Plan is I guess to add a fourth and NVLink them. My remaining inventory is a Supermicro X10DRG board and processor, 6 P40s, 6 P100s, 4 16gb V100 SXMs, another even older X10 board and processor, more loose sticks of server RAM, and then a couple more board and processor combos (X299A 64gb DDR4, and my 2019 gaming PC).

Original plan (and maybe still plan) was to just have so much VRAM I could slowly run the biggest model ever over a distributed cluster, and use that to tell me the secret motives and strategy of parties on the other side of cases. And then maybe use it to tell me why I can never be satisfied and always want more. Worried Opus 4.6 will be better at all that.

I wrote this actual post without any AI help, because I still have soul inside. Will repost it in a week with Claude rewriting it to see how brainwashed you all are. Anyway, ask me questions, give me advice, explain to me in detail why I’m stupid. But be real about it, you anime freaks.

submitted by /u/TumbleweedNew6515
[link] [comments] -
🔗 r/LocalLLaMA Qwen wants you to know… rss
Seen while walking through Singapore’s Changi airport earlier this week. Alibaba Cloud spending up big on advertising.
submitted by /u/m-gethen
[link] [comments] -
🔗 r/Harrogate Fun date ideas Harrogate rss
Hi I’m in my mid 20s. Recently took a girl in her early 20s on a date to the valley gardens park and it was just so relaxed and more 30s/40s. Didn’t see anyone our age. Everyone there with kids or in their 60s+ and the vibe was so so off. We went to a cocktail bar, was completely empty and they were playing music from 1950, just incredibly uncool and a bit cringe, it unfolded like I planned a date for her mother. Is there anything more fun to do in Harrogate?
submitted by /u/Apprehensive_Ring666
[link] [comments] -
🔗 r/reverseengineering Black Rock Shooter: the Game was Made by Madmen. I’ve Been Solo Reverse Engineering it for Two Years as My First Big Project and Am Finally Ripping Its Engine Wide Open. rss
submitted by /u/brs-game-researcher
[link] [comments] -
🔗 backnotprop/plannotator v0.14.4 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.14.3 | PR context panel, diff search in code review, OpenCode permission normalization, landing page redesign
v0.14.2 | OpenCode plan mode prompt replacement, Windows non-ASCII path fix, Pi link fix
v0.14.1 | Single submit_plan with auto-detect, viewed-file draft persistence, Bear nested tag fix
v0.14.0 | PR review via GitHub URL, /plannotator-last for annotating agent messages, OpenCode plan mode permissions fix, VS Code SSH proxy fix
v0.13.1 | OpenCode plan mode rewrite, Obsidian save fix
v0.13.0 | Built-in themes, annotatable plan diffs, file-scoped code review comments, Octarine integration, unified review core, Pi remote sessions
v0.12.0 | Quick annotation labels, mobile compatibility, Graphviz rendering, markdown images with lightbox, linked doc navigation in annotate mode
v0.11.4 | Git add from code review, bidirectional scroll navigation, clipboard paste for annotation images, VS Code IPC port stability
v0.11.3 | Expandable diff context, hierarchical folder tree, redesigned worktree controls, supply chain hardening
v0.11.2 | Git worktree support in code review, VS Code editor annotations in review, Obsidian auto-save & separator settings, session discovery, smart file resolution
v0.11.1 | VS Code extension for in-editor plan review, Pinpoint mode for point-and-click annotations, untracked files in code review
What's New in v0.14.4
v0.14.4 adds the ability to post reviews (comments, approval) directly to GitHub from the code review UI, along with repo-aware tab titles, a parser fix for nested code fences, scroll-accessibility UI fixes, and two environment wiring fixes. 5 PRs, 3 from first-time contributors.
Post Reviews Directly to GitHub
The code review UI can now submit reviews to GitHub instead of (or in addition to) sending feedback to the local agent. When reviewing a PR, a toggle in the review panel switches between "Agent" and "GitHub" targets. In GitHub mode, clicking "Approve" or "Send Feedback" opens a popup for a review body comment, then posts the review via
gh api with per-file, per-line comments mapped from your annotations.

This uses the GitHub pull request review API rather than
gh pr review, because the CLI only supports general comments. The API call lets Plannotator attach each annotation to the exact file and line it references. After submitting, the PR opens in your browser and the agent receives a notification that the review was posted.

The squash-and-merge action from #348 is intentionally left for a follow-up, since it requires discussion around when automatic merging is appropriate.
- #352 by @rockneurotiko, partial progress on #348
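For context, the endpoint in question is `POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews`, whose `comments` array carries per-file, per-line anchors. A hedged sketch of what such a request body can look like — the paths, line numbers, and text here are illustrative, not taken from Plannotator:

```json
{
  "body": "Looks good overall, two inline notes.",
  "event": "APPROVE",
  "comments": [
    {
      "path": "src/review/panel.ts",
      "line": 42,
      "side": "RIGHT",
      "body": "Annotation text mapped to this exact line."
    }
  ]
}
```

A body like this can be posted with something along the lines of `gh api repos/{owner}/{repo}/pulls/{number}/reviews --method POST --input review.json`; `gh pr review` has no way to express the `comments` array, which is why the raw API route is used.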
Repo Identifier in Tab Title
Browser tabs now show the repository or directory name:
myproject · Plannotator for plan review and myproject · Code Review for diffs. Previously every tab just said "Plannotator," which made it hard to tell them apart when working across multiple repos.
- #353 by @VincentHardouin, closing #342 filed by @ovitrif
Nested Markdown Code Fences
Plans that contain markdown examples with triple-backtick code blocks inside a four-backtick fence would break the parser. The outer block closed at the first triple-backtick line, producing two broken blocks instead of one. The parser now counts backticks in the opening fence and requires the same count (or more) to close it. Language tag extraction was also fixed to slice by the actual fence length rather than a hardcoded 3.
This PR also adds the first test file for the parser (8 tests).
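The described fix is easy to picture: record the opening fence's backtick count and only close on a line with at least that many backticks. A small sketch of the idea in Rust (the actual plugin code may differ):

```rust
// Count the leading backticks of a fence line.
fn fence_len(line: &str) -> usize {
    line.chars().take_while(|&c| c == '`').count()
}

// A line closes an open fence only if it has at least as many
// backticks as the opening fence and nothing else after them.
fn closes(opening: &str, line: &str) -> bool {
    let open = fence_len(opening);
    let close = fence_len(line);
    close >= open && line[close..].trim().is_empty()
}

// Slice the info string by the actual fence length, not a hardcoded 3.
fn language_tag(opening: &str) -> &str {
    opening[fence_len(opening)..].trim()
}

fn main() {
    // A triple-backtick line nested inside a four-backtick block
    // must NOT terminate the outer block.
    assert!(!closes("````markdown", "```"));
    assert!(closes("````markdown", "````"));
    assert_eq!(language_tag("````markdown"), "markdown");
}
```

This mirrors the CommonMark rule that a closing fence must be at least as long as the opening fence, which is what makes nesting three-backtick examples inside a four-backtick fence legal in the first place.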
Additional Changes
- Wire
PLANNOTATOR_PASTE_URL through Pi extension — the Pi extension read PLANNOTATOR_SHARE_URL but never passed PLANNOTATOR_PASTE_URL to the browser UI, so self-hosted paste service configurations were silently ignored. Now wired through all three server types (plan, review, annotate). By @dmmulroy in #356.
- Fix resize handle covering scrollbar — the resize handle's touch target extended 8px in both directions, overlapping the 6px scrollbar on the adjacent panel. A global CSS transition on * also caused choppy scrolling. The handle now extends only outward from the panel edge, and transitions are narrowed to color properties. Closes #354, reported by @dillonoconnor. In #359.
Install / Update
macOS / Linux:
curl -fsSL https://plannotator.ai/install.sh | bash

Windows:

irm https://plannotator.ai/install.ps1 | iex

Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart:

rm -rf ~/.bun/install/cache/@plannotator

Then in opencode.json:

{ "plugin": ["@plannotator/opencode@latest"] }

Pi: Install or update the extension:
pi install npm:@plannotator/pi-extension
What's Changed
- feat: show repo identifier in tab title by @VincentHardouin in #353
- feat: approve and review PR to GitHub by @rockneurotiko in #352
- fix(parser): support nested markdown code fences by @brian-malinconico-axomic in #355
- fix(pi-extension): wire PLANNOTATOR_PASTE_URL through all servers by @dmmulroy in #356
- fix: prevent resize handle from covering scrollbar by @backnotprop in #359
New Contributors
- @VincentHardouin made their first contribution in #353
- @brian-malinconico-axomic made their first contribution in #355
- @dmmulroy made their first contribution in #356
Contributors
@rockneurotiko built the GitHub review submission feature in #352, adding the agent/GitHub toggle, per-line comment mapping, and the review body popup. This is his second PR to the project after the OpenCode permission fix in v0.14.3.
@VincentHardouin implemented repo-aware tab titles in #353, picking up the feature request filed by @ovitrif in #342.
@brian-malinconico-axomic fixed the nested code fence parsing in #355 and added the project's first parser test suite.
@dmmulroy wired the missing paste URL environment variable through the Pi extension in #356, fixing self-hosted paste service configurations.
@dillonoconnor reported the scrollbar interaction bug in #354.
Full Changelog :
v0.14.3...v0.14.4 -
🔗 Rust Blog Security advisory for Cargo rss
The Rust Security Response Team was notified of a vulnerability in the third-party crate
tar, used by Cargo to extract packages during a build. The vulnerability, tracked as CVE-2026-33056, allows a malicious crate to change the permissions on arbitrary directories on the filesystem when Cargo extracts it during a build.For users of the public crates.io registry, we deployed a change on March 13th to prevent uploading crates exploiting this vulnerability, and we audited all crates ever published. We can confirm that no crates on crates.io are exploiting this.
For users of alternate registries, please contact the vendor of your registry to verify whether you are affected by this. The Rust team will release Rust 1.94.1 on March 26th, 2026, updating to a patched version of the
tar crate (along with other non-security fixes for the Rust toolchain), but that won't protect users of older versions of Cargo using alternate registries.
tar crate vulnerability and notifying the Rust project ahead of time, and William Woodruff for directly assisting the crates.io team with the mitigations. We'd also like to thank the Rust project members involved in this advisory: Eric Huss for patching Cargo; Tobias Bieniek, Adam Harvey and Walter Pearce for patching crates.io and analyzing existing crates; Emily Albini and Josh Stone for coordinating the response; and Emily Albini for writing this advisory. -
🔗 Baby Steps Maximally minimal view types rss
This blog post describes a maximally minimal proposal for view types. It comes out of a conversation at RustNation I had with lcnr and Jack Huey, where we were talking about various improvements to the language that are "in the ether", that basically everybody wants to do, and what it would take to get them over the line.
Example: MessageProcessor
Let's start with a simple example. Suppose we have a struct
MessageProcessor which gets created with a set of messages. It will process them and, along the way, gather up some simple statistics:

```rust
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive] // Not relevant to the example, just good practice!
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}
```

The basic workflow for a message processor is that you
- accumulate messages by
pushing them into the self.messages vector
- drain the accumulated messages and process them
- reuse the backing buffer to push future messages
Accumulating messages
Accumulating messages is easy:
```rust
impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }
}
```

Processing a single message
The function to process a single message takes ownership of the message string because it will send it to another thread. Before doing so, it updates the statistics:
```rust
impl MessageProcessor {
    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

Draining the accumulated messages
The final function you need is one that will drain the accumulated messages and process them. Writing this ought to be straightforward, but it isn't:
```rust
impl MessageProcessor {
    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }
}
```

The problem is that
self.messages.drain(..) takes a mutable borrow on self.messages. When you call self.process_message, the compiler assumes you might modify any field, including self.messages. It therefore reports an error. This is logical, but frustrating.
messagesfield for an empty vector. Or you could invokeself.messages.pop(). Or you could rewriteprocess_messageto be a method on theStatisticstype. But all of them are, let's be honest, suboptimal. The code above is really quite reasonable, it would be nice if you could make it work in a straightforward way, without needing to restructure it.What's needed: a way for the borrow checker to know what fields a method
may access
The core problem is that the borrow checker does not know that
process_messagewill only access thestatisticsfield. In this post, I'm going to focus on an explicit, and rather limited, notation, but I'll also talk about how we might extend it in the future.View types extend struct types with a list of fields
The basic idea of a view type is to extend the grammar of a struct type to optionally include a list of accessible fields:
RustType := StructName<...> | StructName<...> { .. } // <-- what we are adding | StructName<...> { (fields),* } // <-- what we are addingA type like
MessageProcessor { statistics }would mean "aMessageProcessorstruct where only thestatisticsfield can be accessed". You could also include a.., likeMessageProcessor { .. }, which would mean that all fields can be accessed, which is equivalent to today's struct typeMessageProcessor.View types respect privacy
View types would respect privacy, which means you could only write
MessageProcessor { messages }in a context where you can name the fieldmessagesin the first place.View types can be named on
selfarguments and elsewhereYou could use this to define that
process_messageonly needs to access the fieldstatistics:impl MessageProcessor { fn process_message(&mut self {statistics}, message: String) { // ---------------------- // Shorthand for: `self: &mut MessageProcessor {statistics}` // ... as before ... } }Of course you could use this notation in other arguments as well:
fn silly_example(.., mp: &mut MessageProcessor {statistics}, ..) { }Explicit view-limited borrows
We would also extend borrow expressions so that it is possible to specify precisely which fields will be accessible from the borrow:
let messages = &mut some_variable {messages}; // Ambiguous grammar? See below.When you do this, the borrow checker produces a value of type
&mut MessageProcessor {messages}.Sharp-eyed readers will note that this is ambiguous. The above could be parsed today as a borrow of a struct expression like
some_variable { messages }or, more verbosely,some_variable { messages: messages }. I'm not sure what to do about that. I'll note some alternative syntaxes below, but I'll also note that it would be possible for the compiler to parse the AST in an ambiguous fashion and disambiguate later on once name resolution results are known.We automatically introduce view borrows in an auto-ref
In our example, though, the user never writes the
&mut borrow explicitly. It results from the auto-ref added by the compiler as part of the method call:

```rust
pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        self.process_message(message); // <-- auto-ref occurs here
    }
}
```

The compiler internally rewrites method calls like
self.process_message(message)to fully qualified form based on the signature declared inprocess_message. Today that results in code like this:MessageProcessor::process_message(&mut *self, message)But because
process_messagewould now declare&mut self { statistics }, we can instead desugar to a borrow that specifies a field set:MessageProcessor::process_message(&mut *self { statistics }, message)The borrow checker would respect views
Integrating views into the borrow checker is fairly trivial. The way the borrow checker works is that, when it sees a borrow expression, it records a "loan" internally that tracks the place that was borrowed, the way it was borrowed (mut, shared), and the lifetime for which it was borrowed. All we have to do is to record, for each borrow using a view, multiple loans instead of a single loan.
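Today's borrow checker already records field-precise loans for direct field borrows; views would extend the same machinery to borrows of the struct as a whole. A minimal compilable sketch in current Rust (the struct `S` is hypothetical, used only for illustration):

```rust
struct S {
    a: u32,
    b: u32,
}

// Two simultaneous mutable borrows of *different* fields are accepted,
// because each borrow is recorded as its own field-precise mut-loan.
fn bump_both(s: &mut S) -> (u32, u32) {
    let ra = &mut s.a; // mut-loan of `s.a`
    let rb = &mut s.b; // mut-loan of `s.b`: disjoint, so both may be live
    *ra += 10;
    *rb += 20;
    (s.a, s.b)
}

fn main() {
    let result = bump_both(&mut S { a: 1, b: 2 });
    println!("{:?}", result); // (11, 22)
}
```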
For example, if we have `&mut self`, we would record one mut-loan of `self`. But if we have `&mut self {field1, field2}`, we would record two mut-loans, one of `self.field1` and one of `self.field2`.

### Example: putting it all together
OK, let's put it all together. This was our original example, collected:

```rust
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }

    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }

    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

Today, `process_pushed_messages` results in an error:

```rust
pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        //         ------------- borrows `self.messages`
        self.process_message(message); // <-- ERROR!
        //   --------------- borrows `self`
    }
}
```
self.messages.drain(..)desugars toIterator::drain(&mut self.messages, ..)which, as you can see,mut-borrowsself.messages;- then
self.process_message(..)desugars toMessageProcessor::process_message(&mut self, ..)which, as you can see,mut-borrows all ofself, which overlapsself.messages.
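You can confirm this reasoning in today's Rust by inlining the helper's body: once the statements are written out directly, the loans become field-precise and the program compiles. A sketch using the post's types (the `main` driver is added here for illustration):

```rust
struct Statistics {
    message_count: usize,
    total_bytes: usize,
}

struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

impl MessageProcessor {
    fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            // Inlined body of `process_message`: the loop still holds a
            // mut-loan of `self.messages` (via `drain`), but these
            // statements only touch `self.statistics`, so today's borrow
            // checker accepts them. View types would let the method call
            // itself declare this same field-level precision.
            self.statistics.message_count += 1;
            self.statistics.total_bytes += message.len();
        }
    }
}

fn main() {
    let mut mp = MessageProcessor {
        messages: vec!["hi".to_string(), "world".to_string()],
        statistics: Statistics { message_count: 0, total_bytes: 0 },
    };
    mp.process_pushed_messages();
    assert_eq!(mp.statistics.message_count, 2);
    assert_eq!(mp.statistics.total_bytes, 7); // "hi" + "world" = 2 + 5 bytes
}
```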
But in the "brave new world", we'll modify the program in one place:

```diff
- fn process_message(&mut self, message: String) {
+ fn process_message(&mut self {statistics}, message: String) {
```

and as a result, the `process_pushed_messages` function will now borrow check successfully. This is because the two loans are now issued for different places:

- as before, `self.messages.drain(..)` desugars to `Vec::drain(&mut self.messages, ..)`, which mut-borrows `self.messages`;
- but now, `self.process_message(..)` desugars to `MessageProcessor::process_message(&mut self {statistics}, ..)`, which mut-borrows `self.statistics`, and that doesn't overlap `self.messages`.
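For comparison, current Rust can already produce per-field loans from a `&mut self`, but only by destructuring inside the function body rather than declaring the split in the signature. A compilable sketch of that workaround, reusing the post's types:

```rust
struct Statistics {
    message_count: usize,
    total_bytes: usize,
}

struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

impl MessageProcessor {
    fn process_pushed_messages(&mut self) {
        // Destructuring a `&mut self` splits it into one disjoint
        // mut-loan per field, much as the view-type desugaring would,
        // but the split must be done by hand at each use site.
        let MessageProcessor { messages, statistics } = self;
        for message in messages.drain(..) {
            statistics.message_count += 1;
            statistics.total_bytes += message.len();
        }
    }
}

fn main() {
    let mut mp = MessageProcessor {
        messages: vec!["abc".to_string()],
        statistics: Statistics { message_count: 0, total_bytes: 0 },
    };
    mp.process_pushed_messages();
    assert_eq!(mp.statistics.message_count, 1);
    assert_eq!(mp.statistics.total_bytes, 3);
}
```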
### At runtime, this is still just a pointer
One thing I want to emphasize is that "view types" are a purely static construct and do not change how things are compiled. They simply give the borrow checker more information about what data will be accessed through which references. The `process_message` method, for example, still takes a single pointer to `self`.

This is in contrast with the workarounds that exist today. For example, if I were writing the above code, I might well rewrite `process_message` into an associated fn that takes a `&mut Statistics`:

```rust
impl MessageProcessor {
    fn process_message(statistics: &mut Statistics, message: String) {
        statistics.message_count += 1;
        statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

This would be annoying, of course, since I'd have to write `Self::process_message(&mut self.statistics, ..)` instead of `self.process_message()`, but it would avoid the borrow check error.

Beyond being annoying, it would change the way the code is compiled. Instead of taking a reference to the `MessageProcessor`, it now takes a reference to the `Statistics`. In this example, the change from one type to another is harmless, but there are other examples where you need access to multiple fields, in which case it is less efficient to pass them individually.
## Frequently asked questions
### How hard would this be to implement?
Honestly, not very hard. I think we could ship it this year if we found a good contributor who wanted to take it on.
### What about privacy?
I would require that the fields that appear in view types be "visible" to the code that is naming them (this includes view types that are inserted via auto-ref). So the following would be an error:

```rust
mod m {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        ...
    }

    impl MessageProcessor {
        pub fn process_message(&mut self {messages}, message: String) {
            //                           ----------
            // It's *legal* to reference a private field here, but it
            // results in a lint, just as it is currently *legal*
            // (but linted) for a public method to take an argument of
            // private type. The lint exists because doing this
            // effectively makes the method uncallable from outside
            // this module.
            self.messages.push(message);
        }
    }
}

fn main() {
    let mut mp = m::MessageProcessor::default();
    mp.process_message(format!("Hello, world!"));
    // --------------- ERROR: field `messages` is not accessible here
    //
    // This desugars to:
    //
    // ```
    // MessageProcessor::process_message(
    //     &mut mp {messages}, // <-- names a private field!
    //     format!("Hello, world!"),
    // )
    // ```
    //
    // which names the private field `messages`. That is an error.
}
```

### Does this mean that view types can't be used in public methods?
More-or-less. You can use them if the view types reference public fields:

```rust
#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub average_bytes: usize,
    // ... maybe more fields will be added later ...
}

impl Statistics {
    pub fn total_bytes(&self {message_count, average_bytes}) -> usize {
        //             ----------------------------------
        // Declare that we only read these two fields.
        self.message_count * self.average_bytes
    }
}
```

### Won't it be limiting that view types more-or-less only work for private methods?

Yes! But it's a good starting point. My experience is that this problem occurs most often with private helper methods like the one I showed here. It can occur in public contexts, but much more rarely, and in those circumstances it's often more acceptable to refactor the types to better expose the groupings to the user. This doesn't mean I don't want to fix the public case too; it just means it's a good use case to cut from the MVP. In the future I would address public fields via abstract fields, as I described in the past.
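The "refactor the types to better expose the groupings" option can be sketched in today's Rust: nesting the related fields in their own struct lets callers borrow just that group (field names reused from the running example):

```rust
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

// Grouping the statistics fields into a nested struct exposes the
// split publicly: callers can borrow `statistics` without touching
// `messages`.
pub struct MessageProcessor {
    pub messages: Vec<String>,
    pub statistics: Statistics,
}

fn main() {
    let mut mp = MessageProcessor {
        messages: vec!["a".to_string()],
        statistics: Statistics { message_count: 0, total_bytes: 0 },
    };
    let stats = &mut mp.statistics; // borrows only this group of fields
    let msgs = &mut mp.messages;    // disjoint, so still allowed
    stats.message_count += msgs.len();
    assert_eq!(mp.statistics.message_count, 1);
}
```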
### What if I am borrowing the same sets of fields over and over? That sounds repetitive!
That's true! It will be! I think in the future I'd like to see some kind of 'ghost' or 'abstract' fields, like I described in my abstract fields blog post. But again, that seems like a "post-MVP" sort of problem to me.
### Must we specify the field sets being borrowed explicitly? Can't they be inferred?
In the syntax I described, you have to write `&mut place {field1, field2}` explicitly. But there are many approaches in the literature to inferring this sort of thing, with row polymorphism perhaps being the most directly applicable. I think we could absolutely introduce this sort of inference, and in fact I'd probably make it the default, so that `&mut place` always introduces a view type, but it is typically inferred to "all fields" in practice. But that is a non-trivial extension to Rust's inference system, introducing a new kind of inference we don't do today. For the MVP, I think I would just lean on auto-ref covering by far the most common case, and have explicit syntax for the rest.

### Man, I have to write the fields that my method uses in the signature? That sucks! It should be automatic!
I get that for many applications, particularly with private methods, writing out the list of fields that will be accessed seems a bit silly: the compiler ought to be able to figure it out.
On the flip side, this is the kind of inter-procedural inference we try to avoid in Rust, for a number of reasons:
- it introduces dependencies between methods, which makes inference more difficult (even undecidable, in extreme cases);
- it makes for "non-local errors" that can be really confusing as a user, where modifying the body of one method causes errors in another (think of the confusion we get around futures and `Send`, for example);
- it makes the compiler more complex, and we would not be able to parallelize it as easily (not that we parallelize today, but that work is underway!).
The bottom line for me is one of *staging*: whatever we do, I think we will want a way to be explicit about exactly what fields are being accessed and where. Therefore, we should add that first. We can add the inference later on.
### Why does this need to be added to the borrow checker? Why not desugar?
Another common alternative (and one I considered for a while…) is to add some kind of "desugaring" that passes references to fields instead of a single reference. I don't like this for two reasons. One, I think it's frankly more complex! This is a fairly straightforward change to the borrow checker, but that desugaring would leave code all over the compiler, and it would make diagnostics, etc., much more complex.
But second, it would require changes to what happens at runtime, and I don't see why that is needed in this example. Passing a single reference feels right to me.
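One way to make the runtime difference concrete: a view-typed reference would stay a single pointer, while the desugaring alternative would pass one pointer per viewed field. A sketch measuring the argument sizes (types reused from the running example):

```rust
use std::mem::size_of;

struct Statistics {
    message_count: usize,
    total_bytes: usize,
}

struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

fn main() {
    // With a view type, the callee still receives one pointer,
    // regardless of how many fields the view exposes.
    let single = size_of::<&mut MessageProcessor>();
    // Under the desugaring alternative, it would receive one pointer
    // per viewed field.
    let desugared = size_of::<(&mut Vec<String>, &mut Statistics)>();
    assert_eq!(desugared, 2 * single);
}
```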
### What about the ambiguous grammar? What other syntax options are there?
Oh, right, the ambiguous grammar. To be honest, I've not thought too deeply about the syntax. I was trying to have the type `Struct { field1, field2 }` reflect struct constructor syntax, since we generally try to make types reflect expressions, but of course that leads to the ambiguity in borrow expressions that causes the problem:

```rust
let foo = &mut some_variable { field1 };
//             ------------- is this a variable or a struct name?
```
- Make it work. It's not truly ambiguous, but it does require some semantic diambiguation, i.e., in at least some cases, we have to delay resolving this until name resolution can complete. That's unusual for Rust. We do it in some small areas, most notably around the interpretation of a pattern like
None(is it a binding to a variableNoneor an enum variant?). - New syntax for borrows only. We could keep the type syntax but make the borrow syntax different, maybe
&mut {field1} in some_variableor something. Given that you would rarely type the explicit borrow form, that seems good? - Some new syntax altogether. Perhaps we want to try something different, or introduce a keyword everywhere? I'd be curious to hear options there. The current one feels nice to me but it occupies a "crowded syntactic space", so I can see it being confusing to readers who won't be sure how to interpret it.
## Conclusion: this is a good MVP, let's ship it!
In short, I don't really see anything blocking us from moving forward here, at least with a lang experiment.