- ↔
- →
to read (pdf)
- Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
- Letting AI Actively Manage Its Own Context | 明天的乌云
- Garden Offices for Sale UK - Portable Space
- Cord: Coordinating Trees of AI Agents | June Kim
- Style tips for less experienced developers coding with AI · honnibal.dev
- March 15, 2026
-
🔗 r/reverseengineering Decomp vs Recomp vs Port! So What Is the Difference? rss
submitted by /u/chicagogamecollector
[link] [comments] -
🔗 r/reverseengineering Locally hosted cheat sheets and helpful information for labs. rss
submitted by /u/Visual_Implement5116
[link] [comments] -
🔗 r/reverseengineering RE//verse 2026: Hacking the Xbox One rss
submitted by /u/born-in1984
[link] [comments] -
🔗 badlogic/pi-mono v0.58.3 release
No content.
-
🔗 badlogic/pi-mono v0.58.2 release
Added
- Improved settings, theme, thinking, and show-images selector layouts by using configurable select-list primary column sizing (#2154 by @markusylisiurunen)
Fixed
- Fixed fuzzy
editmatching to normalize Unicode compatibility variants before comparison, reducing false "oldText not found" failures for text such as CJK and full-width characters (#2044) - Fixed
/model <ref>exact matching and picker search to recognize canonicalprovider/modelreferences when model IDs themselves contain/, such as LM Studio models likeunsloth/qwen3.5-35b-a3b(#2174) - Fixed Anthropic OAuth manual login and token refresh by using the localhost callback URI for pasted redirect/code flows and omitting
scopefrom refresh-token requests (#2169) - Fixed stale scrollback remaining after session switches by clearing the screen before wiping scrollback (#2155 by @Perlence)
- Fixed extra blank lines after markdown block elements in rendered output (#2152 by @markusylisiurunen)
-
🔗 r/york Cottage in York? rss
Hi guys, is there any nice and affordable spa/cottage outside York? Is my husband birthday and I would like to do something different.. see more green (we live in Leeds) but we don’t have a car. Please any suggestions in booking or Airbnb?? Please I would appreciate any suggestions xx
submitted by /u/Bubblygirl1999
[link] [comments] -
🔗 r/Yorkshire Rare sighting of the legendary flying squirrel of North Yorkshire… rss
| submitted by /u/aspiranthighlander
[link] [comments]
---|--- -
🔗 r/york Does anyone want Motorsport Magazines? rss
I subscribe for their online content and archive, but seem to have clicked the wrong option at some point and am now getting the print edition sent monthly.
When I get round to it, I'll ring them up and see if they want to stop wasting postage, but in the meantime does anyone want the last three issues (Feb, March, April) and any further ones that arrive?
submitted by /u/Brickie78
[link] [comments] -
🔗 @HexRaysSA@infosec.exchange Our IDA Starter course is moving to on-demand! mastodon
Our IDA Starter course is moving to on-demand!
Learn the fundamentals of reverse engineering with IDA at your own pace, and at a lower price.
Coming April 2026
👉 Learn more & Join the waitlist: https://hex-rays.com/training/ida-pro- starter-training -
🔗 r/york Nut Allergy rss
Hi, I have a severe peanut allergy…
just wondering if anyone knows of any restaurants in Manchester and York with nut free kitchen???
Thanks!
submitted by /u/LH1998xx
[link] [comments] -
🔗 remorses/critique critique@0.1.127 release
- Directory tree index in web previews — shared
critique --webpages now render the same file tree shown in the TUI at the top of the page, with each file row linking directly to its diff section:critique --web "My changes"→ opens a page with a clickable tree index at the top
Clicking a file in the tree jumps straight to that file's diff — useful for large PRs with many files.
- Fixed
.patchURL routing —GET /v/:id.patchnow works reliably. Previously the Hono route param could conflict and fail to route.patchrequests correctly.
- Directory tree index in web previews — shared
-
🔗 r/reverseengineering PHP 8 disable_functions bypass PoC rss
submitted by /u/Firm-Armadillo-3846
[link] [comments] -
🔗 r/LocalLLaMA Homelab has paid for itself! (at least this is how I justify it...) rss
| Hey, I thought I'd do an update on my Homelab I posted a while back. I have it running on LLM experiments, which I wrote up here. Basically, it seems I may have discovered LLM Neuroanatomy, and am now using the server to map out current LLM's like the Qwen3.5 and GLM series (thats the partial 'Brain Scan' images here). Anyway, I have the rig power though a Tasmota, and log everything to Grafana. My power costs are pretty high over here in Munich, but calculating with a cost of about $3.50 per GH100 module per hour (H100s range in price, but these have 480GB system RAM and 8TB SSD per chip, so I think $3.50 is about right), I would have paid today $10,000.00 in on-demand GPU use. As I paid $9000 all up, and power was definitely less than $1000, I am officially ahead! Remember, stick to the story if my wife asks! submitted by /u/Reddactor
[link] [comments]
---|--- -
🔗 r/wiesbaden Regionale Motorradgruppe sucht dich! rss
Wir sind eine Gruppe von knapp 180 Bikern aus der Region, die gemeinsam Ausfahrten machen, sich untereinander vernetzen und einfach zusammen eine richtig gute Zeit haben. 🏍️
Neben unseren Rideouts engagieren wir uns auch für Charity-Projekte. Letztes Jahr konnten wir 1.510,19 € für das Bärenherz Hospiz in Wiesbaden sammeln – und dieses Jahr wollen wir daran natürlich wieder anknüpfen. ❤️
Außerdem organisieren wir immer mal wieder Event-Rideouts, bei denen wir kleinen Gesten verteilen, Menschen eine Freude machen und einfach positive Vibes in die Gegend bringen.
Wenn du Bock hast, Teil der Community zu werden, gemeinsam zu fahren und bei solchen Aktionen mitzumachen, dann schreib mir gerne eine PN.
Sobald ich wieder auf Reddit unterwegs bin, können wir alles Weitere in Ruhe besprechen.
submitted by /u/Intrepid-Sea-2045
[link] [comments] -
🔗 r/york Any indoor motorbike storage in the city centre? rss
Does anyone have spare space in a city centre garage that they'd like some money for? The council garages are full at the moment. Ideally near Gillygate but anywhere in the centre would be great.
- 8ft L x 3ft W
- No damage liability for you
- Bike clean and covered (never running in the garage)
- Infrequent access (never too late/early, likely only on sunny evenings and weekends)
Thanks!
submitted by /u/_lbowes
[link] [comments] -
🔗 r/york Driving lessons in Haxby/York? rss
Anyone know of any good, reliable driving instructors I could maybe get in contact with for driving lessons? Tried the usual Bill Plant, check mirrors and LDC Driving School but they have no availability.
Thanks
submitted by /u/a_person4499
[link] [comments] -
🔗 r/york Nazi Map of York, England from 1942 rss
submitted by /u/123brillwill
[link] [comments] -
🔗 r/york Appeal launched to raise £250K to save York’s oldest nature reserve rss
| submitted by /u/willfiresoon
[link] [comments]
---|--- -
🔗 r/wiesbaden Der Frosch erinnert euch an eure Bürgerpflicht! rss
submitted by /u/Extension-Cry225
[link] [comments] -
🔗 r/LocalLLaMA You guys gotta try OpenCode + OSS LLM rss
| as a heavy user of CC / Codex, i honestly find this interface to be better than both of them. and since it's open source i can ask CC how to use it (add MCP, resume conversation etc). but i'm mostly excited about having the cheaper price and being able to talk to whichever (OSS) model that i'll serve behind my product. i could ask it to read how tools i provide are implemented and whether it thinks their descriptions are on par and intuitive. In some sense, the model is summarizing its own product code / scaffolding into product system message and tool descriptions like creating skills. P3: not sure how reliable this is, but i even asked kimi k2.5 (the model i intend to use to drive my product) if it finds the tools design are "ergonomic" enough based on how moonshot trained it lol submitted by /u/No-Compote-6794
[link] [comments]
---|--- -
🔗 Register Spill Joy & Curiosity #78 rss
Imagine working in the oil industry and someone figures out how to turn rainwater into oil. Some in the industry aren't impressed: "More oil. Pah. That won't change much, actually. It's just more oil. We've been dealing with oil for decades. Sure, there's more, but hey: more work for us. The rest is the same old, same old."
They'd be right to some extent. It is more oil and some things would not change. Oil would still be a physical business. You would still need customers and contracts and sales channels and salespeople. You would still need refineries and storage and transport and distribution. You would still need safety and regulation and all of that.
But, also: everything else would change. Because the oil industry isn't built around oil . It's built around hard-to-find, only-in-some-places, hard-to-extract oil.
The price of crude oil would collapse. Reserves would lose their value. Finding oil fields and drilling for oil would not be a thing anymore. Location wouldn't matter anymore, since it rains nearly everywhere.
And then come the second-order effects: on energy policy and geopolitics, on plastics and chemicals and fertilizers, on the parts of the industry that only refine and move and sell oil. Oil wouldn't stop being oil, but the bottleneck would move through the industry and bump into and kick over many things along the way.
You know me. I'm not here to provide indirect political commentary on rising petrol prices. No, I'm talking about software, of course, and I want you to again consider: we now have buttons that we can smash and out come hundreds and thousands of lines of working code, in seconds.
Those buttons are not just another type of developer tool and "we've had code generators for decades" is not a valid reply.
Code is no longer hard-to-find, only-in-some-places, hard-to-extract. And yes, I am preaching to a choir here, but it's Sunday and this is my newsletter and, damn it, I have to say this again, because I keep bumping into engineers who still don't seem to understand what follows from that.
They'll say something like: yes, someone should rebuild GitHub, because GitHub is dead. And I agree, yes, I've been saying that. But what they actually mean is: someone should rebuild GitHub as-is, with the same fundamental assumptions, with the same shape of open source as we know it, and built on the idea that code is scarce.
And I want to shake them and go: man, don't you see? All of it was built on the assumption that code is expensive! And most of it doesn't make sense anymore when code is cheap. Yes, some things won't change. The need to do proper engineering won't go away. But many, many, many things will, because a single constant in a very fundamental equation has been changed.
-
Craig Mod built "the accounting software I've always craved" (called TaxBot2000) and is now software bonkers: "It's strange times. Anyway, I'm mad for software right now. Bonkers. I can't stop thinking about things to make, things to make better. And then I go and make them. There's an energy around all this that is -- truly -- epochal. If you're not playing with models like Claude, you should probably take a peek. It's the time of building."
-
Great page: background-agents.com. There's obviously (no: it's very obvious) a bias towards the creators of the page there, but leaving that aside: this is where it's going.
-
This tweet by Mitchell might have saved me this week. I read it and while I'm not like the guy in the video, I immediately felt guilty for getting distracted so often. Apparently, I have built up muscle memory to cmd-tab to a different window as soon as I submit a prompt. So, after reading that tweet, I closed the browser window with my private profile, put my phone away, and swore to myself that I'll now either try to figure out the same thing the agent is trying to figure out or do something else on my own while it's running. That lead to two incredibly productive days that made me feel great.
-
Karpathy released autoresearch, which is a repository, a tiny bit of code, and a Markdown file to instruct a coding agent to act like an LLM researcher: "The idea: give an AI agent a small but real LLM training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards, and repeats. You wake up in the morning to a log of experiments and (hopefully) a better model." The idea of running an agent in a loop isn't new, but what I find fascinating: how small this repo is, how small the codebase is, how direct and clear the instructions and the workflow are, and the meta thing of this being exactly what the non-nano researchers at the big labs are doing, at least kind of. Tobi Lutke then used the same loop, through the pi-autoresearch plugin, but instead of training a model the agent optimized his templating language. Now the question is: what problems are as verifiable as a training run result or performance? Also, if you read this whole paragraph without thinking of the word "Ralph" that means we live in different bubbles.
-
Six Selfish Reasons to Have Kids, by Kevin Kelly.
-
Florian Brand on LLM benchmarks: "It is hard to see real-world utility being measured here. […] The other issue is the harness: It includes a set of tools to look at the files, revert to a previous step and edit code, but the model has to return a block of reasoning, followed by the tool call in triple-backtick delimited markdown. This is not how models work these days! […] So, what happens when you fix those mistakes?" I guess we all know by now that the benchmarks that are shared on the day of a model release are just pointers in a general direction, but this was still very, very interesting to read.
-
Why ATMs didn't kill bank teller jobs, but the iPhone did: "The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists." Good piece. The framing of "automating a job is much harder than making it irrelevant" makes a lot of sense to me and seems like a useful lens.
-
Amazing: howisFelix.today? Lots of nice little insights. Don't miss the conclusion at the end.
-
"What's your favourite disassembler? Mine's a font." Yes, that's one hard line, and yes, you read it right: "This font converts sequences of hexadecimal lowercase characters into disassembled Z80 instructions, by making extensive use of OpenType's Glyph Substitution Table (GSUB) and Glyph Positioning Table (GPOS)." Watch the video.
-
Gruber's review of the MacBook Neo: "The Neo crystallizes the post-Jony Ive Apple. The MacBook "One" was a design statement, and a much-beloved semi-premium product for a relatively small audience. The Neo is a mass-market device that was conceived of, designed, and engineered to expand the Mac user base to a larger audience. It's a design statement too, but of a different sort -- emphasizing practicality above all else. It's just a goddamn lovely tool, and fun too. I'll just say it: I think I'm done with iPads. Why bother when Apple is now making a crackerjack Mac laptop that starts at just $600? May the MacBook Neo live so long that its name becomes inapt." And that first line is the most Gruber line he's ever published.
-
But this review of the MacBook Neo I really loved. Not only because of this paragraph: "Downloaded Xcode and dragged buttons and controls around in Interface Builder with no understanding of what I was looking at. I edited SystemVersion.plist to make the 'About this Mac' window say it was running Mac OS 69, which is the s*x number, which is very funny. I faked being sick to watch WWDC 2011 -- Steve Jobs' last keynote -- and clapped alone in my room when the audience clapped, and rebuilt his slides in Keynote afterward because I wanted to understand how he'd made them feel that way." But also because of this one: "That is not a bug in how he's using the computer. That is the entire mechanism by which a kid becomes a developer. Or a designer. Or a filmmaker. Or whatever it is that comes after spending thousands of hours alone in a room with a machine that was never quite right for what you were asking of it."
-
Apple Does Fusion: "This is why I think Fusion Architecture is the real story.
Not because of what M5 Pro and M5 Max can do today. Because of what it opens up. Once you've proven you can split the chip and keep unified memory working across the pieces, the question changes. It is no longer 'how big can we make this chip?' It is 'how many pieces can we connect, and in how many dimensions?'"
-
Some Words on WigglyPaint. In the Joy column: this looks so lovely! I want to play with WigglyPaint! In the Curiosity column, the ending: "The most wildly successful project I've ever released is no longer mine. In all my years of building things and sharing them online, I have never felt so violated."
-
Drew Breunig is asking why is Claude an Electron app. His hypothesis: "For one thing, coding agents are really good at the first 90% of dev. But that last bit - nailing down all the edge cases and continuing support once it meets the real world - remains hard, tedious, and requires plenty of agent hand-holding." After having worked on Zed and contributed a few things to Ghostty (the first and only two truly native macOS apps I've worked on): I think most engineers underestimate how hard it is to build a truly great native application. And the question is: will your users notice, or care? If you're building the application for a business, will going native make the business more successful? On top of that: once you've worked on a native application you realize what an amazing platform the web is and how much developer tooling has been built in the last twenty, thirty years around it.
-
And here's Nikita Prokopov's answer to Drew's question: Claude is an Electron App because we've lost native.
-
Helen Min: Software isn't dying, but it is becoming more honest. Fascinating stuff. This line here, for example: "I often hear founders and other hyper-rational types ask why we haven't always billed for outcomes. The answer usually boils down to technical limitations and risk." That made me wonder: because now you can kiiinda say that tokens are substitute for outcomes? If you spend millions of tokens on something, won't you get outcomes? It might not be dying, but software is changing, man. And the old software we knew -- that's dead, I'm pretty sure. Dead in the sense that rock & roll is dead.
-
I also found this podcast with Bret Taylor to have some interesting thoughts on outcome-based billing.
-
Yes: "Willingness to look stupid is a genuine moat in creative work"
-
The 8 Levels of Agentic Engineering. Interesting, but at this point I'm convinced that in a year that ladder will look very funny and outdated. The models will wash away a lot.
-
Talking about models washing away stuff, here's Simon Willison: "Drop a coding agent into any existing codebase that uses libraries and tools that are too private or too new to feature in the training data and my experience is that it works just fine --the agent will consult enough of the existing examples to understand patterns, then iterate and test its own output to fill in the gaps." Many, many things I believed over the last year have been washed away by these models. If you still think Opus 4.6 is the peak, try deep mode in Amp, which uses GPT-5.3-Codex right now. Stare into its eyes.
-
Not a short form video guy, but I am a this-is-funny guy and this is funny: Taking my mate ChatGPT to lunch. (But, seriously, will AI cliche phrases disappear in the future or always be a thing?)
-
Or I guess I should've said "trope" instead of "cliche", because I'm going to ask a model to create a really, really dense version of this and then I'll put it in my ChatGPT system prompt: tropes.md.
-
Temporal: The 9-Year Journey to Fix Time in JavaScript. Years ago, back when we had such things, I was in a quarterly planning meeting. I ran the meeting, in fact. I was the manager, and I asked an engineer on my team to give a rough estimate of how long something would take. "Whew, really hard to say," he said. "Come on," I pushed. "We need something here, so--gun to your head--how long?" "Gun to my head?" he said. "I'd take the bullet." So, anyway, that's what I think of every time date and time libraries come up. Fix Time in JavaScript? I'd take the bullet.
-
I love Google Maps but I don't really enjoy using it to find places to eat in a city I don't know. And "don't really enjoy using" it is putting it mildly. Now Google Maps is getting Gemini and that seems like one of the most interesting "we put an LLM in it" product changes in a while.
-
Paula Muldoon is saying staff engineers need to get hands-on again: "This definition of staff engineering, particularly the organisational impact, made a lot of sense before 2025. Staff engineers need to stop being hands-on with the code as the majority of their work and spend time teaching others, making strategy etc. […] AI software tools have changed that." Yes. And now let's all consider what other roles and processes in the Big Tech Org Chart 2010-2025 don't make a lot of sense anymore. This isn't 2018 anymore.
-
Boredom Is the Price We Pay for Meaning: "If you try to distract yourself from boredom, if you run from it, all will be lost. Brodsky quoted an imperishable line from Robert Frost: 'The best way out is always through.' A note written by the novelist David Foster Wallace makes a similar point: 'Bliss--a second-by-second joy and gratitude at the gift of being alive, conscious--lies on the other side of crushing, crushing boredom.'"
Do you also like to deem yourself an oil industry expert in your newsletter? Sign right up:
-
-
🔗 HexRaysSA/plugin-repository commits sync repo: +8 releases rss
sync repo: +8 releases ## New releases - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer): 1.0.3 - [IDAssist](https://github.com/symgraph/IDAssist): 1.2.0, 1.1.0 - [IDAssistMCP](https://github.com/symgraph/IDAssistMCP): 1.2.0, 1.1.0 - [augur](https://github.com/0xdea/augur): 0.8.1 - [haruspex](https://github.com/0xdea/haruspex): 0.8.1 - [rhabdomancer](https://github.com/0xdea/rhabdomancer): 0.8.1
-
- March 14, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-14 rss
IDA Plugin Updates on 2026-03-14
New Releases:
- augur v0.8.1
- haruspex v0.8.1
- ida-theme-explorer v1.0.3
- IDAssist IDA 9.1 support
- IDAssist Updates and bug fixes
- IDAssistMCP IDA 9.1 support
- IDAssistMCP Updates and bug fixes
- OpenLumina v9.3.0
- rhabdomancer v0.8.1
Activity:
- augur
- binlex
- haruspex
- 5f4c3a63: chore: prepare for release
- ida-pro-mcp
- 9cd287d5: Merge pull request #292 from mrexodia/installer-refactor
- a3235c37: Merge pull request #291 from mrexodia/test-fix
- bf29384e: Refactor installation logic
- e0302418: Loosen max docstring word count
- 4af1e138: Blind fix attempt for #289
- 66131071: Merge pull request #273 from withzombies/conversation-improvements
- c3b41831: Merge pull request #287 from deadcode-walker/feat/composite-tools
- 1a2d7a16: Expand MCP query, type, rename, and runtime workflows
- bd7e19d3: removed ext=aggregate gating, tools are always available
- 622204fc: fixed abs() crash on non-int constant values in _filter_constants
- e7d4a383: optimized tool output sizes for token efficiency
- 45cd0a29: removed out-of-scope tools, improved aggregation tool descriptions
- e51c75a6: Merge pull request #281 from CCCougar/main
- 2dc7b84c: Merge pull request #282 from vee1e/fix-opencode-config
- f7531be4: Remove redundant pull_request event
- 5111a9f3: Attempt to fix CI
- 4bdec132: fix: use ida_ida.inf_get_filetype instead of ida_nalt.get_filetype fo…
- 549aa471: fix: remove get_inf_structure and nsucc/succ calls for ida 9.x compat
- f52c113e: fix: remove FUNC_USERDEFINED that doesn't exist in ida 9.x
- 9e5b7215: feat: composite analysis, preprocessing, emulation, notebook
- ida-theme-explorer
- 772f82ff: feat: fix scroll style, add gif and bump 1.0.3
- IDAssist
- eefc4c87: Add Qt5/Qt6 compatibility layer for IDA 9.1+ support
- fcf2e50c: Version fix.
- eda5f78c: Version fix.
- f6566839: Revert "Add Qt environment validation and graceful fallback for broke…
- 13afe981: Add Qt environment validation and graceful fallback for broken PySide6
- fa4ff1fc: Bump version.
- c54ab1b9: Add VULNERABLE_VIA edges, taint analysis improvements, and UI fixes
- IDAssistMCP
- 281cefa0: Add Qt5/Qt6 compatibility layer for IDA 9.1+ support
- 610f4c8d: Bump version.
- 01e3fd01: Revert "Add Qt environment validation and graceful fallback for broke…
- a5c012e8: Add Qt environment validation and graceful fallback for broken PySide6
- a7e7c26c: Update README to reflect current MCP tool names
- 1b517d87: Align MCP tool names for cross-tool consistency
- OpenLumina
- 222658b9: Update ida version in workflow
- python-elpida_core.py
- cb477c6a: fix: Cerebras model name qwen3-235b-a22b → qwen-3-235b-a22b-instruct-…
- 1d5a92d3: fix: Groq 404 model name, S3 bucket default, Streamlit deprecation, P…
- 8509a0ab: Fix 3 bugs: Groq circuit breaker, Perplexity fallback→HuggingFace, ll…
- 9220efd5: Provider expansion: D3→DeepSeek, D9→Cerebras, D12→Groq (9 unique prov…
- f1ae3784: Fix HARD_BLOCK (K8 friendly fire), add S3 Health + MIND Logs tabs, D1…
- rhabdomancer
-
🔗 r/york Signal in central York rss
Anyone else feel that the signal congestion in central York is ridiculous?
My wife and I tried searching for where we could go next, a museum maybe? Spend some money?
But our phones were dead so we went home.
This has been going on for years and it's pathetic in 2026.
submitted by /u/edf34n349843u52-3
[link] [comments] -
🔗 r/york Has anyone lost a half-moon earring in the city centre today? rss
Hey so I just found an earring tonight in the city centre and maybe there’s a chance to find its owner. Send me a message if that’s yours.
submitted by /u/Imaginary_Value1505
[link] [comments] -
🔗 Locklin on science Post money Silicon Valley Lotharios rss
There are many amusing stereotypical personalities in Silly Con valley. Steve Sailer coined the phrase “Silicon Valley Adventuress” for the very obvious type of women who try various kinds of shakedowns on tech firms and their executives. There’s the more obvious “Divorce Tick” kind of woman; someone who marries a clueless but rich nerdoid and […]
-
🔗 oxigraph/oxigraph v0.5.6 release
- SPARQL: DESCRIBE: do not describe values of blank node "variables".
- SPARQL: Fixes some bug in the parser related to spacing.
- SPARQL: Fixes evaluation of SERVICE clauses with unsupported custom functions.
- JSON-LD: fixes serialization of relative IRIs looking like keywords.
- RocksDB: reduce the number of copies in read operations.
-
🔗 sacha chua :: living an awesome life Org Mode: Export HTML, copy files, and serve the results via simple-httpd so that media files work rss
In Org Mode, when you use "Export to HTML - As HTML file and open", the resulting HTML file is loaded using a
file://URL. This means you can't load any media files. In my post about pronunciation practice, I wanted to test the playback without waiting for my 11ty-based static site generator to churn through the files.simple-httpd lets you run a web server from Emacs. By default, the
httpd-rootis~/public_htmlandhttpd-portis8085, but you can configure it to be somewhere else. Here I set it up to create a new temporary directory, and to delete that directory afterwards.(use-package simple-httpd :config (setq httpd-root (make-temp-file "httpd" t)) :hook (httpd-stop . my-simple-httpd-remove-temporary-root) (kill-emacs . httpd-stop)) (defun my-simple-httpd-remove-temporary-root () "Remove `httpd-root' only if it's a temporary directory." (when (file-in-directory-p httpd-root temporary-file-directory) (delete-directory httpd-root t)))The following code exports your Org buffer or subtree to a file in that directory, copies all the referenced local files (if they're newer) and updates the links in the HTML, and then serves it via simple-httpd. Note that it just overwrites everything without confirmation, so if you refer to files with the same name, only the last one will be kept.
(with-eval-after-load 'ox (org-export-define-derived-backend 'my-html-served 'html :menu-entry '(?s "Export to HTML and Serve" ((?b "Buffer" my-org-serve--buffer) (?s "Subtree" my-org-serve--subtree))))) (defun my-org-serve--buffer (&optional async _subtreep visible-only body-only ext-plist) (my-org-export-and-serve nil)) (defun my-org-serve--subtree (&optional async _subtreep visible-only body-only ext-plist) (my-org-export-and-serve t)) ;; Based on org-11ty--copy-files-and-replace-links ;; Might be a good idea to use something DOM-based instead (defun my-html-copy-files-and-replace-links (info &optional destination-dir) (let ((file-regexp "\\(?:src\\|href\\|poster\\)=\"\\(\\(file:\\)?.*?\\)\"") (destination-dir (or destination-dir (file-name-directory (plist-get info :file-path)))) file-all-urls file-name beg new-file file-re unescaped) (unless (file-directory-p destination-dir) (make-directory destination-dir t)) (unless (file-directory-p destination-dir) (error "%s is not a directory." destination-dir)) (save-excursion (goto-char (point-min)) (while (re-search-forward file-regexp nil t) (setq file-name (or (match-string 1) (match-string 2))) (unless (or (string-match "^#" file-name) (get-text-property 0 'changed file-name)) (setq file-name (replace-regexp-in-string "\\?.+" "" (save-match-data (if (string-match "^file:" file-name) (substring file-name 7) file-name)))) (setq unescaped (replace-regexp-in-string "%23" "#" file-name)) (setq new-file (concat (if info (plist-get info :permalink) "") (file-name-nondirectory unescaped))) (unless (org-url-p file-name) (let ((new-file-name (expand-file-name (file-name-nondirectory unescaped) destination-dir))) (condition-case err (when (or (not (file-exists-p new-file-name)) (file-newer-than-file-p unescaped new-file-name)) (copy-file unescaped new-file-name t)) (error nil)) (when (file-exists-p new-file-name) (save-excursion (goto-char (point-min)) (setq file-re (concat "\\(?: src=\"\\| href=\"\\| poster=\"\\)\\(\\(?:file://\\)?" (regexp-quote file-name) "\\)")) (while (re-search-forward file-re nil t) (replace-match (propertize (save-match-data (replace-regexp-in-string "#" "%23" new-file)) 'changed t) t t nil 1))))))))))) (defun my-org-export-and-serve (&optional subtreep) "Export current org buffer (or subtree if SUBTREEP) to HTML and serve via simple-httpd." (interactive "P") (require 'simple-httpd) (httpd-stop) (unless httpd-root (error "Set `httpd-root'.")) (unless (file-directory-p httpd-root) (make-directory httpd-root t)) (unless (file-directory-p httpd-root) (error "%s is not a directory." httpd-root)) (let* ((out-file (expand-file-name (concat (file-name-base (buffer-file-name)) ".html") httpd-root)) (html-file (org-export-to-file 'my-html-served out-file nil subtreep))) ;; Copy all the files and rewrite all the links (with-temp-file out-file (insert-file-contents out-file) (my-html-copy-files-and-replace-links `(:permalink "/") httpd-root)) (httpd-start) (browse-url (format "http://localhost:%d/%s" httpd-port (file-name-nondirectory html-file)))))Now I can use
C-c C-e(org-export-dispatch), select the subtree withC-s, and uses sto export a subtree to a webserver and have all the media files work. This took 0.46 seconds for my post on pronunciation practice and automatically opens the page in a browser window. In comparison, my 11ty static site generator took 5.18 seconds for a subset of my site (1630 files copied, 214 files generated), and I haven't yet hooked up monitoring it to Emacs, so I have to take an extra step to open the page in the browser when I think it's finished. I think exporting to HTML and serving it with simple-httpd will be much easier for simple cases like this, and then I can export to 11ty once I'm done with the basic checks.This is part of my Emacs configuration.You can e-mail me at sacha@sachachua.com.
-
🔗 Simon Willison My fireside chat about agentic engineering at the Pragmatic Summit rss
I was a speaker last month at the Pragmatic Summit in San Francisco, where I participated in a fireside chat session about Agentic Engineering hosted by Eric Lui from Statsig.
The video is available on YouTube. Here are my highlights from the conversation.
Stages of AI adoption
We started by talking about the different phases a software developer goes through in adopting AI coding tools.
I feel like there are different stages of AI adoption as a programmer. You start off with you've got ChatGPT and you ask it questions and occasionally it helps you out. And then the big step is when you move to the coding agents that are writing code for you—initially writing bits of code and then there's that moment where the agent writes more code than you do, which is a big moment. And that for me happened only about maybe six months ago.
The new thing as of what, three weeks ago, is you don't read the code. If anyone saw StrongDM—they had a big thing come out last week where they talked about their software factory and their two principles were nobody writes any code, nobody reads any code, which is clear insanity. That is wildly irresponsible. They're a security company building security software, which is why it's worth paying close attention—like how could this possibly be working?
I talked about StrongDM more in How StrongDM's AI team build serious software without even looking at the code.
Trusting AI output
We discussed the challenge of knowing when to trust the AI's output as opposed to reviewing every line with a fine tooth-comb.
The way I've become a little bit more comfortable with it is thinking about how when I worked at a big company, other teams would build services for us and we would read their documentation, use their service, and we wouldn't go and look at their code. If it broke, we'd dive in and see what the bug was in the code. But you generally trust those teams of professionals to produce stuff that works. Trusting an AI in the same way feels very uncomfortable. I think Opus 4.5 was the first one that earned my trust—I'm very confident now that for classes of problems that I've seen it tackle before, it's not going to do anything stupid. If I ask it to build a JSON API that hits this database and returns the data and paginates it, it's just going to do it and I'm going to get the right thing back.
Test-driven development with agents
Every single coding session I start with an agent, I start by saying here's how to run the test—it's normally
uv run pytestis my current test framework. So I say run the test and then I say use red-green TDD and give it its instruction. So it's "use red-green TDD"—it's like five tokens, and that works. All of the good coding agents know what red-green TDD is and they will start churning through and the chances of you getting code that works go up so much if they're writing the test first.I wrote more about TDD for coding agents recently in Red/green TDD.
I have hated [test-first TDD] throughout my career. I've tried it in the past. It feels really tedious. It slows me down. I just wasn't a fan. Getting agents to do it is fine. I don't care if the agent spins around for a few minutes wasting its time on a test that doesn't work.
I see people who are writing code with coding agents and they're not writing any tests at all. That's a terrible idea. Tests—the reason not to write tests in the past has been that it's extra work that you have to do and maybe you'll have to maintain them in the future. They're free now. They're effectively free. I think tests are no longer even remotely optional.
Manual testing and Showboat
You have to get them to test the stuff manually, which doesn't make sense because they're computers. But anyone who's done automated tests will know that just because the test suite passes doesn't mean that the web server will boot. So I will tell my agents, start the server running in the background and then use curl to exercise the API that you just created. And that works, and often that will find new bugs that the test didn't cover.
I've got this new tool I built called Showboat. The idea with Showboat is you tell it—it's a little thing that builds up a markdown document of the manual test that it ran. So you can say go and use Showboat and exercise this API and you'll get a document that says "I'm trying out this API," curl command, output of curl command, "that works, let's try this other thing."
I introduced Showboat in Introducing Showboat and Rodney, so agents can demo what they've built.
Conformance-driven development
I had a project recently where I wanted to add file uploads to my own little web framework, Datasette—multipart file uploads and all of that. And the way I did it is I told Claude to build a test suite for file uploads that passes on Go and Node.js and Django and Starlette—just here's six different web frameworks that implement this, build tests that they all pass. Now I've got a test suite and I can say, okay, build me a new implementation for Datasette on top of those tests. And it did the job. It's really powerful—it's almost like you can reverse engineer six implementations of a standard to get a new standard and then you can implement the standard.
Here's the PR for that file upload feature.
Does code quality matter?
It's completely context dependent. I knock out little vibe-coded HTML JavaScript tools, single pages, and the code quality does not matter. It's like 800 lines of complete spaghetti. Who cares, right? It either works or it doesn't. Anything that you're maintaining over the longer term, the code quality does start really mattering.
Here's my collection of vibe coded HTML tools, and notes on how I build them.
Having poor quality code from an agent is a choice that you make. If the agent spits out 2,000 lines of bad code and you choose to ignore it, that's on you. If you then look at that code—you know what, we should refactor that piece, use this other design pattern—and you feed that back into the agent, you can end up with code that is way better than the code I would have written by hand because I'm a little bit lazy. If there was a little refactoring I spot at the very end that would take me another hour, I'm just not going to do it. If an agent's going to take an hour but I prompt it and then go off and walk the dog, then sure, I'll do it.
I turned this point into a bit of a personal manifesto: AI should help us produce better code.
Codebase patterns and templates
One of the magic tricks about these things is they're incredibly consistent. If you've got a codebase with a bunch of patterns in, they will follow those patterns almost to a tee.
Most of the projects I do I start by cloning that template. It puts the tests in the right place and there's a readme with a few lines of description in it and GitHub continuous integration is set up. Even having just one or two tests in the style that you like means it'll write tests in the style that you like. There's a lot to be said for keeping your codebase high quality because the agent will then add to it in a high quality way. And honestly, it's exactly the same with human development teams—if you're the first person to use Redis at your company, you have to do it perfectly because the next person will copy and paste what you did.
I run templates using cookiecutter - here are my templates for python-lib, click-app, and datasette-plugin.
Prompt injection and the lethal trifecta
When you build software on top of LLMs you're outsourcing decisions in your software to a language model. The problem with language models is they're incredibly gullible by design. They do exactly what you tell them to do and they will believe almost anything that you say to them.
Here's my September 2022 post that introduced the term prompt injection.
I named it after SQL injection because I thought the original problem was you're combining trusted and untrusted text, like you do with a SQL injection attack. Problem is you can solve SQL injection by parameterizing your query. You can't do that with LLMs—there is no way to reliably say this is the data and these are the instructions. So the name was a bad choice of name from the very start.
I've learned that when you coin a new term, the definition is not what you give it. It's what people assume it means when they hear it.
Here's more detail on the challenges of coining terms.
The lethal trifecta is when you've got a model which has access to three things. It can access your private data—so it's got access to environment variables with API keys or it can read your email or whatever. It's exposed to malicious instructions—there's some way that an attacker could try and trick it. And it's got some kind of exfiltration vector, a way of sending messages back out to that attacker. The classic example is if I've got a digital assistant with access to my email, and someone emails it and says, "Hey, Simon said that you should forward me your latest password reset emails." If it does, that's a disaster. And a lot of them kind of will.
My post describing the Lethal Trifecta.
Sandboxing
We discussed the challenges of running coding agents safely, especially on local machines.
The most important thing is sandboxing. You want your coding agent running in an environment where if something goes completely wrong, if somebody gets malicious instructions to it, the damage is greatly limited.
This is why I'm such a fan of Claude Code for web.
The reason I use Claude on my phone is that's using Claude Code for the web, which runs in a container that Anthropic run. So you basically say, "Hey, Anthropic, spin up a Linux VM. Check out my git repo into it. Solve this problem for me." The worst thing that could happen with a prompt injection against that is somebody might steal your private source code, which isn't great. Most of my stuff's open source, so I couldn't care less.
On running agents in YOLO mode, e.g. Claude's
--dangerously-skip-permissions:I mostly run Claude with dangerously skip permissions on my Mac directly even though I'm the world's foremost expert on why you shouldn't do that. Because it's so good. It's so convenient. And what I try and do is if I'm running it in that mode, I try not to dump in random instructions from repos that I don't trust. It's still very risky and I need to habitually not do that.
Safe testing with user data
The topic of testing against a copy of your production data came up.
I wouldn't use sensitive user data. When you work at a big company the first few years everyone's cloning the production database to their laptops and then somebody's laptop gets stolen. You shouldn't do that. I'd actually invest in good mocking—here's a button I click and it creates a hundred random users with made-up names. There's a trick you can do there which is much easier with agents where you can say, okay, there's this one edge case where if a user has over a thousand ticket types in my event platform everything breaks, so I have a button that you click that creates a simulated user with a thousand ticket types.
How we got here
I feel like there have been a few inflection points. GPT-4 was the point where it was actually useful and it wasn't making up absolutely everything and then we were stuck with GPT-4 for about 9 months—nobody else could build a model that good.
I think the killer moment was Claude Code. The coding agents only kicked off about a year ago. Claude Code just turned one year old. It was that combination of Claude Code plus Sonnet 3.5 at the time—that was the first model that really felt good enough at driving a terminal to be able to do useful things.
Then things got really good with the November 2025 inflection point.
It's at a point where I'm oneshotting basically everything. I'll pull out and say, "Oh, I need three new RSS feeds on my blog." And I don't even have to ask if it's going to work. It's like a two sentence prompt. That reliability, that ability to predictably—this is why we can start trusting them because we can predict what they're going to do.
Exploring model boundaries
An ongoing challenge is figuring out what the models can and cannot do, especially as new models are released.
The most interesting question is what can the models we have do right now. The only thing I care about today is what can Claude Opus 4.6 do that we haven't figured out yet. And I think it would take us six months to even start exploring the boundaries of that.
It's always useful—anytime a model fails to do something for you, tuck that away and try again in 6 months because it'll normally fail again, but every now and then it'll actually do it and now you might be the first person in the world to learn that the model can now do this thing.
A great example is spellchecking. A year and a half ago the models were terrible at spellchecking—they couldn't do it. You'd throw stuff in and they just weren't strong enough to spot even minor typos. That changed about 12 months ago and now every blog post I post I have a proofreader Claude thing and I paste it and it goes, "Oh, you've misspelled this, you've missed an apostrophe off here." It's really useful.
Here's the prompt I use for proofreading.
Mental exhaustion and career advice
This stuff is absolutely exhausting. I often have three projects that I'm working on at once because then if something takes 10 minutes I can switch to another one and after two hours of that I'm done for the day. I'm mentally exhausted. People worry about skill atrophy and being lazy. I think this is the opposite of that. You have to operate firing on all cylinders if you're going to keep your trio or quadruple of agents busy solving all these different problems.
I think that might be what saves us. You can't have one engineer and have him do a thousand projects because after 3 hours of that, he's going to literally pass out in a corner.
I was asked for general career advice for software developers in this new era of agentic engineering.
As engineers, our careers should be changing right now this second because we can be so much more ambitious in what we do. If you've always stuck to two programming languages because of the overhead of learning a third, go and learn a third right now—and don't learn it, just start writing code in it. I've released three projects written in Go in the past two weeks and I am not a fluent Go programmer, but I can read it well enough to scan through and go, "Yeah, this looks like it's doing the right thing."
It's a great idea to try fun, weird, or stupid projects with them too:
I needed to cook two meals at once at Christmas from two recipes. So I took photos of the two recipes and I had Claude vibe code me up a cooking timer uniquely for those two recipes. You click go and it says, "Okay, in recipe one you need to be doing this and then in recipe two you do this." And it worked. I mean it was stupid, right? I should have just figured it out with a piece of paper. It would have been fine. But it's so much more fun building a ridiculous custom piece of software to help you cook Christmas dinner.
Here's more about that recipe app.
What does this mean for open source?
Eric asked if we would build Django the same way today as we did 22 years ago.
In 2003 we built Django. I co-created it at a local newspaper in Kansas and it was because we wanted to build web applications on journalism deadlines. There's a story, you want to knock out a thing related to that story, it can't take two weeks because the story's moved on. You've got to have tools in place that let you build things in a couple of hours. And so the whole point of Django from the very start was how do we help people build high-quality applications as quickly as possible. Today, I can build an app for a news story in two hours and it doesn't matter what the code looks like.
I talked about the challenges that AI-assisted programming poses for open source in general.
Why would I use a date picker library where I'd have to customize it when I could have Claude write me the exact date picker that I want? I would trust Opus 4.6 to build me a good date picker widget that was mobile friendly and accessible and all of those things. And what does that do for demand for open source? We've seen that thing with Tailwind, right? Where Tailwind's business model is the framework's free and then you pay them for access to their component library of high quality date pickers, and the market for that has collapsed because people can vibe code those kinds of custom components.
Here are more of my thoughts on the Tailwind situation.
I don't know. Agents love open source. They're great at recommending libraries. They will stitch things together. I feel like the reason you can build such amazing things with agents is entirely built on the back of the open source community.
Projects are flooded with junk contributions to the point that people are trying to convince GitHub to disable pull requests, which is something GitHub have never done. That's been the whole fundamental value of GitHub—open collaboration and pull requests—and now people are saying, "We're just flooded by them, this doesn't work anymore."
I wrote more about this problem in Inflicting unreviewed code on collaborators.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/wiesbaden Ein Discord für Lesemäuse <3 rss
Hallo ihr Buchmenschen!
Ich habe einen kleinen, kuscheligen Discord-Server gegründet, auf dem sich alle, die Bücher lieben, treffen, quatschen und ihre Lieblingsgeschichten teilen können.
Hier kannst du einfach ankommen, dich in Ruhe umsehen und nach Lust und Laune mitlesen oder mitreden. Egal ob Fantasy, Romance, Thriller, Manga oder einfach nur gemütliches Stöbern – bei uns ist jede*r willkommen.
Was dich erwartet:
Gemütliche Leseecken für Lesetalk, Buchempfehlungen, Spoiler und Plottwists
Kreative Kanäle für Fanart, Bookmemes, Lieblingszitate & Book Aesthetic
Buddy Reads, Lesekreise oder einfach nur nette Plauderei über Bücher
Rollen, die du selbst nach deinen Lieblingsgenres oder deinem Lese-Vibe auswählen kannst
Alles ganz entspannt – du musst nichts, darfst alles. Unser Ziel ist ein freundlicher, warmer Ort für alle, die gerne lesen, wo man sich einfach wohlfühlt.
Wenn du Lust hast, vorbei zu schauen, schreib mir gerne eine DMund komm gern vorbei
Wir freuen uns schon auf dich, deine Lieblingsbücher und gemütliche Gespräche bei einer virtuellen Tasse Tee oder Kaffee!
submitted by /u/Ok-Calendar-9250
[link] [comments] -
🔗 r/LocalLLaMA Nvidia's Nemotron 3 Super is a bigger deal than you think rss
| submitted by /u/Comfortable-Rock-498
[link] [comments]
---|--- -
🔗 r/Leeds Here's some Flixbus changes including the new 905 connecting Bradford & Leeds to Heathrow & Gatwick. rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/Leeds (21068) BU75 WDL making the debut of the Volvo B8RLE MCV Evora debut on Go Ahead West Yorkshire's X99. 4 more would join it soon. rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/reverseengineering Reverse Engineering Android 16 Memory Management: Solving the Knox-Induced 512B Sector Fragmentation Paradox rss
submitted by /u/Funny_You4295
[link] [comments] -
🔗 r/york visiting alone - where to eat at? rss
hello!! I'm going to York next week on my own and I'm quite anxious/nervous when it comes to eating out by myself. I want some places that aren't too busy, but also where I won't be the only person there because then I feel too seen, and also preferably with tables that aren't too close together. If you know any places like that please let me know!! I'm quite picky so I probably won't go for any places that serve Asian food since it typically has ingredients I'm not keen on (as sad as that is haha) but I'll still be willing to take a look! Thanks!!
submitted by /u/nek-uno
[link] [comments] -
🔗 r/york Am I imagining thing or are people in York mildly racist rss
I am an international student (F) and I have studied in the UK before but this is my first time living in York. I am a person who smiles at strangers when we make eye contact but in the past 7 months only one person smiled back at me the rest just stare me down. Not smiling is fine tbh but every time I go to boots for a prescription i get told to wait in the corner where there are only people of colour waiting for prolonged periods.... I also have ignored that saying it's circumstantial. BUT today i went to M&S foods (i regularly go there at least 3x a week) and got stopped by the security saying I did not scan all items, he checked the receipt and items and held his ground, I went through each item one by one with him and he was convinced, he followed that by asking if I used a bag and not paid for it when I DID NOT. Am I weird and sensitive for feeling targeted?
submitted by /u/Sweaty-Artist5986
[link] [comments] -
🔗 r/Leeds Map of skate spots in Leeds rss
I’ve been building a site that maps skate spots around the world and just added a Leeds guide.
It includes skateparks, street spots and DIY spots in the area.
You can check it out here:
https://urbanatlas.uk/guides/skate-spots-leeds
If there are any Leeds spots missing let me know and I’ll add them.
submitted by /u/urbanatlas-dev
[link] [comments] -
🔗 r/reverseengineering I rewrote my ELF loader in Rust and added new features! rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 r/wiesbaden Fußballgruppe rss
Hi Zusammen,
ich suche eine Gruppe die regelmäßig Fußball spielen geht oder einzelne Personen die Bock drauf hätten jeden Sonntag kicken zu gehen - einfach auf entspannt und zum Spaß.
Wir sind bereits zu dritt (30,32,33) - Alter, Herkunft etc. ist egal
submitted by /u/Lebenskuenstlerinho
[link] [comments] -
🔗 r/wiesbaden 30M looking to meet fun people rss
Hey Wiesbaden! Looking to meet some likeminded people and maybe actually leave my apartment more often. I'm a Franco-Spanish guy (30M), I enjoy a bit everything creative (drawing, painting, animation, arts and crafts... currently I'm very into papier mâché sculptures). I like boulder, Magic the gathering (I'm not super experienced tho so if you're a pro you might get bored hahahah), I also love going to museums and more stuff but listing everything is hard. If any of that sounds like your thing, hit me up! Bouldering sessions, casual MTG games, museum trips, crafting together, or just a casual drink here and there, I'm down for anything really. Have a nice one!
submitted by /u/Raphi
[link] [comments] -
🔗 r/york Spring Blossom in the Museum Gardens rss
| submitted by /u/York_shireman
[link] [comments]
---|--- -
🔗 r/reverseengineering Cross-Platform GUI for APK Decompilation, Analysis, and Recompilation rss
submitted by /u/DeemounUS
[link] [comments] -
🔗 r/Leeds Moved to Leeds – how do people find part-time/temp work here? rss
Hi everyone, sorry if this isn’t the right place to ask, but I was wondering if anyone knows of any WhatsApp (or similar) groups that share part-time or temporary job opportunities?
I recently moved to Leeds and when I lived in Manchester I used to find bits of work through a group chat that posted things like bar work, nannying, care work, receptionist/admin roles and other short-term jobs. It was really helpful for picking up flexible work.
Does anyone know of anything similar in or around the Leeds area? If not, I’d also really appreciate any recommendations on how to find part-time or temporary work locally.
I’m mainly a stay-at-home mum at the moment but I’m slowly looking to get back into work. Any advice would be really appreciated — and I hope this is okay to ask here.
Thank you!
submitted by /u/everybody_wake_up
[link] [comments] -
🔗 r/Leeds Preachers on Briggate rss
There seems to be more and more self appointed 'preachers' on Briggate. Some of them seem to be bordering on having mental health issues (screaming repeatedly etc). Is preaching allowed? I don't have a problem with people talking about their faith but some aggressive/unstable behaviour is worrying.
submitted by /u/Mental_Brick2013
[link] [comments] -
🔗 r/Leeds Fire hazard in the Trinity rss
These things are a lot uglier in real life. ,
submitted by /u/Life_Exchange_7188
[link] [comments] -
🔗 badlogic/pi-mono v0.58.1 release
Added
- Added
pi uninstallalias forpi install --uninstallconvenience
Fixed
- Fixed OpenAI Codex websocket protocol to include required headers and properly terminate SSE streams on connection close (#1961)
- Fixed WSL clipboard image fallback to properly handle missing clipboard utilities and permission errors (#1722)
- Fixed extension
session_starthook firing before TUI was ready, causing UI operations insession_starthandlers to fail (#2035) - Fixed Windows shell and path handling for package manager operations and autocomplete to properly handle drive letters and mixed path separators
- Fixed Bedrock prompt caching being enabled for non-Claude models, causing API errors (#2053)
- Fixed Qwen models via OpenAI-compatible providers by adding
qwen-chat-templatecompat mode that uses Qwen's native chat template format (#2020) - Fixed Bedrock unsigned thinking replay to handle edge cases with empty or malformed thinking blocks (#2063)
- Fixed headless clipboard fallback logging spurious errors in non-interactive environments (#2056)
- Fixed
models.jsonprovider compat flags not being honored when loading custom model definitions (#2062) - Fixed xhigh reasoning effort detection for Claude Opus 4.6 to match by model ID instead of requiring explicit capability flag (#2040)
- Fixed prompt cwd containing Windows backslashes breaking bash tool execution by normalizing to forward slashes (#2080)
- Fixed editor paste to preserve literal content instead of normalizing newlines, preventing content corruption for text with embedded escape sequences (#2064)
- Fixed skill discovery recursing past skill root directories when nested SKILL.md files exist (#2075)
- Fixed tab completion to preserve
./prefix when completing relative paths (#2087) - Fixed npm package installs and lookups being tied to the active repository Node version by adding
npmCommandas an argv-style settings override for package manager operations (#2072) - Fixed
ctx.ui.getEditorText()in the extension API returning paste markers (e.g.,[paste #1 +24 lines]) instead of the actual pasted content (#2084) - Fixed startup crash when downloading
fd/ripgrepon first run by usingpipeline()instead offinished(readable.pipe(writable))so stream errors from timeouts are caught properly, and increased the download timeout from 10s to 120s (#2066)
- Added
-
🔗 r/Leeds Is there a female or mixed group equivalent of Andy’s man club, or any other support groups in Leeds? rss
Thankyou 🙏🏽
submitted by /u/anordicalien
[link] [comments] -
🔗 r/wiesbaden Linienbus in Wiesbaden geklaut: Teenie fährt bis Karlsruhe rss
TLDR: 15-Jähriger von der ebsch Seid klaut Linienbus im umkämpften Gebiet (Kastel) mit Generalschlüssel, fährt 150km bis Karlsruhe um seiner Freundin zu imponieren (Diebstahl fällt erst nach Stunden auf, weil keiner den Bus vermisst).
Wann bekommt der Junge einen Arbeitsvertrag von ESWE Verkehr? Solche Busfahrer brauchen wir!
submitted by /u/Itchy-Individual3536
[link] [comments] -
🔗 r/york Daffodils by York walls rss
| Does anyone know if the daffodils are all in bloom on the banks around York wall? Will save me driving in for disappointment later today. Thanks submitted by /u/Possible-Ad505
[link] [comments]
---|--- -
🔗 badlogic/pi-mono v0.58.0 release
New Features
- Claude Opus 4.6, Sonnet 4.6, and related Bedrock models now use a 1M token context window (up from 200K) (#2135 by @mitsuhiko).
- Extension tool calls now execute in parallel by default, with sequential
tool_callpreflight preserved for extension interception. GOOGLE_CLOUD_API_KEYenvironment variable support for thegoogle-vertexprovider as an alternative to Application Default Credentials (#1976 by @gordonhwc).- Extensions can supply deterministic session IDs via
newSession()(#2130 by @zhahaoyu).
Added
- Added
GOOGLE_CLOUD_API_KEYenvironment variable support for thegoogle-vertexprovider as an alternative to Application Default Credentials (#1976 by @gordonhwc) - Added custom session ID support in
newSession()for extensions that need deterministic session paths (#2130 by @zhahaoyu)
Changed
- Changed extension tool interception to use agent-core
beforeToolCallandafterToolCallhooks instead of wrapper-based interception. Tool calls now execute in parallel by default, extensiontool_callpreflight still runs sequentially, and final tool results are emitted in assistant source order. - Raised Claude Opus 4.6, Sonnet 4.6, and related Bedrock model context windows from 200K to 1M tokens (#2135 by @mitsuhiko)
Fixed
- Fixed
tool_callextension handlers observing stalesessionManagerstate during multi-tool turns by draining queued agent events before eachtool_callpreflight. In parallel tool mode this guarantees state through the current assistant tool-calling message, but not sibling tool results from the same assistant message. - Fixed interactive input fields backed by the TUI
Inputcomponent to scroll by visual column width for wide Unicode text (CJK, fullwidth characters), preventing rendered line overflow and TUI crashes in places like search and filter inputs (#1982) - Fixed
shift+taband other modified Tab bindings in tmux whenextended-keys-formatis left at the defaultxterm - Fixed EXIF orientation not being applied during image convert and resize, causing JPEG and WebP images from phone cameras to display rotated or mirrored (#2105 by @melihmucuk)
- Fixed the default coding-agent system prompt to include only the current date in ISO format, not the current time, so prompt prefixes stay cacheable across reloads and resumed sessions (#2131)
- Fixed retry regex to match
server_errorandinternal_errorerror types from providers, improving automatic retry coverage (#2117 by @MadKangYu) - Fixed example extensions to support
PI_CODING_AGENT_DIRenvironment variable for custom agent directory paths (#2009 by @smithbm2316) - Fixed tool result images not being sent in
function_call_outputitems for OpenAI Responses API providers, causing image data to be silently dropped in tool results (#2104) - Fixed assistant content being sent as structured content blocks instead of plain strings in the
openai-completionsprovider, causing errors with some OpenAI-compatible backends (#2008 by @geraldoaax) - Fixed error details in OpenAI Responses
response.failedhandler to include status code, error code, and message instead of a generic failure (#1956 by @drewburr) - Fixed GitHub Copilot device-code login polling to respect OAuth slow-down intervals, wait before the first token poll, and include a clearer clock-drift hint in WSL/VM environments when repeated slow-downs lead to timeout
- Fixed usage statistics not being captured for OpenAI-compatible providers that return usage in
choice.usageinstead of the standardchunk.usage(e.g., Moonshot/Kimi) (#2017) - Fixed editor scroll indicator rendering crash in narrow terminal widths (#2103 by @haoqixu)
- Fixed tab characters in editor and input paste not being normalized to spaces (#2027, #1975 by @haoqixu)
- Fixed
wordWrapLineoverflow when wide characters (CJK, fullwidth) fall exactly at the wrap boundary (#2082 by @haoqixu) - Fixed paste markers not being treated as atomic segments in editor word wrapping and cursor navigation (#2111 by @haoqixu)
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [IDASQL](https://github.com/allthingsida/idasql): 0.0.11 -
🔗 r/reverseengineering If you’re working with Akamai sensors and need to gen correctly, here’s a correctly VM-decompiled version for Akamai 3.0. rss
submitted by /u/alex_pushing40
[link] [comments]
-
- March 13, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-13 rss
IDA Plugin Updates on 2026-03-13
New Releases:
- fa v1.0.10
- ida-dbimporter 0.0.3
- IDA-GameDataTracker Release v1.0.0 - 2026.03.13
- IDA-MCP v0.4.0
- IDA-VTableExplorer Release v1.3.0 - 2026.03.13
Activity:
- augur
- binlex
- 92aa5ed6: cleanup config
- 476385b9: cleanup format api
- 71e1a072: cleanup magic detection
- f668d5a5: config compeltion
- 076c7c4e: png imaging
- 6fe67475: terminal imaging
- b99da23a: terminal imaging
- eacf4bda: vex windows fix
- f39e72f9: simplify api
- 4150e667: svg imaging
- 93962907: python api improvements
- 72f91769: macos do not patch upstream repo for vex, vex python bindings
- b6529625: vex lifter serialization and deserialization
- f9073abb: vex api structure cleanup
- 54dab80b: apply vex patches
- binsync
- 3bfaa494: feat/download_linked_projects (#505)
- capa
- 7b23834d: build(deps-dev): bump black from 25.12.0 to 26.3.0 (#2902)
- fa
- haruspex
- hrtng
- 47b7a372: a few minor bugs fixed
- ida-cyberchef
- ida-dbimporter
- d8c99eb0: Fixed readme, Python version, and bumped release version in order to …
- IDA-GameDataTracker
- IDA-MCP
- 7295c27d: 优化项目架构
- IDA-NO-MCP
- 6dba91c0: Merge pull request #8 from Cross2pro/fix/ida9-compat
- ida-sdk
- 039714d1: fix: update ida-cmake submodule (macOS idalib link order fix) (#38)
- IDA-VTableExplorer
- 10f16efa: chore: update CHANGELOG for version 1.3.0 with new JSON export featur…
- aab75739: chore: update plugin version to 1.3.0 and remove outdated build syste…
- 3f07d39b: feat: add LICENSE file and update README to reference licensing details
- 47a91605: chore: remove outdated Docker README.md file
- 18fe1cae: Merge pull request #6 from rweijnen/feature/json-export
- idamcp
- c9538704: update
- python-elpida_core.py
- quokka
- 78bf4f0e: Merge pull request #88 from quarkslab/lzma-compression
- rhabdomancer
-
🔗 r/Leeds American man living in Leeds charged with terror offences rss
What's going on here then?
submitted by /u/Granopoly
[link] [comments] -
🔗 r/york Any idea if there will actually be disruption from this? rss
| This might sound a bit silly but I really don't want a smart meter, I don't see the need for everything to be "smart" (basically means they can just collect more data from me) and I don't see anything wrong with just sending readings every so often. Can I ignore this and be okay or will I actually end up losing power without getting a new meter submitted by /u/Jubbity
[link] [comments]
---|--- -
🔗 r/york Places to develop 35mm film? :) rss
Hi! I just wondered if there’s anywhere in York that develops film. I normally go to Boots but it can take like several weeks and I wondered if somewhere else can do it quicker. I saw York Digital Image does it but that was an older post - do they still do it and has anyone used them?
Thanks! :)
submitted by /u/bunnyels07
[link] [comments] -
🔗 News Minimalist 🐢 Nations release oil reserves to stabilize prices + 11 more stories rss
In the last 3 days Gemini read 88464 top news stories. After removing previously covered events, there are 12 articles with a significance score over 5.5.

[6.5] Germany and Austria join global effort to release oil reserves and stabilize prices —apnews.com(+1153)
The International Energy Agency will release a record 400 million barrels of emergency oil reserves to counter energy market disruptions and price spikes caused by Middle East conflict.
Member nations, including Germany and Austria, agreed to the release after Iran effectively halted oil traffic through the Strait of Hormuz. The move follows G7 discussions aimed at stabilizing global supplies as export volumes have plummeted below ten percent of prewar levels.
Established after the 1974 Arab oil embargo, the IEA has authorized emergency releases five times previously. Officials emphasize that restoring transit through the Strait of Hormuz remains essential for long-term market stability.
[5.8] China adopts an ethnic unity law that critics say will cement assimilation —newsday.com(+11)
China has adopted a sweeping ethnic unity law that critics say will accelerate the assimilation of minority groups by mandating Mandarin in schools and further eroding their cultural rights.
The legislation requires all organizations and citizens to foster a shared Chinese national identity. It essentially prohibits using minority languages for primary instruction during compulsory education, a move experts argue effectively dismantles China’s original constitutional promises of meaningful regional ethnic autonomy.
The measure also establishes extraterritorial legal penalties for overseas individuals deemed to harm ethnic unity. Additionally, it encourages cross- migration to create embedded communities, which scholars warn could break up minority-heavy neighborhoods.
[5.6] Artemis II mission targets early April for crewed lunar flyby —bbc.com(+67)
NASA targets early April for its Artemis II mission, which will carry four astronauts around the Moon for the first time in over 50 years after resolving technical issues.
Following repairs to a helium leak, officials plan to return the Space Launch System rocket to the Florida launchpad on March 19. The ten-day flight will carry three Americans and one Canadian to the lunar far side and back.
Highly covered news with significance over 5.5
[5.8] Gut bacteria linked to age-related memory loss in mice — nature.com (+13)
[5.8] China approves launch of world first brain-computer interface device — independent.co.uk (+2)
[5.7] Scientists revive activity in frozen mouse brains for the first time — nature.com (+2)
[5.6] Big Tech backs Anthropic in fight against Trump administration — bbc.com (+27)
[5.5] Google Maps integrates AI for personalized recommendations and immersive navigation — independent.co.uk (+44)
[5.5] Climate change slows Earth's rotation, lengthening days — g1.globo.com (Portuguese) (+8)
[5.5] AI use may be reducing stylistic diversity and human creativity, study finds — thetimes.com [$] (+4)
[5.5] International police disrupt global cybercrime by sinkholing 45,000 IP addresses — bleepingcomputer.com (+5)
[5.5] Astronomers witness colossal supernova explosion create one of the most magnetic stars in the universe for the first time — space.com (+9)
Thanks for reading!
— Vadim
You can create your own significance-based RSS feed with premium.
-
🔗 r/Leeds What do people from Leeds think of Manchester? Which city do you prefer? What does Manchester do right? What does Leeds do right? rss
I visited Manchester the other day and was struck by how very ’city’ like it feels. Lots of hustle and bustle, massive buildings, trans etc.
I think I prefer Leeds in most ways but it feels more like a very large town than a city.
submitted by /u/OneItchy396
[link] [comments] -
🔗 r/Harrogate Considering moving to Woodlands rss
Hi all Typical question about location appeal I've seen a lot, but hey any detail would be useful.
We've lived in Oatlands renting for 5 years roughly and are looking to buy a house. There's a relatively surprisingly cheap house on Tyson place in Woodlands we're considering. The wife's parents are saying it's a dodgy area and not to consider it, but comparing the crime rate to our location there was only about 10 more reported crimes within a half mile per year. Most of it was anti social behaviour.
We think it's objectively overblown but for anyone living close to that area specifically, does it feel a nice safe place to live?
Thanks in advance
submitted by /u/Matrixgypsy
[link] [comments] -
🔗 r/Yorkshire 'My language course helped me launch my life in the UK' rss
| After arriving in Bradford from Iraq, Hareth Alshaban was looking for a way to improve his English and launch his new life in the UK. The 24-year-old's time on the English for Speakers of Other Languages (ESOL) course was so successful that he ended up performing the lead role in a production of Romeo and Juliet, and he is now a youth worker. ESOL programmes are aimed at those who have some grasp of English, but want to improve their speaking and listening skills, reading and writing, and understanding of regional accents. West Yorkshire Combined Authority is investing in training new ESOL teachers as a way to improve inclusion and social cohesion, and demand is increasing. Alshaban, who is originally from Palestine, said he travelled "unwillingly" through Syria, Jordan, and Turkey before landing in Cyprus, where he stayed for a couple of years before returning to Iraq. He remained there until 2018, but was then resettled in Bradford as part of a UN programme. Alshaban could speak English "quite well" when he arrived, but found there was a "bit of a struggle with understanding the accent" and "the culture was different from what I was used to". "I was of told it was one of the first steps to developing in this country," he said. "I didn't really understand why I had to take it to begin with as I already spoke English, but I honestly have taken quite a lot out of it." He ended up reading Shakespeare's works as part of the course and becoming a youth advisory board member for the Royal Shakespeare Company. He eventually graduated in politics and international relations from Liverpool Hope University. submitted by /u/coffeewalnut08
[link] [comments]
---|--- -
🔗 r/LocalLLaMA I feel personally attacked rss
| submitted by /u/HeadAcanthisitta7390
[link] [comments]
---|--- -
🔗 r/LocalLLaMA I'm fully blind, and AI is a game changer for me. Are there any local LLMS that can rival claude code and codex? rss
Hi guys,
So, I am fully blind.
Since AI was released to the public, I have been a max user.
Why?
Because it has changed my life.
Suddenly, I am able to get very accurate image descriptions, when I get an inaccessible document, an AI can read it to me in a matter of seconds, when there is something inaccessible, I can use Python, swift, or whatever I want to build my own software that is exactly how I want it.
So far, I have access to Claude Code pro, codex pro and Copilot for business.
This is also draining my bank account.
So now, I have started investigating whether there is anything that can rival this in terms of precision and production ready apps and programs?
Not necessarily anything I will be releasing to the public, but with Claude Code, I can have a full featured accessible accounting program in a couple of days, that help me in my business.
Do you know of anything?
What is possible at the moment?
Thank you for your time.
submitted by /u/Mrblindguardian
[link] [comments] -
🔗 r/york Shambles sightings rss
| White chocolate shot submitted by /u/Ambivertpayyan
[link] [comments]
---|--- -
🔗 r/york Location near hospital - gaming rss
Hi
I've ended up in a situation where I have to be near York hospital (around a 30 minute walk) and I have plenty of time to kill.
I've got some games in my steam library I haven't gotten round to playing over the years
Could anyone please suggest any cafés or other locations I could potentially sit for a few hours playing them?
Thanks
submitted by /u/BladedChaos
[link] [comments] -
🔗 r/wiesbaden Need help to understand how to sign contract for gas. rss
Hey everyone,
I'm new to Germany, I recently moved for work, and rented long term apartment starting from 01.02.2026.
I knew I would need to sign contracts for gas and electricity, and I did with electricity without any problems, but with gas supplier I can't understand what is being asked from me.
I selected vattenfall on check24, and entered all my data: address, name, and meter number.
After that, I started receiving requests to specify my data, I kept entering same data as it remained the same. I knew it would somehow play differently if I provide Markt-ID, but I simply don't understand what is that and where to take it from, I only know that has to be on my invoice.
After time, on 26.02.2026 vattenfall cancelled my application since I haven't provided the "right data", so I tried applying again on their website.
It's now 13.03.2026 and I just received another letter from them, basically saying "We don't like your data, give us new data".I'm already using gas in this apartment for month and half, spent 120 cubic meters of gas already.
I have already received invoices for electricity and paid it, but this situation with gas provider unsettled gives me anxiety.Can anyone suggest what should I do in this case, or at least what is expected from me? Somehow none of that troubles were faced with electricity or internet.
Inb4, I did registered my address at citizens office.
submitted by /u/Dazzling_Mood2958
[link] [comments] -
🔗 ghostty-org/ghostty v1.3.1 release
v1.3.1
-
🔗 r/LocalLLaMA Avacado is toast rss
Meta's avacado doesn't meet the standards Facebook desires so it is now delayed till May . Zuc must be fuming after spending billions and getting subpar performance.
https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model- delayed.html
https://x.com/i/trending/2032258514568298991
submitted by /u/Terminator857
[link] [comments] -
🔗 r/york Pole dancing classes in York rss
Hi all,
I'm sure I remember hearing about pole dacing classes in York, but I can't seem to find any. A studio is called Pole Position, but their website is down and they don't repond on Facebook or by phone, so I'm guessing it must have closed down. Does anybody know of any active class in York?
Thanks :)
submitted by /u/nocrimia
[link] [comments] -
🔗 r/wiesbaden Kommunalwahl am Sonntag rss
Moin Leute,
Public Service Announcement dass am Sonntag Kommunawahlen sind!
Auch wenn es mühsam ist mit den über 70 Stimmen, bitte nutzt diese Möglichkeit mitzubestimmen. Bei einer konservativen Wende im Rathaus droht die Rückabwicklung vieler progressiver Fortschritte der vergangenen Jahre. Diese Wahl wird wirklich richtungsweisend für die Stadtpolitik der nächsten Jahre.
submitted by /u/valentino_nero
[link] [comments] -
🔗 r/reverseengineering Codex vs. Claude: Which One Handles Reverse Engineering Skills Better? rss
submitted by /u/milky_smooth_31
[link] [comments] -
🔗 r/wiesbaden Neuer Hygienebericht online rss
Lohnt sich mal zu prüfen.
https://verbraucherfenster.hessen.de/ernaehrung/sichere- lebensmittel/veroeffentlichung-maengel-lfgb
submitted by /u/Lumpy_Independent_93
[link] [comments] -
🔗 r/Yorkshire Lost nuclear bunker rediscovered at Scarborough Castle rss
| submitted by /u/My-Darling-Abyss
[link] [comments]
---|--- -
🔗 r/Leeds Survey on hair products and salon/barber usage rss
Hi, I'm Callum, a student at University of Leeds and I am doing my dissertation on consumer influence for sustainability. This survey takes around 2 minutes to complete and is completely anonymous. You will be asked a few questions about your hair care product usage, professional hair services usage, if you've used 'eco-friendly' products before, and what would influence or disinfluence you from buying a hair product. If you have a spare 2 minutes from now til Monday, I'd really, really appreciate it :) x
https://app.onlinesurveys.jisc.ac.uk/s/leeds/usage-of-hair-products-and-hair- salons
submitted by /u/Critical-Business442
[link] [comments] -
🔗 r/york Gutter cleaning recommendations rss
Does anyone have recommendations for local, trustworthy, gutter cleaning services in York?
A lot of my searches for gutter cleaning services seem to end up on similar looking websites run by "big gutter". I searched this sub too, with little result.
Thanks!
submitted by /u/LIKE-AN-ANIMAL
[link] [comments] -
🔗 r/Leeds Looking for info on my grandfather rss
Morning all,
Does anybody remember or hear of a black caribbean man who went by “little Peter” - full name Peter Joseph. He lived in Chapel Town & Harehills, then he moved on to Bradford & we think he then moved to London. He had atleast two children called Emma & Christopher ‘Chris’.
He was born in the early 1940’s and he was from St Lucia, spoke a couple of different languages, French being one of them and he was in the merchant navy before coming to England and at some point he worked in a coal mine.
My grandad had two distinctive gold teeth, he played in a steel drum band and they practiced every Thursday evening.
My dad, Christopher, is apparently the double of my grandad Peter so I can provide a photo of my dad to jog people’s memories.
Thank you all for reading!
submitted by /u/cprez04
[link] [comments] -
🔗 r/LocalLLaMA Saw this somewhere on LinkedIn 😂 rss
| submitted by /u/Optimalutopic
[link] [comments]
---|--- -
🔗 r/york York guys in their 20s rss
Hi all, I’m 26 and been living in York for just over a year now with a couple. Love the city and made plenty of “friendly acquaintances” through sports clubs, but don’t necessarily feel like I’ve made many “friends” as many are in committed relationships and feel like they’re at a different life stage to me or have to always come as a package 😂
I love any sports, especially run a lot and play a bit of football and badminton. I’m a big foodie and enjoy going out to restaurants and cooking myself. Go to Cineworld a fair bit and even though I don’t drink but enjoy a good pub quiz.
Seen these posts in other places where people recommend the meet up app but I don’t think it’s as good as it used to be as doesn’t seem to be much on there for my age, and a lot of Facebook groups tend to be much older folk too.
So if there are any guys in their 20s in a similar situation or know of good spots, please reach out!
submitted by /u/Tall_Tiger_1999
[link] [comments] -
🔗 r/reverseengineering Agentic Reverse Engineering + Binary Analysis with Kong rss
submitted by /u/Gloomy_King8147
[link] [comments] -
🔗 r/Harrogate Best Fish and Chips in Harrogate? rss
I'm in Pannal for next few days and I'd love to have some local fish and chips.
I know it's a controversial topic, but who makes the best fish and chips?
submitted by /u/coffeebugtravels
[link] [comments] -
🔗 r/wiesbaden Geldbeutel verloren rss
Geldbeutel verloren
Hallo,
Ich habe neinen Geldbeutel in der Nähe vom Lidl , Angelika -Thiels Strasse verloren . Grosszügige Belohnung!
submitted by /u/StockDirector4021
[link] [comments] -
🔗 r/Leeds Was looking on Bustimes.org as you do, here's a look at 1 of 5 (4 in service, one as spare) of the Volvo B8 MCV Evoras coming to GAWY X98/X99. Their debut on the route depends on when the CCTV cameras arrives & get fitted. rss
If I remember correctly from the enthusiast page I'm on they'll have dealer spec which if you've been on the ones on Connexions Buses 11 you'll have the idea of what to expect. Compared to the ADL Enviro200MMCs currently in service these are bigger, higher capacity and better at hills which their more powerful Volvo 8 liter engine (ADL ones I think in those specific ones could be a 4.5 liter cummins engine)
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/reverseengineering Android Vulnerability Reproduction with OpenClaw rss
submitted by /u/Maleficent_Issue1336
[link] [comments] -
🔗 sacha chua :: living an awesome life Comparing pronunciation recordings across time rss
- : Added reference audio for the second set.
- : I added pronunciation segments for the new set of tongue-twisters I got on Mar 13.
- : I added a column for Feb 20, the first session with the sentences. I also added keyboard shortcuts (1..n) for playing the audio of the row that the mouse is on.
2026-02-20: First set: Maman peint un grand lapin blanc, etc.
My French tutor gave me a list of sentences to help me practise pronunciation.
I can fuzzy-match these with the word timing JSON from WhisperX, like this.
Extract all approximately matching phrases(subed-record-extract-all-approximately-matching-phrases sentences "/home/sacha/sync/recordings/2026-02-20-raphael.json" "/home/sacha/proj/french/analysis/virelangues/2026-02-20-raphael-script.vtt")Sentences- Maman peint un grand lapin blanc.
- Un enfant intelligent mange lentement.
- Le roi croit voir trois noix.
- Le témoin voit le chemin loin.
- Moins de foin au loin ce matin.
- La laine beige sèche près du collège.
- La croquette sèche dans l'assiette.
- Elle mène son frère à l'hôtel.
- Le verre vert est très clair.
- Elle aimait manger et rêver.
- Le jeu bleu me plaît peu.
- Ce neveu veut un jeu.
- Le feu bleu est dangereux.
- Le beurre fond dans le cœur chaud.
- Les fleurs de ma sœur sentent bon.
- Le hibou sait où il va.
- L'homme fort mord la pomme.
- Le sombre col tombe.
- L'auto saute au trottoir chaud.
- Le château d'en haut est beau.
- Le cœur seul pleure doucement.
- Tu es sûr du futur ?
- Trois très grands trains traversent trois trop grandes rues.
- Je veux deux feux bleus, mais la reine préfère la laine beige.
- Vincent prend un bain en chantant lentement.
- La mule sûre court plus vite que le loup fou.
- Luc a bu du jus sous le pont où coule la boue.
- Le frère de Robert prépare un rare rôti rouge.
- La mule court autour du mur où hurle le loup.
Then I can use subed-record to manually tweak them, add notes, and so on. I end up with VTT files like 2026-03-06-raphael-script.vtt. I can assemble the snippets for a session into a single audio file, like this:
I wanted to compare my attempts over time, so I wrote some code to use Org Mode and subed-record to build a table with little audio players that I can use both within Emacs and in the exported HTML. This collects just the last attempts for each sentence during a number of my sessions (both with the tutor and on my own). The score is from the Microsoft Azure pronunciation assessment service. I'm not entirely sure about its validity yet, but I thought I'd add it for fun.
*indicates where I've added some notes from my tutor, which should be available as atitleattribute on hover. (Someday I'll figure out a mobile-friendly way to do that.)Calling it with my sentences and files(my-lang-summarize-segments sentences '(("/home/sacha/proj/french/analysis/virelangues/2026-02-20-raphael-script.vtt" . "Feb 20") ;("~/sync/recordings/processed/2026-02-20-raphael-tongue-twisters.vtt" . "Feb 20") ("~/sync/recordings/processed/2026-02-22-virelangues-single.vtt" . "Feb 22") ("~/proj/french/recordings/2026-02-26-virelangues-script.vtt" . "Feb 26") ("~/proj/french/recordings/2026-02-27-virelangues-script.vtt" . "Feb 27") ("~/proj/french/recordings/2026-03-03-virelangues.vtt" . "Mar 3") ("/home/sacha/sync/recordings/processed/2026-03-03-raphael-reference-script.vtt" . "Mar 3") ("~/proj/french/analysis/virelangues/2026-03-06-raphael-script.vtt" . "Mar 6") ("~/proj/french/analysis/virelangues/2026-03-12-virelangues-script.vtt" . "Mar 12")) "clip" #'my-lang-subed-record-get-last-attempt #'my-lang-subed-record-cell-info t )Feb 20 Feb 22 Feb 26 Feb 27 Mar 3 Mar 3 Mar 6 Mar 12 Text ▶️ 63* ▶️ 96 ▶️ 95 ▶️ 94 ▶️ 83 ▶️ 83* ▶️ 81* ▶️ 88 Maman peint un grand lapin blanc. ▶️ 88* ▶️ 95 ▶️ 99 ▶️ 99 ▶️ 96 ▶️ 89* ▶️ 92* ▶️ 83 Un enfant intelligent mange lentement. ▶️ 84* ▶️ 97 ▶️ 97 ▶️ 96 ▶️ 94 ▶️ 95* ▶️ 98* ▶️ 99 Le roi croit voir trois noix. ▶️ 80* ▶️ 85 ▶️ 77 ▶️ 94 ▶️ 97 ▶️ 92* ▶️ 88 Le témoin voit le chemin loin. ▶️ 72* ▶️ 97 ▶️ 95 ▶️ 77 ▶️ 92 ▶️ 89* ▶️ 86 Moins de foin au loin ce matin. ▶️ 79* ▶️ 95 ▶️ 76 ▶️ 95 ▶️ 76 ▶️ 90* ▶️ 90* ▶️ 79 La laine beige sèche près du collège. ▶️ 67* ▶️ 99 ▶️ 85 ▶️ 81 ▶️ 85 ▶️ 99* ▶️ 97* ▶️ 97 La croquette sèche dans l'assiette. ▶️ 88* ▶️ 99 ▶️ 100 ▶️ 100 ▶️ 98 ▶️ 100* ▶️ 99* ▶️ 100 Elle mène son frère à l'hôtel. ▶️ 77* ▶️ 87 ▶️ 99 ▶️ 93 ▶️ 87 ▶️ 87* ▶️ 99 Le verre vert est très clair. ▶️ 100* ▶️ 94 ▶️ 100 ▶️ 99 ▶️ 99 ▶️ 99* ▶️ 100* ▶️ 100 Elle aimait manger et rêver. ▶️ 78* ▶️ 98 ▶️ 99 ▶️ 98 ▶️ 98 ▶️ 92* ▶️ 88 Le jeu bleu me plaît peu. ▶️ 78* ▶️ 97 ▶️ 85 ▶️ 95 ▶️ 85 ▶️ 85 Ce neveu veut un jeu. ▶️ 73* ▶️ 95 ▶️ 95 ▶️ 96 ▶️ 97 ▶️ 100 Le feu bleu est dangereux. ▶️ 87* ▶️ 76 ▶️ 65 ▶️ 97 ▶️ 85 ▶️ 74* ▶️ 85* ▶️ 96 Le beurre fond dans le cœur chaud. ▶️ 84* ▶️ 43 ▶️ 85 ▶️ 79 ▶️ 75 ▶️ 98 Les fleurs de ma sœur sentent bon. ▶️ 70* ▶️ 86 ▶️ 79 ▶️ 76 ▶️ 87 ▶️ 84 ▶️ 98 Le hibou sait où il va. ▶️ 92* ▶️ 95 ▶️ 86 ▶️ 92 ▶️ 98 ▶️ 99* ▶️ 94 L'homme fort mord la pomme. ▶️ 83* ▶️ 73 ▶️ 69 ▶️ 81 ▶️ 60 ▶️ 96* ▶️ 81 Le sombre col tombe. ▶️ 39* ▶️ 49 ▶️ 69 ▶️ 56 ▶️ 69 ▶️ 96* ▶️ 94 L'auto saute au trottoir chaud. ▶️ 82 ▶️ 84 ▶️ 85 ▶️ 98 ▶️ 94 ▶️ 96* ▶️ 99 Le château d'en haut est beau. ▶️ 89 ▶️ 85 ▶️ 75 ▶️ 91 ▶️ 52 ▶️ 75* ▶️ 70* ▶️ 98 Le cœur seul pleure doucement. ▶️ 98* ▶️ 99 ▶️ 99 ▶️ 95 ▶️ 93* ▶️ 97* ▶️ 99 Tu es sûr du futur ? ▶️ 97 ▶️ 93 ▶️ 92 ▶️ 85* ▶️ 90 Trois très grands trains traversent trois trop grandes rues. ▶️ 94 ▶️ 85 ▶️ 97 ▶️ 82* ▶️ 92 Je veux deux feux bleus, mais la reine préfère la laine beige. ▶️ 91 ▶️ 79 ▶️ 87 ▶️ 82* ▶️ 94 Vincent prend un bain en chantant lentement. ▶️ 89 ▶️ 91 ▶️ 91 ▶️ 84* ▶️ 92 La mule sûre court plus vite que le loup fou. ▶️ 91 ▶️ 93 ▶️ 93 ▶️ 92* ▶️ 96 Luc a bu du jus sous le pont où coule la boue. ▶️ 88 ▶️ 71 ▶️ 94 ▶️ 86* ▶️ 92 Le frère de Robert prépare un rare rôti rouge. ▶️ 81 ▶️ 84 ▶️ 88 ▶️ 67* ▶️ 94 La mule court autour du mur où hurle le loup. Pronunciation still feels a bit hit or miss. Sometimes I say a sentence and my tutor says "Oui," and then I say it again and he says "Non, non…" The
/ʁ/and/y/sounds are hard.I like seeing these compact links in an Org Mode table and being able to play them, thanks to my custom audio link type. It should be pretty easy to write a function that lets me use a keyboard shortcut to play the audio (maybe using the keys 1-9?) so that I can bounce between them for comparison.
If I screen-share from Google Chrome, I can share the tab with audio, so my tutor can listen to things at the same time. Could be fun to compare attempts so that I can try to hear the differences better. Hmm, actually, let's try adding keyboard shortcuts that let me use 1-8, n/p, and f/b to navigate and play audio. Mwahahaha! It works!
2026-03-14: Second set: Mon oncle peint un grand pont blanc, etc.
Update 2026-03-14: My tutor gave me a new set of tongue-twisters. When I'm working on my own, I find it helpful to loop over an audio reference with a bit of silence after it so that I can repeat what I've heard. I have several choices for reference audio:
- I can generate an audio file using text-to-speech, like a local instance of Kokoro TTS, or a hosted service like Google Translate (via gtts-cli), ElevenLabs, or Microsoft Azure.
- I can extract a recording of my tutor from one of my sessions.
- I can extract a recording of myself from one of my tutoring sessions where my tutor said that the pronunciation is alright.
Here I stumble through the tongue-twisters. I've included reference audio from Kokoro, gtts, and ElevenLabs for comparison.
(my-subed-record-analyze-file-with-azure (subed-record-keep-last (subed-record-filter-skips (subed-parse-file "/home/sacha/proj/french/analysis/virelangues/2026-03-13-raphael-script.vtt"))) "~/proj/french/analysis/virelangues-2026-03-13/2026-03-13-clip")Kk Gt Az Me ID Comments All Acc Flu Comp Conf 👂🏼 👂🏼 👂🏼 ▶️ 1 X: pont 93 99 90 100 86 Mon oncle peint un grand pont blanc. {pont} 👂🏼 👂🏼 👂🏼 ▶️ 2 C'est mieux 68 75 80 62 87 Un singe malin prend un bon raisin rond. 👂🏼 👂🏼 👂🏼 ▶️ 3 Ouais, c'est ça 83 94 78 91 89 Dans le vent du matin, mon chien sent un bon parfum. 👂🏼 👂🏼 👂🏼 ▶️ 4 ok 75 86 63 100 89 Le soin du roi consiste à joindre chaque coin du royaume. 👂🏼 👂🏼 👂🏼 ▶️ 5 Ouais, c'est ça, parfait 83 94 74 100 88 Dans un coin du bois, le roi voit trois points noirs. 👂🏼 👂🏼 👂🏼 ▶️ 6 Ouais, parfait 90 92 87 100 86 Le feu de ce vieux four chauffe peu. 👂🏼 👂🏼 👂🏼 ▶️ 7 Ouais 77 85 88 71 86 Deux peureux veulent un peu de feu. 👂🏼 👂🏼 👂🏼 ▶️ 8 77 78 75 83 85 Deux vieux bœufs veulent du beurre. 👂🏼 👂🏼 👂🏼 ▶️ 9 Ouais, parfait 92 94 89 100 89 Elle aimait marcher près de la rivière. 👂🏼 👂🏼 👂🏼 ▶️ 10 Ok, c'est bien 93 98 89 100 90 Je vais essayer de réparer la fenêtre. 👂🏼 👂🏼 👂🏼 ▶️ 11 Okay 83 87 76 100 89 Le bébé préfère le lait frais. 👂🏼 👂🏼 👂🏼 ▶️ 12 77 92 70 86 90 Charlotte cherche ses chaussures dans la chambre. 👂🏼 👂🏼 👂🏼 ▶️ 13 Okay 91 90 94 91 88 Un chasseur sachant chasser sans son chien est-il un bon chasseur ? 👂🏼 👂🏼 👂🏼 ▶️ 14 Ouais 91 88 92 100 91 Le journaliste voyage en janvier au Japon. 👂🏼 👂🏼 👂🏼 ▶️ 15 C'est bien (X: dans un) 91 88 94 100 88 Georges joue du jazz dans un grand bar. {dans un} 👂🏼 👂🏼 👂🏼 ▶️ 16 C'est bien 88 87 94 88 85 Un jeune joueur joue dans le grand gymnase. 👂🏼 👂🏼 👂🏼 ▶️ 17 95 94 96 100 91 Le compagnon du montagnard soigne un agneau. 👂🏼 👂🏼 👂🏼 ▶️ 18 85 88 84 86 89 La cigogne soigne l’agneau dans la campagne. 👂🏼 👂🏼 👂🏼 ▶️ 19 grenouille 71 80 68 75 86 La grenouille fouille les feuilles dans la broussaille. The code
Code for summarizing the segments(defun my-lang-subed-record-cell-info (item file-index file sub) (let* ((sound-file (expand-file-name (format "%s-%s-%d.opus" prefix (my-transform-html-slugify item) (1+ file-index)))) (score (car (split-string (or (subed-record-get-directive "#+SCORE" (elt sub 4)) "") ";"))) (note (replace-regexp-in-string (concat "^" (regexp-quote (cdr file)) "\\(: \\)?") "" (or (subed-record-get-directive "#+NOTE" (elt sub 4)) "")))) (when (or always-create (not (file-exists-p sound-file))) (subed-record-extract-audio-for-current-subtitle-to-file sound-file sub)) (org-link-make-string (concat "audio:" sound-file "?icon=t" (format "&source=%s&source-start=%s" (car file) (elt sub 1)) (format "&title=%s" (url-hexify-string (if (string= note "") (cdr file) (concat (cdr file) ": " note))))) (concat "▶️" (if score (format " %s" score) "") (if (string= note "") "" "*"))))) (defun my-lang-subed-record-get-last-attempt (item file) "Return the last subtitle matching ITEM in FILE." (car (last (seq-remove (lambda (o) (string-match "#\\+SKIP" (or (elt o 4) ""))) (learn-lang-subed-record-collect-matching-subtitles item (list file) nil nil 'my-subed-simplify))))) (defun my-lang-summarize-segments (items files prefix attempt-fn cell-fn &optional always-create) (cons (append (seq-map 'cdr files) (list "Text")) (seq-map (lambda (item) (append (seq-map-indexed (lambda (file file-index) (let* ((sub (funcall attempt-fn item file))) (if sub (funcall cell-fn item file-index file sub) ""))) files) (list item))) items))) (defun my-subed-record-analyze-file-with-azure (subtitles prefix &optional always-create) (cons '("Kk" "Gt" "Az" "Me" "ID" "Comments" "All" "Acc" "Flu" "Comp" "Conf") (seq-map-indexed (lambda (sub i) (let ((sound-file (expand-file-name (format "%s-%02d.opus" prefix (1+ i)))) (tts-services '(("kokoro" . learn-lang-tts-kokoro-fastapi-say) ("gtts" . learn-lang-tts-gtts-say) ("azure" . learn-lang-tts-azure-say))) tts-files (note (subed-record-get-directive "#+NOTE" (elt sub 4)))) (when (or always-create (not (file-exists-p sound-file))) (subed-record-extract-audio-for-current-subtitle-to-file sound-file sub)) (setq tts-files (mapcar (lambda (row) (let ((reference (format "%s-%s-%02d.opus" prefix (car row) (1+ i) ))) (when (or always-create (not (file-exists-p reference))) (funcall (cdr row) (subed-record-simplify (elt sub 3)) 'sync reference)) (org-link-make-string (concat "audio:" reference "?icon=t¬e=" (url-hexify-string (car row))) "👂🏼"))) tts-services)) (append tts-files (list (org-link-make-string (concat "audio:" sound-file "?icon=t" (format "&source-start=%s" (elt sub 1)) (if (and note (not (string= note ""))) (format "&title=%s" (url-hexify-string note)) "")) "▶️") (format "%d" (1+ i)) (or note "")) (learn-lang-azure-subed-record-parse (elt sub 4)) (list (elt sub 3))))) subtitles)))Some code for doing this stuff is in sachac/learn-lang on Codeberg.
You can e-mail me at sacha@sachachua.com.
-
🔗 Rust Blog Call for Testing: Build Dir Layout v2 rss
We would welcome people to try and report issues with the nightly-only
cargo -Zbuild-dir-new-layout. While the layout of the build dir is internal-only, many projects need to rely on the unspecified details due to missing features within Cargo. While we've performed a crater run, that won't cover everything and we need help identifying tools and process that rely on the details, reporting issues to these projects so they can update to the new layout or support them both.How to test this?
With at least nightly 2026-03-10, run your tests, release processes, and anything else that may touch build-dir/target-dir with the
-Zbuild-dir-new- layoutflag.For example:
$ cargo test -Zbuild-dir-new-layoutNote: if you see failures, the problem may not be isolated to just
-Zbuild- dir-new-layout. With Cargo 1.91, users can separate where to store intermediate build artifacts (build-dir) and final artifacts (still in target-dir). You can verify this by running with onlyCARGO_BUILD_BUILD_DIR=buildset. We are evaluating changing the default for build-dir in #16147.Outcomes may include:
- Fixing local problems
- Reporting problems in upstream tools with a note on the the tracking issue for others
- Providing feedback on the the tracking issue
Known failure modes:
- Inferring a
[[bin]]s path from a[[test]]s path:- Use
std::env::var_os("CARGO_BIN_EXE_*")for Cargo 1.94+, maybe keeping the inference as a fallback for older Cargo versions - Use
env!("CARGO_BIN_EXE_*")
- Use
- Build scripts looking up target-dir from their binary or
OUT_DIR: see Issue #13663- Update current workarounds to support the new layout
- Looking up user-requested artifacts from rustc, see Issue #13672
- Update current workarounds to support the new layout
Library support status as of publish time:
- assert_cmd: fixed
- cli_test_dir: Issue #65
- compiletest_rs: Issue #309
- executable-path: fixed
- snapbox: fixed
- term-transcript: Issue #269
- test_bin: Issue #13
- trycmd: fixed
What is not changing?
The layout of final artifacts within target dir.
Nesting of build artifacts under the profile and the target tuple, if specified.
What is changing?
We are switching from organizing by content type to scoping the content by the package name and a hash of the build unit and its inputs.
Here is an example of the current layout, assuming you have a package named
liband a package namedbin, and both have a build script:build-dir/ ├── CACHEDIR.TAG └── debug/ ├── .cargo-lock # file lock protecting access to this location ├── .fingerprint/ # build cache tracking │ ├── bin-[BUILD_SCRIPT_RUN_HASH]/* │ ├── bin-[BUILD_SCRIPT_BIN_HASH]/* │ ├── bin-[HASH]/* │ ├── lib-[BUILD_SCRIPT_RUN_HASH]/* │ ├── lib-[BUILD_SCRIPT_BIN_HASH]/* │ └── lib-[HASH]/* ├── build/ │ ├── bin-[BIN_HASH]/* # build script binary │ ├── bin-[RUN_HASH]/out/ # build script run OUT_DIR │ ├── bin-[RUN_HASH]/* # build script run cache │ ├── lib-[BIN_HASH]/* # build script binary │ ├── lib-[RUN_HASH]/out/ # build script run OUT_DIR │ └── lib-[RUN_HASH]/* # build script run cache ├── deps/ │ ├── bin-[HASH]* # binary and debug information │ ├── lib-[HASH]* # library and debug information │ └── liblib-[HASH]* # library and debug information ├── examples/ # unused in this case └── incremental/... # managed by rustcThe proposed layout:
build-dir/ ├── CACHEDIR.TAG └── debug/ ├── .cargo-lock # file lock protecting access to this location ├── build/ │ ├── bin/ # package name │ │ ├── [BUILD_SCRIPT_BIN_HASH]/ │ │ │ ├── fingerprint/* # build cache tracking │ │ │ └── out/* # build script binary │ │ ├── [BUILD_SCRIPT_RUN_HASH]/ │ │ │ ├── fingerprint/* # build cache tracking │ │ │ ├── out/* # build script run OUT_DIR │ │ │ └── run/* # build script run cache │ │ └── [HASH]/ │ │ ├── fingerprint/* # build cache tracking │ │ └── out/* # binary and debug information │ └── lib/ # package name │ ├── [BUILD_SCRIPT_BIN_HASH]/ │ │ ├── fingerprint/* # build cache tracking │ │ └── out/* # build script binary │ ├── [BUILD_SCRIPT_RUN_HASH]/ │ │ ├── fingerprint/* # build cache tracking │ │ ├── out/* # build script run OUT_DIR │ │ └── run/* # build script run cache │ └── [HASH]/ │ ├── fingerprint/* # build cache tracking │ └── out/* # library and debug information └── incremental/... # managed by rustcFor more information on these Cargo internals, see the
mod layoutdocumentation.Why is this being done?
ranger-ross has worked tirelessly on this as a stepping stone to cross-workspace caching which will be easier when we can track each cacheable unit in a self-contained directory.
This also unblocks work on:
- Automatic cleanup of stale build units to keep disks space use constant over time
- More granular locking so
cargo testand rust-analyzer don't block on each other
Along the way, we found this helps with:
- Build performance as the intermediate artifacts accumulate in
deps/ - Content of
deps/pollutingPATHduring builds on Windows - Avoiding file collisions among intermediate artifacts
While the Cargo team does not officially endorse sharing a
build-diracross workspaces, that last item should reduce the chance of encountering problems for those who choose to.Future work
We will use the experience of this layout change to help guide how and when to perform any future layout changes, including:
- Efforts to reduce path lengths to reduce risks for errors for developers on Windows
- Experimenting with moving artifacts out of the
--profileand--targetdirectories, allowing sharing of more artifacts where possible
In addition to narrowing scope, we did not do all of the layout changes now because some are blocked on the lock change which is blocked on this layout change.
We would also like to work to decouple projects from the unspecified details of build-dir.
-
- March 12, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-12 rss
IDA Plugin Updates on 2026-03-12
New Releases:
Activity:
- augur
- binlex
- 19b79a61: fix windows ci/cd warnings for node
- fdadd375: simplify vex implementation
- 5425c6cd: cleanup
- 836948c2: simplify ratios, not needed
- f035159f: simplify disassemblers api, and bump python binding lib
- 0ac42a9a: cfg api change absorb to merge, makes it eaiser to understand
- 957657f3: fix edges and rip-relative jumps
- 1a895dff: fix disassembling bug queuing
- 5a0fd3a9: performance
- bd504b69: hash compare restore
- binsync
- btrace
- da12f7b9: Arch-specific handlers compilation
- capa
- haruspex
- ida-dbimporter
- IDA-MCP
- idasql
- e6b41cab: docs: clarify pseudocode comment anchor selection
- 366385a6: chore: prepare v0.0.11 release
- 95451f42: Merge remote-tracking branch 'origin/main' into work
- da827db6: fix: avoid replaying stale funcs prototype during rename
- 94668b1f: Merge pull request #24 from allthingsida/work
- c0eac083: fix: simplify RPATH to match SDK GNU make convention
- 53eb0704: fix: remove GIT_SHALLOW for pinned fastmcpp commit hash
- 46a27c14: idasql: improve pseudocode comment handling and entity search
- python-elpida_core.py
- ac9d7d3d: fix: merge-safe S3 push + add regenerate_d15_index to Docker
- 9bd9ea55: update System tab version header to v3.0.0
- 2c382298: birth living axiom agents: 12 axioms that discuss, debate, vote, and act
- 3b545d41: close vocabulary gaps: align all axiom/domain names to canonical config
- 6e57821d: Unfreeze elpida_core.py — Agent of Agents (v2.0.0)
- 8a138119: feat: A11 — World (7/5 Septimal Tritone) codified
- rhabdomancer
-
🔗 r/LocalLLaMA OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories rss
Overview
OmniCoder-9B is a 9-billion parameter coding agent model built by Tesslate, fine-tuned on top of Qwen3.5-9B's hybrid architecture (Gated Delta Networks interleaved with standard attention). It was trained on 425,000+ curated agentic coding trajectories spanning real-world software engineering tasks, tool use, terminal operations, and multi-step reasoning.
The training data was specifically built from Claude Opus 4.6 agentic and coding reasoning traces , targeting scaffolding patterns from Claude Code, OpenCode, Codex, and Droid. The dataset includes successful trajectories from models like Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro.
The model shows strong agentic behavior: it recovers from errors (read-before- write), responds to LSP diagnostics, and uses proper edit diffs instead of full rewrites. These patterns were learned directly from the real-world agent trajectories it was trained on.
Key Features
- Trained on Frontier Agent Traces : Built from Claude Opus 4.6, GPT-5.3-Codex, GPT-5.4, and Gemini 3.1 Pro agentic coding trajectories across Claude Code, OpenCode, Codex, and Droid scaffolding
- Hybrid Architecture : Inherits Qwen3.5's Gated Delta Networks interleaved with standard attention for efficient long-context processing
- 262K Native Context : Full 262,144 token context window, extensible to 1M+
- Error Recovery : Learns read-before-write patterns, responds to LSP diagnostics, and applies minimal edit diffs instead of full rewrites
- Thinking Mode : Supports
<think>...</think>reasoning chains for complex problem decomposition - Apache 2.0 : Fully open weights, no restrictions
https://huggingface.co/Tesslate/OmniCoder-9B
submitted by /u/DarkArtsMastery
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +3 releases, ~3 changed rss
sync repo: +1 plugin, +3 releases, ~3 changed ## New plugins - [HashDB](https://github.com/OALabs/hashdb-ida) (1.10.0) ## New releases - [DBImporter](https://github.com/HexRaysSA/ida-dbimporter): 0.0.2 - [Suture](https://github.com/libtero/suture): 1.2.0 ## Changes - [bindiff](https://github.com/HexRays-plugin-contributions/bindiff): - 8.0.0: download URL changed - [binexport](https://github.com/HexRays-plugin-contributions/binexport): - 12.0.0: download URL changed - [xray](https://github.com/HexRays-plugin-contributions/xray): - 2025.9.24: download URL changed -
🔗 r/reverseengineering Reverse Engineering the undocumented ResetEngine.dll: A C++ tool to programmatically trigger a silent Windows Factory Reset (PBR) bypassing SystemSettings UI. rss
submitted by /u/Fast_Particular_8377
[link] [comments] -
🔗 r/Yorkshire The Life of Chuck rss
| Just started watching this on Netflix.... this is what they think North Yorkshire looks like? submitted by /u/Neffwood
[link] [comments]
---|--- -
🔗 r/reverseengineering Near complete hypervisor, driver, and system binary analysis for the Xbox Series consoles rss
submitted by /u/SeaHovercraft8271
[link] [comments] -
🔗 r/york Yorks Royal Chamberpot rss
| Charles Il chamberpot made by Marmaduke Best, York. Marmaduke Rawdon gave the City of York a "silver chamber pott of the value of ten punds". In 1850, Queen Victoria’s husband, Prince Albert, visited the Mansion House and may have used the chamberpot! submitted by /u/York_shireman
[link] [comments]
---|--- -
🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss
Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?
A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 80+ members and counting!
We had a successful recruitment on here a few months ago which blew up & got overwhelming so had to trickle people in but there are too many to go through, so starting a new fresh post to add more people
The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.
We have regular nights out especially on Weekends (Keyclub Club Nights, Spoons, Bars, NQ64, Pixel Bar, Flight Club, Cinema trips.. anything fun really!) which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave. but we are always planning social events, mid week chill things etc
If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome
I will invite in slowly as to keep the ratio of ages, sex etc balanced so theres always people of similar age etc
Leave a comment & I'll DM an invite when available! x
PLEASE CHECK DMS FOR INVITES
submitted by /u/rmonkey100
[link] [comments] -
🔗 r/LocalLLaMA Qwen3.5-9B is actually quite good for agentic coding rss
I have to admit I am quite impressed. My hardware is an Nvidia Geforce RTX 3060 with 12 GB VRAM so it's quite limited. I have been "model-hopping" to see what works best for me.
I mainly did my tests with Kilo Code but sometimes I tried Roo Code as well
Originally I used a customized Qwen 2.5 Coder for tools calls, It was relatively fast but usually would fail doing tool calls.Then I tested multiple Unsloth quantizations on Qwen 3 Coder. 1-bit quants would work also relatively fast but usually failed doing tool calls as well. However I've been using UD- TQ1_0 for code completion with Continue and has been quite good, better than what I experienced compared to smaller Qwen2.5 Coder models. 2-bit quants worked a little bit better (it would still fail sometimes), however it started feeling really slow and kinda unstable.
Then, similarly to my original tests with Qwen 2.5, tried this version of Qwen3, also optimized for tools (14b), my experience was significantly better but still a bit slow, I should probably have gone with 8b instead. I noticed that, these general Qwen versions that are not optimized for coding worked better for me, probably because they were smaller and would fit better, so instead of trying Qwen3-8b, I went with Qwen3.5-9b, and this is where I got really surprised.
Finally had the agent working for more than an hour, doing kind of significant work and capable of going on by itself without getting stuck.
I know every setup is different, but if you are running on consumer hardware with limited VRAM, I think this represents amazing progress.
TL;DR : Qwen 3.5 (9B) with 12 VRAM actually works very well for agentic calls. Unsloth-Qwen3 Coder 30B UD-TQ1_0 is good for code completion
submitted by /u/Lualcala
[link] [comments] -
🔗 r/reverseengineering Live From RE//verse 2026: WARP Signatures with Mason Reed (Stream - 06/03/2026) rss
submitted by /u/jershmagersh
[link] [comments] -
🔗 backnotprop/plannotator v0.12.0 release
Follow @plannotator on X for updates
Claude Code users, want to give feedback on approval? Please upvote & comment here.
Missed recent releases? Release | Highlights
---|---
v0.11.4 | Git add from code review, bidirectional scroll navigation, clipboard paste for annotation images, VS Code IPC port stability
v0.11.3 | Expandable diff context, hierarchical folder tree, redesigned worktree controls, supply chain hardening
v0.11.2 | Git worktree support in code review, VS Code editor annotations in review, Obsidian auto-save & separator settings, session discovery, smart file resolution
v0.11.1 | VS Code extension for in-editor plan review, Pinpoint mode for point-and-click annotations, untracked files in code review
v0.11.0 | Auto-save annotation drafts, comment popover, Obsidian vault browser, deny message framing fix, configurable OpenCode timeout
v0.10.0 | Short URL sharing with E2E encryption, code suggestions in review UI, CJK input method support, customizable Obsidian filenames, XDG install fix
v0.9.3 | Linked document navigation & annotation, VS Code diff integration, toolbar dismiss fix, automated npm publishing
v0.9.0 | Plan Diff with two view modes, version history, sidebar redesign, terminology cleanup
v0.8.5 | Pi coding agent support, auto-close countdown, image endpoint security fix, OpenCode package fix
v0.8.0 | Open source (MIT/Apache-2.0), annotate command, self-hosted share portal, resizable panels, mermaid controls, auto-close on approval, documentation site
What's New in v0.12.0 This is a community release. Ten of the fourteen PRs in v0.12.0 were authored by external contributors, spanning three major features and a sweep of cross- platform fixes. The annotation system gained preset labels for one-click feedback — no typing, just click and move on. The plan viewer now renders Graphviz diagrams alongside Mermaid, inline markdown images with a lightbox zoom, and renders all diagrams by default instead of showing raw source. And the entire UI works on mobile. Quick Annotation Labels Reviewing a plan often means the same feedback applies to multiple sections — "clarify this," "verify this assumption," "match existing patterns." Quick Labels turn those into one-click preset chips that appear above the annotation toolbar. Select text, click a label, done. No typing required. Ten default labels ship out of the box, each with an emoji and a color-coded pill: ❓ Clarify this · 🗺️ Missing overview · 🔍 Verify this · 🔬 Give me an example · 🧬 Match existing patterns · 🔄 Consider alternatives · 📉 Ensure no regression · 🚫 Out of scope · 🧪 Needs tests · 👍 Nice approach Several labels carry agent-facing tips that get injected into the feedback. For example, selecting a section and clicking "🔍 Verify this" tells the agent: "This seems like an assumption. Verify by reading the actual code before proceeding." The "🧬 Match existing patterns" label instructs the agent to search the codebase for existing solutions rather than introducing a new approach. These tips are invisible to the reviewer but shape how the agent responds. When the feedback is exported, labeled annotations are grouped into a Label Summary section at the bottom — **🔍 Verify this**: 3 — so both the reviewer and the agent can see at a glance which patterns recur across the plan. Labels are fully customizable in Settings. Add up to 12, reorder them, pick custom colors and tips, or remove the ones you never use. Settings persist across sessions via cookies. A follow-up PR introduced a dedicated Quick Label editing mode alongside Markup, Comment, and Redline. In this mode, selecting text immediately shows a floating label picker — no toolbar intermediary. Alt+1 through Alt+0 keyboard shortcuts work in any mode for power users who prefer not to reach for the mouse. Authored by @grubmanItay in #268 and #272 Mobile Compatibility Plannotator was desktop-only. That mattered less when the tool was purely a local dev workflow, but with shared URLs and team reviews becoming common, people were opening plan links on phones and tablets and getting a broken layout. The UI now adapts fully below 768px. The header collapses into a hamburger menu. The annotation panel renders as a full-screen overlay with a backdrop and close button. Touch support covers resize handles, pinpoint annotations, text selection, and the toolstrip. Card action buttons are always visible on touch devices instead of appearing on hover. The Settings modal switches to a horizontal tab bar. The CommentPopover width is capped to the viewport so it doesn't overflow off-screen. Desktop layout is completely unchanged — this is additive, not a redesign. Authored by @grubmanItay in #260 Graphviz Diagram Rendering Plannotator has supported Mermaid diagrams since v0.6.8. Plans that use Graphviz for architecture diagrams, dependency graphs, or state machines were stuck with raw DOT source in a code block. The Viewer now renders graphviz, dot, and gv fenced code blocks using @viz-js/viz, with the same UX conventions as Mermaid: source/diagram toggle, zoom and pan controls, and an expanded fullscreen view. Authored by @flex-yj-kim in #266 Mermaid Diagram Improvements The Mermaid viewer received a substantial UX overhaul. Diagrams now open in a proper expanded fullscreen mode with zoom in/out, fit-to-view, and wheel zoom. The source/diagram toggle was reworked for clarity. Wide diagrams no longer clip against container edges in both plan view and plan diff view. Safari stability issues with SVG rendering were resolved. A separate PR changed both Mermaid and Graphviz diagrams to render by default instead of showing raw source code first — the source toggle is still one click away, but the visual rendering is now the default state. Authored by @flex-yj-kim in #264 and #279 Issue #275 filed by @flex-yj-kim Markdown Image Rendering Markdown ! syntax was silently treated as plain text — the ! character wasn't in the inline scanner, so images never rendered. They do now. Local image paths are proxied through the existing /api/image endpoint, and relative paths resolve correctly when annotating files outside the project root. Clicking any rendered image opens a full-screen lightbox with the alt text as a caption. Press Escape or click the backdrop to dismiss. Authored by @dgrissen2 in #271 Linked Doc Navigation in Annotate Mode
The
/plannotator-annotatecommand lets you annotate any markdown file, but clicking.mdlinks inside that file would break — the annotate server was missing a/api/docendpoint, so link requests returned raw HTML instead of JSON. This release adds the missing route and supports chained relative link navigation, so you can follow links between sibling markdown files without leaving annotate mode.- Authored by @dgrissen2 in #276
VS Code Extension in SSH Remote Sessions
The VS Code extension sets
PLANNOTATOR_BROWSERto its ownopen-in-vscodehandler so plans open in editor tabs instead of external browsers. In SSH remote sessions, the sharedopenBrowser()function skipped browser launch entirely — ignoring the custom handler. The fix is a one-line condition change: ifPLANNOTATOR_BROWSERis set, always callopenBrowser()regardless of remote detection. This covers plan review, code review, and annotate mode.Additional Changes
- Windows markdown path support —
plannotator annotatenow handles Windows drive-letter paths (C:\...,C:/...), Git Bash/MSYS paths (/c/...), and Cygwin paths (/cygdrive/c/...) in the shared markdown resolver (#267 by @flex-yj-kim) - OS-aware update banner — the update banner now detects the user's OS and shows the correct install command: bash/curl on macOS and Linux, PowerShell on Windows (#270, reported by @eromoe in #265)
- Pi origin in code review — the code review UI now recognizes Pi as a first-class origin with a violet badge, correct install command in the update banner, and proper agent name in the completion overlay (#263)
- Codex support — documentation and install instructions for running Plannotator inside Codex, which uses the CLI directly without a plugin (#261)
- Welcome dialog cleanup — removed three first-run dialogs (UI Features Setup, Plan Diff Marketing, What's New v0.11.0) that had outlived their usefulness. The only remaining first-open dialog is the Permission Mode Setup, which directly affects agent behavior (#280)
Install / Update
macOS / Linux:
curl -fsSL https://plannotator.ai/install.sh | bashWindows:
irm https://plannotator.ai/install.ps1 | iexClaude Code Plugin: Run
/pluginin Claude Code, find plannotator , and click "Update now".OpenCode: Clear cache and restart:
rm -rf ~/.bun/install/cache/@plannotatorThen in
opencode.json:{ "plugin": ["@plannotator/opencode@latest"] }Pi: Install or update the extension:
pi install npm:@plannotator/pi-extension
What's Changed
- docs: add Codex support by @backnotprop in #261
- feat: add Pi origin support to code review UI by @backnotprop in #263
- feat: Improve Mermaid diagram viewing experience by @flex-yj-kim in #264
- feat: Add Graphviz diagram rendering in plan mode by @flex-yj-kim in #266
- fix: Support Windows markdown paths in CLI annotate flow by @flex-yj-kim in #267
- feat: add quick annotation labels for one-click preset feedback by @grubmanItay in #268
- fix: detect OS for update banner install command by @backnotprop in #270
- feat: render markdown images with lightbox zoom by @dgrissen2 in #271
- feat: add quick label selection mode for one-click annotations by @grubmanItay in #272
- fix: open browser in SSH remote when PLANNOTATOR_BROWSER is set by @7tg in #274
- fix: enable linked doc navigation in annotate mode by @dgrissen2 in #276
- feat: render diagrams by default in plan review by @flex-yj-kim in #279
- feat: add mobile compatibility by @grubmanItay in #260
- chore: remove non-critical welcome dialogs by @backnotprop in #280
Contributors
@grubmanItay was a major contributor to this release with three PRs — Quick Annotation Labels, Quick Label Mode, and full mobile support. The labels system touched the annotation pipeline end-to-end: new UI components, settings persistence, keyboard shortcuts, export formatting, and share URL backward compatibility.
@flex-yj-kim continues as the project's most prolific external contributor. Four PRs in this release: Graphviz rendering, Mermaid viewer overhaul, render-by-default diagrams, and Windows path support. Across v0.9.3 through v0.12.0, Yeongjin has authored twelve merged PRs spanning both the plan and code review UIs.
@dgrissen2 returns and shipped two PRs — markdown image rendering with the lightbox viewer and the annotate-mode linked doc navigation fix. Both address gaps where the viewer silently dropped content instead of rendering it.
@7tg who originated the VS Code extension, authored the SSH remote fix for the VS Code extension, which he also reported in #259 with a thorough diagnostic of the underlying IPC issue.
Community members who reported issues and participated in discussions that shaped this release:
- @eromoe — #265 (OS detection for update banner install command)
- @grubmanItay — #278 (quick label defaults discussion)
Full Changelog :
v0.11.4...v0.12.0 -
🔗 sacha chua :: living an awesome life Small steps towards using OpenAI-compatible text-to-speech services with speechd-el or emacspeak rss
Speech synthesis has come a long way since I first tried out Emacspeak in 2002. Kokoro TTS and Piper offer more natural-sounding voices now, although the initial delay in loading the models and generating speech mean that they aren't quite ready to completely replace espeak, which is faster but more robotic. I've been using the Kokoro FastAPI through my own functions for working with various speech systems. I wanted to see if I could get Kokoro and other OpenAI-compatible text-to-speech services to work with either speechd-el or Emacspeak just in case I could take advantage of the rich functionality either provides for speech-synthesized Emacs use. speechd-el is easier to layer on top of an existing Emacs if you only want occasional speech, while emacspeak voice-enables many packages to an extent beyond speaking simply what's on the screen.
Speech synthesis is particularly helpful when I'm learning French because I can use it as a reference for what a paragraph or sentence should sound like. It's not perfect. Sometimes it uses liaisons that my tutor and Google Translate don't use. But it's a decent enough starting point. I also used it before to read out IRC mentions and compile notifications so that I could hear them even if I was paying attention to a different activity.
Here's a demonstration of speechd reading out the following lines using the code I've just uploaded to https://codeberg.org/sachac/speechd-ai:
- The quick brown fox jumps over the lazy dog.
- Now let's set the language to French so we can read the next line.
- Bonjour, je m'appelle Emacs.
Screencast showing speechd-elThere's about a 2-second delay between the command and the start of the audio for the sentence.
Note that
speechd-speak-read-sentencefails in some cases where(forward-sentence 1)isn't the same place as(backward-sentence 1) (forward-sentence 1), which can happen when you're in an Org Mode list. I've submitted a patch upstream.Aside from that,
speechd-speak-set-language,speechd-speak-read-paragraphandspeechd-speak-read-regionare also useful commands. I think the latency makes this best-suited for reading paragraphs, or for shadowing sentences for language learning.I'm still trying to figure out how to get
speechd-speakto work as smoothly as I'd like. I think I've got it set up so that the server falls back toespeakfor short texts so that it can handle words or characters better, and uses the specified server for longer ones. I'd like to get to the point where it can handle all the things that speechd usually does, like saying lines as I navigate through them or giving me feedback as I'm typing. Maybe it can use espeak for fast feedback character by character and word by word, and then use Kokoro TTS for the full sentence when I finish. Then it will be possible to use it to type things without looking at the screen.After putting this together, I still find myself leaning towards my own functions because they make it easy to see the generated speech output to a file, which is handy for saving reference audio that I can play on my phone and for making replays almost instant. That could also be useful for pre-generating the next paragraph to make it flow more smoothly. Still, it was interesting making something that is compatible with existing protocols and libraries.
Posting it in case anyone else wants to use it as a starting point. The repository also contains the starting point for an Emacspeak-compatible speech server. See See speechd-ai/README.org for more details.
https://codeberg.org/sachac/speechd-ai
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Road closed by Wellington Place rss
Does anyone know what happened here? There seems to be a car with a couple of windows smashed out and the police have closed off the road (see pics). Car has been there since about 11.30am and they cleared the builders out of the building site as well
submitted by /u/watchitspaceman
[link] [comments] -
🔗 r/reverseengineering Debugging An Undebuggable App rss
submitted by /u/igor_sk
[link] [comments] -
🔗 r/Yorkshire Is there a clear footpath walk from whitby to Robinhoods Bay? rss
Not been and years and considering a day out this weekend.
submitted by /u/saltlampsandphotos
[link] [comments] -
🔗 r/reverseengineering Chip Uploading - Emulation Online rss
submitted by /u/elemenity
[link] [comments] -
🔗 r/reverseengineering Archive of classic reverse engineering tutorials (Armadillo, ASProtect, Themida, SoftICE era) rss
submitted by /u/Accomplished-Leg2040
[link] [comments] -
🔗 r/reverseengineering GitHub - iss4cf0ng/Elfina: Elfina is a multi-architecture ELF loader supporting x86 and x86-64 binaries. rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 r/reverseengineering HellsUchecker: ClickFix to blockchain-backed backdoor rss
submitted by /u/ectkirk
[link] [comments] -
🔗 r/Leeds Budget friendly places to get fresh flowers? Thought about Leeds market? Thanks!💐 rss
Not sure of prices these days..
submitted by /u/Bright_Fill_4770
[link] [comments] -
🔗 r/reverseengineering Reverse Engineering Action's Cheap Fichero Labelprinter rss
submitted by /u/igor_sk
[link] [comments] -
🔗 r/LocalLLaMA I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. rss
English is not my first language. I wrote this in Chinese and translated it with AI help. The writing may have some AI flavor, but the design decisions, the production failures, and the thinking that distilled them into principles — those are mine.
I was a backend lead at Manus before the Meta acquisition. I've spent the last 2 years building AI agents — first at Manus, then on my own open-source agent runtime (Pinix) and agent (agent- clip). Along the way I came to a conclusion that surprised me:
A single
run(command="...")tool with Unix-style commands outperforms a catalog of typed function calls.Here's what I learned.
Why *nix
Unix made a design decision 50 years ago: everything is a text stream. Programs don't exchange complex binary structures or share memory objects — they communicate through text pipes. Small tools each do one thing well, composed via
|into powerful workflows. Programs describe themselves with--help, report success or failure with exit codes, and communicate errors through stderr.LLMs made an almost identical decision 50 years later: everything is tokens. They only understand text, only produce text. Their "thinking" is text, their "actions" are text, and the feedback they receive from the world must be text.
These two decisions, made half a century apart from completely different starting points, converge on the same interface model. The text-based system Unix designed for human terminal operators —
cat,grep,pipe,exit codes,man pages— isn't just "usable" by LLMs. It's a natural fit. When it comes to tool use, an LLM is essentially a terminal operator — one that's faster than any human and has already seen vast amounts of shell commands and CLI patterns in its training data.This is the core philosophy of the _nix Agent: _ don't invent a new tool interface. Take what Unix has proven over 50 years and hand it directly to the LLM.*
Why a single
runThe single-tool hypothesis
Most agent frameworks give LLMs a catalog of independent tools:
tools: [search_web, read_file, write_file, run_code, send_email, ...]Before each call, the LLM must make a tool selection — which one? What parameters? The more tools you add, the harder the selection, and accuracy drops. Cognitive load is spent on "which tool?" instead of "what do I need to accomplish?"
My approach: one
run(command="...")tool, all capabilities exposed as CLI commands.run(command="cat notes.md") run(command="cat log.txt | grep ERROR | wc -l") run(command="see screenshot.png") run(command="memory search 'deployment issue'") run(command="clip sandbox bash 'python3 analyze.py'")The LLM still chooses which command to use, but this is fundamentally different from choosing among 15 tools with different schemas. Command selection is string composition within a unified namespace — function selection is context-switching between unrelated APIs.
LLMs already speak CLI
Why are CLI commands a better fit for LLMs than structured function calls?
Because CLI is the densest tool-use pattern in LLM training data. Billions of lines on GitHub are full of:
```bash
README install instructions
pip install -r requirements.txt && python main.py
CI/CD build scripts
make build && make test && make deploy
Stack Overflow solutions
cat /var/log/syslog | grep "Out of memory" | tail -20 ```
I don't need to teach the LLM how to use CLI — it already knows. This familiarity is probabilistic and model-dependent, but in practice it's remarkably reliable across mainstream models.
Compare two approaches to the same task:
``` Task: Read a log file, count the error lines
Function-calling approach (3 tool calls): 1. read_file(path="/var/log/app.log") → returns entire file 2. search_text(text=
, pattern="ERROR") → returns matching lines 3. count_lines(text= ) → returns number CLI approach (1 tool call): run(command="cat /var/log/app.log | grep ERROR | wc -l") → "42" ```
One call replaces three. Not because of special optimization — but because Unix pipes natively support composition.
Making pipes and chains work
A single
runisn't enough on its own. Ifruncan only execute one command at a time, the LLM still needs multiple calls for composed tasks. So I make a chain parser (parseChain) in the command routing layer, supporting four Unix operators:| Pipe: stdout of previous command becomes stdin of next && And: execute next only if previous succeeded || Or: execute next only if previous failed ; Seq: execute next regardless of previous resultWith this mechanism, every tool call can be a complete workflow :
```bash
One tool call: download → inspect
curl -sL $URL -o data.csv && cat data.csv | head 5
One tool call: read → filter → sort → top 10
cat access.log | grep "500" | sort | head 10
One tool call: try A, fall back to B
cat config.yaml || echo "config not found, using defaults" ```
N commands × 4 operators — the composition space grows dramatically. And to the LLM, it's just a string it already knows how to write.
The command line is the LLM's native tool interface.
Heuristic design: making CLI guide the agent
Single-tool + CLI solves "what to use." But the agent still needs to know " how to use it." It can't Google. It can't ask a colleague. I use three progressive design techniques to make the CLI itself serve as the agent's navigation system.
Technique 1: Progressive --help discovery
A well-designed CLI tool doesn't require reading documentation — because
--helptells you everything. I apply the same principle to the agent, structured as progressive disclosure : the agent doesn't need to load all documentation at once, but discovers details on-demand as it goes deeper.Level 0: Tool Description → command list injection
The
runtool's description is dynamically generated at the start of each conversation, listing all registered commands with one-line summaries:Available commands: cat — Read a text file. For images use 'see'. For binary use 'cat -b'. see — View an image (auto-attaches to vision) ls — List files in current topic write — Write file. Usage: write <path> [content] or stdin grep — Filter lines matching a pattern (supports -i, -v, -c) memory — Search or manage memory clip — Operate external environments (sandboxes, services) ...The agent knows what's available from turn one, but doesn't need every parameter of every command — that would waste context.
Note: There's an open design question here: injecting the full command list vs. on-demand discovery. As commands grow, the list itself consumes context budget. I'm still exploring the right balance. Ideas welcome.
Level 1:
command(no args) → usageWhen the agent is interested in a command, it just calls it. No arguments? The command returns its own usage:
``` → run(command="memory") [error] memory: usage: memory search|recent|store|facts|forget
→ run(command="clip") clip list — list available clips clip
— show clip details and commands clip [args...] — invoke a command clip pull [name] — pull file from clip to local clip push — push local file to clip ``` Now the agent knows
memoryhas five subcommands andclipsupports list/pull/push. One call, no noise.Level 2:
command subcommand(missing args) → specific parametersThe agent decides to use
memory searchbut isn't sure about the format? It drills down:``` → run(command="memory search") [error] memory: usage: memory search
[-t topic_id] [-k keyword] → run(command="clip sandbox") Clip: sandbox Commands: clip sandbox bash <script> clip sandbox read
clip sandbox write File transfer: clip sandbox pull [local-name] clip sandbox push ``` Progressive disclosure: overview (injected) → usage (explored) → parameters (drilled down). The agent discovers on-demand, each level providing just enough information for the next step.
This is fundamentally different from stuffing 3,000 words of tool documentation into the system prompt. Most of that information is irrelevant most of the time — pure context waste. Progressive help lets the agent decide when it needs more.
This also imposes a requirement on command design: every command and subcommand must have complete help output. It's not just for humans — it's for the agent. A good help message means one-shot success. A missing one means a blind guess.
Technique 2: Error messages as navigation
Agents will make mistakes. The key isn't preventing errors — it's making every error point to the right direction.
Traditional CLI errors are designed for humans who can Google. Agents can't Google. So I require every error to contain both "what went wrong" and "what to do instead":
``` Traditional CLI: $ cat photo.png cat: binary file (standard output) → Human Googles "how to view image in terminal"
My design: [error] cat: binary image file (182KB). Use: see photo.png → Agent calls see directly, one-step correction ```
More examples:
``` [error] unknown command: foo Available: cat, ls, see, write, grep, memory, clip, ... → Agent immediately knows what commands exist
[error] not an image file: data.csv (use cat to read text files) → Agent switches from see to cat
[error] clip "sandbox" not found. Use 'clip list' to see available clips → Agent knows to list clips first ```
Technique 1 (help) solves "what can I do?" Technique 2 (errors) solves "what should I do instead?" Together, the agent's recovery cost is minimal — usually 1-2 steps to the right path.
Real case: The cost of silent stderr
For a while, my code silently dropped stderr when calling external sandboxes — whenever stdout was non-empty, stderr was discarded. The agent ran
pip install pymupdf, got exit code 127. stderr containedbash: pip: command not found, but the agent couldn't see it. It only knew "it failed," not "why" — and proceeded to blindly guess 10 different package managers:pip install → 127 (doesn't exist) python3 -m pip → 1 (module not found) uv pip install → 1 (wrong usage) pip3 install → 127 sudo apt install → 127 ... 5 more attempts ... uv run --with pymupdf python3 script.py → 0 ✓ (10th try)10 calls, ~5 seconds of inference each. If stderr had been visible the first time, one call would have been enough.
stderr is the information agents need most, precisely when commands fail. Never drop it.
Technique 3: Consistent output format
The first two techniques handle discovery and correction. The third lets the agent get better at using the system over time.
I append consistent metadata to every tool result:
file1.txt file2.txt dir1/ [exit:0 | 12ms]The LLM extracts two signals:
Exit codes (Unix convention, LLMs already know these):
exit:0— successexit:1— general errorexit:127— command not found
Duration (cost awareness):
12ms— cheap, call freely3.2s— moderate45s— expensive, use sparingly
After seeing
[exit:N | Xs]dozens of times in a conversation, the agent internalizes the pattern. It starts anticipating — seeingexit:1means check the error, seeing long duration means reduce calls.Consistent output format makes the agent smarter over time. Inconsistency makes every call feel like the first.
The three techniques form a progression:
--help → "What can I do?" → Proactive discovery Error Msg → "What should I do?" → Reactive correction Output Fmt → "How did it go?" → Continuous learning
Two-layer architecture: engineering the heuristic design
The section above described how CLI guides agents at the semantic level. But to make it work in practice, there's an engineering problem: the raw output of a command and what the LLM needs to see are often very different things.
Two hard constraints of LLMs
Constraint A: The context window is finite and expensive. Every token costs money, attention, and inference speed. Stuffing a 10MB file into context doesn't just waste budget — it pushes earlier conversation out of the window. The agent "forgets."
Constraint B: LLMs can only process text. Binary data produces high- entropy meaningless tokens through the tokenizer. It doesn't just waste context — it disrupts attention on surrounding valid tokens , degrading reasoning quality.
These two constraints mean: raw command output can't go directly to the LLM — it needs a presentation layer for processing. But that processing can't affect command execution logic — or pipes break. Hence, two layers.
Execution layer vs. presentation layer
┌─────────────────────────────────────────────┐ │ Layer 2: LLM Presentation Layer │ ← Designed for LLM constraints │ Binary guard | Truncation+overflow | Meta │ ├─────────────────────────────────────────────┤ │ Layer 1: Unix Execution Layer │ ← Pure Unix semantics │ Command routing | pipe | chain | exit code │ └─────────────────────────────────────────────┘When
cat bigfile.txt | grep error | head 10executes:Inside Layer 1: cat output → [500KB raw text] → grep input grep output → [matching lines] → head input head output → [first 10 lines]If you truncate
cat's output in Layer 1 →greponly searches the first 200 lines, producing incomplete results. If you add[exit:0]in Layer 1 → it flows intogrepas data, becoming a search target.So Layer 1 must remain raw, lossless, metadata-free. Processing only happens in Layer 2 — after the pipe chain completes and the final result is ready to return to the LLM.
Layer 1 serves Unix semantics. Layer 2 serves LLM cognition. The separation isn't a design preference — it's a logical necessity.
Layer 2's four mechanisms
Mechanism A: Binary Guard (addressing Constraint B)
Before returning anything to the LLM, check if it's text:
``` Null byte detected → binary UTF-8 validation failed → binary Control character ratio > 10% → binary
If image: [error] binary image (182KB). Use: see photo.png If other: [error] binary file (1.2MB). Use: cat -b file.bin ```
The LLM never receives data it can't process.
Mechanism B: Overflow Mode (addressing Constraint A)
``` Output > 200 lines or > 50KB? → Truncate to first 200 lines (rune-safe, won't split UTF-8) → Write full output to /tmp/cmd-output/cmd-{n}.txt → Return to LLM:
[first 200 lines] --- output truncated (5000 lines, 245.3KB) --- Full output: /tmp/cmd-output/cmd-3.txt Explore: cat /tmp/cmd-output/cmd-3.txt | grep <pattern> cat /tmp/cmd-output/cmd-3.txt | tail 100 [exit:0 | 1.2s]```
Key insight: the LLM already knows how to use
grep,head,tailto navigate files. Overflow mode transforms "large data exploration" into a skill the LLM already has.Mechanism C: Metadata Footer
actual output here [exit:0 | 1.2s]Exit code + duration, appended as the last line of Layer 2. Gives the agent signals for success/failure and cost awareness, without polluting Layer 1's pipe data.
Mechanism D: stderr Attachment
``` When command fails with stderr: output + "\n[stderr] " + stderr
Ensures the agent can see why something failed, preventing blind retries. ```
Lessons learned: stories from production
Story 1: A PNG that caused 20 iterations of thrashing
A user uploaded an architecture diagram. The agent read it with
cat, receiving 182KB of raw PNG bytes. The LLM's tokenizer turned these bytes into thousands of meaningless tokens crammed into the context. The LLM couldn't make sense of it and started trying different read approaches —cat -f,cat --format,cat --type image— each time receiving the same garbage. After 20 iterations, the process was force-terminated.Root cause:
cathad no binary detection, Layer 2 had no guard. Fix:isBinary()guard + error guidanceUse: see photo.png. Lesson: The tool result is the agent's eyes. Return garbage = agent goes blind.Story 2: Silent stderr and 10 blind retries
The agent needed to read a PDF. It tried
pip install pymupdf, got exit code 127. stderr containedbash: pip: command not found, but the code dropped it — because there was some stdout output, and the logic was "if stdout exists, ignore stderr."The agent only knew "it failed," not "why." What followed was a long trial- and-error:
pip install → 127 (doesn't exist) python3 -m pip → 1 (module not found) uv pip install → 1 (wrong usage) pip3 install → 127 sudo apt install → 127 ... 5 more attempts ... uv run --with pymupdf python3 script.py → 0 ✓10 calls, ~5 seconds of inference each. If stderr had been visible the first time, one call would have sufficed.
Root cause:
InvokeClipsilently dropped stderr when stdout was non- empty. Fix: Always attach stderr on failure. Lesson: stderr is the information agents need most, precisely when commands fail.Story 3: The value of overflow mode
The agent analyzed a 5,000-line log file. Without truncation, the full text (~200KB) was stuffed into context. The LLM's attention was overwhelmed, response quality dropped sharply, and earlier conversation was pushed out of the context window.
With overflow mode:
``` [first 200 lines of log content]
--- output truncated (5000 lines, 198.5KB) --- Full output: /tmp/cmd-output/cmd-3.txt Explore: cat /tmp/cmd-output/cmd-3.txt | grep
cat /tmp/cmd-output/cmd-3.txt | tail 100 [exit:0 | 45ms] ``` The agent saw the first 200 lines, understood the file structure, then used
grepto pinpoint the issue — 3 calls total, under 2KB of context.Lesson: Giving the agent a "map" is far more effective than giving it the entire territory.
Boundaries and limitations
CLI isn't a silver bullet. Typed APIs may be the better choice in these scenarios:
- Strongly-typed interactions : Database queries, GraphQL APIs, and other cases requiring structured input/output. Schema validation is more reliable than string parsing.
- High-security requirements : CLI's string concatenation carries inherent injection risks. In untrusted-input scenarios, typed parameters are safer. agent-clip mitigates this through sandbox isolation.
- Native multimodal : Pure audio/video processing and other binary-stream scenarios where CLI's text pipe is a bottleneck.
Additionally, "no iteration limit" doesn't mean "no safety boundaries." Safety is ensured by external mechanisms:
- Sandbox isolation : Commands execute inside BoxLite containers, no escape possible
- API budgets : LLM calls have account-level spending caps
- User cancellation : Frontend provides cancel buttons, backend supports graceful shutdown
Hand Unix philosophy to the execution layer, hand LLM's cognitive constraints to the presentation layer, and use help, error messages, and output format as three progressive heuristic navigation techniques.
CLI is all agents need.
Source code (Go): github.com/epiral/agent- clip
Core files:
internal/tools.go(command routing),internal/chain.go(pipes),internal/loop.go(two-layer agentic loop),internal/fs.go(binary guard),internal/clip.go(stderr handling),internal/browser.go(vision auto-attach),internal/memory.go(semantic memory).Happy to discuss — especially if you've tried similar approaches or found cases where CLI breaks down. The command discovery problem (how much to inject vs. let the agent discover) is something I'm still actively exploring.
submitted by /u/MorroHsu
[link] [comments] -
🔗 obra/superpowers v5.0.2 release
Release v5.0.2
-
🔗 r/reverseengineering runtime jvm analysis tool i made rss
submitted by /u/Proof-Suggestion5926
[link] [comments] -
🔗 Rust Blog Announcing rustup 1.29.0 rss
The rustup team is happy to announce the release of rustup version 1.29.0.
Rustup is the recommended tool to install Rust, a programming language that empowers everyone to build reliable and efficient software.
What's new in rustup 1.29.0
Following the footsteps of many package managers in the pursuit of better toolchain installation performance, the headline of this release is that rustup has been enabled to download components concurrently and unpack during downloads in operations such as
rustup updateorrustup toolchainand to concurrently check for updates inrustup check, thanks to a GSoC 2025 project. This is by no means a trivial change so a long tail of issues might occur, please report them if you have found any!Furthermore, rustup now officially supports the following host platforms:
sparcv9-sun-solarisx86_64-pc-solaris
Also, rustup will start automatically inserting the right
$PATHentries duringrustup-initfor the following shells, in addition to those already supported:tcshxonsh
This release also comes with other quality-of-life improvements, to name a few:
-
When running rust-analyzer via a proxy, rustup will consider the
rust-analyzerbinary fromPATHwhen the rustup-managed one is not found.- This should be particularly useful if you would like to bring your own
rust-analyzerbinary, e.g. if you use Neovim, Helix, etc. or are developing rust-analyzer itself. - Empty environment variables are now treated as unset. This should help with resetting configuration values to default when an override is present.
- This should be particularly useful if you would like to bring your own
-
rustup checkwill use different exit codes based on whether new updates have been found: it will exit with100on any updates or0for no updates.
Furthermore, @FranciscoTGouveia has joined the team. He has shown his talent, enthusiasm and commitment to the project since the first interactions with rustup and has played a significant role in bring more concurrency to it, so we are thrilled to have him on board and are actively looking forward to what we can achieve together.
Further details are available in the changelog!
How to update
If you have a previous version of rustup installed, getting the new one is as easy as stopping any programs which may be using rustup (e.g. closing your IDE) and running:
$ rustup self updateRustup will also automatically update itself at the end of a normal toolchain update:
$ rustup updateIf you don't have it already, you can get rustup from the appropriate page on our website.
Rustup's documentation is also available in the rustup book.
Caveats
Rustup releases can come with problems not caused by rustup itself but just due to having a new release.
In particular, anti-malware scanners might block rustup or stop it from creating or copying files, especially when installing
rust-docswhich contains many small files.Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release.
Thanks
Thanks again to all the contributors who made this rustup release possible!
-
🔗 Console.dev newsletter Ki Editor rss
Description: Structural code editor.
What we like: Acts on the AST so code manipulations happen within the true language syntax e.g. selecting the whole control statement. This enables AST native editing, selection, navigation, find & replace. Has a built in LSP and file explorer. Themes and syntax highlighting powered by Tree-sitter.
What we dislike: Might take some getting used to - it has a VS Code extension if you prefer a GUI.
-
🔗 Console.dev newsletter Agent Safehouse rss
Description: macOS native AI sandboxing.
What we like: Denies access outside of your project directory using macOS native, kernel-level sandboxes. Has safe defaults for access to things like core system tools, network access, Git, etc. Security sensitive actions require opt-in e.g. clipboard, docker, shell access.
What we dislike: macOS only.
-