

to read (pdf)

  1. Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
  2. Letting AI Actively Manage Its Own Context | 明天的乌云
  3. Garden Offices for Sale UK - Portable Space
  4. Cord: Coordinating Trees of AI Agents | June Kim
  5. Style tips for less experienced developers coding with AI · honnibal.dev

  1. March 14, 2026
    1. 🔗 r/wiesbaden A Discord for bookworms <3 rss

      Hello, book people!

      I've started a small, cozy Discord server where everyone who loves books can meet, chat, and share their favorite stories.

      You can simply settle in, look around at your leisure, and read along or join in as the mood takes you. Whether it's fantasy, romance, thrillers, manga, or just cozy browsing, everyone is welcome.

      What awaits you:

      Cozy reading corners for book talk, recommendations, spoilers, and plot twists

      Creative channels for fan art, book memes, favorite quotes, and book aesthetics

      Buddy reads, reading circles, or just pleasant chatter about books

      Roles you can pick yourself based on your favorite genres or your reading vibe

      Everything is relaxed: nothing is required, everything is allowed. Our goal is a friendly, warm place for everyone who loves reading, where you can simply feel at ease.

      If you feel like dropping by, send me a DM and come on over.

      We're already looking forward to you, your favorite books, and cozy conversations over a virtual cup of tea or coffee!

      submitted by /u/Ok-Calendar-9250
      [link] [comments]

    2. 🔗 r/Leeds Here's some Flixbus changes including the new 905 connecting Bradford & Leeds to Heathrow & Gatwick. rss
    3. 🔗 r/reverseengineering Reverse Engineering Android 16 Memory Management: Solving the Knox-Induced 512B Sector Fragmentation Paradox rss
    4. 🔗 r/york visiting alone - where to eat at? rss

      hello!! I'm going to York next week on my own and I'm quite anxious/nervous when it comes to eating out by myself. I want some places that aren't too busy, but also where I won't be the only person there because then I feel too seen, and also preferably with tables that aren't too close together. If you know any places like that please let me know!! I'm quite picky so I probably won't go for any places that serve Asian food since it typically has ingredients I'm not keen on (as sad as that is haha) but I'll still be willing to take a look! Thanks!!

      submitted by /u/nek-uno
      [link] [comments]

    5. 🔗 r/reverseengineering I rewrote my ELF loader in Rust and added new features! rss
    6. 🔗 r/wiesbaden Football group rss

      Hi everyone,

      I'm looking for a group that plays football regularly, or for individual people who'd be up for a kickabout every Sunday - relaxed and just for fun.

      There are already three of us (30, 32, 33) - age, background, etc. don't matter

      submitted by /u/Lebenskuenstlerinho
      [link] [comments]

    7. 🔗 vercel-labs/agent-browser v0.20.1 release

      Patch Changes

      • bd05917: ### Bug Fixes

        • Fixed AX tree deserialization to accept integer nodeId and childIds values for compatibility with Lightpanda, which sends numeric IDs where Chrome sends strings (#775)
        • Fixed misleading SIGPIPE comment to accurately describe the default Rust SIGPIPE behavior and why it is reset to SIG_DFL (#776)
        • Fixed WebM recording output to use the VP9 codec (libvpx-vp9) instead of H.264, producing valid WebM files; also adds a padding filter to ensure even frame dimensions (#779)
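The Lightpanda compatibility fix above comes down to accepting both ID representations at the deserialization boundary. A minimal sketch of the idea in TypeScript (the names are invented for illustration; agent-browser's actual deserializer is Rust):

```typescript
// Hypothetical sketch: accept the numeric node IDs Lightpanda sends as well as
// the string IDs Chrome sends, normalizing both to strings internally.
type RawNodeId = string | number;

function normalizeNodeId(id: RawNodeId): string {
  return typeof id === "number" ? String(id) : id;
}

function normalizeChildIds(ids: RawNodeId[]): string[] {
  return ids.map(normalizeNodeId);
}
```

Downstream code then only ever sees string IDs, regardless of which engine produced the AX tree.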
    8. 🔗 r/wiesbaden 30M looking to meet fun people rss

      Hey Wiesbaden! Looking to meet some likeminded people and maybe actually leave my apartment more often. I'm a Franco-Spanish guy (30M), I enjoy a bit of everything creative (drawing, painting, animation, arts and crafts... currently I'm very into papier mâché sculptures). I like bouldering, Magic: The Gathering (I'm not super experienced tho so if you're a pro you might get bored hahahah), I also love going to museums and more stuff but listing everything is hard. If any of that sounds like your thing, hit me up! Bouldering sessions, casual MTG games, museum trips, crafting together, or just a casual drink here and there, I'm down for anything really. Have a nice one!

      submitted by /u/Raphi
      [link] [comments]

    9. 🔗 r/york Spring Blossom in the Museum Gardens rss

      submitted by /u/York_shireman
      [link] [comments]

    10. 🔗 r/reverseengineering Cross-Platform GUI for APK Decompilation, Analysis, and Recompilation rss
    11. 🔗 r/york Wedding hire venue advice rss

      Hi, I’m currently planning a wedding in Poppleton and would love some advice and recommendations regarding venues and catering. I’ve looked at the Poppleton Tithe Barn as an option, but I’m not quite sure yet.

      Additionally, for anyone who has hired the Tithe Barn for a wedding: how long did it take you to set everything up? Do they offer a grace period for setup and cleaning, or must everything be done within the hired time period? Do you think it’s realistic to hire the venue for a single day and manage both the setup and the cleaning on the same day?

      Thanks!

      submitted by /u/Traditional-Jury-405
      [link] [comments]


    13. 🔗 r/Leeds Preachers on Briggate rss

      There seem to be more and more self-appointed 'preachers' on Briggate. Some of them seem to be bordering on having mental health issues (screaming repeatedly etc). Is preaching allowed? I don't have a problem with people talking about their faith, but some of the aggressive/unstable behaviour is worrying.

      submitted by /u/Mental_Brick2013
      [link] [comments]

    14. 🔗 r/Leeds Fire hazard in the Trinity rss

      These things are a lot uglier in real life.

      submitted by /u/Life_Exchange_7188
      [link] [comments]

    15. 🔗 badlogic/pi-mono v0.58.1 release

      Added

      • Added pi uninstall alias for pi install --uninstall convenience

      Fixed

      • Fixed OpenAI Codex websocket protocol to include required headers and properly terminate SSE streams on connection close (#1961)
      • Fixed WSL clipboard image fallback to properly handle missing clipboard utilities and permission errors (#1722)
      • Fixed extension session_start hook firing before TUI was ready, causing UI operations in session_start handlers to fail (#2035)
      • Fixed Windows shell and path handling for package manager operations and autocomplete to properly handle drive letters and mixed path separators
      • Fixed Bedrock prompt caching being enabled for non-Claude models, causing API errors (#2053)
      • Fixed Qwen models via OpenAI-compatible providers by adding qwen-chat-template compat mode that uses Qwen's native chat template format (#2020)
      • Fixed Bedrock unsigned thinking replay to handle edge cases with empty or malformed thinking blocks (#2063)
      • Fixed headless clipboard fallback logging spurious errors in non-interactive environments (#2056)
      • Fixed models.json provider compat flags not being honored when loading custom model definitions (#2062)
      • Fixed xhigh reasoning effort detection for Claude Opus 4.6 to match by model ID instead of requiring explicit capability flag (#2040)
      • Fixed prompt cwd containing Windows backslashes breaking bash tool execution by normalizing to forward slashes (#2080)
      • Fixed editor paste to preserve literal content instead of normalizing newlines, preventing content corruption for text with embedded escape sequences (#2064)
      • Fixed skill discovery recursing past skill root directories when nested SKILL.md files exist (#2075)
      • Fixed tab completion to preserve ./ prefix when completing relative paths (#2087)
      • Fixed npm package installs and lookups being tied to the active repository Node version by adding npmCommand as an argv-style settings override for package manager operations (#2072)
      • Fixed ctx.ui.getEditorText() in the extension API returning paste markers (e.g., [paste #1 +24 lines]) instead of the actual pasted content (#2084)
      • Fixed startup crash when downloading fd/ripgrep on first run by using pipeline() instead of finished(readable.pipe(writable)) so stream errors from timeouts are caught properly, and increased the download timeout from 10s to 120s (#2066)
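The Windows cwd fix (#2080) above is essentially a one-way normalization before the path reaches bash. A hedged sketch of the idea (the helper name is invented, not pi's actual code):

```typescript
// Hypothetical helper: bash-style shells choke on backslash path separators,
// so normalize a Windows cwd to forward slashes before embedding it in a
// shell command. Forward slashes are valid path separators on Windows too,
// so this is safe for the common case.
function normalizeCwdForBash(cwd: string): string {
  return cwd.replace(/\\/g, "/");
}
```

POSIX paths pass through unchanged, so the same helper can run unconditionally on every platform.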
    16. 🔗 r/Leeds Is there a female or mixed group equivalent of Andy’s man club, or any other support groups in Leeds? rss

      Thank you 🙏🏽

      submitted by /u/anordicalien
      [link] [comments]

    17. 🔗 r/wiesbaden City bus stolen in Wiesbaden: teen drives all the way to Karlsruhe rss

      TL;DR: A 15-year-old from the wrong side of the Rhine steals a city bus in contested territory (Kastel) using a master key and drives 150 km to Karlsruhe to impress his girlfriend (the theft only comes to light hours later, because nobody misses the bus).

      When does the kid get an employment contract from ESWE Verkehr? Bus drivers like that are exactly what we need!

      submitted by /u/Itchy-Individual3536
      [link] [comments]

    18. 🔗 r/wiesbaden I can't even be mad... rss
    19. 🔗 r/york Daffodils by York walls rss

      Does anyone know if the daffodils are all in bloom on the banks around the York walls? It will save me driving in for disappointment later today. Thanks

      submitted by /u/Possible-Ad505
      [link] [comments]

    20. 🔗 roboflow/supervision supervision-0.27.0.post release

      Full Changelog : 0.27.0...0.27.0.post2

    21. 🔗 badlogic/pi-mono v0.58.0 release

      New Features

      • Claude Opus 4.6, Sonnet 4.6, and related Bedrock models now use a 1M token context window (up from 200K) (#2135 by @mitsuhiko).
      • Extension tool calls now execute in parallel by default, with sequential tool_call preflight preserved for extension interception.
      • GOOGLE_CLOUD_API_KEY environment variable support for the google-vertex provider as an alternative to Application Default Credentials (#1976 by @gordonhwc).
      • Extensions can supply deterministic session IDs via newSession() (#2130 by @zhahaoyu).

      Added

      • Added GOOGLE_CLOUD_API_KEY environment variable support for the google-vertex provider as an alternative to Application Default Credentials (#1976 by @gordonhwc)
      • Added custom session ID support in newSession() for extensions that need deterministic session paths (#2130 by @zhahaoyu)

      Changed

      • Changed extension tool interception to use agent-core beforeToolCall and afterToolCall hooks instead of wrapper-based interception. Tool calls now execute in parallel by default, extension tool_call preflight still runs sequentially, and final tool results are emitted in assistant source order.
      • Raised Claude Opus 4.6, Sonnet 4.6, and related Bedrock model context windows from 200K to 1M tokens (#2135 by @mitsuhiko)

      Fixed

      • Fixed tool_call extension handlers observing stale sessionManager state during multi-tool turns by draining queued agent events before each tool_call preflight. In parallel tool mode this guarantees state through the current assistant tool-calling message, but not sibling tool results from the same assistant message.
      • Fixed interactive input fields backed by the TUI Input component to scroll by visual column width for wide Unicode text (CJK, fullwidth characters), preventing rendered line overflow and TUI crashes in places like search and filter inputs (#1982)
      • Fixed shift+tab and other modified Tab bindings in tmux when extended-keys-format is left at the default xterm
      • Fixed EXIF orientation not being applied during image convert and resize, causing JPEG and WebP images from phone cameras to display rotated or mirrored (#2105 by @melihmucuk)
      • Fixed the default coding-agent system prompt to include only the current date in ISO format, not the current time, so prompt prefixes stay cacheable across reloads and resumed sessions (#2131)
      • Fixed retry regex to match server_error and internal_error error types from providers, improving automatic retry coverage (#2117 by @MadKangYu)
      • Fixed example extensions to support PI_CODING_AGENT_DIR environment variable for custom agent directory paths (#2009 by @smithbm2316)
      • Fixed tool result images not being sent in function_call_output items for OpenAI Responses API providers, causing image data to be silently dropped in tool results (#2104)
      • Fixed assistant content being sent as structured content blocks instead of plain strings in the openai-completions provider, causing errors with some OpenAI-compatible backends (#2008 by @geraldoaax)
      • Fixed error details in OpenAI Responses response.failed handler to include status code, error code, and message instead of a generic failure (#1956 by @drewburr)
      • Fixed GitHub Copilot device-code login polling to respect OAuth slow-down intervals, wait before the first token poll, and include a clearer clock-drift hint in WSL/VM environments when repeated slow-downs lead to timeout
      • Fixed usage statistics not being captured for OpenAI-compatible providers that return usage in choice.usage instead of the standard chunk.usage (e.g., Moonshot/Kimi) (#2017)
      • Fixed editor scroll indicator rendering crash in narrow terminal widths (#2103 by @haoqixu)
      • Fixed tab characters in editor and input paste not being normalized to spaces (#2027, #1975 by @haoqixu)
      • Fixed wordWrapLine overflow when wide characters (CJK, fullwidth) fall exactly at the wrap boundary (#2082 by @haoqixu)
      • Fixed paste markers not being treated as atomic segments in editor word wrapping and cursor navigation (#2111 by @haoqixu)
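The retry fix (#2117) in the list above widens the set of provider error types treated as transient. A sketch under assumptions: only server_error and internal_error are confirmed by the note; the other category names here are illustrative, and pi's real retry logic is more involved than a single regex.

```typescript
// Hypothetical retry classifier: treat provider errors whose type matches
// known transient categories as retryable.
const RETRYABLE_ERROR_TYPE = /\b(overloaded_error|rate_limit_error|server_error|internal_error)\b/;

function isRetryable(errorType: string): boolean {
  return RETRYABLE_ERROR_TYPE.test(errorType);
}
```

The word boundaries matter: without them, a pattern like server_error would also match unrelated types that merely contain it as a substring.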
    22. 🔗 vercel-labs/agent-browser v0.20.0 release

      Minor Changes

      • 235fa88: ### Full Native Rust

        • 100% native Rust -- Removed the entire Node.js/Playwright daemon. The Rust native daemon is now the only implementation. No Node.js runtime or Playwright dependency required. (#754)
        • 99x smaller install -- Install size reduced from 710 MB to 7 MB by eliminating the Node.js dependency tree.
        • 18x less memory -- Daemon memory usage reduced from 143 MB to 8 MB.
        • 1.6x faster cold start -- Cold start time reduced from 1002ms to 617ms.
        • Benchmarks -- Added benchmark suite comparing native vs Node.js daemon performance.
        • Chromium installer hardened -- Fixed zip path traversal vulnerability in Chrome for Testing installer.
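The path-traversal hardening mentioned above follows a standard pattern: resolve each archive entry against the extraction directory and reject anything that escapes it. A hedged sketch (the function is invented for illustration; agent-browser's actual installer is Rust):

```typescript
import * as path from "node:path";

// Hypothetical guard against zip path traversal: a malicious archive entry
// like "../../etc/cron.d/evil" must not be written outside destDir.
function safeJoin(destDir: string, entryName: string): string {
  const resolved = path.resolve(destDir, entryName);
  const base = path.resolve(destDir) + path.sep;
  if (!resolved.startsWith(base)) {
    throw new Error(`zip entry escapes extraction dir: ${entryName}`);
  }
  return resolved;
}
```

Checking the resolved absolute path (rather than scanning the entry name for "..") also catches absolute entry names and mixed-separator tricks.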

      Bug Fixes

      • Fixed --headed false flag not being respected in CLI (#757)
      • Fixed "not found" error pattern in to_ai_friendly_error incorrectly catching non-element errors (#759)
      • Fixed storage local key lookup parsing and text output (#761)
      • Fixed Lightpanda engine launch with release binaries (#760)
      • Hardened Lightpanda startup timeouts (#762)
      
    23. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      New releases:

      • IDASQL (https://github.com/allthingsida/idasql): 0.0.11
      
    24. 🔗 mitsuhiko/agent-stuff 1.5.0 release

      1.5.0

    25. 🔗 r/reverseengineering If you’re working with Akamai sensors and need to gen correctly, here’s a correctly VM-decompiled version for Akamai 3.0. rss
  2. March 13, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-13 rss

      IDA Plugin Updates on 2026-03-13

      New Releases:

      Activity:

    2. 🔗 r/Leeds American man living in Leeds charged with terror offences rss

      What's going on here then?

      submitted by /u/Granopoly
      [link] [comments]

    3. 🔗 r/york Any idea if there will actually be disruption from this? rss

      This might sound a bit silly, but I really don't want a smart meter. I don't see the need for everything to be "smart" (it basically just means they can collect more data from me), and I don't see anything wrong with sending readings every so often. Can I ignore this and be okay, or will I actually end up losing power without getting a new meter?

      submitted by /u/Jubbity
      [link] [comments]

    4. 🔗 r/york Places to develop 35mm film? :) rss

      Hi! I just wondered if there’s anywhere in York that develops film. I normally go to Boots but it can take like several weeks and I wondered if somewhere else can do it quicker. I saw York Digital Image does it but that was an older post - do they still do it and has anyone used them?

      Thanks! :)

      submitted by /u/bunnyels07
      [link] [comments]

    5. 🔗 News Minimalist 🐢 Nations release oil reserves to stabilize prices + 11 more stories rss

      In the last 3 days Gemini read 88464 top news stories. After removing previously covered events, there are 12 articles with a significance score over 5.5.

      [6.5] Germany and Austria join global effort to release oil reserves and stabilize prices — apnews.com (+1153)

      The International Energy Agency will release a record 400 million barrels of emergency oil reserves to counter energy market disruptions and price spikes caused by Middle East conflict.

      Member nations, including Germany and Austria, agreed to the release after Iran effectively halted oil traffic through the Strait of Hormuz. The move follows G7 discussions aimed at stabilizing global supplies as export volumes have plummeted below ten percent of prewar levels.

      Established after the 1974 Arab oil embargo, the IEA has authorized emergency releases five times previously. Officials emphasize that restoring transit through the Strait of Hormuz remains essential for long-term market stability.

      [5.8] China adopts an ethnic unity law that critics say will cement assimilation — newsday.com (+11)

      China has adopted a sweeping ethnic unity law that critics say will accelerate the assimilation of minority groups by mandating Mandarin in schools and further eroding their cultural rights.

      The legislation requires all organizations and citizens to foster a shared Chinese national identity. It essentially prohibits using minority languages for primary instruction during compulsory education, a move experts argue effectively dismantles China’s original constitutional promises of meaningful regional ethnic autonomy.

      The measure also establishes extraterritorial legal penalties for overseas individuals deemed to harm ethnic unity. Additionally, it encourages cross-migration to create embedded communities, which scholars warn could break up minority-heavy neighborhoods.

      [5.6] Artemis II mission targets early April for crewed lunar flyby — bbc.com (+67)

      NASA targets early April for its Artemis II mission, which will carry four astronauts around the Moon for the first time in over 50 years after resolving technical issues.

      Following repairs to a helium leak, officials plan to return the Space Launch System rocket to the Florida launchpad on March 19. The ten-day flight will carry three Americans and one Canadian to the lunar far side and back.

      Highly covered news with significance over 5.5

      [5.8] Gut bacteria linked to age-related memory loss in mice — nature.com (+13)

      [5.8] China approves launch of world first brain-computer interface device — independent.co.uk (+2)

      [5.7] Scientists revive activity in frozen mouse brains for the first time — nature.com (+2)

      [5.6] Big Tech backs Anthropic in fight against Trump administration — bbc.com (+27)

      [5.5] Google Maps integrates AI for personalized recommendations and immersive navigation — independent.co.uk (+44)

      [5.5] Climate change slows Earth's rotation, lengthening days — g1.globo.com (Portuguese) (+8)

      [5.5] AI use may be reducing stylistic diversity and human creativity, study finds — thetimes.com [$] (+4)

      [5.5] International police disrupt global cybercrime by sinkholing 45,000 IP addresses — bleepingcomputer.com (+5)

      [5.5] Astronomers witness colossal supernova explosion create one of the most magnetic stars in the universe for the first time — space.com (+9)

      Thanks for reading!

      — Vadim



    6. 🔗 r/Leeds What do people from Leeds think of Manchester? Which city do you prefer? What does Manchester do right? What does Leeds do right? rss

      I visited Manchester the other day and was struck by how very 'city'-like it feels. Lots of hustle and bustle, massive buildings, trams, etc.

      I think I prefer Leeds in most ways but it feels more like a very large town than a city.

      submitted by /u/OneItchy396
      [link] [comments]

    7. 🔗 r/Harrogate Considering moving to Woodlands rss

      Hi all. Typical question about an area's appeal, the kind I've seen a lot here, but hey, any detail would be useful.

      We've been renting in Oatlands for roughly 5 years and are looking to buy a house. There's a surprisingly cheap house on Tyson Place in Woodlands we're considering. The wife's parents are saying it's a dodgy area and not to consider it, but comparing the crime rate to our location, there were only about 10 more reported crimes within a half mile per year, most of them anti-social behaviour.

      We think it's objectively overblown, but for anyone living close to that area specifically: does it feel like a nice, safe place to live?

      Thanks in advance

      submitted by /u/Matrixgypsy
      [link] [comments]

    8. 🔗 r/Yorkshire 'My language course helped me launch my life in the UK' rss

      After arriving in Bradford from Iraq, Hareth Alshaban was looking for a way to improve his English and launch his new life in the UK. The 24-year-old's time on the English for Speakers of Other Languages (ESOL) course was so successful that he ended up performing the lead role in a production of Romeo and Juliet, and he is now a youth worker.

      ESOL programmes are aimed at those who have some grasp of English but want to improve their speaking and listening skills, reading and writing, and understanding of regional accents. West Yorkshire Combined Authority is investing in training new ESOL teachers as a way to improve inclusion and social cohesion, and demand is increasing.

      Alshaban, who is originally from Palestine, said he travelled "unwillingly" through Syria, Jordan, and Turkey before landing in Cyprus, where he stayed for a couple of years before returning to Iraq. He remained there until 2018, but was then resettled in Bradford as part of a UN programme.

      Alshaban could speak English "quite well" when he arrived, but found there was a "bit of a struggle with understanding the accent" and "the culture was different from what I was used to". "I was told it was one of the first steps to developing in this country," he said. "I didn't really understand why I had to take it to begin with as I already spoke English, but I honestly have taken quite a lot out of it."

      He ended up reading Shakespeare's works as part of the course and becoming a youth advisory board member for the Royal Shakespeare Company. He eventually graduated in politics and international relations from Liverpool Hope University.

      submitted by /u/coffeewalnut08
      [link] [comments]

    9. 🔗 r/LocalLLaMA I feel personally attacked rss
    10. 🔗 r/LocalLLaMA I'm fully blind, and AI is a game changer for me. Are there any local LLMS that can rival claude code and codex? rss

      Hi guys,

      So, I am fully blind.

      Since AI was released to the public, I have been a max user.

      Why?

      Because it has changed my life.

      Suddenly, I am able to get very accurate image descriptions, when I get an inaccessible document, an AI can read it to me in a matter of seconds, when there is something inaccessible, I can use Python, swift, or whatever I want to build my own software that is exactly how I want it.

      So far, I have access to Claude Code pro, codex pro and Copilot for business.

      This is also draining my bank account.

      So now, I have started investigating whether there is anything that can rival this in terms of precision and production-ready apps and programs.

      Not necessarily anything I will be releasing to the public, but with Claude Code I can have a full-featured, accessible accounting program in a couple of days that helps me in my business.

      Do you know of anything?

      What is possible at the moment?

      Thank you for your time.

      submitted by /u/Mrblindguardian
      [link] [comments]

    11. 🔗 r/york Shambles sightings rss

      White chocolate shot

      submitted by /u/Ambivertpayyan
      [link] [comments]

    12. 🔗 r/york Location near hospital - gaming rss

      Hi

      I've ended up in a situation where I have to be near York hospital (around a 30 minute walk) and I have plenty of time to kill.

      I've got some games in my steam library I haven't gotten round to playing over the years

      Could anyone please suggest any cafés or other locations I could potentially sit for a few hours playing them?

      Thanks

      submitted by /u/BladedChaos
      [link] [comments]

    13. 🔗 r/wiesbaden Need help to understand how to sign contract for gas. rss

      Hey everyone,
      I'm new to Germany; I recently moved for work and rented a long-term apartment starting from 01.02.2026.
      I knew I would need to sign contracts for gas and electricity. I did so for electricity without any problems, but with the gas supplier I can't understand what is being asked of me.
      I selected Vattenfall on Check24 and entered all my data: address, name, and meter number.
      After that, I started receiving requests to specify my data, and I kept entering the same data, since it hadn't changed. I gathered things would somehow go differently if I provided a Markt-ID, but I simply don't understand what that is or where to get it; I only know it is supposed to be on my invoice.
      Eventually, on 26.02.2026, Vattenfall cancelled my application since I hadn't provided the "right data", so I tried applying again on their website.
      It's now 13.03.2026 and I just received another letter from them, basically saying "We don't like your data, give us new data".

      I've already been using gas in this apartment for a month and a half and have gone through 120 cubic meters.
      I have already received and paid invoices for electricity, but this unsettled situation with the gas provider gives me anxiety.

      Can anyone suggest what I should do in this case, or at least what is expected of me? Somehow none of these troubles came up with electricity or internet.

      Inb4: I did register my address at the citizens' office.

      submitted by /u/Dazzling_Mood2958
      [link] [comments]

    14. 🔗 ghostty-org/ghostty v1.3.1 release

      v1.3.1

    15. 🔗 r/LocalLLaMA Avacado is toast rss

      Meta's Avocado doesn't meet the standards Facebook desires, so it is now delayed until May. Zuck must be fuming after spending billions and getting subpar performance.

      https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model-delayed.html

      https://x.com/i/trending/2032258514568298991

      submitted by /u/Terminator857
      [link] [comments]

    16. 🔗 r/york Pole dancing classes in York rss

      Hi all,

      I'm sure I remember hearing about pole dancing classes in York, but I can't seem to find any. One studio is called Pole Position, but their website is down and they don't respond on Facebook or by phone, so I'm guessing it must have closed down. Does anybody know of any active classes in York?

      Thanks :)

      submitted by /u/nocrimia
      [link] [comments]

    17. 🔗 r/wiesbaden Local elections on Sunday rss

      Morning folks,

      Public service announcement: this Sunday is local election day!

      Even if it's tedious with the more than 70 votes to cast, please use this opportunity to have a say. A conservative turn in city hall threatens to roll back many of the progressive gains of recent years. This election will genuinely set the direction of city politics for years to come.

      submitted by /u/valentino_nero
      [link] [comments]

    18. 🔗 r/reverseengineering Codex vs. Claude: Which One Handles Reverse Engineering Skills Better? rss
    19. 🔗 r/wiesbaden New hygiene report online rss
    20. 🔗 r/Yorkshire Lost nuclear bunker rediscovered at Scarborough Castle rss
    21. 🔗 r/Leeds Survey on hair products and salon/barber usage rss

      Hi, I'm Callum, a student at the University of Leeds, and I am doing my dissertation on consumer influence for sustainability. This survey takes around 2 minutes to complete and is completely anonymous. You will be asked a few questions about your hair care product usage, professional hair services usage, whether you've used 'eco-friendly' products before, and what would encourage or put you off buying a hair product. If you have a spare 2 minutes between now and Monday, I'd really, really appreciate it :) x

      https://app.onlinesurveys.jisc.ac.uk/s/leeds/usage-of-hair-products-and-hair-salons

      submitted by /u/Critical-Business442
      [link] [comments]

    22. 🔗 r/york Gutter cleaning recommendations rss

      Does anyone have recommendations for local, trustworthy, gutter cleaning services in York?

      A lot of my searches for gutter cleaning services seem to end up on similar looking websites run by "big gutter". I searched this sub too, with little result.

      Thanks!

      submitted by /u/LIKE-AN-ANIMAL
      [link] [comments]

    23. 🔗 r/york Minster tower - no tix available? rss

      Does anyone know why I can’t book tickets to the minster tower today?

      Apparently they can't be booked in advance either; they have to be booked on the day?

      All a bit odd!

      Thanks!

      submitted by /u/lancelon
      [link] [comments]

    24. 🔗 vercel-labs/agent-browser v0.19.0 release

      Minor Changes

      • 56bb92b: ### New Features

        • Browserless.io provider -- Added browserless.io as a browser provider, supported in both Node.js and native daemon paths. Connect to remote Browserless instances with --provider browserless or AGENT_BROWSER_PROVIDER=browserless. Configurable via BROWSERLESS_API_KEY, BROWSERLESS_API_URL, and BROWSERLESS_BROWSER_TYPE environment variables. (#502, #746)
        • clipboard command -- Read from and write to the browser clipboard. Supports read, write <text>, copy (simulates Ctrl+C), and paste (simulates Ctrl+V) operations. (#749)
        • Screenshot output configuration -- New global flags --screenshot-dir, --screenshot-quality, --screenshot-format and corresponding AGENT_BROWSER_SCREENSHOT_DIR, AGENT_BROWSER_SCREENSHOT_QUALITY, AGENT_BROWSER_SCREENSHOT_FORMAT environment variables for persistent screenshot settings. (#749)
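The screenshot settings above follow the usual CLI precedence of explicit flag over environment variable over built-in default. A hedged sketch of that resolution order (the helper is invented for illustration; only the flag and variable names come from the release notes):

```typescript
// Hypothetical resolution order for the screenshot directory:
// --screenshot-dir flag, then AGENT_BROWSER_SCREENSHOT_DIR, then a default.
function resolveScreenshotDir(flagValue?: string): string {
  return flagValue ?? process.env.AGENT_BROWSER_SCREENSHOT_DIR ?? ".";
}
```

Using ?? rather than || keeps the semantics precise: only a missing value falls through, never merely a falsy one.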

      Bug Fixes

      * Fixed `wait --text` not working in native daemon path ([#749](https://github.com/vercel-labs/agent-browser/pull/749))
      * Fixed `BrowserManager.navigate()` and package entry point ([#748](https://github.com/vercel-labs/agent-browser/pull/748))
      * Fixed extensions not being loaded from `config.json` ([#750](https://github.com/vercel-labs/agent-browser/pull/750))
      * Fixed scroll on page load ([#747](https://github.com/vercel-labs/agent-browser/pull/747))
      * Fixed HTML retrieval by using `browser.getLocator()` for selector operations ([#745](https://github.com/vercel-labs/agent-browser/pull/745))
      
    25. 🔗 r/Leeds Looking for info on my grandfather rss

      Morning all,

      Does anybody remember, or has anyone heard of, a Black Caribbean man who went by “little Peter” (full name Peter Joseph)? He lived in Chapeltown and Harehills, then moved on to Bradford, and we think he then moved to London. He had at least two children, called Emma and Christopher (‘Chris’).

      He was born in the early 1940s and was from St Lucia. He spoke a couple of different languages, French being one of them; he was in the merchant navy before coming to England, and at some point he worked in a coal mine.

      My grandad had two distinctive gold teeth, he played in a steel drum band and they practiced every Thursday evening.

      My dad, Christopher, is apparently the double of my grandad Peter so I can provide a photo of my dad to jog people’s memories.

      Thank you all for reading!

      submitted by /u/cprez04
      [link] [comments]

    26. 🔗 r/LocalLLaMA Saw this somewhere on LinkedIn 😂 rss

      submitted by /u/Optimalutopic
      [link] [comments]

    27. 🔗 r/york York guys in their 20s rss

      Hi all, I’m 26 and have been living in York for just over a year now with a couple. I love the city and have made plenty of “friendly acquaintances” through sports clubs, but I don’t necessarily feel like I’ve made many “friends”, as many are in committed relationships and feel like they’re at a different life stage to me, or always have to come as a package 😂

      I love any sport; I especially run a lot and play a bit of football and badminton. I’m a big foodie and enjoy going out to restaurants as well as cooking for myself. I go to Cineworld a fair bit, and even though I don’t drink, I enjoy a good pub quiz.

      I've seen these kinds of posts in other places where people recommend the Meetup app, but I don’t think it’s as good as it used to be; there doesn’t seem to be much on there for my age, and a lot of Facebook groups tend to be much older folk too.

      So if there are any guys in their 20s in a similar situation or know of good spots, please reach out!

      submitted by /u/Tall_Tiger_1999
      [link] [comments]

    28. 🔗 r/reverseengineering Agentic Reverse Engineering + Binary Analysis with Kong rss
    29. 🔗 r/Harrogate Best Fish and Chips in Harrogate? rss

      I'm in Pannal for the next few days and I'd love to have some local fish and chips.

      I know it's a controversial topic, but who makes the best fish and chips?

      submitted by /u/coffeebugtravels
      [link] [comments]

    30. 🔗 r/wiesbaden Lost wallet rss

      Lost wallet

      Hello,

      I lost my wallet near the Lidl on Angelika-Thiels-Strasse. Generous reward!

      submitted by /u/StockDirector4021
      [link] [comments]

    31. 🔗 r/Leeds Was looking on Bustimes.org as you do, here's a look at 1 of 5 (4 in service, one as spare) of the Volvo B8 MCV Evoras coming to GAWY X98/X99. Their debut on the route depends on when the CCTV cameras arrive & get fitted. rss

      If I remember correctly from the enthusiast page I'm on, they'll have dealer spec, so if you've been on the ones on Connexions Buses' 11 you'll have an idea of what to expect. Compared to the ADL Enviro200MMCs currently in service, these are bigger, higher capacity, and better at hills thanks to their more powerful Volvo 8-litre engine (the ADL ones, I think, have a 4.5-litre Cummins engine in those specific examples).

      submitted by /u/CaptainYorkie1
      [link] [comments]

    32. 🔗 r/reverseengineering Android Vulnerability Reproduction with OpenClaw rss
    33. 🔗 sacha chua :: living an awesome life Comparing pronunciation recordings across time rss

      Update: I added a column for Feb 20, the first session with the sentences. I also added keyboard shortcuts (1..n) for playing the audio of the row that the mouse is on.

      My French tutor gave me a list of sentences to help me practise pronunciation.

      Sentences
      • Maman peint un grand lapin blanc.
      • Un enfant intelligent mange lentement.
      • Le roi croit voir trois noix.
      • Le témoin voit le chemin loin.
      • Moins de foin au loin ce matin.
      • La laine beige sèche près du collège.
      • La croquette sèche dans l'assiette.
      • Elle mène son frère à l'hôtel.
      • Le verre vert est très clair.
      • Elle aimait manger et rêver.
      • Le jeu bleu me plaît peu.
      • Ce neveu veut un jeu.
      • Le feu bleu est dangereux.
      • Le beurre fond dans le cœur chaud.
      • Les fleurs de ma sœur sentent bon.
      • Le hibou sait où il va.
      • L'homme fort mord la pomme.
      • Le sombre col tombe.
      • L'auto saute au trottoir chaud.
      • Le château d'en haut est beau.
      • Le cœur seul pleure doucement.
      • Tu es sûr du futur ?
      • Trois très grands trains traversent trois trop grandes rues.
      • Je veux deux feux bleus, mais la reine préfère la laine beige.
      • Vincent prend un bain en chantant lentement.
      • La mule sûre court plus vite que le loup fou.
      • Luc a bu du jus sous le pont où coule la boue.
      • Le frère de Robert prépare un rare rôti rouge.
      • La mule court autour du mur où hurle le loup.

      I can fuzzy-match these with the word timing JSON from WhisperX, like this.

      Extract all approximately matching phrases
      (subed-record-extract-all-approximately-matching-phrases
         sentences
         "/home/sacha/sync/recordings/2026-02-20-raphael.json"
         "/home/sacha/proj/french/analysis/virelangues/2026-02-20-raphael-script.vtt")
      

      Then I can use subed-record to manually tweak them, add notes, and so on. I end up with VTT files like 2026-03-06-raphael-script.vtt. I can assemble the snippets for a session into a single audio file, like this:
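      The fuzzy-matching step can be sketched in plain Python (a rough illustration of the idea, not the actual Elisp implementation; the word-timing shape is assumed from WhisperX's JSON output):

```python
import difflib

def match_phrase(words, phrase, threshold=0.8):
    """Find the span of word timings that best matches PHRASE.

    WORDS is a list of dicts shaped like WhisperX word output,
    e.g. {"word": "maman", "start": 0.1, "end": 0.4}.
    Returns (start_time, end_time, score) or None if nothing
    clears THRESHOLD.
    """
    target = [w.strip(".,?!").lower() for w in phrase.split()]
    n = len(target)
    best = None
    for i in range(len(words)):
        # Try windows close to the phrase length, allowing a little
        # slack for words the recognizer split or merged.
        for j in range(i + max(1, n - 2), min(len(words), i + n + 2) + 1):
            window = [w["word"].strip(".,?!").lower() for w in words[i:j]]
            score = difflib.SequenceMatcher(
                None, " ".join(window), " ".join(target)).ratio()
            if score >= threshold and (best is None or score > best[2]):
                best = (words[i]["start"], words[j - 1]["end"], score)
    return best
```

      Each matched span then gives the start and end times for cutting a snippet for that sentence.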

      I wanted to compare my attempts over time, so I wrote some code to use Org Mode and subed-record to build a table with little audio players that I can use both within Emacs and in the exported HTML. This collects just the last attempts for each sentence during a number of my sessions (both with the tutor and on my own). The score is from the Microsoft Azure pronunciation assessment service. I'm not entirely sure about its validity yet, but I thought I'd add it for fun. * indicates where I've added some notes from my tutor, which should be available as a title attribute on hover. (Someday I'll figure out a mobile-friendly way to do that.)

      Calling it with my sentences and files
      (my-lang-summarize-segments
       sentences
       '(("/home/sacha/proj/french/analysis/virelangues/2026-02-20-raphael-script.vtt" . "Feb 20")
       ;("~/sync/recordings/processed/2026-02-20-raphael-tongue-twisters.vtt" . "Feb 20")
              ("~/sync/recordings/processed/2026-02-22-virelangues-single.vtt" . "Feb 22")
              ("~/proj/french/recordings/2026-02-26-virelangues-script.vtt" . "Feb 26")
              ("~/proj/french/recordings/2026-02-27-virelangues-script.vtt" . "Feb 27")
              ("~/proj/french/recordings/2026-03-03-virelangues.vtt" . "Mar 3")
              ("/home/sacha/sync/recordings/processed/2026-03-03-raphael-reference-script.vtt" . "Mar 3")
              ("~/proj/french/analysis/virelangues/2026-03-06-raphael-script.vtt" . "Mar 6")
              ("~/proj/french/analysis/virelangues/2026-03-12-virelangues-script.vtt" . "Mar 12"))
       "clip"
       #'my-lang-subed-record-get-last-attempt
       #'my-lang-subed-record-cell-info
       t
       )
      
      Feb 20 Feb 22 Feb 26 Feb 27 Mar 3 Mar 3 Mar 6 Mar 12 Text
      ▶️ 63* ▶️ 96 ▶️ 95 ▶️ 94 ▶️ 83 ▶️ 83* ▶️ 81* ▶️ 88 Maman peint un grand lapin blanc.
      ▶️ 88* ▶️ 95 ▶️ 99 ▶️ 99 ▶️ 96 ▶️ 89* ▶️ 92* ▶️ 83 Un enfant intelligent mange lentement.
      ▶️ 84* ▶️ 97 ▶️ 97 ▶️ 96 ▶️ 94 ▶️ 95* ▶️ 98* ▶️ 99 Le roi croit voir trois noix.
      ▶️ 80* ▶️ 85 ▶️ 77 ▶️ 94 ▶️ 97   ▶️ 92* ▶️ 88 Le témoin voit le chemin loin.
      ▶️ 72* ▶️ 97 ▶️ 95 ▶️ 77 ▶️ 92   ▶️ 89* ▶️ 86 Moins de foin au loin ce matin.
      ▶️ 79* ▶️ 95 ▶️ 76 ▶️ 95 ▶️ 76 ▶️ 90* ▶️ 90* ▶️ 79 La laine beige sèche près du collège.
      ▶️ 67* ▶️ 99 ▶️ 85 ▶️ 81 ▶️ 85 ▶️ 99* ▶️ 97* ▶️ 97 La croquette sèche dans l'assiette.
      ▶️ 88* ▶️ 99 ▶️ 100 ▶️ 100 ▶️ 98 ▶️ 100* ▶️ 99* ▶️ 100 Elle mène son frère à l'hôtel.
      ▶️ 77* ▶️ 87 ▶️ 99 ▶️ 93 ▶️ 87   ▶️ 87* ▶️ 99 Le verre vert est très clair.
      ▶️ 100* ▶️ 94 ▶️ 100 ▶️ 99 ▶️ 99 ▶️ 99* ▶️ 100* ▶️ 100 Elle aimait manger et rêver.
      ▶️ 78* ▶️ 98 ▶️ 99 ▶️ 98 ▶️ 98 ▶️ 92*   ▶️ 88 Le jeu bleu me plaît peu.
      ▶️ 78* ▶️ 97 ▶️ 85 ▶️ 95 ▶️ 85     ▶️ 85 Ce neveu veut un jeu.
      ▶️ 73* ▶️ 95 ▶️ 95 ▶️ 96 ▶️ 97     ▶️ 100 Le feu bleu est dangereux.
      ▶️ 87* ▶️ 76 ▶️ 65 ▶️ 97 ▶️ 85 ▶️ 74* ▶️ 85* ▶️ 96 Le beurre fond dans le cœur chaud.
      ▶️ 84* ▶️ 43 ▶️ 85 ▶️ 79 ▶️ 75     ▶️ 98 Les fleurs de ma sœur sentent bon.
      ▶️ 70* ▶️ 86 ▶️ 79 ▶️ 76 ▶️ 87 ▶️ 84   ▶️ 98 Le hibou sait où il va.
      ▶️ 92* ▶️ 95 ▶️ 86 ▶️ 92 ▶️ 98 ▶️ 99*   ▶️ 94 L'homme fort mord la pomme.
      ▶️ 83* ▶️ 73 ▶️ 69 ▶️ 81 ▶️ 60 ▶️ 96*   ▶️ 81 Le sombre col tombe.
      ▶️ 39* ▶️ 49 ▶️ 69 ▶️ 56 ▶️ 69 ▶️ 96*   ▶️ 94 L'auto saute au trottoir chaud.
      ▶️ 82 ▶️ 84 ▶️ 85 ▶️ 98 ▶️ 94 ▶️ 96*   ▶️ 99 Le château d'en haut est beau.
      ▶️ 89 ▶️ 85 ▶️ 75 ▶️ 91 ▶️ 52 ▶️ 75* ▶️ 70* ▶️ 98 Le cœur seul pleure doucement.
      ▶️ 98*   ▶️ 99 ▶️ 99 ▶️ 95 ▶️ 93* ▶️ 97* ▶️ 99 Tu es sûr du futur ?
          ▶️ 97 ▶️ 93 ▶️ 92 ▶️ 85*   ▶️ 90 Trois très grands trains traversent trois trop grandes rues.
          ▶️ 94 ▶️ 85 ▶️ 97 ▶️ 82*   ▶️ 92 Je veux deux feux bleus, mais la reine préfère la laine beige.
          ▶️ 91 ▶️ 79 ▶️ 87 ▶️ 82*   ▶️ 94 Vincent prend un bain en chantant lentement.
          ▶️ 89 ▶️ 91 ▶️ 91 ▶️ 84*   ▶️ 92 La mule sûre court plus vite que le loup fou.
          ▶️ 91 ▶️ 93 ▶️ 93 ▶️ 92*   ▶️ 96 Luc a bu du jus sous le pont où coule la boue.
          ▶️ 88 ▶️ 71 ▶️ 94 ▶️ 86*   ▶️ 92 Le frère de Robert prépare un rare rôti rouge.
          ▶️ 81 ▶️ 84 ▶️ 88 ▶️ 67*   ▶️ 94 La mule court autour du mur où hurle le loup.

      Pronunciation still feels a bit hit or miss. Sometimes I say a sentence and my tutor says "Oui," and then I say it again and he says "Non, non…" The /ʁ/ and /y/ sounds are hard.

      I like seeing these compact links in an Org Mode table and being able to play them, thanks to my custom audio link type. It should be pretty easy to write a function that lets me use a keyboard shortcut to play the audio (maybe using the keys 1-9?) so that I can bounce between them for comparison.

      If I screen-share from Google Chrome, I can share the tab with audio, so my tutor can listen to things at the same time. Could be fun to compare attempts so that I can try to hear the differences better. Hmm, actually, let's try adding keyboard shortcuts that let me use 1-8 to play the current table row. Mwahahaha! It works!

      Code for summarizing the segments
      (defun my-lang-subed-record-cell-info (item file-index file sub)
        (let* ((sound-file (expand-file-name (format "%s-%s-%d.opus"
                                                     prefix
                                                     (my-transform-html-slugify item)
                                                     (1+ file-index))))
               (score (car (split-string
                            (or
                             (subed-record-get-directive "#+SCORE" (elt sub 4)) "")
                            ";")))
               (note (replace-regexp-in-string
                      (concat "^" (regexp-quote (cdr file))
                              "\\(: \\)?")
                      ""
                      (or (subed-record-get-directive "#+NOTE" (elt sub 4)) ""))))
          (when (or always-create (not (file-exists-p sound-file)))
            (subed-record-extract-audio-for-current-subtitle-to-file sound-file sub))
          (org-link-make-string
           (concat "audio:" sound-file "?icon=t"
                   (format "&source=%s&source-start=%s" (car file) (elt sub 1))
                   (format "&title=%s"
                           (url-hexify-string
                            (if (string= note "")
                                (cdr file)
                              (concat (cdr file) ": " note)))))
           (concat
            "▶️"
            (if score (format " %s" score) "")
            (if (string= note "") "" "*")))))
      
      (defun my-lang-subed-record-get-last-attempt (item file)
        "Return the last subtitle matching ITEM in FILE."
        (car
         (last
          (seq-remove
           (lambda (o) (string-match "#\\+SKIP" (or (elt o 4) "")))
           (learn-lang-subed-record-collect-matching-subtitles
            item
            (list file)
            nil
            nil
            'my-subed-simplify)))))
      
      (defun my-lang-summarize-segments (items files prefix attempt-fn cell-fn &optional always-create)
        (cons
         (append
          (seq-map 'cdr files)
          (list "Text"))
         (seq-map
          (lambda (item)
            (append
             (seq-map-indexed
              (lambda (file file-index)
                (let* ((sub (funcall attempt-fn item file)))
                  (if sub
                      (funcall cell-fn item file-index file sub)
                    "")))
              files)
             (list item)))
          items)))
      

      Some code for doing this stuff is in sachac/learn-lang on Codeberg.

      You can e-mail me at sacha@sachachua.com.

    34. 🔗 Rust Blog Call for Testing: Build Dir Layout v2 rss

      We welcome people to try the nightly-only cargo -Zbuild-dir-new-layout and report issues. While the layout of the build dir is internal-only, many projects rely on its unspecified details due to missing features within Cargo. We've performed a crater run, but that won't cover everything, so we need help identifying tools and processes that rely on these details and reporting issues to those projects so they can update to the new layout or support both layouts.

      How to test this?

      With at least nightly 2026-03-10, run your tests, release processes, and anything else that may touch build-dir/target-dir with the -Zbuild-dir-new-layout flag.

      For example:

      $ cargo test -Zbuild-dir-new-layout
      

      Note: if you see failures, the problem may not be isolated to just -Zbuild-dir-new-layout. With Cargo 1.91, users can separate where to store intermediate build artifacts (build-dir) from final artifacts (still in target-dir). You can verify this by running with only CARGO_BUILD_BUILD_DIR=build set. We are evaluating changing the default for build-dir in #16147.

      Outcomes may include:

      Known failure modes:

      • Inferring a [[bin]]s path from a [[test]]s path:
      • Build scripts looking up target-dir from their binary or OUT_DIR: see Issue #13663
        • Update current workarounds to support the new layout
      • Looking up user-requested artifacts from rustc, see Issue #13672
        • Update current workarounds to support the new layout

      Library support status as of publish time:

      What is not changing?

      • The layout of final artifacts within the target dir.
      • Nesting of build artifacts under the profile and the target tuple, if specified.

      What is changing?

      We are switching from organizing by content type to scoping the content by the package name and a hash of the build unit and its inputs.

      Here is an example of the current layout, assuming you have a package named lib and a package named bin, and both have a build script:

      build-dir/
      ├── CACHEDIR.TAG
      └── debug/
          ├── .cargo-lock                       # file lock protecting access to this location
          ├── .fingerprint/                     # build cache tracking
          │   ├── bin-[BUILD_SCRIPT_RUN_HASH]/*
          │   ├── bin-[BUILD_SCRIPT_BIN_HASH]/*
          │   ├── bin-[HASH]/*
          │   ├── lib-[BUILD_SCRIPT_RUN_HASH]/*
          │   ├── lib-[BUILD_SCRIPT_BIN_HASH]/*
          │   └── lib-[HASH]/*
          ├── build/
          │    ├── bin-[BIN_HASH]/*             # build script binary
          │    ├── bin-[RUN_HASH]/out/          # build script run OUT_DIR
          │    ├── bin-[RUN_HASH]/*             # build script run cache
          │    ├── lib-[BIN_HASH]/*             # build script binary
          │    ├── lib-[RUN_HASH]/out/          # build script run OUT_DIR
          │    └── lib-[RUN_HASH]/*             # build script run cache
          ├── deps/
          │   ├── bin-[HASH]*                   # binary and debug information
          │   ├── lib-[HASH]*                   # library and debug information
          │   └── liblib-[HASH]*                # library and debug information
          ├── examples/                         # unused in this case
          └── incremental/...                   # managed by rustc
      

      The proposed layout:

      build-dir/
      ├── CACHEDIR.TAG
      └── debug/
          ├── .cargo-lock                       # file lock protecting access to this location
          ├── build/
          │   ├── bin/                          # package name
          │   │   ├── [BUILD_SCRIPT_BIN_HASH]/
          │   │   │   ├── fingerprint/*         # build cache tracking
          │   │   │   └── out/*                 # build script binary
          │   │   ├── [BUILD_SCRIPT_RUN_HASH]/
          │   │   │   ├── fingerprint/*         # build cache tracking
          │   │   │   ├── out/*                 # build script run OUT_DIR
          │   │   │   └── run/*                 # build script run cache
          │   │   └── [HASH]/
          │   │       ├── fingerprint/*         # build cache tracking
          │   │       └── out/*                 # binary and debug information
          │   └── lib/                          # package name
          │       ├── [BUILD_SCRIPT_BIN_HASH]/
          │       │   ├── fingerprint/*         # build cache tracking
          │       │   └── out/*                 # build script binary
          │       ├── [BUILD_SCRIPT_RUN_HASH]/
          │       │   ├── fingerprint/*         # build cache tracking
          │       │   ├── out/*                 # build script run OUT_DIR
          │       │   └── run/*                 # build script run cache
          │       └── [HASH]/
          │           ├── fingerprint/*         # build cache tracking
          │           └── out/*                 # library and debug information
          └── incremental/...                   # managed by rustc
      

      For more information on these Cargo internals, see the mod layout documentation.
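      As a concrete (and purely illustrative) sketch of the difference, the following Python maps a build unit to a path under each layout. The package names and hash values are placeholders, and the real paths remain unspecified Cargo internals:

```python
from pathlib import PurePosixPath

def old_layout(pkg, unit_hash, kind):
    """Current layout: files are grouped by content type."""
    root = PurePosixPath("build-dir/debug")
    if kind == "fingerprint":
        return root / ".fingerprint" / f"{pkg}-{unit_hash}"
    if kind == "build-script-out":
        return root / "build" / f"{pkg}-{unit_hash}" / "out"
    # Compiler outputs live in deps/, named like pkg-HASH*.
    return root / "deps"

def new_layout(pkg, unit_hash, kind):
    """Proposed layout: everything for one build unit lives under
    build/<package>/<hash>/."""
    base = PurePosixPath("build-dir/debug/build") / pkg / unit_hash
    if kind == "fingerprint":
        return base / "fingerprint"
    # Both build script OUT_DIR contents and compiler outputs land
    # under out/ for their respective unit hashes.
    return base / "out"
```

      The key point: under the new layout, removing build-dir/debug/build/&lt;package&gt;/&lt;hash&gt;/ deletes every intermediate for that unit in one go, which is what makes per-unit caching tractable.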

      Why is this being done?

      ranger-ross has worked tirelessly on this as a stepping stone to cross-workspace caching which will be easier when we can track each cacheable unit in a self-contained directory.

      This also unblocks work on:

      Along the way, we found this helps with:

      While the Cargo team does not officially endorse sharing a build-dir across workspaces, that last item should reduce the chance of encountering problems for those who choose to.

      Future work

      We will use the experience of this layout change to help guide how and when to perform any future layout changes, including:

      • Efforts to reduce path lengths to reduce risks for errors for developers on Windows
      • Experimenting with moving artifacts out of the --profile and --target directories, allowing sharing of more artifacts where possible

      We did not make all of the layout changes now, partly to narrow the scope and partly because some are blocked on the lock change, which is itself blocked on this layout change.

      We would also like to work to decouple projects from the unspecified details of build-dir.

  3. March 12, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-12 rss

      IDA Plugin Updates on 2026-03-12

      New Releases:

      Activity:

      • augur
      • binlex
        • 19b79a61: fix windows ci/cd warnings for node
        • fdadd375: simplify vex implementation
        • 5425c6cd: cleanup
        • 836948c2: simplify ratios, not needed
        • f035159f: simplify disassemblers api, and bump python binding lib
        • 0ac42a9a: cfg api change absorb to merge, makes it eaiser to understand
        • 957657f3: fix edges and rip-relative jumps
        • 1a895dff: fix disassembling bug queuing
        • 5a0fd3a9: performance
        • bd504b69: hash compare restore
      • binsync
        • e085ac93: Add the test cases that were unable to be added in the original serve…
        • e3bf4a15: fix: enhance robustness of gui launch (#507)
      • btrace
        • da12f7b9: Arch-specific handlers compilation
      • capa
        • f1800b5e: Sync capa rules submodule
        • 43f556ca: Sync capa rules submodule
        • 5f8c06c6: Sync capa rules submodule
        • ceaa3b6d: webui: include feature type in global search (match, regex, api, …) (…
      • haruspex
      • ida-dbimporter
        • 9e0ace33: add pypi package info to README
        • 44406c14: Merge pre-release fixes (#6) for 0.0.2
      • IDA-MCP
        • 51e9b8ef: Add idapython skill and document WSL support
        • 3afe2e5c: Refactor MCP runtime and proxy structure
        • e24456c5: Add installer and refresh docs
        • f9ab4a87: Fix MCP lifecycle, resources, and type handling
      • idasql
        • e6b41cab: docs: clarify pseudocode comment anchor selection
        • 366385a6: chore: prepare v0.0.11 release
        • 95451f42: Merge remote-tracking branch 'origin/main' into work
        • da827db6: fix: avoid replaying stale funcs prototype during rename
        • 94668b1f: Merge pull request #24 from allthingsida/work
        • c0eac083: fix: simplify RPATH to match SDK GNU make convention
        • 53eb0704: fix: remove GIT_SHALLOW for pinned fastmcpp commit hash
        • 46a27c14: idasql: improve pseudocode comment handling and entity search
      • python-elpida_core.py
        • ac9d7d3d: fix: merge-safe S3 push + add regenerate_d15_index to Docker
        • 9bd9ea55: update System tab version header to v3.0.0
        • 2c382298: birth living axiom agents: 12 axioms that discuss, debate, vote, and act
        • 3b545d41: close vocabulary gaps: align all axiom/domain names to canonical config
        • 6e57821d: Unfreeze elpida_core.py — Agent of Agents (v2.0.0)
        • 8a138119: feat: A11 — World (7/5 Septimal Tritone) codified
      • rhabdomancer
    2. 🔗 r/LocalLLaMA OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories rss

      Overview

      OmniCoder-9B is a 9-billion parameter coding agent model built by Tesslate, fine-tuned on top of Qwen3.5-9B's hybrid architecture (Gated Delta Networks interleaved with standard attention). It was trained on 425,000+ curated agentic coding trajectories spanning real-world software engineering tasks, tool use, terminal operations, and multi-step reasoning.

      The training data was specifically built from Claude Opus 4.6 agentic and coding reasoning traces, targeting scaffolding patterns from Claude Code, OpenCode, Codex, and Droid. The dataset includes successful trajectories from models like Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro.

      The model shows strong agentic behavior: it recovers from errors (read-before-write), responds to LSP diagnostics, and uses proper edit diffs instead of full rewrites. These patterns were learned directly from the real-world agent trajectories it was trained on.

      Key Features

      • Trained on Frontier Agent Traces : Built from Claude Opus 4.6, GPT-5.3-Codex, GPT-5.4, and Gemini 3.1 Pro agentic coding trajectories across Claude Code, OpenCode, Codex, and Droid scaffolding
      • Hybrid Architecture : Inherits Qwen3.5's Gated Delta Networks interleaved with standard attention for efficient long-context processing
      • 262K Native Context : Full 262,144 token context window, extensible to 1M+
      • Error Recovery : Learns read-before-write patterns, responds to LSP diagnostics, and applies minimal edit diffs instead of full rewrites
      • Thinking Mode : Supports <think>...</think> reasoning chains for complex problem decomposition
      • Apache 2.0 : Fully open weights, no restrictions
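      If you consume the model's output programmatically, the &lt;think&gt;...&lt;/think&gt; blocks usually need to be separated from the final answer. A minimal sketch (my own parsing approach, not part of the model release):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_thinking(text):
    """Separate <think>...</think> reasoning chains from the answer.

    Returns (list_of_thoughts, answer_text).
    """
    thoughts = [m.group(1).strip() for m in THINK_RE.finditer(text)]
    answer = THINK_RE.sub("", text).strip()
    return thoughts, answer
```

      This keeps the reasoning available for logging while only the answer is shown to the user or applied by the agent scaffold.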

      https://huggingface.co/Tesslate/OmniCoder-9B

      submitted by /u/DarkArtsMastery
      [link] [comments]

    3. 🔗 vercel-labs/agent-browser v0.18.0 release

      Minor Changes

      • 942b8cd: New Features

        • inspect command -- Opens Chrome DevTools for the active page by launching a local proxy server that forwards the DevTools frontend to the browser's CDP WebSocket. Commands continue to work while DevTools is open. Implemented in both Node.js and native paths. (#736)
        • get cdp-url subcommand -- Retrieve the Chrome DevTools Protocol WebSocket URL for the active page, useful for external debugging tools. (#736)
        • Native screenshot annotate -- The --annotate flag for screenshots now works in the native Rust daemon, bringing parity with the Node.js path. (#706)

      Improvements

      * **KERNEL_API_KEY now optional** - External credential injection no longer requires `KERNEL_API_KEY` to be set, making it easier to use Kernel with pre-configured environments. ()
      * **Browserbase simplified** - Removed the `BROWSERBASE_PROJECT_ID` requirement, reducing setup friction for Browserbase users. ([#625](https://github.com/vercel-labs/agent-browser/pull/625))

      Bug Fixes

      * Fixed Browserbase API using incorrect endpoint to release sessions ([#707](https://github.com/vercel-labs/agent-browser/pull/707))
      * Fixed CDP connect paths using hardcoded 10s timeout instead of `getDefaultTimeout()` ([#704](https://github.com/vercel-labs/agent-browser/pull/704))
      * Fixed lone Unicode surrogates causing errors by sanitizing with `toWellFormed()` ([#720](https://github.com/vercel-labs/agent-browser/pull/720))
      * Fixed CDP connection failure on IPv6-first systems ([#717](https://github.com/vercel-labs/agent-browser/pull/717))
      * Fixed recordings not inheriting the current viewport settings ([#718](https://github.com/vercel-labs/agent-browser/pull/718))
      
    4. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +3 releases, ~3 changed rss
      sync repo: +1 plugin, +3 releases, ~3 changed
      
      ## New plugins
      - [HashDB](https://github.com/OALabs/hashdb-ida) (1.10.0)
      
      ## New releases
      - [DBImporter](https://github.com/HexRaysSA/ida-dbimporter): 0.0.2
      - [Suture](https://github.com/libtero/suture): 1.2.0
      
      ## Changes
      - [bindiff](https://github.com/HexRays-plugin-contributions/bindiff):
        - 8.0.0: download URL changed
      - [binexport](https://github.com/HexRays-plugin-contributions/binexport):
        - 12.0.0: download URL changed
      - [xray](https://github.com/HexRays-plugin-contributions/xray):
        - 2025.9.24: download URL changed
      
    5. 🔗 r/reverseengineering Reverse Engineering the undocumented ResetEngine.dll: A C++ tool to programmatically trigger a silent Windows Factory Reset (PBR) bypassing SystemSettings UI. rss
    6. 🔗 r/Yorkshire The Life of Chuck rss

      Just started watching this on Netflix.... this is what they think North Yorkshire looks like?

      submitted by /u/Neffwood
      [link] [comments]

    7. 🔗 r/reverseengineering Near complete hypervisor, driver, and system binary analysis for the Xbox Series consoles rss
    8. 🔗 r/york Yorks Royal Chamberpot rss

      A Charles II chamberpot made by Marmaduke Best, York. Marmaduke Rawdon gave the City of York a "silver chamber pott of the value of ten punds". In 1850, Queen Victoria’s husband, Prince Albert, visited the Mansion House and may have used the chamberpot!

      submitted by /u/York_shireman
      [link] [comments]

    9. 🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss

      Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?

      A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 80+ members and counting!

      We had a successful recruitment post on here a few months ago which blew up and got overwhelming, so we had to trickle people in. There are too many comments to go through now, so I'm starting a fresh post to add more people.

      The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.

      We have regular nights out especially on Weekends (Keyclub Club Nights, Spoons, Bars, NQ64, Pixel Bar, Flight Club, Cinema trips.. anything fun really!) which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave. but we are always planning social events, mid week chill things etc

      If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome

      I will invite people in slowly so as to keep the ratio of ages, sexes etc. balanced, so there's always people of a similar age around.

      Leave a comment & I'll DM an invite when available! x

      submitted by /u/rmonkey100
      [link] [comments]

    10. 🔗 r/LocalLLaMA Qwen3.5-9B is actually quite good for agentic coding rss

      I have to admit I am quite impressed. My hardware is an Nvidia GeForce RTX 3060 with 12 GB VRAM, so it's quite limited. I have been "model-hopping" to see what works best for me.
      I mainly did my tests with Kilo Code, but sometimes I tried Roo Code as well.
      Originally I used a customized Qwen 2.5 Coder for tool calls. It was relatively fast but would usually fail doing tool calls.

      Then I tested multiple Unsloth quantizations of Qwen 3 Coder. 1-bit quants also worked relatively fast but usually failed doing tool calls as well. However, I've been using UD-TQ1_0 for code completion with Continue and it has been quite good, better than my experience with smaller Qwen2.5 Coder models. 2-bit quants worked a little better (they would still fail sometimes), but they started feeling really slow and kind of unstable.

      Then, similarly to my original tests with Qwen 2.5, I tried this version of Qwen3, also optimized for tools (14B). My experience was significantly better but still a bit slow; I should probably have gone with 8B instead. I noticed that these general Qwen versions that are not optimized for coding worked better for me, probably because they were smaller and fit better, so instead of trying Qwen3-8B I went with Qwen3.5-9B, and this is where I got really surprised.

      I finally had the agent working for more than an hour, doing fairly significant work and capable of carrying on by itself without getting stuck.

      I know every setup is different, but if you are running on consumer hardware with limited VRAM, I think this represents amazing progress.

TL;DR: Qwen3.5 (9B) with 12 GB VRAM actually works very well for agentic tool calls. Unsloth Qwen3 Coder 30B UD-TQ1_0 is good for code completion.

      submitted by /u/Lualcala
      [link] [comments]

    11. 🔗 r/reverseengineering Live From RE//verse 2026: WARP Signatures with Mason Reed (Stream - 06/03/2026) rss
    12. 🔗 sacha chua :: living an awesome life Small steps towards using OpenAI-compatible text-to-speech services with speechd-el or emacspeak rss

Speech synthesis has come a long way since I first tried out Emacspeak in 2002. Kokoro TTS and Piper offer more natural-sounding voices now, although the initial delay in loading the models and generating speech means that they aren't quite ready to completely replace espeak, which is faster but more robotic. I've been using the Kokoro FastAPI through my own functions for working with various speech systems. I wanted to see if I could get Kokoro and other OpenAI-compatible text-to-speech services to work with either speechd-el or Emacspeak, just in case I could take advantage of the rich functionality either provides for speech-synthesized Emacs use. speechd-el is easier to layer on top of an existing Emacs if you only want occasional speech, while Emacspeak voice-enables many packages to an extent beyond simply speaking what's on the screen.
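For anyone curious what "OpenAI-compatible" means at the HTTP level, here is a minimal Python sketch that builds (but does not send) a request to the standard /v1/audio/speech endpoint. The base URL, model name, and voice name are placeholders; the actual values depend on the Kokoro or Piper server you run.

```python
import json
import urllib.request

def speech_request(base_url, text, voice="af_sky", model="kokoro"):
    """Build a POST request for an OpenAI-compatible /v1/audio/speech
    endpoint. Voice and model names are server-specific placeholders."""
    payload = {
        "model": model,
        "input": text,
        "voice": voice,
        "response_format": "mp3",
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/audio/speech",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local server; port 8880 is just an example.
req = speech_request("http://localhost:8880", "Bonjour, je m'appelle Emacs.")
# urllib.request.urlopen(req) would return audio bytes to save or play.
```

Saving the returned bytes to a file is also what makes near-instant replays possible, as described below.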

      Speech synthesis is particularly helpful when I'm learning French because I can use it as a reference for what a paragraph or sentence should sound like. It's not perfect. Sometimes it uses liaisons that my tutor and Google Translate don't use. But it's a decent enough starting point. I also used it before to read out IRC mentions and compile notifications so that I could hear them even if I was paying attention to a different activity.

      Here's a demonstration of speechd reading out the following lines using the code I've just uploaded to https://codeberg.org/sachac/speechd-ai:

      • The quick brown fox jumps over the lazy dog.
      • Now let's set the language to French so we can read the next line.
      • Bonjour, je m'appelle Emacs.

      Screencast showing speechd-el

      There's about a 2-second delay between the command and the start of the audio for the sentence.

      Note that speechd-speak-read-sentence fails in some cases where (forward-sentence 1) isn't the same place as (backward-sentence 1) (forward-sentence 1), which can happen when you're in an Org Mode list. I've submitted a patch upstream.

      Aside from that, speechd-speak-set-language, speechd-speak-read-paragraph and speechd-speak-read-region are also useful commands. I think the latency makes this best-suited for reading paragraphs, or for shadowing sentences for language learning.

      I'm still trying to figure out how to get speechd-speak to work as smoothly as I'd like. I think I've got it set up so that the server falls back to espeak for short texts so that it can handle words or characters better, and uses the specified server for longer ones. I'd like to get to the point where it can handle all the things that speechd usually does, like saying lines as I navigate through them or giving me feedback as I'm typing. Maybe it can use espeak for fast feedback character by character and word by word, and then use Kokoro TTS for the full sentence when I finish. Then it will be possible to use it to type things without looking at the screen.

After putting this together, I still find myself leaning towards my own functions because they make it easy to save the generated speech output to a file, which is handy for saving reference audio that I can play on my phone and for making replays almost instant. That could also be useful for pre-generating the next paragraph to make it flow more smoothly. Still, it was interesting making something that is compatible with existing protocols and libraries.

Posting it in case anyone else wants to use it as a starting point. The repository also contains the starting point for an Emacspeak-compatible speech server. See speechd-ai/README.org for more details.

      https://codeberg.org/sachac/speechd-ai

      You can e-mail me at sacha@sachachua.com.

    13. 🔗 r/Leeds Road closed by Wellington Place rss

      Does anyone know what happened here? There seems to be a car with a couple of windows smashed out and the police have closed off the road (see pics). Car has been there since about 11.30am and they cleared the builders out of the building site as well

      submitted by /u/watchitspaceman
      [link] [comments]

    14. 🔗 r/reverseengineering Debugging An Undebuggable App rss
    15. 🔗 r/Yorkshire Is there a clear footpath walk from whitby to Robinhoods Bay? rss

Not been in years, and I'm considering a day out this weekend.

      submitted by /u/saltlampsandphotos
      [link] [comments]

    16. 🔗 r/reverseengineering Chip Uploading - Emulation Online rss
    17. 🔗 r/reverseengineering Archive of classic reverse engineering tutorials (Armadillo, ASProtect, Themida, SoftICE era) rss
    18. 🔗 Kagi Small Web Just Got Bigger rss

      Small Web, the non-commercial part of the internet made by real people, has always been at the heart of what we do at Kagi. Today, we're adding to the Small Web experience with new browser extensions,...

    19. 🔗 r/reverseengineering GitHub - iss4cf0ng/Elfina: Elfina is a multi-architecture ELF loader supporting x86 and x86-64 binaries. rss
    20. 🔗 r/reverseengineering HellsUchecker: ClickFix to blockchain-backed backdoor rss
    21. 🔗 r/Leeds Budget friendly places to get fresh flowers? Thought about Leeds market? Thanks!💐 rss

      Not sure of prices these days..

      submitted by /u/Bright_Fill_4770
      [link] [comments]

    22. 🔗 r/reverseengineering Reverse Engineering Action's Cheap Fichero Labelprinter rss
    23. 🔗 r/LocalLLaMA I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. rss

      English is not my first language. I wrote this in Chinese and translated it with AI help. The writing may have some AI flavor, but the design decisions, the production failures, and the thinking that distilled them into principles — those are mine.

I was a backend lead at Manus before the Meta acquisition. I've spent the last 2 years building AI agents — first at Manus, then on my own open-source agent runtime (Pinix) and agent (agent-clip). Along the way I came to a conclusion that surprised me:

A single run(command="...") tool with Unix-style commands outperforms a catalog of typed function calls.

      Here's what I learned.


      Why *nix

      Unix made a design decision 50 years ago: everything is a text stream. Programs don't exchange complex binary structures or share memory objects — they communicate through text pipes. Small tools each do one thing well, composed via | into powerful workflows. Programs describe themselves with --help, report success or failure with exit codes, and communicate errors through stderr.

      LLMs made an almost identical decision 50 years later: everything is tokens. They only understand text, only produce text. Their "thinking" is text, their "actions" are text, and the feedback they receive from the world must be text.

      These two decisions, made half a century apart from completely different starting points, converge on the same interface model. The text-based system Unix designed for human terminal operators — cat, grep, pipe, exit codes, man pages — isn't just "usable" by LLMs. It's a natural fit. When it comes to tool use, an LLM is essentially a terminal operator — one that's faster than any human and has already seen vast amounts of shell commands and CLI patterns in its training data.

This is the core philosophy of the *nix agent: don't invent a new tool interface. Take what Unix has proven over 50 years and hand it directly to the LLM.


      Why a single run

      The single-tool hypothesis

      Most agent frameworks give LLMs a catalog of independent tools:

      tools: [search_web, read_file, write_file, run_code, send_email, ...]

      Before each call, the LLM must make a tool selection — which one? What parameters? The more tools you add, the harder the selection, and accuracy drops. Cognitive load is spent on "which tool?" instead of "what do I need to accomplish?"

My approach: one run(command="...") tool, with all capabilities exposed as CLI commands.

run(command="cat notes.md")
run(command="cat log.txt | grep ERROR | wc -l")
run(command="see screenshot.png")
run(command="memory search 'deployment issue'")
run(command="clip sandbox bash 'python3 analyze.py'")

      The LLM still chooses which command to use, but this is fundamentally different from choosing among 15 tools with different schemas. Command selection is string composition within a unified namespace — function selection is context-switching between unrelated APIs.
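To make the single-tool idea concrete, here is a minimal Python sketch (not the actual Manus or agent-clip code) of what a one-tool schema could look like in the OpenAI-style function-calling format, with the command list injected into the description. build_run_tool and the command summaries are hypothetical.

```python
def build_run_tool(command_summaries):
    """Build a single OpenAI-style tool schema whose description
    doubles as the command list the agent sees from turn one."""
    listing = "\n".join(
        f"{name} — {desc}" for name, desc in command_summaries.items()
    )
    return {
        "type": "function",
        "function": {
            "name": "run",
            "description": "Execute a CLI command. Available commands:\n" + listing,
            "parameters": {
                "type": "object",
                "properties": {
                    "command": {
                        "type": "string",
                        "description": "Shell-style command line",
                    },
                },
                "required": ["command"],
            },
        },
    }

# Illustrative registry, not the real command set.
tool = build_run_tool({"cat": "Read a text file", "see": "View an image"})
```

The model then emits only strings for the `command` argument, so every capability lives in one unified namespace rather than fifteen separate schemas.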

      LLMs already speak CLI

      Why are CLI commands a better fit for LLMs than structured function calls?

      Because CLI is the densest tool-use pattern in LLM training data. Billions of lines on GitHub are full of:

```bash
# README install instructions
pip install -r requirements.txt && python main.py

# CI/CD build scripts
make build && make test && make deploy

# Stack Overflow solutions
cat /var/log/syslog | grep "Out of memory" | tail -20
```

      I don't need to teach the LLM how to use CLI — it already knows. This familiarity is probabilistic and model-dependent, but in practice it's remarkably reliable across mainstream models.

      Compare two approaches to the same task:

```
Task: Read a log file, count the error lines

Function-calling approach (3 tool calls):
1. read_file(path="/var/log/app.log")   → returns entire file
2. search_text(text=, pattern="ERROR")  → returns matching lines
3. count_lines(text=)                   → returns number

CLI approach (1 tool call):
run(command="cat /var/log/app.log | grep ERROR | wc -l") → "42"
```

      One call replaces three. Not because of special optimization — but because Unix pipes natively support composition.

      Making pipes and chains work

A single run isn't enough on its own. If run could only execute one command at a time, the LLM would still need multiple calls for composed tasks. So I built a chain parser (parseChain) into the command routing layer, supporting four Unix operators:

• | (pipe): stdout of the previous command becomes stdin of the next
• && (and): execute the next command only if the previous succeeded
• || (or): execute the next command only if the previous failed
• ; (seq): execute the next command regardless of the previous result

With this mechanism, every tool call can be a complete workflow:

```bash
# One tool call: download → inspect
curl -sL $URL -o data.csv && cat data.csv | head 5

# One tool call: read → filter → sort → top 10
cat access.log | grep "500" | sort | head 10

# One tool call: try A, fall back to B
cat config.yaml || echo "config not found, using defaults"
```

      N commands × 4 operators — the composition space grows dramatically. And to the LLM, it's just a string it already knows how to write.
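The chain mechanism above can be sketched in a few lines of Python. parse_chain and run_chain are hypothetical names (the real parseChain is Go), and a production parser must also respect quoting and escapes, which this sketch deliberately ignores.

```python
import re

# Match '||' before '|' so the alternation doesn't split a pipe-or.
OPS = re.compile(r"(\|\||&&|;|\|)")

def parse_chain(line):
    """Split a command line into (operator, command) pairs.
    The first command gets ';' (unconditional). Quoting is ignored
    in this sketch; a real parser must not split inside quotes."""
    parts = OPS.split(line)
    chain = [(";", parts[0].strip())]
    for op, cmd in zip(parts[1::2], parts[2::2]):
        chain.append((op, cmd.strip()))
    return chain

def run_chain(chain, execute):
    """Apply Unix semantics. execute(cmd, stdin) -> (exit_code, stdout)."""
    code, out = 0, ""
    for op, cmd in chain:
        if op == "&&" and code != 0:
            continue  # previous command failed: skip this one
        if op == "||" and code == 0:
            continue  # previous command succeeded: skip the fallback
        stdin = out if op == "|" else ""  # pipe feeds prior stdout in
        code, out = execute(cmd, stdin)
    return code, out
```

Plugging any executor into run_chain (a sandbox, a subprocess, a mock) keeps Layer 1 raw and lossless: the chain logic never touches presentation concerns.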

      The command line is the LLM's native tool interface.


      Heuristic design: making CLI guide the agent

Single-tool + CLI solves "what to use." But the agent still needs to know "how to use it." It can't Google. It can't ask a colleague. I use three progressive design techniques to make the CLI itself serve as the agent's navigation system.

      Technique 1: Progressive --help discovery

      A well-designed CLI tool doesn't require reading documentation — because --help tells you everything. I apply the same principle to the agent, structured as progressive disclosure : the agent doesn't need to load all documentation at once, but discovers details on-demand as it goes deeper.

      Level 0: Tool Description → command list injection

      The run tool's description is dynamically generated at the start of each conversation, listing all registered commands with one-line summaries:

Available commands:
  cat    — Read a text file. For images use 'see'. For binary use 'cat -b'.
  see    — View an image (auto-attaches to vision)
  ls     — List files in current topic
  write  — Write file. Usage: write <path> [content] or stdin
  grep   — Filter lines matching a pattern (supports -i, -v, -c)
  memory — Search or manage memory
  clip   — Operate external environments (sandboxes, services)
  ...

      The agent knows what's available from turn one, but doesn't need every parameter of every command — that would waste context.

      Note: There's an open design question here: injecting the full command list vs. on-demand discovery. As commands grow, the list itself consumes context budget. I'm still exploring the right balance. Ideas welcome.

Level 1: command (no args) → usage

      When the agent is interested in a command, it just calls it. No arguments? The command returns its own usage:

```
→ run(command="memory")
[error] memory: usage: memory search|recent|store|facts|forget

→ run(command="clip")
clip list — list available clips
clip <name> — show clip details and commands
clip <name> <command> [args...] — invoke a command
clip <name> pull <file> [name] — pull file from clip to local
clip <name> push <file> — push local file to clip
```

      Now the agent knows memory has five subcommands and clip supports list/pull/push. One call, no noise.

Level 2: command subcommand (missing args) → specific parameters

      The agent decides to use memory search but isn't sure about the format? It drills down:

```
→ run(command="memory search")
[error] memory: usage: memory search [-t topic_id] [-k keyword]

→ run(command="clip sandbox")
Clip: sandbox
Commands:
  clip sandbox bash <script>
  clip sandbox read <file>
  clip sandbox write <file>
File transfer:
  clip sandbox pull <file> [local-name]
  clip sandbox push <file>
```

      Progressive disclosure: overview (injected) → usage (explored) → parameters (drilled down). The agent discovers on-demand, each level providing just enough information for the next step.

      This is fundamentally different from stuffing 3,000 words of tool documentation into the system prompt. Most of that information is irrelevant most of the time — pure context waste. Progressive help lets the agent decide when it needs more.

      This also imposes a requirement on command design: every command and subcommand must have complete help output. It's not just for humans — it's for the agent. A good help message means one-shot success. A missing one means a blind guess.
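The Level 1 behavior ("no args → usage") can be sketched like this in Python. The registry contents and the dispatch function are illustrative, not the real agent-clip commands.

```python
# Every command registers its usage string; the dispatcher answers
# an argument-less call with that usage instead of an opaque error.
USAGE = {
    "memory": "usage: memory search|recent|store|facts|forget",
    "grep": "usage: grep [-i] [-v] [-c] <pattern>",
}

def dispatch(line):
    """Route a command line; surface usage/help on under-specified calls."""
    parts = line.split()
    name, args = parts[0], parts[1:]
    if name not in USAGE:
        # Technique 2: the error itself lists what exists.
        return f"[error] unknown command: {name}\nAvailable: {', '.join(sorted(USAGE))}"
    if not args:
        # Level 1 progressive disclosure: return the command's own usage.
        return f"[error] {name}: {USAGE[name]}"
    return f"(would execute {name} with {args})"
```

The same pattern recurses one level down for subcommands, which is what produces the Level 2 drill-down shown above.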

      Technique 2: Error messages as navigation

      Agents will make mistakes. The key isn't preventing errors — it's making every error point to the right direction.

      Traditional CLI errors are designed for humans who can Google. Agents can't Google. So I require every error to contain both "what went wrong" and "what to do instead":

```
Traditional CLI:
$ cat photo.png
cat: binary file (standard output)
→ Human Googles "how to view image in terminal"

My design:
[error] cat: binary image file (182KB). Use: see photo.png
→ Agent calls see directly, one-step correction
```

      More examples:

```
[error] unknown command: foo
Available: cat, ls, see, write, grep, memory, clip, ...
→ Agent immediately knows what commands exist

[error] not an image file: data.csv (use cat to read text files)
→ Agent switches from see to cat

[error] clip "sandbox" not found. Use 'clip list' to see available clips
→ Agent knows to list clips first
```

      Technique 1 (help) solves "what can I do?" Technique 2 (errors) solves "what should I do instead?" Together, the agent's recovery cost is minimal — usually 1-2 steps to the right path.

      Real case: The cost of silent stderr

      For a while, my code silently dropped stderr when calling external sandboxes — whenever stdout was non-empty, stderr was discarded. The agent ran pip install pymupdf, got exit code 127. stderr contained bash: pip: command not found, but the agent couldn't see it. It only knew "it failed," not "why" — and proceeded to blindly guess 10 different package managers:

pip install      → 127 (doesn't exist)
python3 -m pip   → 1 (module not found)
uv pip install   → 1 (wrong usage)
pip3 install     → 127
sudo apt install → 127
... 5 more attempts ...
uv run --with pymupdf python3 script.py → 0 ✓ (10th try)

      10 calls, ~5 seconds of inference each. If stderr had been visible the first time, one call would have been enough.

      stderr is the information agents need most, precisely when commands fail. Never drop it.

      Technique 3: Consistent output format

      The first two techniques handle discovery and correction. The third lets the agent get better at using the system over time.

      I append consistent metadata to every tool result:

file1.txt
file2.txt
dir1/
[exit:0 | 12ms]

      The LLM extracts two signals:

      Exit codes (Unix convention, LLMs already know these):

      • exit:0 — success
      • exit:1 — general error
      • exit:127 — command not found

      Duration (cost awareness):

      • 12ms — cheap, call freely
      • 3.2s — moderate
      • 45s — expensive, use sparingly

      After seeing [exit:N | Xs] dozens of times in a conversation, the agent internalizes the pattern. It starts anticipating — seeing exit:1 means check the error, seeing long duration means reduce calls.

      Consistent output format makes the agent smarter over time. Inconsistency makes every call feel like the first.
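The footer could be produced by something like the following Python sketch. with_footer is a hypothetical helper, and the millisecond/second formatting threshold is an assumption, not the documented behavior.

```python
def with_footer(output, exit_code, seconds):
    """Append the consistent [exit:N | duration] footer. This runs in
    Layer 2 only, after the whole pipe chain finishes, so the metadata
    never leaks into Layer 1 pipe data."""
    if seconds < 1:
        dur = f"{seconds * 1000:.0f}ms"  # cheap calls read as milliseconds
    else:
        dur = f"{seconds:.1f}s"          # expensive calls read as seconds
    return f"{output.rstrip()}\n[exit:{exit_code} | {dur}]"
```

Because the format never varies, the model sees the same two-signal pattern on every single result, which is what lets it internalize cost and success cues.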

      The three techniques form a progression:

--help     → "What can I do?"    → Proactive discovery
Error Msg  → "What should I do?" → Reactive correction
Output Fmt → "How did it go?"    → Continuous learning


      Two-layer architecture: engineering the heuristic design

      The section above described how CLI guides agents at the semantic level. But to make it work in practice, there's an engineering problem: the raw output of a command and what the LLM needs to see are often very different things.

      Two hard constraints of LLMs

      Constraint A: The context window is finite and expensive. Every token costs money, attention, and inference speed. Stuffing a 10MB file into context doesn't just waste budget — it pushes earlier conversation out of the window. The agent "forgets."

Constraint B: LLMs can only process text. Binary data produces high-entropy, meaningless tokens through the tokenizer. It doesn't just waste context — it disrupts attention on surrounding valid tokens, degrading reasoning quality.

      These two constraints mean: raw command output can't go directly to the LLM — it needs a presentation layer for processing. But that processing can't affect command execution logic — or pipes break. Hence, two layers.

      Execution layer vs. presentation layer

┌─────────────────────────────────────────────┐
│ Layer 2: LLM Presentation Layer             │ ← Designed for LLM constraints
│ Binary guard | Truncation+overflow | Meta   │
├─────────────────────────────────────────────┤
│ Layer 1: Unix Execution Layer               │ ← Pure Unix semantics
│ Command routing | pipe | chain | exit code  │
└─────────────────────────────────────────────┘

      When cat bigfile.txt | grep error | head 10 executes:

Inside Layer 1:
cat output  → [500KB raw text] → grep input
grep output → [matching lines] → head input
head output → [first 10 lines]

      If you truncate cat's output in Layer 1 → grep only searches the first 200 lines, producing incomplete results. If you add [exit:0] in Layer 1 → it flows into grep as data, becoming a search target.

      So Layer 1 must remain raw, lossless, metadata-free. Processing only happens in Layer 2 — after the pipe chain completes and the final result is ready to return to the LLM.

      Layer 1 serves Unix semantics. Layer 2 serves LLM cognition. The separation isn't a design preference — it's a logical necessity.

      Layer 2's four mechanisms

      Mechanism A: Binary Guard (addressing Constraint B)

      Before returning anything to the LLM, check if it's text:

```
Null byte detected            → binary
UTF-8 validation failed       → binary
Control character ratio > 10% → binary

If image: [error] binary image (182KB). Use: see photo.png
If other: [error] binary file (1.2MB). Use: cat -b file.bin
```

      The LLM never receives data it can't process.
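A Python sketch of such a guard, using the three checks listed above. The function names, thresholds, and image-extension list are illustrative; the real isBinary() lives in the Go source.

```python
def is_binary(data, ctrl_ratio=0.10):
    """Heuristic text check: null byte, invalid UTF-8, or too many
    control characters (excluding tab/newline/CR) mark data as binary."""
    if b"\x00" in data:
        return True
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return True
    ctrl = sum(1 for ch in text if ord(ch) < 32 and ch not in "\t\n\r")
    return len(text) > 0 and ctrl / len(text) > ctrl_ratio

def guard(path, data):
    """Return a corrective error for binary data, or None for text.
    The error names the command the agent should use instead."""
    if not is_binary(data):
        return None
    if path.endswith((".png", ".jpg", ".jpeg", ".gif")):
        return f"[error] binary image file ({len(data) // 1024}KB). Use: see {path}"
    return f"[error] binary file ({len(data) // 1024}KB). Use: cat -b {path}"
```

Note how the guard and Technique 2 compose: detection and redirection happen in the same message, so recovery is one step.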

      Mechanism B: Overflow Mode (addressing Constraint A)

```
Output > 200 lines or > 50KB?
→ Truncate to first 200 lines (rune-safe, won't split UTF-8)
→ Write full output to /tmp/cmd-output/cmd-{n}.txt
→ Return to LLM:

[first 200 lines]
--- output truncated (5000 lines, 245.3KB) ---
Full output: /tmp/cmd-output/cmd-3.txt
Explore: cat /tmp/cmd-output/cmd-3.txt | grep <pattern>
         cat /tmp/cmd-output/cmd-3.txt | tail 100
[exit:0 | 1.2s]
```

      Key insight: the LLM already knows how to use grep, head, tail to navigate files. Overflow mode transforms "large data exploration" into a skill the LLM already has.
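Overflow mode might look like this in Python. The 200-line/50KB limits match the description above; the function name and spill-file handling are illustrative (the sketch truncates by lines, not the rune-safe byte logic of the real implementation).

```python
import os

MAX_LINES, MAX_BYTES = 200, 50_000

def present(output, n, outdir="/tmp/cmd-output"):
    """If output exceeds the budget, keep the head, spill the full text
    to a file, and tell the agent how to explore the rest."""
    lines = output.splitlines()
    if len(lines) <= MAX_LINES and len(output.encode()) <= MAX_BYTES:
        return output  # small enough: pass through untouched
    os.makedirs(outdir, exist_ok=True)
    path = os.path.join(outdir, f"cmd-{n}.txt")
    with open(path, "w") as f:
        f.write(output)  # full output stays available on disk
    head = "\n".join(lines[:MAX_LINES])
    size_kb = len(output.encode()) / 1024
    return (f"{head}\n"
            f"--- output truncated ({len(lines)} lines, {size_kb:.1f}KB) ---\n"
            f"Full output: {path}\n"
            f"Explore: cat {path} | grep <pattern>\n"
            f"         cat {path} | tail 100")
```

The "Explore:" hints are the point: they convert the spill file into a grep/tail task the model already knows how to do.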

      Mechanism C: Metadata Footer

actual output here
[exit:0 | 1.2s]

      Exit code + duration, appended as the last line of Layer 2. Gives the agent signals for success/failure and cost awareness, without polluting Layer 1's pipe data.

      Mechanism D: stderr Attachment

```
When command fails with stderr:
output + "\n[stderr] " + stderr
```

Ensures the agent can see why something failed, preventing blind retries.


      Lessons learned: stories from production

      Story 1: A PNG that caused 20 iterations of thrashing

      A user uploaded an architecture diagram. The agent read it with cat, receiving 182KB of raw PNG bytes. The LLM's tokenizer turned these bytes into thousands of meaningless tokens crammed into the context. The LLM couldn't make sense of it and started trying different read approaches — cat -f, cat --format, cat --type image — each time receiving the same garbage. After 20 iterations, the process was force-terminated.

Root cause: cat had no binary detection, and Layer 2 had no guard.
Fix: isBinary() guard + error guidance ("Use: see photo.png").
Lesson: The tool result is the agent's eyes. Return garbage = agent goes blind.

      Story 2: Silent stderr and 10 blind retries

      The agent needed to read a PDF. It tried pip install pymupdf, got exit code 127. stderr contained bash: pip: command not found, but the code dropped it — because there was some stdout output, and the logic was "if stdout exists, ignore stderr."

The agent only knew "it failed," not "why." What followed was a long trial-and-error:

pip install      → 127 (doesn't exist)
python3 -m pip   → 1 (module not found)
uv pip install   → 1 (wrong usage)
pip3 install     → 127
sudo apt install → 127
... 5 more attempts ...
uv run --with pymupdf python3 script.py → 0 ✓

      10 calls, ~5 seconds of inference each. If stderr had been visible the first time, one call would have sufficed.

Root cause: InvokeClip silently dropped stderr when stdout was non-empty.
Fix: Always attach stderr on failure.
Lesson: stderr is the information agents need most, precisely when commands fail.

      Story 3: The value of overflow mode

      The agent analyzed a 5,000-line log file. Without truncation, the full text (~200KB) was stuffed into context. The LLM's attention was overwhelmed, response quality dropped sharply, and earlier conversation was pushed out of the context window.

      With overflow mode:

```
[first 200 lines of log content]

--- output truncated (5000 lines, 198.5KB) ---
Full output: /tmp/cmd-output/cmd-3.txt
Explore: cat /tmp/cmd-output/cmd-3.txt | grep <pattern>
         cat /tmp/cmd-output/cmd-3.txt | tail 100
[exit:0 | 45ms]
```

      The agent saw the first 200 lines, understood the file structure, then used grep to pinpoint the issue — 3 calls total, under 2KB of context.

      Lesson: Giving the agent a "map" is far more effective than giving it the entire territory.


      Boundaries and limitations

      CLI isn't a silver bullet. Typed APIs may be the better choice in these scenarios:

      • Strongly-typed interactions : Database queries, GraphQL APIs, and other cases requiring structured input/output. Schema validation is more reliable than string parsing.
      • High-security requirements : CLI's string concatenation carries inherent injection risks. In untrusted-input scenarios, typed parameters are safer. agent-clip mitigates this through sandbox isolation.
      • Native multimodal : Pure audio/video processing and other binary-stream scenarios where CLI's text pipe is a bottleneck.

      Additionally, "no iteration limit" doesn't mean "no safety boundaries." Safety is ensured by external mechanisms:

      • Sandbox isolation : Commands execute inside BoxLite containers, no escape possible
      • API budgets : LLM calls have account-level spending caps
      • User cancellation : Frontend provides cancel buttons, backend supports graceful shutdown

      Hand Unix philosophy to the execution layer, hand LLM's cognitive constraints to the presentation layer, and use help, error messages, and output format as three progressive heuristic navigation techniques.

      CLI is all agents need.


Source code (Go): github.com/epiral/agent-clip

      Core files: internal/tools.go (command routing), internal/chain.go (pipes), internal/loop.go (two-layer agentic loop), internal/fs.go (binary guard), internal/clip.go (stderr handling), internal/browser.go (vision auto-attach), internal/memory.go (semantic memory).

      Happy to discuss — especially if you've tried similar approaches or found cases where CLI breaks down. The command discovery problem (how much to inject vs. let the agent discover) is something I'm still actively exploring.

      submitted by /u/MorroHsu
      [link] [comments]

    24. 🔗 r/york Community Eid dinner in York? rss

Hi all! I was wondering if anyone is aware of a community Eid dinner in York that's open to non-Muslims?

      submitted by /u/Livid-Trade-3907
      [link] [comments]

    25. 🔗 r/reverseengineering runtime jvm analysis tool i made rss
    26. 🔗 Rust Blog Announcing rustup 1.29.0 rss

      The rustup team is happy to announce the release of rustup version 1.29.0.

      Rustup is the recommended tool to install Rust, a programming language that empowers everyone to build reliable and efficient software.

      What's new in rustup 1.29.0

Following in the footsteps of many package managers in the pursuit of better toolchain installation performance, the headline of this release is that rustup can now download components concurrently and unpack them while downloading in operations such as rustup update or rustup toolchain, and concurrently check for updates in rustup check, thanks to a GSoC 2025 project. This is by no means a trivial change, so a long tail of issues might occur; please report any you find!

      Furthermore, rustup now officially supports the following host platforms:

      • sparcv9-sun-solaris
      • x86_64-pc-solaris

      Also, rustup will start automatically inserting the right $PATH entries during rustup-init for the following shells, in addition to those already supported:

      • tcsh
      • xonsh

      This release also comes with other quality-of-life improvements, to name a few:

• When running rust-analyzer via a proxy, rustup will consider the rust-analyzer binary from PATH when the rustup-managed one is not found.

  • This should be particularly useful if you would like to bring your own rust-analyzer binary, e.g. if you use Neovim, Helix, etc. or are developing rust-analyzer itself.

• Empty environment variables are now treated as unset. This should help with resetting configuration values to default when an override is present.

• rustup check will use different exit codes based on whether new updates have been found: it will exit with 100 if any updates are available, or 0 if there are none.

Furthermore, @FranciscoTGouveia has joined the team. He has shown his talent, enthusiasm, and commitment to the project since his first interactions with rustup and has played a significant role in bringing more concurrency to it, so we are thrilled to have him on board and are actively looking forward to what we can achieve together.

      Further details are available in the changelog!

      How to update

      If you have a previous version of rustup installed, getting the new one is as easy as stopping any programs which may be using rustup (e.g. closing your IDE) and running:

      $ rustup self update
      

      Rustup will also automatically update itself at the end of a normal toolchain update:

      $ rustup update
      

      If you don't have it already, you can get rustup from the appropriate page on our website.

      Rustup's documentation is also available in the rustup book.

      Caveats

      Rustup releases can come with problems not caused by rustup itself but just due to having a new release.

      In particular, anti-malware scanners might block rustup or stop it from creating or copying files, especially when installing rust-docs which contains many small files.

      Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release.

      Thanks

      Thanks again to all the contributors who made this rustup release possible!

    27. 🔗 Console.dev newsletter Ki Editor rss

      Description: Structural code editor.

What we like: Acts on the AST, so code manipulations happen within the true language syntax, e.g. selecting a whole control statement. This enables AST-native editing, selection, navigation, and find & replace. Has built-in LSP support and a file explorer. Themes and syntax highlighting are powered by Tree-sitter.

      What we dislike: Might take some getting used to - it has a VS Code extension if you prefer a GUI.

    28. 🔗 Console.dev newsletter Agent Safehouse rss

      Description: macOS native AI sandboxing.

      What we like: Denies access outside of your project directory using macOS native, kernel-level sandboxes. Has safe defaults for access to things like core system tools, network access, Git, etc. Security sensitive actions require opt-in e.g. clipboard, docker, shell access.

      What we dislike: macOS only.

  4. March 11, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-11 rss

      IDA Plugin Updates on 2026-03-11

      New Releases:

      Activity:

    2. 🔗 MetaBrainz Schema change release: May 11, 2026 rss

      MusicBrainz is announcing a new schema change release set for May 11, 2026. Schema-wise, this release will be very light. At the same time, we'll be requiring some major dependency upgrades to Perl, PostgreSQL, and Node.js. We'll also be switching from Redis to Valkey in production. See below for more information.

      The only breaking schema change is MBS-14252. It drops columns which are unused even in MusicBrainz Server, so should have little impact.

      Here is the complete list of scheduled tickets:

      Database schema

      The following tickets change the database schema in some way.

      • MBS-6551: Database does not prevent a release from having duplicate label/catno pairs. This ticket involves replacing an index on the release_label table for additional data sanity. We'll introduce a unique index on (release, label, catalog_number) (with NULL values treated as equal). This should have no impact on downstream users.
      • MBS-14092: Add support for series of series. This will allow connecting series that are related to each other in some way; for example, a series of series that have been honored with the same award, like the Golden Globe Award for Best Podcast. This involves adding a new series_series view, and replacing the allowed_series_entity_type constraint on the series_type table. It doesn't modify or remove any other parts of the schema.
      • MBS-14252: Drop "source" column from iswc and isrc tables. As the title says, this drops the unused isrc.source and iswc.source columns from the database. Unless you've specifically referenced these columns in a query, this change should have no impact on you.
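
      The MBS-6551 change can be illustrated with a small Python sketch of the uniqueness rule the new index enforces: (release, label, catalog_number) triples must be unique, with NULL catalog numbers treated as equal (unlike plain SQL UNIQUE semantics, where NULLs never match each other). The rows and helper below are hypothetical, not MusicBrainz Server code:

      ```python
      # Sketch of the NULLs-treated-as-equal uniqueness rule from MBS-6551.
      # The rows and helper are illustrative, not MusicBrainz Server code.
      def has_duplicate_label_pairs(release_labels):
          """True if any (release, label, catalog_number) triple repeats.

          None (SQL NULL) compares equal to itself here, matching what a
          UNIQUE index with NULLs-treated-as-equal semantics enforces.
          """
          seen = set()
          for triple in release_labels:
              if triple in seen:
                  return True
              seen.add(triple)
          return False

      rows = [
          (1, 10, "CAT-001"),
          (1, 10, None),
          (1, 10, None),  # rejected: duplicates the previous NULL pair
      ]
      print(has_duplicate_label_pairs(rows))  # → True
      ```

      Under PostgreSQL's default UNIQUE behavior the two NULL rows would both be allowed; that is exactly the data-sanity gap the ticket closes.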

      Server dependencies

      • MBS-14243: Upgrade the required version of Perl to 5.42. This is required as Perl 5.38 will no longer receive critical security fixes past July 2026.
      • MBS-14246: Upgrade the required version of PostgreSQL to 18. We last upgraded to PostgreSQL v16 two years ago, and would like to take advantage of the many performance advancements in PostgreSQL since then. Note that the PGDG maintains an official APT repository for Debian and Ubuntu, and PostgreSQL 18.3 is also available on Amazon RDS. An upgrade script will be available for MusicBrainz Docker users, with instructions provided at release time.
      • MBS-14244: Upgrade the required version of Node.js to 24. This is a straightforward upgrade to the latest LTS release, as Node.js v20 will soon be end-of-life.
      • MBS-14245: Switch from Redis to Valkey. Valkey is compatible with Redis OSS 7.2, and should be a drop-in replacement. There's no reason to expect that Redis would stop working either. (The commands that MusicBrainz Server uses are very basic, and work even in Redis v3.)

      Search server

      • SEARCH-756: Trigger reindex from dbmirror2 replication data. This drops the dependency on RabbitMQ and pg_amqp for live updating the Solr search indexes, and triggers the reindex process directly from PostgreSQL instead, by relying on the change data we already generate there for replication packets. If you run a local search indexer, this will simplify the setup/dependencies needed. Database-wise, it will require replacing triggers and creating a new "sir" schema.

      We’ll post upgrade instructions for standalone/mirror servers on the day of the release. If you have any questions, feel free to comment below or on the relevant above-linked tickets.

    3. 🔗 r/Yorkshire The village waging a very British war on dog waste rss

      The village waging a very British war on dog waste | Where rolling fields meet towering trees, a hawthorn-lined bridleway on the outskirts of a West Yorkshire town is about as idyllic as a suburban snicket gets. But amid the sound of birdsong and the faint rumble of the nearby M62, anger is also in the air. Warning notices punctuate the path, strewn with capital letters and red text, imploring dog owners to take home their pet's waste. Recently, volunteers collected 350 dog poo bags within a stretch of slightly more than a quarter of a mile (0.4km). Pushed into hedgerows, hung from tree branches and flung into banks along the route, the litter has been piling up on this local route in Scholes, near Cleckheaton. Clean-up volunteers who have had enough have launched their own protest; erecting signs and leaving dozens of the weighty filled bags they collect displayed on the path to make a quiet - but squelchy - statement. submitted by /u/coffeewalnut08
      [link] [comments]

    4. 🔗 r/reverseengineering Practical Type Inference: High-Throughput Recovery of Real-World Structures and Function Signatures rss
    5. 🔗 r/reverseengineering FlapOS: an open source alternative firmware for "flapit" devices rss
    6. 🔗 r/Leeds Antique Leeds prints - shops to sell them through? rss

      Hey all. I've a whole load of antique framed prints, all hand-coloured views of Leeds. They were purchased from an antique dealer some years ago and authenticated, but the shop has since retired and closed up. We've inherited these from a recently deceased relative. In the collection there are maybe 20 to 30 or more framed prints of views of late-1800s and industrial-revolution Leeds. These are the kind of prints that'll take years to sell individually, but would be good stock for a boutique-type shop in Leeds as a job lot... but I'm based in Notts so can't wander the streets and see who'd be interested.

      Are there any shops or dealers that you can think of that may want to buy the whole collection?

      submitted by /u/KIAA0319
      [link] [comments]

    7. 🔗 r/LocalLLaMA Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show rss

      Nvidia Will Spend $26 Billion to Build Open-Weight AI Models, Filings Show | submitted by /u/dan945
      [link] [comments]

    8. 🔗 r/Yorkshire New Leeds independent newspaper Start up rss

      https://leeds.ghost.io/welcome-to-leeds-new-paper/?ref=leeds-newsletter

      Please consider supporting this project so more independent news is written about this wonderful city.

      I am not involved in the project but I thought people would appreciate knowing about it.

      submitted by /u/SaveCarbonSaveMoney
      [link] [comments]

    9. 🔗 idursun/jjui v0.10.1 release

      Everything from v0.10.0 with additional bug fixes and some improvements.

      Improvements

      • We show a "jjui: press a key to continue" message after a shell command is run. The underlying interactive-detection code has been improved with additional signals, so the message is skipped if the executed command started an interactive PTY.

      Fixes

      • Fixed slow starts on Windows machines. #581
      • Fixed improper rendering of double-width characters as well as misalignment of borders when double-width characters are present. #526

      Full Changelog : v0.10.0...v0.10.1

    10. 🔗 r/LocalLLaMA llama.cpp on $500 MacBook Neo: Prompt: 7.8 t/s / Generation: 3.9 t/s on Qwen3.5 9B Q3_K_M rss

      llama.cpp on $500 MacBook Neo: Prompt: 7.8 t/s / Generation: 3.9 t/s on Qwen3.5 9B Q3_K_M | Just compiled llama.cpp on a MacBook Neo with 8 GB RAM and it runs 9B Qwen3.5 (slowly, but it works). Config used:

      Build: llama.cpp version 8294 (76ea1c1c4)

      Machine:
      • Model: MacBook Neo (Mac17,5)
      • Chip: Apple A18 Pro
      • CPU: 6 cores (2 performance + 4 efficiency)
      • GPU: Apple A18 Pro, 5 cores, Metal supported
      • Memory: 8 GB unified

      Model:
      • Hugging Face repo: unsloth/Qwen3.5-9B-GGUF
      • GGUF file: models/Qwen3.5-9B-Q3_K_M.gguf
      • File size on disk: 4.4 GB

      Launch hyperparams:

      ./build/bin/llama-cli \
        -m models/Qwen3.5-9B-Q3_K_M.gguf \
        --device MTL0 \
        -ngl all \
        -c 4096 \
        -b 128 \
        -ub 64 \
        -ctk q4_0 \
        -ctv q4_0 \
        --reasoning on \
        -t 4 \
        -tb 6 \
        -cnv

      UPD. I did some benchmarking – faster 5 tok/sec config for 9b model is here, and 10 tok/sec config for 4b model is here submitted by /u/Shir_man
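
      The -ctk q4_0 / -ctv q4_0 flags in the config above quantize the KV cache to 4-bit, which is a big part of fitting a 9B model into 8 GB of unified memory. A rough back-of-envelope sketch in Python (the layer/head counts below are assumed for illustration, not Qwen3.5-9B's actual shape):

      ```python
      # Rough KV-cache sizing for llama.cpp's -ctk/-ctv cache-type flags.
      # NOTE: n_layers / n_kv_heads / head_dim are ASSUMED example values,
      # not the real Qwen3.5-9B configuration.
      def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx, bytes_per_value):
          # K and V caches each hold ctx * n_kv_heads * head_dim values per layer
          return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_value

      F16 = 2.0        # 16-bit values: 2 bytes each (llama.cpp default cache type)
      Q4_0 = 18 / 32   # q4_0 packs 32 values into 18 bytes (4-bit data + block scale)

      ctx = 4096       # matches -c 4096 in the post
      f16 = kv_cache_bytes(36, 8, 128, ctx, F16)
      q4 = kv_cache_bytes(36, 8, 128, ctx, Q4_0)
      print(f"f16  KV cache: {f16 / 2**20:.0f} MiB")   # → f16  KV cache: 576 MiB
      print(f"q4_0 KV cache: {q4 / 2**20:.0f} MiB")    # → q4_0 KV cache: 162 MiB
      ```

      Whatever the real model shape, the ratio is fixed: q4_0 stores the cache in about 28% of the bytes f16 would need.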
      [link] [comments]

    11. 🔗 r/wiesbaden Mountain bike shop in WI or MZ rss

      Hi everyone,

      I'm currently looking for a good bike shop in Wiesbaden (or Mainz) that carries top mountain bikes (no e-bikes), and wanted to ask here for your recommendations.

      I'd rather spend my money at a smaller, local shop that offers some actual advice and sells decent bikes than go straight to the big chains on Mainzer Straße.

      Does anyone have good experiences and a tip for me?

      Thanks! 🙌

      submitted by /u/Exercise-Signal
      [link] [comments]

    12. 🔗 r/Yorkshire North Yorkshire Moors Railway Territorial Army Exercises 1980s rss

      https://www.youtube.com/watch?app=desktop&v=bt43vJf9-WE

      A nice nostalgic YouTube clip of a group of Territorial Army men helping out on the railway at Pickering. Nearly all of them were British Rail employees (although of course the North Yorkshire Moors Railway has been a private 'heritage' railway since the 1960s). The clip is a reminder of an age when we had a publicly owned national railway and Armed Forces, professional and part-time alike, with better morale as well as better funding. These are a great bunch of chaps with a sense of public service. Nice railway and Yorkshire footage as well.

      submitted by /u/Ticklishchap
      [link] [comments]

    13. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
      sync repo: +2 releases
      
      ## New releases
      - [DeepExtract](https://github.com/marcosd4h/DeepExtractIDA): 0.9.10
      - [IDAGuides](https://github.com/libtero/idaguides): 1.3.0
      
    14. 🔗 r/LocalLLaMA Nemotron 3 Super Released rss
    15. 🔗 r/Leeds Positive Impact: South Leeds Shops Seeing Less Crime rss

      Some good news from South Leeds around safety and security:

      • The Yorkshire Evening Post recently reported that shops and retail parks are seeing a drop in anti-social behaviour and retail crime.
      • West Yorkshire Police and local partners have been using injunctions, community warnings, and early intervention.
      • Businesses are reporting fewer incidents, and staff and customers feel more confident.

      It’s a great example of how visible, targeted policing and collaboration can make a real difference in keeping retail areas safe.

      submitted by /u/securitycompanyuk
      [link] [comments]

    16. 🔗 r/york York residents – a short survey on the Bootham Crescent stadium relocation rss

      Hi everyone,

      I’m currently conducting research for my university dissertation on the relocation of the stadium at Bootham Crescent and how it has affected local communities and perceptions of the surrounding area.

      As part of this research, I’ve created a short 10-minute anonymous survey looking at the social, physical, and wider perceptions of the stadium move. I’m looking for responses from:

      • People who lived near Bootham Crescent before the move
      • Current residents in the area
      • Residents elsewhere in York
      • People who have moved to York in recent years

      All responses are completely anonymous and will be used solely for academic research.

      If you have a few minutes, I would really appreciate your help by completing the survey below:

      https://qualtricsxmw68qycjfg.qualtrics.com/jfe/form/SV_bqMry2RhFEP6z5k

      Thank you very much for your time — every response really helps with the research.

      submitted by /u/_samjustice
      [link] [comments]

    17. 🔗 r/Leeds Schiacciata Sandwiches rss

      Was over in Manchester last week and had an amazing schiacciata sandwich at Ad Maiora (https://www.instagram.com/admaioramcr). Does anyone know anywhere in Leeds that does really good Italian sandwiches? I know La Bottega Milanese does something similar, but theirs are not made to order and look a bit sad in the glass cabinets after a while.

      submitted by /u/zharrt
      [link] [comments]

    18. 🔗 r/york Badminton 🏸 rss

      Anyone know of any casual badminton clubs in York? Used to play regularly a few years back at one of the clubs at the railway institute, but it's been a while and I'm very rusty!

      Or if you are solo and fancy a game, do shout! Happy for a pint/coffee after too.

      submitted by /u/CheekyChappie157
      [link] [comments]

    19. 🔗 r/Leeds Cheap Monstera Thai constellation in Leeds Kirkgate market rss

      Saw this in a garden shop at the market yesterday. I didn't get it because, sadly, I bought one last year and paid more than this one costs.

      submitted by /u/Important_Sail4961
      [link] [comments]

    20. 🔗 r/LocalLLaMA M5 Max just arrived - benchmarks incoming rss

      M5 Max just arrived - benchmarks incoming | The M5 Max 128GB 14" has just arrived. I've been looking forward to putting this through its paces. Testing begins now. Results will be posted as comments below — no video, no lengthy writeup, just the raw numbers. Clean and simple. Apologies for the delay. I initially ran the tests using BatchGenerator, but the speeds weren't quite what I expected. I ended up setting up a fresh Python virtual environment and re-running everything with pure mlx_lm using stream_generate, which is what pushed the update back. I know many of you have been waiting - I'm sorry for keeping you! I take it as a sign of just how much excitement there is around the M5 Max. (I was genuinely hyped for this one myself.) Personally, I'm really happy with the results. What do you all think? Models Tested

      • Qwen3.5-122B-A10B-4bit
      • Qwen3-Coder-Next-8bit
      • Qwen3.5-27B-Claude-4.6-Opus-Distilled-MLX-6bit
      • gpt-oss-120b-MXFP4-Q8

      As for Qwen3.5-35B-A3B-4bit — I don't actually have that one downloaded, so unfortunately I wasn't able to include it. Sorry about that! Results were originally posted as comments, and have since been compiled here in the main post for easier access.

      All runs used mlx_lm.generate --model /Volumes/SSD/Models/<model> --prompt "$(cat /tmp/prompt_<N>.txt)" --max-tokens 128.

      Qwen3.5-122B-A10B-4bit

      • Prompt: 4106 tokens, 881.466 tokens-per-sec | Generation: 128 tokens, 65.853 tokens-per-sec | Peak memory: 71.910 GB
      • Prompt: 16394 tokens, 1239.734 tokens-per-sec | Generation: 128 tokens, 60.639 tokens-per-sec | Peak memory: 73.803 GB
      • Prompt: 32778 tokens, 1067.824 tokens-per-sec | Generation: 128 tokens, 54.923 tokens-per-sec | Peak memory: 76.397 GB

      Qwen3-Coder-Next-8bit

      • Prompt: 4105 tokens, 754.927 tokens-per-sec | Generation: 60 tokens, 79.296 tokens-per-sec | Peak memory: 87.068 GB
      • Prompt: 16393 tokens, 1802.144 tokens-per-sec | Generation: 60 tokens, 74.293 tokens-per-sec | Peak memory: 88.176 GB
      • Prompt: 32777 tokens, 1887.158 tokens-per-sec | Generation: 58 tokens, 68.624 tokens-per-sec | Peak memory: 89.652 GB
      • Prompt: 65545 tokens, 1432.730 tokens-per-sec | Generation: 61 tokens, 48.212 tokens-per-sec | Peak memory: 92.605 GB

      Qwen3.5-27B-Claude-4.6-Opus-Distilled-MLX-6bit

      • Prompt: 4107 tokens, 811.134 tokens-per-sec | Generation: 128 tokens, 23.648 tokens-per-sec | Peak memory: 25.319 GB
      • Prompt: 16395 tokens, 686.682 tokens-per-sec | Generation: 128 tokens, 20.311 tokens-per-sec | Peak memory: 27.332 GB
      • Prompt: 32779 tokens, 591.383 tokens-per-sec | Generation: 128 tokens, 14.908 tokens-per-sec | Peak memory: 30.016 GB
      • Prompt: 65547 tokens, 475.828 tokens-per-sec | Generation: 128 tokens, 14.225 tokens-per-sec | Peak memory: 35.425 GB

      gpt-oss-120b-MXFP4-Q8

      • Prompt: 4164 tokens, 1325.062 tokens-per-sec | Generation: 128 tokens, 87.873 tokens-per-sec | Peak memory: 64.408 GB
      • Prompt: 16452 tokens, 2710.460 tokens-per-sec | Generation: 128 tokens, 75.963 tokens-per-sec | Peak memory: 64.857 GB
      • Prompt: 32836 tokens, 2537.420 tokens-per-sec | Generation: 128 tokens, 64.469 tokens-per-sec | Peak memory: 65.461 GB
      

      submitted by /u/cryingneko
      [link] [comments]

    21. 🔗 r/Leeds Drop off at Leeds train station rss

      Morning! My partner dropped me off at the train station earlier at the section of road where the Spoons is (where taxis used to drop you off). Is this where people are allowed to be dropped off in a private vehicle, or will we receive a penalty? He stopped right by a zebra crossing for a pedestrian and I jumped out.

      Sorry am a bit stressed out as have been appealing parking charges at Manchester airport and don’t want to have to go through that again 😅

      EDIT: please stop telling me where I should get dropped off instead !! I already know this and hindsight is 20/20 but thank you for all the suggestions

      submitted by /u/lcwj
      [link] [comments]

    22. 🔗 r/LocalLLaMA New benchmark just dropped. rss

      New benchmark just dropped. | Write the complete Three.js code for a scene featuring Michael Jackson, Pepe the Frog, Donald Trump, and Elon Musk performing the "Thriller" choreography, aiming for maximum visual perfection, detailed animation, lighting, high-quality rendering, and an overall cinematic feel. submitted by /u/ConfidentDinner6648
      [link] [comments]

    23. 🔗 r/Yorkshire Be honest is Yorkshire Tea actually the best tea? rss

      This might be a controversial question, but I’m curious where people stand on this.

      submitted by /u/1ChanceChipmunk1
      [link] [comments]

    24. 🔗 r/york Are there any known pubs with employee accommodation? rss
    25. 🔗 r/reverseengineering Anker/EufyMake UV Printer software RE (ongoing) rss