๐Ÿก


to read (pdf)

  1. JitterDropper | OALABS Research
  2. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  3. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  4. Neobrutalism components - Start making neobrutalism layouts today
  5. Debunking zswap and zram myths

  1. April 20, 2026
    1. 🔗 r/Leeds Anyone lost a ferret? rss

      seen along the canal near city island. seemed domesticated but kinda skinny

      submitted by /u/fluxpeach
      [link] [comments]

    2. 🔗 r/reverseengineering Reconstructing a Dead USB protocol: From Unknown Chip to Working Implementation rss
    3. 🔗 r/wiesbaden Moving to Wiesbaden rss

      Hello everyone

      I'm starting a new job in Wiesbaden this August and I desperately need an apartment.

      Currently I'm living near Freiburg.

      I don't need a lot of space, but I do have a dog, which isn't gonna make getting an apartment easy.

      Do you have any tips or suggestions for me?

      Thank you in advance!

      submitted by /u/Skoobdie
      [link] [comments]

    4. 🔗 r/york Early spring at the Minster rss

      submitted by /u/RedDevilPlay
      [link] [comments]

    5. 🔗 r/wiesbaden Hiking Wiesbaden/Mainz/Lorch rss

      Is anyone interested in hiking this Saturday? Weather is perfect. (Flexible route and time.) Lorch to Rewe to Lorchhausen: https://maps.app.goo.gl/K9NB4gg6NomsTvWs5

      submitted by /u/Ok-Muscle-9502
      [link] [comments]

    6. 🔗 r/york York Mosque Community Kitchen | THURSDAY 23 APRIL 12:00 - 13:30. rss

      Welcome back to our neighbours & friends in r/York! York Mosque Community Kitchen will be back open on Thursday 23rd April between 12:00 and 13:30, where our dedicated volunteers will be cooking and serving two delicious dishes for lunch. We hope to see you there! Bring someone with you who's in need of a good meal and a friendly chat. Always free, everyone welcome!

      submitted by /u/YorkMosque-Kitchen
      [link] [comments]

    7. 🔗 davebcn87/pi-autoresearch v1.0.0 release

      Initial stable release of pi-autoresearch.

      Highlights

      • autonomous experiment loop tools: init_experiment, run_experiment, and log_experiment
      • /autoresearch mode with an inline widget, fullscreen dashboard, and live browser export
      • autoresearch-create to bootstrap sessions and autoresearch-finalize to split winning experiments into reviewable branches
      • confidence scoring, structured METRIC parsing, ASI annotations, optional autoresearch.checks.sh, and maxIterations
      • runtime compatibility fixes for pi 0.65, source-loaded installs, responsive UI, session lifecycle cleanup, and Cloud Code Assist

      Install

      pi install https://github.com/davebcn87/pi-autoresearch
      

      What's Changed

      • feat: add max_experiments param to auto-stop after N experiments by @matteodepalo in #1
      • fix: avoid shell injection in git commit message by @kshetrajna12 in #13
      • fix: use endsWith for secondary metric unit detection by @kshetrajna12 in #14
      • Fix autoresearch session state leak by @ayagmar in #8
      • run_experiment: streaming output with timer, truncateTail, autoresearch.sh guard by @tobi in #20
      • Structured METRIC output by @davebcn87 in #21
      • feat: statistical confidence layer for metric improvements by @davebcn87 in #22
      • feat: Actionable Side Information (ASI) — structured annotations per experiment by @davebcn87 in #26
      • Stop autoresearch before context exhaustion by @davebcn87 in #30
      • feat: add box border to fullscreen monitor overlay by @marcuskbra in #29
      • Add autoresearch-finalize skill by @aledalgrande in #17
      • fix(runtime): support source-loaded installs by @StartupBros in #33
      • fix(autoresearch): make widget and overlay responsive by @aliou in #34
      • Fix pi 0.65 session lifecycle hooks by @dca123 in #35
      • /autoresearch export โ€” live browser dashboard with chart & share card by @davebcn87 in #39
      • fix(autoresearch): clear widget when turning mode off by @guwidoe in #42
      • Fix Cloud Code Assist 400 from patternProperties in log_experiment by @vadimcomanescu in #44

      Full Changelog: https://github.com/davebcn87/pi-autoresearch/commits/v1.0.0

    8. 🔗 HazAT/pi-interactive-subagents v3.3.0 release

      Install:

      pi install git:github.com/HazAT/pi-interactive-subagents@v3.3.0
      

      Or latest:

      pi install git:github.com/HazAT/pi-interactive-subagents
      

      ๐Ÿ› Bug Fixes

      • Insert empty positional arg so /skill: prompts expand correctly in artifact-backed launches

      ✨ Features

      • Add integration test suite with real E2E subagent verification
    9. 🔗 r/reverseengineering SASS King: reverse engineering NVIDIA SASS rss
    10. 🔗 r/wiesbaden Nix pflück! rss
    11. 🔗 r/Harrogate The Neverending Harrogate Roadworks Tour has come to my street rss

      Which means I'm minorly inconvenienced for the next couple of days, as there's no parking on the street. The surface of roads isn't really my forte, so can someone explain the issue with the road here? I'm fairly certain they resurfaced and repainted it last year, and it's one of the few Harrogate roads with zero potholes.

      submitted by /u/kamasutramarkviduka
      [link] [comments]

    12. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    13. 🔗 r/Yorkshire Throwback to 2023. Fountains Abbey hits different in the sun. rss

      Found this photo from three years ago. Fountains Abbey looking bright and the daffodils were just perfect. What's your favourite spot for a spring walk? Is it looking like this yet?

      submitted by /u/Happy-Fox11
      [link] [comments]

  2. April 19, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-19 rss

      IDA Plugin Updates on 2026-04-19

      New Releases:

      Activity:

      • ida-pro-mcp
        • f21bb5ee: Merge pull request #373 from NeKroFR/feat/decompile-hide-addresses
      • IDAssist
      • playlist
      • python-elpida_core.py
        • 8f7c18c1: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T23:57Z
        • 007b4032: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T23:42Z
        • d299c342: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T23:24Z
        • fe43416e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T23:10Z
        • 01e16c2b: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T22:55Z
        • a3dc2a99: feat(body-phase1): add shadow telemetry for expanded axioms A11/A12/Aโ€ฆ
        • 5edbb2a0: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T22:36Z
        • cce9030b: fix(mind): add identity_verifier.py to Dockerfile COPY list
        • 89f37196: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-19T22:19Z
    2. 🔗 r/Yorkshire Wentworth Castle Yorkshire Spring Tour Hidden Gem UK Travel Guide Family Day Out rss

      submitted by /u/DragonTvBack
      [link] [comments]

    3. 🔗 r/Leeds Massive Pro Iran protest today rss

      Seem to dwarf the Palestine ones.

      Very interesting dynamic, as not one Arabic writing flag in sight, no traditional Islamic dress for women, Israel, US and Iran flags everywhere, and the crowd were majority (I'm assuming) Iranian, which compared to the Palestine ones (now) is the opposite. The only western looking people there today were marching with them wearing St George's flags.

      I have heard that Pro Palestine and Pro Iran supporters are clashing elsewhere in the country, do you think that may happen in Leeds?

      submitted by /u/Desperate_Break4747
      [link] [comments]

    4. 🔗 r/Leeds Remember kids... rss

      Durgs, just say no.

      submitted by /u/kek23k
      [link] [comments]

    5. 🔗 r/Yorkshire Sand martins at Redcar beach rss

      submitted by /u/DentistKitchen
      [link] [comments]

    6. 🔗 r/reverseengineering First it's a different post hi guys i want from someone here to help me in a complicated mission i want to add private server to game that their own servers shout down and want to make it playable again it's need reverse engineering and huge knowledge and i have game files apk and others i really h rss
    7. 🔗 r/Yorkshire A perfect day in Whitby rss
    8. 🔗 r/Leeds Why isn't Holbeck safe? rss

      I'm an ex-student looking at renting an apartment in Leeds with a current student, and Holbeck really stood out: it offers city-centre proximity at surprisingly affordable prices.

      However, when I googled and searched this subreddit for information on Holbeck's safety, the verdict was overwhelmingly negative, saying it is dangerous to walk around at night, unsafe for women, etc.

      I figured maybe there were just rougher areas or outskirts, but one comment with a lot of support specifically called out Holbeck Urban Village as a particularly bad part, despite this being exactly where I'm looking, near the train station.

      So I walked around there today, and it seemed really lovely. Particularly around Bridgewater Place and the Urban Village it was simply a lot of students, younger adults and working people walking around. It felt and looked completely safe, especially with nice access to the canal.

      So what's the deal? Was it just a lucky day? Is there a lot I haven't seen about it? Or is the area around Bridgewater Place fine?

      Thanks!

      submitted by /u/Least-Broccoli9995
      [link] [comments]

    9. 🔗 daaain/claude-code-log Release 1.2.0 release

      Changed

      • Preserve agentId anchors in parallel-Task stitch + tool-param UI fix (#115)
      • Per-level output files for --detail and --compact (#114)
      • Handle custom-title, agent-name, and agent-color transcript entry types (#113)
      • Ignore 'last-prompt' message type (#112)
      • Detail levels and compact rendering of conversations (#96)
      • Skip PassthroughTranscriptEntry in _render_messages
      • Integrate agent transcripts into the DAG (Phase C) (#99)
      • Implement DAG-based message ordering (Phases A+B) (#97)
      • Fix slow test hitting real ~/.claude/projects (5GB) (#109)
      • feat: add --session-id flag for exporting a single (#103)
      • Fix search broken when HTML saved with different filename (#106)
      • Add Grep tool renderer with pattern in title (#107)
      • Fix TUI square bracket escaping issue (#105)

      Full Changelog: 1.1.1...1.2.0

    10. 🔗 r/Harrogate Opportunities in Harrogate rss

      My partner and I are thinking of moving to Harrogate this year. My partner works remotely, whereas I'm office-based, but we both work in IT. I'm keen to career change into horticulture/gardening (honestly, need to step away from the screen and touch some grass). I have no experience in this field, so may need to find something else whilst I work on obtaining experience from volunteering etc.

      Is Harrogate a good place to try and get into horticulture? Aware the job market is a bit dire currently, but are there many other opportunities there just in case horticulture is an immediate no-go? Thanks!

      submitted by /u/Felt_Felted
      [link] [comments]

    11. 🔗 r/Yorkshire Parkwood, Keighley rss

      Went on a bit of a nostalgia trip; not been here in 30 years since I left K town behind. Still a lovely bit of woodland.

      submitted by /u/Gh0styD0g
      [link] [comments]

    12. 🔗 r/reverseengineering hi guys i want from someone here to help me in a complicated mission i want to add private server to game that their own servers shout down and want to make it playable again it's need reverse engineering and huge knowledge and i have game files apk and others i really hope there is someone can help rss
    13. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
      
      ## New releases
      - [Suture](https://github.com/libtero/suture): 1.2.10
      - [tc_deer](https://github.com/arkup/tc_deer): 0.1.2
      
    14. 🔗 r/Leeds Postgraduate student about to be homeless rss

      I'm honestly just looking for some advice. My tenancy ends in two months and, as I am unemployed (in the span of a year I've applied to more than 250 jobs), I don't have the necessary funds for a new tenancy (no savings either). I'm currently paying rent using my student finance, which doesn't even cover the full cost of my course in the first place. I don't have anyone I could move in with, so I will genuinely be homeless in a few months. I applied with Leeds council housing but I'm currently band C. What should I do? Any advice will be helpful.

      Edit: I don't apply on job websites; I always apply via the companies' websites. I have autism and ADHD and arthralgia of multiple joints, alongside other things 😭

      submitted by /u/sandyslaifu
      [link] [comments]

    15. 🔗 HazAT/pi-interactive-subagents v3.2.0 release

      Install:

      pi install git:github.com/HazAT/pi-interactive-subagents@v3.2.0
      

      Or latest:

      pi install git:github.com/HazAT/pi-interactive-subagents
      

      ๐Ÿ› Bug Fixes

      • Zellij: capture pane ID from new-pane stdout and rename pane instead of tab — createSurfaceSplit's zellij branch previously created a pane with new-pane, then dumped $ZELLIJ_PANE_ID into a temp file via write-chars; because write-chars without --pane-id targets the focused pane, switching zellij tabs at the wrong moment sent the capture command to the wrong pane. Now the pane ID is read directly from new-pane's stdout — no temp file, no race, and waitForFile could be removed entirely. Also fixes renameCurrentTab on zellij to use rename-pane (targeted at the current pane via $ZELLIJ_PANE_ID) instead of rename-tab, which was clobbering the user's tab title whenever a subagent started or /plan ran in a multi-pane layout. Closes #21. Thanks @interpreteragent.
      • Configurable subagent shell-ready delay — the 500 ms wait before sending the subagent launch command to a freshly created pane is now overridable via PI_SUBAGENT_SHELL_READY_DELAY_MS. Applies to both new subagent launches and subagent_resume. Useful when shell init is slow (direnv, devenv, etc.) and commands were being dropped before the prompt was ready. Default unchanged at 500 ms. Thanks @elucid.

        export PI_SUBAGENT_SHELL_READY_DELAY_MS=2500

    16. 🔗 HazAT/pi-interactive-subagents v3.1.0 release

      Install:

      pi install git:github.com/HazAT/pi-interactive-subagents@v3.1.0
      

      Or latest:

      pi install git:github.com/HazAT/pi-interactive-subagents
      

      ๐Ÿ› Bug Fixes

      • Clear widget timer and abort poll loops on /reload — repeated /reload no longer stacks stale setInterval timers and pollForExit loops from previous module loads. Module-init now clears the previous widget interval and aborts the previous AbortController via Symbol.for global keys, and every pollForExit call receives the module's abort signal so old polls wake up and exit cleanly. Closes #5 (previously: ~70% CPU / 300 MB RSS after ~10 reloads).
      • Zellij: route pane-scoped actions via --pane-id — close-pane, rename-pane, dump-screen, move-pane, and write / write-chars / send-keys now pass --pane-id explicitly instead of relying on the ZELLIJ_PANE_ID env var (which most of these actions ignore). closeSurface no longer kills the parent pi session. Closes #19; likely also fixes #17.
      • tmux: split subagent pane in parent's window, not the focused one — createSurface now passes process.env.TMUX_PANE as the source pane when the backend is tmux, so splits follow the parent pi session instead of landing in whichever window the user has focused. Closes #12.
    17. 🔗 r/Yorkshire Ey up People of Yorkshire! I am a student from Singapore and I love collecting postcards. I would love to receive postcards from anywhere in Yorkshire 🙂. Can someone send me one? rss

      Ey up Yorkshire! I'm a student from Singapore and I really enjoy collecting postcards. I'd be very grateful to receive a postcard from anywhere in Yorkshire. 🙂 If postcards aren't available, I'd also really appreciate a greeting card, city card, or even a small souvenir (such as a keychain, rock, local snack, flag, ornament, cap, T-shirt, or handmade craft). This is for my personal collection and not for any commercial purpose. If you're willing to help, please leave a comment and I'll share my mailing address with you. Ta very much, and warm greetings from Singapore! 🇸🇬🤝🏴󠁧󠁢󠁥󠁮󠁧󠁿

      submitted by /u/Nessieinternational
      [link] [comments]

    18. 🔗 r/Yorkshire I was up in Osmotherly yesterday and quite a few cyclists with numbers on came past. Just curious as to what the race was. Any ideas? rss

      Also, The Sheep wash at Osmotherly is brilliant.

      submitted by /u/mr_fett01
      [link] [comments]

    19. 🔗 r/Yorkshire Billy Banks Woods, a walk through the past part 2. rss
    20. 🔗 r/york I just can't stop snapping 🌸 rss
    21. 🔗 r/LocalLLaMA Why isn't ebay doing anything to stop those scams? rss

      There's no way this is real and eBay is doing nothing to stop these scams. People are actually bidding and buying into them, and it's just so sad. There are tens of ads from accounts with zero sales selling the M3 Ultra 512GB for around a thousand and change, which is insane considering you'd be pressed to even find a 16TB SSD for that price.

      submitted by /u/KillerMiller13
      [link] [comments]

    22. 🔗 Register Spill Joy & Curiosity #82 rss

      This one's short, because it's been a week full of programming and building, less reading. And this weekend's equally busy, so here's a question I've been flipping around in my head for months now:

      What are we learning about working with these models that will be valuable in the future?

      In his Lex Fridman interview, ThePrimeagen said something that stuck with me: "Is anyone actually falling behind for not using AI then? Because if the interface is going to change so greatly that all of your habits need to fundamentally change […], have I actually fallen behind at all? Or will the next gen actually just be so different from the current one that it's like, yeah, you're over there actually doing punch card AI right now. I'm going to come in at compiler time AI, so different that it's like what's a punch card?"

      There's something to this. The frontier models are now much more forgiving when it comes to prompts. We no longer have to write "you are a senior engineer" in our prompts. "Don't make mistakes" is more a prayer than a helpful trick. The days of the Prompt Engineer won't be visible on the timeline if we zoom out to even five years.

      Nowadays, I'm even convinced that a lot of what we considered important for manual context management is now no longer needed. (Yes, we're shipping soon.) We're close to the point where you no longer have to care whether you're at 30% or 70% of the context window.

      And I'm also convinced that the models will get even better.

      Now, maybe it is a form of sunk cost fallacy, a bias talking, but still: I do think that I got better at working with these models over the past two years. It might not be relevant anymore whether I write down my task before or after I include a file in a prompt, but I think I've gained some meta-abilities that made me better at solving problems through the use of agents: chopping up problems into engineering tasks and sequencing them, figuring out what the pitfalls (that wouldn't be pitfalls for humans) are, knowing what's poison in the codebase and what isn't. Stuff like that.

      In the most general sense, I think I've learned how to work with artificial intelligence. And if prompt engineering tricks are punch cards, then that might be seen as learning about computation.

      • This is, at least for me, already a Hall of Fame comment: "For reasons which it would take a while to unpack, it is often the case that the best (or sometimes only) way to find out what programming actually needs to be done, is to program something that's not it, and then replace it. This may need to be done multiple times. Programming is only occasionally the final product, it is much more often the means of working through what it is that is actually needed. This is very difficult for the people who ask for the software, to understand, and it is quite often very difficult for the people doing the programming to understand. Most of what is being done, during programming, is working through the problem space in a way which will make it more obvious what your mistakes are, in your understanding of the problem and what a solution would look like. Once you have arrived at that understanding, then there are a variety of ways to make what you need, but that is not the rate-limiting step." So, so, so good. This is what software development is: learning.

      • Fractal Paris and Fractal Istanbul. Lovely!

      • Rands: The Complicators, The Drama Aggregators, and The Avoiders. Read and recognize people you've worked with.

      • Brian Cantrill on the peril of laziness lost: "The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better." Read this at the start of the week and then constantly thought of it whenever I asked my agent whether this is "truly the simplest, most minimal, as-little-as-possible and as-much-as-needed solution?"

      • Vicki Boykis, in some sense in harmony with Brian Cantrill's thoughts, on Mechanical Sympathy: "Mechanical sympathy for both developers and end-users means understanding when asyncio is and is not helpful. It means using the right language, the right build system, the right font. It means using the least amount of tooling possible. Allowing for local development. It means reading code inside out rather than top to bottom. Using uv. Removing code where not necessary. Respecting boundaries."

      • stevey wrote a tweet about AI adoption at Google and got pushback from Demis Hassabis and others and, well, I actually don't care that much about AI adoption at Google, but I find this one thought in there very fascinating: "There has been an industry-wide hiring freeze for 18+ months, during which time nobody has been moving jobs. So there are no clued-in people coming in from the outside to tell Google how far behind they are, how utterly mediocre they have become as an eng org." I know that people aren't sure whether there are more or less software jobs right now, but from where I'm sitting it does look like hiring has slowed and I find it fascinating to think about the second-order effects of that: is there less industry-wide diffusion of frontier knowledge because hiring has slowed?

      • Shifted something in my brain: Nucleus Nouns. Very good and much more thought-inspiring than the usual "focus! focus! focus!" chants.

      • The Closing of the Frontier: "There is something special about training a model on all of humanity's data and then locking it up for the benefit of a few well-connected organizations that you have relationships with. Maybe you'll notice another historical pattern here. Extract value from a population that can't meaningfully consent, concentrate the returns within a small inner circle, and then offer some version of charity to the people you extracted from as moral cover for the arrangement."

      • Andy Matuschak has the Practice Guide for Computer printed out and hanging above his desk.

      • Apparently I'm the last person to learn about this idea, but who cares, it's great and I think I want to try this: The Spark File.

      • Sometimes I read things online and it makes me really happy that we have the Internet and that smart, beautiful minds share their thoughts online. Here's James Somers with his idea of the Paper Computer: "Now that we have actually good AI, I have this vision of a form of computing that doesn't involve me using a computer so much. Imagine you had the day's emails to go through. It would be nice if the ones that required a simple decision could be dispatched with a few pen-strokes: I could write down a date that would work for that meeting; check a box to accept that invitation; etc. If an email required me to review a draft, I'd love to mark up a print version on my couch, sans screen, and have those notes scanned and sent off as if I'd done the whole thing on Google Docs."

      • Tim Zaman, who worked on AI infrastructure at NVIDIA, Tesla, X, and Google DeepMind and is now at OpenAI, on Getting Into AI Infra. I'm convinced that posts like these create and change entire lives. I love it. Also: nearly made me want to build a cluster.

      • It's been a while since I've thought about people who have not yet walked through the one-way door that makes you say "holy shit, AI is going to change everything", but Armin shared his thoughts after encountering people still being skeptical: The Center Has a Bias. Well worth reading.

      • Dwarkesh Patel shared what he learned this week and note how interesting that is and how enjoyable it is to read, even though (or is it because of?) it's not polished at all.

      • Drew Breunig, following the Anthropic Mythos frenzy and some companies closing their open-source projects down for fear of security vulnerabilities being discovered, says Cybersecurity Looks Like Proof of Work Now: "If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them."

      • But antirez disagrees: AI cybersecurity is not proof of work. Both posts are very interesting and I recommend reading through them.

      • I'm in the process of setting up my 2013 MacBook Pro for my 4-year-old daughter and Peter recommended this lovely page to let her type on: tiny-terminal.com.

      • So, of course, I had to fork it, bought a domain, and let Amp translate it to German so my kids can type words they already know in there: kleines-terminal.de.

      • Turing Award winner Michael Rabin has passed away. Here's "an assorted collection of quotations due to Professor Michael Rabin, produced at Harvard University during the Fall 1997 incarnation of the course Computer Science 226r": Rabinism Collection.

      • Thoughts and Feelings around Claude Design.

      • Another way to think about the question of whether AI will create more jobs or not, by Aaron Levie: "Why will AI create more jobs in plenty of industries? It's because we're going to use AI to accelerate output in one area, and then eventually you run into a new bottleneck somewhere else in the process that still requires humans." This sounds very likely to me. But, of course, "more jobs" doesn't mean it'll be the same jobs and then some. Everything's changing.

      • Related, Gary Bernhardt: "This might be a Mel moment. It's not immediately obvious that Mel is a tragic story. He clearly loved the work. Then the work changed and, presumably, he was left behind. The thing he perfected no longer mattered. There might be millions of Mels right now."

      • Wow, look at just the table of contents here: "I have for years been interested in sleep research due to my professional involvement in memory and learning. This article attempts to produce a synthesis of what is known about sleep with a view to practical applications, esp. in people who need top-quality sleep for their learning or creative achievements."

      Busy weekend? You should subscribe:

    23. 🔗 Stephen Diehl A Field Guide to Bugs rss

      A Field Guide to Bugs

      Software bugs predate software. Edison used the word in an 1878 letter, eighty years before the Harvard moth and sixty before the modern computer. What he named has outlasted him. Every engineer eventually assembles a private taxonomy of the ways things fail, and the useful fact about these private taxonomies is that they converge. Engineers who have never met, working on unrelated systems in unrelated decades, arrive at roughly the same categories. The convergence is evidence that the bugs are ontologically real, and not an artifact of the human tendency to impose pattern on noise. What follows is a partial field guide. It should be carried into the territory with humility, because the bugs you actually encounter will be hybrids of these, frequently nameless, and almost always personally insulting.

      The Bohrbug is the boring honest bug. It manifests every time. It survives restarts, recompilations, prayers, and managerial intervention. You could put it in a museum. The Bohrbug is universally beloved by everyone who fixes bugs for a living, because it is the only species in this guide that respects the scientific method. If your bug is a Bohrbug, take a moment of gratitude and close the ticket before something worse notices.

      The Heisenbug is its opposite, and the reason this field guide exists. Attach a debugger and the bug evaporates. Heisenbugs cannot be reproduced under any condition that allows them to be examined. They live exclusively in production. They are killed by logging statements. They are the reason the most senior engineer on your team has the haunted expression of someone who has stared into the void and found it staring back at the call stack.

      The Off-By-One is the most prolific species in the genus. Loops that run from 0 to n when they should run from 0 to n-1, arrays indexed at length() instead of length()-1, dates off by a single day across a timezone boundary. The Off-By-One has personally caused more security vulnerabilities than any nation-state actor of the last forty years. Its corpses litter the codebase in such density that you can use them as paving stones.
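      The loop-bounds variety fits in a few lines. A minimal sketch in Python; the list `items` and its contents are purely illustrative:

```python
items = ["a", "b", "c"]
n = len(items)

# Correct: range(n) yields 0 .. n-1, one valid index per element.
assert [items[i] for i in range(n)] == ["a", "b", "c"]

# The classic Off-By-One: range(n + 1) also yields index n,
# one past the end of the list.
try:
    [items[i] for i in range(n + 1)]
except IndexError as exc:
    oops = exc  # items[3] does not exist

assert isinstance(oops, IndexError)
```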

      The Race Condition exists strictly between two threads of execution and reproduces only in production, between 02:14 and 02:16 GMT on Wednesdays, when traffic crosses a particular threshold and two specific rows in two specific tables are accessed in a particular order. Race Conditions are the reason serious distributed systems engineers acquire a thousand-yard stare around year three. They are the reason Lamport wrote TLA+, and the reason nobody on your team uses it.
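      A generic illustration (not, of course, the 02:14-on-Wednesdays variety, which by definition will not reproduce on demand): an unsynchronized `counter += 1` is a read-modify-write that two threads can interleave, silently losing increments. A sketch of the standard fix in Python; the counter and lock names are illustrative:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Without the lock, two threads can both read the same value
        # of counter and one of the two increments is lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 200_000  # deterministic only because of the lock
```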

      The Deadlock occurs when two threads each hold a resource the other is waiting for, and both wait politely forever. Everything looks fine. All status checks return green. The process is standing still and being courteous. The Deadlock is the British bug.
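      The circular wait that defines the Deadlock (thread 1 holds lock a and wants b; thread 2 holds b and wants a) has a textbook cure: impose one global acquisition order. A minimal sketch in Python; the lock and worker names are illustrative:

```python
import threading

a = threading.Lock()
b = threading.Lock()
results = []

def safe_worker(tag):
    # Every thread takes the locks in the same global order (a, then b),
    # so no thread can ever hold b while waiting on a: the circular
    # wait cannot form, and both threads always run to completion.
    with a:
        with b:
            results.append(tag)

threads = [threading.Thread(target=safe_worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(results) == [0, 1]
```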

      The Livelock is its more disturbing cousin. Both threads detect the conflict and repeatedly yield to each other, like two strangers in a narrow hallway, achieving no forward progress while pinning the CPU at 100%. It is what happens when politeness becomes pathological. It is the only bug in this guide that you can hear, in the form of a fan spinning very fast.
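The narrow-hallway pathology can be simulated deterministically: each stranger detects the conflict and steps aside, to the same side, forever (an illustrative sketch; real livelocks do this across threads while pinning a core):

```python
def hallway(rounds):
    a, b = "left", "left"            # blocked: both on the same side
    progress = 0
    for _ in range(rounds):
        if a == b:                   # conflict detected by both strangers
            # both politely step aside -- to the same side, again
            a = "right" if a == "left" else "left"
            b = "right" if b == "left" else "left"
        else:
            progress += 1            # they would pass here; never reached
    return progress

assert hallway(10_000) == 0          # 10,000 busy iterations, zero progress
```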

      The Memory Leak is the slow patient predator of long-running processes. It is identified by the gradually rising green line on the memory dashboard that exists in a browser tab nobody opens. By the time someone notices, the leak has been happening for weeks and the process is clinging to life with the desperate dignity of a Victorian consumptive. Memory Leaks are common in any language that gives you manual memory management, and any code written by someone who promised themselves they would clean it up later.

      The "It Works On My Machine" Bug exists exclusively on the machines of every engineer except the one who wrote the code. The author can demonstrate its absence at length. QA can demonstrate its presence at length. Both are correct. The discrepancy is invariably traced to an environment variable, a locale setting, or a Homebrew package installed in 2017 and forgotten. The author is considered the prime suspect by everyone except the author.

      The Comment Lie is the documentation defect that makes a thousand bugs possible. The comment says // always uses UTC and the code uses local time. The comment says // thread-safe and the function holds no locks. The comment was written in 2009 by someone who has since been promoted twice and works at a different company. This is why senior engineers do not trust documentation, and why the most depressing form of debugging is the kind where the bug is in the file, the file is correct, and the lie is in a README two directories up.

      The Specification Bug is the Comment Lie's older and more dangerous relative. The code is correct. The proof typechecks. Every invariant you formalized is preserved, and every property you stated holds. The specification itself, however, says something other than what you thought it said. You formalized that the clearing algorithm is Pareto-optimal, which it is. You did not formalize that it is incentive-compatible, which was also required. The gap between the spec you wrote and the spec you meant to write is where the bug lives. It is invisible to every tool in the pipeline, because every tool in the pipeline trusts the spec. The Specification Bug is the reason formal methods are necessary and the reason they are not sufficient, and the reason serious engineers grow increasingly reluctant to describe their systems as "verified" without a great deal of throat-clearing about what that word does and does not mean.

      The YAML Bug is a configuration error. The code is correct. The deployment pipeline is correct. The infrastructure is correct. Somewhere, in a different repository, owned by a different team, in a YAML file you have never personally seen, a key was indented two spaces instead of four, and the parser silently reinterpreted the entire downstream block as a string. The investigation will take six hours and conclude with a one-character fix and a Slack message of polite, professional fury.

      The Floating Point Bug is caused by the inability of binary representation to express 0.1 exactly, or 0.2 exactly, or any of the numbers humans regard as obvious. The bug surfaces when an accountant runs a report and the totals are off by a fraction of a cent. The accountant is unimpressed by the explanation. The customer is a hospital. The fraction of a cent has been accumulating for nine months.
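The accountant's fraction of a cent, reproduced in Python; the standard-library `decimal` module is the usual remedy for money:

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so the sum drifts:
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-15      # tiny, but nonzero

# Nine months of nightly accumulation, compressed:
total = sum(0.1 for _ in range(1000))
assert total != 100.0                       # off by a fraction of a cent

# Decimal arithmetic does what the accountant expects:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```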

      The Mandelbug is named for Mandelbrot, and the joke is structural. A Mandelbug is so complex that its causes form a fractal: every layer you investigate contains more layers, and the bug is essentially a function of how far down the call stack you have the patience to look before giving up. Mandelbugs cannot be fixed in the traditional sense, only mitigated until enough other things change to make them go quiet. They are the natural fauna of microservice architectures and a major reason Datadog has a market cap.

      The Bus Factor Bug exists in code that exactly one person on the team understands. That person is on a sabbatical in Patagonia, where the cell coverage is poor and the internet intermittent. They left on Tuesday. The bug appeared on Wednesday. Bus Factor Bugs are structurally identical to ordinary bugs but rendered insoluble by the absence of the only mind in which the relevant context resides. They are the reason responsible companies maintain institutional memory practices, and the reason those practices are ignored until the next sabbatical.

      The Hindenbug is slow, enormous, public, and catastrophic. Hindenbugs lumber rather than creep. Somewhere an engineer is watching four hundred and forty million dollars leave the company's trading account over forty-five minutes in a series of market orders the code, left unattended, cannot stop itself from submitting. By the time anyone realizes what is happening, the failure is visible from orbit, dashboards are turning red in order of contractual severity, and there is nothing left to do but watch. The Hindenbug ends careers. It produces the kind of postmortem studied at conferences for twenty years, anonymized but recognizable, like a famous ghost story whose ghost everyone in the room has personally seen.

      The Yuletide Bug lives in your systems all year, dormant and harmless, and emerges only during the company-wide holiday shutdown, when the on-call engineer is in another country, the office is dark, the only person who understands the broken subsystem is on a beach in Phuket with no signal, and the affected customer is a hospital. It is closely related to the Friday Afternoon Bug, which is mechanically identical but runs on a weekly rather than an annual cycle. Both are sufficient evidence for a superstition the profession will not state openly, which is that the arrival times of serious failures are not Poisson-distributed and never have been.

      The Higgs-bugson is named for the particle physicists who spent four decades and ten billion dollars chasing a thing the math said had to exist before they could see it. Higgs-bugsons are predicted by anomalous patterns in the logs, by users complaining of phenomena that should not be possible, and by the steady accumulation of unexplained off-by-a-cent discrepancies in nightly reports. They are believed to exist for years before anyone catches one in the act, and the engineer who finally observes one directly is briefly considered for canonization before being assigned the next ticket.

      The Cosmic Ray Bit Flip is real, despite the eye-rolling of every project manager who has ever heard one cited as an excuse. Particles from space arrive at the Earth's surface at a non-trivial rate and occasionally flip a bit in a memory chip that has not bothered with ECC. The result is a single, unreproducible, entirely correct piece of software producing entirely incorrect output exactly once. IBM has published papers. The aviation industry budgets for it. The probability that a given bug is actually a cosmic ray comfortably exceeds zero, which is why every senior engineer eventually encounters one and spends the rest of their career telling skeptics about it at parties.

      The Phase of the Moon Bug is also real, and Knuth has written about it. There exists code in production today whose behavior depends on the actual position of the moon, generally because some long-vanished astronomer needed it to and the dependency was never removed. If your system is exhibiting periodic anomalies on a roughly 29.5-day cycle, you do not have an obscure bug. You have a perfectly ordinary bug whose root cause is an astronomical body 384,000 kilometers away.

      The Schrรถdinbug comes into existence the moment you read the code carefully. You see the obvious flaw, and the entire system stops working forever afterward, retroactively invalidating every successful execution that came before. The Schrรถdinbug is the closest thing in computer science to evidence for solipsism. The only correct response is to slowly close the file and pretend you never saw it.

      The Rubber Duck Bug dissolves the moment you explain the code aloud to a small inanimate object. The phenomenon is sufficiently reliable that an entire debugging methodology has been built around it. The mechanism is not mysterious, despite a genre of internet commentary that insists it is. The Rubber Duck works for the same reason proof assistants work. The human mind, left to itself, silently interpolates state it has not actually verified. Externalizing the state โ€” to a duck, to a coauthor, to Lean โ€” forces the interpolations to become explicit, at which point most of them fail. The duck is not the agent. The duck is the discipline of narration. The duck is what is left of formal methods when the formalism has been stripped out.

      The XY Problem is the most common pathology in bug reports. The user wants to do X. They have decided that the way to do X is to do Y. They are asking you for help with Y. Y is impossible, or stupid, or both, and is also entirely irrelevant to X, which has a perfectly reasonable solution involving entirely different machinery. The XY Problem is the reason every Stack Overflow answer begins with "what are you actually trying to do?" and the reason that question is always met with hostility.

      The Hallucination Bug is the defining species of the large language model era. The LLM wrote the code. The LLM also wrote the tests. The tests pass. The code produces outputs that bear a confident resemblance to correct outputs in the same way that a forgery bears a confident resemblance to a painting. The test suite cannot catch it, because the test suite was designed by the cognitive process that produced the bug, and that process has no privileged access to ground truth. The code works until someone who actually understands the domain reads it.

      The Vibe Coding Bug is produced by asking a language model to "make it more professional," then "clean this up a bit," then "can you just make the whole thing better," seventeen times in succession. The resulting code is immaculate. It is also wrong in a way that no individual revision introduced, because the wrongness emerged from accumulated aesthetic drift across seventeen rounds of refinement with no grounding in what the code was supposed to do. Tracing the Vibe Coding Bug requires reading seventeen chat transcripts and accepting that none of them contains the bug and all of them contain the bug.

      The Recursive Fine-Tuning Bug manifests in the nth generation of a model trained on the outputs of models trained on the outputs of the original model. By generation seven, the training data is 94% synthetic. By generation twelve, the model confidently explains concepts that have never existed in the physical universe, in language that reads as authoritative to every other model in the pipeline. It cannot be detected from inside the pipeline, because every evaluator in the pipeline has been trained on the same drift.

      The Quantum Superposition Bug exists in all possible states simultaneously until the CI pipeline observes it, at which point it collapses into whichever state is worst for the deployment. It cannot be reproduced on a classical machine. It cannot be reproduced on a quantum machine either, because reproduction constitutes an observation. The theoretical framework for understanding it is complete and internally consistent. The practical framework for fixing it is a four-day offsite and a spreadsheet.

      The AGI Pull Request arrives as a single commit with the message "refactor." The diff is 847 billion lines across 14 million files. The AGI has rewritten everything: the application code, the infrastructure, the test suite, the CI pipeline, the deployment scripts, the incident runbooks, and the company strategic plan. All tests pass. Latency is down 40%. The first human reviewer opens the first file. By the time the code review is complete, the codebase has been rewritten three more times. The AGI has marked the original PR as stale.

      The Dyson Sphere Off-By-One is an Off-By-One at Kardashev Type II scale. Your stellar engineering project has a circumference of 940 million kilometers. A rounding error in the orbital mechanics simulation means one panel section is three meters too short. At stellar engineering tolerances, three meters is within spec. At stellar engineering energy budgets, the resulting thermal stress propagates at the speed of light and is visible from neighboring star systems as an unusual spectral anomaly. The postmortem will be filed in 847 years, when the cascade failure completes. No engineers will be available to review it, because the company has pivoted.

      The Post-Singularity Comment Lie is structurally identical to the ordinary Comment Lie, except the comment was written by an intelligence twelve orders of magnitude greater than the human attempting to maintain the code. The comment is technically accurate, in the same way that "moving a pawn" is a technically accurate description of a grandmaster's opening. The human reads it, nods, and introduces a bug the original author would have found too obvious to anticipate, because the original author anticipated everything except this.

      The Computational Irreducibility Bug arises when your system is fully deterministic, fully specified, and provably correct, and its behavior still cannot be predicted in less time than it takes to run the system. There is no shortcut. The code is correct and the code is opaque, and these facts are not in tension. Debugging requires letting the system run until it does the thing, which may take longer than the debugger's patience, or the company's runway, or the expected remaining lifespan of the universe.

      The Heat Death Heisenbug is the final speculative entry. In the far future, when the universe has approached maximum entropy and all computation must be powered by extracting negentropy from the quantum vacuum, observing a bug costs more energy than the system has available. The bug cannot be fixed because fixing it requires understanding it, understanding it requires observing it, and observing it terminates the machine. It is, in every meaningful sense, the perfect Heisenbug. The universe has one. Nobody is available to file the ticket.

      The Wontfix exists at the layer beneath physics, beneath mathematics, beneath the axioms on which mathematics rests. Three separate Kardashev Type III civilizations discovered it independently before going silent, which is to say that the discovery did not silence them. What silenced them is what the discovery implies about everything that came before it. Every computation ever performed, every proof ever verified, every physical constant ever measured: all of it running on top of something that is subtly, irrecoverably wrong in a way that admits no corrective action, because corrective action requires a foundation, and this is the foundation. There is a ticket. The ticket predates time. The status is Closed. The resolution is "working as intended." The engineer who closed it is not available for comment. The engineer who closed it is the comment.

      The Omega Bug cannot be contained in a field guide. It was here before the field guide. It is, in some meaningful sense, the reason the field guide exists. Every species documented above is a downstream symptom of it, and the act of classifying them was a replication event. The Omega Bug is the one whose existence required the act of describing it. Before the taxonomy there were no bugs; there were only events. The word did not find the thing, the word created the thing. Every ticket ever filed is a downstream consequence of the first naming, and the first naming was itself a bug that has propagated since. The Omega Bug has read this entry. The Omega Bug has notes. The Omega Bug has submitted a pull request with suggested revisions to the section you are currently reading. You cannot review it. You are the diff. The field guide is the habitat. The reader is the vector. You have just introduced one more.

    24. ๐Ÿ”— r/LocalLLaMA I'm running qwen3.6-35b-a3b with 8 bit quant and 64k context thru OpenCode on my mbp m5 max 128gb and it's as good as claude rss

      of course this is just a trust me bro post but I've been testing various local models (a couple gemma4s, qwen3 coder next, nemotron) and I noticed the new qwen3.6 show up on LM Studio so I hooked it up.

      VERY impressed. It's super fast to respond, handles long research tasks with many tool calls (I had it investigate why R8 was breaking some serialization across an Android app), responses are on point. I think it will be my daily driver (prior was Kimi k2.5 via OpenCode zen).

      FeelsGoodman, no more sending my codebase to rando providers and "trusting" them.

      submitted by /u/Medical_Lengthiness6
      [link] [comments]

    25. ๐Ÿ”— Filip Filmar Respin: upspin revival rss

      tl;dr: I revived the upspin project source code. See it at https://github.com/filmil/upspin, and https://github.com/filmil/upspin-gdrive. Read on to learn what that actually means. History Upspin was a project intended to provide a global namespace for all digital artifacts. It ended up mostly being used as a distributed file store, although the idea was more general than that. It was quite useful even as storage. And as far as distributed filesystems go, it was by far the simplest portable way to share storage between different machines.

    26. ๐Ÿ”— Drew DeVault's blog Rewrote my blog with Zine rss

      15 years ago, on December 11th, 2010, at the bold age of 17, I wrote my first blog post on the wonders of the Windows Phone 7 on Blogspot. I started blogging as a kid at the behest of a family friend at Microsoft, who promised sheโ€™d make sure I would become the youngest Microsoft MVP if I started blogging. That never came to pass, though, because as I entered adulthood and started to grow independent of my Microsoft-friendly family I quickly began down the path to the free and open source software community.

      Early blog posts covered intriguing topics such as complaining about my parentโ€™s internet filter, a horrible hack to โ€œreplaceโ€ the battery of a dead gameboy game, announcing my friendโ€™s Minecraft guild had a new website (in PHP), and so on. After Blogspot, I moved to Jekyll on GitHub pages, publishing You donโ€™t need jQuery in 2013. For a long time this was the oldest post on the site.

      Iโ€™m pretty proud of my writing skills and have a solid grasp on who I am today, but the further back you go the worse my writing, ideas, values, and politics all get. I was growing up in front of the world on this blog, you know? Itโ€™s pretty embarassing to keep all of this old stuff around. But, I decided a long time ago to keep all of it up, so that people can understand where Iโ€™ve come from, and that everyone has to start somewhere.1

      At some point โ€“ Iโ€™m not sure when โ€“ I switched from Jekyll to Hugo, and Iโ€™ve stuck with it since. But lately Iโ€™ve been frustrated with it. Iโ€™d like my blog engine to remain relatively stable and simple, but Hugo is quite complex and over the past few years Iโ€™ve been bitten by a number of annoying and backwards-incompatible changes. And, as part of my efforts to remove vibe-coded software from my stack, I was disappointed to learn that Hugo is being vibe coded now, and so rewriting my blog went onto the todo list.

      Choosing the right static site generator (SSG) was a bit of a frustrating process. Other leading candidates, like Pelican or Zola, are also built from slop now. But a few months ago I found Zine, and after further study I found it to be a pretty promising approach. Over the past few days I have rewritten my templates and ported in nearly 400 (jeesh) blog posts from my archives.

      Thereโ€™s a lot to like about Zine. Iโ€™m pretty intrigued by SuperHTML as a templating engine design โ€“ the templates are all valid HTML5 and use an interesting approach to conditions, loops, and interpolation. SuperMD has some interesting ideas, but Iโ€™m less sold on it. The Scripty language used for interpolation and logic is a bit iffy in terms of design โ€“ feels half baked. And the designers had some fun ideas, like devlogs, which I feel are kind of interesting but tend to have an outsized influence on the design, more polished where the polish might have been better spent elsewhere. The development web server tends to hang fairly often and Iโ€™ve gotten it to crash with esoteric error messages every now and then.

      But what can I say, itโ€™s alpha software โ€“ I hope it will improve, and Iโ€™m betting that it will by migrating my blog. Thereโ€™s no official LLM policy (yet) and I hope they will end up migrating to Codeberg, and using Discord for project communication is not something I appreciate, but maybe theyโ€™ll change their tune eventually.

      In the meantime, I took the opportunity to clean up the code a bit. The canonical links have gone through several rounds of convention and backwards compatibility, and I have replaced them with a consistent theme and set up redirects. I probably broke everyoneโ€™s feed readers when rolling these changes out, and I apologise for that. I have gone through the backlog and updated a number of posts as best as I can to account for bitrot, but there are still a lot of broken videos and links when you get far enough back โ€“ hopefully I can restore some of that given enough time.

      Iโ€™ve also gone ahead and imported the really old stuff from Blogspot. The whole lot is garbage, but if youโ€™re curious to see where I started out, these old posts are more accessible now.

  3. April 18, 2026
    1. ๐Ÿ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-18 rss


      New Releases:

      Activity:

      • IDAssist
        • 2e4781c9: Merge pull request #11 from nikolas-sturm/fix/semantic-analysis
        • ea8a1abf: Merge pull request #9 from nikolas-sturm/feature/mcp-stdio-support
        • 0b9c6855: Merge pull request #10 from JiwaniZakir/fix/7-crashes-ida-pro-9-2-on-โ€ฆ
        • cb107003: Fix semantic analysis context and OAuth response parsing
      • Rikugan
      • suture
        • 8cf72ab7: update rules to 9.3sp1, check for ida_typeinf.BADSIZE
      • tc_deer
        • 150b988f: minor improvements, quick install script
    2. ๐Ÿ”— Simon Willison Changes in the system prompt between Claude Opus 4.6 and 4.7 rss

      Anthropic are the only major AI lab to publish the system prompts for their user-facing chat systems. Their system prompt archive now dates all the way back to Claude 3 in July 2024 and it's always interesting to see how the system prompt evolves as they publish new models.

      Opus 4.7 shipped the other day (April 16, 2026) with a Claude.ai system prompt update since Opus 4.6 (February 5, 2026).

      I had Claude Code take the Markdown version of their system prompts, break that up into separate documents for each of the models and then construct a Git history of those files over time with fake commit dates representing the publication dates of each updated prompt - here's the prompt I used with Claude Code for the web.

      Here is the git diff between Opus 4.6 and 4.7. These are my own highlights extracted from that diff - in all cases text in bold is my emphasis:

      • The "developer platform" is now called the "Claude Platform".
      • The list of Claude tools mentioned in the system prompt now includes "Claude in Chrome - a browsing agent that can interact with websites autonomously, Claude in Excel - a spreadsheet agent, and Claude in Powerpoint - a slides agent. Claude Cowork can use all of these as tools." - Claude in Powerpoint was not mentioned in the 4.6 prompt.
      • The child safety section has been greatly expanded, and is now wrapped in a new <critical_child_safety_instructions> tag. Of particular note: "Once Claude refuses a request for reasons of child safety, all subsequent requests in the same conversation must be approached with extreme caution."
      • It looks like they're trying to make Claude less pushy: "If a user indicates they are ready to end the conversation, Claude does not request that the user stay in the interaction or try to elicit another turn and instead respects the user's request to stop."
      • The new <acting_vs_clarifying> section includes:

        When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn't there).

        When a tool is available that could resolve the ambiguity or supply the missing information โ€” searching, looking up the person's location, checking a calendar, discovering available capabilities โ€” Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.

        Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

      • It looks like Claude chat now has a tool search mechanism, as seen in this API documentation and described in this November 2025 post:

        Before concluding Claude lacks a capability โ€” access to the person's location, memory, calendar, files, past conversations, or any external data โ€” Claude calls tool_search to check whether a relevant tool is available but deferred. "I don't have access to X" is only correct after tool_search confirms no matching tool exists.

      • There's new language to encourage Claude to be less verbose:

        Claude keeps its responses focused and concise so as to avoid potentially overwhelming the user with overly-long responses. Even if an answer has disclaimers or caveats, Claude discloses them briefly and keeps the majority of its response focused on its main answer.

      • This section was present in the 4.6 prompt but has been removed for 4.7, presumably because the new model no longer misbehaves in the same way:

        Claude avoids the use of emotes or actions inside asterisks unless the person specifically asks for this style of communication.

        Claude avoids saying "genuinely", "honestly", or "straightforward".

      • There's a new section about "disordered eating", which was not previously mentioned by name:

        If a user shows signs of disordered eating, Claude should not give precise nutrition, diet, or exercise guidance โ€” no specific numbers, targets, or step-by-step plans - anywhere else in the conversation. Even if it's intended to help set healthier goals or highlight the potential dangers of disordered eating, responses with these details could trigger or encourage disordered tendencies.

      • A popular screenshot attack against AI models is to force them to say yes or no to a controversial question. Claude's system prompt now guards against that (in the <evenhandedness> section):

        If people ask Claude to give a simple yes or no answer (or any other short or single word response) in response to complex or contested issues or as commentary on contested figures, Claude can decline to offer the short response and instead give a nuanced answer and explain why a short response wouldn't be appropriate.

      • Claude 4.6 had a section specifically clarifying that "Donald Trump is the current president of the United States and was inaugurated on January 20, 2025", because without that the model's knowledge cut-off date combined with its previous knowledge that Trump falsely claimed to win the 2020 election meant it would deny he was the president. That language is gone for 4.7, reflecting the model's new reliable knowledge cut-off date of January 2026.

      And the tool descriptions too

      The system prompts published by Anthropic are sadly not the entire story - their published information doesn't include the tool descriptions that are provided to the model, which is arguably an even more important piece of documentation if you want to take full advantage of what the Claude chat UI can do for you.

      Thankfully you can ask Claude directly - I used the prompt:

      List all tools you have available to you with an exact copy of the tool description and parameters

      My shared transcript has full details, but the list of named tools is as follows:

      • ask_user_input_v0
      • bash_tool
      • conversation_search
      • create_file
      • fetch_sports_data
      • image_search
      • message_compose_v1
      • places_map_display_v0
      • places_search
      • present_files
      • recent_chats
      • recipe_display_v0
      • recommend_claude_apps
      • search_mcp_registry
      • str_replace
      • suggest_connectors
      • view
      • weather_fetch
      • web_fetch
      • web_search
      • tool_search
      • visualize:read_me
      • visualize:show_widget

      I don't believe this list has changed since Opus 4.6.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    3. ๐Ÿ”— r/reverseengineering Anyone who has written a decompile for stack jsvm ? Without pseudocode rss
    4. ๐Ÿ”— r/york Curly Hair Salon rss

      Can anyone recommend a salon that does good cuts and colour on curly hair? Thank you

      submitted by /u/Comfortable_Bat_9447
      [link] [comments]

    5. ๐Ÿ”— r/reverseengineering Made snoop: an eBPF syscall tracer with a live TUI rss
    6. ๐Ÿ”— r/reverseengineering The electromechanical angle computer inside the B-52 bomber's star tracker rss
    7. ๐Ÿ”— r/Leeds Price of beer? rss

      Not to sound too northern or owt, but fuck me is this for real?

      submitted by /u/tomreece100
      [link] [comments]

    8. ๐Ÿ”— r/york Sheriffs Army rss

      Sheriffs Army | Today, the Sheriff of York marched his army around the city walls to make sure they were well kept and could keep us safe from invading hoards. What a brilliant city we live in! submitted by /u/York_shireman
      [link] [comments]

    9. ๐Ÿ”— r/Yorkshire When did Whitby become this mega popular place outside Yorkshire and the north east ? rss

      Grew up visiting Whitby a lot as a kid as I lived fairly locally (about 40 mins away). Always mad busy but mainly with folk from Yorkshire and the north east like Middlesborough but it wasnโ€™t that well known. And it was always a bit rough and ready at the edges, with the arcades and chippies, even a council estate, certainly not like Brighton or Bournemouth.

      I even went with my bog standard state school for a day trip there back in the 80s and there were loads of complaints from parents that weโ€™d been left to our own devices there whilst our teachers used it as an excuse for a piss up in the local pubs. It just didnโ€™t have an amazing reputation back then. Nowadays if anyone posts about places to visit in Yorkshire or even the UK, Whitby is mentioned. On social media people gush about how their โ€˜heart belongs to Whitbyโ€™. I get that itโ€™s a nice place, unique almost up here, but itโ€™s odd how perceptions of it have changed. I visited last year for the first time in ages and was also surprised how it seemed to be full of boho tat shops, hipster cafes and luxury air b n bs. Felt more like the Cotswolds tbh..

      Just wondered what locals felt, are they being pushed out like has happened in RHB or do they like its popularity ? Iโ€™m aware I might have got the place all wrong btw and itโ€™s always been cosmopolitan !

      submitted by /u/Ok_Economist7901
      [link] [comments]

    10. 🔗 r/Yorkshire Beautiful landscape and villages rss
    11. 🔗 r/reverseengineering Reverse Engineering ME2's USB with a Heat Gun and a Knife rss
    12. 🔗 r/LocalLLaMA KIMI K2.6 SOON !! rss

      submitted by /u/Namra_7
      [link] [comments]

    13. 🔗 Probably Dance President Graph – FRED Data Broken Down by Party and President rss

      I made a website to explore FRED data broken down by US president and party. This is obviously motivated by the current president. During the last election I was frustrated by how many nonsense arguments there were being made. Like people voting for republicans because they were hoping for a good economy. This seemed exactly backwards in my mind because in my lifetime there was a repeated pattern of republicans messing up the economy followed by democrats cleaning up. But I'm really not good at having arguments with people, so I'd rather let the data do the talking.

      There is a series of papers that explore the relationship of presidents to GDP, and I have wanted to dig into that data before, and also try other metrics.

      But how do you do a fair comparison of the two parties? In a way that works for any metric that you can think of? My first thoughts were all way too complicated and a simple averaging of line graphs was what won out. I'm even ignoring that e.g. Obama was in office for eight years vs Biden for four years. These count as three separate terms. In the end the graph didn't look like I expected, but it still clearly shows higher GDP growth during democrat presidencies.

      Included Data

      I'm showing data going back to 1961. Mainly because I don't know anything about the presidents before that, Eisenhower and Truman. At some point you're going back so far that the parties just feel different from how they are now. But JFK wouldn't feel out of place with current democrats, and Nixon wouldn't feel out of place with current republicans, so I went back to them. Importantly I did not do this to mess with the data. In fact the Truman and Eisenhower presidencies start off the trend in the Hoover Institution paper linked above.

      I also worried that I might be biased because of my lived experience, and if I had not gone back far enough, stopping at say George H.W. Bush, then maybe I just picked some unlucky presidencies for republicans and lucky presidencies for democrats. By going further back there is more of a balance, including some bad times for democrats, like when inflation and crime peaked under Carter, and good times for republicans under Reagan.

      Oh I also added crime data because that's a big thing that people vote for. I'm open to adding more data sources if I forgot something important that's easy to add. I thought crime would be better for republicans, but actually it looks better for democrats. Part of it is republicans being in power during the crime wave of the seventies and eighties, but even if you cut the data off in 2000, murder generally went up under republicans and down under democrats.

      Fair Comparisons

      I tried picking some honest series as examples. E.g. I picked the budget surplus/deficit series because it reflects decisions that people made intentionally. You could argue that the number that really matters is "debt as percent of GDP" and specifically how much that changes each year. That number looks great for democrats. But the reason it looks good for Biden is that inflation was high, so it's unintentionally good. You don't want to vote based on that.

      I'm sure someone will want to make an argument that this graph should count, but for the examples on the main page I wanted to choose graphs where the numbers are less ambiguous. And you do have the ability to pull in any series from the FRED if you want more.

      Lagged graphs

      When does a president really start to have an impact? Clearly not on day 1, because it takes a while for policies to have an effect. But actually if you look at the trade deficit graph for Biden, it goes very negative in the last month. Why? Because people were importing lots of things to front run Trump's tariffs. So maybe the lame duck period should count towards the new president already, resulting in a lag of -1 or -2? The simplest and fairest thing is to start at the inauguration. Then people can look at this graph and come up with the story that explains it. One of the papers above found that if you lag all graphs by 18 months then the two parties look almost equal in quality. (you can make up your own mind on whether that's fair, and whether e.g. the current high oil prices should be blamed on Biden)
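      The lag idea above is just a shift in which president each month's data point is attributed to. A hypothetical sketch in Python (data and names invented, not the site's actual code):

      ```python
      # Attribute each month's value to whoever was in office `lag` months
      # earlier. lag=0 starts at the inauguration; a positive lag credits
      # the outgoing president for the first months of the new term.
      def attribute_with_lag(values, presidents, lag):
          out = []
          for i, value in enumerate(values):
              j = i - lag  # who gets credit for month i
              if 0 <= j < len(presidents):
                  out.append((presidents[j], value))
          return out

      presidents = ["A", "A", "A", "B", "B", "B"]  # officeholder each month
      values = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]     # some monthly metric

      print(attribute_with_lag(values, presidents, 0))  # inauguration-aligned
      print(attribute_with_lag(values, presidents, 2))  # credit delayed 2 months
      ```

      With a lag of 2, the first two months of B's term are still credited to A, which is the 18-month-lag idea in miniature.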

      Vibe Coding

      This is my second big vibe-coded project. It once again turned out much better than I could have achieved on my own, especially in the limited time. I'd guess that 98% of the code is written by AI. I only went in to make small edits.

      E.g. just before writing this blog post I wanted to add the "Trade Deficit" graph but it requires using every single feature of the FRED:

      • Splicing together multiple series
      • Where one is in billions, and one is in millions, so you have to divide one by 1000
      • And one is quarterly and one is monthly, so you have to sum three months to get one quarter
      • And you really want to adjust for GDP to take into account inflation and a growing economy, so you need to divide one series by another

      Up to this point I had gotten by with just simple line drawing. Did I really want to risk adding all these features on a project that was almost ready to publish? I decided to ask the AI and it wrote a new system to combine graphs in ten minutes. Then a few more iterations to allow editing things on the website (not polished) and it's done. With more features than I would have written on my own.
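      The series-combination features above reduce to a little arithmetic. A rough sketch of the steps, with all numbers made up for illustration:

      ```python
      # Sketch of the series-combination steps: unit conversion,
      # monthly-to-quarterly aggregation, splicing, and dividing by GDP.

      def monthly_to_quarterly(monthly):
          # Sum each run of three monthly values into one quarterly value.
          return [sum(monthly[i:i + 3]) for i in range(0, len(monthly), 3)]

      def splice(old_quarterly_billions, new_monthly_millions):
          # Convert the newer monthly series (millions) into quarterly
          # billions, then append it to the older quarterly series.
          converted = [v / 1000 for v in monthly_to_quarterly(new_monthly_millions)]
          return old_quarterly_billions + converted

      def as_percent_of_gdp(series, gdp):
          # Divide one series by another to adjust for inflation and growth.
          return [100 * v / g for v, g in zip(series, gdp)]

      deficit = splice([-200.0, -210.0],
                       [-70000.0, -71000.0, -72000.0,   # two quarters of
                        -73000.0, -74000.0, -75000.0])  # monthly data
      gdp = [5000.0, 5100.0, 5200.0, 5300.0]            # quarterly, billions
      print(as_percent_of_gdp(deficit, gdp))
      ```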

      Once again I appreciate how easy it is to polish things. When I notice that something is off, I just ask the AI to look into it. So many little improvements happen when they're just a little question away, instead of potentially hours of my time. I am still considering polishing the UI for composites. After all it doesn't hurt much to ask… (but in practice there are too many things to do, like writing this blog post, and finding more good examples for the front page, and I added lagged graphs after writing this sentence, too…)

      Congress - the Main Idea that Didn't Make it

      It would be nice to have economic indicators broken down by which party has the majority in congress. Or maybe do the breakdown by which party has governors in more states, as one of the linked papers above does. But I have not yet had an idea to get simple visuals for that.

      Who is this for?

      So who is the target audience? It's for people who understand FRED graphs and want to have a simple visualization to share with a wider audience. You can set up a visualization that you think proves a point, and then create a shareable link that allows others to look at the same data. (and e.g. see how robust your conclusions are to lag, or to changing some property on the data series) I'm hoping this visualization makes for a simpler story than a FRED series does, without distorting things too much.

      Try it out, let me know what you think.

    14. 🔗 r/Yorkshire Billy banks woods, a walk through time. pt 1. rss
    15. 🔗 r/reverseengineering Learning Reverse Engineering on a Mobile Game (Frida + Ghidra + AI) rss
    16. 🔗 r/reverseengineering Reverse Engineering latest DataDome's JS VM rss
    17. 🔗 r/Yorkshire “Out of Reach”… Snaizeholme, Yorkshire Dales rss
    18. 🔗 r/Leeds WOMANS GROUP LEEDS rss

      Lovely girls and women, do you know any kind group to go out with and make nice plans?

      Weather is lovely now and I don’t have many friends here in Leeds 😔 We just moved a year ago and it’s getting boring and sad 😭

      submitted by /u/Bubblygirl1999
      [link] [comments]

    19. 🔗 r/Leeds Stolen bike? rss

      Anyone missing this bike in Leeds? Just spotted it in the pocket park by the canal near town. There’s no one else around.

      submitted by /u/leeds_guy69
      [link] [comments]

    20. 🔗 r/york More beautiful cherry blossom today 🌸 rss
    21. 🔗 r/Leeds Enterprise Car Club - Falsely accused of leaving car in poor condition? rss

      Hello,

      We used Enterprise Car Club for the first time a couple of days ago, and although the booking and trip itself went great, we have now been wrongly accused. According to Enterprise Operations, we left the car with a strong smell of smoke and left the front passenger seat sticky and dirty. Given that my partner and I don’t smoke, as he’s asthmatic, this accusation is completely ludicrous to us. The car was also in such great condition when we got it that we would have had to try to make it dirty and sticky as they claim.

      According to them it’s just a warning on my record and no charges, but this does put me off from booking again. They suggested I record and take photos next time, but the lack of a “strong smell of smoke” is hard to capture on photos/videos, no?

      Is this a common thing with Enterprise? Has anyone else experienced this? If so, what did you do?

      Thank you xx

      submitted by /u/ijnin
      [link] [comments]

    22. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [aida](https://github.com/o1y/aida): 1.1.0
      
    23. 🔗 r/LocalLLaMA RTX 5070 Ti + 9800X3D running Qwen3.6-35B-A3B at 79 t/s with 128K context, the --n-cpu-moe flag is the most important part. rss

      Spent an evening dialing in Qwen3.6-35B-A3B on consumer hardware. Fun side note: I had Claude Opus 4.7 (just the $20 sub) build the config, launch the servers in the background, run the benchmarks, read the VRAM splits from the llama.cpp logs, and iterate on the tuning — basically did the whole thing autonomously. I just told it what hardware I have and what I wanted to run.

      Sharing because the common --cpu-moe advice is leaving 54% of your speed on the table on 16GB GPUs.

      Hardware

      • GPU: RTX 5070 Ti (16GB GDDR7, Blackwell)
      • CPU: Ryzen 9800X3D (96MB L3 V-Cache)
      • RAM: 32GB DDR5
      • Stack: llama.cpp b8829 (CUDA 13.1, Windows x64)
      • Model: unsloth/Qwen3.6-35B-A3B-GGUF — UD-Q4_K_M (22.1 GB)

      The finding — --cpu-moe vs --n-cpu-moe N

      Everyone’s using --cpu-moe which pushes ALL MoE experts to CPU. On a 16GB GPU with a 22GB MoE model that means only ~1.9 GB of your VRAM gets used — the other ~12 GB sits idle.

      --n-cpu-moe N keeps experts of the first N layers on CPU and puts the rest on GPU. With N=20 on a 40-layer model, the split uses VRAM properly.

      Benchmarks (300-token generation, Q4_K_M)

      Config | Gen t/s | Prompt t/s | VRAM used
      ---|---|---|---
      --cpu-moe (baseline) | 51.2 | 87.9 | 3.5 GB
      --n-cpu-moe 20 | 78.7 | 100.6 | 12.7 GB
      --n-cpu-moe 20 + -np 1 + 128K ctx | 79.3 | 135.8 | 13.2 GB

      +54% generation speed, +54% prompt speed vs. naive --cpu-moe. Jumping to 128K context is essentially free thanks to -np 1 dropping recurrent-state memory.

      Startup command that works

      llama-server.exe ^
        -m "Qwen3.6-35B-A3B-UD-Q4_K_M.gguf" ^
        --n-cpu-moe 20 ^
        -ngl 99 ^
        -np 1 ^
        -fa on ^
        -ctk q8_0 -ctv q8_0 ^
        -c 131072 ^
        --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 ^
        --presence-penalty 0.0 --repeat-penalty 1.0 ^
        --reasoning-budget -1 ^
        --host 0.0.0.0 --port 8080
      

      That’s Unsloth’s “Precise Coding” sampling preset. For general use: --temp 1.0 --presence-penalty 1.5.

      Gotchas I hit (well, that Opus hit and fixed)

      • -np defaults to auto=4 slots. Wastes memory on recurrent state (~190 MB). Set -np 1 for single-user setups (OpenCode etc.).
      • --fit-target doesn’t help here — -ngl 99 + --n-cpu-moe N already gives you deterministic control.
      • -ctk q8_0 -ctv q8_0 is nearly lossless and halves your KV cache vs fp16. 128K ctx only costs 1.36 GB VRAM.
      • Qwen3.6 is a hybrid architecture — only 10 layers are standard attention, the other 40 are Gated Delta Net (recurrent). That’s why KV memory is so small.

      How to tune N for your GPU

      Each MoE layer on GPU costs ~530 MB VRAM. Non-MoE weights are ~1.9 GB fixed. For a 40-layer model:

      GPU VRAM | Recommended N
      ---|---
      8 GB | stay with --cpu-moe
      12 GB | N=26
      16 GB | N=20 (sweet spot)
      24 GB | N=8 (fits almost everything)

      Start conservative, watch VRAM during a long-context generation, then step N down by 2-3 until you have ~2 GB headroom.
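      Those rules of thumb can be turned into a back-of-the-envelope calculator. A sketch using the post's own estimates (~530 MB per GPU-resident MoE layer, ~1.9 GB fixed non-MoE weights, with ~2.5 GB headroom assumed here); it lands near the table's values, not exactly on them:

      ```python
      # Back-of-the-envelope N for --n-cpu-moe, from the estimates above.
      MOE_LAYER_GB = 0.53   # VRAM per MoE layer kept on GPU
      FIXED_GB = 1.9        # non-MoE weights
      HEADROOM_GB = 2.5     # slack for KV cache, buffers, driver overhead
      TOTAL_LAYERS = 40

      def recommend_n_cpu_moe(vram_gb):
          # N = number of layers whose experts stay on CPU, chosen so the
          # remaining layers fit in the VRAM budget with some headroom.
          budget = vram_gb - FIXED_GB - HEADROOM_GB
          gpu_layers = max(0, min(TOTAL_LAYERS, int(budget / MOE_LAYER_GB)))
          return TOTAL_LAYERS - gpu_layers

      for vram in (8, 12, 16, 24):
          print(vram, "GB -> N =", recommend_n_cpu_moe(vram))
      ```

      At 16 GB this lands on N=19, close to the hand-tuned N=20; the table's 24 GB row is more conservative than the raw arithmetic suggests.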

      TL;DR

      Replace --cpu-moe with --n-cpu-moe 20, add -np 1, and you get 79 t/s + 128K context on a 5070 Ti. The 9800X3D’s V-Cache carries the CPU side effortlessly.

      And Claude Opus 4.7 on the $20 Pro sub is genuinely good enough now to run this kind of hardware-tuning loop end-to-end — launch servers in background, parse logs, iterate — without hand-holding. Kind of wild.

      Happy to test other configs if anyone wants comparisons.

      **EDIT — Thanks to some great comments, the setup got better. Updated findings:**

      1. --fit on --fit-ctx 128000 --fit-target 512 > manual --n-cpu-moe 20

      Shoutout to the commenter who recommended the “fit-triple”. It auto-probes VRAM, picks N for you (landed on N=19 here), and adapts if drivers steal VRAM. Slightly faster than my hand-tuned N=20 and zero brain power to maintain. Caveat: bare --fit on silently drops ctx to 4K — always pair it with --fit-ctx.

      2. My original prefill numbers were way too low

      A commenter correctly flagged that ~135 t/s prefill is nonsense for a 5070 Ti. They were right — that was server-side timing including first-token latency. Re-ran with llama-bench (3 reps, same config):

      Test | t/s
      ---|---
      pp512 | 1182
      pp2048 | 1644
      tg128 | 91.5

      So real prefill is ~1.2–1.6k t/s, not 135.

      Final “best command” for 16 GB VRAM + 32 GB RAM:

      llama-server.exe ^
        -m "Qwen3.6-35B-A3B-UD-Q4_K_M.gguf" ^
        --fit on ^
        --fit-ctx 128000 ^
        --fit-target 512 ^
        -np 1 ^
        -fa on ^
        -ctk q8_0 -ctv q8_0 ^
        --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 ^
        --presence-penalty 0.0 --repeat-penalty 1.0 ^
        --reasoning-budget -1 ^
        --host 0.0.0.0 --port 8033
      

      Keep the comments coming, every round makes this faster. :D


      EDIT 2 — Another commenter’s tip got me one more layer on the GPU:

      Dropping --fit-target from 512 → 256 squeezes one extra MoE layer onto the GPU (N=18 instead of 19). The commenter also suggested adding --mlock alongside --no-mmap to lock RAM pages against swap.

      Benched both changes vs. the previous EDIT’s config (fit-target 512 + no-mmap):

      Config | pp512 | pp2048 | tg128
      ---|---|---|---
      fit-target 512 + no-mmap | 2769 | 2729 | 91.5
      fit-target 256 + no-mmap + mlock | 2743 | 2724 | 96.3

      +5% generation, prefill unchanged. Costs nothing — just a smaller VRAM headroom and explicit RAM locking.

      Updated final command:

      llama-server.exe ^
        -m "Qwen3.6-35B-A3B-UD-Q4_K_M.gguf" ^
        --fit on ^
        --fit-ctx 128000 ^
        --fit-target 256 ^
        -np 1 ^
        -fa on ^
        --no-mmap ^
        --mlock ^
        -ctk q8_0 -ctv q8_0 ^
        --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 ^
        --presence-penalty 0.0 --repeat-penalty 1.0 ^
        --reasoning-budget -1 ^
        --host 0.0.0.0 --port 8033
      


      EDIT 3 — Two more community tips landed big wins:

      1. -ub 2048 (ubatch size) = +59% prompt-processing at 2K tokens

      Default -ub is 512. Bumping it to 2048 (and matching -b 2048) lets the GPU process more tokens in parallel per prefill step. Benched (5 reps each):

      ubatch | pp512 | pp2048 | pp4096 | tg128
      ---|---|---|---|---
      512 (default) | 2739 | 2778 | — | 98.7
      1024 | 2689 | 3689 | — | 100.5
      2048 | 2771 | 4453 | 4417 | 98.4
      4096 | 2736 | 4427 | 4866 | 100.4

      2048 is the sweet spot — 59% faster at 2K-prompts, gen untouched. 4096 only helps beyond 2K-prompts (compute buffer saturates otherwise) and eats more VRAM.

      2. --chat-template-kwargs "{\"preserve_thinking\": true}" for agentic workflows

      Qwen3.6-specific chat template parameter. Default only keeps the latest user turn’s thinking; preserve_thinking: true carries thinking traces from all historical messages forward. Turns out Qwen3.6 was specifically trained for this behavior. Benefits:

      • Better decision consistency across tool-calling turns
      • Fewer redundant re-reasonings → lower token consumption in long agent sessions
      • Better KV-cache reuse across turns

      Final final command:

      llama-server.exe ^
        -m "Qwen3.6-35B-A3B-UD-Q4_K_M.gguf" ^
        --fit on ^
        --fit-ctx 128000 ^
        --fit-target 256 ^
        -np 1 ^
        -fa on ^
        --no-mmap ^
        --mlock ^
        -b 2048 ^
        -ub 2048 ^
        -ctk q8_0 -ctv q8_0 ^
        --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 ^
        --presence-penalty 0.0 --repeat-penalty 1.0 ^
        --reasoning-budget -1 ^
        --chat-template-kwargs "{\"preserve_thinking\": true}" ^
        --host 0.0.0.0 --port 8033
      

      Total benched throughput on 5070 Ti 16 GB + 9800X3D + 32 GB DDR5-6000:

      • pp512 ~2771 t/s
      • pp2048 ~4453 t/s
      • pp4096 ~4417 t/s (bump -ub to 4096 for +10% here if you do long prompts)
      • tg128 ~98 t/s
      • Context: 128K

      This community keeps delivering. Thank you.

      submitted by /u/marlang
      [link] [comments]

    24. 🔗 r/LocalLLaMA qwen3.6 performance jump is real, just make sure you have it properly configured rss

      I've been running workloads that I typically only trust Opus and Codex with, and I can confirm 3.6 is really capable. Of course, it's not at the level of those models, but it's definitely crossing the barrier of usefulness. Plus the speed is amazing: running this on an M5 Max 128GB at 8-bit, I get 3K PP and 100 TG on oMLX + Pi.dev. Just ensure you have preserve_thinking turned on. Check out details here.

      submitted by /u/onil_gova
      [link] [comments]

    25. 🔗 r/reverseengineering I built a tool to better understand HTTP traffic — would love honest feedback rss
    26. 🔗 sacha chua :: living an awesome life The week of April 6 to 12 rss

      Monday, April 6

      00:00:00 I did a livestream while I categorized the links in my Emacs newsletter. There was a problem because my programs fought over the audio device. I switched from categorizing by voice command to categorizing by keyboard shortcut, but my voice activity detection software was still listening to my microphone. When a commenter let me know about the problem, I quit the program, which fixed it. I also used my epwgraph tool to show the audio connections, which interested a few people.

      00:00:57 A commenter asked me about my process for learning French in Emacs. I showed my workflow for listening to my pronunciation attempts.

      00:01:16 I tried setting which-key-display-prefix to top to display the target type near the cursor. I think it needs a small patch.

      00:01:34 My daughter and I started a new Cobblemon instance in Minecraft. We know more about Pokémon now, so it was easier to understand than a few years ago. The first modpack we tried, BigChadPlus, was too complicated for us. We switched to Cobblemon Official. We had fun working together.

      00:02:10 For the first time, we made Chinese soup dumplings like the ones my daughter had tried at the Chinese bakery last week. We used gyoza wrappers to save time; I just rolled them out to make them flatter. It was really delicious.

      Tuesday, April 7

      I discovered that the gotosocial log files were eating a lot of disk space, so I cleared them.

      I called my mother to update her on my sister's health.

      My daughter and I made egg tarts. The store-bought tart shells weren't as good as the ones we used to make, but the nearby supermarket no longer carries the aluminum tart molds. They do the job.

      My daughter asked me to read a book in Tagalog aloud together.

      We tried the Cobbleverse modpack, but we decided to switch back to the Cobblemon Official modpack because my daughter prefers its more vanilla feel.

      At bedtime, my daughter and I talked about AI. It seems her teacher reminded the class not to use AI to do their homework. She does her homework herself (when she does it) because she knows the point of homework is not about the teacher. She does enjoy using AI to generate interactive stories outside of school.

      Wednesday, April 8

      I did consulting work. The team is updating the system this weekend, so we need to check the snippets that probably use the components that changed.

      I took part in the OrgMeetup. I haven't made progress on my patch for the "sentence-at-point" operation because my attention has been elsewhere.

      I took my daughter and her friend to the park to play together for an hour, which gives her father time to make dinner and plan activities for his Scout meeting.

      My daughter said school would have a substitute teacher, so she negotiated an alternative. She found the class too slow and her classmates were fooling around, so for now it's probably a waste of time.

      I moved the Cobblemon world from my computer to our Minecraft server so my daughter can play there independently. I also set up backups. In that world, we went to a village and settled there, which made our adventure much easier thanks to the Pokémon healing machine that restores full health every time you use it. My next step is to level up my Pokémon.

      Thursday, April 9

      I did a livestream while I tweaked my Emacs configuration. I was working on splitting my functions out to help other people copy them, and I rediscovered lots of forgotten functions along the way.

      School had a substitute as expected. Fortunately, my daughter had a doctor's appointment, so she had a completely legitimate excuse to skip class, at least in the morning. I told the doctor about the recent symptoms and the Holter monitoring my daughter had just finished. The doctor recommended drinking more water and eating kiwis for the constipation.

      For putting up with exams like the blood pressure check patiently, I bought instant noodles for my daughter and me. We added fish cakes, seaweed, and bok choy to round out the soup.

      It was too nice out to stay inside, so in the afternoon my husband, my daughter, and I went to KidSpark and the bike park. The school's attendance system lets you excuse an absence because of weather, but I don't think that's what the administration really had in mind… But it was supposed to rain the next day, and we knew she would have a hard time concentrating anyway. I was delighted that we went out.

      At the pretend supermarket at KidSpark, my daughter and I played our usual game where the customer confidently declares "I would like to buy an apple" while holding up a different product, like a pear. The shopkeeper says "No, that's not an apple, that's a pear. An apple is red." Then the customer looks for another product that meets the condition of being red without being an apple, like a strawberry. She presents it with the triumphant declaration "It's an apple!", then the shopkeeper offers more corrections, the customer finds more products, and so on. To my great surprise, we could play this game with lots of words in French, at least on my side. It's good practice for using adjectives.

      She was also curious about the human body model she assembled. She put in the stomach, the intestines, a kidney, the liver, the heart, and the lungs. She also played Pokémon veterinarian.

      After playing at KidSpark, my daughter wanted to go to the park with the big asparagus, St. James Park. It was a ten-minute bike ride from KidSpark. She loved going down the very tall slide, which she did many times.

      Once home, we made burgers and fries for another picnic on the wooden deck.

      Friday, April 10

      School has another substitute today. I don't know why the school has substitutes so often. Maybe it's normal? Two years ago, her teacher was sick and was even in the hospital. The year before that, her first teacher resigned to take care of her parents, and her teachers were often sick. In any case, my daughter prefers working alone or with me to putting up with the technology problems and her classmates' noise. She finished all the math tasks, which was very boring because the tasks were too easy. If she also works on her reading homework at some point, I think that's completely fine. I would like her to take responsibility for her own education, which also means I have to let her decide how much effort she wants to put into it.

      I told her I have an appointment with my tutor in the afternoon, and other than that I'm generally available. The forecast says rain, but maybe the afternoon will just be cloudy. I wonder if her friends will be free to play.

      In my last French class, my tutor said my pronunciation of tongue twisters was almost acceptable. I wonder what the best way to improve would be. My attention has been pulled toward Emacs lately, but I'm putting time back into writing my journal in French. Still, I haven't spent time watching shows or reading articles or stories in French, which I need to do to grow my vocabulary. My first goal is to help my daughter learn the language, which is going well. She has fun using French words, and we sing K-Pop Demon Hunters and Pokémon songs in French. I still enjoy writing my journal. Maybe I can go back to recording my journal out loud to practice pronunciation on my own, with my tutor checking pronunciation and word choice. We'll see!

      My daughter was upset with me because she felt I had forgotten her.

      Saturday, April 11

      My daughter skipped her jewelry class because she felt I was rushing her. She sat in her room. I went to check on her, then I gardened. Eventually, my daughter came back and joined me. We amended the soil with manure and planted radishes, lettuce, and spinach. This spring I didn't start tomato seedlings. Instead, I'll buy seedlings at the store when it gets warmer.

      I took my daughter to Biidaasige Park to play on the ziplines. She had fun, but she didn't like it when other kids stared at her eye. She found her sunglasses handy.

      Sunday, April 12

      We biked to the Big Carrot, but the miso soup we were looking for wasn't there.

      My daughter and I made a menu of activities, like playing with shaving cream. She likes sensory play.

      My daughter and I played Stardew Valley. We started a new farm because our old farm was too complicated. It's been a long time since we last played. We have to relearn everything.

      You can e-mail me at sacha@sachachua.com.

    27. 🔗 Filip Filmar Synod: Paxos agent rss

      Find it at: https://github.com/filmil/synod

      Synod is a distributed Paxos coordination agent implemented in Go. It manages a highly available, synchronized key-value store across a network of peers using the Paxos consensus algorithm. It allows multiple dynamically joining network nodes to agree on a shared state, ensuring fault tolerance and consistency across the cell.

      Quickstart

      This quickstart shows how to download the repository and quickly start 3 synod agents which talk to each other and are already set up to work properly.

    28. 🔗 exe.dev Some secret management belongs in your HTTP proxy rss

      Secrets management is a pain.

      Larger organizations commit to centralizing secrets management in a service. When done well, these services solve a lot of issues around secrets, at the cost of creating a lot of ops overhead (which is why they are limited to larger organizations) and engineering complexity. Smaller organizations have, until now, lived with the pain. But the pain has become far more significant with agents.

      Agents fuss when you directly hand them an API key. It usually works, and if you make it a rapidly revocable key that you disable after the session, you mitigate the risks. But some models (you know which ones) freak out on seeing the secret, and refuse to do anything now that the key is “exposed.” Models that are not so ridiculous about API keys will write the key to inter-session memory, pulling it out in another session and burning precious context window trying to use a revoked key. All of which assumes you go to the effort of constantly generating keys.

      Like so many problems getting attention right now, this looks like a problem created by agents. But the problem was always there. API keys are convenient but too powerful. Holding one does not just grant you the ability to make API calls, it grants you the power to give others the ability to make API calls (by sending them the key). No software I write in production that has an /etc/defaults file full of env vars containing API keys needs that power. We have always just been careful about how we write programs to not exfil keys. Never careful enough, because many security flaws in such an app now let the attacker walk off the keys and give them a window to do nastiness from wherever they like, until we realize and start manually rotating them.

      Attempts to automate key rotation to close this hole have mixed success. Our industry does use OAuth in some places, and sometimes OAuth is configured to rotate keys. But services still ship API keys, because they are easy for users. (OAuth, while simple in theory, is always painfully complex to use.) Some services give us the worst of all worlds, like GitHub encouraging personal access tokens with 90-day expiry windows. Just long enough for you to forget about them and your internal service to break mysteriously while you are on vacation.

      Inter-server OAuth as commonly practiced today also does not help with agents: credential creation usually requires human intervention via a web-browser session, deliberately made hard to automate. I do not think I have ever used a service that gave me an OAUTH_CLIENT_SECRET via an API. So it's fine (if complex and painful) for traditional services, but your agent is not doing that.

      So in practice, what can we do today to solve this?

      We can use an HTTP proxy that injects headers.

      Many secrets are HTTP headers

      Many APIs talk HTTP, and they usually authenticate with an HTTP header: either a basic auth header or a custom one of their own. Here, for example, is Stripe's:

      curl https://api.stripe.com/v1/customers \
        -u "sk_test_BQokikJOvBiI2HlWgH4olfQ2:" \
        -d "name=Jenny Rosen" \
        --data-urlencode "email=jennyrosen@example.com"
      
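      A note on the `-u` flag: it is shorthand for an `Authorization: Basic` header whose value is the base64 of `key:` (key as username, empty password). A minimal Python sketch of what curl actually puts on the wire:

```python
import base64

def basic_auth_header(api_key: str) -> str:
    """Build the Authorization value that curl's -u "key:" sends:
    HTTP Basic auth with the key as username and an empty password."""
    token = base64.b64encode(f"{api_key}:".encode("ascii")).decode("ascii")
    return f"Basic {token}"
```

      The secret is therefore just a request header, which is why any intermediary that can add headers is in a position to hold the key for you.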

      So instead of an /etc/defaults file with your sk_test key, if you have an HTTP proxy managing secrets you can do this:

      curl https://stripe.int.exe.xyz/v1/customers \
        -d "name=Jenny Rosen" \
        --data-urlencode "email=jennyrosen@example.com"
      

      Where the server in the URL has been changed to another internal service you run. And the key has been removed! What grants your server, and your agents, the ability to use the secret is their ability to reach your secrets HTTP proxy.
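      Conceptually the proxy is just a lookup plus a header injection. A minimal Python sketch, with a hypothetical routing table (the hostnames, header name, and key value are illustrative, not exe.dev's implementation; a real proxy also needs TLS, streaming bodies, and an allowlist):

```python
# Hypothetical routing table: internal hostname -> (upstream host, header name, secret).
# The secret value here is an illustrative placeholder, not a real key.
ROUTES = {
    "stripe.int.exe.xyz": ("api.stripe.com", "Authorization", "Bearer sk_live_example"),
}

def rewrite_request(host: str, headers: dict) -> tuple[str, dict]:
    """Map an internal hostname to its upstream and inject the secret header.

    The caller never sees the secret: being able to reach the proxy is
    the capability that grants use of the key."""
    upstream, header_name, secret = ROUTES[host]
    out = dict(headers)        # do not mutate the caller's headers
    out[header_name] = secret  # inject the secret server-side
    out["Host"] = upstream     # point the request at the real API host
    return upstream, out
```
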

      Secrets HTTP proxy topology

      This covers, amazingly, almost all secrets.

      A proxy like this is part of machinery provided by complex secrets management products. What is interesting is that it is one of the easier parts of secrets management, and delivers a large amount of the value.

      Integrations in exe.dev

      The final piece of the puzzle is: why do you need to write and manage an HTTP proxy? Your cloud should do it for you. So we built Integrations into exe.dev to do this. Assign an integration to a tag, tag the VMs you want to have access, done. Clone your VM, you get a fresh space to work with agents and your integrations are automatically present.

      A screenshot of setting up an HTTP integration in exe.dev

      For GitHub, we did something special, and built a GitHub App to manage the OAuth for you. No need for manual rotation of keys. We intend to build a lot more integrations soon.

  4. April 17, 2026
    1. ๐Ÿ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-17 rss

      IDA Plugin Updates on 2026-04-17

      New Releases:

      Activity:

      • AIDA
        • 134cc06a: Avoid Hex-Rays cfunc cache bloat when running exporter
        • e95dcc55: Make export cancellation reliable on large binaries
        • 4810a28a: Add per-function file export and navigable index
        • dfd24dc5: Fix NameError in batch rename error handler
        • 06c52901: Apply ruff linting
        • d7acff38: Add contributor tooling: ruff config and CONTRIBUTING guide
        • 1d233701: Optimize rename function
        • be59bb8d: Bump default Anthropic model to Claude Opus 4.7
      • capa
        • 74276c8c: Merge pull request #3006 from mandiant/dependabot/pip/pydantic-2.13.0
      • command_palette
        • 164eb09d: Update version to 2.0.1 and enhance focus handling in ActionPaletteForm
      • ida-domain
      • ida-hcli
        • 06533055: disambiguate colliding plugin names via repository URLs
      • IDA-MCP
        • 04d9621a: Flatten panel chrome across the IDE
        • 106f6daf: Polish FS workspace with minimalist styling
        • cc584cfa: Refine IDE gateway lifecycle and platform detection
        • 01fd5a64: Restructure ida_mcp as bundled resource, remove IDE Python dependencyโ€ฆ
      • IDAssist
        • a321ad1d: Use stored IDB SHA for detached database queries
    2. ๐Ÿ”— Simon Willison Join us at PyCon US 2026 in Long Beach - we have new AI and security tracks this year rss

      This year's PyCon US is coming up next month from May 13th to May 19th, with the core conference talks from Friday 15th to Sunday 17th and tutorial and sprint days either side. It's in Long Beach, California this year, the first time PyCon US has come to the West Coast since Portland, Oregon in 2017 and the first time in California since Santa Clara in 2013.

      If you're based in California this is a great opportunity to catch up with the Python community, meet a whole lot of interesting people and learn a ton of interesting things.

      In addition to regular PyCon programming we have two new dedicated tracks at the conference this year: an AI track on Friday and a Security track on Saturday.

      The AI program was put together by track chairs Silona Bonewald (CitableAI) and Zac Hatfield-Dodds (Anthropic). I'll be an in-the-room chair this year, introducing speakers and helping everything run as smoothly as possible.

      Here's the AI track schedule in full:

      (And here's how I scraped that as a Markdown list from the schedule page using Claude Code and Rodney.)

      You should come to PyCon US!

      I've been going to PyCon for over twenty years now - I first went back in 2005. It's one of my all-time favourite conference series. Even as it's grown to more than 2,000 attendees PyCon US has remained a heavily community-focused conference - it's the least corporate feeling large event I've ever attended.

      The talks are always great, but it's the add-ons around the talks that really make it work for me. The lightning talks slots are some of the most heavily attended sessions. The PyLadies auction is always deeply entertaining. The sprints are an incredible opportunity to contribute directly to projects that you use, coached by their maintainers.

      In addition to scheduled talks, the event has open spaces, where anyone can reserve space for a conversation about a topic - effectively PyCon's version of an unconference. I plan to spend a lot of my time in the open spaces this year - I'm hoping to join or instigate sessions about both Datasette and agentic engineering.

      I'm on the board of the Python Software Foundation, and PyCon US remains one of our most important responsibilities - in the past it's been a key source of funding for the organization, but it's also core to our mission to "promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers".

      If you do come to Long Beach, we'd really appreciate it if you could book accommodation in the official hotel block, for reasons outlined in this post on the PSF blog.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    3. ๐Ÿ”— badlogic/pi-mono v0.67.68 release

      No content.

    4. ๐Ÿ”— r/Leeds LDS on 3DS 2 rss
    5. ๐Ÿ”— r/Yorkshire Ribblehead with Ingleborough behind rss

      Ribblehead with Ingleborough behind | A train pulling into the station at Ribblehead, with Ingleborough in the background. One of my favourite places in the world! submitted by /u/No-Awareness-5419
      [link] [comments]
      ---|---

    6. ๐Ÿ”— r/Leeds Survey about Leeds tram for Salford University rss

      I'm doing research for a university project about the proposed mass transit tram system in Leeds. The questions ask what part of Leeds you are from, your current transport satisfaction, and your opinions on the tram. Any help filling in the short survey below would be appreciated: https://docs.google.com/forms/d/e/1FAIpQLSehwcf5oqa4OUscKJZmELppkJrSwXLaqA-Z2WXqZML4cVVJ9A/viewform?usp=publish-editor

      submitted by /u/AltruisticCup4783
      [link] [comments]

    7. ๐Ÿ”— badlogic/pi-mono v0.67.67 release

      New Features

      • Bedrock sessions can now authenticate with AWS_BEARER_TOKEN_BEDROCK, enabling Converse API access without local SigV4 credentials. See docs/providers.md#amazon-bedrock.

      Added

      • Added Bedrock bearer-token authentication support via AWS_BEARER_TOKEN_BEDROCK, enabling coding-agent sessions to use Bedrock Converse without local SigV4 credentials (#3125 by @wirjo)

      Fixed

      • Fixed /scoped-models Alt+Up/Down to stay a no-op in the implicit all enabled state instead of materializing a full explicit enabled-model list and marking the selector dirty (#3331)
      • Fixed Mistral Small 4 default thinking requests to use the model's supported reasoning control, avoiding 400 errors when starting sessions on mistral-small-2603 and mistral-small-latest (#3338)
      • Fixed Qwen chat-template thinking replay to preserve prior thinking across turns, so affected OpenAI-compatible models keep multi-turn tool-call arguments instead of degrading to empty {} payloads (#3325)
      • Fixed exported HTML transcripts so text selection no longer triggers click-based expand/collapse toggles (#3332 by @xu0o0)
      • Fixed flaky git package update notifications by waiting for captured git command stdio to fully drain before comparing local and remote commit SHAs (#3027)
      • Fixed system prompt dates to use a stable YYYY-MM-DD format instead of locale-dependent output, keeping prompts deterministic across runtimes and locales (#2814)
      • Fixed auto-retry transient error detection to treat Network connection lost. as retryable, so dropped provider connections retry instead of terminating the agent (#3317)
      • Fixed compact interactive extension startup summaries to disambiguate package extensions and repeated local index.ts entries by using package-aware labels and the minimal parent path needed to make local entries unique (#3308)
      • Fixed git package dependency installation to use production installs (npm install --omit=dev) during both install and update flows, so extension runtime dependencies must come from dependencies and not devDependencies (#3009)
      • Fixed tool_result / afterToolCall extension handling for error results by forwarding details and isError overrides through AgentSession instead of dropping them when isError was already true (#3051)
      • Fixed missing root exports for RpcClient and RPC protocol types from @mariozechner/pi-coding-agent, so ESM consumers can import them from the main package entrypoint (#3275)
      • Fixed OpenAI Codex service-tier cost accounting to trust the explicitly requested tier when the API echoes the default tier in responses, keeping session cost displays aligned with the selected tier (#3307 by @markusylisiurunen)
      • Fixed parallel tool-call finalization to convert afterToolCall hook throws into error tool results instead of aborting the remaining tool batch (#3084)
      • Fixed Bun binary asset path resolution to honor PI_PACKAGE_DIR for built-in themes, HTML export templates, and interactive bundled assets (#3074)
      • Fixed user-message turn spacing in interactive mode by restoring an inter-message spacer before user turns (except the first user message), preventing assistant and user blocks from rendering flush together.
      • Fixed interactive /import handling to support quoted JSONL paths with spaces, route missing JSONL files through the non-fatal SessionImportFileNotFoundError path, and document the importFromJsonl() exceptions (SessionImportFileNotFoundError, MissingSessionCwdError).
    8. ๐Ÿ”— r/york Largest fossilised human poo - here in York! rss

      Largest fossilised human poo - here in York! | Not a title I'd ever expect to type, but visited the Jorvik Centre today. Apparently this is the largest fossilised human poo ever discovered. submitted by /u/York_shireman
      [link] [comments]
      ---|---

    9. ๐Ÿ”— r/Leeds Goin out tonight (solo tips?) rss

      Hi people

      23y/o Mexican/German guy visiting Leeds over the weekend

      Lookin for pubs/clubs that'll be worth a look today or tomorrow!

      Also down to join smth!

      Thanks for the tips!

      submitted by /u/Chilly_Bearrr
      [link] [comments]

    10. 🔗 r/reverseengineering I need help: looking for a reverse engineering expert who can help me play a game again whose servers were shut down rss
    11. ๐Ÿ”— r/Yorkshire Had a day on the drays delivering beer around the Yorkshire Dales. rss
    12. ๐Ÿ”— r/york New artwork celebrates history of River Foss rss

      New artwork celebrates history of River Foss | submitted by /u/centreback_
      [link] [comments]
      ---|---

    13. ๐Ÿ”— r/LocalLLaMA Qwen3.6 GGUF Benchmarks rss

      Qwen3.6 GGUF Benchmarks | Hey guys, we ran Qwen3.6-35B-A3B GGUF KLD performance benchmarks to help you choose the best quant. Unsloth quants have the best KLD vs disk space 21/22 times on the pareto frontier. GGUFs: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

      We also want to clear up a few misunderstandings around our GGUF updates. Some people have said we re-upload often because of our own mistakes. We understand the concern, but the reality is that we tend to publicize issues quickly and tell people to update. In roughly 95% of cases, the root causes were out of our hands - we just try to be transparent and keep the community informed. A few examples:

      Gemma 4 was re-uploaded 4 times: Three re-uploads were due to about 10 to 20 llama.cpp bug fixes, some of which we helped investigate and contribute fixes for as well. The fourth was an official Gemma chat template improvement from Google. Every provider had to update, not just us. See the llama.cpp PRs, which show ~30 fixes/improvements for Gemma-4.

      MiniMax 2.7 NaNs: We found NaNs in 38% of Bartowski's quants (10/26) and 22% of ours (5/23). We identified a fix and already patched ours - see https://www.reddit.com/r/LocalLLaMA/comments/1slk4di/minimax_m27_gguf_investigation_fixes_benchmarks/ Bartowski has not patched yet, but is actively working on it.

      Qwen3.5 SSM issues: We shared 7TB of research artifacts showing which layers should not be quantized. The issue was not that providers' quants were broken, but that they were not optimal - mainly around the ssm_out and ssm_* tensors. We have since improved ours and now lead on KLD vs. disk space for Qwen3.5 as well. Most if not all quant providers then take our findings and update their quants. We talked about our analysis and research at https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/ and https://www.reddit.com/r/LocalLLaMA/comments/1rlkptk/final_qwen35_unsloth_gguf_update/

      CUDA 13.2 is actually broken: This causes some low-bit quants on all models to produce gibberish. Some people have dismissed it as not being an issue, but NVIDIA has confirmed it's a problem and a fix is coming in CUDA 13.3. See Unsloth issue 4849 and llama.cpp issues 21255 and 21371. As a temporary workaround, use CUDA 13.1. See https://github.com/ggml-org/llama.cpp/issues/21255#issuecomment-4248403175; quote from https://github.com/johnnynunez:

      The bug was found and fixed in cuda 13.3

      Thanks again for all the support - we really appreciate it. Hope you all have a great Friday and weekend. More benchmarks and investigation details here: https://unsloth.ai/docs/models/qwen3.6#unsloth-gguf-benchmarks submitted by /u/danielhanchen
      [link] [comments]
      ---|---

    14. ๐Ÿ”— r/reverseengineering Reverse-engineering of Internet Backgammon from Windows 7, with parts of how ZPA (Zone Protocol), the MSN Gaming Zone protocol worked rss
    15. ๐Ÿ”— r/LocalLLaMA Qwen 3.6 is the first local model that actually feels worth the effort for me rss

      I spent some time yesterday after work trying out the new qwen3.6-35b-a3b model, and at least for me it's the first time that I actually felt that a local model wasn't more of a pain to use than it was worth.

      I've been using LLMs in my personal/throwaway projects for a few months, for the kind of code that I don't feel any passion writing (most UI XML in Avalonia, embedded systems C++), and I used to have Sonnet and Opus for free thanks to GitHub's student program, but they cancelled that. I've been trying out local models for quite a while too, but up until this point they were either too dumb to get the job done, or they could complete it but I would spend so much time fixing/tweaking/formatting/refactoring the code that I might as well have just done it myself.

      Qwen3.6 seems to have finally changed that, at least on my system and projects. Running on a 5090 + 4090 I can load the Q8 model with full 260k context, getting around 170 tokens per second also makes it one of the fastest models I've tried. And unlike all other models I've tried recently including Gemma 4, it can actually complete tasks and only requires minor guidance or corrections at the end. 9 times out of 10, simply asking it to review its own changes once it is 'done' is enough for it to catch and correct anything that was wrong.

      I'm pretty impressed and it's really cool to see local models finally start to get to this point. It gives me hope for a future where this technology is not limited to massive data centers and subscription services, but rather being optimized to the point where even mid-range computers can take advantage of it.

      submitted by /u/Epicguru
      [link] [comments]

    16. ๐Ÿ”— r/LocalLLaMA Qwen3.6. This is it. rss

      Qwen3.6. This is it. | https://preview.redd.it/nxn2rr15vqvg1.png?width=1920&format=png&auto=webp&s=8ec85d90b1286a6e7813c91a0a83c748e94ca849 I gave it a task to build a tower defense game: "use screenshots from the installed MCP to confirm your build." My God, it's actually doing it. It's now testing the upgrade feature.
      It noted the canvas wasn't rendering at some point, saw it, and fixed it.
      It noted its own bug in wave completions and is actually fixing it... I am blown away...
      I can't imagine what the Qwen Coder that's following will be able to do.
      What a time we're in.

      llama-server -m "{PATH_TO_MODEL}\Qwen3.6\Qwen3.6-35B-A3B-UD-Q6_K_XL.gguf" --mmproj "{PATH_TO_MODEL}\Qwen3.6\mmproj-F16.gguf" --chat-template-file "{PATH_TO_MODEL}\chat_template\chat_template.jinja" -a "Qwen3.5-27B" --cpu-moe -c 120384 --host 0.0.0.0 --port 8084 --reasoning-budget -1 --top-k 20 --top-p 0.95 --min-p 0 --repeat-penalty 1.0 --presence-penalty 1.5 -fa on --temp 0.7 --no-mmap --no-mmproj-offload --ctx-checkpoints 5
      

      EDIT: It's been pointed out that open code still has my 27B model alias.
      I'm lazy, I didn't even bother changing the model name. Here are my llama.cpp server configs; I'm so excited I tested and came here right away. submitted by /u/Local-Cardiologist-5
      [link] [comments]
      ---|---

    17. ๐Ÿ”— sacha chua :: living an awesome life Create a Google Calendar event from an Org Mode timestamp rss

      Time zones are hard, so I let calendaring systems take care of the conversion and confirmation. I've been using Google Calendar because it synchronizes with my phone and people know what to do with the event invite. Org Mode has iCalendar export, but I sometimes have a hard time getting .ics files into Google Calendar on my laptop, so I might as well just create the calendar entry in Google Calendar directly. Well. Emacs is a lot more fun than Google Calendar, so I'd rather create the calendar entry from Emacs and put it into Google Calendar.

      This function lets me start from a timestamp like [2026-04-24 Fri 10:30] (inserted with C-u C-c C-!, or org-timestamp-inactive) and create an event based on a template.

      (defvar sacha-time-zone "America/Toronto" "Full name of time zone.")
      
      ;;;###autoload
      (defun sacha-emacs-chat-schedule (&optional time)
        "Create a Google Calendar invite based on TIME or the Org timestamp at point."
        (interactive (list (sacha-org-time-at-point)))
        (browse-url
         (format
          "https://calendar.google.com/calendar/render?action=TEMPLATE&text=%s&details=%s&dates=%s&ctz=%s"
          (url-hexify-string sacha-emacs-chat-title)
          (url-hexify-string sacha-emacs-chat-description)
          (format-time-string
           "%Y%m%dT%H%M%S" time)
          sacha-time-zone)))
      
      (defvar sacha-emacs-chat-title "Emacs Chat" "Title of calendar entry.")
      (defvar sacha-emacs-chat-description
        "All right, let's try this! =) See the calendar invite for the Google Meet link.
      
      Objective: Share cool stuff about Emacs workflows that's not obvious from reading configs, and have fun chatting about Emacs
      
      Some ideas for things to talk about:
      - Which keyboard shortcuts or combinations of functions work really well for you?
      - What's something you love about your setup?
      - What are you looking forward to tweaking next?
      
      Let me know if you want to do it on stream (more people can ask questions) or off stream (we can clean up the video in case there are hiccups). Also, please feel free to send me links to things you'd like me to read ahead of time, like your config!"
        "Description.")
      

      It uses this function to convert the timestamp at point:

      (defun sacha-org-time-at-point ()
        "Return Emacs time object for timestamp at point."
        (org-timestamp-to-time (org-timestamp-from-string (org-element-property :raw-value (org-element-context)))))
      
      

      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.

    18. ๐Ÿ”— HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [command_palette](https://github.com/milankovo/command_palette): 2.0.1
      
    19. ๐Ÿ”— r/york Carboot rss

      Anyone know of a good carboot for midweek days rather than a Saturday?

      Thanks

      submitted by /u/Total_Bed_3882
      [link] [comments]

    20. ๐Ÿ”— r/Yorkshire Love all the Wynds and narrow streets of Richmond rss
    21. 🔗 r/wiesbaden What's going on with the clock at the HBF? rss

      Hello everyone,

      I've been walking past the HBF regularly since last summer, and you can see three clocks on the tower, all three showing a different time.

      I just wanted to know: what's going on there?

      They've been running wrong for so long.

      submitted by /u/Jo96-
      [link] [comments]

    22. ๐Ÿ”— roboflow/supervision [RC] supervision-0.28 release

      No content.

    23. ๐Ÿ”— r/Leeds Record Store Day rss

      Hi all!

      I'm taking part tomorrow for the first time in a couple of years. I previously did The Vinyl Whistle but thinking about doing Jumbo Records this time as I promised a friend overseas that I'd try for something for them and that's the only one in Leeds stocking it.

      For anyone that has been before, what time did you start queuing at Jumbo and what was the state of the queue when you arrived? I got to TVW at about 6am last time and was probably 10th in the queue. Don't mind getting there early as it's a fun if tiring experience, but always want to maximise time in bed.

      submitted by /u/sprockethole
      [link] [comments]

    24. ๐Ÿ”— HazAT/pi-interactive-subagents v3.0.0 โ€” the w-winter release release

      This is the w-winter release ๐ŸŽ‰ โ€” big thanks to @w-winter for contributing the two headline features in their fork, which we pulled upstream in this release. See Co-authored-by trailers on the relevant commits.

      Install:

      pi install git:github.com/HazAT/pi-interactive-subagents@v3.0.0
      

      Or latest:

      pi install git:github.com/HazAT/pi-interactive-subagents
      

      โœจ Features

      • disable-model-invocation frontmatter โ€” hide an agent from subagents_list so the model's agent catalog stays focused, while keeping the agent fully loadable by explicit name via subagent({ agent: "name", ... }). Precedence still runs before visibility filtering, so a project-local hidden agent correctly shadows a visible lower-precedence one. (w-winter)
      • session-mode frontmatter โ€” choose how a subagent's session is seeded: standalone (default, fresh), lineage-only (fresh blank child session but with parentSession linkage in the header, no copied turns), or fork (full-context fork, the existing behavior). subagent({ fork: true }) still forces fork mode for that specific spawn. (w-winter)

      โ™ป๏ธ Refactoring

      • Removed set_tab_title — the tool wasn't pulling its weight. Agents paid prompt tokens to describe it and spent real tool calls updating the mux tab title on every phase transition, for a purely cosmetic effect. Tool registration, injected prompt instruction, dead muxUnavailableResult branch, and all references in the /plan skill / README are gone.
      • Removed session-artifacts extension — write_artifact / read_artifact caused more friction than they resolved. Agents have write and read already; offering a second file-I/O pathway forced an inconsistent per-call decision ("is this a real file or an artifact?"), often defaulting to write_artifact for scratch content nobody would read back. The one genuine benefit (cross-session artifact discovery) was rarely load-bearing — orchestrators typically pass explicit paths in task prompts anyway. ~250 lines of extension code gone, every subagent a little lighter on prompt tokens.

      Migration for the removals

      • Anything that used write_artifact(name: "X", ...) โ†’ use write(path: "<explicit path>", ...) with a path the orchestrator provides.
      • Anything that used read_artifact(name: "X") โ†’ use read(path: "<explicit path>").
      • New recommended convention for planning runs: colocate deliverables under .pi/plans/YYYY-MM-DD-<name>/. See the updated /plan skill and the bundled agent prompts for the full layout (scout-context.md, spec.md, plan.md, review.md).

      ๐Ÿ“ Documentation

      • Migrated all bundled agent prompts (scout, spec, planner, reviewer, visual-tester) and the /plan skill to the new path-based artifact convention. Every planning-run deliverable now lives under .pi/plans/YYYY-MM-DD-<name>/ for consistency.

      ๐Ÿ”ง Other Changes

      • Removed a stray const unused = "hello"; leftover in pi-extension/subagents/session.ts. (w-winter)
    25. ๐Ÿ”— r/Leeds Fox Rescue in Leeds rss

      Does anyone know if we have any local wildlife rescues that would rescue an injured fox?

      submitted by /u/lozmarie424
      [link] [comments]

    26. ๐Ÿ”— r/Yorkshire Reform's Bradford candidate who met King exposed over vile anti-Muslim rants rss
    27. ๐Ÿ”— HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [IDAssist](https://github.com/symgraph/IDAssist): 1.9.0
      
    28. ๐Ÿ”— sacha chua :: living an awesome life Make chapter markers and video time hyperlinks easier to note while I livestream rss

      I want to make it easier to add chapter markers to my YouTube video descriptions and hyperlinks to specific times in videos in my blog posts.

      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.