๐Ÿก


to read (pdf)

  1. Neobrutalism components - Start making neobrutalism layouts today
  2. Debunking zswap and zram myths
  3. Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
  4. Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
  5. Letting AI Actively Manage Its Own Context | 明天的乌云

  1. April 08, 2026
    1. 🔗 Pagefind/pagefind v1.5.1 release

      v1.5.1

    2. 🔗 r/reverseengineering From UART to Root: Vendor Shell Escape on a Uniview IP Camera rss
  2. April 07, 2026
    1. 🔗 brettcannon/cpython-wasi-build CPython 3.14.4 w/ WASI SDK 24 release

      Use bytecodealliance/setup-wasi-sdk-action to install the WASI SDK (#28)

      Port of python/cpython@4ebaf3f

      Co-authored-by: brettcannon 54418+brettcannon@users.noreply.github.com

    2. 🔗 brettcannon/cpython-wasi-build CPython 3.13.13 w/ WASI SDK 24 release

      Use bytecodealliance/setup-wasi-sdk-action to install the WASI SDK (#28)

      Port of python/cpython@4ebaf3f

      Co-authored-by: brettcannon 54418+brettcannon@users.noreply.github.com

    3. 🔗 brettcannon/cpython-wasi-build CPython 3.15.0a8 w/ WASI SDK 32 release

      Use bytecodealliance/setup-wasi-sdk-action to install the WASI SDK (#28)

      Port of python/cpython@4ebaf3f

      Co-authored-by: brettcannon 54418+brettcannon@users.noreply.github.com

    4. 🔗 r/wiesbaden Mountainbike/Downhill Trails rss

      Where are there good trails for mountain biking and/or downhill riding in Wiesbaden and the surrounding area?

      Is there perhaps even a community of its own?

      I'm quite new to this and grateful for any suggestions :-)

      submitted by /u/Excellent_Scheme_247

    5. 🔗 anthropics/claude-code v2.1.94 release

      What's changed

      • Added support for Amazon Bedrock powered by Mantle, set CLAUDE_CODE_USE_MANTLE=1
      • Changed default effort level from medium to high for API-key, Bedrock/Vertex/Foundry, Team, and Enterprise users (control this with /effort)
      • Added compact Slack #channel header with a clickable channel link for Slack MCP send-message tool calls
      • Added keep-coding-instructions frontmatter field support for plugin output styles
      • Added hookSpecificOutput.sessionTitle to UserPromptSubmit hooks for setting the session title
      • Plugin skills declared via "skills": ["./"] now use the skill's frontmatter name for the invocation name instead of the directory basename, giving a stable name across install methods
      • Fixed agents appearing stuck after a 429 rate-limit response with a long Retry-After header — the error now surfaces immediately instead of silently waiting
      • Fixed Console login on macOS silently failing with "Not logged in" when the login keychain is locked or its password is out of sync — the error is now surfaced and claude doctor diagnoses the fix
      • Fixed plugin skill hooks defined in YAML frontmatter being silently ignored
      • Fixed plugin hooks failing with "No such file or directory" when CLAUDE_PLUGIN_ROOT was not set
      • Fixed ${CLAUDE_PLUGIN_ROOT} resolving to the marketplace source directory instead of the installed cache for local-marketplace plugins on startup
      • Fixed scrollback showing the same diff repeated and blank pages in long-running sessions
      • Fixed multiline user prompts in the transcript indenting wrapped lines under the ❯ caret instead of under the text
      • Fixed Shift+Space inserting the literal word "space" instead of a space character in search inputs
      • Fixed hyperlinks opening two browser tabs when clicked inside tmux running in an xterm.js-based terminal (VS Code, Hyper, Tabby)
      • Fixed an alt-screen rendering bug where content height changes mid-scroll could leave compounding ghost lines
      • Fixed FORCE_HYPERLINK environment variable being ignored when set via settings.json env
      • Fixed native terminal cursor not tracking the selected tab in dialogs, so screen readers and magnifiers can follow tab navigation
      • Fixed Bedrock invocation of Sonnet 3.5 v2 by using the us. inference profile ID
      • Fixed SDK/print mode not preserving the partial assistant response in conversation history when interrupted mid-stream
      • Improved --resume to resume sessions from other worktrees of the same repo directly instead of printing a cd command
      • Fixed CJK and other multibyte text being corrupted with U+FFFD in stream-json input/output when chunk boundaries split a UTF-8 sequence
      • [VSCode] Reduced cold-open subprocess work on starting a session
      • [VSCode] Fixed dropdown menus selecting the wrong item when the mouse was over the list while typing or using arrow keys
      • [VSCode] Added a warning banner when settings.json files fail to parse, so users know their permission rules are not being applied
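
      The stream-json fix above, CJK text corrupted with U+FFFD when chunk boundaries split a UTF-8 sequence, is a classic streaming pitfall. A minimal Python sketch of the bug and the incremental-decoder fix (illustrative only, not Claude Code's actual code):

```python
import codecs

data = "日本語".encode("utf-8")   # 9 bytes, 3 per character
chunks = [data[:4], data[4:]]    # the boundary splits the second character

# Naive: decoding each chunk independently yields U+FFFD replacement characters
naive = "".join(c.decode("utf-8", errors="replace") for c in chunks)

# Fix: an incremental decoder buffers the trailing partial sequence across chunks
decoder = codecs.getincrementaldecoder("utf-8")()
fixed = "".join(decoder.decode(c) for c in chunks) + decoder.decode(b"", final=True)

print("\ufffd" in naive, fixed)  # True 日本語
```
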
    6. 🔗 Simon Willison Anthropic's Project Glasswing - restricting Claude Mythos to security researchers - sounds necessary to me rss

      Anthropic didn't release their latest model, Claude Mythos (system card PDF), today. They have instead made it available to a very restricted set of preview partners under their newly announced Project Glasswing.

      The model is a general purpose model, similar to Claude Opus 4.6, but Anthropic claim that its cyber-security research abilities are strong enough that they need to give the software industry as a whole time to prepare.

      Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.

      [...]

      Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world's shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.

      There's a great deal more technical detail in Assessing Claude Mythos Preview's cybersecurity capabilities on the Anthropic Red Team blog:

      In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets.

      Plus this comparison with Claude Opus 4.6:

      Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla's Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more.

      Saying "our model is too dangerous to release" is a great way to build buzz around a new model, but in this case I expect their caution is warranted.

      Just a few days ago (last Friday) I started a new ai-security-research tag on this blog to acknowledge an uptick in credible security professionals sounding the alarm on how good modern LLMs have got at vulnerability research.

      Greg Kroah-Hartman of the Linux kernel:

      Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.

      Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

      Daniel Stenberg of curl:

      The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

      I'm spending hours per day on this now. It's intense.

      And Thomas Ptacek published Vulnerability Research Is Cooked, a post inspired by his podcast conversation with Anthropic's Nicholas Carlini.

      Anthropic have a 5 minute talking heads video describing the Glasswing project. Nicholas Carlini appears as one of those talking heads, where he said (highlights mine):

      It has the ability to chain together vulnerabilities. So what this means is you find two vulnerabilities, either of which doesn't really get you very much independently. But this model is able to create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. [...]

      I've found more bugs in the last couple of weeks than I found in the rest of my life combined. We've used the model to scan a bunch of open source code, and the thing that we went for first was operating systems, because this is the code that underlies the entire internet infrastructure. For OpenBSD, we found a bug that's been present for 27 years, where I can send a couple of pieces of data to any OpenBSD server and crash it. On Linux, we found a number of vulnerabilities where as a user with no permissions, I can elevate myself to the administrator by just running some binary on my machine. For each of these bugs, we told the maintainers who actually run the software about them, and they went and fixed them and have deployed the patches so that anyone who runs the software is no longer vulnerable to these attacks.

      I found this on the OpenBSD 7.8 errata page:

      025: RELIABILITY FIX: March 25, 2026 All architectures

      TCP packets with invalid SACK options could crash the kernel.

      A source code patch exists which remedies this problem.

      I tracked that change down in the GitHub mirror of the OpenBSD CVS repo (apparently they still use CVS!) and found it using git blame:

      Screenshot of a Git blame view of C source code around line 2455 showing TCP SACK hole validation logic. Code includes checks using SEQ_GT, SEQ_LT macros on fields like th->th_ack, tp->snd_una, sack.start, sack.end, tp->snd_max, and tp->snd_holes. Most commits are from 25–27 years ago with messages like "more SACK hole validity testin..." and "knf", while one recent commit from 3 weeks ago ("Ignore TCP SACK packets wit...") is highlighted with an orange left border, adding a new guard "if (SEQ_LT(sack.start, tp->snd_una)) continue;"

      Sure enough, the surrounding code is from 27 years ago.

      I'm not sure which Linux vulnerability Nicholas was describing, but it may have been this NFS one recently covered by Michael Lynch.

      There's enough smoke here that I believe there's a fire. It's not surprising to find vulnerabilities in decades-old software, especially given that they're mostly written in C, but what's new is that coding agents run by the latest frontier LLMs are proving tirelessly capable at digging up these issues.

      I actually thought to myself on Friday that this sounded like an industry-wide reckoning in the making, and that it might warrant a huge investment of time and money to get ahead of the inevitable barrage of vulnerabilities. Project Glasswing incorporates "$100M in usage credits ... as well as $4M in direct donations to open-source security organizations". Partners include AWS, Apple, Microsoft, Google, and the Linux Foundation. It would be great to see OpenAI involved as well - GPT-5.4 already has a strong reputation for finding security vulnerabilities and they have stronger models on the near horizon.

      The bad news for those of us who are not trusted partners is this:

      We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview.

      I can live with that. I think the security risks really are credible here, and having extra time for trusted teams to get ahead of them is a reasonable trade-off.


    7. 🔗 r/york First trip to your wonderful city rss

      A few of my favourite shots. Not so much time for pictures with the children in tow these days but still managed a couple of keepers.

      submitted by /u/MicroWave

    8. 🔗 r/reverseengineering ida-mcp 2.1: Progressive Tool Discovery, Background Analysis, and Batch Operations rss
    9. 🔗 r/Leeds North Street LS2 rss

      Headed up to North Street today. Loads going on, seems like a nice part of town. I always avoided that area as you head up to Little London but almost feels nicer and safer now than other areas of town (city centre). Is this a recent boom or have I just had my blinkers on? Anyone know the plans on what they are doing around that area?

      submitted by /u/Olivrrpb

    10. 🔗 @binaryninja@infosec.exchange Join us tomorrow, Wednesday April 8th @ 3pm ET, for a sneak peek at Binary mastodon

      Join us tomorrow, Wednesday April 8th @ 3pm ET, for a sneak peek at Binary Ninja 5.3!

      We'll take a look at all the major new features coming in 5.3, from expanded architecture and platform support to core analysis features, new debugger features, brand-new UIs, and so much more. Follow along with the latest dev build, or join us tomorrow to see what's worth all the hype: https://youtube.com/live/iD8UidhFbhg

    11. 🔗 r/Leeds Good Boy (the Stephen Graham film, not the dog one) currently in cinema was filmed in Leeds during freshers after a Leeds game at 10pm. It's pretty hectic and shows off some bits of Leeds including lower Briggate. rss

      87% on Rotten Tomatoes and very good. A bit of a strange one kinda like a Yorgos Lanthimos film.

      If anyone has watched and can identify all the Leeds/Yorkshire locations I'd be interested.

      submitted by /u/montfree

    12. 🔗 r/Harrogate Best bars for a date night rss

      hi all,

      I am taking my girlfriend out to Harrogate tomorrow for her birthday! I have booked La Feria for our dinner in the evening but wondered what the best bars are to go to before and after our meal.

      Just to note, we are dressing up nicely - she'll be in a dress and I'll be in a shirt. Just wondered if that fits the vibe of La Feria and if you know any nice bars that suit that and are nice for the evening!!

      submitted by /u/GarlicCharacter3247

    13. 🔗 r/york Scenic Drives (ideally circular) - Leaving York rss

      Hi

      Can anyone recommend any drives that you can easily do within an hour or so (total drive time) with the following ideal criteria:

      - A scenic and reasonably quiet route

      - Ideally circular (leave York - head out and back to York eventually)

      - Something that's not necessarily well known

      - A nice village pub to call in at along the way

      - That's all!

      All recommendations welcome, thank you.

      submitted by /u/HeroRon

    14. 🔗 r/LocalLLaMA GLM-5.1 rss

      submitted by /u/danielhanchen

    15. 🔗 r/Leeds Trying to get to Leeds station in the morning from Bramley with all the road works rss

      There are a lot of road works in Kirkstall and people were delayed. My flight is at 12pm in the afternoon and I have to get all the way to Manchester..

      Thinking of setting off between 6:00 and 7:00 a.m.

      Would you recommend an Uber? I think the buses might be out of whack that morning with all the road works. I did speak to a person on the road works and they said that all the road work starts around 9:00, which means there's no room for error in me getting up early.

      submitted by /u/Crazy_Screen_5043

    16. 🔗 r/Leeds Anyone near Headingley able to offer an emergency puppy cuddle for a very overdue pregnant woman? rss

      Bit of an unusual one, but my sister-in-law is very overdue with her first baby, desperate to avoid induction, and absolutely mad about dogs, while sadly unable to own one.

      I wondered whether anyone near Headingley has a friendly puppy and might be willing to let her have a short, supervised interaction, just some gentle fuss, cuddles, and a bit of puppy time. The hope, honestly, is to get a bit of oxytocin flowing and lift her spirits, as she completely melts around dogs.

      We are not looking to borrow a dog or put anyone out, just a brief visit with the owner present, ideally somewhere convenient and public if that suits you better.

      Bit of a long shot, but hey ho.

      Thank you!

      submitted by /u/Desecron

    17. 🔗 syncthing/syncthing v2.0.16 release

      Major changes in 2.0

      • Database backend switched from LevelDB to SQLite. There is a migration on
        first launch which can be lengthy for larger setups. The new database is
        easier to understand and maintain and, hopefully, less buggy.

      • The logging format has changed to use structured log entries (a message
        plus several key-value pairs). Additionally, we can now control the log
        level per package, and a new log level WARNING has been inserted between
        INFO and ERROR (the latter was previously known as WARNING). The INFO
        level has become more verbose, indicating the sync actions taken by
        Syncthing. A new command line flag --log-level sets the default log level
        for all packages, and the STTRACE environment variable and GUI have been
        updated to set log levels per package. The --verbose and --logflags
        command line options have been removed and will be ignored if given.

      • Deleted items are no longer kept forever in the database; instead they
        are forgotten after fifteen months. If your use case requires deletes to
        take effect after more than a fifteen-month delay, set the
        --db-delete-retention-interval command line option or corresponding
        environment variable to zero, or a longer time interval of your choosing.

      • Modernised command line options parsing. Old single-dash long options are
        no longer supported, e.g. -home must be given as --home. Some options
        have been renamed, others have become subcommands. All serve options are
        now also accepted as environment variables. See syncthing --help and
        syncthing serve --help for details.

      • Rolling hash detection of shifted data is no longer supported as this
        effectively never helped. Instead, scanning and syncing is faster and more
        efficient without it.

      • A "default folder" is no longer created on first startup.

      • Multiple connections are now used by default between v2 devices. The new
        default value is to use three connections: one for index metadata and two
        for data exchange.

      • The following platforms unfortunately no longer get prebuilt binaries for
        download at syncthing.net and on GitHub, due to complexities related to
        cross compilation with SQLite:

        • dragonfly/amd64
        • solaris/amd64
        • linux/ppc64
        • netbsd/*
        • openbsd/386 and openbsd/arm
        • windows/arm
      • The handling of conflict resolution involving deleted files has changed.
        A delete can now be the winning outcome of conflict resolution, resulting
        in the deleted file being moved to a conflict copy.

      This release is also available as:

      • APT repository: https://apt.syncthing.net/

      • Docker image: docker.io/syncthing/syncthing:2.0.16 or ghcr.io/syncthing/syncthing:2.0.16
        ({docker,ghcr}.io/syncthing/syncthing:2 to follow just the major version)

      What's Changed

      Fixes

      • fix(protocol): verify compressed message length before decompression by @calmh in #10595
      • fix(systemd): support overrides for syncOwnership by @Valloric in #10602
      • fix(systemd): add back chown allowed syscalls by @Valloric in #10605

      Other

      • chore(config, connections): use same reconnection interval for QUIC and TCP (fixes #10507) by @marbens-arch in #10573
      • build(deps): update dependencies by @calmh in #10588
      • chore(sqlite): reduce max open connections, keep them open permanently (fixes #10592) by @calmh in #10596

      Full Changelog : v2.0.15...v2.0.16

    18. 🔗 r/LocalLLaMA You can now fine-tune Gemma 4 locally 8GB VRAM + Bug Fixes rss

      Hey guys, you can now fine-tune Gemma 4 E2B and E4B in our free Unsloth notebooks! You need 8GB VRAM to train Gemma-4-E2B locally. Unsloth trains Gemma 4 ~1.5x faster with ~60% less VRAM than FA2 setups: https://github.com/unslothai/unsloth We also found and did bug fixes for Gemma 4 training:

      1. Grad accumulation no longer causes losses to explode - before you might see losses of 300 to 400 - it should be 10 to 15 - Unsloth has this fixed.
      2. Index Error for 26B and 31B for inference - this will fail inference for 26B and 31B when using transformers - we fixed it.
      3. use_cache=False had gibberish for E2B, E4B - see https://github.com/huggingface/transformers/issues/45242
      4. float16 audio: -1e9 overflows in float16

      You can also train 26B-A4B and 31B, or train via a UI with Unsloth Studio. Studio and the notebooks work for Vision, Text, Audio and inference. For bug fix details and tips and tricks, read our blog/guide: https://unsloth.ai/docs/models/gemma-4/train Free Colab Notebooks: E4B + E2B (Studio web UI) | E4B (Vision + Text) | E4B (Audio) | E2B (Run + Text)
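
      The gradient-accumulation fix (item 1 above) comes down to loss scaling. An illustrative Python sketch of why unscaled accumulation inflates the reported loss (assumed mechanics, not Unsloth's actual code; the numbers are hypothetical):

```python
# Hypothetical per-microbatch losses for one accumulated optimizer step
accum_steps = 4
microbatch_losses = [12.0, 11.0, 13.0, 12.0]

# Buggy: accumulating unscaled losses inflates the step loss by accum_steps,
# which is how a ~12 loss can be reported as far larger at bigger scales
buggy_step_loss = sum(microbatch_losses)

# Fixed: divide each microbatch loss by accum_steps before accumulating,
# so the accumulated loss matches the plain large-batch loss
fixed_step_loss = sum(loss / accum_steps for loss in microbatch_losses)

print(buggy_step_loss, fixed_step_loss)  # 48.0 12.0
```
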

      Thanks guys!

      submitted by /u/danielhanchen

    19. 🔗 r/york casualties of the storm :( rss

      on the riverbank near cinder lane and in homestead park

      submitted by /u/whtmynm

    20. 🔗 r/wiesbaden FDP and Pro Auto start hyperventilating rss
    21. 🔗 r/reverseengineering AI just hacked one of the world's most secure operating systems in four hours. rss
    22. 🔗 r/Yorkshire World Coal Carrying Championship 2026! rss

      A beloved tradition in the village of Gawthorpe. The competition takes place annually with men, women and children taking part (the kids don't carry coal!) This year we had good weather and a turnout of many hundreds of spectators to cheer along the runners. Congratulations to all those taking part, where even completing the race is a feat in itself.

      submitted by /u/Paradoxbox00

    23. 🔗 r/york Jorvik Tickets for Today or early tomorrow? rss

      We're visiting York - arrived yesterday and leave midday tomorrow. I'm an idiot and didn't prebook tickets to the Jorvik museum and they're sold out. By any small chance, is there anyone who has tickets they can't use that we could buy for today or early tomorrow?

      submitted by /u/SnooCats1465

    24. 🔗 r/LocalLLaMA Turns out Gemma 4 had MTP (multi token prediction) all along rss

      Hey Everyone,

      While I was trying to use Gemma 4 through the LiteRT API in my Android app, I noticed that Gemma 4 was throwing errors when loading it on my Google Pixel 9 test device, complaining that the "mtp weights" were "an incompatible tensor shape". I did some digging and found out there are additional MTP prediction heads within the LiteRT files for speculative decoding and much faster outputs.

      Well, turns out I got confirmation today from a Google employee that Gemma 4 DOES INDEED have MTP, but it was "removed on purpose" for "ensuring compatibility and broad usability". Would've been great, to be honest, if they released the full model instead, considering we already didn't get the Gemma 124B model leaked in Jeff Dean's tweet by accident. Would've been great to have much faster Gemma 4 generation outputs, ideally on the already fast MoE.

      Maybe someone can reverse engineer and extract the tensors and the math based on the compute graph in LiteRT? Here's a link to the conversation: https://huggingface.co/google/gemma-4-E4B-it/discussions/5

      submitted by /u/Electrical-Monitor27
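
      MTP heads are what make speculative decoding possible: a cheap head drafts several future tokens, and the main model keeps the longest prefix it agrees with. A toy Python sketch of that accept/reject loop (assumed mechanics for illustration, not LiteRT's or Gemma's implementation):

```python
def speculative_step(draft_tokens, main_model_next):
    """Keep the longest prefix of drafted tokens the main model agrees with.

    main_model_next(prefix) returns the token the main model would emit
    after `prefix`; a real implementation scores all drafts in one pass.
    """
    accepted = []
    for token in draft_tokens:
        if main_model_next(accepted) == token:
            accepted.append(token)   # main model agrees: the draft token is free
        else:
            accepted.append(main_model_next(accepted))  # fall back to main model
            break
    return accepted

# Toy "main model" that deterministically continues 1, 2, 3, ...
next_token = lambda prefix: len(prefix) + 1

print(speculative_step([1, 2, 9, 4], next_token))  # [1, 2, 3]
```
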

    25. 🔗 r/Yorkshire Yorkshire, UK 🇬🇧 rss
    26. 🔗 r/reverseengineering DeepZero: An automated, agentic vulnerability research pipeline for finding kernel zero-days rss
    27. 🔗 backnotprop/plannotator v0.17.1 release

      Follow @plannotator on X for updates


      Missed recent releases?

      Release | Highlights
      ---|---
      v0.17.0 | AI code review agents, token-level annotation, merge-base diffs
      v0.16.7 | Gemini CLI plan review, install script skills directory fix
      v0.16.6 | Perforce support, Pi shared event API, suggested code prefill, file tree expand fix
      v0.16.5 | Resize handle scrollbar fix, VS Code Marketplace publish
      v0.16.4 | Compound planning improvement hook, GitHub Enterprise + self-hosted GitLab, dockview workspace, new themes
      v0.16.3 | Pi phase configuration, CLI help, untracked file discovery fix, review scroll reset
      v0.16.2 | Draggable comment popovers, cross-file annotation visibility, custom diff fonts, OpenCode verbose log fix
      v0.16.1 | SSE stream idle timeout fix for external annotations API
      v0.16.0 | GitHub Copilot CLI, external annotations API, bot callback URLs, interactive checkboxes, print support, diff display options
      v0.15.5 | Custom display names, GitHub viewed file sync, expand/collapse all in file tree, search performance, WSL fix
      v0.15.2 | Compound Planning skill, folder annotation, /plannotator-archive slash command, skill installation via platform installers
      v0.15.0 | Live AI chat in code review, plan archive browser, folder file viewer, resizable split pane, Pi full feature parity


      What's New in v0.17.1

      v0.17.1 is a patch release that fixes PR review in the Pi extension and addresses several cross-platform bugs found during an exhaustive parity audit of every server endpoint between Bun and Pi runtimes.

      Pi PR Review

      The Pi extension's plannotator-review command was completely ignoring PR URL arguments and always falling back to local git diffs. v0.17.1 implements the full PR review flow: URL parsing, authentication checks, PR metadata fetch, and local worktree creation for both same-repo and cross-repo PRs. Same-repo PRs use git worktree add --detach with the PR's head ref; cross-repo forks use a shallow clone with tracking refs for both branches.

      Beyond the missing PR flow, an audit of all 27 review server feature areas uncovered 12 parity gaps in the Pi server. These ranged from missing diagnostic logging on PR actions to incorrect access guards that would have allowed diff switching in PR mode. All have been fixed.

      Remote URL Parsing

      parseRemoteUrl has been rewritten to handle the full range of git remote formats. The previous regex incorrectly matched HTTPS URLs with non-standard ports (e.g., https://gitlab.example.com:8443/group/project.git) as SSH, and failed on multi-segment GitLab paths like group/subgroup/project. The new implementation handles SSH, SSH with port (ssh://git@host:22/path), standard HTTPS, and HTTPS with custom ports as separate cases. This fix applies to both Bun and Pi runtimes.
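
      The four remote-URL cases described above can be sketched in Python (a hypothetical re-implementation for illustration; this is not plannotator's actual parseRemoteUrl):

```python
import re

def parse_remote_url(url):
    """Split a git remote URL into (host, path). Illustrative sketch only."""
    # ssh:// form, optionally with user and port: ssh://git@host:22/group/project.git
    m = re.match(r"^ssh://(?:[^@/]+@)?([^:/]+)(?::\d+)?/(.+?)(?:\.git)?$", url)
    if m:
        return m.group(1), m.group(2)
    # HTTP(S), including non-standard ports: https://gitlab.example.com:8443/group/project.git
    m = re.match(r"^https?://([^:/]+)(?::\d+)?/(.+?)(?:\.git)?$", url)
    if m:
        return m.group(1), m.group(2)
    # scp-like SSH, checked last so "host:port" URL forms are not mistaken for "host:path"
    m = re.match(r"^(?:[^@/]+@)?([^:/]+):(.+?)(?:\.git)?$", url)
    if m:
        return m.group(1), m.group(2)
    return None

# Multi-segment GitLab path with a custom port stays HTTPS, not SSH
print(parse_remote_url("https://gitlab.example.com:8443/group/subgroup/project.git"))
```

The ordering matters: the scp-like pattern (`host:path`) must be tried last, since it would otherwise swallow `https://host:8443/...`, which is exactly the misclassification the release notes describe.
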

      Cross-Repo Clone Fixes

      Cross-repo PR clones (forks from different organizations) had two issues. The gh repo clone and glab repo clone commands don't accept a --hostname flag, so self-hosted GitHub Enterprise and GitLab instances would fail. The fix uses GH_HOST and GITLAB_HOST environment variables instead, which both CLIs respect. The shallow fetch depth has also been increased from 50 to 200 commits to handle PRs with longer histories.

      Additional Changes

      • Git Add button hidden in PR mode. The staging button was incorrectly visible during PR reviews because the server returned diffType: undefined and the client defaulted to "uncommitted". The client now disables staging when PR metadata is present
      • Diff viewer theme flash fix. Switching files in the diff viewer caused a brief flash of the wrong theme. The Pierre diff library's theme was being computed asynchronously via requestAnimationFrame; the initial state now reads CSS custom properties synchronously so the correct background appears on the first frame
      • Resolved/Outdated filters in PR comments. The PR comments tab now has toggle buttons to hide resolved or outdated review threads. Filters use the same green and amber color tokens as the existing status badges and integrate with the existing Clear Filters control

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

      Copilot CLI:

      /plugin marketplace add backnotprop/plannotator
      /plugin install plannotator-copilot@plannotator
      

      Gemini CLI: The install script auto-detects ~/.gemini and configures hooks, policy, and slash commands. See apps/gemini/README.md for manual setup.

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      VS Code Extension: Install from the VS Code Marketplace. Tested with Claude Code running in VS Code's integrated terminal. Not currently compatible with Anthropic's official VS Code extension due to upstream hook bugs.


      What's Changed

      Full Changelog : v0.17.0...v0.17.1

    28. 🔗 r/LocalLLaMA Gemma 4 26b A3B is mindblowingly good, if configured right rss

      Last few days I've been trying different models and quants on my RTX 3090 in LM Studio, but every single one always glitches the tool calling - an infinite loop that doesn't stop. But I really liked the model because it is really fast, like 80-110 tokens a second, and even at high context it still maintains very high speeds.

      I had great success with tool calling in the Qwen3.5 MoE model, but the issue I had with Qwen models is that there is some kind of bug in Win11 and LM Studio that makes prompt caching not work, so when the convo hits 30-40k context it is so slow at processing prompts it just kills my will to work with it.

      Gemma 4 is different - it is much better supported in llama.cpp and the caching works flawlessly. I'm using flash attention + Q4 quants, and with this I can push it to literally the maximum 260k context on the RTX 3090, and the model performs just as well.

      I finally found the one that works for me: the Unsloth Q3_K_M quant, temperature 1 and top-k sampling 40. I have a custom system prompt that I'm using which also might be helping.

      I've been testing it with opencode for the last 6 hours and I just can't stop - it cannot fail. It explained to me the whole structure of Open Code itself, and it is huge - the whole repo is 2.7GB, so many lines of code - and it has no issues traversing around and reading everything, explaining how certain things work. I think I'm gonna create my own version of Open Code in the end.

      It honestly feels like Claude Sonnet level of quality - it never fails to do function calling. I think this might be the best model for agentic coding / tool calling / open claw or search engines. I prefer it over Perplexity; in LM Studio connected to a search engine via a plugin it delivers much better results than Perplexity or Google.

      As for VRAM consumption it is heavy - it could probably work on 16GB if not for tool calling or agents, as you need 10-15k context just to start it. My GPU has 24GB VRAM so it can run it at full context no issues with Q4_0 KV.

      submitted by /u/cviperr33
      [link] [comments]

    29. ๐Ÿ”— Jessitron Adding Correctness Conditions to Code Changes rss

      Today I looked at the first PR on our new project repo. It added a new run script, but the README didn't mention it. The proposed change was incomplete, because the documentation was out of sync.

      Did I comment on the PR? Heck no. I want to fix this problem for all PRs, not just this one. We can automate this stuff now.

      Correctness condition: All PRs include updates to all relevant documentation files.

      How can we make this true?

      Instructions - We can change AGENTS.md to instruct our coding agent to look for documentation files and update them.
      Verification - We can add a reviewer agent to check each PR for missed documentation updates.

      This is two changes, so I can break this work into two parts. Which of these should we do first?

      Changing the instructions is easy.

      If we do instructions first, it's easy. It will work most of the time. When I try it on this little PR, it will certainly work, and then I can claim victory and move on to the next feature.

      Then later, on some future PR, the agent will miss updating some documentation. Will I notice? No. In fact: I hope not. If I'm looking through PRs to a level of detail that includes all documents in the PR and also documentation files not in the PR, then we have failed to automate enough of this project. (This project does not deserve that level of scrutiny.)

      Changing instructions without verification gives me no guarantee of my correctness condition.

      Adding validation is sufficient.

      If we do validation first, then every PR will be checked for missed documentation updates. Incorrect PRs will be rejected, so the coding agent will have to update the documentation.

      My correctness condition will be guaranteed. Well, as guaranteed as I can get it with this nondeterministic automation. The reviewing agent will have only one task, so it won't forget to check for needed documentation updates. If we ever catch it being wrong, then we must update its instructions.

      If we never implement the instructions change, then PRs will take longer, because some agent has to respond to the PR comments, and then the feedback loop runs again.

      With verification in place, the instructions change is an optimization!
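      The post leaves the reviewer agent abstract, but the same property can also be approximated with a cheap deterministic gate. A minimal sketch, assuming a hypothetical layout where code lives under src/ and documentation in README.md or docs/; in CI the changed-file list would come from git diff --name-only against the target branch:

      ```shell
      # Hypothetical docs gate: fail when a change set touches src/ but no docs.
      # In CI, pass in: git diff --name-only "$BASE_BRANCH"...HEAD
      docs_check() {
        changed="$1"
        if echo "$changed" | grep -q '^src/' && \
           ! echo "$changed" | grep -Eq '^(README\.md$|docs/)'; then
          echo "PR changes code but updates no documentation" >&2
          return 1
        fi
        return 0
      }
      ```

      A pipeline step would then run docs_check "$(git diff --name-only main...HEAD)" || exit 1, rejecting the PR so the coding agent has to bring the docs along; an agent reviewer can still layer on top for the judgment calls a filename check can't make.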

      Validation before implementation.

      It's a little like test-first development, but at a higher level. We're adding a check to every feature implementation, not just one.

      It's more like property testing than unit testing. We aren't hard-coding "every feature should update the README." We're stating a property: the documentation should be up-to-date after every feature change.

      Now my PR reviews are also system reviews: what about this PR should have been different? How can we change the agent's context and feedback to make that different? Now test that system change on this PR before we fix it.

      This is the new Boy Scout Rule. It went from "leave the codebase cleaner than I found it" to "make the whole development system stronger than it was."

      It's all part of programming the agents to program our software.

    30. ๐Ÿ”— Mitchell Hashimoto The Building Block Economy rss
      (empty)
  3. April 06, 2026
    1. ๐Ÿ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-06 rss

      IDA Plugin Updates on 2026-04-06

      Activity:

      • augur
        • aa5847a0: feat: update ida plugin stub and metadata
        • 9ff757e1: doc: improve compatibility info
      • capa
        • 70f275ac: build(deps-dev): bump types-protobuf (#2994)
        • 63aa5729: build(deps-dev): bump mypy from 1.19.1 to 1.20.0 (#2993)
        • 63edbedb: build(deps-dev): bump lodash from 4.17.23 to 4.18.1 in /web/explorer โ€ฆ
      • efiXplorer
        • 29960936: update guids submodule (#139)
      • Greffe
        • 19e36ac1: Merge pull request #72 from Lixhr/70-core-avoid-overwriting-instrumenโ€ฆ
        • 9458749b: add branch overlap detection on close targets
        • 17aa102d: Merge pull request #71 from Lixhr/69-test-instrument-every-instructions
        • 497cca77: Add batch adds
        • 90f54e65: Merge pull request #68 from Lixhr/65-core-set-and-call-handler
        • 638e8906: fix wrong register saving order
        • 11d9a781: fix segfault on targets added from config file
        • d6882068: Fix non-thumb branch / wrong ret sp offset
      • haruspex
        • dcf1bcba: feat: improve ida plugin stub and metadata
        • 9d2107db: doc: improve compatibility info
        • 2fa79f55: doc: improve compatibility info
      • ida-pro-mcp
        • 9f489ca3: Merge pull request #345 from JohnsterID/test/pr335-unsafe-gating-coveโ€ฆ
        • 66af3ff6: Merge pull request #337 from JohnsterID/fix/ida-rpc-query-params
        • cb6e84cd: Restrict GHA to this repo
        • 29f6ae93: Merge pull request #346 from hzqst/main
        • 30774f3a: Merge pull request #341 from ZehMatt/token-optimizations
        • bbca7351: Fix [MCP] ยซ notifications/initialized (0.0ms) ERROR: Method 'notificโ€ฆ
        • 256cc92e: Merge pull request #343 from hzqst/main
        • b8be0301: Use better approach to detect idalib headless mode: ida_kernwin.is_idโ€ฆ
        • 779d707d: Fix https://github.com/mrexodia/ida-pro-mcp/issues/342
        • a0bd04db: test: add coverage for @unsafe/@ext decorator sets and extension gating
        • c5360f62: fix: preserve ?ext= query params from -ida-rpc URL
      • python-elpida_core.py
        • fe666cbd: fix: add CONVERGENCE to Rhythm enum โ€” ECS crash on cycle 1
        • 7350af96: feat: D16 Stage 2 โ€” Witnessed Agency + Stage 1 gap closure
      • rhabdomancer
      • UltraKernelDumper
        • ea1cae2c: Add full project source excluding large target folder and build artifโ€ฆ
    2. ๐Ÿ”— backnotprop/plannotator v0.17.0 release

      Follow @plannotator on X for updates


      Missed recent releases?

      Release | Highlights
      ---|---
      v0.16.7 | Gemini CLI plan review, install script skills directory fix
      v0.16.6 | Perforce support, Pi shared event API, suggested code prefill, file tree expand fix
      v0.16.5 | Resize handle scrollbar fix, VS Code Marketplace publish
      v0.16.4 | Compound planning improvement hook, GitHub Enterprise + self-hosted GitLab, dockview workspace, new themes
      v0.16.3 | Pi phase configuration, CLI help, untracked file discovery fix, review scroll reset
      v0.16.2 | Draggable comment popovers, cross-file annotation visibility, custom diff fonts, OpenCode verbose log fix
      v0.16.1 | SSE stream idle timeout fix for external annotations API
      v0.16.0 | GitHub Copilot CLI, external annotations API, bot callback URLs, interactive checkboxes, print support, diff display options
      v0.15.5 | Custom display names, GitHub viewed file sync, expand/collapse all in file tree, search performance, WSL fix
      v0.15.2 | Compound Planning skill, folder annotation, /plannotator-archive slash command, skill installation via platform installers
      v0.15.0 | Live AI chat in code review, plan archive browser, folder file viewer, resizable split pane, Pi full feature parity
      v0.14.5 | GitLab merge request review, login page image fix, Windows install path fix


      What's New in v0.17.0

      v0.17.0 introduces AI-powered code review agents, token-level annotation in diffs, and merge-base diffs for PR-accurate comparisons. Three of the six PRs in this release came from external contributors, one of them a first-timer.

      AI Code Review Agents

      Codex and Claude Code can now run as background review agents directly from the Plannotator code review UI. Select an agent, launch it, and watch live log output stream into a detail panel while the agent works. When it finishes, its findings appear as external annotations in the diff viewer, tagged by severity.

      Codex agents use their built-in codex-review command and produce priority-level findings (P0 through P3). Claude agents use a custom multi-agent prompt covering bug detection, security, code quality, and guideline compliance, with each finding classified as important, nit, or pre-existing. Both agents' findings include reasoning traces that explain the logic behind each annotation.

      For PR reviews, the server automatically creates a local worktree so agents have full file access without affecting your working directory. Same-repo PRs use git worktree; cross-repo forks use a shallow clone with tracking refs for both branches. Pass --no-local to skip the worktree if you don't need file access.
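      The notes don't show the underlying commands, but the same-repo mechanics can be sketched with plain git (the repo contents and the pr-head branch name here are hypothetical): a detached worktree gives the agent a real checkout of the PR head while your own working tree stays put.

      ```shell
      set -e
      # Toy repo standing in for your project
      repo=$(mktemp -d); cd "$repo"
      git init -q -b main
      git config user.email demo@example.com; git config user.name demo
      echo v1 > app.txt; git add app.txt; git commit -qm init
      git checkout -qb pr-head; echo v2 > app.txt; git commit -qam "pr change"
      git checkout -q main

      # Check the PR head out into a detached worktree: a review agent can read
      # real files under $wt while the main working tree stays on main, untouched
      wt=$(mktemp -d)/review
      git worktree add --detach "$wt" pr-head >/dev/null 2>&1
      agent_view=$(cat "$wt/app.txt")   # the PR's version of the file
      my_view=$(cat app.txt)            # your checkout, unchanged
      git worktree remove "$wt"
      ```

      The cross-repo fork case can't use a worktree (the commits live in another repository), which is why the notes describe a shallow clone with tracking refs instead.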

      The Pi extension has full agent review parity: stdin/stdout/stderr handling, live log streaming, result ingestion, and vendored review modules with import rewriting.

      Token-Level Code Selection

      The diff viewer now supports clicking individual syntax tokens to annotate them. Hover a token to see it underlined; click to open the annotation toolbar with the token's text and position as context (e.g., "Line 47: processOrder"). Token metadata is stored on the annotation and surfaced in sidebar badges and exported feedback.

      Gutter-based line selection continues to work independently. The two selection modes don't interfere with each other.

      Merge-Base Diffs

      A new "Current PR Diff" option in the diff type selector uses git merge-base to find the common ancestor between your branch and the default branch, then diffs from that point. This produces the same diff you'd see on a GitHub pull request page. The existing "vs main" option (git diff main..HEAD) is still available but includes upstream changes that arrived after you branched, which can be noisy.
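      The difference between the two options is easy to demonstrate on a throwaway repo (all names here are hypothetical): the merge-base diff shows only the branch's own changes, while diffing against the tip of main also drags in upstream commits.

      ```shell
      set -e
      repo=$(mktemp -d); cd "$repo"
      git init -q -b main
      git config user.email demo@example.com; git config user.name demo
      echo base > file.txt; git add file.txt; git commit -qm base
      git checkout -qb feature; echo feature >> file.txt; git commit -qam feature
      # main moves on after we branched
      git checkout -q main
      echo upstream > upstream.txt; git add upstream.txt; git commit -qm upstream
      git checkout -q feature

      # "vs main" style: compares against the main tip, so upstream.txt shows up as noise
      vs_main=$(git diff --name-only main..HEAD)
      # PR style: diff from the common ancestor; only the feature's own change appears
      pr_diff=$(git diff --name-only "$(git merge-base main HEAD)"..HEAD)
      ```

      git's three-dot form, git diff main...HEAD, is shorthand for the same merge-base diff.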

      Additional Changes

      • @ file reference support in annotate. OpenCode-style @file.md references now resolve correctly in /plannotator-annotate. The resolver strips the leading @ as a fallback when the literal filename doesn't exist, while still preferring real files named @something.md if present (#488 by @Exloz)
      • Markdown hard line breaks and list continuations. Two-trailing-space and backslash hard breaks now render as <br> elements. Indented continuation lines after list items merge into the preceding bullet instead of becoming orphan paragraphs (#483, closing #482)
      • Explicit local mode override. Setting PLANNOTATOR_REMOTE=0 or false now forces local mode, bypassing SSH auto-detection. Previously only 1/true had explicit meaning (#481 by @foxytanuki, closing #480)
      • PR file content merge-base fix. File contents for expandable diff context are now fetched at the merge-base commit instead of the base branch tip. When the base branch has moved since the PR was created, the old file contents didn't match the diff hunks, causing crashes in the diff renderer. The fix fetches the merge-base SHA via GitHub's compare API and falls back gracefully if unavailable

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

      Copilot CLI:

      /plugin marketplace add backnotprop/plannotator
      /plugin install plannotator-copilot@plannotator
      

      Gemini CLI: The install script auto-detects ~/.gemini and configures hooks, policy, and slash commands. See apps/gemini/README.md for manual setup.

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      VS Code Extension: Install from the VS Code Marketplace. Tested with Claude Code running in VS Code's integrated terminal. Not currently compatible with Anthropic's official VS Code extension due to upstream hook bugs.


      What's Changed

      • feat(review): token-level code selection for annotations by @backnotprop in #500
      • feat(review): AI review agents, local worktree, and UI polish by @backnotprop in #491
      • fix(annotate): support @ markdown file references by @Exloz in #488
      • feat(review): add merge-base diff option for PR-style diffs by @yonihorn in #485
      • fix: handle markdown hard line breaks and list continuations by @backnotprop in #483
      • fix(remote): support explicit local override by @foxytanuki in #481
      • fix(review): use merge-base SHA for PR file contents by @backnotprop

      New Contributors

      Contributors

      @Exloz contributed the @ file reference fix for OpenCode's annotate mode (#488), including comprehensive test coverage for edge cases like real @-prefixed filenames and quoted input. First contribution.

      @yonihorn returned with the merge-base diff option (#485), giving PR reviews the same diff semantics GitHub uses.

      @foxytanuki continued contributing with the explicit local mode override (#481), their third PR after the CLI help message and SSE timeout fix.

      Community members who reported issues addressed in this release:

      Full Changelog : v0.16.7...v0.17.0

    3. ๐Ÿ”— r/Yorkshire Richmond gleaming in the spring sunshine today. rss
    4. ๐Ÿ”— r/Yorkshire No better place.. rss

      No better place.. Average photo. submitted by /u/Melodic_Position_590
      [link] [comments]

    5. ๐Ÿ”— r/LocalLLaMA What it took to launch Google DeepMind's Gemma 4 rss

      What it took to launch Google DeepMind's Gemma 4 💎💎💎💎 submitted by /u/jacek2023
      [link] [comments]

    6. ๐Ÿ”— r/york What's the name of the trio who play in York? rss

      They are a three-piece: violin, guitar and double bass, and they play covers in York. They're bloody fantastic but I cannot remember their name.

      submitted by /u/rjle_x
      [link] [comments]

    7. ๐Ÿ”— jesseduffield/lazygit v0.61.0 release

      The big one in this release is support for GitHub pull requests. They are shown as little GitHub icons next to each branch that has one, and you can open a PR in the browser by pressing shift-G. To enable this, all you need to do is install the gh tool if you haven't already, and log in using gh auth login.

      What's Changed

      Features โœจ

      Enhancements ๐Ÿ”ฅ

      • Add support for clicking on arrows in the file list to expand/collapse directories by @blakemckeany in #5365
      • Remove empty directories after discarding untracked files by @stefanhaller in #5408
      • Make file sort order and case sensitivity configurable, and default to mix files and folders by @stefanhaller in #5427
      • Allow customizing the window width/height thresholds for when to use portrait mode by @stefanhaller in #5452
      • Log hashes of local branches when deleting them by @stefanhaller in #5441
      • Add condition field to custom command prompts by @mrt181 in #5364

      Fixes ๐Ÿ”ง

      Maintenance โš™๏ธ

      Docs ๐Ÿ“–

      I18n ๐ŸŒŽ

      Performance Improvements ๐Ÿ“Š

      New Contributors

      Full Changelog : v0.60.0...v0.61.0

    8. ๐Ÿ”— @binaryninja@infosec.exchange Tired of unzipping your password-protected malware samples just to analyze mastodon

      Tired of unzipping your password-protected malware samples just to analyze them? We've got you covered.

      Our latest blog post covers Container Transforms and how Binja now handles nested binary formats with structure and provenance intact.

      Read it here: https://binary.ninja/2026/03/31/container- transforms.html

    9. ๐Ÿ”— r/wiesbaden Language school in Frankfurt/Wiesbaden rss
    10. ๐Ÿ”— r/Yorkshire Hand painted Yorkshire artworks by Paul Halmshaw. rss
    11. ๐Ÿ”— r/york My Visit To The City Today - lots of photos. rss

      My Visit To The City Today - lots of photos. submitted by /u/danum1962
      [link] [comments]

    12. ๐Ÿ”— r/york Original Ghost Walk (1973) vs. Mad Alice, which one should I book ? rss

      Hi all, I'll be visiting York soon and I badly want to do a ghost tour. I've been looking at the choices and I'm torn between two.

      I really love the fact that the Original Ghost Walk is the oldest in the world; that authenticity is pulling me in.

      But I see everyone raving about Mad Alice (The Bloody Tour) for the performance. For those who have done both, which one feels more like a genuine dive into York's history? Or should I not care about history and just look to have fun?

      I'm staying overnight specifically to do one of these, so I want to make sure I pick the one that actually feels worth it after dark.

      submitted by /u/Lanky_Cartoonist_743
      [link] [comments]

    13. ๐Ÿ”— r/Leeds 18f in leeds wanting a creative circle rss

      Hi, I'm 18 and based in Leeds. I'm really into the idea of filmmaking and creative stuff in general (making videos, trying out ideas, etc.), and I'd love to meet people around my age who are into the same kind of thing.

      I'm still pretty new to it and trying to build a creative circle, so I'm really keen to find people who want to make things together, collaborate, or just chat about creative ideas.

      If anyone knows of any places, groups, or communities in Leeds where people like this hang out, I'd really appreciate any suggestions too.

      Feel free to message me if you're interested 😭🙏🏾

      submitted by /u/Sufficient_Leg_5141
      [link] [comments]

    14. ๐Ÿ”— sacha chua :: living an awesome life YE12: Categorizing Emacs News, epwgraph, languages rss

      View in the Internet Archive, watch or comment on YouTube, or email me.

      Chapters:

      • 00:41:21 epwgraph
      • 00:54:56 learning languages

      Thanks for your patience with the audio issues! At some point, I need to work out the contention between all the different processes that I want to be listening to the audio from my mic. =)

      In this livestream, I categorize Emacs News for 2026-04-06, show epwgraph for managing Pipewire connections from Emacs, and share some of my language learning workflows.

      You can e-mail me at sacha@sachachua.com.

    15. ๐Ÿ”— r/Leeds Sharps bin disposal in Leeds? rss

      Hi there,

      Does anyone know where I can dispose of a sharps bin in Leeds?

      It's for syringes and needles for a medication I am prescribed by an online company.

      Thanks in advance!

      submitted by /u/No-Stick9557
      [link] [comments]

    16. ๐Ÿ”— r/york Jumble sale! rss

      ๐Ÿ›๏ธ Jumble Sale โ€“ Saturday 11th April! ๐Ÿ›๏ธ

      A fantastic jumble sale will be taking place on Saturday 11th April, 2pm โ€“ 4pm at the Sheriff Hutton Village Hall, in support of Shopmobility York.

      The wonderful Sheriff Hutton Jumblies will be running the sale on our behalf โ€“ and if youโ€™ve been before, youโ€™ll know itโ€™s always a brilliant event with plenty of bargains to be found!

      โœจ Details:

      โ€ข โฐ Time: 2pm โ€“ 4pm

      โ€ข ๐Ÿ“ Location: Village Hall, Sheriff Hutton Road, York YO60 6RA

      โ€ข ๐Ÿ’ท Entry: Just 50p

      โ€ข ๐Ÿšถ Itโ€™s always popular โ€“ arriving early to join the queue is highly recommended!

      ๐ŸŽŸ๏ธ Donโ€™t miss the tombola, and be sure to visit the cake stall for some delicious homemade treats!

      ๐Ÿ™ Donations still welcome! If anyone is still wanting to donate items, please contact to arrange collection or drop off.

      Come along, grab a bargain, and support a great cause โ€“ weโ€™d love to see you there!

      JumbleSale #ShopmobilityYork #CommunityEvent

      submitted by /u/Single-Ad-5317
      [link] [comments]

    17. ๐Ÿ”— r/reverseengineering Cracking a Malvertising DGA From the Device Side rss
    18. ๐Ÿ”— r/york Walking into York by the Ouse rss

      Walking into York by the Ouse submitted by /u/York_shireman
      [link] [comments]

    19. ๐Ÿ”— sacha chua :: living an awesome life 2026-04-06 Emacs news rss

      There's a lot of buzz around the remote code execution thing that involves Git, but it seems to be more of a Git issue than an Emacs one. This might be a workaround if you want, and in the meantime, don't check out git repositories you don't trust. There's no page for the Emacs Carnival for April yet, but you can start thinking about the theme of "newbies/starter kits" already, and I'm sure Cena or someone will round things up afterwards. Enjoy!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can e-mail me at sacha@sachachua.com.

    20. ๐Ÿ”— r/Yorkshire I don't think any place can match this vibe that Yorkshire has✨ rss

      I don't think any place can match this vibe that Yorkshire has✨ @ travelandchill1 submitted by /u/ScrollAndThink
      [link] [comments]

    21. ๐Ÿ”— r/Leeds Few more photos rss

      Couple more photos this morning, although I got told off. Apparently Wellington Place don't permit commercial photography without prior agreement.

      I'm in my work clothes with a Red S9 posting pics on Reddit lol.

      submitted by /u/Phil-pot
      [link] [comments]

    22. ๐Ÿ”— Pagefind/pagefind v1.5.0 release

      Hey! This is a big one. Pagefind 1.5.0 has been fermenting for a while, and addresses a lot of long-standing issues and feature requests. This release brings an entirely new search UI built on web components, major improvements to search relevance and ranking, diacritics support, automatic CJK segmentation, Web Worker search, notably smaller indexes, and a much faster indexing binary. Enormous thanks to everyone who contributed features and fixes, as well as to everyone who tested the beta releases and provided feedback ❤️ - @bglw

      If you only read this far, I should mention up front: The existing Default UI and Modular UI remain available and supported for now, so you can upgrade your sites to Pagefind v1.5.0 without migrating to the Component UI.

      Pagefind Component UI

      Pagefind ships a brand new UI system built entirely on web components. The Component UI gives you searchboxes, modals, result lists, and filter controls as composable <pagefind-*> elements that you can mix, match, and style with CSS variables.

      The Component UI is available as vendored files in your /pagefind/ output directory, or as an npm package to install and import.

      The best way to get a feel for the new components is on the 📘 Pagefind Component UI page of the docs, where interactive examples of various components are shown.

      Extra goodies with the Component UI:

      • Greatly improved accessibility over the Default UI
      • Keyboard navigation through search results
      • Configurable keyboard shortcuts (thanks @miketheman !)
      • Full custom templates for rendering results and placeholders
      • Exported types for Component UI npm consumers (thanks @vanruesc !)
      • Support for multiple scoped Pagefind instances on one page
      • A range of CSS variables available for light-touch customization (thanks @miketheman for some of these!)
      • Improved RTL and locale-specific rendering

      Search Relevance, and Searching Metadata

      Pagefind now searches metadata by default! Importantly, this means it now searches the title metadata. Matches in titles are now taken into account, and search results are very hard to shake from prime positions if all (or much) of the title matches the search query.

      You can configure the weight of any metadata field. See 📘 Configuring Metadata Weights to change the title boost or apply custom weights to your own metadata fields.

      Beyond metadata searching, a bunch of weird and wonderful ranking bugs were resolved:

      • Metadata-only matches now return results. Previously, if a page matched the search query only in its metadata (e.g. the title) but not in the body content, it would be missed. These pages now correctly appear in results.
      • Word splitting and indexing was revisited to properly handle diacritics, stemming, and compound words together. This fixes a broad set of edge cases where compound word parts weren't indexed correctly.
      • Loading index chunks now correctly uses stemmed terms. This was a discrepancy in how chunks were identified, and could cause some hard to pin down issues where the wrong chunk would be loaded for a search term, leaving you with no (or fewer) results.
      • A couple of pathways left you with only the first matching chunk loaded, which would also give you fewer results. Words that straddle multiple chunks now behave better.
      • Fancy-pants unicode characters in words could really mess up the chunk loading, which has been fixed.

      Diacritics Support

      We finally properly support matching across diacritics. You can now find your cafés without remembering how to type é.

      By default, exact diacritic matches are preferred. So if you're searching "cafe", pages with "cafe" will rank higher than pages with "café". Getting this relevance right by default was the final piece of the puzzle for shipping this, which is why it took a while to land. See 📘 Configuring Diacritic Similarity to adjust how this plays out on your site.

      If you need strict matching, set exactDiacritics: true to disable normalization entirely: "cafe" will only match "cafe", and "café" will only match "café". 📘 Exact Diacritics

      Multilingual Improvements

      Thanks, browsers! Pagefind now taps into Intl.Segmenter to segment search queries in CJK (Chinese, Japanese, Korean) and other non-whitespace-delimited languages. Pagefind already did this during indexing, but users searching still had to delimit their queries. Now searching "这是一段简单的测试文本" searches for the words "这", "是", "一段", "简单", "的", "测试", and "文本", which is also how that sentence was indexed.

      We also updated the underlying stemming library (thanks @uded !) which brings stemming support for Polish and Estonian (and Esperanto, if anyone is out there indexing some lang="eo" pages). The Snowball upgrade also improves stemming quality across many already-supported languages.

      Indexing Performance

      The indexing binary (the one you install through npx or your wrapper of choice) is now both smaller (so, faster to download) and faster to run, by quite a lot on both fronts. On some sites, indexing is more than twice as fast. Thanks to @zmre for much of this!

      Search Performance

      Pagefind's search now runs in a Web Worker automatically. This doesn't make the search faster, per se, but it dramatically improves perceived performance on large websites by keeping the main thread responsive. If Web Workers are unavailable, it falls back to the main thread automatically.

      Plus: Some low-hanging fruit was picked off, and Pagefind's index chunks are now ~45% smaller thanks to delta-encoding page numbers and word locations.

      New Search Options

      • metaCacheTag: allows you to configure the cache-busting tag on the metadata file (which is fetched fresh on every page load). For offline/PWA scenarios where assets need to be served with service workers, this can now be overridden.
      • plain_excerpt: search results and sub-results now include a plain_excerpt field containing the excerpt text without highlight mark tags, for those who want to handle highlighting themselves (or don't want it at all).
      • matchedMetaFields: search results now include a matchedMetaFields field listing which metadata fields matched the search query.
      • includeCharacters is now available in the Node and Python wrapper APIs.

      UI Translations

      • Added Greek (el) translations. (PR #1019, thanks @Yoda-Soda!)
      • Improved Chinese Traditional (zh-TW) translations. (PR #990, thanks @510208!)
      • Improved German (de) translations. (PR #953, thanks @randomguy-2650!)
      • Added translations for new Component UI strings across all existing languages.

      Other bits and bobs

      • Fixed relative image URLs (e.g. ./image.png) breaking when displayed in search results. (PR #1087)
      • Fixed Python x86_64 macOS wheel being incorrectly tagged as arm64. (PR #950, thanks @lioman!)
      • Fixed Python wheel tags being written in compressed form. (PR #989, thanks @ichard26!)
      • Excluded the vendor directory from the main pagefind PyPI package. (PR #991)
      • Migrated Python wrapper build tooling from Poetry to uv. (PR #934, thanks @SKalt!)
      • Fixed subresult URLs ignoring page meta URL overrides. (PR #1076)
      • Fixed subresult highlight mark color. (PR #1024)
      • Index chunk fetches are now throttled to avoid overwhelming the network on large sites. (PR #1071)
      • Added Windows ARM64 (aarch64-pc-windows-msvc) as a supported platform. (PR #1079)
      • For crate consumers: Moved actix-web and related serving dependencies behind a serve feature flag (PR #1023)

      Looking Forward

      The Component UI is the new recommended way to add search to your site, and future UI work will focus there. The Default UI and Modular UI are sticking around for now, but the Component UI is where new features will land first.

      Thanks again to everyone who contributed to this release!

    23. ๐Ÿ”— r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    24. ๐Ÿ”— r/wiesbaden Body shop / painter rss

      Hi,

      someone crashed into my parked car.

      The front bumper cover and the fender need to be repaired.

      The person who caused the accident is known, and their insurance is paying.

      Do you have tips for a really good body shop / painter?

      And maybe also a decent lawyer for traffic law?

      submitted by /u/BabaJoe
      [link] [comments]

    25. ๐Ÿ”— r/LocalLLaMA I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM rss

      Hardware:

      • Stock iMac G3 Rev B (October 1998). 233 MHz PowerPC 750, 32 MB RAM, Mac OS 8.5. No upgrades.
      • Model: Andrej Karpathy's 260K TinyStories (Llama 2 architecture). ~1 MB checkpoint.

      Toolchain:

      • Cross-compiled from a Mac mini using Retro68 (GCC for classic Mac OS → PEF binaries)
      • Endian-swapped the model + tokenizer from little-endian to big-endian for PowerPC
      • Files transferred via FTP to the iMac over Ethernet

      Challenges:

      • Mac OS 8.5 gives apps a tiny memory partition by default. Had to use MaxApplZone() + NewPtr() from the Mac Memory Manager to get enough heap
      • RetroConsole crashes on this hardware, so all output writes to a text file you open in SimpleText
      • The original llama2.c weight layout assumes n_kv_heads == n_heads. The 260K model uses grouped-query attention (kv_heads=4, heads=8), which shifted every pointer after wk and produced NaN. Fixed by using n_kv_heads * head_size for wk/wv sizing
      • Static buffers for the KV cache and run state to avoid malloc failures on 32 MB

      It reads a prompt from prompt.txt, tokenizes with BPE, runs inference, and writes the continuation to output.txt. Obviously the output is very short, but this is definitely meant to just be a fun experiment/demo! Here's the repo link: https://github.com/maddiedreese/imac-llm

      submitted by /u/maddiedreese
      [link] [comments]
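      The weight-swapping step above generalizes nicely: a llama2.c checkpoint is a small header of int32 config fields followed by raw float32 weights, so every value is 4 bytes wide and a uniform 32-bit byte swap converts the whole file. A minimal sketch of that idea (hypothetical helper name, not taken from the repo; note the tokenizer file mixes int32 lengths with raw byte strings, so it needs field-by-field handling rather than this uniform swap):

```python
import numpy as np

def swap_checkpoint_endianness(src_path, dst_path):
    # A llama2.c checkpoint is a header of int32 config fields followed
    # by float32 weights. Every value is 4 bytes wide, so swapping each
    # 32-bit word converts the whole file from little- to big-endian.
    words = np.fromfile(src_path, dtype=np.uint32)
    words.byteswap().tofile(dst_path)
```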

    26. ๐Ÿ”— r/Yorkshire Anybody here ever been to Market Weighton? Easily one of the nicest small towns in East Yorkshire in my opinion. rss

      I haven't been to Market Weighton since around 2012, but I plan on visiting again next time I'm in Hull; I always loved going there when I lived in East Yorkshire.

      submitted by /u/AcadiaNo1039
      [link] [comments]

    27. ๐Ÿ”— badlogic/pi-mono v0.65.2 release

      No content.

  4. April 05, 2026
    1. ๐Ÿ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-05 rss

      Activity:

    2. ๐Ÿ”— badlogic/pi-mono v0.65.1 release

      Fixed

      • Fixed bash output truncation by line count to always persist full output to a temp file, preventing data loss when output exceeds 2000 lines but stays under the byte threshold (#2852)
      • RpcClient now forwards subprocess stderr to parent process in real-time (#2805)
      • Theme file watcher now handles async fs.watch error events instead of crashing the process (#2791)
      • Fixed stored session cwd handling so resuming or importing a session whose original working directory no longer exists now prompts interactive users to continue in the current cwd, while non-interactive modes fail with a clear error.
      • Fixed resource collision precedence so project and user skills, prompt templates, and themes override package resources consistently, and CLI-provided paths take precedence over discovered resources (#2781)
      • Fixed OpenAI-compatible completions streaming usage accounting to preserve prompt_tokens_details.cache_write_tokens and normalize OpenRouter cached_tokens, preventing incorrect cache read/write token and cost reporting in pi (#2802)
      • Fixed CLI extension paths like git:gist.github.com/... being incorrectly resolved against cwd instead of being passed through to the package manager (#2845 by @aliou)
      • Fixed piped stdin runs with --mode json to preserve JSONL output instead of falling back to plain text (#2848 by @aliou)
      • Fixed interactive command docs to stop listing removed /exit as a supported quit command (#2850)
    3. ๐Ÿ”— r/york Evil Eye rss

      I went to Evil Eye several years ago when it had a fantastic gin shop.

      Its website suggests that's no longer the case.

      I'm coming over next week - help me get my expectations right! And if it doesn't live up to them, where's the next best gin shop?

      submitted by /u/Sitheref0874
      [link] [comments]

    4. ๐Ÿ”— sacha chua :: living an awesome life YE11: Fix find-function for Emacs Lisp from org-babel or scratch rss

      Watch on Internet Archive, watch/comment on YouTube, download captions, or email me

      Where can you define an Emacs Lisp function so that you can use find-function to jump to it again later?

      • A: In an indirect buffer from Org Mode source block with your favorite eval function like eval-defun
        • C-c ' (org-edit-special) inside the block; execute the defun with C-M-x (eval-defun), C-x C-e (eval-last-sexp), or eval-buffer.

              (defun my-test-1 () (message "Hello"))
          
      • B: In an Org Mode file by executing the block with C-c C-c

          (defun my-test-2 () (message "Hello"))
        
      • C: In a .el file

        file:///tmp/test-search-function.el : execute the defun with C-M-x (eval-defun), C-x C-e (eval-last-sexp), or eval-buffer

      • D: In a scratch buffer, other temporary buffer, or really any buffer thanks to eval-last-sexp

        (defun my-test-4 () (message "Hello"))

      Only option C works - it's gotta be in an .el file for find-function to find it. But I love jumping to function definitions using find-function or lispy-goto-symbol (which is bound to M-. if you use lispy and set up lispy-mode) so that I can look at or change how something works. It can be a little frustrating when I try to jump to a definition and it says, "Don't know where blahblahblah is defined." I just defined it five minutes ago! It's there in one of my other buffers, don't make me look for it myself. Probably this will get fixed in Emacs core someday, but no worries, we can work around it today with a little bit of advice.

      I did some digging around in the source code. Turns out that symbol-file can't find the function definition in the load-history variable if you're not in a .el file, so find-function-search-for-symbol gets called with nil for the library, which causes the error. (emacs:subr.el)

      I wrote some advice that searches in any open emacs-lisp-mode buffers or in a list of other files, like my Emacs configuration. This is how I activate it:

      (setq sacha-elisp-find-function-search-extra '("~/sync/emacs/Sacha.org"))
      (advice-add 'find-function-search-for-symbol :around #'sacha-elisp-find-function-search-for-symbol)
      

      Now I should be able to jump to all those functions wherever they're defined.

      (my-test-1)
      (my-test-2)
      (my-test-3)
      (my-test-4)
      

      Note that by default, M-. in emacs-lisp-mode uses xref-find-definitions, which seems to really want files. I haven't figured out a good workaround for that yet, but lispy-mode makes M-. work and gives me a bunch of other great shortcuts, so I'd recommend checking that out.

      Here's the source code for the find function thing:

      (defvar sacha-elisp-find-function-search-extra
        nil
        "List of filenames to search for functions.")
      
      ;;;###autoload
      (defun sacha-elisp-find-function-search-for-symbol (fn symbol type library &rest _)
        "Find SYMBOL with TYPE in Emacs Lisp buffers or `sacha-elisp-find-function-search-extra'.
      Prioritize buffers that do not have associated files, such as Org Src
      buffers or *scratch*. Note that the fallback search uses \"^([^ )]+\" so that
      it isn't confused by preceding forms.
      
      If LIBRARY is specified, fall back to FN.
      
      Activate this with:
      
      (advice-add 'find-function-search-for-symbol
       :around #'sacha-elisp-find-function-search-for-symbol)"
        (if (null library)
            ;; Could not find library; search my-dotemacs-file just in case
            (progn
              (while (and (symbolp symbol) (get symbol 'definition-name))
                (setq symbol (get symbol 'definition-name)))
              (catch 'found
                (mapc
                 (lambda (buffer-or-file)
                   (with-current-buffer (if (bufferp buffer-or-file)
                                            buffer-or-file
                                          (find-file-noselect buffer-or-file))
                     (let* ((regexp-symbol
                             (or (and (symbolp symbol)
                                      (alist-get type (get symbol 'find-function-type-alist)))
                                 (alist-get type find-function-regexp-alist)))
                            (form-matcher-factory
                             (and (functionp (cdr-safe regexp-symbol))
                                  (cdr regexp-symbol)))
                            (regexp-symbol (if form-matcher-factory
                                               (car regexp-symbol)
                                             regexp-symbol))
      
                            (case-fold-search)
                            (regexp (if (functionp regexp-symbol) regexp-symbol
                                      (format (symbol-value regexp-symbol)
                                              ;; Entry for ` (backquote) macro in loaddefs.el,
                                              ;; (defalias (quote \`)..., has a \ but
                                              ;; (symbol-name symbol) doesn't.  Add an
                                              ;; optional \ to catch this.
                                              (concat "\\\\?"
                                                      (regexp-quote (symbol-name symbol)))))))
                       (save-restriction
                         (widen)
                         (with-syntax-table emacs-lisp-mode-syntax-table
                           (goto-char (point-min))
                           (if (if (functionp regexp)
                                   (funcall regexp symbol)
                                 (or (re-search-forward regexp nil t)
                                     ;; `regexp' matches definitions using known forms like
                                     ;; `defun', or `defvar'.  But some functions/variables
                                     ;; are defined using special macros (or functions), so
                                     ;; if `regexp' can't find the definition, we look for
                                     ;; something of the form "(SOMETHING <symbol> ...)".
                                     ;; This fails to distinguish function definitions from
                                     ;; variable declarations (or even uses thereof), but is
                                     ;; a good pragmatic fallback.
                                     (re-search-forward
                                      (concat "^([^ )]+" find-function-space-re "['(]?"
                                              (regexp-quote (symbol-name symbol))
                                              "\\_>")
                                      nil t)))
                               (progn
                                 (beginning-of-line)
                                 (throw 'found
                                         (cons (current-buffer) (point))))
                             (when-let* ((find-expanded
                                          (when (trusted-content-p)
                                            (find-function--search-by-expanding-macros
                                             (current-buffer) symbol type
                                             form-matcher-factory))))
                               (throw 'found
                                       (cons (current-buffer)
                                             find-expanded)))))))))
                 (delq nil
                       (append
                        (sort
                         (match-buffers '(derived-mode . emacs-lisp-mode))
                         :key (lambda (o) (or (buffer-file-name o) "")))
                        sacha-elisp-find-function-search-extra)))))
          (funcall fn symbol type library)))
      

      I even figured out how to write tests for it:

      (ert-deftest sacha-elisp--find-function-search-for-symbol--in-buffer ()
        (let ((sym (make-temp-name "--test-fn"))
              buffer)
          (unwind-protect
              (with-temp-buffer
                (emacs-lisp-mode)
                (insert (format ";; Comment\n(defun %s () (message \"Hello\"))" sym))
                (eval-last-sexp nil)
                (setq buffer (current-buffer))
                (with-temp-buffer
                  (let ((pos (sacha-elisp-find-function-search-for-symbol nil (intern sym) nil nil)))
                    (should (equal (car pos) buffer))
                    (should (equal (cdr pos) 12)))))
            (fmakunbound (intern sym)))))
      
      (ert-deftest sacha-elisp--find-function-search-for-symbol--in-file ()
        (let* ((sym (make-temp-name "--test-fn"))
               (temp-file (make-temp-file
                           "test-" nil ".org"
                           (format
                            "#+begin_src emacs-lisp\n;; Comment\n(defun %s () (message \"Hello\"))\n#+end_src"
                            sym)))
               (sacha-elisp-find-function-search-extra (list temp-file))
               buffer)
          (unwind-protect
              (with-temp-buffer
                (let ((pos (sacha-elisp-find-function-search-for-symbol nil (intern sym) nil nil)))
                  (should (equal (buffer-file-name (car pos)) temp-file))
                  (should (equal (cdr pos) 35))))
            (delete-file temp-file))))
      
      This is part of my Emacs configuration.

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    5. ๐Ÿ”— jellyfin/jellyfin 10.11.8 release

      ๐Ÿš€ Jellyfin Server 10.11.8

      We are pleased to announce the latest stable release of Jellyfin, version 10.11.8! This minor release brings several bugfixes to improve your Jellyfin experience. As always, please ensure you take a full backup before upgrading!

      Note: This release fixes several regressions from 10.11.7, with the goal of getting people onto an updated release ahead of the forthcoming (t-minus 9 days) publication of the GHSAs/CVEs that were fixed in 10.11.7. Please upgrade to this release as soon as you can.

      You can find more details about and discuss this release on our forums.

      Changelog (3)

      ๐Ÿ“ˆ General Changes

    6. ๐Ÿ”— r/LocalLLaMA Gemma 4 just casually destroyed every model on our leaderboard except Opus 4.6 and GPT-5.2. 31B params, $0.20/run rss

      Tested Gemma 4 (31B) on our benchmark. Genuinely did not expect this. 100% survival, 5 out of 5 runs profitable, +1,144% median ROI. At $0.20 per run.

      It outperforms GPT-5.2 ($4.43/run), Gemini 3 Pro ($2.95/run), Sonnet 4.6 ($7.90/run), and absolutely destroys every Chinese open-source model we've tested: Qwen 3.5 397B, Qwen 3.5 9B, DeepSeek V3.2, GLM-5. None of them even survive consistently. The only model that beats Gemma 4 is Opus 4.6 at $36 per run. That's 180× more expensive. 31 billion parameters. Twenty cents.

      We double-checked the config, the prompt, the model ID: everything is identical to every other model on the leaderboard. Same seed, same tools, same simulation. It's just this good. Strongly recommend trying it for your agentic workflows. We've tested 22 models so far and this is by far the best cost-to-performance ratio we've ever seen. Full breakdown with charts and day-by-day analysis: foodtruckbench.com/blog/gemma-4-31b

      FoodTruck Bench is an AI business simulation benchmark: the agent runs a food truck for 30 days, making decisions about location, menu, pricing, staff, and inventory. Leaderboard at foodtruckbench.com

      EDIT: Gemma 4 26B A4B results are in. Lots of you asked about the 26B A4B variant. Ran 5 simulations; here's the honest picture: 60% survival (3/5 completed, 2 bankrupt). Median ROI: +119%, net worth: $4,386. Cost: $0.31/run. Placed #7 on the leaderboard, above every Chinese model and Sonnet 4.5, below everything else. Both bankruptcies were loan defaults, the same pattern we see across models. The 3 surviving runs were solid, especially the best one at +296% ROI.

      But here's the catch. The 26B A4B is the only model out of 23 tested that required custom output sanitization to function. It produces valid tool-call intent, but the JSON formatting is consistently broken: malformed quotes, trailing garbage tokens, invalid escapes. I had to build a 3-stage sanitizer specifically for this model. No other model needed anything like this. The business decisions themselves are unmodified; the sanitizer only fixes JSON formatting, not strategy. But if you're planning to use this model in agentic workflows, be prepared to handle its output format. It does not produce clean function calls out of the box.

      TL;DR: 31B dense → 100% survival, $0.20/run, #3 overall. 26B A4B → 60% survival, $0.31/run, #7 overall, but requires custom output parsing. The 31B is the clear winner. Updated leaderboard: foodtruckbench.com

      submitted by /u/Disastrous_Theme5906
      [link] [comments]
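      For anyone hitting similar broken tool-call output (malformed quotes, trailing garbage, invalid escapes), that kind of repair can be sketched roughly like this. This is a hypothetical sanitizer, not the post author's actual 3-stage implementation; the function name and stage boundaries are illustrative:

```python
import json
import re

def sanitize_tool_call(raw):
    """Best-effort repair of a malformed JSON tool call (illustrative sketch).

    Stage 1: trim anything before the first '{' and after the matching
             closing brace (trailing garbage tokens).
    Stage 2: replace "smart" quotes with plain double quotes.
    Stage 3: double any backslash that does not start a valid JSON escape.
    """
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth, end = 0, None
    for i, ch in enumerate(raw[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    if end is None:
        raise ValueError("unbalanced braces")
    text = raw[start:end]
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    # valid JSON escapes are \" \\ \/ \b \f \n \r \t \uXXXX; double the rest
    text = re.sub(r'\\(?!["\\/bfnrtu])', r"\\\\", text)
    return json.loads(text)
```

      Brace counting like this can miscount if a string value itself contains braces, so a production sanitizer would need a real tokenizer pass; this is just the shape of the idea.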

    7. ๐Ÿ”— r/york Free tourist places to visit in York? rss

      I'm coming from Ireland to visit York for the first time as a tourist. What free places can I visit in York? Also, could you recommend some less expensive places to visit?

      Thank you.

      submitted by /u/BeNicetoHuman
      [link] [comments]

    8. ๐Ÿ”— r/reverseengineering IW8 is safe and works fine? I'm interested in using it to learn how the game was developed, as well as the files, compilation process, etc. rss
    9. ๐Ÿ”— r/Harrogate What is the working class accent of Harrogate? rss

      I've got a weird obsession with accents across the UK, especially Yorkshire. Harrogate is by far the nicest big place in Yorkshire I've been to but I never met anyone who's actually from there when I went. With it being a quite posh place, I'd assume it has a posh northern accent?

      submitted by /u/montgomery_quinckle
      [link] [comments]

    10. ๐Ÿ”— r/Yorkshire We went to see the jousting at Leeds Royal Armouries! Here are some highlights rss

      It was a fantastic day out and the museum is free, but they told us the next jousting tournament over the summer had been cancelled due to funding :(

      submitted by /u/mbloomer04
      [link] [comments]

    11. ๐Ÿ”— r/LocalLLaMA Real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B rss

      Sure you can't do agentic coding with the Gemma 4 E2B, but this model is a game-changer for people learning a new language. Imagine a few years from now that people can run this locally on their phones. They can point their camera at objects and talk about them. And this model is multilingual, so people can always fall back to their native language if they want. This is essentially what OpenAI demoed a few years ago.

      Repo: https://github.com/fikrikarim/parlor

      submitted by /u/ffinzy
      [link] [comments]

    12. ๐Ÿ”— r/york Little walk around Walmgate Stray rss

      It was cold, windy, and rainy, but I love this place. Interesting to see how the development of the Retreat is progressing.

      submitted by /u/DentistKitchen
      [link] [comments]

    13. ๐Ÿ”— r/york The Retreat, Heslington, York rss

      submitted by /u/DentistKitchen
      [link] [comments]

    14. ๐Ÿ”— r/reverseengineering Revived "Sniper Shooter Free" โ€” patched to work on modern Android rss
    15. ๐Ÿ”— r/Yorkshire Easby Abbey! rss

      I visited Easby Abbey today, and it was great - so here are some pictures I took. I will admit that I added a bit more turquoise to the colour mix for the sky - it was a lovely day, but not quite that continental. Anyway, thank you for having me, Yorkshire, I had a lovely time this weekend.

      edit - I have no idea why that third picture is all janky resolution-wise!

      submitted by /u/ErsatzNihilist
      [link] [comments]

    16. ๐Ÿ”— r/Yorkshire Perfect Easter Sunday in N. Yorks rss

      Down from Glasgow visiting this weekend and you've done us proud as per usual.

      submitted by /u/damo74uk
      [link] [comments]

    17. ๐Ÿ”— r/LocalLLaMA Per-Layer Embeddings: A simple explanation of the magic behind the small Gemma 4 models rss

      Many of you seem to have liked my recent post "A simple explanation of the key idea behind TurboQuant". Now I'm really not much of a blogger and I usually like to invest all my available time into developing Heretic, but there is another really cool new development happening with lots of confusion around it, so I decided to make another quick explainer post.

      You may have noticed that the brand-new Gemma 4 model family includes two small models: gemma-4-E2B and gemma-4-E4B.

      Yup, that's an "E", not an "A".

      Those are neither Mixture-of-Experts (MoE) models, nor dense models in the traditional sense. They are something else entirely, something that enables interesting new performance tradeoffs for inference.

      What's going on?

      To understand how these models work, and why they are so cool, let's quickly recap what Mixture-of-Experts (MoE) models are:

      gemma-4-26B-A4B is an example of an MoE model. It has 25.2 billion parameters (rounded to 26B in the model name). As you may know, transformer language models consist of layers, and each layer contains a so-called MLP (Multi-Layer Perceptron) component, which is responsible for processing the residual vector as it passes through the layer stack. In an MoE model, that MLP is split into "experts", which are sub-networks that learn to specialize during training. A routing network decides for each token which experts are the most appropriate for the token, and only those expert networks are actually used while processing that token.

      In other words, while an MoE model has many parameters, only a fraction of them are required to predict the next token at any specific position. This is what the model name means: gemma-4-26B-A4B has 26 billion (actually 25.2 billion) total parameters, but only 4 billion of those (actually 3.8 billion) are active during any single inference step.

      The good news is that this means that we can do inference much faster than for a dense 26B model, as only 3.8 billion parameters are involved in the computations. The bad news is that we still need to be able to load all 25.2 billion parameters into VRAM (or fast RAM), otherwise performance will tank because we don't know in advance which parameters we'll need for a token, and the active experts can differ from token to token.
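      The routing step described above can be sketched for a single token. This is illustrative NumPy with made-up shapes and names, not Gemma's actual implementation:

```python
import numpy as np

def moe_mlp(x, router_w, experts, top_k=2):
    """Sketch of MoE routing for one token.

    x        : (d,) residual vector for the token
    router_w : (n_experts, d) routing-network weights
    experts  : list of callables, each mapping (d,) -> (d,)
    """
    logits = router_w @ x                     # score every expert
    top = np.argsort(logits)[-top_k:]         # pick the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only the selected experts' parameters are touched for this token;
    # the rest of the (large) parameter set sits idle in memory.
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

      The VRAM problem is visible right here: `top` differs from token to token, so every expert must be resident even though only `top_k` of them run per step.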

      Now gemma-4-E2B is a very different beast: It has 5.1 billion parameters, but 2.8 billion of those are embedding parameters. Google claims that those parameters "don't count", so they say that there are only 2.3 billion effective parameters. That's what the "E2B" part stands for.

      Wut? Why don't the embedding parameters count?

      If you have read or watched even a basic introduction to language models, you probably know what embeddings are: They are high-dimensional vectors associated with each token in the vocabulary. Intuitively speaking, they capture the "essence" of what a token stands for, encoded as a direction-magnitude combination in the embedding space.

      Embeddings are static and position-independent. The embedding vector associated with a specific token is always the same, regardless of where the token occurs in the input and which other tokens surround it. In the mathematical formulation, embeddings are often expressed as a matrix, which can be multiplied with a matrix of one-hot encoded tokens, giving a matrix of embedding vectors for those tokens.

      The small Gemma 4 models make use of Per-Layer Embeddings (PLE): Instead of a single large embedding matrix that is applied right after the tokenizer at the beginning of processing, there are additional (smaller) embedding matrices for each layer. Through training, they acquire specialized knowledge that can re-contextualize the token for the semantic specialization of each layer, which greatly improves processing quality. The layer-based embedding vectors are combined with the residuals through a series of operations, adding locally relevant information.

      For gemma-4-E2B, the matrices holding these Per-Layer Embeddings make up more than half of all model parameters.

      Okay, but why don't the embedding parameters count?!?

      Because the "Introduction to Transformers" tutorials you've been watching have lied to you. While applying embeddings via matrix multiplication is incredibly elegant mathematically, it's complete dogshit in practice. No inference engine actually does that.

      Remember that embedding vectors are:

      • Static (they only depend on the token itself)
      • Position-independent (there is only one embedding vector for each token)
      • Fixed (they are precomputed for the entire vocabulary)

      So the "embedding matrix" is a list of embedding vectors, with as many elements as there are tokens in the vocabulary. There are no cross-column interactions at all. That's not a matrix, that's a lookup table. So we don't actually have to do matrix multiplication to get the embeddings. We just pull the entries for the token IDs from a fixed-size array. And we aren't even going to need the vast majority of entries. Modern tokenizer vocabularies typically contain around 250,000 different tokens. But if our input is 1000 tokens, we are only going to look at a tiny fraction of those.

      We don't need CUDA cores or optimized kernels for that. We don't need those embedding matrices to be in VRAM. We don't even necessarily need to store them in CPU RAM. In fact, we can store them on disk. The plan seems to be to store them in flash memory on mobile devices, and possibly combine that with in-flash processing for further speedups in the future.

      And that's the secret of Per-Layer Embeddings: They are huge, but we need such a tiny part of them for each inference step that we can store them wherever we like. And that's why they are fast.
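      The lookup-table view, including the "store it on disk" trick, is easy to demonstrate: a memory-mapped array on disk gives exactly the same result as the one-hot matrix multiplication, while only the rows for the tokens that actually occur get read. Sizes and names below are hypothetical (the real per-layer tables use a vocabulary of roughly 250,000 tokens):

```python
import numpy as np

def build_embedding_table(path, vocab_size, dim, seed=0):
    # Persist the "embedding matrix" to disk as a plain array.
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((vocab_size, dim)).astype(np.float32)
    np.save(path, table)
    return table

def embed(path, token_ids):
    # mmap_mode="r" maps the file without loading it into RAM; fancy
    # indexing then reads only the rows for the token IDs that actually
    # occur, which is a lookup, not a matrix multiplication.
    table = np.load(path, mmap_mode="r")
    return np.asarray(table[token_ids])
```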

      submitted by /u/-p-e-w-
      [link] [comments]

    18. ๐Ÿ”— r/Yorkshire A great view through Mercury Bridge on the Swale in Richmond. rss

      For more about Richmond Yorks, visit the new subreddit.

      submitted by /u/Still_Function_5428
      [link] [comments]

    19. ๐Ÿ”— r/reverseengineering Inside WannaCry: Exploit, Worming, and TOR Communication Explained rss
    20. ๐Ÿ”— r/Leeds Anyone up for table tennis / badminton at Quarry House? rss

      Hey everyone,
      Looking to see if anyone's up for playing table tennis (or even badminton) at the Quarry House Leisure Centre in Leeds?
      Trying to get a bit more active, don't really know many folks here who play sports, so thought I'd put this out there. Let me know if you're interested!

      submitted by /u/SubstantialHorror422
      [link] [comments]

    21. ๐Ÿ”— r/reverseengineering Reverse engineering PerimeterXโ€™s new VM rss
    22. ๐Ÿ”— r/wiesbaden Beachvolleyball-Gruppe rss

      Hello,

      I'm still fairly new to Wiesbaden and looking for a mixed beach volleyball group that meets to play and hang out. I'm not a pro - more a mix of semi-serious and just having fun throwing myself into the sand now and then.

      The Schlachthof seems like a good meeting spot. Looking forward to suggestions - or maybe we can even form a group ourselves. Thanks in advance, and have a nice Sunday :)

      submitted by /u/nate23x
      [link] [comments]

    23. ๐Ÿ”— r/LocalLLaMA Minimax 2.7: Today marks 14 days since the post on X and 12 since huggingface on openweight rss

      I think it would make a nice Easter egg to release it today!

      submitted by /u/LegacyRemaster
      [link] [comments]

    24. ๐Ÿ”— r/Yorkshire [3 free Kindle books] SLICES OF SCARBOROUGH - 3 stand-alone horror novellas set in and around the North Yorkshire town. They are available for free until 9th April. Hope you enjoy. rss
    25. ๐Ÿ”— r/LocalLLaMA Gemma 4 26b is the perfect all around local model and I'm surprised how well it does. rss

      I got a 64 GB Mac about a month ago and I've been trying to find a model that is reasonably quick, decently good at coding, and doesn't overload my system. The test I've been running is having it create a Doom-style raycaster in HTML and JS.

      I've been told qwen 3 coder next was the king, and while it's good, the 4bit variant always put my system near the edge. Also, I don't know if it was because it was the 4bit variant, but it would always miss tool uses and get stuck in a loop guessing the right params. In the doom test it would usually get there and make something decent, but only after getting stuck in a loop of bad tool calls for a while.

      Qwen 3.5 (the near-30b moe variant) could never do it in my experience. It always got stuck in a thinking loop and then would become so unsure of itself it would just end up rewriting the same file over and over and never finish.

      But gemma 4 just crushed it, making something working after only 3 prompts. It was very fast too. It also limited its thinking and didn't get too lost in details; it just did it. It's the first time I've run a local model and been actually surprised that it worked great, without any weirdness.

      It makes me excited about the future of local models, and I wouldn't be surprised if in 2-3 years we'll be able to use very capable local models that can compete with the sonnets of the world.

      submitted by /u/pizzaisprettyneato
      [link] [comments]

    26. ๐Ÿ”— navidrome/navidrome v0.61.1 release

      This patch release addresses a WebP performance regression on low-power hardware introduced in v0.61.0, adds a new EnableWebPEncoding config option and a configurable UI cover art size, and includes several Subsonic API and translation fixes.

      Configuration Changes

      Status | Option | Description | Default
      ---|---|---|---
      New | EnableWebPEncoding | Opt-in to WebP encoding for resized artwork. When false (default), Navidrome uses JPEG/PNG (preserving the original source format), avoiding the WebP WASM encoder overhead that caused slow image processing on low-power hardware in v0.61.0. Set to true to re-enable WebP output. Replaces the internal DevJpegCoverArt flag. (#5286) | false
      New | UICoverArtSize | Size (in pixels, 200โ€“1200) of cover art requested by the web UI. It was increased from 300px to 600px in 0.61.0; now configurable and defaulting to 300px to reduce image encoding load on low-power hardware. Users on capable hardware can raise it for sharper thumbnails. (#5286) | 300
      Changed | DevArtworkMaxRequests | Default lowered from max(4, NumCPU) to max(2, NumCPU/2) to reduce load on low-power hardware. (#5286). (Note: this is an internal configuration and can be removed in future releases) | max(2, NumCPU/2)
      Removed | DevJpegCoverArt | Replaced by the user-facing EnableWebPEncoding option. (#5286) | โ€”

      For a complete list of all configuration options, see the Configuration Options documentation.
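      The two new options from the table can be set in Navidrome's configuration file. A minimal sketch (the key names come from the release notes above; the values here are illustrative, not recommendations):

      ```toml
      # Opt back in to WebP encoding for resized artwork. Default is false,
      # which preserves the original source format (JPEG/PNG) and avoids the
      # WASM encoder overhead on low-power hardware.
      EnableWebPEncoding = true

      # Cover art size requested by the web UI, in pixels (200-1200).
      # Defaults to 300; raise it on capable hardware for sharper thumbnails.
      UICoverArtSize = 600
      ```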

      Server

      • Add missing viper defaults for MPVPath, ArtistImageFolder, and Plugins.LogLevel so they can be overridden via environment variables and config files. (220019a9f by @deluan)
      • Update go-sqlite3 to v1.14.38 and go-toml to v2.3.0. (6109bf519 by @deluan)
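      With those viper defaults in place, the three options can now be overridden the same way as other settings. A sketch using Navidrome's `ND_`-prefixed environment variables, with illustrative values:

      ```shell
      # Illustrative values; Navidrome maps config keys to ND_* environment variables
      export ND_MPVPATH=/usr/local/bin/mpv
      export ND_ARTISTIMAGEFOLDER=/music/artist-images
      export ND_PLUGINS_LOGLEVEL=debug   # nested key Plugins.LogLevel
      ```
      
      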

      Artwork

      • Address WebP performance regression on low-power hardware by preserving original image format when WebP encoding is disabled, and adding encoder/decoder selection logging. (#5286 by @deluan)
      • Preserve animation for square thumbnails with animated images. (4030bfe06 by @deluan)

      Smart Playlists

      • Add sampleRate, codec, and missing fields for smart playlist criteria. (80c1e6025 by @deluan)

      Subsonic API

      • Strip OpenSubsonic extensions from playlists for legacy clients to improve compatibility. (23f355637 by @deluan)
      • Return proper artwork ID format in getInternetRadioStations. (c60637de2 by @deluan)

      Translations

      Full Changelog : v0.61.0...v0.61.1

      Helping out

      This release is only possible thanks to the support of some awesome people!

      Want to be one of them?
      You can sponsor, pay me a Ko-fi, or contribute with code.

      Where to go next?

    27. ๐Ÿ”— r/LocalLLaMA One year ago DeepSeek R1 was 25 times bigger than Gemma 4 rss

      I'm mind blown by the fact that about a year ago DeepSeek R1 came out with a MoE architecture at 671B parameters and today Gemma 4 MoE is only 26B and is genuinely impressive. It's 25 times smaller, but is it 25 times worse?

      I'm excited about the future of local LLMs.

      submitted by /u/rinaldo23
      [link] [comments]

    28. ๐Ÿ”— r/Leeds Looking for small venues to play gigs as new band rss

      Hi,

      I have recently assembled a quartet of local musicians (sax, piano, drums, bass), and we play a mix of jazz, funk and R&B.

      We are looking for small venues in Leeds and the surrounding area that may be open to new bands, and wanted to ask on here for recommendations.

      We recently played in one of the Uni of Manchester restaurants and have released our debut single; we also have a summer gig planned in North Wales.

      submitted by /u/OJB2800
      [link] [comments]